SUPER SPECIALITIES
Overview

Gastroenterology is the branch of medicine in which the digestive system and its disorders are studied. The word is a combination of three Ancient Greek words: gastros (stomach), enteron (intestine), and logos (reason).
Diseases affecting the gastrointestinal tract, which includes the organs from mouth to anus along the alimentary canal, are the focus of this specialty. Physicians practicing in this field of medicine are called gastroenterologists. The department of gastroenterology at Vydehi hospital provides a complete range of diagnosis and treatment for gastrointestinal and liver disease through an expert team of specialists in endoscopy and gastroenterology who use the most advanced diagnostic and treatment techniques. Examination of the gastrointestinal tract is largely carried out with video endoscopes, which produce high-quality pictures of the internal lining of the tract. The two most common video endoscopies performed are oesophagogastroduodenoscopy (OGD for short; often also called gastroscopy) and colonoscopy.
Physicians in Gastroenterology see patients with diseases or disorders of the pancreas, liver, gallbladder, esophagus, stomach, small intestine and colon. Specialty clinics and areas of expertise include:
Diseases of the Esophagus
- Barrett's esophagus
- Cancer of the esophagus
- Esophageal spasm, non-cardiac chest pain
- Gastroesophageal reflux disease

Diseases of the Stomach
- Ulcers
- Cancer of the stomach
- Chronic indigestion (dyspepsia)
- Nausea and vomiting
- Stomach services, including endoscopy, endoscopic ultrasound (EUS) and transit studies

Diseases of the Small Intestine, Nutrition
- Celiac disease
- Short bowel syndrome
- Ulcers and bleeding
- Small bowel and nutrition services
- Malabsorption of nutrients

Diseases of the Pancreas
- Acute pancreatitis
- Adult cystic fibrosis
- Ampullary carcinoma
- Cancer of the pancreas
- Chronic pancreatitis
- Cysts of the pancreas

Diseases of the Liver and Bile Ducts
- Bile duct stones
- Cancer of the liver
- Cirrhosis
- Hepatitis (A, B and C)
- Nonalcoholic fatty liver disease
I'm thrilled there is a new winter Farmer's Market here in Portland, Maine. The summer markets on Wednesdays and Saturdays allow me to keep fresh, locally-grown, organic food on my table meal after meal. So when I found out there was a new indoor, winter market I couldn't be happier.
Unfortunately, the mistakes that I see the winter market making are common for many small business owners. You need the basics of branding and marketing to survive. You are kidding yourself if you think you can do without them.
Some background
For those of you who are not local, let me give you some background information. Portland has two successful farmers markets that run spring through fall. They are dedicated to farmers only, meaning that bakers or fishmongers are not allowed under the current rules. Every winter, many of the farmers take time off from selling; others have started small-scale direct-to-consumer sales where you can be emailed a list of what's for sale and pick up at a predetermined location and time. Many of the surrounding communities have also started successful indoor winter farmers markets, and there was demand in Portland for something similar.
Without a lot of time for planning, a group got together, found a vacant storefront, worked out licensing agreements with the city, and put together a group of vendors for the new farmers market. Kudos to them for getting it off the ground!
Obstacles and opportunities
Let’s outline some of the marketing obstacles and opportunities they have:
The unknown: It's new; there's never been a winter market before. However, there is demand and a loyal clientele from the summer market.

What food is available in Maine in the winter? Many of us realize that the winter vegetable choices are going to be slim, but with storage vegetables, greenhouses and non-farm types of vendors there is a lot to offer.

When: This is not a regular retail operation with standard operating hours. Visitors need to know the limited hours of operation.

Where: The location (unlike the summer markets) is new. It's also just a vacant storefront, so there is a lot of confusion about where it is.

Not much time or money: The market is only running until April (then it will change over to the outdoor regular market), so they need to act quickly to promote themselves. Also, the fees charged to the participants are low, so there is not much of a budget for branding or marketing.
The answer here is low-cost, quick-turnaround items that will quickly promote the market. They need to focus on the storefront itself and a very simple online presence.
Signage
Like many others, I wasn't familiar with the address (85 Free Street). So when I was in the area I made a special effort to drive by and figure out where it was. Here is what the storefront looks like when you pass by mid-week:
There is nothing indicating that a farmers market takes place here on Saturdays. They need signs! Many of the vendors have their own signs; maybe those could get moved to the windows? At minimum, they need to put up a sign that shows pedestrians and drivers that they should remember to come back.
I put this quick sketch together to show how color and key words describing the time and offerings of the market would go a long way to attract potential shoppers.
On the day of the event, sandwich boards should be placed on the sidewalk. I realize there is neither time nor money to invest in a hanging sign. While I would recommend it for long term usage, it doesn’t make sense here.
Web site
Web sites can be very complex, but they don’t have to be. These guys should grab a URL and do a one-page web site that lists the time, location and vendor names. BTW, I just checked and portlandwinterfarmersmarket.com is available. Go grab it before a cyber-squatter does.
The reason the market needs a web site is that people nearly always search online for answers. People in Portland are asking about the market: they go to Google and look for the when, where and who. A simple site will answer these questions. Then you can refer people to your Facebook fan page for more up-to-date content.
Branding identity
Building a branded identity would be valuable to this group, but it is not possible given the time frame. Instead, use your vendors' brands to promote your own. [Full disclosure: I designed the logo and web site for Cream & Sugar Bakery.] Use the logos and signage from the vendors to attract shoppers by placing them prominently in the storefront windows and on your new web site.
Will work for food
I’m trying to connect with the management of the winter market. I want this to flourish and be successful for the vendors and for the city. How can I help? Maybe a barter is possible.
See you at the market! Saturdays, 10:00 am – 1:00 pm, 85 Free Street, Portland, Maine
As the humanitarian crisis worsens in the Democratic Republic of Congo, aid workers are warning that the global financial crisis will hurt the world's most vulnerable people. Some are now predicting that death rates for the world's poorest could increase rapidly as the global recession bites. VOA's Mandy Clark reports from London.

Flipping through his photo album, Buka Mwanza looks back on his last trip to Congo. He says he was kept busy visiting family. "We're an old-fashioned African family so we have quite a large extensive family," he said. But it is not just the recent fighting in his homeland that worries him. He is also concerned that humanitarian aid to Congo will be reduced following warnings by aid agencies that the global economic crisis will lead to cutbacks by governments and private donors. "It is human lives that we are talking about. Money is something that comes and goes but a human can't be replaced," added Mwanza.
Peter Kessler, a spokesman for the United Nations refugee agency, UNHCR, says the agency relies on donations. If it gets less, it must spend less.
"The UNHCR is almost entirely funded by voluntary contribution so every contribution, from individuals on the street to governments in their capitals, is vital. Obviously when governments are devoting huge amounts of money into stabilizing their economies, those have an effect on the humanitarian programs like UNHCR and our partners," said Kessler.
The UNHCR has refugee camps all over the world, from Afghanistan, to southern Africa and even in the Darfur region of Sudan. Kessler warns that the impact of cutting programs could be devastating.
"People shouldn't forget the fact that there are tens of millions of people who rely upon the UN refugee agency and our partners for food, water assistance, protection and without these programs there will not be the care available, that people will indeed suffer and could indeed die," said Kessler.

It is not just UNHCR that's concerned. The International Red Cross and Red Crescent held a conference in South Africa recently to discuss the issue of dwindling aid. And there is another danger to the financial crisis - rising global unemployment. The International Labor Organization is warning that as many as 20 million people worldwide might lose their jobs as a result of the financial crisis.

Claire Melamed with Action Aid, an international anti-poverty group, says this is a serious concern for developing countries. "This is the flip side of the growth that we have seen in China and India which have been based on exporting things to rich countries. This is the downside in a sense, when demand dries up then jobs disappear," said Melamed.

A vicious cycle, aid officials and humanitarian workers warn - and one that could see the world's most vulnerable pushed precariously closer to the edge.
NSCA Review of Volt Athletics

Scott Caulfield, CSCS*D, RSCC*D
Head Strength and Conditioning Coach
National Strength and Conditioning Association
The National Strength and Conditioning Association (NSCA) is committed to providing its members with new resources as it continues to serve as the leader for continuing education and professional development for strength and conditioning coaches. After having tested a number of web-based platforms on the market, I found that Volt® Athletics offers features that benefit both certified strength and conditioning coaches and sport coaches wearing multiple hats who are constrained by time and resources.
The Volt system provides unique access to the principles of safe and effective strength and conditioning advocated by the NSCA. The result is that strength and conditioning coaches can impact more teams and athletes without adding to their workloads. I personally used the Volt platform to deliver summer training programs to my incoming freshmen and upperclassmen at Colorado College (Hockey), and found it to be practical and truly valuable in the modern era of strength and conditioning.
One of the best benefits Volt provides is efficiency in the programming process, as they help coaches maximize the time they spend creating and implementing programs. Using a web-based training program delivery system gives coaches the tools to administer training successfully and manage their team from afar, while giving athletes the ability to access their workouts from any location on a phone, tablet, or printed in hard copy.
KEY FEATURES
* Workouts are built with the expertise of Certified Strength and Conditioning Specialists (CSCS®)
* Integrates technology with science and research-based programs
* Enables coaches to deliver sound and effective programs to their athletes with or without access to an on-site, full-time strength and conditioning coach
* Easy to use in terms of building and delivering workouts and getting instant feedback from athletes
The Volt system is also designed with the athlete in mind. Training programs are easy to follow, as they include video demonstrations, coaching points, and step-by-step instructions with technique images. These guidelines and images give athletes the best opportunity to perform each movement properly even when training away from their strength and conditioning coach.
Volt’s strength and conditioning coaches are CSCS certified by the NSCA and are committed to delivering training that is of the highest quality. The Volt workouts are built upon sound training principles, which have been proven through decades of research and practical application, and are recommended by a number of NSCA high school and collegiate strength and conditioning coaches across the country.
The NSCA is pleased to partner with Volt Athletics in support of a shared desire to pass on research-based knowledge and its practical application to improve athletic performance and fitness. The NSCA is committed to helping our members shape the future of the strength and conditioning profession, and will continue to provide more resources to make the most of your athletes and your career.
Scott Caulfield, CSCS*D, RSCC*D
Head Strength and Conditioning Coach
Coaching Education Manager
National Strength and Conditioning Association
Consider the evidence and the growing alarm expressed from many quarters:
“We cannot continue to rely on our military in order to achieve the national security objectives that we’ve set. We’ve got to have a civilian national security force that’s just as powerful, just as strong, just as well-funded.”
-Barack Obama, Colorado campaign speech, July 2008
The American people have been patient and far too accommodating. For many years, we have respected the rule of law, even as those who rule over us break the law. We have been too accommodating. And now, it is clear that our worst enemy is our own federal government.
When things collapse (and it won’t be long), can there be any question that the resulting disorder will be catastrophic? Imagine an energy or monetary crisis so severe that trucks cannot deliver food. Imagine stores stripped bare. Hunger makes people crazy.
Barack Obama is out of control. Even though he says “I am not a dictator,” it’s clear he is working to become one since he hates Congress and threatens the Supreme Court. He would rather have a free hand, something he is close to achieving.
Bottom line: either we throw a net over this bugger; or he will destroy our country, vaporize our liberty, and confiscate our property.
The outrage known as ObamaCare should have been sufficient cause for rebellion.
By corrupt means, he rammed this atrocity through against the will of the majority; and now, we have Sen. Orrin Hatch claiming it was purposefully designed to fail so as to make way for a complete government takeover.
". . . within the immediate future the Democrats are going to throw their hands in the air and say, 'It's not working. It's unaffordable. And we have to go to a single-payer system,'" Hatch said, adding, ". . . where the government controls everybody's lives."
All of this thoroughly trashes our Constitution, ushering in an age of despotism we once thought of as impossible in America.
The Poser is purposefully destroying our economy. You have to be blind not to see it. He is selling us down the river to our enemies. You have to be deaf not to hear the war drums. He is destroying our liberty by centralizing control in the executive branch. You have to be dumb not to perceive it.
The clearest signal of all: the combined efforts to disarm American citizens and the massive weapons build-up within the Department of Homeland Security (DHS).
DHS does not have war powers. With whom are they preparing to fight? We’ll give you one guess. Take the Los Angeles riots (or better, the riots in the 60s) and multiply them many fold, nationally. Get the picture?
Ask yourself: Why does DHS need billions of rounds of ammunition, much of it hollow point? Hollow point rounds are specifically designed to blow giant holes in people, not to wound but to kill.
Why does DHS need thousands upon thousands of assault rifles, the very rifles the Poser wants to ban generally?
Why does DHS need 3,000 armor-plated troop carriers, detention camps, and ready-made plastic coffins by the thousands?
And why does DHS refuse to answer any of these questions?
Congressional representatives are increasingly alarmed about these developments, and they are demanding answers, among them Rep. Leonard Lance (R-N.J.) and Rep. Tim Huelskamp (R-Kansas).
And get a load of this razor sharp letter written by a retired Army captain and sent to Sen. John Cornyn, Texas:
It is with gravest concern that I write to you today concerning the recent appropriation of weapons by the Department of Homeland Security (DHS) that can only be understood as a bold threat of war by that agency, and the Obama administration, against the citizens of the United States of America.
Captain Terry M. Hestilow, United States Army, Retired
Capt. Hestilow urges us to raise hell with our elected representatives. Amen. I suggest we raise hell with the media as well. Why are they ignoring this as they ignored Benghazi? Obviously, most have been bought off.
Finally, a recent editorial in Investor’s Business Daily reminds us of an important attitude articulated in a written report by the head of DHS in 2009, targeting conservatives and the unemployed. The report was titled: “Rightwing Extremism: Current Economic and Political Climate Fueling Resurgence in Radicalization and Recruitment.”
“The economic downturn and the election of the first African American president present unique drivers for right-wing radicalization and recruitment,” we were warned.
This IBD editorial provides even more alarming evidence citizens must consider very seriously, for we can no longer be accommodating or patient.
If our so-called representatives cannot or will not stop the maniac we call ‘president,’ then We The People will have to take matters in hand (or the military will have to restrain this idiot.) Short of military intervention, the only realistic option is a general strike where citizens refuse to pay taxes and occupy the Capitol.
Photo credit: Dan Jacobs (Creative Commons)
The views expressed in this opinion article are solely those of their author and are not necessarily either shared or endorsed by the owners of this website.
Linen is a delicate fabric with fibers that can become easily damaged by stains. Special care is required to remove stains from linen so that the tablecloth, fancy napkins, summer dress, or whatever you're trying to clean is not damaged. Stain removal is a simple process that will keep your linens looking clean and new.
Steps

Part 1: Removing New Stains

1. Act quickly to clean the stain. The longer a stain is allowed to sit on your nice tablecloth or summer dress, the harder it will be to get out. Whether a stain is from food or ink or anything else, drawing it out from linens will work best when it hasn't yet dried. Some older stains require dry cleaning to be removed. Dry cleaning can ruin linens, so it's imperative that you treat stains quickly so that you don't have to resort to harsher methods.

2. Scrape off excess liquid or solids. Use a flat butter knife or spoon to gently lift off any residue. For example, jelly can be scooped up with a spoon so that there's less of a mess to clean up. You want to remove as much of the substance as possible before beginning to treat the stain. [1] Do not squeeze or press the linen or stain; doing so may grind the stain substance into the fibers and make it harder to get out. You can gently shake off residual liquids like wine or juice instead of wringing them out.

3. Blot the stain with a white cloth or towel. Gently dab up and down with a paper towel, for example, to lift the stain from your linens to the towel. Work from the outside of the stain's perimeter to the inside; this will prevent the pressure of blotting from spreading the stain. [2]

4. Apply a chemical solution to the stain. For best results, use a specific stain-removal product rather than regular soap; a chemical reaction is an efficient way of removing stains from your linens. Lay out your linen and place a few paper towels or rag cloths underneath to catch excess liquid. Sprinkle baking soda on the stain, add vinegar a few drops at a time, and blot the stain with a paper towel to soak up the moisture. Lemon juice will help whiten any dingy materials: squeeze some juice onto a stained or discolored linen item, let it sit until you see it begin to lighten, and then rinse it out. You can also buy a stain treatment to apply, such as Tide or Oxyclean. Never rub a stain; rubbing and applying too much pressure will set a stain into the linen rather than get it out.

5. Fill a sink with hot water. Let the faucet run long enough to fill a sink, bathtub, or washing machine with enough water to cover the linens you're washing. Hot water should only be used with an additive to help lift the stain; heat makes stains settle into the fabric, so make sure you're adding another ingredient to the water. [3]

6. Add another cleanser to the water. Because hot water alone is detrimental to proper stain removal, you need to pair it with another cleanser. You can either purchase a specific stain-removal product or make your own with household items. An example stain-removing recipe is as follows: 1 scoop of Oxyclean, 1 cup of Biz, ¾ cup of ammonia, and a gallon of hot water. [4] White vinegar will help to cut grease as well; use ⅛ to ½ cup based on how big your load of laundry is. [5] A mild dish detergent will work well too; use a quarter to a full cup depending on how much you're washing. [6]

7. Submerge your linen items in the sink. Make sure the fabric is completely saturated and under water. Let the material soak for at least an hour or overnight. Every once in a while, stir the water with a wooden spoon to agitate it and make sure the solution is dispersed well. [7]

8. Drain the sink and wash your linen items normally. Put them on a gentle cycle in the washing machine, not in hot water, so that the delicate fibers aren't ruined. You can add more white vinegar, Oxyclean, or mild dish detergent to the load to help fight stubborn stains.

9. Hang to dry. Your dryer is another heat source that could cause a stain to set into the linen. Instead, air or line dry the fabrics so that you don't undo your progress after soaking them. Hanging linens to dry also helps minimize wrinkles.

Part 2: Treating Old Stains

1. Soak the linen item in a hot water bath with a stain treatment additive. Before turning to other methods, try removing the stain the same way you would a new stain; you may be able to get rid of it by simply soaking the fabric and then machine or hand washing it. If linen items are stored improperly or are put away with existing stains, the stains may be more difficult to remove. Fill up a bathtub or sink with cool water for soaking (hot water needs an added cleanser to prevent stains from setting). Every once in a while, check on the stain to see if it is being absorbed into the water: lightly rub the material between your fingers to see if the stain is coming out, and be gentle so that you don't rub it into the fabric.

2. Lay out the linen items in the sun. If the stains persist through multiple soakings and washings, let the fabric sit in the sun for a few hours. Sunlight can also damage fabric and bleach it too much, so keep an eye out; remove your linens from the sun if they begin to fade beyond the original color. You can put the linens out completely dry, or you can lightly mist them with a spray bottle filled with water, non-chlorine bleach, or any other liquid stain remover. [8] Do not soak the fabric if you're leaving it in the sun, as it may create an unpleasant odor. Vintage fabrics may become damaged by direct sunlight, so use caution when deciding whether or not to put antiques in the sun.

3. Press older linen items by ironing immediately after washing to preserve them. It's best to iron linen while it's slightly damp. [9] Once you have successfully removed a stain, you can safely apply heat to any of your linen items. Use the proper setting on your iron so that you don't cause any damage. Pressed fabric is easier to store and less susceptible to damage and wrinkling. Ironing a stain is a perfect way to seal the stain into the fibers, so check your whole garment or fabric to make sure that there are no other hidden stains.

4. Hang dry the linen if ironing is not needed. No matter the age of a stain, putting linens that have been rescued from stains in the dryer is not advised. Use a drying rack, a clothesline with clothespins, or a clothes rack to air out your linens.

Part 3: Using Household Items to Treat Stains

1. Dab fresh lemon juice onto a new stain. Apply fresh lemon juice to the stain and sprinkle salt over top. Let the linens sit in the sun for several hours before washing. Check periodically to make sure the stain is beginning to fade; if it is not, add more juice and salt. [10] Be careful on bright, sunny days because the sun could lighten your linen items very quickly; set a timer to check on the progress so that you don't end up with splotchy fabric. For difficult stains, repeat this process several times, washing the fabric in between repetitions. For large stains, or for a dingy white tablecloth for example, combine lemon juice and dissolved salt in a spray bottle and lightly spray the whole item, then let it sit in the sun laid out flat so that the effect is uniform.

2. Absorb new stains with a baking soda mixture. Make a paste with 4 tablespoons (59.1 ml) of baking soda mixed with an equal amount of water. Mix and apply gently so you don't rub the paste into the stain. After the paste has dried and sat for about 15 to 30 minutes, scrape off any excess before washing the linens normally. [11]

3. Treat oil stains with cornstarch. Oil stains are some of the most difficult to get out of fabrics. Sprinkle cornstarch on the stain and wait 15 minutes for it to set. Then scrape the starch off and wash the linens in a sink bath with some dishwashing soap, or in the washing machine on a gentle cycle. [12] Don't coat the stain in too much cornstarch; you only need a small coating to absorb the stain, and you can reapply another coat after the first if the stain persists. If you need to rinse the cornstarch out, use cool water to keep the stain from sticking around.

Community Q&A

Q: I have linen curtains that have a water stain. I had them dry cleaned and the stain is still there. Help!
A (wikiHow Contributor): Sometimes a stain can come out just by soaking or letting it fade, depending on the stain and the material. Check to see if your curtains can get wet, and try soaking them in a warm bath in the tub or a sink with a cup of white vinegar. Rinse it out and try again if the stain persists.

Things You'll Need
- Sink full of hot water
- A drying rack or clothesline
- Bicarbonate of soda (baking soda)
- Lemon juice and salt
- Oxyclean, Biz, or dish detergent

Sources and Citations
[1] http://www.cleaninginstitute.org/clean_living/stain_removal_chart.aspx
[2] http://www.marthastewart.com/275491/how-to-wash-and-remove-stains#234649
[3] http://www.realsimple.com/home-organizing/cleaning/cleaning-stain-removal/stained-dress
[4] http://www.cleaninginstitute.org/clean_living/stain_removal_chart.aspx
[5] http://www.marthastewart.com/275491/how-to-wash-and-remove-stains#222741
[6] http://www.huffingtonpost.com/2014/11/21/cleaning-table-linens_n_6186274.html
[7] http://www.keeperofthehome.org/2010/08/how-to-get-set-in-stains-out-of-almost-anything.html
[8] http://www.antique-linens.com/laundryTips.html
[9] http://www.keeperofthehome.org/2010/08/how-to-get-set-in-stains-out-of-almost-anything.html
[10] http://www.organicauthority.com/sanctuary/clean-uses-with-lemon-and-salt.html
[11] http://www.cleaninginstitute.org/clean_living/stain_removal_chart.aspx
[12] http://www.marthastewart.com/275491/how-to-wash-and-remove-stains#234649
IPP&CR - The first results of a review of police pay & conditions designed to improve service for the public & maximise value for money have been published by the Independent Police Pay and Conditions Review. The independent study is intended to 'help bring modern management practices into policing and increase operational flexibility for the country's 43 territorial forces'. (Basic pay: Army private £17,014 rising to £26,404; police officer £23,259 rising to £36,509 - which is set at the right level?)
The review found that police officers are comparatively well paid: 10-15% higher than some other emergency workers & the armed forces, as well as up to 60% higher than the average local earnings in regions such as Wales & the North East.
In the short term, Tom Winsor recommends that a power to make officers compulsorily redundant is not necessary. This makes police officers unique in the public sector and this protection comes at a price, namely:
* suspension of all chief officer & superintendent bonuses
* abolition of the £1,212 Competence-Related Threshold Payment (CRTP)
* abolition of the discredited Special Priority Payments (SPP), of up to £5,000
* freezing progression up the pay scale for two years for all officers & staff
* savings of up to £60m in the annual overtime budget
The projected savings & costs arising from this review suggest that, if implemented from September 2011, these recommendations will produce net savings of £485m over 3 years.
: PSPC - Lord Hutton of Furness has set out his proposals Will this be ‘it’, or will they continue to ‘move the goalpost’ every couple of years? for comprehensive, long-term structural reform of public service pension schemes. The final report of the Independent Public Services Pension Commission follows a comprehensive 9-month review. It ‘sets out a number of detailed recommendations to the Government on how public service pensions can be made sustainable & affordable in the future, while providing an adequate level of retirement income’. The average pension paid to pensioner members is around £7,800p.a. – Around half of pensioners receive less than £5,600p.a.
The main recommendation of the report is that
existing final salary public service pension schemes should be replaced by new schemes, where an employee's pension entitlement is still linked to their salary (a ‘defined benefit scheme’), but is related to their career average earnings, with appropriate adjustments in earlier years so that benefits maintain their value.
The report suggests that it should be possible to introduce these new schemes before the end of this Parliament, in 2015, while allowing a longer transition, if needed, for groups such as the armed forces & police.
Other key recommendations in the report include:
* Linking Normal Pension Age (NPA) in most public service pension schemes to the State Pension Age
* Introducing a Normal Pension Age of 60 for those members of the uniformed services
* Setting a clear cost ceiling for public service pension schemes
* Honouring, in full, the accrued rights already earned by scheme members
: HO - Unacceptable behaviour by any measure: plans to tackle violence against women & girls were launched by the Home Secretary last week. The ‘Call to End Violence Against Women and Girls - Action Plan’ was published alongside the ‘government's response to Baroness Stern's review into the handling of rape complaints’.
The action plan focuses on 4 key areas:
* the prevention of violence including reducing repeat victimisation
* the provision of support
* the bringing together of groups to work in partnership
* action to reduce risk by ensuring perpetrators are brought to justice
Baroness Stern said:
"I welcome the government's response to the recommendations in my report. Particularly in a time of financial stringency it is good that the government recognises the importance of a specialist and supportive response to rape victims."
: WAG - What does the Yes vote mean for Wales? The vote on Friday 4 March 2011 means that the National Assembly will be able to make laws on subjects in all of the 20 areas for which it has powers, without first needing the UK Parliament's agreement to give the Assembly the necessary powers. The Assembly will now be able to table its own Bills and vote to pass its own Acts.
: CLG - Many legal requirements funded by central government's best-guess ‘average’ budgets: councils are being asked which bureaucratic burdens they wish to throw away in the first ever central review of their statutory duties. To date no Government has ever assessed the cumulative burden imposed by the hundreds of legal duties placed on local government.
The Review is intended to proactively identify unnecessary burdens & barriers preventing councils from getting on with their job. The Department for Communities and Local Government has published an initial list of more than 1,200 legal duties imposed mainly by primary legislation. The Localism Bill, currently going through Parliament, is already set to remove some of these duties.
For the time being, the review excludes duties stemming from Parts 1 & 2 of the Building Act 1984, as CLG are in the process of undertaking a separate review of these. There are also other reviews underway that are likely to impact on this work, such as the Law Commission's review of adult social care.
: STFC - Put your bid in now: the Science and Technology Facilities Council has announced a call for applications to the Projects Research & Development scheme (PRD). Applications should be submitted by 3 May 2011 and will be reviewed at a meeting of the PPRP Panel on 6/7 July 2011. STFC intends to allocate a total of around £1.5m, a large proportion of which will be for spend in the financial year 2011/12.
: DH - We need solutions to help mitigate increasing demand for health services: up to £775m is to be made available for translational research (research dedicated to delivering benefits to NHS patients) to help secure the UK as a world leader in life sciences. It will be made available over the next 5 years to NHS/university partnerships through the National Institute for Health Research. Applications are encouraged to focus on improving health outcomes for patients in high-priority disease areas such as dementia, cancer & heart disease.
: Forthcoming Event - Gartner CIO Leadership Forum: Creative Destruction: Radically Redefining IT | 4-6 April 2011 | London This event is designed exclusively for CIOs by CIOs. Join forces with our executive analyst team, peer CIOs and high-profile industry achievers for collaborative dialog, debate and problem solving. You’ll see first-hand how others are resolving ground-breaking IT issues. And you’ll take away powerful new strategies that will position you as a leader in an IT environment where the pace of change is accelerating even faster. The forum will ask key questions including: * How do you meet high corporate expectations while adhering to lean budgets and limited resources? * How do you balance the drive for enterprise innovation with the need for efficiency and cost control? Click here for the full agenda and more details. | 7,443 | 3,649 | 0.000281 |
warc | 201704 | Learn something new every day
More Info... by email
A black bun is a little deceptive in name, since it may not be a bun at all. It is similar to fruitcake, but instead of having a batter into which fruits are mixed, it consists of a pastry crust surrounding a heavily spiced, and sometimes brandied, raisin or currant filling. The typical black bun is made in large loaf pans, and may be made several days in advance of eating it so that the currant and raisin mix matures. Usually the black bun is served in slices.
This dessert is famed in Scotland where it was once most often served on Twelfth Night, but it is now more associated with the celebration Hogmanay, the Scottish New Year’s Eve. It forms a traditional dessert uniquely connected to Scotland, and recipes for it exist well into the past. Scottish elders fondly remember the dish as served by their grandmothers, and some historians suggest the earliest recipes date to the 16th century. These recipes may have been inspired by some of the fruit rich cakes of Italy.
A number of spices make up the interior of the black bun. These include some traditional ones like ginger, cloves, allspice, cinnamon, and nutmeg. Most recipes also call for black pepper, which adds an unusual and stronger overall taste. Cooks may use citron or candied peel, and directions in recipes often call for soaking currants and/or raisins for several minutes in either water or alcohol to plump them up.
Though most recipes for black bun are made in loaf form, you can make individual black buns in round tart shapes, or a black bun pie. One unusual feature of most recipes is the cooking time, which in pie or loaf form can be as long as three hours. This lengthy cooking time is recommended so that the raisins and currants condense into a sticky, spicy solid mix. Others recommend allowing the finished confection to sit for several days to create the solidified center.
The solid interior of a black bun makes it more like candy than cake or pie. It recalls certain Italian desserts, particularly panforte, a chewy citron, raisin, and nut cake that is baked in a round pan. Both are solid and somewhat sticky, yet very delicious.
There are a number of recipes online for black bun, and each may differ a little in cooking time, ingredients and the like. Pastry dough used is most often very similar to piecrust, but if you’re making the loaf form, the dessert may require much more crust than the average pie. This isn’t exactly a low fat treat given the amount of butter or shortening used to make the crust, but it is an essential dessert if you plan to celebrate the New Year in authentic Scottish style.
One of our editors will review your suggestion and make changes if warranted. Note that depending on the number of suggestions we receive, this can take anywhere from a few hours to a few days. Thank you for helping to improve wiseGEEK! | 2,954 | 1,439 | 0.000701 |
warc | 201704 |
People who compile dictionaries are called lexicographers. While sometimes thought of as a branch of linguistics, the art of lexicography is properly considered a distinct field. Some are written by a single lexicographer, but many of the most respected and widely used examples today are the work of many individuals.
The lexicographer has many considerations to keep in mind when writing a dictionary. First of all, there are many different types of dictionaries with just as many intended uses. They may simply provide definitions, pronunciation, and basic origins — such as "Greek" or "Old French" — or they may provide more extensive derivations and histories of each term. Some, like the Oxford English Dictionary, provide textual examples of terms. These books may focus on specific subsets of a language, such as slang or legal terminology, or they may be used to give translations from one language to another. The first known dictionary, compiled in Latin during the first century BCE by Verrius Flaccus, listed only archaic and difficult terms.
Keeping the intended purpose of the book in mind, the lexicographer must choose which words to include, how much information to provide for each entry, and how to organize the data. Some aspects of organization seem fairly obvious, such as alphabetizing terms in an English dictionary or categorizing Chinese characters by radical and stroke count, a system known as lexicographic order. Alphabetized dictionaries did not appear in English until 1640, however, and earlier ones grouped words according to thematic similarity, such as listing all animals together.
There are also more subtle considerations regarding the organization of terms, such as how accented letters should be dealt with. A lexicographer must consider whether certain inflected terms, such as "children" in English, should be listed on their own or included under the uninflected or lemma entry, "child" in this case. In some languages, all words with the same root are grouped together. In English, this would result in words like "important" and "report" appearing under the entry for "port" instead of under I and R respectively.
Some lexicographers have become household names, and revised editions of their work are still in use decades after their dictionaries first appeared. The most well-known of these are perhaps Noah Webster and Pierre Larousse.
shell4life
Post 11
I can't believe that words were once grouped by category instead of alphabetically! That would make them so much harder to find!
What if you disagreed about which category a word should belong to? You could spend so much time looking for it in one section when it might be way over in the back of the book under a totally different category, just because the lexicographer thought that was where it should go.
You can't argue with alphabetical organization, though. It's the most cut and dry way to arrange words in a dictionary, free of debate.
feasting
Post 10
How can anyone know enough words to write a dictionary? Surely, they must have reference books and cheat sheets.
If that's the case, though, you have to wonder where the original lexicographers got their information. What did they use for reference?
OeKc05
Post 9
@pastanaga – I do the same thing! I love the English language, and I think it's good to increase my vocabulary whenever possible.
It can be difficult to remember new words if you have just read them in the dictionary once, however. So, I make notes of words that I find that I actually think I might use someday. Writing down the definition with my own hand helps me store it in my memory.
orangey03
Post 8
@donasmrs – I remember learning a little about Noah Webster in class. I was really impressed by him, because he obviously knew so many words.
When he wrote the dictionary, Webster was around 70 years old. So, he got to put a lifetime of learning and experience into it.
He had to learn several different languages in order to know the origins of all the words. I remember that he changed the spelling of a few words, because he thought they were confusing. He changed “musick” to “music,” so we have him to thank for the current spelling of music.
ZipLine
Post 7
@anon241367-- That would be the editor. This is the same process for all printed material. It has to be proofread by at least one person and that person is called the editor. There can be multiple editors for one literary work, in fact, that's usually the case these days.
But an editor is not the same as a writer. So you can have multiple writers writing a dictionary and then multiple editors editing it. And this usually only applies to printed material. Electronic dictionaries don't usually mention writers and editors.
Did this make sense? I hope I didn't confuse you even more!
discographer
Post 6
@anon127337, @irontoenail-- There are actually some online dictionaries that accept entries by regular folks like you and me. They're usually urban dictionaries for informal words and phrases in the English language. Since this is an area that normal people are most knowledgeable about, they can contribute to it.
Of course, someone has to review entries and make sure it's appropriate and well worded. Nevertheless, you are writing it.
donasmrs
Post 5
So Webster's dictionary was written by Noah Webster? That's cool! I never knew that.
What kind of a background did Noah Webster have? I'm sure he must have studied English but where did he receive his education and how old was he when he wrote Webster's dictionary?
I'm just trying to get an idea of what a lexicographer is like.
anon241367
Post 4
Who actually proofreads a dictionary before it is put into print?
pastanaga
Post 3
I know it's geeky but I really like flicking through a good dictionary when I'm bored. It's interesting finding new words. That's one reason I'll always have a paper dictionary and not rely on an online dictionary like some people I know. Being able to randomly open the pages and find some new word I've never seen before is worth paying for.
irontoenail
Post 2
@anon127337 I think if you had a special kind of dictionary in mind, like a Klingon dictionary, or a slang dictionary for a particular area, that only you could write, and that there was a market for, you would have a chance to write one.
In that case, you would write out a proposal to a non-fiction publisher like you would with any other non-fiction book. Then they would either accept it or not.
For ordinary dictionaries, they take so much research I think they are mostly run by particular companies, like Oxford. You might be able to work for the company, and so work on the dictionary, but for the most part I can't see a publishing company wanting to publish a new language dictionary when there are already so many established and respected names in the field.
anon127337
Post 1
I was wondering if any person can write a dictionary or should be a professional related to linguistics? Also, dictionaries are published like any other book or do they go into a determined process? Thanks.
| 7,447 | 3,408 | 0.000298 |
warc | 201704 | International: "How to End 'Islamophobia'" - By Tawfik Hamid
In an interview at the time, CAIR spokesman Nihad Awad accused Rep. Peter King (R., N.Y.) of being an "extremist" who "encourages Islamophobia" for pointing out what most people would think is obvious, that such a lawsuit would have a chilling effect on passengers who witnessed alarming activity and wished to report it. We can only assume that Mr. Awad believes flyers should passively remain in a state of fear as they travel and submissively risk their lives. In this case, Congress is acting appropriately and considering passing a law sponsored by Mr. King that would grant passengers immunity from such lawsuits.
It may seem bizarre, but Islamic reformers are not immune to the charge of "Islamophobia" either. For 20 years, I have preached a reformed interpretation of Islam that teaches peace and respects human rights. I have consistently spoken out--with dozens of other Muslim and Arab reformers--against the mistreatment of women, gays and religious minorities in the Islamic world. We have pointed out the violent teachings of Salafism and the imperative of Westerners to protect themselves against it.
Yet according to CAIR's Michigan spokeswoman, Zeinab Chami, I am "the latest weapon in the Islamophobe arsenal." If standing against the violent edicts of Shariah law is "Islamophobic," then I will treat her accusation as a badge of honor.
Muslims must ask what prompts this "phobia" in the first place. When we in the West examine the worldwide atrocities perpetrated daily in the name of Islam, it is vital to question if we--Muslims--should lay the blame on others for Islamophobia or if we should first look hard at ourselves.
According to a recent Pew Global Attitudes survey, "younger Muslims in the U.S. are much more likely than older Muslim Americans to say that suicide bombing in the defense of Islam can be at least sometimes justified." About one out of every four American Muslims under 30 think suicide bombing in defense of Islam is justified in at least some circumstances. Twenty-eight percent believe that Muslims did not carry out the 9/11 attacks and 32% declined to answer that question.
While the survey has been represented in the media as proof of moderation among American Muslims, the actual results should yield the opposite conclusion. If, as the Pew study estimates, there are 2.35 million Muslims in America, that means there are a substantial number of people in the U.S. who think suicide bombing is sometimes justified. Similarly, if 5% of American Muslims support al Qaeda, that's more than 100,000 people.
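The back-of-envelope figures above can be checked directly; this is a minimal sketch assuming only the numbers quoted in the text (the 2.35 million Pew population estimate and the 5% share used hypothetically by the author):

```python
# Sanity check of the arithmetic quoted from the article.
# Assumption: 2,350,000 is the Pew population estimate cited above;
# the 5% share is the author's own hypothetical figure.
muslim_population = 2_350_000
share = 0.05

supporters = share * muslim_population
print(int(supporters))       # 117500
print(supporters > 100_000)  # True, matching "more than 100,000 people"
```

This confirms the author's claim: 5% of 2.35 million is 117,500, comfortably above 100,000.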
To bring an end to Islamophobia, we must employ a holistic approach that treats the core of the disease. It will not suffice to merely suppress the symptoms. It is imperative to adopt new Islamic teachings that do not allow killing apostates (Redda Law). Islamic authorities must provide mainstream Islamic books that forbid polygamy and beating women. Accepted Islamic doctrine should take a strong stand against slavery and the raping of female war prisoners, as happens in Darfur under the explicit canons of Shariah ("Ma Malakat Aimanikum"). Muslims should teach, everywhere and universally, that a woman's testimony in court counts as much as a man's, that women should not be punished if they marry whom they please or dress as they wish.
We Muslims should publicly show our strong disapproval for the growing number of attacks by Muslims against other faiths and against other Muslims. Let us not even dwell on 9/11, Madrid, London, Bali and countless other scenes of carnage. It has been estimated that of the two million refugees fleeing Islamic terror in Iraq, 40% are Christian, and many of them seek a haven in Lebanon, where the Christian population itself has declined by 60%. Even in Turkey, Islamists recently found it necessary to slit the throats of three Christians for publishing Bibles.
Of course, Islamist attacks are not limited to Christians and Jews. Why do we hear no Muslim condemnation of the ongoing slaughter of Buddhists in Thailand by Islamic groups? Why was there silence over the Mumbai train bombings which took the lives of over 200 Hindus in 2006? We must not forget that innocent Muslims, too, are suffering. Indeed, the most common murderers of Muslims are, and have always been, other Muslims. Where is the Muslim outcry over the Sunni-Shiite violence in Iraq?
Islamophobia could end when masses of Muslims demonstrate in the streets against videos displaying innocent people being beheaded with the same vigor we employ against airlines, Israel and cartoons of Muhammad. It might cease when Muslims unambiguously and publicly insist that Shariah law should have no binding legal status in free, democratic societies.
It is well past time that Muslims cease using the charge of "Islamophobia" as a tool to intimidate and blackmail those who speak up against suspicious passengers and against those who rightly criticize current Islamic practices and preachings. Instead, Muslims must engage in honest and humble introspection. Muslims should--must--develop strategies to rescue our religion by combating the tyranny of Salafi Islam and its dreadful consequences. Among more important outcomes, this will also put an end to so-called Islamophobia.
Dr. Hamid, a onetime member of Jemaah Islamiya, an Islamist terrorist group, is a medical doctor and Muslim reformer living in the West.
May 25, 2007 | 5,491 | 2,736 | 0.000368 |
warc | 201704 | Lee Berger led a team of trowel-blazing scientists behind one of the richest collections of hominin fossils ever discovered
Lee Berger put his ad up on Facebook on October 7th, 2013. He needed diggers for an exciting expedition. They had to have experience in palaeontology or archaeology, and they had to be willing to drop everything and fly to South Africa within the month. “The catch is this—the person must be skinny and preferably small,” he wrote. “They must not be claustrophobic, they must be fit, they should have some caving experience, climbing experience would be a bonus.” “I thought maybe there were three or four people in the world who would fit that criteria,” Berger recalls.
“Within a few days, I had 60 applicants, all qualified. I picked six.” They were all women and all skinny—fortunately so, given what happened next. Berger, a palaeoanthropologist at the University of the Witwatersrand, sent them into the Rising Star Cave, and asked them to squeeze themselves through a long vertical chute, which narrowed to a gap just 18 centimeters wide.
That gap was all that separated them from the bones of a new species of ancient human, or hominin, which the team named Homo naledi after a local word for “star.” We don’t know when it lived, or how it was related to us. But we do know that it was a creature with a baffling mosaic of features, some of which were remarkably similar to modern humans, and others of which were more ape-like in character.
This we know because the six women who entered the cave excavated one of the richest collections of hominin fossils ever discovered—some 1,550 fossil fragments, belonging to at least 15 individual skeletons. To find one complete skeleton of a new hominin would be hitting the paleoanthropological jackpot. To find 15, and perhaps more, is like nuking the jackpot from orbit.
The early hominins included the australopiths, with their sturdy builds, long arms, short legs, and small brains. A couple of million years ago, they were joined by the first members of our genus Homo, with their longer legs, stiffer walking feet, more dextrous fingers, and much larger brains. And some curious species harbor traits that are typical of both lineages.
In 2008, Berger found one such mosaic in South Africa’s Malapa cave: a new hominin called Australopithecus sediba. He spent the next five years studying it. The project became so all-encompassing that in 2013, Berger, an explorer at heart, realized that he had stopped exploring. To rectify that, he enlisted two cavers, Rick Hunter and Steve Tucker, to explore other South African caves that might yield important fossils. The Rising Star Cave was one of them.
When the duo entered it in October 2013, they weren’t expecting much. Cavers had thoroughly explored the system for some 50 years, and the chances of finding anything new were low. Tucker did so by accident. During a rest, he wedged himself in a crevice—and found that his feet didn’t touch the bottom. The crevice, it turned out, led to an absurdly narrow shaft, which descended for 12 meters before opening into a chamber. When Tucker dropped into it, he found bones. He took out his Go-Pro and snapped some shots.
When Berger saw the pictures, he was amazed. He was clearly looking at the skull and jawbone of a hominin, maybe an Australopithecus. “That evening, I couldn’t sleep,” he says. At 2 in the morning, he called Terry Garcia, the National Geographic Society’s chief science and exploration officer, who had funded Berger’s digs before. “If you ever believe in me, believe in me now,” Berger said. “Terry said: Do whatever you need to do.”
Berger quickly rounded up a team of scientists—and six skinny cavers. Marina Elliott was the first and oldest of them on the scene. When she first saw Berger’s ad, she was finishing off a Ph.D. at Simon Fraser University and had already done a lot of fieldwork in Siberia and Alaska. “I was predisposed to extreme environments,” she says. “Telling me that I’d have to do climbing, that it would be underground, and that it would be strange and potentially dangerous… it appealed.” She was joined by five others: Elen Feuerriegel from Australia, and Americans K. Lindsay Eaves, Alia Gurtov, Hannah Morris, and Becca Peixotto.
By November 7th, a month after Berger’s ad went up, a 60-person camp had assembled next to the Rising Star Cave. Three days later, the team ventured inside. “We knew the fossils were going to be super-important. They clearly weren’t human,” says John Hawks from the University of Wisconsin–Madison, whom Berger recruited. “We thought we were going to excavate one skeleton. Because what else would you find?”
After entering the cave, the team almost immediately hit several narrow, pitch-black corridors, a knife-edge ridge called the Dragon’s Back with steep drops on either side, and finally that 12-meter chute. “It’s a long crack, punctuated by shark-teeth protrusions,” says Elliott. “I remember looking down and thinking: I’m not sure I made the right decision.”
“Once you get to the very bottom of the chute, there’s a very small opening. I actually fit my chest in that opening and inflated my lungs; as I exhaled, I could lower myself inch by inch to the floor of the fossil chamber, which is about a four-meter drop. So my legs are dangling in the air, and I’m sliding down inch by inch, trying to find a foothold on the cave wall or on the ladder that’s below me.
It’s very physical. It’s very intense, both physically and mentally. You need a lot of physical strength and mental endurance to go all of that way and then spend six hours in an underground chamber.” –Hannah Morris
At the bottom, the team eased down into an area they dubbed the Landing Zone, before entering the fossil-filled trove that they called the 101 Chamber. On the first day, they excavated a single bone—a mandible. “It came out and we said: Wait a minute, this isn’t what we thought,” says Hawks. Tucker’s images hinted at an australopith-like jaw but he had forgotten to include a scale bar. The actual specimen was vastly smaller, and its teeth were almost human-sized. “We thought: We’re looking at something special here.”
On day two, they settled into a rhythm. The six cavers went underground in six-hour shifts, working in claustrophobic darkness and often on all fours. The most challenging part “was the emotional intensity of recovering the fossils themselves,” says Elliott. “There was so much material and it was friable and delicate. And every day, we realized that we were pulling out another 40 or 60 fragments of this thing that was going to be incredible.” Back on the surface other scientists started preparing and cataloging the fragments. As they unwrapped the packages, “we realized that we had been wrong,” says Berger. “It wasn’t a skeleton. It was more than one.”
NOVA/National Geographic Video: footage of scientists exploring the cave.
By the end of the week, the team had excavated more fossils than had ever been found in a South African site. Shortly thereafter, they exceeded the tally from all of southern Africa from the previous 90 years. It took months to process all the 1,550 or so fragments and assemble them into 15 skeletons—male and female, elderly and infant. As hominin fossils go, that’s a superlative haul, paralleled only by Spain’s Sima de los Huesos cave, a bonanza of Neanderthal remains.
“These types of multiple individual sites are rare and important for looking at variation, which is after all the thing that evolution works from,” says Susan Antón from New York University, who was not involved in the project. Some other hominins are known only from the most measly of specimens, like small pieces of jaw or finger. “You’re always used to taking a single scrap and working it to death,” says Hawks. “If we only had one piece of Homo naledi, we wouldn’t know anything like the picture we have.”
That picture is evocative but confusing. It shows a slender, upright hominin, which stood between 4.5 and 5 feet tall. It had relatively long legs and very human-like feet, which probably made it a good long-distance walker—a trait that Berger describes as “the defining characteristic of the genus Homo.” Then again, its hip bone was flared in an australopith-like way, and its thigh bones had ridges that were unlike any found on other hominins.
Its arms are similarly confusing. The shoulders are almost ape-like, but the hands “are more human-like than any other fossil hand, except for Neanderthals,” says Hawks. However, its fingers are incredibly curved and its first thumb bone had unique ridges for the muscles that draw it close to the hand. This was a creature with a very powerful grip. “I don’t know what to make of that,” says Berger. “They’re climbing, but I don’t know what they’re climbing.”
And then there’s the skull. As I talk to Berger over Skype, he picks one up from his desk. It neatly fits in his hand, as though someone had shrunk the skull of Homo erectus down to the size of Australopithecus. “By the second morning, we were asking ourselves: What are we looking at here?” says Hawks. The team quickly discounted the idea that they had found several species, since the various copies of any single bone were all the same. Every femur looked like every other femur. H.naledi was clearly a single creature, albeit one with a confusing mish-mash of features.
“Everything was just all over the place,” says Berger, whose earlier discovery, A.sediba, also married features from both Australopithecus and Homo. “We were one of the better teams in the world to make this discovery. The idea of mosaicism was drilled into our heads.”
It’s tempting to suggest that both species—A.sediba and H.naledi—were intermediate steps on a straight evolutionary climb from Australopithecus to ourselves. But these are no “missing links.” Both may be mosaics, but they’re different mosaics. Each has different sets of australopith-like and human-like traits that can’t be easily reconciled on the same family tree. It’s especially difficult to do so because the team still haven’t dated the specimens—a fact that has vexed several other paleoanthropologists.
“I am puzzled by the apparent lack of attempts to estimate its age,” wrote Chris Stringer from the Natural History Museum in London, in a related commentary. “If these fossils were three million years old they would tell us something totally different than if they were thirty thousand years old,” adds Carol Ward from the University of Missouri. “Without dates, the fossils reveal almost nothing about hominin evolution, beyond supporting the growing realization that there was much more species diversity than previously thought.”
Hawks is less concerned. “They could be the ancestors of humans. They could be some sort of really primitive creature that lived alongside modern humans,” he says. “To not be able to tell which of those is the case is pretty engaging.” And their behavior might have been similarly engaging. “We have very strong reason to suspect that H.naledi was doing culturally interesting things, and was doing it with a small brain,” Hawks adds. What kinds of things? “Well, like depositing their bodies in a cave.” How did all those bodies get into a chamber that is 70 meters from the outside world and many meters down an impossibly tiny crevice?
On the 1,550 pieces of bone, the team couldn’t find a single mark made by a tooth or a stone tool, or any trace of a fracture that happened when the individuals were still alive. “These were the healthiest dead things ever seen,” says Berger. That ruled out cannibals, prehistoric serial killers, or predators that dragged them down into the crevice. The sediment in the chamber also revealed no evidence that water had carried the bodies in from outside. There’s no debris to suggest that the individuals were actually living in the cave. And most tellingly of all, except for a few bones from a bird and some rodents, H.naledi is the only thing in that chamber. “We found nothing else, and the only time you ever find just one thing is when humans deliberately do it,” says Berger.
Perhaps they took their dead to the cave and dropped them in from the top of the chute. “I don’t see any other conclusion,” Berger adds. “You have a cave that has always been in the dark and has never been exposed to the outside world. There’s no water flowing in. No other animal could get into that chamber. And you have a whole bunch of this one species of hominin, that don’t come in at the same time, and that have no damage, or signs of scavenging. Wanna call it burial? If we found them in any human context, anywhere, you would. We have no better hypothesis.” Was H. naledi really carrying out a burial ritual, despite having a brain no bigger than a gorilla’s? Did they invent the practice independently of our ancestors? “These are going to be great things to explore,” says Berger.
The Rising Star team is now busy trying to describe specific parts of H. naledi’s anatomy, such as its feet, hands, and legs. And they are encouraging other anthropologists to study the fossils, by uploading scans and 3-D models of the fragments to an open database. Eventually, anyone should be able to print H. naledi if they want to.
The team is also actively searching for even more fossil troves in other parts of the Rising Star Cave, as well as other sites. Elliott, one of the skinny cavers who answered Berger’s ad, is now directing the operation on the ground, as well as leading expeditions into other caves.
“Five years went by and we sat in the lab having won the lottery and not going out into the field,” says Berger. “So I say: Buy another ticket. Because it appears that the odds are not that bad.”
A WORTHING school is trialling a large-scale composting system in a bid to be more eco-friendly.
Field Place First School is one of only two schools in the county to take on the challenge, and the composter will stay until September.
Pupils, led by the school’s Eco Team, were shown how to use the Ridan food waste composter by Rachel Carruthers, waste prevention team member.
It works at the turn of a handle by combining food waste and wood pellets.
The partially composted food waste is added to maturation bins where it can be mixed with shredded paper, and when fully composted after around three months, the mixture will be ready for use in the garden.
Head teacher Linda Bateman said: “Field Place is very keen to be as eco-friendly as possible and in September, all infant children will be eligible for a hot school lunch so ultimately, we will be serving 360 meals a day.
“The thought of all the waste from this going into landfill worried me, so any system to reduce the amount of waste was worth looking at.
“We plan to use the resulting compost on our allotments and this will help produce fruit and vegetables to use by the children in learning around healthy food.”
The school hopes to make good use of the composter, recycling all the food waste produced by the school, following in the footsteps of 100 Devon schools which are already successfully using the system.
Fruit and vegetable peelings from snack time will also be composted.
Ridan composters use no electricity and composting takes place on site.
Case Study 1 Q1: Describe the HRM practices that were conducted at House Smart. What evidence is there that the four key roles of an HR manager were applied? What impact did this have on organizational growth and survival?
(a) HRM practices were not closely followed in the company. The following are the few HRM practices that were used:
Social gathering of employees
Good relationship among employees and management
Motivating employees by treating them like a family
Selection among the existing employees (Jane was selected)
Leadership (Colin was the only leader)
Centralized decision making (by Colin)
(b) There was no evidence that the four key roles of an HR manager were applied. The company was not following HR practices in its management, and Colin, as leader and HR manager, was not effective in applying them.
(c) Because HR practices were not followed and the key HR roles were not applied, the organization's growth was damaged.
The company was badly affected because the HR roles were not applied. Employees were not performing well, and no job descriptions were available for them. There was a lack of planning, so employees did not know what to do in specific situations. The company was struggling to survive because of its HR problems; with good HR management, it might not have declined. Due to these unsuitable HR practices and inefficient management, the House Smart Furniture Company suffered bankruptcy and had to shut down its office in Hong Kong.
Q2: Analyze the life cycle of the organization and compare it with the HR planning and forecasting needs to encourage growth and sustainability. What influence would effective HR planning have had...
Accounting with Finance (Hons)
Advanced Financial Accounting Coursework
'Discuss the impact of the adoption of international accounting standards in the UK'
Most companies in today's world have to prepare some form of detailed record of their activities during the year. There are several users who require such detailed information, and therefore companies produce yearly accounts showing items such as income, cash flows, and asset measurements. These users usually have a vested interest in the company: the most in need are probably investors, who have a monetary interest in the company, and directors, who seek to maintain the financial well-being and future growth of the company, but there are also others who may take an interest. Creditors may be interested to know the likelihood that a company can honour its debts. In addition, financial statements are required when a company produces its yearly tax return. A useful set of financial accounts provides shareholders with precise information about the value of their shares, levels of dividend receivable, and the future value of their investments.
In addition, the actions of directors are made clear and any actions adverse to the business' performance can be avoided. In the interest of fairness and comparability, several rules and regulations known as standards apply to the writing of financial reports. These are enforced by independent accounting bodies. In the UK the main body involved in setting accounting standards is the Accounting Standards Board.
The accounting standards each country operates under can vary; in something as basic as inventory valuation, practices in major countries include:
* Cost (FIFO) (e.g. some Japanese companies)
* The lower of FIFO and net realisable value (e.g. general UK practice);
* The lower of LIFO and current replacement cost (e.g. common US practice).
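To see how much the choice of standard can matter, here is a minimal sketch (the quantities and prices are illustrative only, not drawn from any real company) that values the same closing inventory under FIFO and LIFO:

```python
from collections import deque

def closing_inventory_value(purchases, units_sold, method="FIFO"):
    """Value the units left in stock after sales.

    purchases: list of (units, unit_cost) tuples in chronological order.
    Under FIFO the oldest units are deemed sold first, so the remaining
    stock carries the newest costs; under LIFO it is the reverse.
    """
    layers = deque(purchases)
    remaining = units_sold
    while remaining > 0:
        # Consume from the oldest layer (FIFO) or the newest layer (LIFO).
        units, cost = layers.popleft() if method == "FIFO" else layers.pop()
        if units > remaining:
            # Put the unsold part of this layer back where it came from.
            leftover = (units - remaining, cost)
            if method == "FIFO":
                layers.appendleft(leftover)
            else:
                layers.append(leftover)
            remaining = 0
        else:
            remaining -= units
    return sum(units * cost for units, cost in layers)

# 100 units bought at 10, then 100 at 12; 150 units sold.
buys = [(100, 10.0), (100, 12.0)]
print(closing_inventory_value(buys, 150, "FIFO"))  # 50 newest units at 12 -> 600.0
print(closing_inventory_value(buys, 150, "LIFO"))  # 50 oldest units at 10 -> 500.0
```

With rising purchase costs, FIFO leaves the newer, more expensive units on the books, so the two methods report different inventory values (and therefore different profits) from identical transactions.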
Therefore the existence of different standards can lead to...
With the change of the season and the beginning of a new quarter, now is a good time to take a look at all those "New Year's Resolutions": what have you done towards accomplishing them? Did you forget about them? Are you tired of starting a task but not finishing it, or of feeling stuck and unhappy? It is time to wake up and bloom this spring. Spring is the season most people associate with rebirth and renewal. Yes, that means it is time to get up off your butt and start to put energy and time behind your heart's desires. There is no more room for excuses this spring. Step into a place of commitment. Commit to your growth; investing in yourself is one of the best investments you can make in your lifetime. So if working out is what you told yourself you would do, then do it; if looking for a new job is what you told yourself you would do, then do it; if saving money is what you told yourself you would do, then do it. Pay close attention to what you are saying to yourself, because the more you say you are going to do something and don't do it, the more you dishonor yourself and create a space of not trusting yourself. You don't want that, do you? Then stop it now, right now; use your power and do what you say you are going to do. Start working on a new routine for this spring. It takes about six weeks to develop a new habit. Are you ready?
You also may be hearing people talk about spring cleaning during this time of year. Most think of it as a time to clean their house and garage and switch over their closet, but it is really an opportunity to check your internal inventory, clean up what's going on inside, and make a switch. Your house can be a symbol of your body, your garage a symbol of your mind, and your closet a symbol of your heart. You need to be brutally honest with yourself about where you are in your life and take complete ownership; this is the way you can truly do a proper spring cleaning. Through this cleaning you can begin to plant the seeds that will reap success.
Here are three key ways that you can bloom…
1. Align With Your Creator
You have been created for a specific life assignment, yet you tend to be distracted by life's noise. Begin to listen to your gut feeling, because that is your Creator speaking to you. Also be open to receiving messages from nature and other people. Your Creator has made you an unlimited being, so tapping into that source is a profound way to grow.
2. Change Your Mind
Remember, tiny changes produce huge results. Become present in each moment. It takes consistent practice to master changing your mind. You have about 60,000 thoughts a day; start reworking your negative thoughts and developing a new perspective.
3. Accountability
Write down your plans and vision and read them often. Also get someone to hold you accountable: a friend, family member, or coach. You will be surprised how much your levels of commitment and productivity will increase.
As Neale Donald Walsch said, “by your decisions you paint a portrait of who you are”. Use this new season to bloom into your authentic self. Stay focused and encouraged; what new decisions will you choose to make during this spring?
Ready to take going down to the next level?
Oral sex is a highly intimate sex act. Many people find oral sex to be way more intimate than intercourse. Perhaps this is because oral sex triggers a lot of feelings of vulnerability and it's emotionally intense to let someone so close to your most sensitive parts. Yet, oral sex is also one of the most pleasurable and orgasmic sex acts, so learning how to enjoy it fully is an essential step in loving your sex life.
Oral sex positions do more than create specific angles and access to sensitive spots; they communicate your feelings and excitement about receiving it.
In researching our book The Pleasure Mechanics Guide To Cunnilingus, we surveyed thousands of men and women about what they find sexy about oral sex. The overwhelming response was that those giving oral sex want their lovers to demonstrate their enthusiasm and excitement by participating. The biggest turn-off? Giving oral sex to someone who stayed still and silent.
Exploring new oral sex positions can help you open up new channels to communicate your pleasure and arousal while receiving, and make giving oral pleasure a lot more fun, too. Here are a few positions to experiment with as you lean back and enjoy the pleasures of oral sex.
For the best oral sex positions for cunnilingus, try:
1. The Classic
The classic oral sex position for women receiving cunnilingus is a classic for a reason. One of the reasons oral sex is so pleasurable is due to the opportunity to lie back, relax and receive pleasure. Getting comfortable in bed and allowing your lover to lavish you with pleasure is a delicious and simple position.
You can experiment with a few modifications to make this classic oral sex position even more pleasurable.
Don't make your lover suffocate under the sheets while going down on you. Throw off the covers and allow your lover to breathe freely while pleasuring you. The bonus? You get to feast your eyes on your hot lover in between your thighs. Make it even more intimate by making eye contact as your lover continues to pleasure you. Place a pillow under your hips to elevate your pelvis, giving your lover a bit more breathing room. Plant your feet on the mattress, allowing you to push off your feet and move your hips. This can be your first step in becoming a more active receiver during oral sex. By shifting your hips a little up and down, or to the right and left, you can help your lover's tongue find your most sensitive spots. Once you gain more confidence and are ready to go wild, try moving your hips in circles to maximize your pleasure. Adapt the classic oral sex position by wrapping your legs around your lover's shoulders.
2. Queening
If you want to take control of your pleasure and give your lover an overwhelming experience, try queening, which is also known as facesitting. Like a true queen, you take your throne by sitting on your lover's face.
Get your lover comfortable lying down in bed and then kneel over them, presenting your most intimate parts for them to pleasure. You can kneel up or down in order to control how much they can reach you, move your hips for your own pleasure, and even experiment with engulfing their face in your flesh for short, intense bursts.
Don't be shy about suggesting this position; many guys fantasize about this experience and think it's a total turn-on. Queening is truly an empowering experience, so if you're ready to claim your erotic power and beauty, assume the throne.
3. Doggy style
Ready to get primal? Get on all fours and allow him to stimulate you while you move and rock your hips. He can either be totally behind you if you're comfortable with his face so close to all your parts, or he can lie down and use his mouth around your clitoris. Enhance the primal experience of this position by making lots of noise, expressing your pleasure with moans, groans and even growls.
4. Recline and receive
If you have a comfortable recliner or reading chair, try putting a pillow on the floor in front of you, scooting to the edge of the chair and inviting your partner to kneel in front of you. This can also work on the edge of a bed. Inviting your lover to kneel in front of you and perform oral sex is a delicious way to allow them to worship your lovely body.
For the best oral sex positions for blowjobs, try these:
1. The Classic
The classic oral sex position for men is very similar to the classic oral sex position for women. Guys get to lie back, relax and enjoy all of the pleasure of fellatio. This position is great for the giver as well because it allows the giver to control the depth and speed of penetration. This position works well for an oral quickie when you're using oral stimulation as part of foreplay. But if you want to make oral sex last a long time, this position isn't the best choice.
The downside to the classic oral sex position for men is that the giver has to support their body weight, which usually means losing the ability to use their hands as part of the erotic stimulation. Hands are an essential part of giving great oral sex to a man, so freeing up the hands is a great reason to explore new oral sex positions.
2. Standing up
One great oral sex position that frees up the giver's hands is having the man stand up while the giver sits on the side of the bed or on a low chair. The giver can also kneel in front of the man, using a pillow to be more comfortable. Sitting is more comfortable than kneeling for most people.
Receiving oral sex while standing up opens up a few great pleasure benefits. Most importantly, the giver uses their hands to add more stimulation to oral sex. Using hand job techniques during oral sex allows complete stimulation of the entire penis while taking the pressure off the giver to take the entire length of their lover's shaft into their mouth.
Standing also allows the man to move his hips, which a lot of guys find highly pleasurable. Explore allowing him to move his hips gently to add more sensation while receiving.
3. Irrumatio
Irrumatio is a Latin term describing oral sex where the receiver thrusts into the mouth of the giver. While this term has some negative associations, consensual irrumatio can be incredibly hot and is worth trying out. With the giver kneeling or sitting comfortably, the receiver takes control and thrusts into the giver's receptive mouth.
Trust and communication are essential here. The man must exercise enough self-control to keep the thrusting comfortable and pleasurable for his lover, while the receiver must stay relaxed and receptive enough to receive the thrusts. If you want to try irrumatio, we recommend having a hand signal in place to communicate if the thrusts ever get too deep or too fast. This oral sex position can be wildly thrilling for both partners if done with respect and awareness.
4. Head off the bed
If you want to explore deeper oral penetration, try the "head off the bed" position. The giver reclines on the bed with their head off the edge, so the neck is bent backwards. This position lines up the mouth and the throat, allowing deeper penetration with less gagging.
The trick here is getting the height right. The guy needs to be able to line up with the head without stretching or squatting. You can also try this oral sex position while lying on a dining room table or even a pool table. Remember, deep throating is a very advanced skill and won't be pleasurable or comfortable for everyone, so go slowly with exploring this position and make sure you're both enjoying it.
5. The Throne
While there's no male equivalent for the queening position, men deserve to feel like royalty, too. Receiving oral sex while seated in a comfortable chair, his lover kneeling in front of him, is perhaps the closest men can feel to sitting on a throne. Make it even more special by allowing him to watch a sports game, porn or another video of choice. Or put a special drink in his hand, or light a cigar before going down.
The goal is to add extra sensual pleasures while giving the guy great head. Save this as a once in a while treat to celebrate birthdays, anniversaries or other important achievements, and you'll have him looking forward to it all year long.
For the best oral sex positions for simultaneous oral sex, try these:
1. 67
We all know the classic 69 position, but very few people can actually reach orgasm in this position. One of the greatest things about oral sex is taking turns. One person gets to totally relax and receive, and the other person can focus on giving as much pleasure as possible. 69 can be really distracting, and most people just end up moaning and gasping rather than focusing on giving.
Try the 67 position: the giver on top of the receiver, but with their body off to the side a bit. This way, you can enjoy the full body contact of having your lover on top of you while you're being pleasured, but without the distraction of having to give at the same time.
2. 69, on your sides
If you really want to explore the 69 position, try it with both of you lying on your sides. This prevents one of you from having to support your weight while hovering over your lover's body. Each of you can drape one leg over the other's shoulder to create access for oral stimulation.
Side-lying 69 can be very slow and luxurious. Try taking turns: one of you stimulates the other for a few moments and then switch roles. 69 is overwhelming for many people, as it's hard to focus on giving and receiving all at the same time. But it's also thrilling, so try it out once in a while to add variety.
Exploring new oral sex positions can open up new pleasures for both you and your lover. Remember, the sexiest qualities during oral sex are confidence and enthusiasm.
Each new oral sex position gives you permission to express yourself in a new way, showing your lover how much you're enjoying their generous oral stimulation. The sexiest oral sex position is the one that allows you to fully enjoy your lover's stimulation, so explore them all and find your personal favorites.
A More Appropriate Comparison
Comparing different European chocolates is fun and tasty work, but a more appropriate comparison might involve American chocolate. While the styles are extremely different, we have gone ahead, once again, and made the sacrifice to equip you to make an informed choice, this time between French and American chocolate. We have taken two different types of chocolate to compare: bars and filled chocolates. As should be done with any gourmet chocolate, we will look at the ingredients, preparation, and presentation (more with the filled chocolates because, after all, how many ways are there to present a bar of chocolate?). We hope that you will use this, more than anything, as motivation to make a comparison of your own.
Setting the Bar
The comparison of bar chocolate was fairly straightforward. All of our taste testers agreed that American chocolate is sweeter, plain and simple. French chocolate retains more of the rich chocolate flavor, particularly when it comes to dark chocolate. As milk chocolate in bar form is not really a specialty of France, it was fairly easy for the US to win this category. But in the tasting of dark chocolate in bar form, unless what you crave is overly sweet dark chocolate, the French definitely come out ahead.
According to one taste test, "Traditional American-Style chocolate is lighter and sweeter than European-Style chocolate and the flavors are even more pronounced and identifiable," whereas French-Style chocolate is described as "darker and less sweet, has subtler flavors". We know, though, that many people base their taste for chocolate on what they are used to, so we would encourage you to make your own comparison and decide for yourself!
The Obvious Observations
The category of filled chocolates was definitely a bit more interesting. There are a few things that are noticed right off the bat. The first is the packaging. American chocolate is presented in a large flat box with individual paper cups protecting one chocolate from another. French chocolate on the other hand is presented in the traditional ballotin, the Belgian invented packaging that is used throughout Europe to house their variety of delicate creations. The second easy observation is the size of the chocolates. This should come as no surprise but American chocolates are bigger. They include multiple whole nuts instead of the occasional one whole nut that you may find in a French chocolate. And while there is a bit of variety in their shape and size there is definitely more variety with French chocolate.
A Surprising Juxtaposition
When it comes to the flavor of the chocolates, an interesting juxtaposition appeared. French cuisine is known for being subtle and simply delicious. It uses few ingredients but uses them well, creating amazing sauces to go with well-prepared meats, and relying on skillful composition to bring out the best in foods' natural flavor. American cuisine, on the other hand, is bolder and uses many more ingredients, combining an abundance of flavors to create busy dishes that rely more on flash than skilled preparation. These cultures' best chocolate offerings seem to follow exactly the opposite rules.
The Major Flavor Difference
American chocolate uses a small variety of ingredients, mainly caramel, almonds, peanuts, and chocolate cream (75% of the chocolates that our participants tasted included one or more of these ingredients). French chocolate is the complete opposite of this in that it uses a wider variety of all kinds of flavors; fruits, nuts, spices, herbs, caramels, ganaches, and more!
Perfect Subtle Flavors
French chocolate, while it may seem more complex than traditional French cooking, does also hold a few similarities. Each chocolate seems to be its own little masterpiece. French chocolatiers appear to put as much care and effort into one small chocolate as a French chef would put into an entire entrée. Also, as with French cooking, one flavor never overpowers another. One of the observations made during our taste test was that the chocolate flavor in the French treats was less overpowering and allowed the flavor of the filling to come out more than with the American chocolates.
Some Other Observations
A few other things came out while we were conducting our taste test that are worth sharing. Our American participants had to take some time to get used to the creative essence of the French chocolates. At first they found them a little overwhelming, but by the end they realized there was a subtlety they really enjoyed. They were also frustrated at first by not being able to identify the different flavors, but eventually found the joy that comes with seeking out each individual, creative aspect of the various chocolates.
Search for Yourself
So we have given you our opinion and that of our taste testers, and once again French chocolate truly comes out on top. We hope, though, that our work will do no more than motivate you to conduct a test of your own. Invite some friends, splurge on truly fine chocolates, and take your time. We hope that it proves to be a tasty adventure!
Amid the uncertainty following the Fukushima Daiichi incident, citizen science groups Safecast and Radiation Watch have been providing a critical service for nervous residents, both by building radiation monitors and by collecting data from the fallout zone. IEEE Spectrum reports.
Safecast was organized just days after the explosions. Its founders hoped to buy and distribute Geiger counters to residents in the fallout zone so they could do their own monitoring, but the world's supply of Geiger counters had already sold out.
So the tech-savvy, DIY-inclined volunteers built their own device that they called a bGeigie (pictured above), which they had strapped to their cars while they drove through Fukushima. The bGeigie takes a radiation reading every five seconds and tags it with the GPS location; that data is used to build the maps that Safecast puts online.
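The data format behind those maps is easy to picture. The sketch below is an illustration only: the field names and the counts-to-dose conversion factor are assumptions for the example, not Safecast's actual schema or calibration (the real conversion depends on the Geiger tube used).

```python
import json

# Illustrative conversion from counts per minute to microsieverts/hour.
# A placeholder value; the real factor is tube-specific.
CPM_TO_USV_H = 1 / 334.0

def tag_reading(cpm, lat, lon, timestamp):
    """Bundle one radiation sample with its GPS fix, as a map-ready record."""
    return {
        "time": timestamp,
        "lat": lat,
        "lon": lon,
        "cpm": cpm,
        "usv_h": round(cpm * CPM_TO_USV_H, 3),
    }

# One hypothetical 5-second sample from a moving car.
record = tag_reading(cpm=52, lat=37.4211, lon=140.9669,
                     timestamp="2011-06-04T09:15:00Z")
print(json.dumps(record))
```

A stream of such records, one every five seconds, is all a mapping service needs to interpolate dose rates along a driven route.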
Their bGeigie devices (the "b" is for "bento") cost $1,000 per unit. Twenty-five have been deployed. At a school playground, for example, the device showed that decontamination work had been relatively effective in the playground itself, but radiation levels were higher in the undergrowth on the edges of the play area.
The group is now working with the country's postal agency, Japan Post. With bGeigies on deliverymen’s motorbikes, radiation data will be collected daily for the Safecast maps.
Radiation Watch's Pocket Geiger has made monitoring even cheaper: the latest, fifth-generation model costs about $70 and takes two minutes to get a dose-rate reading. There are now about 12,000 Pocket Geiger users.
Images: Nokton via Flickr (top) / Radiation Watch (bottom)
This post was originally published on Smartplanet.com
Our dying services sector
What it means
In sifting through more economic data and trying to pin down exactly what is going on in this recovery (yes, it is a never-ending task) I uncovered some very interesting trends. You might find them interesting, too…
Although we are in a global slump and US export markets are now under pressure, if we compare this economic recovery to past cycles it turns out that the US manufacturing sector is doing much, much better than the services sector. Not only are services jobs lagging far more than is normally the case, but services spending is anemic, too. On the goods side of the economy, industrial output is actually near normal despite the 'weak recovery.' It is astonishing. And while 'demand' or GDP has been weak in this recovery, 'public enemy number one' might just be the composition of demand more than demand itself. There has been too little demand for services and therefore too little job growth.
Cyclical overview of industrial output
If we look at output from the start of the recession (instead of the end), of course it is weak. Moreover, it is weaker than in any other Post-War recession period for all categories of industrial output. But that is obvious. It's for the same reason that you run the 50-yard dash faster than the 60-yard dash. This recession has been longer than past Post-War recessions, so the comparison is not fair - and then there is that 'weak' part. To put comparisons on a 'fair footing' we look at the rise in output in this cycle compared to past cycles, counting from the end of the recession. To do this we group past cycles into two divisions: the expansions from the 2001 and 1990 recessions, with their weak recoveries; and the recessions of 1975 and 1982, which had strong recoveries.
It turns out that this recovery in industrial output is pretty much a middle case, and that is nearly uniform across the sectors I have compared: it is faster than the slow ones and slower than the fast ones. Looking at the headline for industrial production and several major divisions (MFG, durables, non-durables, consumer goods, consumer non-durables, consumer durables and business equipment), the average of the fast-slow cycles was faster than this cycle in only three out of eight categories. And in this recovery, overall durable goods output and business equipment output are actually faster than they were at this point even in the two strong recoveries. If we make the same comparisons for services, nothing is stronger in this cycle.
Goods and services in GDP: the structural shift
I have a longer research piece with exhibits on my blog (robertbrusca.blogspot.com) that shows charts on goods and services spending in the aggregate. Since 1985 the trend for GDP spending on goods economy-wide is dead flat at about 4% per year. The same calculation for overall services spending shows a considerable downshifting and a clear shrinking trend. This is for overall economic services spending, not just the more familiar spending on services in PCE.
For both of these broad sectors employment trends are also on the decline.
While there is a lot of focus on this recession, and special problems have emerged since the financial crisis, it seems that this recession has simply peeled back a view of the economy that has been hidden. Some suggest that the boom in construction hid the structural changes in the economy because it allowed some very under-skilled workers to obtain and hold relatively high paying jobs.
Elsewhere I have written on the long-term shifting trends in labor force participation rates (see http://seekingalpha.com/article/355341-a-closer-look-at-labor-participat...). While there is a focus on the current economic situation, it seems that our circumstances are the result of a number of irons that have been warming in the fire for some time, rather than effects that sprang fully formed out of the financial crisis itself.
One of the intriguing things to me is why the services sector is so weak. Is it linked to other problems and imbalances? While there is reason to be concerned about the large US current account deficit and the damage being done by other countries that pursue export-led growth strategies displacing US-sourced production, that does not seem to be the front-and-center problem in services. Still, the loss of output in the US, regardless of the firmness of goods growth in GDP, implies that there have been fewer domestic multiplier effects than there might have been. That means outsourcing goods output has contributed to some of the weakness in services. By having slower MFG output growth we have churned out fewer positive externalities in the domestic economy that might have stimulated services.
More to the point is that our MFG sector has been kept at a world class pitch by the competition from abroad. And that competition has been painful. Meanwhile, domestically there has been much less competition in the non-traded goods sector and now those chickens are coming home to roost…
We can see this very clearly in the way the services sector encompassing government has raised pay, added benefits and promised pensions, creating a string of liabilities that is crippling government today. State and local governments have added to the bulk without adding a commensurate contribution to GDP. Many localities are now being crushed under the burden of retirement benefits they can’t afford to pay and for which they have not properly reserved.
In this cycle some of that is being addressed. It is the pullback in state and local hiring that is the unprecedented factor in this cycle. But over the longer run it has been a reduction of services demand and jobs in the private services sector that has suddenly emerged as an issue, even though it has been ‘in train’ for some time, as the trends clearly show. Why has this happened?
When we look at services, what do we see? We see some of our least competitive and most controversial sectors. We see education, where public sector quality has slipped and private sector costs have soared. We see healthcare, where much the same has been in train, with costs rising faster than the CPI persistently and by a large margin; in medicine, innovation and new technologies are key drivers of costs, making it unique. These sectors, along with government, make up about 36% of services employment. When we compare core services prices to core goods prices, we find that services prices have been advancing faster than goods prices for some time. Since 1990 the ratio of core goods to core services prices (taking energy out of the picture entirely) has fallen by nearly 25%. Services have become very expensive relative to goods; is it any surprise that we are buying fewer of them? Cable TV has become increasingly expensive, for example, despite technology and competition from the internet. Here we have a business that in many communities has monopoly status.
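The stated 25% decline implies a steady relative-price drift. A quick back-of-the-envelope check; the 22-year window is my assumption, reading "since 1990" against the piece's apparent 2012 vantage point:

```python
# Implied average annual drift in the core goods / core services price ratio.
# Assumption (not from the text): roughly 22 years between 1990 and the
# time of writing.
years = 22
ratio_remaining = 0.75  # the ratio "has fallen by nearly 25%"

annual_drift = ratio_remaining ** (1 / years) - 1
print(f"{annual_drift:.1%}")  # roughly -1.3% per year
```

In other words, goods needed to cheapen relative to services by only a bit over one percent per year to produce the cumulative 25% shift.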
I think what we are seeing in the US is less the impact of some sudden shift than an ongoing adjustment to technology and a sorting out of competitive forces. With more pressure on budgets, consumers are going to look more closely at some of the services they were used to purchasing and consider providing them for themselves. Much services expenditure is associated with homeownership. The stress on that sector has undoubtedly cut back on that spending. But that is only the new aspect. Being unemployed focuses your attention on things you can do for yourself when time is more plentiful and money is scarce. That helps to explain why some of this contraction has been accelerated in this downturn.
Still I think the point for services is that the down trends go back to 1985.
Cyclical comparisons by sector
When we compare some aspects of this recovery to the past seven recoveries, services spending emerges as exceptionally weak. Household and utilities spending, health care spending and recreation spending, as well as spending for financial services, are the absolute worst at this point in the cycle compared to past cycles. Transportation services are better in this recovery than in only one other. Food bests two other recoveries. Meanwhile, goods in GDP are faring better generally, but not for all categories. Recreational goods purchases are the weakest in this recovery, but spending on vehicles is in the 54th percentile of the high-low range, making it more or less average. Spending on other durables, including household items like furniture and electronics, is in the 30th percentile - weak but not as bad as for services. These calculations are from GDP accounts and track demand trends.
So why is the goods sector doing better? It certainly is true that multinational corporations have the option to go ahead and do things abroad that are either taxed the most here, regulated the most here or are just plain cheaper abroad. It should make you wonder: has this migration abroad been just for cheap labor, or to evade regulation? The NFIB (National Federation of Independent Business) has a survey that assesses its members’ biggest problem. ‘Government requirements’ has just jumped to its second highest reading at this point in the expansion and is very near the all-time cyclical high. Manufacturing firms can outsource to avoid the worst of the requirements in the US, but services firms are stuck. Another reason is prices. As we saw in the previous discussion, goods are simply cheaper relative to services.
Right now health care is a big issue. The Supreme Court is deciding the constitutionality of the Obamacare program. Some firms are said to be wary of hiring with this large obligation hanging in the balance even though its start date is in the future and the law’s status is unclear.
Goods vs. Services
Very few economists have framed the problem of the economy this way, as a goods vs. services issue. But there is a very clear dichotomy between how well the goods sector is doing and how well the services sector is doing. The issue is that we are getting our weakest growth in our most important job-producing sector. Shouldn’t we be paying attention to that?
When I compare spending in recoveries, it turns out that in this recovery spending on goods, especially durable goods, and on business equipment and software has pretty much kept pace with recoveries in the past. At one point business equipment and software spending was faster in this cycle than in any previous recovery. The problem, as I like to recast it, has not been wholly weak demand. It has been the composition of demand. If we had bought more services and fewer goods, there would have been much more job growth in this expansion. With that, income growth would have been higher and corporate profits would have been lower. And the recovery would have been more balanced. We need to look more closely into this area to try to understand what has happened to our services sector.
Some figures for perspective
In this recovery, consumer services bought/supplied have grown by 3.2 percent from their level at the end of the recession, as of the 33rd month of the expansion. It is the weakest performance by a long shot among the last eight recoveries that lasted this long. The previous low at this point in the cycle was 6.5% in the 2001 recovery; before that it was the 9.2% rise in the 1990 recovery. Again, in those comparisons you get the sense of structural change, as it is in the most recent recoveries that growth has become progressively weaker. With normal service-sector growth, the average for this point of the expansion cycle would be an 11.4% gain in services output. If we had that, we would have had 5.5 million more jobs, even after discounting for productivity growth in the sector and for the loss of goods-sector jobs from that demand shift to services. That means about 165K more jobs per month than what we have had all recovery long. This is not a trivial problem; it is a huge problem. And no one seems to be thinking about it.
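The per-month figure follows from the cumulative gap; a quick arithmetic check using the numbers cited above:

```python
# Back-of-the-envelope check of the per-month jobs gap cited in the text.
months = 33       # length of the expansion examined
jobs_gap = 5.5e6  # cumulative shortfall: "5.5 million MORE jobs"

per_month = jobs_gap / months
print(round(per_month))  # about 166,667 -- close to the ~165K cited
```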
It is something to think about.
Given two words, can we connect them by a chain of synonyms? For example:
Try some: minuscule — little — short — poor — wretched — ugly — frightful — tremendous — enormous. [Note: the tool's offline right now due to a system reinstallation; should come back shortly. -- Dec 6, 2011]
In the graphs above, each node is a word and an edge connects each pair of displayed words that can be synonymous. The raw data on semantic relationships is from Princeton's WordNet project.
Many word pairs are not linked. How large is the largest connected component—that is, the largest set of words such that any pair in the set can be connected by a chain of synonyms? An interesting graph theoretic question.
The answer is that the largest component has 25,105 words, or 17% of all words in the database. Meanwhile, the second largest component is over six hundred times smaller, with only 38 words.
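Component sizes like these can be found with a straightforward breadth-first search. A minimal sketch on a toy graph; the word pairs below are illustrative, not actual WordNet data:

```python
from collections import deque

# Toy undirected synonym graph (illustrative pairs, not WordNet data).
edges = [
    ("minuscule", "little"), ("little", "short"), ("short", "poor"),
    ("poor", "wretched"), ("wretched", "ugly"), ("ugly", "frightful"),
    ("frightful", "tremendous"), ("tremendous", "enormous"),
    ("abulia", "apathy"),  # a small, separate component
]

# Build adjacency lists.
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def components(adj):
    """Return all connected components, largest first, via BFS."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            word = queue.popleft()
            for nb in adj[word]:
                if nb not in seen:
                    seen.add(nb)
                    comp.add(nb)
                    queue.append(nb)
        comps.append(comp)
    return sorted(comps, key=len, reverse=True)

comps = components(adj)
print(len(comps[0]))  # size of the largest component: 9
```

On the real WordNet graph the same search, run from every unvisited word, yields the 25,105-word giant component.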
Is this structure bizarre? Actually it's roughly what one would expect knowing random graph theory, which I'll now attempt to explain in one paragraph. A classic (Erdős–Rényi) random graph consists of n nodes, connected by a bunch of edges chosen uniform-randomly. Suppose the number of edges is such that on average, each node has d neighbors (for us, d synonyms). Imagine exploring a connected component by starting at one node and expanding outward: first looking at the nodes that are one step away from the starting point, then two steps away, and so on. What happens to the size of this frontier? If d < 1 then the frontier tends to shrink by a certain percentage in each step, so with high probability it dies out before the component gets very large. On the other hand, if d > 1, then the frontier tends to expand by a certain percentage in each step. Chances are pretty good that the frontier just gets bigger and bigger until the component includes a good fraction of all n nodes in the graph. That's the giant component. The second largest component must be very small, specifically O(log n) nodes, since otherwise it's likely to intersect with, and thus be absorbed by, the giant component.
Our graph of English has d = 2.92 synonyms per word. But of course English is not a random graph—which, with the same d, would have a largest component of 93% of its nodes and a second largest of about 5 nodes. Considering that we get within an order of magnitude without modeling any of the structure of the language except the number d, this is not so bad. And WordNet doubtless does not perfectly represent English: for example, it's plausible that common words are better annotated with synonymy relations than, say, abulia or vituperation.
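The 93% figure for a comparable random graph can be reproduced from the classic giant-component equation: for mean degree d > 1, the limiting fraction s of nodes in the giant component solves s = 1 - exp(-d*s). A short sketch:

```python
import math

d = 2.92  # mean synonyms per word, from the text

# Fixed-point iteration for s = 1 - exp(-d * s), the limiting fraction
# of nodes in the giant component of an Erdos-Renyi graph.
s = 0.5
for _ in range(100):
    s = 1 - math.exp(-d * s)

print(f"{s:.0%}")  # about 93% of nodes
```

The iteration converges because for d > 1 the map has a single stable fixed point above zero; starting anywhere in (0, 1] lands on it.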
I was led to think of this topic during conversation with some folks from UCL and CMU. But others have built a graph out of WordNet too (after all, it is called WordNet). A paper by Kamps, Marx, Mokken, and de Rijke, Using WordNet to Measure Semantic Orientations of Adjectives, scores words based on their relative distance to "good" vs. "bad".
Produced for the Your Fertility program, the short and fun animations show how age, weight, smoking, alcohol use, timing of sex and sexually transmitted infections can affect your chances of conception.
Having the facts about fertility can help men and women make choices that maximise their chances of conceiving and having a healthy baby.
Your Fertility’s series of personal stories.
Dr. Raelia Lew, Fertility Specialist and Gynaecologist, tells us about the risks of smoking and how it can affect your fertility and the health of your future child.
Rob McLachlan, Director of Andrology Australia, discusses the importance of DNA and highlights how quitting smoking will improve sperm quality and the health of a future child.
Quit expert Dr. Sarah White, Director of Quit Victoria, shares her advice with future parents on how they can quit to improve their fertility and the health of their future child.
We approached university students to see what they knew about their fertility. How does your knowledge compare? If you plan on having children in the future, it's important to know the facts about fertility early on.
Professor Sarah Robertson, Director of The Robinson Research Institute, University of Adelaide, highlights the key window before pregnancy when your health matters most to ensure your child has the best start in life.
Ricci-Jane and her partner were told they wouldn’t have a baby without IVF. Ricci-Jane had PCOS, and a stressful new job and poor eating habits had taken their toll on her body. Ricci-Jane talks about her journey to get ‘baby-fit’ and try to conceive without IVF.
McKinley Hildebrandt
Home improvement can be a very interesting thing to get involved with. There are so many ways it can be tailored to every house, and the possibilities are nearly endless. This can make it difficult for a newcomer who has no idea where to begin. This list of tips can prepare you for the process.
Repainting, replacing the carpet, or adding new plants and other decorations are simple ways to upgrade your property. Rearranging furniture and adding mirrors can make the room look larger and more airy. Make certain the room provides the functionality it is supposed to serve while expressing your personality and preferences.
Installing tile can be a costly and very frustrating job, so you need to make sure to do it right. Be sure to properly seal the grout when you are laying the tile, because if you do not it may absorb water, dirt, and various kinds of stains.
Quality counts when you are looking for home improvement supplies. Saving a few dollars on building materials and appliances can be tempting. Even so, it could be a much better investment to spend more money now instead of later. Buy something durable even if it is a bit more expensive.
If you are hiring a home improvement contractor, be sure to watch out for scams. A good contractor will not solicit door-to-door but will wait for you to come to them. Also, while you may be expected to pay something up front, in most cases a reputable contractor will not expect full payment before the job is completed.
You do not necessarily need to get rid of your old wicker patio furniture. Sure, it may be weathered, but you can make it good as new. Replace your old cushions, or sew a new cover for them. Get some low-priced spray paint in striking black or stylish white and spray your wicker set in the chosen color. Be sure to wear a protective mask over your mouth for safety. This will make your patio set look as good as new and give you a reason to spend the day outdoors.
Use color made for touching up appliances to cover up weakn
With it being that time of year again when ‘best of’ lists and predictions for the New Year get published I thought I’d take a hybrid approach in looking at the current state of the Washington, DC start-up community of which I’ve been a part of for the past dozen years. It was a web generation ago that the DC area, namely Northern Virginia, was considered an important technology hub thanks to the likes of AOL, which drove consumer adoption of the web, and UUNET, which built the underlying delivery infrastructure. Unfortunately there’s not much left to show from these early internet successes (just look at AOL’s former campus headquarters in Dulles, VA which mostly consists of Raytheon, a defense contractor, signage these days). If the DC region hopes to reestablish itself as an important tech ecosystem it needs a company, or two, to become anchors in the community that will not only draw technical talent to the region but also enable employees to leverage these companies to start new ventures of their own. So I put together a list of a few start-ups that could become one of these pillars.
Two caveats in putting this list together, though: (1) I excluded any company I’m directly or indirectly involved with (as an employee, investor, mentor, etc.) so as to remove any personal biases and avoid disclosing any non-public information. This meant that portfolio companies of NextGen Angels (where I am a member), for instance, were not considered (and anyway, how could I choose the best amongst all of my companies!), and (2) advertising-supported businesses, like Vox Media, were also eliminated from consideration. The rationale for focusing this list on start-ups that build and sell technology rather than ones that are merely tech-enabled is the diversity of engineering talent needed to run these types of organizations, which is critical in sustaining a tech ecosystem, as well as their ability to build defensible businesses longer-term that are not as susceptible to changing consumer momentum and tastes.
So with these disclaimers out of the way, here are the 3 companies that laid the groundwork in 2013 to potentially become outsized success stories and reestablish the DC region as a major technology hub longer term:
FoundationDB
In 2010 then Google CEO (and now Chairman) Eric Schmidt was famously quoted as saying that the amount of information created in 2 days at the time equaled all the information created between the dawn of civilization and 2003 – and the pace of creation was only increasing. Even if Schmidt was wrong by a factor of 10 that’s still a lot of information that needs to be captured, stored and made available for retrieval. Add to this the complexity of handling disparate data types with varying rules around what information actually constitutes data and you have the reason for FoundationDB’s existence.
The company is developing a new type of core database that supports the modern day needs of web applications by storing and scaling data regardless of the data model being used (geospatial, graph, JSON, traditional, etc.). By combining the scale and distributed architecture of NoSQL databases (which the likes of MongoDB, which has raised over $220 million to date, have popularized) with the power of ACID transactions, FoundationDB is creating an industrial strength database technology similar to the one used by Google to run AdWords. Having started out building its core features to support NoSQL, the company acquired Akiban, another database start-up that used the same abstraction FoundationDB’s substrate uses but for SQL, earlier this year giving the combined entity a unique hybrid solution. The company’s 4-year effort in building a fault-tolerant system was rewarded last month with a $17 million Series A investment, bringing FoundationDB’s total funding to $23 million.
MapBox
Apple’s launch of the App Store in 2008 ushered in the era of mobile computing and with it the importance of location in adding context to mobile application experiences. One of the simplest ways to provide location-based information is via maps – and that’s where MapBox’s technology comes into play.
The company provides cloud-based tools for developers to add interactive maps to their web and mobile applications by leveraging OpenStreetMap data (an open-source project which MapBox also contributes to). Even though the company competes with the likes of Google Maps and ESRI it counts Evernote, Foursquare, GitHub, Hipmunk and Uber among its 2,500 paying customers. Due to this early success, MapBox was able to raise a $10 million Series A in October to expand its offering. That’s because location data is just the starting point of potential for the company as it looks to partner with other data providers to incorporate other types of content to its map offering which would allow it to apply other, relevant, contextual signals that developers could use to enhance the capabilities and user experience of their apps.
SmartThings
The “internet of things” promises to move the internet beyond just computing devices to include everyday devices such as door locks and thermostats. By 2018 it’s estimated that there will be 9 billion such devices connected to the internet – roughly equal to the number of smartphones, smart TVs, tablets, wearable computers and PCs combined. So it’s no surprise that DC-based SmartThings is tackling this huge market opportunity. The company, which raised a $12.5 million Series A last month bringing its total funding to $15.5 million since its founding, started out as a Kickstarter project in 2012. Since then SmartThings has launched its own online store to promote its ‘Smart Hub’ which allows consumers to connect various packages of sensors and devices to the internet to solve specific problems.
While the company faces competition from the likes of Nest (founded by ex-Apple employees who are building a vertically integrated solution) and Revolv (which launched its own hub that connects to existing connected devices and lets people create their own notifications), its biggest threat might be market timing: consumer understanding and adoption of these types of solutions.
So is this list perfect? No. Could I be dead wrong? Absolutely. But the fact that these companies touch on early-stage and fast-growing technology market opportunities gives me hope for their success. All they have to do now is execute.
Wednesday morning, starting at 10am, I’m on a panel testifying to the Senate Budget Committee about the need for a fiscal stimulus. The other witnesses are Mark Zandi and John Taylor.
I’ll post my written testimony after the hearing. I expect to make three main points in my verbal remarks:
1) We are heading into a serious global recession, caused by and in turn causing a process of global deleveraging (i.e., a reduction in lending and borrowing). We have never seen this kind of deleveraging – synchronized around the world, fast-moving, and with an unknowable destination.
2) I do not think we can prevent this deleveraging from happening. Nor do I think we should even try to keep asset prices high (or at any particular level). But in the United States we have the ability to mitigate some of the short-run effects and to lay the groundwork for a sustainable, strong recovery. One sensible tool to use in this context is fiscal policy. I lean towards smart spending programs, but as the economy continues to worsen, I think some kind of temporary tax cut could also help – it can potentially have relatively quick effects. (Note: contrary to those who think that if tax cuts are saved by consumers, they are somehow “wasted,” I would point out that anything that improves consumers’ balance sheets is both good for them and for the financial institutions that lend to them.)
3) But there is a real limit to how far we can go with fiscal policy (and with other policy measures). Irresponsible budget policies would not be a good idea – we need to continue a process of fiscal consolidation; it is most vital that people around the world remain confident in the U.S. government’s balance sheet. Some of the highest numbers now being proposed for a fiscal stimulus are probably too high and a mega-stimulus could be counterproductive if it undermines confidence.
I’m proposing a fiscal stimulus of roughly 3% of GDP, to be spent over several years. Given the uncertainties involved, this seems like reasonable middle ground – it’s enough to make a difference, but doesn’t promise a miracle; it can be spent sensibly and at an appropriate speed; and it will not undermine our ability to consolidate the U.S. fiscal position (i.e., bring government debt onto a sustainable path) over the medium-term.
Formation of Cosmic Dust Bunnies
Date: 2007
Authors: Lorin Matthews, Ryan Hayes, Michael Freed, Truell Hyde
Abstract
Planetary formation is an efficient process now thought to take place on a relatively short astronomical time scale. Recent observations have shown that the dust surrounding a protostar emits more efficiently at longer wavelengths as the protoplanetary disk evolves, suggesting that the dust particles are coagulating into fluffy aggregates, "much as dust bunnies form under a bed." One poorly understood problem in this coagulation process is the manner in which the micrometer-sized charged grains form the fractal aggregate structures now thought to be the precursors of protoplanetary disk evolution. This paper examines the characteristics of such fractal aggregates formed by the collision of spherical monomers and aggregates where the charge is distributed over the aggregate structure. The aggregates are free to rotate due to the collisions and dipole-dipole electrostatic interactions. Comparisons are made for different precursor size distributions and like-charged, oppositely charged, and neutral grains.
Gifts are given and received in various ways. One of my Christmas memories is of my grandmother who often made the presents she gave to her family. The only problem was that my grandmother never quite finished anything. When we passed out our presents, we often were not able to take them home with us. If she had made a shirt for me, the buttons might not have been sewed on quite yet. If it was a dress she had made for one of my sisters, it was not hemmed. When my grandmother died a number of years ago, her house was filled with unfinished presents which never quite made it to completion.
The spiritual gifts God gives to every believer are not like those my grandmother gave. God’s gifts are complete. Not only does God give to each of us spiritual gifts by which the body of Christ is supported and sustained, He also gives us all that is needed to carry out those functions vital to the health and ministry of His body, the church. With those gifts, God gives to each of us not only a measure of grace to empower us for service, but a measure of faith as well. Our text will teach us more about these two endowments.
I have yet another Christmas memory of a relative who seldom kept the gift he was given. If we gave him a new shirt for Christmas, he was as likely to give it away as to keep it. He might very well turn to a relative beside him and ask, “Do you like this shirt? Here, take it.” This was frustrating to watch and difficult to accept. And yet, in a sense, he was an example of the way God wants us to receive and share the gifts He has given to us. Spiritual gifts are not to be hoarded and kept only for our own benefit. They are to be used for the benefit of the body. Spiritual gifts are to be given away, in service.
We should first agree that the subject of spiritual gifts is relevant and vitally important to Christians today. Some evangelical Christians believe and teach that spiritual gifts are no longer applicable, that spiritual gifts were given for the church in its infancy. If this is so, why does Paul choose to speak first of spiritual gifts in this portion of Romans? Why does a matter of minimal importance have such a prominent place in this Epistle? If these spiritual gifts are necessary for the functioning of the church, how could they now be extinct? Elsewhere, Paul explains why spiritual gifts have been given and when these gifts will no longer be needed:
Love never fails; but if there are gifts of prophecy, they will be done away; if there are tongues, they will cease; if there is knowledge, it will be done away. For we know in part, and we prophesy in part; but when the perfect comes, the partial will be done away. When I was a child, I used to speak as a child, think as a child, reason as a child; when I became a man, I did away with childish things. For now we see in a mirror dimly, but then face to face; now I know in part, but then I shall know fully just as I also have been fully known. But now abide, faith, hope, love, these three; but the greatest of these is love (1 Corinthians 13:8-13).
But to each one of us grace was given according to the measure of Christ’s gift. Therefore it says, “WHEN HE ASCENDED ON HIGH, HE LED CAPTIVE A HOST OF CAPTIVES, AND HE GAVE GIFTS TO MEN.” (Now this expression, “He ascended,” what does it mean except that He also had descended into the lower parts of the earth? He who descended is Himself also He who ascended far above all the heavens, that He might fill all things.) And He gave some as apostles, and some as prophets, and some as evangelists, and some as pastors and teachers, for the equipping of the saints for the work of service, to the building up of the body of Christ; until we all attain to the unity of the faith, and of the knowledge of the Son of God, to a mature man, to the measure of the stature which belongs to the fulness of Christ. As a result, we are no longer to be children, tossed here and there by waves, and carried about by every wind of doctrine, by the trickery of men, by craftiness in deceitful scheming; but speaking the truth in love, we are to grow up in all aspects into Him, who is the head, even Christ, from whom the whole body, being fitted and held together by that which every joint supplies, according to the proper working of each individual part, causes the growth of the body for the building up of itself in love (Ephesians 4:7-16).
If I understand Paul’s teaching correctly, spiritual gifts are needed as long as we are living on this earth as members of the body of Christ.
It is only when our Lord returns, when the church is taken up into glory and fully perfected, that the need for spiritual gifts will cease. While some may differ as to whether all the gifts are necessary in this age, it is very difficult to understand how none of the gifts are needed. Paul’s teaching assumes that teaching about spiritual gifts is both basic and fundamental to Christian living. Spiritual gifts are those endowments of power which enable us to carry out the vital functions of our body life in Christ as members of His body. These endowments are a supernatural enablement so that supernatural results are produced.
Let us therefore approach our text with a deep sense of the importance of this teaching on spiritual gifts, observing closely so that we might learn well. May we then be obedient to that truth which we learn, by His grace and to His glory.
In chapters 1-11, Paul laid the doctrinal foundation for the lifestyle he now calls upon all Christians to adopt and to manifest in day to day living. In verses 1 and 2 of chapter 12, Paul has characterized the lifestyle which God’s mercies motivate and which God’s grace enables. The Christian is expected to respond, motivated by the mercies of God. Grace should beget gratitude, and it is on the basis of gratitude that Paul bases his appeal to Christians. Paul calls for a lifestyle characterized by worship, worship expressed in self-sacrificial service. This service must first and foremost be to God, expressed through service to others. Our service of worship should be the logical outflow of God’s Word and His work in our lives. It is a reasoned worship, not at all like the frenzied, sensual, self-indulgent worship of the heathen. To practice this kind of worship, we must cease being shaped by the world around us, and have our minds renewed and transformed so that we look at all things from a divine perspective.
The verses which follow spell out the exercise of this renewed mind in greater detail. Paul outlines in verses 3-8 the Christian way of thinking concerning spiritual gifts. In verses 9-21, Paul describes the Christian’s relationship with others as the outworking of love. This new mind relates differently to human government, realizing that it has been given divine authority (13:1-7). The new mind relates to others out of the obligations required by true Christian love (13:8-14), realizing that strength is given by God to minister to those who are weak (14:1–15:6). It requires Jews and Gentiles to relate in an entirely different way than they have done before (15:7-13).
Our text divides into three main sections. In verse 3, Paul introduces the subject of spiritual gifts with a call to clear thinking. In verses 4 and 5, Paul calls Christians to think corporately. Spiritual gifts must be understood and practiced in the context of the body of Christ. Paul illustrates Christian thinking in verses 6-8 by focusing on the attitudes and actions appropriate to specific spiritual gifts. We can therefore outline the structure of our text:
(1) A call to straight thinking about spiritual gifts — verse 3
(2) A call to corporate thinking about spiritual gifts — verses 4-5
(3) A call for practice consistent with spiritual gifts — verses 6-8
3 For through the grace given to me I say to every man among you not to think more highly of himself than he ought to think; but to think so as to have sound judgment, as God has allotted to each a measure of faith.
Paul begins to address the subject of spiritual gifts by first telling his readers that in the process of teaching on this subject, he also is exercising his own spiritual gift. He speaks through the “grace” given to him, that “grace” to which he referred at the beginning of this Epistle:
Through whom we have received grace and apostleship to bring about the obedience of faith among all the Gentiles, for His name’s sake (Romans 1:5).
Paul exercises his spiritual gift of apostleship (and perhaps other gifts as well) as he writes these words of instruction and exhortation. Having been prevented from being physically present with these saints for the time being did not keep Paul from exercising his gift “by mail.” This he did not only to the profit of the Roman saints, but to all those who have been blessed by this Epistle down through history. This Epistle to the Romans is an illustration and evidence of the gifts God gave to Paul for our edification. His teaching is not addressed to any one individual, nor to some small group, but rather to “every man among you.” Paul’s teaching here is universal. He has already informed us that we must not be “conformed to this world” (12:2); instead, Paul calls for sound thinking and judgment.
Thinking too highly of ourselves may be illustrated in the matter of spiritual gifts. First, we may think too highly of ourselves because of the gifts God has given to us. Spiritual gifts are gifts of grace. Second, our response to being given a less prominent gift may reveal an inflated estimation of ourselves. Consider these words of Paul recorded in 1 Corinthians:
For the body is not one member, but many. If the foot should say, “Because I am not a hand, I am not a part of the body,” it is not for this reason any the less a part of the body. And if the ear should say, “Because I am not an eye, I am not a part of the body,” it is not for this reason any the less a part of the body (1 Corinthians 12:14-16).
At first I was inclined to think that the “foot” and the “ear” did not regard themselves highly enough, but this is not what Paul is saying. The “foot” does not say, “Because I am a foot, I am not a part of the body.” He says, “Because I am not a hand, I am not a part of the body.” The “foot” does not think too little of himself; he thinks too much of himself. He (wrongly) thinks that being a “hand” is more important (prestigious?) than being a “foot.” If he cannot be a “hand,” the “foot” refuses to function as a part of the body at all. The “foot” thinks he is better than the gift he has been given. He thinks too highly of himself. There is no sacrificial service of worship here but only self-seeking ambition. The “foot” needs not more self-esteem but more humility and gratitude. The “foot” needs to “die” to himself and to fleshly desires and ambitions.
Whenever our ego is involved in our thinking, our thinking becomes distorted. Because of our natural self-love, we will always think too highly of ourselves. Paul calls for sound thinking which is based upon humility and faith. This statement needs further consideration, because most of us do not really believe it. We tend to think “rational thinking” is that which the natural man does. We conclude that thinking on the basis of faith must therefore be unreasonable—that thinking by faith must involve setting aside the rational mind and acting apart from rationality, apart from sound judgment. Thinking in accordance with faith is thus thought to be at odds with sound judgment. But Paul tells us the opposite: faith is the basis of sound thinking.
How can this be so? Let us consider this matter further. To the unbeliever, faith is mere foolishness; it is believing what is not true. To the Christian, faith is believing what is not seen but is true nonetheless:
Now faith is the assurance of things hoped for, the conviction of things not seen. For by it the men of old gained approval. By faith we understand that the worlds were prepared by the word of God, so that what is seen was not made out of things which are visible (Hebrews 11:1-3).
The foundation for mere human thinking is that which is seen or that which appears to be. The foundation for Christian thinking is the Word of God—that which is revealed and which is believed by faith. Sound thinking is thinking based upon the revelation of God, contained and communicated by His Word, and illuminated by His Spirit. Sound thinking is based upon those truths which God has revealed to us, which are unseen, but true.
Abraham was thinking soundly when he chose to obey God, even if it involved the sacrifice of his son, Isaac (see Genesis 22). Abraham acted out of faith when he sought to obey God’s command, even though it was the most difficult test of his life. His faith was a reasoned faith, based on “sound judgment.” He had come to realize that God is able to give life to the dead. This is what God had done to enable Abraham and Sarah to have a child, even though they were “as good as dead” with regard to child-bearing (see Romans 4:16-21; Hebrews 11:17-19). Abraham’s obedience was based on sound judgment, and his sound judgment was based upon that which God had revealed which he believed by faith.
But why is faith necessary in relation to spiritual gifts? Why does Paul tell us that we are to think “so as to have sound judgment, as God has allotted to each a measure of faith”? There are several reasons.
The prophets of old faithfully ministered, and yet most of them appeared to fail in their own lifetime. They did not see many repent and turn to the Lord. They were rejected, persecuted, and even put to death. The results were not immediately evident. Even the prophecies they gave concerning the Messiah were perplexing to them (see 1 Peter 1:10-12). Yet they faithfully persevered with no evidence of success. They served by faith, knowing that God’s Word would not return unto Him void (Isaiah 55:11).
Because the exercise of a spiritual gift may be unseen, faith is required. Most often the ministry of spiritual gifts is described in terms of the function of the human body. In the human body some members are visible and prominent such as the hands and the eyes. But there are other unseen members like the heart and lungs. These unseen members are the “vital” organs. Likewise, the vital members of the body of Christ may very well be unseen; thus faith is necessary.
The analogy of the body should be pursued even further. The work of God is carried out through the body of Christ, the church. God’s work is achieved corporately, as a team, and not just by individuals working independently of others. The hand cannot function alone nor can any other member of the body. God’s work is not achieved directly by any one member but by the body as a whole. The function then of any given member of the body may seem insignificant, even unspiritual, unless viewed in the light of the function of the body as a whole. The one who has the gift of helps may not seem to be doing much in the way of evangelism, but if they are serving in a way that edifies the body of Christ, they have a part in the ministry of the body as a whole. Faith enables us to understand this proper functioning of the body of Christ.
We have at this time a large number of troops in the Middle East. The one peeling potatoes, hauling water, or building outdoor toilets may not seem to be doing much for the cause of world peace. But apart from these vital functions being done, no army could survive, much less win, a military conflict. Each army member has a vital role. Each member of the body of Christ plays a part in the work of the body, as a whole. This the Christian believes by faith.
Only a renewed, transformed mind can think of spiritual gifts as Paul has exhorted here. Our culture would convince us to do the opposite of what Paul teaches. Paul warns us not to “think of [ourselves] more highly than we ought to think.” The world tells us we do not have a good enough estimate of our own worth. In the secular way of thinking, we need to think more highly of ourselves. Many tell us there are no limits placed on our abilities except those we impose on ourselves. The solution, we are told, is to believe that within us (not apart from ourselves, enabled by the Holy Spirit) there is unlimited potential for success and achievement. We are told that if we but think more positively, more highly of ourselves, then success is guaranteed—the higher our thoughts and goals, the higher our performance.
The world looks inward to what is within man and finds unlimited potential. The world believes we cannot think too much of ourselves; Paul teaches otherwise. The Bible instructs us to look Godward, to look to the Holy Spirit and His enablement, and to live our lives in a way that will sacrificially serve God and men.
Thus we are challenged to consider the subject of spiritual gifts with our minds thinking clearly and straight. This is to be accomplished by means of true humility, recognizing that all that we have and are, all that we will ever accomplish, is by the grace of God, and not of ourselves. We will think in accordance with reality, and in accordance with the faith we have been granted.
4 For just as we have many members in one body and all the members do not have the same function, 5 so we, who are many, are one body in Christ, and individually members one of another.
We live in a very independent, self-centered age. In many ways, we are teetering on the brink of anarchy. The winning governor-elect in a recent state election boasted that people could live without government looking over their shoulder, restricting or condemning individual freedom and choices, choices which clearly included abortion and homosexuality. Marriage is being redefined, because neither the husband nor the wife wish to give up their independence. “Self” is the watchword of our culture. The public good seems to be eagerly sacrificed to individual freedom. Being independent and self-sufficient is viewed as the goal for many. The new evil of our day, from which people need desperately to be delivered, is “co-dependency.”
Paul teaches that Christians must think quite differently. The watchword of our text could be “inter-dependency.” Spiritual gifts are God’s means for sustaining His body, the church. Spiritual gifts mean that I am both weak and strong. I am strong in the area of my gift; I am weak in the areas where others have been gifted. Thus, I must minister to the body of Christ and others out of my strength, and I am dependent upon the ministry of the rest of the body in my areas of weakness.
We cannot look at ourselves as an island, independent of all others. For the proper functioning of spiritual gifts, we must cease thinking individualistically and begin to think corporately.
While we have been individually chosen, called, and justified, we have been joined to a body, the body of Christ. We must therefore think and act as members of this body. Spiritual gifts are one of the means by which the body of Christ is sustained and through which the life of our Lord is manifested. Thinking straight necessitates thinking corporately.
6 And since we have gifts that differ according to the grace given to us, let each exercise them accordingly: if prophecy, according to the proportion of his faith; 7 if service, in his serving; or he who teaches, in his teaching; 8 or he who exhorts, in his exhortation; he who gives, with liberality; he who leads, with diligence; he who shows mercy, with cheerfulness.
Paul’s expressions, “according to the grace given to us” in verse 6 and “as God has allotted to each a measure of faith” in verse 3, point to an important truth to consider before pressing on in our study. Spiritual gifts have nothing to do with ambition. The spiritual gifts mentioned here and elsewhere are not a shopping list from which we make a choice and then seek to gain that gift. The gift(s) we have received have been sovereignly bestowed by God. We already possess the gifts. God gives to us not only the gift (the grace), but also the faith by which it is to be exercised. When we belittle the gift we have been given, we quibble with and question the sovereign will of God which determined the gift given to us, along with the place of ministry and measure of success (see 1 Corinthians 12:4-6).
Each of the gifts given to us, and to the rest of the body, is given in such a way as to provide all that the body of Christ needs to function properly. Given these different allocations of grace and faith, each of us must exercise our gifts in a certain way if we are to please God and be consistent with His purposes. If verses 3-5 emphasize proper thinking about spiritual gifts, verses 6-8 stress those attitudes and actions vital to the proper exercise of these gifts. Verses 6-8 emphasize what we are to do and how we are to do it in the context of spiritual gifts.
The structure of verses 6-8 seems to be indicated by Paul. This is somewhat evident in the English translations and more clearly evident in the Greek text. Allow me to arrange Paul’s words according to the structure I think he intends us to recognize:
And since we have gifts that differ according to the grace given to us,
let each exercise them accordingly: if prophecy — according to the proportion of his faith; if service — in his serving; or he who teaches — in his teaching; or he who exhorts — in his exhortation; he who gives — with liberality; he who leads — with diligence; he who shows mercy — with cheerfulness.
Paul’s words in the first half of verse 6 tie what follows with what he has just said in verses 3-5. The last words of verse 6 seem to distinguish two major categories of gift: (1) spoken gifts (prophecy) and (2) serving gifts (service). This same distinction is found in 1 Peter 4:
As each one has received a special gift, employ it in serving one another, as good stewards of the manifold grace of God. Whoever speaks, let him speak, as it were, the utterances of God; whoever serves, let him do so as by the strength which God supplies; so that in all things God may be glorified through Jesus Christ, to whom belongs the glory and dominion forever and ever. Amen (1 Peter 4:10-11).
The spoken gifts as a group are given one major word of exhortation, one fundamental guideline: “Keep within the boundaries of the revealed Word of God.” The New American Standard Bible and many other translations seem to stress the need to stay within the boundaries of the faith God has allotted us. This is certainly consistent with Paul’s words in verse 3, and the rendering is also consistent with the lexical definition of the term employed; but why should Paul need to repeat this again? There is, however, a second meaning, one that seems more appropriate. This meaning is “in agreement with,” rather than “according to.” The first, more restrictive meaning is totally consistent with the second, more general meaning. I think Paul is cautioning all who speak to do so in a way completely consistent with Scripture. Paul seems to be saying the same thing to the Corinthians when he writes,
Now these things, brethren, I have figuratively applied to myself and Apollos for your sakes, that in us you might learn not to exceed what is written, in order that no one of you might become arrogant in behalf of one against the other (1 Corinthians 4:6).
Notice that in Paul’s words to the Corinthians the danger of going beyond “what is written” is arrogance, the very thing Paul is warning us about here in Romans 12.
Those who serve are given the exhortation to be diligent in their service. If those who speak are in danger of wandering beyond the prescribed boundaries of God’s Word, those who serve are in danger of wandering outside the context of the service they have been given. Servants are tempted to critique and correct their fellow servants when their God-given calling is to perform their own service (Romans 14:4).
Having given a general exhortation to everyone whose gifts fall under one or the other of the categories he has used, Paul now gives more specific exhortation. He first addresses those in the category of the speaking gifts in verses 7b and 8a, specifying the gifts of teaching and exhorting. He then turns to those who serve in verse 8b, specifying the gifts of giving, leading, and showing mercy.
Those with gifts of service have already been urged, as a group, to diligently devote themselves to their areas of service (verse 7a). In verses 7b and 8a, Paul urges those who have the speaking gifts to likewise devote themselves to doing that which they have been gifted to do. The one with the gift of teaching should devote himself to teaching; the one with the gift of exhortation, to exhortation.50 But why would the teacher need to be exhorted to teach and the exhortor to exhort? Is this not their natural tendency?
Those who fall under the speaking gifts category have been urged to stick with it, as those who serve have also been exhorted. Now Paul turns to those in the category of serving gifts to encourage them to exercise their gifts and ministries with spiritual attitudes and motivations which are befitting and edifying.
The one who has the gift of giving is encouraged to give “generously” or, as the marginal note in the NASB indicates, with “simplicity.” I think it is this second sense which is prominent in Paul’s words. Those who give may be tempted to give in a way that “works both ends against the middle.” Giving, in other words, might be done in a way that appears to be generous and sacrificial but which is actually self-serving. Ananias and Sapphira (see Acts 5:1-11) seem to have given with multiple motives and thus became deceptive and dishonest. They were not as generous as they wished to appear. One’s giving should be done for the benefit of the recipient, not the gain of the donor. The emphasis of “giving in order to get” appears to be in contradiction to Paul’s teaching here. Giving, as with the exercise of all other spiritual gifts, is to be a self-sacrificing act of worship and service (see 12:1).
The one who leads is to do so “with diligence.” Because spiritual leadership may not enhance and promote the leader (spiritual leadership is marked by servanthood, not lording it over others), he may be tempted to back off of spiritual leadership. Because the rewards of spiritual leadership come from God and not men, and they come at His return and not immediately, Christian leaders may be tempted to exercise their leadership in more “fulfilling” and “self-serving” causes. They may be tempted to go about their tasks casually and half-heartedly. This is not the manner of exercising the gift of leadership in which God takes pleasure.
The one who shows mercy is to do so “with cheerfulness.” All of us have attempted to show mercy at various times. Showing mercy is acting graciously toward those who need mercy. Often, such people are not pleasant to be around. All too often, such people are not even grateful for the mercy they are shown. It may not take long for the one showing mercy to be tempted to have a cynical, sour-grapes attitude. The gift of showing mercy (and every other gift as well) must be exercised in a gracious way so that God’s grace is neither distorted nor disfigured by our service. Spiritual gifts are gifts of God’s grace, and they are to manifest God’s grace to those whom we serve.
Paul sees two great dangers in the exercise of spiritual gifts. The first is in not devoting ourselves to doing that which we are gifted to do.51 The second is exercising our gifts in a way inconsistent with the grace of God which is to motivate them and be manifested by them. We are therefore challenged to devote ourselves to the function for which God has gifted us and to the ministry to which He has called us. And we are to do so in a manner pleasing to Him and consistent with the goal of the task in the overall plan and purpose of God.
Paul’s words raise some important questions I call to your attention, for they require answers which only you can give. I will conclude by raising the questions, and I urge you not to leave this text without arriving at some answers.
Paul is speaking to believers about the spiritual gifts God has bestowed upon each of those who have become His children, by faith.
First, have you received God’s gift of eternal life? Have you been born again? If you wish to sacrificially serve God by serving others, then spiritual gifts are the means God has provided for you to do so. Second, is your pursuit and interest in spiritual gifts one of personal ambition motivated by self-interest? Or do you, out of gratitude, wish to offer up your body to God in sacrificial service to others? Paul teaches that every believer has a special enablement, a spiritual gift, by which to serve God. Third, have you discovered the special abilities God has given to you and the place of service where these can be employed for His glory? Spiritual gifts are not given so that we may set ourselves above or apart from the rest of the body. Spiritual gifts are to be employed by serving the body, and they also cause us to be dependent upon the body for those areas in which we are not strong (gifted). Fourth, how closely are you linked to a local church and to the broader body of Christ, so that your gift may benefit others and so that you may draw from the strengths and gifts of others?
Using the analogy of the Book of Nehemiah, I ask you very practically, “What is your piece of the wall?” What are you contributing to your local body and to the body of Christ at large? What are you doing in obedience to this passage to fulfill your responsibilities to the body of Christ?
Allow me to assume that you cannot satisfactorily answer my question, and that you are uncertain about what your spiritual gift is and the ministry where your gift can be employed.
If you do not know your spiritual gift and ministry, God is not hiding it from you, if you are seeking to be obedient to Him. Spiritual gifts are not intended to be a mystery. The teaching of spiritual gifts is both fundamental and elementary. (1) Offer yourself to God as a living sacrifice, out of gratitude for His mercies to you. This is the starting place Paul specifies in Romans 12:1-2. It should also be our starting place. Give yourself to serve Him sacrificially, selflessly, through serving others. (2) Study the Scriptures, which not only name the spiritual gifts but also describe their function. The gift of exhortation, for example, is illustrated by the life of Barnabas, the “son of encouragement.” (3) Be obedient to the commands of Scripture. We are commanded to give (verse 13). Pray for wisdom and insight as to how you may give in a way that pleases God. (4) Look for needs, and seek to meet them. Look for those who are weaker than you, and serve them from your strength.
I am convinced that the matter of spiritual gifts is not as mysterious as some suggest and as it might seem at first.
If you have first given yourself to God, and you are seeking to obey Him in the strength He supplies, you will know what He has given you to do, and you will have the faith and the grace necessary to do it.
47 When I speak of the church here, I am referring not only to the local church, but to the broader body of Christ, of which the local church is but a small part. Each individual believer is thus conceived of as a member of the church universal and as a member of a local congregation of believers. Our ministry should not be restricted only to the local church. Paul’s ministry, for example, was much broader, although he did minister to the local church.
48 Two serious errors are the root of pride over the possession of our spiritual gifts. First, we may believe we deserve the credit for what God has given us and what He is doing in and through us. We dare not take credit for grace, neither saving grace nor serving grace. Second, the most visible and prominent gifts are not necessarily the most important gifts (see 1 Corinthians 12:22-24). Our vital organs are not visible. So too the vital members of the body of Christ may be the least visible.
49 Even so-called “self-hate” is really “self-love.” We “hate” ourselves because we fail to live up to that which we think we are worthy of and deserve. We hate ourselves for failing to live up to that which our self-love desires and demands.
50 This is not intended to mean that the teacher only teaches and the exhortor only exhorts. All of us are to give, to teach, to encourage, to show mercy, and so on. But the one with the gift of teaching ought to make teaching a priority. One should do most what God has enabled him or her to do best. This is good stewardship (see again 1 Peter 4:10-11).
51 Could this also be because we do not gratefully accept the gift God has given to us, but stubbornly seek to do that which we think is more important, more spiritual, more fulfilling, and self-serving?
Noise is defined as unwanted and annoying sound. It is measured in decibels (dB). Acceptable noise standards are 80 dB for 8 hours per day, 90 dB for 4 hours per day, or 100 dB for 1 hour per day; permissible noise thus depends upon the time of exposure. The greater the intensity, the shorter the permissible exposure. When the intensity exceeds the permissible level, it causes noise pollution, which affects us both physiologically and neurologically; in extreme cases, hearing is impaired either temporarily or permanently. Average noise in city life is 70 dB. A motor truck creates noise of about 80 dB. An intensity of 120 dB is the painful limit for the ear.
Government offices, major installations and displaced people have been settled in the new town. But large numbers of the displaced population say that they are still waiting for the compensation they have been told about.
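The exposure limits quoted above amount to a simple lookup rule: each sound level has a maximum permissible daily duration. A minimal sketch of that rule follows; the table hard-codes only the figures quoted in this passage, and treating intermediate levels with the next lower listed limit is an illustrative assumption, not part of any regulatory standard.

```python
# Permissible daily exposure times quoted in the passage:
# 80 dB for 8 h, 90 dB for 4 h, 100 dB for 1 h.
PERMISSIBLE_EXPOSURE = {80: 8, 90: 4, 100: 1}

def is_noise_pollution(level_db, hours):
    """Return True if the exposure exceeds the quoted permissible limit.

    Levels between table entries fall back to the next LOWER listed
    level's limit -- a simplifying assumption; real occupational
    standards use an exchange-rate formula instead.
    """
    applicable = [db for db in PERMISSIBLE_EXPOSURE if db <= level_db]
    if not applicable:        # quieter than 80 dB: no limit listed
        return False
    limit = PERMISSIBLE_EXPOSURE[max(applicable)]
    return hours > limit
```

For example, two hours at 100 dB exceeds the quoted one-hour limit, while ordinary city noise of about 70 dB carries no listed limit at all.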
Environmentalists' Protest: The Tehri dam is a gigantic multipurpose project, and it has been surrounded by controversy since its very inception. The Tehri Bandh Virodhi Sangharsh Samiti was formed as soon as project implementation started and people were threatened with displacement. The Chipko Movement, led by Shri Sundarlal Bahuguna, started a campaign to save the Ganga and the Himalaya. Chipko activists intensified their struggle to save the Ganga from the hazards of the Tehri Dam after a disastrous earthquake hit the Garhwal region on October 20, 1991. Protestors were arrested on 27th February 1992, and subsequently Shri Sundarlal Bahuguna went on a 45-day fast. The main reasons why the citizens of Tehri Garhwal, several scientists and ecologists have opposed the Tehri dam are the following:
(i) The uprooting of more than one lakh people from their homes in Tehri town and surrounding villages. Only a very small percentage of the population to be displaced has been resettled so far.
(ii) The high risk of dam failure, whether by an earthquake of higher intensity than what the design provides for, or by other factors such as overtopping and erosion of hillsides into the reservoir. In the case of such an event, there is an acute threat to dense urban and rural habitations in the downstream areas, including the culturally important towns of Dev Prayag, Rishikesh and Hardwar.
(iii) The threat of reservoir-induced seismicity (RIS), after the creation of the huge new artificial reservoir, to the people living around it, a threat that arises from the height of the dam and other factors favourable to RIS found at and around the dam site.
(iv) The threat of rapid siltation of the reservoir due to the high erosion in the catchment areas; some experts assess the present life of the reservoir at only 60 years.
(v) Financial waste and corruption in the project; the Comptroller and Auditor General has raised disturbing questions about it.
The collapse of the Soviet Union has also put the project in financial crisis, since the project was being financed with Soviet aid.
(vi) Adverse impacts on fisheries, on other fauna and flora, and various other adverse effects.
It is important to stress that each of these adverse effects has been sufficiently confirmed and stressed by experts and by committees appointed by the government from time to time to inquire into the project due to its controversial nature. The opposition to the project is not based just on the understanding of social activists; it has also been raised by government-appointed experts.
The Reasons An Ex Would Try And Make You Jealous
Perhaps it’s because they want to make you suffer… (maybe. . . revenge??)
Or maybe they want something from you…
They may be wanting to get back with you… If that’s what you really want, simply find out with either of these posts: how to get a second chance with my ex boyfriend… or… how to get my ex back when she has moved on…
Regardless, whatever their intentions are, they’re not trying to be very nice.
Why?
Well, breakups are not normally cordial events; there is usually at least one partner who is unhappy about it.
Sometimes even the one who did the breaking up becomes unhappy as well because they realize they’ve made a big mistake and don’t know how to fix it.
Some try to make their ex jealous as a way to rescue their breakup and get their ex back again. Others try to make their ex jealous for entirely different reasons…
Do they want to ‘remain friends’ with you? Do you want to remain friends with them?
Just bear in mind that the idea that “we can be friends” is largely a myth; when your ex wants to be friends, it’s not always for the reasons you think. . .
How would you feel if the person you are dating wanted to meet for drinks with an old flame? Or they were calling their ex to tell them about their day?
“Your ex is an ex for a reason. . . it didn’t work out!”
Forget being friends.
That doesn’t mean you have to ignore them completely, or be impolite when you bump into each other. But, being polite and neighborly doesn’t mean you have to be friends. (And if you want to be more than “just friends” there’s a couple of links at the end to other articles that will help get your ex back…)
Now, why are they trying to make you jealous?
When an ex is trying to make you jealous, it’s often just to get a reaction from you. They want to see if you still care. . . or not.
If you are dating someone else, your ex may simply be upset by this, and trying to bolster their own ego by making you jealous.
They’re either trying to hurt you in order to feel better about themselves, or trying to let you see that you’re missing out on ‘something’ by not being with them anymore.
They’re trying to show you how desirable they are.
If they have a new relationship, then they are most likely trying to hurt you for some other reason. Perhaps it was you they blame for the breakup.
A more sinister reason could be that they’re trying some ‘control freaky’ stuff on you. Was your ex a controller? If so, they no longer have any influence over your life so they’re trying to affect you this way, an ego thing as much as a control thing.
But, just maybe, their intention is not to make you jealous at all, but simply trying to get your attention. They may really miss you and want you back. We know it’s not the best way of doing it, but does your ex?
Are you just being annoyed by their jealousy antics or are you actually feeling jealous?
If your ex is managing to make you jealous, then you’ll want to ask yourself why.
Do you still have feelings for them?
If so, do you want to reverse your breakup and get back with them?
Or, in spite of your feelings, do you really just want to move on?
Ask yourself this — Why does it matter?
Everyone knows about the way partners act after a breakup, so you don’t need to worry about it as long as you haven’t done anything wrong.
So why does it bother you?
Why we get feelings of jealousy isn’t always obvious to us and why someone is trying to make us jealous, whether they succeed or not, can be equally obscure.
The point is, there is never any good reason for us to take any notice of this at all. Simply shrug your shoulders, walk away and get on with your life… Leave them looking, and feeling, very stupid.
Buying into the emotional side of relationships
There are only a few reasons why your ex would try to make you jealous, but you might find it useful to understand a little more about jealousy itself. That way you might be able to handle your situation in the best way… or even use it to your advantage.
Feelings, Jealousy and our Brain
Relationships are emotional connections between people. Whenever we are responding to our feelings, we normally let go completely of our ability to use logic. Feelings and logic rely on different parts of our brain to function, and one will usually dominate the other.
For instance, sales people don’t use logic to sell to us, they try and make an emotional connection between us and whatever they are selling. If they manage it, logic disappears completely from our buying perspective.
Buying something is, after all, an emotional experience.
This lack of logic can make our reactions in a relationship seem questionable.
The things we see as obvious and completely understandable can leave others confused; they simply do not understand our reasoning.
Emotional responses are usually hard to make any sense of, and…
… Jealousy is an emotional response.
Jealousy is not logical, and for the most part, it isn’t reasonable either. It can rear its ugly head when we least expect it.
But that doesn’t mean that jealousy is completely unpredictable. Some people seem to be born with the ability to use it at will.
In that situation, it’s a form of manipulation.
Sometimes it’s used to try and break up a relationship.
Sometimes it’s used to try and put a broken relationship back together.
I hope this has helped answer your question, “why does my ex try to make me jealous?” But don’t leave it there… if you want to rekindle your relationship you won’t get your ex back by waiting for them to make the decisions. . .
. . . Getting your ex back is the easy bit! . . . . . . Watch this video and learn about the psychological techniques you MUST use . . .
. . .
==== OR ====
. . . Getting your ex back. . . the bottom line . . .
Pharmacokinetics of Ertapenem in Continuous Venovenous Hemodialysis
This study has been completed.
Sponsor:
University of Michigan
Information provided by (Responsible Party):
Bruce A. Mueller, University of Michigan
ClinicalTrials.gov Identifier:
NCT00877370
First received: April 3, 2009
Last updated: August 29, 2012
Last verified: August 2012
Purpose
Critically ill patients in the intensive care unit often receive continuous hemodialysis to treat their kidney failure. Ertapenem is an antibiotic often used in these patients. Continuous dialysis may remove ertapenem, putting patients at risk for inappropriate treatment of their infection. This study will determine how much ertapenem is removed by continuous hemodialysis.
Study Type: Interventional
Study Design: Intervention Model: Single Group Assignment
Masking: Open Label
Official Title: Pharmacokinetics of Invanz® (Ertapenem) in Critically Ill Patients Receiving Continuous Venovenous Hemodialysis
Resource links provided by NLM:
U.S. FDA Resources
Further study details as provided by University of Michigan:
Primary Outcome Measures:
Ertapenem Transmembrane Clearance by Continuous Hemodialysis. [ Time Frame: 24 hours after receiving first 1 gram dose ]
Enrollment: 8
Study Start Date: February 2009
Study Completion Date: March 2011
Primary Completion Date: March 2011 (Final data collection date for primary outcome measure)
Arms Assigned Interventions
Experimental: ertapenem
subjects will receive ertapenem while receiving CVVHD
Drug: ertapenem
One gram ertapenem will be infused intravenously in subjects receiving continuous hemodialysis (CVVHD). Pharmacokinetic sampling in this study will occur with the first dose of ertapenem. While on CVVHD, enrolled subjects will receive ertapenem 1 g intravenously administered over 30 minutes. Two blood samples (5 mL each) will be collected from the arterial (pre-diafilter) port of the CVVHD tubing at time 0 (baseline), ½ hour (end of infusion), 1, 1½, 2, 3, 6, 12, and 24 hours. Effluent (5 mL) will also be collected at these predefined time points from the effluent port of the CVVHD tubing. If ertapenem is discontinued after the first dose then additional samples will be collected at 36 and 48 hours, otherwise ertapenem will be administered as soon as the 24 hour sample is obtained.
Other Name: Invanz
Detailed Description:
Subjects receiving CVVHD will receive a one gram dose of ertapenem. Serial blood samples over 24 hours will be taken to assess the ertapenem blood concentrations over time. Spent dialysate and urine samples (if any) will also be measured for ertapenem content to determine how much drug is removed by CVVHD and kidneys. A pharmacokinetic evaluation will be made to determine what is the most appropriate dose for this drug in patients receiving CVVHD to achieve pharmacokinetic and pharmacodynamic goals.
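The pharmacokinetic evaluation described above rests on a standard CRRT relationship: the sieving coefficient is the ratio of effluent to plasma drug concentration, and transmembrane clearance is that ratio multiplied by the effluent flow rate. A minimal sketch of the arithmetic — the formula is conventional, but the concentrations and flow rate below are hypothetical illustrations, not data from this study:

```python
def transmembrane_clearance(c_plasma, c_effluent, effluent_flow_ml_min):
    """Estimate CRRT transmembrane clearance (mL/min).

    Sieving coefficient SC = C_effluent / C_plasma;
    CL_TM = SC * effluent flow rate.
    """
    if c_plasma <= 0:
        raise ValueError("plasma concentration must be positive")
    sieving = c_effluent / c_plasma
    return sieving * effluent_flow_ml_min

# Hypothetical paired samples (mg/L) and a 33 mL/min effluent flow;
# these numbers are illustrative, not measurements from the study.
clearance = transmembrane_clearance(c_plasma=40.0, c_effluent=12.0,
                                    effluent_flow_ml_min=33.0)
print(clearance)
```

In practice the study's serial paired samples would yield one such estimate per time point, and total drug recovered in the spent dialysate gives an independent check on the clearance estimate.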
Contacts and Locations
Choosing to participate in a study is an important personal decision. Talk with your doctor and family members or friends about deciding to join a study. To learn more about this study, you or your doctor may contact the study research staff using the Contacts provided below. For general information, see Learn About Clinical Studies.
Please refer to this study by its ClinicalTrials.gov identifier: NCT00877370
Locations
United States, Michigan University of Michigan University Hospital Ann Arbor, Michigan, United States, 48109
Sponsors and Collaborators
University of Michigan
Investigators
Principal Investigator: Bruce A Mueller, Pharm.D. University of Michigan
John Thompson recently wrote about autonomy and accountability on Alexander Russo’s This Week in Education blog. These lines struck me: NYC’s leaders knew all along that accountability couldn’t just be a punitive regime. There also had to be high levels of psychological safety for adults to learn — a high level of trust. And: …engineering high … Continue reading
Education reform, both north of the border where I am in Canada, and south of our border in the United States is a very serious business. Everywhere I see the discussion taking place, I see heated conversations and a lot of rhetoric being passed both ways, between those who appear to be driving a very … Continue reading
I sometimes pretend and say ‘Yay!’ – my 7 y/o daughter explaining how she learned to ‘comply’ in her school Incentive schemes, which are supposed to encourage desired behavior through the use of rewards, or discourage undesirable ones through punishments, are all too familiar to all of us. Probably similarly familiar are the findings that … Continue reading
I recently posted my thoughts on Schooling the World, an important film that takes a look at the value of bringing Western-style education to sustainable indigenous cultures and beyond. I personally — and as I mentioned in my post, it seems Sir Ken Robinson too — believe the film raises many important questions which are … Continue reading
|Kelly Tenkely| Today’s #edchat topic for discussion on Twitter was: In a time of cut backs in education for the sake of the economy, should sports and extra curricular clubs take a back seat? Those “extras” we are referring to: the arts and physical activities (sports). For me, this #edchat topic succinctly summarizes what is … Continue reading
David suggested I write about charter schools. The best I can offer right now is an ambivalent primer. While I defend charter schools from attack on principle, I don’t promote them as any kind of one-size-fits-all fix for public education. Rather, the value of charter schools lies in their diversity and flexibility to address the … Continue reading
Is scaling up a matter of philosophy or ritual? Of belief or benediction? Of action or intercession? Does #edreform really have one edge instead of many sides? Many sides that make one edge? Are we on a loop, as well as in a box? Just because both close, is there really no way out? Is … Continue reading
A few weeks ago, I ran across Educators for Excellence (@Ed4Excellence) and their online campaign to give teachers “an independent voice in the debate surrounding education reform.” This morning I read “Klein Meets With Hired Thugs” on NYC Educator. NYC Educator takes on Educators for Excellence over their funding: [A Gotham Schools article’s] “clarification” explained … Continue reading
Technology is neither good nor evil. We are. It doesn’t heal or hurt. We do. It doesn’t connect or sever. We do. It doesn’t teach or learn. We do. We are impatient for change. Technology changes quickly. Therefore, technology is an attractive panacea to the problems of teaching and learning. Let me note that I … Continue reading
The Dark Side Let me be frank. I don’t get excited about standards. I have colleagues that I esteem who think they are vital to a strong educational system and I respect their opinion. And to an extent I get it. Here’s my problem with them. They’re boring, they’re not radically different from anything else … Continue reading
** Update – if you want to know how to network well, Mary has a great guide **
One of the most popular books about the social media powered digital revolution is
Groundswell, by Charlene Li and Josh Bernoff. Published in 2008, it took a private sector view of the benefits of listening to customers and engaging with them in online spaces. It’s a worthwhile read.
The two authors have subsequently published new books, though not together. What I find interesting is the fact that the follow ups (Li’s
Open Leadership, and Bernoff’s Empowered) both took on the next logical step – how do you fix your organisation’s culture to make the most of the lessons of Groundswell? Again, both are a good read.
Both Li and Bernoff come to similar conclusions: an enlightened form of management is required, one which assumes competence in staff and provides them with access to the tools to do their jobs. More than anything staff need to have confidence that they are trusted by management to do their jobs.
It’s intriguing the way that both authors end up at a similar conclusion via slightly differing routes – Li focuses on leadership while Bernoff really puts staff at the centre of his book. The end result is pretty much the same, but the two books do complement one another quite nicely, and confirms my view that just a top-down or a bottom-up approach isn’t enough to change culture – you need both, in tandem.
This links in nicely with another train of thought I’ve had recently around the changing nature of work and professionalism, particularly in relation to public services. The way people work is definitely changing – both as a result of technology plus wider changes in society.
What effect does this have on the general role of the public servant? Does the traditional skill set still equip people with the abilities they need to both do their jobs well, and enhance their careers?
I won’t bore you with my own backstory, but when I worked within local government it involved changing jobs regularly, not being afraid to move from authority to authority in search of promotion and new challenges, and putting a lot of after work hours into building relationships with people and being helpful through my blog.
I started making some notes on what the networked public servant looks like. It’s by no means definitive (or indeed correct!) but is a start and I would value feedback on this stuff – including what use it is and how it might be developed.
Be networked – be comfortable meeting new people and cultivating relationships. Be happy to connect with folk online and off. Concentrate on networking with people outside your organisation as well as inside it. Get to know people, what they are good at, and connect them with others.
Be entrepreneurial – have a strong commercial sense of value and opportunity. Be creative with the budgets you have and find new ways of improving them.
Be inspirational – through your actions and words, be able to enthuse and motivate people to go outside their comfort zones.
Be collaborative – understand the value of involving others in what you are doing. Be aware of your own skills and the gaps, and welcome people who can help fill them for you.
Be creative – don’t just look to what other people have done and replicate it, but come up with your own solutions and ideas – and don’t be afraid to share them with others.
Be risky – understand risk and how to manage it. Don’t see risk as an excuse for inactivity but as a challenge to be met head-on.
Be bold – if you are convinced an approach is the right one to take, do so with confidence and encourage others to support you. Don’t be fearful of what others may think.
Be human – don’t be a corporate drone. What makes you different to everyone else? Emphasise it, and make the most of it. Be someone people outside your organisation don’t mind talking to.
Be studious – always be learning and looking out for new things to understand. Never stop looking round the corner to see what the next new thing is going to be.
Be generous – with your knowledge and your time. Having a reputation for helpfulness is a wonderful asset.
Be open – accept when you’re wrong, or when you aren’t sure about something. If you have half an idea, share it, and let others help out and finish it.
Be innovative – always be on the lookout for new, better ways of doing things. Be open to new ideas, no matter where they emerge from. Develop systems and workflows for testing and implementing new ideas to ensure the best ones succeed.
I thought I’d done writing about council newspapers when communities secretary Eric Pickles introduced his new code of conduct which prohibited councils from publishing their own newspapers more than four times a year.
Of course, given that Mr Pickles’ rules on council newspapers are a code of conduct, rather than law, there was always a danger the most ardent supports of council newspapers would carry on regardless. And so it proved in Greenwich.
They produce Greenwich Time there, a weekly publication which is, according to the council, ‘written in a journalistic style, containing a degree of community news, certain lifestyle features and residents’ opinions.’ Residents’ opinions tend to stand a better chance of getting into print if they concur with the council’s view of the world – and the 853 blog is worth checking out for more on Greenwich Time.
As for how journalistic it is, it certainly lacks news values. Take the August 16 edition, which covered the riots in London (a good 10 days after it happened). The intro on the front page was:
“Greenwich councillors have given a firm commitment to stand beside residents and businesses affected in last week’s rioting and looting in Woolwich and Charlton.”
As local councillors, should we expect any less? The previous week’s front page, which hit the streets while the looting was dying down, was dominated by the fact a visit by a children’s author to a library was a sell out. Hold the front page! If ever there was proof that councils make duff newspapers, this is it.
Of course, I’m not saying the local newspapers in the area are perfect, of course they’re not. But in council newspapers, with their guaranteed income from their own departments’ advertising, we don’t get to see the news of the community, we get to see the news the council wants the community to see.
And when you look at Greenwich Council’s distribution points – which include libraries, stations and public buildings – you have a council publication which doesn’t have an unfair playing field so much as an unfair Olympic stadium at its disposal.
Not surprisingly, the decision to carry on publishing has upset the Tory opposition in Greenwich, which has sought to get the matter discussed by the overview and scrutiny committee at Greenwich.
Holdthefrontpage reports that this attempt had failed. No surprise there – it’s proof of the futility of the overview and scrutiny committee, which replaced the traditional talk-before-we-decide committee structure which had served local councils pretty well for hundreds of years.
But it was the reason for refusing to discuss it which caught my eye:
A spokesman for Greenwich Council said: “In initiating a ‘call in’ of the decision on Greenwich Time, councillor Drury requested the council to ‘properly’ explore the production of GT every two weeks.
“This itself would have been in breach of the guidance set out by the new code and was rejected by the Scrutiny Committee.”
In other words, Greenwich Council won’t discuss fortnightly publication at the scrutiny committee because it would breach Pickles’ code, the same code which the council has no problem with breaching to publish their weekly ‘newspaper.’
It’d be funny if it wasn’t so arrogantly breathtaking.
The Bexley Times reports that the council has now been reported to the District Auditor for not sticking to the code. The Auditor’s ruling will, presumably, determine whether Pickles was right in settling for a code of conduct rather than an outright law.
Have you ever left your home, only to turn your car around and head back because you weren’t sure you unplugged your curlers? Well, I mean, bald guys wouldn’t have this to worry about. But, some women do. Like me. I worry that I don’t have things unplugged or turned off. I am a “turner-arounder”. That is a person who turns around and comes back home to double check. I guess you could call it a “Double checker”. Or a “Go back homer”. There are many things to call people like that. But, do we have OCD? Obsessive Compulsive Disorder? I don’t think I am OCD by any means. If I was OCD, I would first have to re-arrange the letters to CDO so they would be in alphabetical order.
I am going to share a few of the things that make me a “Go back homer” or a “Turner-arounder”, or a “Double checker.” I really like all of these phrases. I just don’t know which one to use. I will have to think about this for a few hours, OCD-like…But, read on and see if you can relate. Maybe we have one or two in common.
1. “Did I close the garage door?” - This is really important, because if you left the garage door open, thieves could just walk in and take your…paint cans or wheelbarrow or tool (I am sure we had more than one). Better yet, raccoons could walk in and then fall asleep and then when you come home at night and drive your car into the garage, and shut the door, they would become trapped in your garage and poop all over your car and scratch, “LET ME OUT, YOU JERK” on the side of the car. Or, someone like Ted Bundy would be waiting in the dark, and when I would step out of my car, kill me, well, just because. Then he would leave a note like, “She really should have shut her garage door….Love, Ted.”
2. “Is my toilet running?” - Yeah, that gets me all the time. I always use the bathroom before I leave the house. Isn’t it great how I can share my “pee time stories” with strangers? Well, I have to drive 30 minutes to work and I drink a lot of water. Anywho, I usually wait by the front door until the toilet stops making that “I’m filling back up with water now” noise and then I shut and lock the door. And drive off. “But, wait. Did I wait this time? I can’t remember. Did I go to the bathroom before I left? What if my toilet ran all day? I wonder what the hell my water bill would be?… Shit…I better turn around.”
3. “Are my curlers unplugged?” - This is the worst one, because I never can remember. I know in my mind that I unplug after I put the last curler in my hair. But, did I really unplug this time? My poor family would all pile in the car to go somewhere and we would get halfway down the driveway and I would say, “I am not sure I unplugged my curlers.” It got to the point where as soon as we would get in the car, my husband or kids would ask me. And I would ALWAYS go back. Now that I am divorced, and live by myself, I stare at the plug outlet and say to myself or sometimes out loud if I was really feeling like a loser, “Unplugged.” And I would wrap up my curlers and put them under my bathroom sink. But, my mind is not free. While driving, I would then think, “hmmmmmm, I wonder if I put those curlers away too warm? Could they start a fire?”
4. “Did I leave food for the cat?” - Well, this is important, because if I have a car accident and my head is wrapped in gauze, they won’t be able to hear me saying, “My poor cat has no food.” Therefore, it is imperative to leave her dish full of food and…just in case, the bag nearby. That way, she can knock the bag over when her dish is depleted of food and she can just eat out of the bag until I am released from the hospital. I do have one of those self-feeders, but my cat won’t eat out of it. I guess the food gets stale tasting if it is out too long and she sticks her nose up at it. Well, think about it. Would you eat a piece of toast with butter and jelly after 6 hours of being on the counter? I didn’t think so.
5. “Off, off, off…off..off..off..” - In OCD talk, that means, “Go make sure the oven knobs are all turned off.” I can’t begin to tell you how many times I have turned around to make sure my oven was turned off. And as I touch each knob, I would say those words..”off, off, off, off, off, off.” And then I would stare at the oven, just one last time. Yes, they are off ….for sure. I would even ask the kids to check. I could hear them say, “off, off, off, off, off, off” in that mocking manner. They were probably thinking, “What the hell? We’ve eaten out for the past 2 nights.”
I guess we all have our idiosyncrasies. That word looks weird….I guess we are all weird in some way or another. I forgot to mention that I make sure that the match I use to light a candle stays in a little jar of water for at least an hour before I throw it away. I heard about a match being in a garbage bag and then smouldering and then burning down a house about a year ago..I like to burn my hazelnut cream candle about every day and don’t want to burn my apartment down. So, the match gets to drown to make sure it is not a fire hazard.
So, do I have OCD? Should a “turner-arounder” be labeled as having an obsessive behavior? I really don’t know the answer to that.
I do know that I have to stop writing this blog now because it is bed time. I have to go make sure my alarm clock is set. You never know when the electric will go off and you would then sleep in for work.
I am pretty sure I have it set… Maybe…..shit….maybe not….I will have to check after I make sure the tires on my car are not flat for the drive to work tomorrow.
Where does BLM get the authority to manage public lands?
The Congress of the United States delegated authority to BLM to manage public lands with the passage of the Federal Land Policy and Management Act of 1976 (FLPMA).
What is the purpose of the Resource Management Plan?
As directed in FLPMA, BLM uses Resource Management Plans (RMPs) to provide management direction, adapting to the changing resource and use demands, balanced with compliance with other federal, state, and local laws and policy. The RMP provides direction in managing the many resources and resource uses of public lands.
What lands does the RMP apply to?
This RMP will apply to BLM managed public lands in southern Nye and Clark Counties.
Why are there four alternatives?
The National Environmental Policy Act requires that federal agencies analyze a reasonable range of alternative approaches to completing actions. What this means is that in the RMP, BLM will consider meeting the multiple-use mandate a number of different ways. By doing this, BLM can analyze and compare different ways of doing things before making final decisions. The four alternatives in this Draft RMP are
1. no action (continue as described in 1998 RMP)
2. emphasis on resource protection of public lands
3. preferred alternative, emphasis on balancing uses of public lands
4. emphasis on development of public lands
Does the Preferred Alternative mean the decisions are already made?
No. The plan is still in the draft stage. In developing the range of alternatives, and further analyzing and comparing them, BLM has identified one alternative that is believed to best match the multiple use mandate from Congress. Response from the public, new analysis, and/or suggested solutions will help before decisions are reached. At that point, a Proposed RMP / Final EIS will be published, followed by a Record of Decision.
What if I don’t like any of the alternatives in the Draft RMP, and/or the current preferred?
By commenting on the Draft RMP/EIS you will retain standing in the process. If you then feel like your concern has not been addressed, you can protest the Proposed RMP/Final EIS following the process outlined in Department of the Interior polices.
How do I know you will listen to my comment?
BLM tracks every single comment submitted. As BLM moves through the process of revising the Draft RMP/EIS, we will show how each comment has been addressed.
When did this Resource Management Plan Revision start?
The RMP process started four years ago. In 2010, BLM hosted a series of public meetings throughout Southern Nevada, for the public to identify the resources, uses, and management concerns they had. At that time, 263 people provided more than 500 comments about management of BLM managed public lands in southern Nevada.
When will the RMP be finalized?
The comment period will end March 9, 2015. We will then review and address all comments and prepare a proposed RMP/Final EIS which is expected to be released about a year after the comment period ends.
FAQs from BLM-Washington Office regarding RMPs: http://www.blm.gov/wo/st/en/prog/planning/planning_overview/frequently_asked_questions.html
ERIC Number:ED198195 Record Type:Non-Journal Publication Date:1978 Pages:50 Abstractor:N/A Reference Count:N/A ISBN:N/A ISSN:N/A
Trends in Inequality of Educational Opportunity in Brazil: A Test of Two Competing Theories.
Bills, David B.; And Others
A fundamental worldwide trend of the past few decades has been the remarkable expansion of national systems of formal education. Two competing theories have been adduced to explain how this expansion affects patterns of equality of educational opportunity. The thesis of industrialization holds that educational expansion tends to increase equality of opportunity. Elite mass theories claim that we can expect no necessary trend, and that expansion may easily increase as well as diminish privilege. These competing theories are tested by examining cohort trend data that elucidate the relationship between educational attainment and sex, region, and family background in Brazil. Evidence exists to suggest that educational inequalities based on sex are breaking down. There is little evidence, however, that regional or family background disparities are declining in Brazil. The data provide firmer support for the elite mass model than they do for the thesis of industrialization. (Author/APM)
Descriptors: Developing Nations, Development, Economic Development, Educational Opportunities, Educational Theories, Elementary Secondary Education, Equal Education, Foreign Countries, Human Capital, Industrialization, National Programs, Sex Differences, Socioeconomic Background, Socioeconomic Influences, Socioeconomic Status
Publication Type:Numerical/Quantitative Data; Reports - Research Education Level:N/A Audience:N/A Language:English Sponsor:National Science Foundation, Washington, DC.; Wisconsin Univ., Madison. Authoring Institution:N/A Identifiers - Location:Brazil
ERIC Number:ED426546 Record Type:RIE Publication Date:1998-Aug Pages:83 Abstractor:N/A Reference Count:N/A ISBN:N/A ISSN:N/A
Multimodal Treatment of Attention-Deficit Hyperactivity Disorder: An Updated Review of the Empirical Literature.
Maier, William J.
This paper presents an updated review of the empirical literature which examines multimodal forms of treatment for Attention Deficit Hyperactivity Disorder (ADHD). Multimodal treatment typically involves some combination of psychostimulant medication, behavior modification, and cognitive training. Results of studies were grouped into three categories: medication plus behavior modification, medication plus cognitive training, and other treatment combinations. Studies most often used clinical outpatient populations, and interventions were implemented by clinicians, parents, and/or teachers. Findings indicate that: (1) for many children, stimulant medication, alone or in combination with behavior modification and/or cognitive training, appears to improve behavior at home and school and contribute to improvements in academic achievement; (2) behavior modification appears to be effective in improving children's behavior in the specific situations where it is utilized, but when combined with stimulant medication does not appear to add additional benefit beyond that offered by the medication; (3) cognitive training does not appear to significantly improve the behavior of ADHD children; and (4) for some children, combining a low dose of stimulant medication with a behavior modification intervention appears to facilitate the same level of behavior improvement as a high dose of stimulant medication alone. (Contains 38 references.) (CR)
Publication Type:Dissertations/Theses - Doctoral Dissertations Education Level:N/A Audience:N/A Language:English Sponsor:N/A Authoring Institution:N/A Note:Doctoral Research Paper, Biola University.
ERIC Number:ED556429 Record Type:Non-Journal Publication Date:2013 Pages:118 Abstractor:As Provided Reference Count:N/A ISBN:978-1-3036-0455-3 ISSN:N/A
Quantitatively Studying the Relationship between Retention and Completion of an Online Orientation Program
Daniel, Andrea D.
ProQuest LLC, D.B.A. Dissertation, Northcentral University
In 2012, funding for many colleges and universities was determined by the graduation rate at the institutions. The problem that was addressed was a business question that many community college officials must solve as low student success rates measured by retention and student grade point average lead to higher dropout rates. The purpose of this quantitative, non-experimental, comparative study was to use existing data to examine whether or not students who completed a college online orientation program had different rates of retention and grade point average (GPA). The population for the study was a cohort of 1,643 first-time full-time and first-time part-time students at one college. The sample was comprised of the entire group of students who did not complete the program (N = 499) and a group of students (N = 499) who completed the program and were matched to the students who did not complete the program based on the variables of gender, full-time and part-time status, and race and ethnicity. A chi-square test examined average retention rates over a 2-year period for students who did and did not complete the orientation program and found a significant difference for completion, χ²(1, N = 499) = 16.28, p < 0.05. A t-test examined average GPA over a 2-year period for students who did and did not complete the orientation program, controlling for race/ethnicity and found a significant difference for completion, t(997) = 94.72, p < 0.05. The findings of this study suggest that online orientation can have a positive influence on student retention rate. Therefore, college administrators may want to make such orientation programs available to students and encourage students to use them. Future research is needed, however, of the effects of online orientation on retention across a wider sample of colleges, universities, students, and time periods. [The dissertation citations contained here are published with the permission of ProQuest LLC.
Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone (1-800-521-0600) or on the Web: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
Descriptors: Educational Finance, Online Courses, Orientation, Academic Persistence, School Holding Power, Grade Point Average, Gender Differences, Comparative Analysis, Community Colleges, Dropout Rate, Race, Ethnicity, Program Effectiveness, College Students
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site: http://www.proquest.com/en-US/products/dissertations/individuals.shtml
Publication Type:Dissertations/Theses - Doctoral Dissertations Education Level:Two Year Colleges; Higher Education; Postsecondary Education Audience:N/A Language:English Sponsor:N/A Authoring Institution:N/A | 3,187 | 1,524 | 0.000658 |
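The two tests reported in the abstract above can be sketched with SciPy's standard routines. All numbers below (the retention counts and the GPA vectors) are hypothetical placeholders of my own, not the study's data; only the design (a 2x2 chi-square test and an independent-samples t-test on two groups of 499) comes from the abstract.

```python
# Sketch of the abstract's analyses; counts and GPA vectors are hypothetical.
import numpy as np
from scipy import stats

# 2x2 contingency table: rows = completed orientation (yes / no),
# columns = retained / not retained over the 2-year period.
table = np.array([[350, 149],
                  [280, 219]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)

# Independent-samples t-test comparing average GPA of the two groups
# (synthetic GPA draws; df here is 499 + 499 - 2 = 996).
rng = np.random.default_rng(0)
gpa_completers = rng.normal(2.9, 0.5, size=499)
gpa_non_completers = rng.normal(2.6, 0.5, size=499)
t_stat, p_t = stats.ttest_ind(gpa_completers, gpa_non_completers)
```

With realistic counts, `chi2_contingency` returns the statistic, its p-value, the degrees of freedom (1 for a 2x2 table), and the expected counts under independence.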
warc | 201704 | The Federal Bureau of Investigation (FBI) on August 25, 2011 released the following:
“David B. Fein, United States Attorney for the District of Connecticut, announced that JOHN A. ORTIZ, 54, of Stratford, was sentenced today by United States District Judge Janet C. Hall in Bridgeport to 18 months of imprisonment, followed by two years of supervised release, for illegally structuring more than $943,000 in cash transactions. Judge Hall also ordered ORTIZ to forfeit approximately $388,540 to the government, and to pay a fine in the amount of $75,000.
Federal law requires all financial institutions to file a Currency Transaction Report (CTR) for currency transactions that exceed $10,000. To evade the filing of a CTR, individuals will often structure their currency transactions so that no single transaction exceeds $10,000. Structuring involves the repeated depositing or withdrawal of amounts of cash less than the $10,000 limit, or the splitting of a cash transaction that exceeds $10,000 into smaller cash transactions in an effort to avoid the reporting requirements. Even if the deposited funds are derived from a legitimate means, financial transactions conducted in this manner are still in violation of federal criminal law.
According to court documents and statements made in court, ORTIZ maintained a money market savings account at a credit union, and also had a personal line of credit at a bank. Between May 2006 and October 2009, ORTIZ made more than 70 large cash deposits into his savings account and more than 30 large cash payments to his personal line of credit account. The vast majority of the cash transactions were in the amount of $9,000, and none exceeded $10,000. In total, ORTIZ structured approximately $943,000 in cash deposits and line of credit payments.
ORTIZ used the deposited funds to purchase, or to obtain credit in order to purchase, properties in Connecticut and Florida. ORTIZ also used more than $270,000 of the structured funds to settle a business dispute with his former partner.
ORTIZ owns and operates towing and auto repair businesses in Bridgeport and Stratford.
On May 25, 2011, ORTIZ waived his right to indictment and pleaded guilty to one count of structuring cash transactions.
This matter was investigated by the Internal Revenue Service—Criminal Investigation and the Federal Bureau of Investigation. The case was prosecuted by Senior Litigation Counsel Richard J. Schechter.”
To find additional federal criminal news, please read Federal Crimes Watch Daily.
Douglas McNabb and other members of the U.S. law firm practice and write extensively on matters involving Federal Criminal Defense, INTERPOL Red Notice Removal, International Extradition and OFAC SDN List Removal.
The author of this blog is Douglas McNabb. Please feel free to contact him directly at mcnabb@mcnabbassociates.com or at one of the offices listed above. | 2,914 | 1,431 | 0.000705 |
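The reporting rule described in the release (single cash transactions over $10,000 trigger a CTR; structuring splits them to stay under that line) can be illustrated with a toy sketch. Only the $10,000 threshold comes from the text; the function name, the flagging heuristic, and the sample amounts are hypothetical choices of mine, not an actual compliance tool.

```python
# Toy illustration of the structuring pattern described above:
# many cash transactions that individually stay just under the $10,000
# CTR threshold but together sum far past it.
CTR_THRESHOLD = 10_000

def flag_structuring(amounts, near_margin=0.90, min_total=2 * CTR_THRESHOLD):
    """Return True if deposits look structured: every one is below the
    threshold, most sit 'near' it, and together they far exceed it."""
    if not amounts or any(a > CTR_THRESHOLD for a in amounts):
        return False  # empty, or at least one transaction was reportable anyway
    near = [a for a in amounts if a >= near_margin * CTR_THRESHOLD]
    return sum(amounts) >= min_total and len(near) / len(amounts) >= 0.5

print(flag_structuring([9_000] * 5))   # repeated $9,000 deposits -> True
print(flag_structuring([12_000]))      # a single reportable deposit -> False
```

A pattern like ORTIZ's (dozens of $9,000 deposits) trips this kind of check precisely because the amounts cluster just below the reporting line.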
warc | 201704 | In the Garden:
Southern Coasts
July, 2004
Regional Report
Use shelving to elevate plants for their benefit and yours.
Take it From the Pros
As you cultivate your garden space and style, incorporate some good ideas from professionals at nurseries and plant boutiques. They offer timesaving and smart organizational techniques we all can use.
Elevate and Circulate
Designers looking to create helpful displays and clever work spaces put plants and tools within easy reach. Hang or put up a shelf, stand a rack or old bookshelf against the deck rail, and fill it with pots. You'll find the collection looks better and is much easier to water. Elevating the plants can also mean fewer back problems for whoever does the watering.
Staging Success
For maximum use of space, make a plant stage. Set up a bench made of hardware cloth or slatted wood elevated on cinder blocks, then add more benches in stages like high school riser seats. Going vertical makes room for more plants and lets some plants shade others that need it. More plants can occupy the same space while retaining good air circulation, essential in humid conditions that favor fungus development. Like the professionals, you can put small pots in front as a first alarm for dry pots, and group similar pots near each other for better water management.
Wise Choices
Most plants available commercially come in plastic pots for two reasons: they're cheaper, and they don't need watering as often as clay pots of the same size. In the home garden, you can choose the type of pot according to how you prefer to water. If you like a daily watering ritual, go with clay. Otherwise, follow the growers who use plastic because they water hundreds of plants. You'll also notice few saucers under pots at nurseries. Because they must be emptied, they require too much labor. Empty yours regularly, or eliminate them.
Take Time
At every great commercial plant facility, large or small, there is a policy that truly can benefit the home gardener. Every plant that arrives is isolated for a week or ten days before it's introduced to the rest. This is especially important for houseplant, greenhouse, or sunroom collections. And if any container plant shows signs of pest invasion, isolate it immediately to prevent spread.
Care to share your gardening thoughts, insights, triumphs, or disappointments with your fellow gardening enthusiasts? Join the lively discussions on our Facebook page and receive free daily tips! | 2,469 | 1,312 | 0.000765 |
warc | 201704 | For Immediate Release
Office of the Press Secretary November 8, 2007
Fact Sheet: Keeping America's Promise to Those Who Have Defended Our Freedom President Bush Discusses Administration Progress Implementing Dole-Shalala Commission Recommendations, Calls On Congress To Act
President Bush Visits Wounded Warriors at Center for the Intrepid
Today, President Bush visited the Center for the Intrepid at Brooke Army Medical Center, Fort Sam Houston, Texas. The Center for the Intrepid, built thanks to the generosity of more than 600,000 Americans, is a world-class physical rehabilitation facility focused on medical and rehabilitative care of wounded warriors and veterans injured in service to America. The facilities include a military performance lab, a pool, an indoor running track, a two-story climbing wall, and a prosthetics center. Speaking before Veterans Day 2007, the President reaffirmed his commitment to the well-being of those who have served, and discussed the Administration's efforts to ensure our veterans have all they need. Caring for our veterans is a solemn responsibility of the Federal government, and it is our enduring pledge to every man and woman who puts on our Nation's uniform. If the FY08 Budget request is enacted, President Bush will have increased funding for veterans by 77 percent since taking office.
The Administration Is Working Hard To Improve Care For America's Wounded Warriors
In March, President Bush signed an Executive Order creating a bipartisan commission to conduct a comprehensive review of the services America is providing our returning wounded warriors. Co-chaired by Senator Bob Dole and Secretary Donna Shalala, the President's Commission on Care for America's Returning Wounded Warriors released its findings on July 25, 2007, and the President immediately instructed the Secretaries of Defense and Veterans Affairs to implement the Commission's six recommendations that can be done administratively: The first Federal Recovery Coordinators, who will individually guide seriously wounded service members through their recuperation, will be hired over the next three weeks.
A program establishing a single comprehensive disability exam - replacing the two separate bureaucratic processes in the Department of Defense (DOD) and Department of Veterans Affairs (VA) - will be field tested in the National Capital Area this month. A new National Center of Excellence for Post-Traumatic Stress Disorder (PTSD) and Traumatic Brain Injury has just hired its first staff members and moved into temporary offices in the D.C. area. A single Web portal where wounded service members can track their medical and recovery records will be beta-tested soon. A proposed regulation to update the disability schedule for Traumatic Brain Injury and burns will be published soon and released by the VA for public comment. DOD is using special authorities to retain the best health professionals working at Walter Reed right up to its scheduled closure. President Bush recently sent Congress legislation to implement the recommendations that require Congressional action. Congress should respond to the President's leadership and consider this legislation promptly to bring the highest quality care to those injured while protecting our freedoms. The VA will initiate two important technical studies that will allow a thorough updating of our military disability system. The President's Commission called for Congress to establish these studies, but the Administration found the funding and the authority to start them now.
The President Has Nominated A Distinguished Surgeon And Combat Veteran As VA Secretary
On October 30, President Bush nominated Lieutenant General James B. Peake (Ret.), M.D., to serve as our Nation's Secretary of Veterans Affairs. Dr. Peake is a twice-wounded and highly decorated veteran of the Vietnam War, who recently served as the U.S. Army Surgeon General. His career spans over 40 years in the field of military medicine. Congress must act quickly to confirm this highly qualified nominee, so that he can begin the hard work that lies ahead of him.
This includes continuing to implement the reforms recommended by the Dole-Shalala Commission and building on the Administration's work to provide the best care possible to America's returning wounded warriors.
President Bush Has Demonstrated A Strong Commitment To Improving The Quality Of Life For Veterans And Their Families
If the FY08 Budget request is enacted, President Bush will have increased funding for veterans by 77 percent since taking office. The President has submitted a budget of nearly $87 billion for our veterans - the highest level of support for veterans in American history. President Bush calls on Congress to send him a clean veterans' spending bill as soon as possible, so our Nation's veterans do not pay the price for Congress' inability to get its most basic work done. Since 2001, the President has extended medical treatment to a million additional veterans, including hundreds of thousands of men and women returning from Afghanistan and Iraq. The President's FY08 budget continues to prioritize resources for returning combat veterans and other veterans with service-related disabilities, low incomes, and special needs. New VA facilities are being built in communities where many veterans live, so that more veterans can access top-quality health care closer to their homes. The FY08 budget provides $750 million in medical care construction funds to better align facilities with patient needs, provide care in places where veterans' needs are greatest, and improve access to both primary and specialty care services. The proposed budget will continue expanding VA access to non-institutional long-term care, enabling veterans to live and be cared for near, or in, the comfort of their homes, surrounded by family. It will also enable all combat-era veterans to obtain prosthetics and sensory aids. The President remains committed to reducing processing time for veteran disability benefit claims by continually improving methods and technology.
Since the President took office, average waiting time has dropped from 230 days to 177, and the President's FY08 budget provides resources to reduce processing time to 145 days. In 2003, the President signed into law the National Cemetery Expansion Act of 2003, directing the establishment of six new cemeteries. The FY08 budget would fully fund the final design and construction of these cemeteries, and advance the President's goal to ensure that most veterans have a final resting place within 75 miles of their homes. DOD and the VA have made great progress in sharing electronic data necessary to make eligibility determinations for VA benefits and services for separated service members. Over the past year, the Departments have reduced the time it takes for making DOD deactivation and separation data available to VA hospital and benefits processing centers from 90 days to within three days.
# # #
Return to this article at: /news/releases/2007/11/20071108-9.html | 6,997 | 3,188 | 0.000314 |
warc | 201704 | Women’s issues became a topic of debate last night at the second presidential debate after being overlooked in the first.
This gave Republican contender Mitt Romney the opportunity to appeal to women voters, with whom polls show he is less popular.
An uncommitted female voter asked:
In what new ways do you intend to rectify the inequalities in the workplace, specifically regarding females making only 72 percent of what their male counterparts earn?
In response, President Obama talked about how and why he signed the Lilly Ledbetter Act to help close the pay gap. However, Romney's answer failed to deliver a solution or even address the pay gap between men and women.
Instead, he suggested that employers should personally choose to hire more women, as he said he did when he referred to "binders full of women" in his hiring process. Somehow, he thinks that "binders full of women" will eliminate workplace inequality!
“And I—and I went to my staff, and I said, ‘How come all the people for these jobs are—are all men.’ They said: ‘Well, these are the people that have the qualifications.’ And I said: ‘Well, gosh, can’t we—can’t we find some—some women that are also qualified?’ And—and so we—we took a concerted effort to go out and find women who had backgrounds that could be qualified to become members of our cabinet. I went to a number of women’s groups and said: ‘Can you help us find folks,’ and they brought us whole binders full of women.”
Romney’s “binders full of women” phrase immediately went viral and caused a frenzy in the social media world, which took it to the next level by creating a Facebook, Twitter and Tumblr account.
To make matters worse, Romney then followed up, stating his solution to pay inequality would be strengthening the economy because, apparently, a fluid marketplace would suddenly cause employers to treat women fairly!
“We’re going to have to have employers in the new economy, in the economy I’m going to bring to play, that are going to be so anxious to get good workers they’re going to be anxious to hire women.”
Slate brought up a good point dismantling Romney’s theory:
If the free market alone could fix the problem, then women during boom times would have, according to Romney’s logic, achieved equal pay. They did not. That’s because the problem is far more complex than Romney lets on here. A little bit more flex time is nice, but it doesn’t do enough to make up for the yawning gaps in affordable child care, for instance. Plus, Romney completely breezed by the continuing problem of discrimination, which is all the Lilly Ledbetter Act addresses.
Furthermore, Romney dug himself into a deeper ditch when he relayed the story about how he granted one of his female employees flexible working hours so that she could be with her family and children. This was a nice gesture, but again, how does this help working women, with or without kids, who get paid 72 cents for every $1 a man makes? Not to mention that the gap is even larger for women of color: African-American women make only 62 cents, and Hispanic women only 54 cents. Here's what he said:
“I recognized that if you’re going to have women in the workforce that sometimes you need to be more flexible. My chief of staff, for instance, had two kids that were still in school. She said: ‘I can’t be here until 7 or 8 o’clock at night. I need to be able to get home at 5 o’clock so I can be there for making dinner for my kids and being with them when they get home from school.’ So we said fine. Let’s have a flexible schedule so you can have hours that work for you.”
Political writer David S. Bernstein of the Boston Phoenix went on to expose the truth behind Romney’s “binders full of women” comment.
The Daily Beast reports:
“What actually happened was that in 2002—prior to the election, not even knowing yet whether it would be a Republican or Democratic administration—a bipartisan group of women in Massachusetts formed MassGAP to address the problem of few women in senior leadership positions in state government. There were more than 40 organizations involved with the Massachusetts Women’s Political Caucus (also bipartisan) as the lead sponsor.
“They did the research and put together the binder full of women qualified for all the different cabinet positions, agency heads, and authorities and commissions. They presented this binder to Governor Romney when he was elected.”
Now Romney did, according to Bernstein—who cited information from MassGAP and MWPC—have 14 women among his first 33 senior-level appointments (42 percent), but Bernstein then cited a UMass-Boston study that found that "the percentage of senior-level appointed positions held by women actually declined throughout the Romney administration, from 30.0 percent prior to his taking office, to 29.7 percent in July 2004, to 27.6 percent near the end of his term in November 2006. (It then began rapidly rising when Deval Patrick took office.)"
And at Bain Capital, which Romney ran for 15 years until 1999, there are only seven women among the company’s 87 managing directors and senior executives, or 8 percent.
Romney’s comments and “solutions” to inequality are out of touch with the issues that women face. He revealed at the debate that he can neither relate to the struggles women face, nor does he really care to implement real solutions. He delivered a top-down, sexist approach that would only set women back if he became president!
Check out the memes that the social media world created to mock Romney’s “binders full of women” event above and the video clip of the debate below. | 5,948 | 2,779 | 0.000383 |
warc | 201704 | Labor Market Reforms: Issues, Evidence and Prospects
The study of labor market segmentation and the estimation of the deadweight loss due to policy distortions reflected in wage structures require analyses of labor force surveys. These data are increasingly available in most countries. But evaluations of labor market reforms are uncommon. The lack of documented labor market reforms may reflect the difficulty of reducing wage distortions by direct policy measures, and the greater capacity of trade reforms and changes in industrial structure to erode wage distortions indirectly, and thereby promote efficiency and economic growth. The economic case for labor market reforms should nonetheless strengthen support for allied policies.
To our knowledge, this item is not available for download. To find whether it is available, there are three options: 1. Check below under "Related research" whether another version of this item is available online. 2. Check on the provider's web page whether it is in fact available. 3. Perform a search for a similarly titled item that would be available.
Length: 55 pages Date of creation: 1999 Date of revision: Handle: RePEc:fth:yalegr:802 Contact details of provider: Postal: U.S.A.; YALE UNIVERSITY, ECONOMIC GROWTH CENTER, YALE STATION NEW-HAVEN CONNECTICUT 06520 U.S.A
Phone: (203) 432-3610
Fax: (203) 432-3898
Web page: http://www.econ.yale.edu/~egcenter/
More information through EDIRC
When requesting a correction, please mention this item's handle: RePEc:fth:yalegr:802. See general information about how to correct material in RePEc.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (Thomas Krichel)
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If references are entirely missing, you can add them using this form.
If the full references list an item that is present in RePEc, but the system did not link to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your profile, as there may be some citations waiting for confirmation.
Please note that corrections may take a couple of weeks to filter through the various RePEc services. | 2,629 | 1,364 | 0.000739 |
warc | 201704 | Reserve price when bidders are asymmetric
The authors analyze the optimal reserve price in a second price auction when there are N types of bidders whose valuations are drawn from different distribution functions. The seller cannot determine the specific type of each bidder. First, the authors show that the number of bidders affects the reserve price. Second, they give the sufficient conditions for the uniqueness of the optimal reserve price. Third, the authors find that if a bidder is replaced by a stronger bidder, the optimal reserve price may decrease. Finally, they give sufficient conditions that ensure the seller will not use a reserve price; hence, the auction will be efficient.
Length: Date of creation: 2013 Handle: RePEc:zbw:ifwedp:201319 Contact details of provider: Postal: Kiellinie 66, D-24105 Kiel
Phone: +49 431 8814-1
Fax: +49 431 8814528
Web page: http://www.economics-ejournal.org/
More information through EDIRC
References listed on IDEAS
Please report citation or reference errors, or, if you are the registered author of the cited work, log in to your RePEc Author Service profile, click on "citations" and make appropriate adjustments.
Lebrun, Bernard, 1997. "First Price Auctions in the Asymmetric N Bidder Case," Cahiers de recherche 9715, Université Laval - Département d'économique.
Lebrun, Bernard, 1999. "First Price Auctions in the Asymmetric N Bidder Case," International Economic Review, vol. 40(1), pages 125-42, February.
Levin, Dan & Smith, James L., 1996. "Optimal Reservation Prices in Auctions," Economic Journal, Royal Economic Society, vol. 106(438), pages 1271-83, September.
Laffont, Jean-Jacques & Maskin, Eric, 1980. "Optimal reservation price in the Vickery auction," Economics Letters, Elsevier, vol. 6(4), pages 309-313.
Dastidar, Krishnendu Ghosh, 2010. "Auctions where incomes are private information and preferences (non quasi-linear) are common knowledge," ISER Discussion Paper 0790, Institute of Social and Economic Research, Osaka University.
Kirkegaard, René, 2011. "Ranking Asymmetric Auctions using the Dispersive Order," Working Papers 1101, University of Guelph, Department of Economics and Finance.
Cantillon, Estelle, 2008. "The effect of bidders' asymmetries on expected revenue in auctions," Games and Economic Behavior, Elsevier, vol. 62(1), pages 1-25, January.
Cantillon, Estelle, 2000. "The Effect of Bidders' Asymmetries on Expected Revenue in Auctions," Cowles Foundation Discussion Papers 1279, Cowles Foundation for Research in Economics, Yale University.
McAfee, R. Preston & Vincent, Daniel, 1997. "Sequentially Optimal Auctions," Games and Economic Behavior, Elsevier, vol. 18(2), pages 246-276, February.
Kirkegaard, René, 2009. "Asymmetric first price auctions," Journal of Economic Theory, Elsevier, vol. 144(4), pages 1617-1635, July.
McAfee, R. Preston & McMillan, John, 1987. "Auctions with entry," Economics Letters, Elsevier, vol. 23(4), pages 343-347.
Kirkegaard, René, 2005. "Participation fees vs. reserve prices in auctions with asymmetric or colluding buyers," Economics Letters, Elsevier, vol. 89(3), pages 328-332, December.
Full references (including those not matched with items on IDEAS)
When requesting a correction, please mention this item's handle: RePEc:zbw:ifwedp:201319. See general information about how to correct material in RePEc.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (ZBW - German National Library of Economics)
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If references are entirely missing, you can add them using this form.
If the full references list an item that is present in RePEc, but the system did not link to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your profile, as there may be some citations waiting for confirmation.
Please note that corrections may take a couple of weeks to filter through the various RePEc services. | 4,829 | 2,216 | 0.000458 |
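The second-price (Vickrey) auction with a reserve price analyzed in the abstract above can be illustrated with a short sketch. The payment rule (the winner pays the larger of the reserve and the second-highest eligible bid) is the standard Vickrey rule; the function name and the tie-breaking are simplifications of my own, not the paper's model of asymmetric bidder types.

```python
# Minimal sketch of a second-price auction with reserve price r.
def second_price_with_reserve(bids, r):
    """Return (winner_index, price), or None if no bid meets the reserve."""
    eligible = [(b, i) for i, b in enumerate(bids) if b >= r]
    if not eligible:
        return None  # the item goes unsold
    eligible.sort(reverse=True)  # ties broken by higher index (a simplification)
    winner_bid, winner = eligible[0]
    # Winner pays the larger of the reserve and the second-highest eligible bid.
    price = max(r, eligible[1][0]) if len(eligible) > 1 else r
    return winner, price

print(second_price_with_reserve([5, 8, 12], r=6))   # -> (2, 8)
print(second_price_with_reserve([5, 8, 12], r=10))  # -> (2, 10)
print(second_price_with_reserve([5, 4], r=6))       # -> None
```

The last case shows the efficiency loss a reserve can cause: a bidder values the item at 5, yet it goes unsold, which is why the abstract's conditions for dispensing with a reserve make the auction efficient.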
warc | 201704 | English français Detection and sequence/structure mapping of biophysical constraints to protein variation in saturated mutational libraries and protein sequence alignments with a dedicated server
Background: Protein variability can now be studied by measuring high-resolution tolerance-to-substitution maps and fitness landscapes in saturated mutational libraries. But these rich and expensive datasets are typically interpreted coarsely, restricting detailed analyses to positions of extremely high or low variability or those deemed important beforehand based on existing knowledge about active sites, interaction surfaces, (de)stabilizing mutations, etc. Results: Our new webserver PsychoProt (freely available without registration at http://psychoprot.epfl.ch or at http://lucianoabriata.altervista.org/psychoprot/index.html) helps to detect, quantify, and map onto sequence/structure the biophysical and biochemical traits that shape amino acid preferences throughout a protein, as determined by deep sequencing of saturated mutational libraries or from large alignments of naturally occurring variants. Discussion: We exemplify how PsychoProt helps to (i) unveil protein structure-function relationships from experiments and from alignments that are consistent with structures according to coevolution analysis, (ii) recall global information about structural and functional features and identify hitherto unknown constraints to variation in alignments, and (iii) point at different sources of variation among related experimental datasets or between experimental and alignment-based data. Remarkably, metabolic costs of the amino acids pose strong constraints to variability at protein surfaces in nature but not in the laboratory. This and other differences call for caution when extrapolating results from in vitro experiments to natural scenarios in, for example, studies of protein evolution. Conclusion: We show through examples how PsychoProt can be a useful tool for the broad communities of structural biology and molecular evolution, particularly for studies about protein modeling, evolution and design.
Keywords: Deep sequencing ; Next-generation sequencing ; High-throughput ; Protein evolution ; Protein design ; structure-function relationships ; Protein biophysics ; Structural biology ; Neutral drift
Reference
Record created on 2016-10-18, modified on 2016-10-18 | 2,386 | 1,148 | 0.000875 |
warc | 201704 | Plumber Waltham Ma Things That You Need To Know About Plumbing
You are not alone if plumbing is something that you find difficult or intimidating. Many people can't solve their own plumbing problems, which means that plumbers can charge a lot of money for even simple and quick repairs. Don't let this happen to you; read on to learn how to solve your own plumbing problems!
Before you start a plumbing project, you should tighten all of the pipes that are easily accessible, especially if your pipes are making a range of loud banging sounds, as this is a clear sign that there are loose pipes along the line. It is also a good idea in case there is a clog, so the excess pressure released does not break a loose pipe.
Don't pay a plumber until the job is completed. A plumber may require some money upfront, but it's wise to wait until the job is completely done before giving him the entire payment. Many things can happen between the start and end of a job, so to be safe, wait until you are satisfied with the completed work before paying.
To avoid having your outdoor faucets freeze up in the winter, detach all hoses before the first freeze. Also, close the shutoff valve that leads to the outdoor faucets, then turn on the outdoorfaucets to let any remaining water in the lines drain. Once temperatures warm up in the spring, you can reverse the process.
To winterize a house that will sit unused during the winter months, you must completely drain all of the pipes. After turning off the main water supply, let all of the water drain from the faucets,toilets, and water heater (turn off the gas). Add a quart of antifreeze to sinks and the tub to prevent water from freezing in the drain trap.
During the winter, preventing frozen pipes when you live in a small dwelling can be something good to know. Frozen pipes will not only stop your flow of water but can crack and damage pipes. You canavoid this, by running a little water out of every faucet during the coldest parts of the day.
Making sure you know all of your problems so you can have them fixed in one plumber visit is very beneficial. Having them all fixed in one visit takes a lot of money off of your bill because you don't have to pay for the visit multiple times, so make sure you make a list first.
Think about what plumbing work you need, then schedule it all at once. You might be tempted to contact a plumber every time you face a small problem, but if you have all the problems repaired in one visit, you will have time to save money for the fixes. Most plumbers charge an hourly rate plus a flat rate for the trip, so asking a plumber to fix multiple problems in one trip is cheaper than calling them more than once: with a $75 trip fee, for example, three separate calls cost $225 in trip fees alone, while one combined visit costs $75.
If you have a crack in your toilet tank, you can sometimes fix this with an epoxy resin. It is very difficult to keep up with this type of maintenance, and the best bet may be contacting the supplier and ordering a new tank to be installed in your bathroom. Nevertheless, keeping some epoxy resin on hand for emergencies is a good idea.
To prevent pipe banging when you turn on the water, think about rubber blankets or straps. Instead of assuming the pipes need to be replaced, consider anchoring them or buffering them. If your pipes are plastic, leave them some room for expansion and contraction. If pipes do not leak but just make noise, eliminate the noise.
There are a few things that you should know first if you are looking into becoming a plumber. Plumbers who work for companies do not make a high salary. That is the most important thing. You shouldtry to find a way to work for yourself, in order to increase your earnings.
One way to avoid a common plumbing problem is to make sure never to flush anything but human toilet and waste paper down a toilet. Other things made of paper like tissues, paper towels, and the likedo not dissolve the same way toilet paper does and can get stuck.
As the weather turns frosty, make sure faucets outside are not dripping or leaking. You will have to fix this potential problem before freezing temperatures come. Regardless of what the pipes in your house are made out of, freezing water will cause them to crack. Even a very small crack can cause significant water damage or even flood your entire home.
If you have an ice maker or other plumbing going to your refrigerator, every six months or so pull the refrigerator away from the wall and inspect this plumbing. There should be no condensation or corrosion on these plumbing lines; if there is, contact a plumber and have them look at it.
Sticky substances like banana peels, chicken skin, and pumpkin pulp can clog your drain even if they make it through the blades. Make sure these hard-to-grind items are disposed of first by putting them in the trash can; then you can put the rest of the food into the garbage disposal.
If you think you understand what is wrong with your toilet, but aren't sure, you should first do some more research on the internet. Most plumbing problems are fairly standard, and you should be able to read about the problem in great detail on several amateur plumber forums to make sure you understand the problem.
If you are going to update the plumbing in your house, one thing to consider is installing a new tank-less water heater. They are much smaller than traditional tank heaters, which is a space-saver. Tank-less water heaters are available in gas or electric, depending on what your house needs.
If you have infrequently used drains in your house, you should pour water into them on a regular basis. This water will fill the trap and will prevent odors from entering the house. If you have slow floor drains, you should snake them to ensure they are capable of carrying water away quickly in the event of a flood.
Coat your sinks with expanding foam to deaden sounds. If you have two sinks side by side, sounds will resonate strongly between them and create vibrations. This is not good for the pipes. The expanding foam will reduce the vibrations and protect your pipes. You might find it easier to apply the foam before you install the sink.
You have learned many ways to handle a wide variety of plumbing problems. Take the advice in this article to use as a guide. If you have any further questions, take the time to find the answers using the internet or a professional, so that you know exactly what you are doing before you start.
General Market News
· US consumer spending climbs 0.3%
· Shell cuts $2 billion from capital-spending plan
· Asian markets move higher as oil recovers
· Noble exits agricultural markets with unit sale to China’s COFCO http://goo.gl/OcNhdu
· Codex takes another step toward processed cheese standard http://goo.gl/C4ObE3
· Young CA Dairy Leaders Selected to Ride on Inaugural Real California Milk Rose Parade Float http://goo.gl/sPBiQN
Class III, Cheese, and Whey
The commodity rout keeps coming, and dairy has been no exception. Class III continued its downward trend yesterday, but this time the second half took the brunt of the price declines. The July-to-December Class III price average breached the $16.00 mark yesterday, settling more than 30 cents lower than Monday at an average of $15.81. The first-half pack is now trading at $14.02. Since November 1st, the Class III pack has lost $1.88.
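As a rough illustration of how a "pack" average like the $15.81 figure is computed, here is a sketch that averages monthly settlement prices. The July-December prices below are invented for illustration, not actual CME data.

```python
# A sketch of how a contract "pack" average is computed. The monthly
# settlement prices below are invented, not actual CME settlements.
def pack_average(prices: dict) -> float:
    """Average settlement price (dollars/cwt) across a pack's contract months."""
    return round(sum(prices.values()) / len(prices), 2)

second_half = {  # hypothetical July-December settlements
    "Jul": 15.60, "Aug": 15.75, "Sep": 15.95,
    "Oct": 16.00, "Nov": 15.90, "Dec": 15.66,
}
avg = pack_average(second_half)  # 15.81 -- below the $16.00 mark
```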
Stagnant spot prices in the low $1.40s have led to the whittling away of futures premium for both Class III and cheese over the past several weeks. So far this week, the trade is less about "whittling" away premium and more about "panic," and the sharp declines have occurred on heavy trading volumes. In fact, yesterday both Class III and cheese posted over 1,900 contracts changing hands, and open interest catapulted by 796 contracts in Class III and 959 in cheese as a good mix of both producer and speculative selling pummeled the markets.
Dry whey futures also got into the mix and traded sharply lower. A new daily volume record of 408 contracts changed hands on the Globex platform. Back on January 29, 2010, 534 trades were reported but over 400 occurred ex-pit on that day making yesterday a record for the electronic trading platform. The sell-off pushed the January to December dry whey pack to a new low just south of 25 cents – the average finished at 24.8750 yesterday. The reality of the global supply situation is coming to fruition domestically. The supply side continues to dictate direction even as the domestic demand remains somewhat robust. The market is adjusting to the realities right now and it’s difficult to know when that will stop or what prices are “too low” for the current situation. The market is driven by emotion in times like these and it typically makes for increased volatility. If you missed an opportunity yesterday, just wait – it may not be long before you get it again. If you need proof of this, just look at the cattle market over the past few weeks.
American cheese stocks increased in November, which has now happened in three of the past four years. American cheese stocks of 697.9 million pounds were 9.8% higher than a year ago and 0.2% above the previous month, while October stocks were revised up 1.2 million pounds. Total cheese stocks in November of 1,145.6 million pounds were 12.6% higher versus a year ago, but down 0.05% from last month. Stocks were 27.3 million pounds more than expected, while the previous month's stocks were revised 2.3 million pounds lower, bringing the total cheese supply at the end of November to the equivalent of 35 days of use.
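The percent-change and days-of-use figures in reports like this reduce to simple arithmetic. The sketch below mirrors that math; the year-ago and daily-usage inputs are back-of-the-envelope assumptions chosen to reproduce the report's rounded numbers, not published data.

```python
# The year-over-year and days-of-use figures reduce to simple arithmetic.
# The 635.6 year-ago and 32.7 daily-usage inputs are assumptions chosen
# to mirror the report's rounded numbers, not published data.
def pct_change(current: float, previous: float) -> float:
    """Percent change from `previous` to `current`."""
    return (current - previous) / previous * 100.0

def days_of_use(stocks: float, daily_usage: float) -> float:
    """Number of days current stocks would last at a given daily usage."""
    return stocks / daily_usage

yoy = round(pct_change(697.9, 635.6), 1)  # ~9.8% above a year ago
days = round(days_of_use(1145.6, 32.7))   # ~35 days of supply
```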
We expect class III, cheese and whey to open steady to lower.
Spot Session Results

PRODUCT     TRADES   PRICE     CHANGE    BIDS   OFFERS
BLOCKS      7        $1.4100   DOWN 3    2      0
BARRELS     5        $1.4200   DOWN 3    0      1
GRADE A     0        $0.7600   UNCH      2      0
GRADE AA    9        $2.0350   UP 1 ¾    1      1
Class IV, Nonfat, and Butter Futures
NFDM values plummeted yesterday, with the bulk of the decline occurring in the deferred months. Fundamentally, little can be argued on the bullish side of things when it comes to NFDM. Any buying that is occurring seems to be "hand to mouth," as buyers step aside to see how low things can go. Production, from what we are hearing, is steady and inventories are more than adequate. Rains in CA and NZ are not helping the NFDM situation either, as the weather has been favorable for production as of late.
Butter futures rallied on a move higher in spot. While recent declines have subsided, futures continue to remain relatively supported as we move past Christmas and into the New Year. With that said, we should expect cream supplies to be more available.
Butter stocks were lower, as typically happens in November, but were 28.7 million pounds above the five-year average. Butter stocks at the end of November were nearly 6 million pounds below expectations at 132.70 million pounds, as continued strong demand has underpinned prices. Butter inventories fell nearly 26% from October yet remained 23.37% above last year's levels.
NFDM is called to open steady to lower, butter steady to higher, and Class IV mixed.
NZX Futures
There was no trade on the NZX Futures yesterday as much of the market has been thinning out for Christmas.
Grains
Scattered rains and general commodity-market weakness yesterday put our bumper crop in the limelight, driving prices lower. It seems that when the news is stale, the market wants to go lower. Corn prices are still holding in around recent lows, however, so the net result of the past several trading sessions has really been choppy and sideways. We may make new lows, but if not, we expect more of this choppy, sideways action for corn. And the longer that lasts, the more it looks like a bottom. We shall see.
March Corn Daily Chart
Mato Grosso rains were deemed "scattered" over the past 24 hours, with the heaviest activity still south; rains will continue over the next two days, though, before mostly drying up over the next ten days. Northern rain chances are better in today's 11-15 day forecast. Flooding problems will remain localized in southern states despite a very active precipitation stretch continuing into January.
We look for a mixed opening in the grain complex today. Corn and Wheat are called slightly higher, Soybeans slightly lower.
Unless otherwise noted, the posts on this blog should be construed as market commentary, merely observing economic, political and/or market conditions, and not intended to refer to any particular trading strategy, promotional element or quality of service provided by INTL FCStone Inc. or its subsidiaries. INTL FCStone Inc. is not responsible for any trading decisions taken by persons viewing this material. Information contained herein was obtained from sources believed to be reliable, but is not guaranteed as to its accuracy. These materials represent the opinions and viewpoints of the author, and do not necessarily reflect the viewpoints and trading strategies employed by INTL FCStone Inc. or its subsidiaries. Reproduction without authorization is prohibited. All rights reserved.
Ask any senior manager what key attributes they look for in an effective team member, and chances are reliability and consistency will be up there. But while most of us manage to be predictably excellent in the workplace, there are other parts of our lives where we are far less consistent.
Health and fitness is, of course, one of these areas. The road to fitness is littered with discarded gym memberships, deleted apps and unused trackers. The sad truth is that most companies operating in this area simply don't care if you fall off the wagon. Why else would your gym insist on a 12-month contract? You've bought their product, they've taken your money, and the rest is up to you. No wonder people give up.
Sadly, many existing corporate wellness programs suffer the same fate. Competitor programs tracking activity and sleep at a group level have shown high drop off rates – as much as 50% after 6 months in some cases.
That doesn’t mean corporate wellness programs aren’t a good idea. In fact, they have a hugely positive impact for the companies and employees using them. Repeated studies have shown their value for people operations – including saving on healthcare costs, reduction in disability claims, better performance, fewer sick days, and lower attrition rates. Integrating wearable activity trackers is the newest best practice for building a robust and effective wellness program, and companies are snatching up trackers for employees. Activity trackers are logical tools to empower your workforce to eat, sleep, and live better. HR execs are investing time and money into launching and promoting these programs.
But the question remains – why are so many of these existing programs falling short? The answer is engagement. People simply lose interest as they are not being told anything new. The solution? Drive engagement with a platform that delivers meaningful experiences. A system that lets you track, understand and then ‘act.’ This is the key element to sustained use and a successful program.
Drive Long-Term, Sustained Engagement
At Jawbone we have learned that providing data on its own is not enough. The key is a holistic mix of meaningful and personal contextual intelligence, social motivation, and goal reinforcement. This is how you unlock real, long term, quantifiable change. Our UP for Groups corporate wellness program integrates our UP activity bands into a lifestyle system to support employees every step of the way. We’ve developed an intelligent system that powers UP called Smart Coach. Through rigorous testing and studies, we have seen Smart Coach work, helping people get to bed 23 minutes earlier on average and move 27% more during the day. Through Smart Coach and the UP system we have gained valuable insights that help us achieve higher retention rates and greater engagement with our application. Backed by science and proven by users, you can put these learnings to use at your company to sustain positive habits.
Context is Everything
Most trackers just serve up information. A lifeless number and a boring line chart aren’t very motivating. There is no context, no celebration, no competition. Whether you were walking your dog or playing with your kids, that moment is entirely lost. Activity trackers certainly provide utility. To get them to stick long term, you must drive meaningful change in users’ behaviors and habits.
Useful information is targeted, personal, and relevant. At Jawbone, our UP system interprets your data for you. We answer the big questions. So what? What now? Personalized insights help your employees understand the relationship between their activity and the context of their lives. UP uses this context to nudge users towards healthier choices. Health suggestions based on personal habits are going to be a lot more meaningful than generic facts. Better is different for everyone.
Socialize
Choose a program that encourages social interaction amongst your employees and a sense of belonging to something larger than themselves. The more you can open interactions across social networks, the better. The best system allows your employees to engage in friendly smack talk, challenge each other and other departments to competition, and share their accomplishments and insights with friends and family. Group psychology is intrinsically motivating, and having a support network amplifies positive outcomes.
Recognize and Celebrate the Right Goals
Incentives are great. Who doesn't want to win? However, it's paramount to reinforce and recognize achievements for the whole group and not just the top 1%. Many tracker programs offer challenges whose winners are consistently the most athletic people at a company. That's no surprise, and it's not going to motivate your average employees, who don't run marathons on the weekends. I'm not going to compete if I know I can never win. Don't glorify and benchmark against the gym rat at the head of the pack; instead, uplift and motivate the entire group to move with competitions that are completion-based. Keep this positive and attainable focus, and all of your employees will be on the track to bettering themselves.
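One way to make the completion-based idea concrete is to score participants against their own goals rather than ranking everyone by raw totals. The sketch below shows the difference; the names, step counts, and goals are invented for illustration.

```python
# Completion-based scoring: everyone who hits their own goal "wins",
# instead of ranking by raw step counts. All names and numbers are made up.
def completion_scores(steps: dict, goals: dict) -> dict:
    """Mark each participant complete if they met their personal goal."""
    return {name: steps[name] >= goals[name] for name in steps}

steps = {"casual_walker": 6_500, "marathoner": 22_000}
goals = {"casual_walker": 6_000, "marathoner": 25_000}
results = completion_scores(steps, goals)
# The casual walker completes their goal even though the marathoner
# logged far more steps -- an attainable target for every participant.
```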
Moving Forward
Once you’ve built a tracker program that employees continue to use over time, you can parlay the data provided by these programs into valuable organizational decision making. Rich engagement metrics and intelligent analysis lets HR make better, more timely decisions and build implicit credibility.
More than 13 million wearable fitness tracking devices are expected to be incorporated into employee wellness programs within the next five years. It’s a natural progression for companies invested in the health of their workforce, and the benefits are overwhelmingly positive: clearer goals, proactive approaches to health, and employees who are thinking about improving their lifestyle on a regular basis. Make sure you establish the right program to ensure your investment pays off in dividends and not in dust.
–
This article was originally published on the EHBC website.
Technology advancements have had a transformative impact on law firms. Whether it is the venerable fax machine or a modern suite of integrated cloud services, no law firm can exist without technology supporting it.
The challenge of designing, building, and maintaining a technology infrastructure that helps a law firm meet its objectives is difficult for solo, small, and large firms alike. Lots of time and energy is required when implementing a law firm’s technology infrastructure from scratch.
In many instances, ethics opinions recommend lawyers rely on experts, which is why firms are increasingly hiring technology consultants for recommendations on and the management of their practice’s software and hardware needs. These legal technology consultants help lawyers make well-informed decisions regarding the security architecture they have in place and the collaboration and practice management tools they are utilizing—freeing lawyers to focus on their cases, clients, and firms, instead of computer repairs.
Would your firm benefit from using a legal technology consultant?
Find out in this free Clio webinar with special guest, Andres Hernandez, the CEO of Wingman LegalTech. In this hour-long presentation, Clio will explore:
- Types of services offered by legal technology consultants
- Questions to ask when considering legal technology consultants
- Certifications carried by legal technology consultants
- Sources for finding local legal technology consultants

Speakers:
- Joshua Lenon, Lawyer in Residence, Clio
- Andres Hernandez, CEO of Wingman LegalTech
For over a year I've been fielding questions, having conversations and receiving…ehem, interesting emails over a post called "The Silence of Paul On Evangelism." I've been called a heretic and I've received an amazing amount of notes of gratitude. I expected the criticism but not the gratitude. And honestly, I'm thankful for both.
But I think it’s time to say more. I know it is unthinkable to wait as long as I have to say more in a world like our own. Everything so immediate. But the weight of the subject has kept me back from saying much more. But I need to say more and maybe…hopefully close the discussion. Eventually
So I wanted to offer some thoughts that will hopefully clarify further, provoke some more thought and get us on the journey of loving God and neighbor.
A couple things before we begin: First, please read all of these. Especially if you find yourself confused or angry. Second, I ask that you stop and think before you respond. I do not ask you to agree with me so much as think deeply.
With that, let’s begin.
1. There is no command after Pentecost for believers to evangelize. This is a fact. Paul does not command evangelism in his letters to the churches. Peter doesn’t do it. John doesn’t do it. In the letters to the churches, the command is just not there. This is not an argument for anything. It’s just not there. I know there are some passages which come close and there are examples. But close is not the same. And examples are not commands. There are commands to pastors and vocational missionaries to evangelize but not to ordinary believers.
2. The above statement should not cause you alarm if you love evangelism. If you think it is dangerous for me to point this out, your beef is with reality. With God himself. You are alarmed that someone is pointing to something the Holy Spirit did. Not Matt Redmond. Not anyone, save God. The truth sets us free because it is entirely in step with the character of Jesus. Embrace it and look to it for warmth in the midst of a cold world that denies the power of truth.
3. No one gets reprimanded for not evangelizing in the Scriptures. There are no rebukes. No one is made to feel they have erred or sinned. No one is guilted into doing it more. No tweetable statements making those who have not evangelized today (or lately) feel like they have neglected their duty. There is nothing of the sort. This is in stark contrast where the majority of pastors across the evangelical landscape lean heavy upon those in the pew to be about the business of evangelism. And yet, there is no example for them to do so.
4. The lack of commands in the letters to the churches must be meaningful. If there were many commands we would point to them often. Those who are passionate about evangelism would wield them with ferocity against those who questioned the wisdom of evangelism. If you deny any meaning to the silence, you cannot ask with any seriousness for us to pay attention to the noise.
5. I think evangelism MUST have some place in the Christian life. I just do not think it is the thing. I do not think it is the sign of faith. In other words we have no justification for questioning the salvation of a person who is not engaged in evangelism. But evangelism to some degree may be beyond a command. Why do I say this? We must speak of the reality around us, within us, beyond us and out in front of us. But primarily because it is reality. To call this evangelism always is to reduce it to something smaller than what it is.
6. The guilt poured upon those who have no desire nor inclination to do cold evangelism is wrong. I do not think we can justify it biblically. We have no cause to guilt someone into a practice which most unbelievers have no desire to be involved with. This is not a blanket condemnation of cold evangelism. But I do not think it needs to be reined in.
7. Evangelism seems to be the trump card for the evangelical church. This is insane. Think about it. A guy can be a complete jerk, lack any generosity, have a mess of a home-life but if he is known as a soul-winner, nothing can be said against him. For some reason we have exalted a practice the ordinary believer is not commanded to pursue. And we have done this while ignoring the prevailing horizontal ethic of the New Testament: Love. That is what we are called to over and over and will be distinguished by.
8. There has to be a happy medium between those who make evangelism the most important thing and those who would make it nothing. I’ve no tolerance for either position. Both are skewed to the personality. One is a bully pulpit and one is a coward’s castle. I do not know that this happy medium can be plotted on a graph or made into a plan of action. But I do assume if we continue to love God and love our neighbor, believing the gospel of grace in Christ and seek to manifest the fruit of the Spirit, we will see people converted.
9. C.S. Lewis said, “You can’t get second things by putting them first; you can get second things only by putting first things first.” And I think this wisdom can be applied to our present subject. I think if we seek after what is explicit – love and all else commanded (did you know we are commanded to live quiet lives more times than we are commanded to share our faith in Paul’s letters?) – we will see what we want to see happen. I think we will see people who cannot help but talk about reality. But as it is, all of reality is filtered through a grid where all information, experience, knowledge and need must pass through the non-commanded command of evangelism.
10. I often wonder if our lack of trust is betrayed by our feeling we must always be talking about something Paul never really talked about… much less commanded. We have constructed a narrative which says you do not trust God if you are not always encouraging and engaging in evangelism. But I wonder if such activity is clouding our ability to see that we really do not trust him. Stop. I know what you are thinking. But I am not a hyper-Calvinist. I believe the pastor should call unbelievers to belief. I believe there are times neighbors should do the same. But I wonder if we are trying to get ahead of God. God’s means just may not be pre-packaged formulas given by spiritual spammers to real people with real beating hearts and real problems and real dreams and real failures. They are often more kind and loving than we are. Maybe we should trust God when he says to love them – by doing so I assume we will be seeking the kingdom and then maybe, just maybe all else will be added. Including some conversions.
Conclusion: I am pro-evangelism. However, I do not think it is a central part of Christian ethics. I think our current teaching on evangelism is out of proportion to the teaching contained in the Scriptures and this leads to misplaced guilt and ends up being a hindrance to the spread of the glory of God instead of a profusion.
What do you think? Why do you think we see no commands in the letters to the churches and yet are so quick to command people to evangelize? How can seeing the lack of something help us do something better?
The Test Mercenaries were a team of engineers within Google dedicated to helping development teams improve their testing practices and code quality from late 2006 until early 2009. Two or more Mercs would be embedded within a development team full-time for months, using the Testing Grouplet's Test Certified program as the basis for concrete action and improvement. The team was formed after Bharat Mediratta and Mark Striebeck successfully proposed to Alan Eustace that such a dedicated team be hired to improve engineering practices and code quality throughout Google via hands-on intervention, and was officially placed within the relatively new Engineering Productivity focus area, alongside Testing Technology, Build Tools, and Test Engineering. Mark Striebeck assumed management of both the Test Mercenaries and Testing Technology, and passed management of the Test Mercenaries on to Brad Green in late 2008.
I was a co-leader of the Testing Grouplet at the time I joined the Mercs in mid-2007, shortly after organizing my second company-wide Testing Fixit, arriving from the Build Tools team. Mark Striebeck sent me to start the New York branch of the team in November 2007. I left the Mercenaries to join the websearch team in January 2009, just a month or so before the entire team was disbanded.
This post goes into much greater detail on some context that I’ve only hinted at or briefly summarized in earlier posts, hence the length. Hopefully the section breaks make it palatable as a multi-session read—just in time for beach getaways, soaking in the sultry summer sun!
Even so, as always, my side of the story isn’t the complete story. There were many Test Mercenaries, and the topics I choose to cover are limited to those with which I had direct experience. Perhaps other Mercs will have much more to say about important aspects I’ve neglected to mention or cover in sufficient depth.
Googlers and ex-Mercs: Fact check-me, and I’ll change what I should.
Genesis
Proving Negatives
In Metrics We Trust
Conviction
Modus Operandi
Cross-pollination
Expansion
Man on the Moon
Revolution
NYC
Testing Grouplet NY
NYC Testing Summit
Contraction
Life After Service
Software Engineers in Test
Judgment
Footnotes

Genesis
The origins of the Test Mercenaries are rooted in the Testing Grouplet's Test Certified program, a concrete roadmap for development teams to improve their testing practices and, consequently, code quality. After TC had signed up a number of early-adopter teams and looked to expand its influence, it became apparent that many teams had at least some members who were open to TC and automated developer testing in general, but who would need some help—both technical and social—in effecting a meaningful, lasting change. In concrete terms, they needed hands-on help to climb the "TC Ladder".[1]
The Testing Grouplet leaders at the time—Bharat Mediratta and Nick Lesiecki—plus Mark Striebeck began working on a proposal to create a team of dedicated, full-time Software Engineers (SWEs) that would act as kind of an internal consulting company, with the concrete goal of helping development teams climb the TC Ladder. The working name for this team was “Test Mercenaries”, reflecting the focused-yet-temporary nature of the individual missions the team would embark upon, where members would be on loan to development teams for a limited time. Not a lot of thought was given to the name, but everybody seemed to dig it.
Bharat and Mark had a meeting with Alan Eustace, the Senior Vice President of Engineering, to pitch their idea; but when they proposed starting with only five engineers, Alan responded that he had several hundreds (at that time) of project teams, and that they needed to come back with a bigger proposal. Hard to imagine a more encouraging rejection than that! The original proposal was more of a let's-try-and-see-what-we-can-do suggestion. Alan effectively challenged the Testing Grouplet and its Test Certified program to step up to the software engineering equivalent of cleaning the Augean stables.[2] It was as inspiring a challenge as it was grungy and terrifying.
Shortly afterwards, Bharat, Nick, and Mark held a brainstorming session with Neal Norwitz, Christian Sepulveda of Pivotal Labs, and myself to discuss what kind of big proposal to make, to fit with a big strategy to change all of Alan’s engineering project teams. Wish I could recall that meeting well enough to tell you the details of what we talked about, but given my memory of how events actually unfolded, the details of that session are probably not relevant.
Eventually, however, Bharat and Mark met with Alan again with a new proposal, this time starting with (I think) ten engineers, with plans to aggressively expand. Alan gave the project the green light; Mark, who had been a manager in AdWords, assumed management of the new team, along with Testing Technology, the in-house testing infrastructure team. Neal Norwitz became the first official Test Mercenary in late 2006. Amongst existing full-time Googlers, Jeffrey Yasskin was next, if memory serves, and I joined soon after in mid-2007.
Given the small number of full-time Googlers that were signing up for Mercenary duty, Mark also started hiring contractors from well-known consulting companies. Some of the earliest were Peter Epstein of Pivotal; Sam Newman and Paul Hammant from Thoughtworks; and Keith Ray, Tracy Bialik and Russ Rufer from Industrial Logic.[3]

Proving Negatives
In an ideal world, a pair of Test Mercenaries would walk into a client team’s office area one sunny morning, sit at desks right next to the rest of the team, and begin offering helpful advice and submitting helpful changes to show everybody how automated developer testing is done. The team would be receptive to and thankful for the help, see a marked improvement in testing practices and code quality, and the Mercs would be on their merry way right on schedule three months later—riding into the sunset of one successful engagement, straight into the sunrise of another. History would repeat itself with precious little variation.
Not so at Google, which is an organization made of real people, not two-dimensional, Stepford-esque caricatures, not happy shadows cast by the sun of a radiant utopian vision. People, by their nature, are largely suspicious of ideas that run counter to their experience and intuition, and of those who espouse them. In contrast to open-minded innovator or early-adopter types, when knee-jerk rebuttals draped in the finery of rational, authoritative knowledge fail to discourage the disruptive element, the closed-minded will exert resistance—sometimes active, sometimes passive—against these different ideas until the momentum of majority action socially isolates and overwhelms them—and/or until they begrudgingly try such an idea and it leads to a positive experience.[4]
In a slightly-less-than-ideal-but-significantly-better-than-real world, the impact of automated developer testing—and its related tools and practices—could be definitively measured and presented in such a way as to provide overwhelming evidence in favor of its adoption, nipping any “rational” resistance in the bud. However, it’s a bit burdensome to ask engineers to track:
- how many bugs they caught in their own code as they were writing it thanks to a unit or integration test they wrote along with the code (whether they actually executed the test before finding the bug or not);
- how many bugs they avoided after redesigning function signatures and class interfaces when unit or integration testing revealed how cumbersome they were to use in a controlled, correct way;
- how many bugs they avoided due to the clarity afforded by splitting one class into two or more classes, to test different behaviors in isolation that were originally welded together;
- how many bugs and build time/complexity issues they avoided by better-specified code dependencies due to such separation;
- how many bugs they avoided due to rethinking library- or system-level interface contracts and architecture decisions thanks to questions raised by automated testing;
- how many bugs they caught after writing a passing test, integrating the latest changes from their teammates, having the test break, conferring with teammates, and fixing the test or the code, leading to a broader, shared understanding of the given feature under test;
- how many bugs they caught when someone from another team submitted a change that legitimately broke a test, requiring either a rollback of the change or a fix to the code;
- how many bugs they avoided when performing a refactoring,[5] major or minor, thanks to a thorough collection of automated tests;
- how many bugs they avoided by performing a refactoring to improve the readability, extensibility and maintenance of the code that they wouldn't've dared without a thorough collection of automated tests;
- how many bugs they or their peers didn't write when changing or extending the code, thanks to clear code and good tests;
- how many bugs they didn't leave behind for the Test Engineers to find and for someone to fix (thanks to automated tests);
- how many bugs they didn't leave behind for other developers to find or fix (thanks to automated tests);
- how many bugs their peers didn't leave behind for them to find or fix (thanks to automated tests);
- how many bugs they didn't leave behind for themselves to eventually find or fix (thanks to automated tests);
- how many bugs they didn't leave behind for customers to find (thanks to automated tests); and
- how much time, trust, reputation and revenue they saved by avoiding all of these bugs and issues before even sending their code for review, submitting it to the source control repository, or pushing it to production (thanks to automated tests).
To be fair, testing cannot and will not find 100% of bugs; it can only prove their existence, not their absence, after all. But good testing practice goes a long way towards finding and killing a lot of bugs before they can grow more expensive, and possibly multiply. The bugs that manage to pass through a healthy testing regimen are usually only the really
interesting ones. Few things are less worthy of our intellectual prowess than debugging a production crash to find a head-slappingly stupid bug that a straightforward unit or integration test could’ve caught. It’s even less sexy to drown in a sea of such bugs, or to live in perpetual, abject fear of them.

In Metrics We Trust[6]
Google is a very measurement-driven culture, and in such a culture, concrete measurements provide the incentives towards which people align and optimize their efforts. Since the aforementioned productivity-saving/catastrophe-avoidance phenomena were very hard (if not impossible) to measure, engineers largely neither valued nor optimized for them. They valued and optimized for what could be measured, such as:
writing a lot of code;
doing a lot of code reviews;
launching products/features;
measuring the impact of launched products/features in terms of:
  page views;
  active users;
  queries-per-second;
  server response latency;
  user interface latency;
  machine resources saved or utilized (CPU, RAM, disk, network bandwidth);
  data processing throughput;
  quality/impressions/click-throughs (i.e. ad revenue);
and recovering from catastrophes, user-visible or -invisible.
These metrics fed directly into possibly the most valued set of metrics of all:
favorable peer reviews; favorable manager reviews; review scores calibrated between managers; and promotions.
These far more measurable phenomena are very important (with the possible exception of worrying too much about reviews and promotions). The challenge was that people back in the day often assumed that the productivity-saving/catastrophe-avoidance phenomena were sufficiently addressed by hiring only “smart” engineers, by the deeply-ingrained tradition of code reviews,[7] by letting the Test Engineers find all the bugs, and/or by putting out production fires if—If!—they ever happened. These are all indeed important practices, even with a testing culture now firmly in place; and in fact, history would seem to support the hypothesis that until 2005 or 2006, the strategy of relying largely on these practices alone worked very well for the company. Developer testing was considered of little or no value to—if not in direct opposition to—writing/reviewing code, launching, and getting promoted. In other words, testing was a waste of valuable engineering time. Hence the popular lament: I don’t have time to test.

Conviction
However, those of us in the Testing Grouplet, the Test Certified program, and the Test Mercenaries were convinced that the center could not hold without widespread adoption of automated developer testing. Many of us either had an immersive experience with teams that already had a healthy testing culture, or had been driven to the brink of madness on death-march projects and given a harsh dose of the real cost of all of those productivity-killing, catastrophic bugs that can only be measured
after they appear. I fell into the latter camp, having experienced team-wide despair on a project at Northrop Grumman Mission Systems, despair that I later helped to abolish after experimenting on my own with unit testing, purely on a whim, and having an enormous amount of success with it. For those of us who had experienced the bitter fear and loathing of such projects, and the sweet relief brought about by adding testing to the mix, testing wasn’t the cultish fantasy some accused us of promoting; it was a genuine experience with proven results.
Google, as I mentioned just above, did very well for itself with minimal developer testing for much of its history up until the inception of the Testing Grouplet, Test Certified, and the Test Mercenaries. But we knew that no matter how smart and wonderful Google engineers were, or would be in the future, the scale of complexity—of the code; of the infrastructure pieces running in the datacenters; of the products that would continue to be launched, updated, extended and integrated into other products—would reach a point too great for any organization of any size, even with the best engineers in the world, to manage without the discipline and security provided by the practice of automated developer testing. Good hiring and code reviews went an extremely long way, and are still critical; but absent automated testing and better build and test tools, their limits were being stretched. I tried to illustrate this scenario in an earlier post, Coding and Testing at Google, 2006 vs. 2011.
Modus Operandi
Given the cultural bias against testing on the basis of measurability, especially given the strong incentives towards seeking promotion, we realized our challenges were not purely technical. We needed to spread knowledge, we needed better tools, and we needed to figure out how to navigate extremely complex and diverse team dynamics.
Assessments: Before each engagement, the Mercs who were to engage with a prospective client team would perform an “assessment”: a series of meetings with the manager, tech lead(s), and individual team members to get a feel for the team’s existing process and pain points, as well as team-wide and personal goals. And, of course, we were gauging the team’s attitude and dynamics as best we could, so we could avoid entering into situations where success seemed unlikely. (Unfortunately, this process proved somewhat fallible.) The last step in the assessment was to present to the entire team the status quo as best we understood it, and to reach agreement on specific goals: specific problems that the Mercs and the team wanted to solve during the course of the engagement. Of course, an up-front part of the deal was to get the team on the Test Certified ladder; getting them to TC Level One would be pretty quick, as it was all about putting measurements in place, whereas reaching Levels Two and Three would be the real goal, the real challenge.

Proximate seating: After a few productive and less-than-productive early experiences, we began to demand up front that the Mercs be given desks in the immediate physical proximity of the team. While a well-meaning manager or tech lead might petition for Mercenaries to help out the team, if that manager only arranged for the Mercs to sit in a separate location out of sight and earshot of the team, the Mercs’ effectiveness was severely compromised. A big part of the reason Google spoils its engineers with cafés and microkitchens is that they encourage an information-sharing, tribe-building dynamic based on physical proximity (however unmeasurable the direct impact). No matter how receptive the team was to the Mercs, if the Mercs were kept separate, they just couldn’t integrate themselves into the project and have any kind of significant impact without that physical proximity.
When the team was largely apathetic or hostile—exactly the kind of situation the assessments were designed to avoid, but which happened with some frequency anyway—their impact was reduced to pretty much nil, beyond possibly getting a Chris/Jay build and other tools set up to achieve Test Certified Level One.

Two Mercs per engagement: After Neal Norwitz’s early experience as the lone Mercenary on a test-friendly team, the firm policy of at least two Mercenaries per engagement was established. No matter how receptive the client team, going at the job alone was a lonely and tiresome task; having a second to back one up provided vital morale and energy, aided the generation and refinement of ideas, and helped lighten the overall load. Some engagements, depending on the size of the team and complexity of the product, even required three or four Mercenaries.

High-profile projects: Given our limited resources, Mark had to choose our client projects carefully. In particular, while there were many teams and engineering offices interested in Test Mercenary support, he decided the best strategy was to seek out the highest-visibility projects that needed help, focusing exclusively on Mountain View at first, to maximize the visibility of the Mercs and their accomplishments.[8] To a large extent, this strategy worked, but it also risked backfiring in those cases where, for one reason or another, the Mercs weren’t able to get their teeth into a project and make a significant difference.

Code reviews: Code reviews are the windows into the soul of a team at Google. They are an archive of the technical details and design decisions of the product as defined by the code, and an archive of the team culture and dynamics as defined by the comments. One can explore the roster of who sends reviews to whom, how people choose to communicate their ideas and criticisms, and where the hot spots and pain points of the system appear in reality.
For that reason, when I took on an explicit leadership role within the Test Mercenaries in NYC, I coached the Mercs to spend the first couple of weeks of an engagement doing nothing but reading and voluntarily commenting on the team’s code reviews, even if they weren’t included in the review’s to: or cc: fields.

Take the bases in order:[9] The advice to focus purely on code reviews in the beginning also tied into the principle of not coming on to a team too strong up front. Just like in love, one needs to warm up slowly to the target of one’s interest, so that he/she feels you are genuinely, patiently interested in getting to know him/her, rather than desperate to cling to the nearest object as a crutch or a life raft for one’s own ego. And no team wants you to ride into town like the new sheriff, guns a’ blazin’, laying down the law. Programmers, just like any other humans, are social, emotional, tribal beings who are not influenced by rational argument alone, and one must build trust with the tribe before assuming one has been granted permission to influence it.[10]
In the meantime, the first period of lurking on code reviews is also a great time to put the tools in place to get the team to Test Certified Level One; that was an up-front part of the deal, after all, and it is pretty straightforward, one-time work that doesn’t require too much of the team’s specific involvement. Once that is done and a level of familiarity and trust has been established,
then the Mercs have license to suggest bigger changes: in terms of design, implementation, and testing techniques, and in terms of updates to tools, processes, or policies—especially the Test Certified Level Two policy of requiring that all nontrivial changes be accompanied by tests. The most successful engagements led to situations where engineers from the team were working very closely with the Mercenaries to make bigger, broader, sweeping changes to the project in terms of code, tools, process, or all of the above.

Three months: The “ideal” time for each Mercenary engagement was three months. In practice, it turned out to be the “minimal” time. As mentioned above, this process of starting small, building trust, and working towards bigger goals is not a two-week job. A lot of the time, the Mercenaries would stay longer than three months if they were making significant progress that they didn’t want to break off abruptly, or if they felt there was still hope of improvement on an engagement that hadn’t quite worked out yet. Some engagements worked out so well that the team just wanted to keep collaborating with their Mercs for longer and longer, and if there were no more pressing projects in the pipeline, we let the engagement continue.

Retrospective: After each engagement, the Test Mercenaries would follow up with a retrospective meeting with the client team. The purpose was to collectively document what worked, what didn’t, what goals were reached, what goals were missed, and what surprises occurred, for better or worse. Some of these retrospectives were very encouraging and inspiring; some of them, quite the opposite. But all were informative; given Mark’s experience with agile retrospectives, as well as the experience that many of the contractors brought, we always walked away from an engagement with a clear sense of the impact we really had, and why.
However, as mentioned above, we had a very hard time pointing to metrics that the majority of the company could value or understand. Yes, we could point to Test Certified progress. Yes, we could say specific behaviors were learned, tools were adopted, design goals were reached. We could even say we did X number of code reviews or submitted X number of changes, if we wanted. But the real value of our impact proved elusive when we did have an impact, and we had precious little to fall back on when it was obvious our impact was minimal or nonexistent. We weren’t primarily responsible for feature or product launches. Production fires were beyond our realm of responsibility. Making features more maintainable or easier to implement, and avoiding production fires in the first place, were not accomplishments for which we could receive objective credit.
Cross-pollination
In addition to the two-Mercs-per-engagement policy, the Test Mercenaries also had a very active internal mailing list and a weekly meeting in which we would all commiserate about our individual engagements and discuss developments, both technical and social. We would share ideas and, just as importantly, tools between our engagements and client teams this way, both spreading what we each had learned and taking the lessons and information from others back to our individual projects. One fond memory to serve as an example: For months, every sentence uttered by Peter Epstein in these meetings contained the word Guice.[11] (Probably every sentence outside the meetings, too.)
During the following weeks, we’d revisit the same tools and ideas and offer our experiences and refinements, a positive feedback loop that impacted every engagement. Having Mark Striebeck also managing the Testing Technology team was a huge boon in this regard, as he could direct Testing Tech to capitalize on these developments, working in close collaboration with the Build Tools team. Testing on the Toilet proved an ideal outlet for all of the tool development and technical insights resulting from Test Mercenary activity, and the Mercs were able to help keep a steady flow of material moving through the TotT pipeline. Feedback on the episodes from readers throughout Google also fed back into Mercenary discussions.
In this way, the Test Mercenaries served as a radiant crucible for the most advanced development tools and techniques then known to the company. When the team was still small, we’d go off on a lot of rambling discussions and random tangents where we’d explore ideas deeply; as the team grew much larger, we had to nip that in the bud to give everyone a fair slice of time to hit the highlights. Random ramblings still happened on the mailing list, but I usually found them excessive and tedious in that medium. Great ideas still spread, but something of the camaraderie, at least for me, was lost.
Expansion
Though Test Mercenary membership was open to nearly all interested Software Engineers (SWEs)—there were interested Test Engineers and Software Engineers in Test, but neither role was considered appropriate for the job at that time, as we wanted the client teams to see the Mercs as “fully-equal” SWEs—precious few existing Googlers signed up for the task. Mark Striebeck, being a very well-connected member of the capital-A Agile community at that time, began to aggressively recruit engineers from well-known software consulting companies as contractors, i.e. employees who work full-time but are not given full-time benefits.[12] This provided the huge advantages of bringing fresh perspectives, free of Google baggage, and broad consulting experience to our table, and it enabled the team to scale up much further and much faster than it would’ve otherwise. Having a Googler Merc paired up with a newly-recruited contractor also greatly helped the new folks ramp up on Google culture, tools, and standards.
Plus, Mark was able to hire contractors with specific technical experience based on demand, experience that the existing full-time Googlers didn’t necessarily have, such as extensive experience with Java and Javascript. Neal, Jeffrey, Kurt, Noel, and I were mainly C++ and Python programmers, but many engagements ended up being Java gigs; Miško was the first existing full-timer with extensive Java experience. As demand grew for projects with a large Javascript component, more contractors with Javascript experience were brought in.
The downside for me was that the group got so big at one point that, being in NYC at the time and feeling more disconnected from the Mountain View team, I lost track of how many Mercs there were. Let me run through the list of all the Mercs that I can remember (though I know there were many more):
Googlers who transferred from another team:
Mike Bland *
Kevin Cooney
Neal Norwitz
Miško Hevery
Kurt Steinkraus
Jean Tessier *
Jessica Tomechak * (team Tech Writer)
Adam Wildavsky
Noel Yap *
Jeffrey Yasskin

Googlers directly hired as Test Mercenaries:
Dave Astels *
Misha Gridnev
Christian Gruber

Industrial Logic:
Alex Aizikovsky **
Tracy Bialik **
C. Keith Ray
Russ Rufer **
Gene Volovich **

Pivotal:
Adam Abrons
Peter Epstein **
Matt Hargett
Dave Smith
Paul Zabelin

Thoughtworks:
Dennis Byrne
Bradford Cross
Paul Hammant
Ralph Jocham
Jonny LeRoy
Sam Newman
Tim Reaves
Scott Turnquest
Jonathan Wolter

* No longer at Google
** Contractor eventually hired full-time
Apologies to those whose names’ve slipped my mind. Ping me and I’ll update the list. (Thanks to Christian Gruber for helping me fill in a few folks I’d forgotten.)
Man on the Moon
Shortly after joining the team, I grew restless that the Testing Grouplet, Test Certified, and the Test Mercenaries seemed to be out of sync, not communicating and collaborating as well as I’d hoped. I was actively involved in all three, but no one else really was. We all wanted to improve testing at Google, but we weren’t terribly clear on what exactly we were doing to reach that state together, other than producing Testing on the Toilet episodes, recruiting Test Certified participant teams, and launching a handful of Test Mercenaries engagements.
This increasingly disjoint set of initiatives seemed ripe for unification towards a common goal, lest we duplicate effort or become a set of competing factions, weakening the overall testing mission. After conferring closely with my main partner-in-crime at the time, Intergroups program manager Mamie Rheingold, we realized we needed a Man-on-the-Moon mission: something bold and grand to shoot for that would align everyone’s efforts and produce an enormous impact beyond that which we could possibly predict.
Having grown up in Hampton, VA, home to a greater military presence than any other metropolitan area on the planet, I very naturally took to the military metaphor implied in the “Mercenaries” part of the name, and formulated the first draft of a shared mission thus: The Testing Grouplet and Test Certified should combine forces to make the Test Mercenaries as successful as possible—to “support our troops”, as it were. I liked it; I shopped it around to folks involved in all the groups, and they were just kinda OK with it, except one: my Testing Grouplet co-leader, Michelle Levesque.
One day I proposed that Michelle and I take a walk around the Googleplex to work out our differences over the mission. She didn’t disagree that we needed a mission—no one disagreed on that—but, being Canadian, she thought Americans were too hero-worshipping of the military, and the metaphor of “supporting the Mercenary troops” didn’t sit well with her on that basis. Of course, I saw nothing wrong with the implied honor of a military metaphor, but I asked what she would propose as an alternative. She said something along the lines of, “Why don’t we make Test Certified the focus of all the groups?”
Bang, that was it! Clear as day, right under our noses! I pounced on this idea immediately, and after we finished our walk, the two of us pulled the third co-lead, Neal Norwitz, away from his desk to run the idea by him. He thought it was a great idea, too, and it was settled—at least as far as the Testing Grouplet was concerned. But it was a very easy sell to the Test Certified (obviously) and Test Mercenaries contingents as well. So our mission became thus:
Ensure every team at Google reaches Test Certified Level Three by the end of 2009.
Very shortly after getting unanimous buy-in on the new Man-on-the-Moon mission from the Testing Grouplet, Test Certified, and Test Mercenaries, Mark Striebeck, Antoine Picard, and I were invited to a leadership offsite for the nascent Engineering Productivity focus area. The big theme of the offsite was finding a way to have Test Engineers (TEs) and Software Engineers in Test (SETs) engage more effectively with their product teams on issues of overall product quality. The sense at the time was that there was a giant disconnect, that TEs and SETs were basically just cleaning up after the developers without having a say in the development process, without a chance to introduce improvements that would ensure better product quality, to say nothing of making better use of the TEs’ and SETs’ time and talents.
Feeling very excited and confident, I very vocally promoted what I called my “secret agenda”: Getting TEs and SETs to become active advocates of the Testing Grouplet’s Test Certified program for their product teams. Test Certified is primarily concerned with code quality, not overall product quality, but my argument was that issues of product quality were suffering because the TEs’ and SETs’ time was often consumed with diagnosing code quality-related issues—i.e. stupid bugs that a unit test could’ve easily, quickly, and cheaply caught—that were distracting them from doing the more important, more interesting work of ensuring overall product quality. Fixing code quality wouldn’t necessarily fix product quality, but done well, it could only help.
What’s more, Test Certified was already very active and somewhat proven, with dozens of teams already participating, and provided an easy-to-follow package of steps based on de facto-standard internal tools, infrastructure, and policies. This wasn’t something for Eng Prod to have to start from the ground up; they could quickly pick it up and build on the momentum that was already there, and both they and the development teams could see immediate, tangible progress. It gave both the TEs/SETs and their development teams something in common, something absolutely concrete, to discuss and work towards with an eye to improving code quality as it ultimately impacts overall product quality.
After that day, Eng Prod’s endorsement and adoption of the Testing Grouplet’s Test Certified program was rapid and absolute, to the point that Test Certified became more associated with Eng Prod than with the Testing Grouplet.[13] As a Testing Grouplet leader, I was happy to negotiate a successful symbiosis between the Testing Grouplet and the Eng Prod organization as a whole, but especially with the individual TEs and SETs who also felt an investment in the concept of automated testing, but didn’t fit into the existing developer-exclusive Testing Grouplet and Test Mercenaries cliques.
This Testing on the Toilet episode featuring Test Certified, which I happened to write, encapsulates the big picture of everything that was going on with regards to the Testing Grouplet, Test Certified, the Test Mercenaries, Engineering Productivity/Test Engineering, and Testing on the Toilet itself (click on it for the larger, readable image):
This episode also features the Test Certified and Test Mercenaries logos (the TC Shield and the Test Mercs knight, respectively) designed by Mamie Rheingold, and the Testing Grouplet lightbulb logo designed by Johannes Henkel (with a small contribution from yours truly). Published in mid-2007, it explicitly mentions the Test Mercenaries, the ability to avoid having bugs slip through to QA and production, and the explicit mission of ensuring all Google teams reach Test Certified Level Three within two years. Apparently we hadn’t yet given up on Ambient Orbs by then. “OKRs” stands for “Objectives and Key Results”: the quarterly goals that each individual, team, and focus area, as well as the company as a whole, sets at the beginning of each quarter and reviews at the end to gauge progress and impact, and to develop goals for future quarters. Test Certified was designed to be very OKR-friendly.
Revolution
As I mentioned in the Chris/Jay Build post, shortly after the Eng Prod offsite, during the end of July, I experimented with SrcFS and Forge when setting up the Chris/Jay build for my first Mercenary engagement. (It being the first engagement for both Tracy Bialik and me, we didn’t yet think to set up the CJ build before diving into code reviews and some refactoring.) The impact of the new tools was phenomenal, and when I realized the ease with which incompatible tests were fixed, I began planning what would become the Revolution Fixit of January 2008 (rolling out Blaze, the Make replacement, as well).
Given the length of this post, and the fact that I’ve a lot more context and detail to explain about the Revolution, I’ll just give the punch line here: I saw in the new tools a huge part of the solution to the number one obstacle in our quest to get every Google engineering team to Test Certified Level Three—the “I don’t have time to test” excuse. I enlisted the rest of the client team to use the new tools, and sold the tools very hard to all my fellow Mercenaries. Mark
loved the fixit idea, and he and I worked very closely to make it happen. We did twist the Build Tools team’s arm a bit, as well as that of Ambrose Feinstein, Forge’s author, who was only working on it as a 20% project at the time. I’m not particularly proud to admit that, but in the end, everybody was moving full-speed-ahead in the same direction, and the effect was awesome: Awareness of, interest in, and adoption of the new tools took a quantum leap during the Fixit; the old build tools were mostly dead within three to six months, and totally dead within about a year; and Mark Striebeck formulated his vision for the Test Automation Platform (TAP) based on the power and potential of the new toolchain. Mark assigned Mercenary Sam Newman to lead the initial development effort of Sponge, TAP’s company-wide build-and-test result data collection component, which was integrated into Blaze well in advance of TAP’s rollout.[14] Even before TAP rolled out, the “I don’t have time to test” obstacle/excuse was dying, and it was certainly dead afterwards. “Revolution” was the correct name; I’m sure John Lennon would’ve approved, and would’ve loved to see the plan.
The Revolution was one of the quintessential outcomes demonstrating what I mean by the Test Mercenaries serving as a radiant crucible for ideas and tools that spread throughout Google. Again, I promise, I will speak squarely about the Revolution in the near future; with the Test Mercenaries background in place, I’ve got a couple of Testing Fixits to fill in, at which point we’ll be ready for the Revolution.
NYC
By late 2007, Mark Striebeck decided it was time to consider branching the team out to Google’s second-largest engineering office, New York City. Plus, we landed a huge new client team, with major components in both Mountain View and New York, which actually necessitated a New York presence. Since I had assumed a leadership role within the Test Mercenaries and was known for my personal love of NYC, Mark chose me to lead the new team. Alex Aizikovsky, Jeffrey Yasskin and Tim Reaves were also assigned to the engagement, and Mark sent Alex to accompany me in New York.
I arrived in New York on November 20, 2007. About a week later, while watching Boss Tweed, a rockabilly-blues trio, perform at the Mercury Lounge, I realized that I absolutely wanted to move back to New York, where I had briefly lived while interviewing with Google. Eventually, ten-and-a-half months later—staying in corporate housing the whole time—official permission was granted and I moved to the West Village.[15]
Things got off to a very rough start, though: That first, big, highly-visible engagement was an unqualified disaster, for which I assume responsibility. Our daily stand-up meetings and agreements didn’t help. That team—with its strong engineers with equally strong personalities, its already-improving discipline and policies, and an extremely complex product that the lead engineers knew exactly how they wanted to change—was already in a state far beyond our ability to make an impact, or perhaps just beyond my ability to lead a successful engagement. However, despite that negative experience, and the grueling retrospective, we managed to salvage good relationships with those engineers, and to move on to much more successful engagements. Along the way, Tim Reaves joined us from Mountain View, and Adam Wildavsky, a full-time NYC Googler, also decided to join our team.
Testing Grouplet NY
Despite New York’s status as the second-largest Google engineering office, the lack of an NYC/East Coast Engineering Productivity director at the time made it difficult for the Test Mercenaries, Test Engineers (TEs) and Software Engineers in Test (SETs) to pull together as a community. I had my weekly meetings with Mark Striebeck as a lifeline, but most of the TEs and SETs felt very disconnected from the Mountain View-centric Test Engineering/Engineering Productivity organization. Even the TEs and SETs in Zürich/EMEA had a regional Test Engineering director. I felt some degree of responsibility to do
something to help create a sense of community, a sense of connection between those of us in the testing community beyond the few personal friendships some engineers had formed.
In early 2008, I conscripted my partner-in-crime David Plass to start a new Testing Grouplet chapter, the Testing Grouplet NY.[16] I was never an official leader, but I came to the early meetings to provide ideas and moral support, until David, Prakash Barathan, Catherine Ye, and Tony Aiuto, among others, found their legs and ran with it. That was one of the most fun times to be a Groupleteer, as we all brainstormed New York-centric ideas to promote Test Certified, such as the Statue of Liberty build monitoring orbs and the Testing Grouplet NY logo: the Statue of Liberty holding the green Testing Grouplet logo lightbulb and an Apple PowerBook.
To kick-start the grouplet’s New York-focused efforts to achieve the Test Certified-everywhere mission, I suggested that we try a very New York-specific format for introducing prospective development teams to Test Certified Mentors: speed dating. But I realized I didn’t really know how speed dating actually worked. So, in the interest of research, I went speed dating. It was so much fun, and I learned a lot about how to run such an event! After that, we set up the event, with Test Certified mentors at each table and representatives from interested teams rotating around—or was it the other way?—and everybody had a lot of fun and found a match. Mark Striebeck claimed that the best expense report he ever approved was for my speed dating receipt.
NYC Testing Summit
After the DoubleClick integration in April 2008, as a means of galvanizing the members of this now-even-larger New York testing community, I had the idea to organize a NYC Testing Summit, a multi-day, informal, internal conference of TEs, SETs, Test Mercenaries, and test-friendly Software Engineers to take place in the New York office, independently organized from the Engineering Productivity organization in Mountain View (though with help from its budget). I applied Fixit organization tactics to the process, defining explicit roles and handing them out to specific individuals to own, with clear directives and responsibilities and the freedom to execute on them however they chose.
We extended invitations to TEs, SETs, and Mercs company-wide, and we got a good number of folks from Mountain View and other offices to travel to NYC for the event. We had three days of planned presentations and workshops, with all the New York folks mingling with engineers and managers from Mountain View, putting faces and handshakes to names. The DoubleClick folks—in particular, Alex Chu, Pavithra Dankanikote, and Dianna Chou—had a blast organizing the event and being a part of the bigger Google Test Engineering family coming together in New York. And we had several kegs of beer from the nearby Chelsea Brewing Company to keep the proceedings lubricated throughout the afternoon.
The summit was held in mid-August 2008. We had no idea how great our timing was.
Contraction
September 2008: The US housing market collapse.
Eric Schmidt and the Google leadership team deserve a ton of credit for sailing the ship through those rough, stormy, unfamiliar waters. To my knowledge, not a single full-time employee was laid off, and most of the perks Google is famous for—cafés, microkitchens, other events and treats—were retained. But the future was extremely uncertain, and Google made one of the most sensible decisions it could have: It decided to let go of nearly all of its contractor employees—and that meant that the Test Mercenaries’ days were effectively numbered.
A number of Test Mercenaries contractors were successfully hired as full-time employees, either immediately or eventually, but many more were just let go. The team as a whole did not shut down right away, but Mark Striebeck did pass managerial duties on to Brad Green shortly thereafter, to focus on Testing Technology and the Test Automation Platform (TAP). In New York, the team became just Adam Wildavsky and me, working on one final engagement together. During my final trip to Mountain View the first week of January 2009, after a few meetings with Brad and other folks, I realized I no longer had anything left to give to the mission, and went back to New York to shut down the NYC Test Mercenaries team. A month or two later, Brad announced that the team was being dismantled completely.
Life After Service
One of the perks of being a Test Mercenary was the opportunity to get hands-on experience across many different Google properties, and to hear about the experiences of others. One could get a good idea of what kind of project one would like to work on after the Mercenaries, and already have familiar contacts in-place. The last engagement Adam and I worked on was for a websearch team, and I grew fascinated with the core product and the culture that had built up around it. What’s more, as a whole, the websearch focus area seemed to take testing really, really seriously; in fact, that last engagement was another where we weren’t sure how much we could help, and I repeatedly told the team’s manager, John Sarapata, that I wasn’t sure we should go through with it. But he was so interested and insistent, and most of the team so receptive, that we went ahead with the engagement and had a great time, even if we couldn’t say that we did all that much. We did give them a helpful push up the Test Certified ladder, though; and the bridge-playing members of the team were in awe of being seated in proximity of
the Adam Wildavsky, international bridge championship regular.
When it came time to choose a new team, I gravitated towards websearch. I seriously considered John’s team, and in retrospect, perhaps I should have joined his team instead. But I still got to work closely with John and his team as part of the team I ultimately joined, and I learned a lot about how the Google websearch sausage was actually made. Plus, I always said that, post-Mercs, I wanted to retire into a team that was doing interesting, challenging work, and already had a strong testing culture such that I wouldn’t have to fight anyone over writing automated tests anymore—and that’s what I got.
What I didn’t realize was that the vast majority of teams at Google could be described thus by early 2009, or that I would be called into service once more by Mark Striebeck to lead the Test Automation Platform Fixit in March 2010 to put the final piece of the years-long testing mission puzzle in place.
Software Engineers in Test
Allen Hutchison, an American from the London office who was the Test Engineering director for Europe at the time (and has since moved on to another department), and I had a chat not long after I got involved with the Test Mercenaries. He mentioned the idea of having Test Mercenaries work closely with Software Engineers in Test (SETs), the full-time engineers hired to focus exclusively on testing and test infrastructure concerns for a product team, such that the SETs could carry on as the keepers of the testing faith, knowledge and practices for the team long after the Mercs had moved on to another engagement. This sounded like an ideal arrangement to me, but I don’t recall either of us doing that much to actively promote it. The idea eventually manifested itself based on experience, necessity, and maybe a hint of the power of suggestion—especially after the 2008 phynancial crisis resulted in the mass-shedding of contractors, many of whom were manual testers, necessitating a proactive approach on the part of both SETs and development teams.
After I successfully sold Test Engineering on the idea of using the Testing Grouplet’s Test Certified program to drive discussion and adoption of automated developer testing within their client teams, the ranks of Test Certified Mentors flooded with SETs eager to get their projects and their neighbors’ projects on the TC Ladder. In fact, one of the largest testing-related fixits, the months-long 2008 Test Certified Challenge, was organized and executed by two SETs from Cambridge, MA, Matt Vail and Tayeb Karim. Awareness of Test Certified and the number of teams on the TC Ladder exploded during this time, as individual developers, teams, and engineering offices jockeyed for position on the company-wide TC Challenge leaderboards.
Even before the breakup of the Test Mercenaries, the emphasis of the SET role became increasingly focused on infrastructure development and monitoring/improving testing practices for the whole product team, while being engaged in product and feature development discussions with an eye towards testability. Development skill became more important, as John Turek, the former East Coast Test Engineering director,[17] fought fiercely to ensure that any candidate considered for a SET role must be just as technically capable as a normal Software Engineer candidate. No longer would SETs write all the unit tests for the team, or hack together only team-specific testing scripts, or find all the problems after-the-fact for the team to go back and fix; they were about proactively working to ensure code quality up-front as part of the overall product quality mission, and finding ways to adopt and to contribute to Google-wide testing infrastructure and practices. They picked up where the Mercs left off, and then some, since they were seen more as permanent members of a team rather than vagabond missionaries. In fact, several former contractor Mercs were eventually hired as full-time SETs, such as Russ Rufer, Tracy Bialik, Alex Aizikovski, and Gene Volovich.

Judgment
Mark Striebeck has gone on the record as stating that the Test Mercenaries, in the end, were a failure. It was too difficult to “teach” testing in a way that stuck with engineers lacking intrinsic motivation; experience is always the best teacher, and the Test Mercenaries usually couldn’t scale the transmission of that experience directly to each member of a team within the scope of a single engagement. We had difficulty defining meaningful metrics to illustrate our impact and difficulty achieving success according to the ones we did define. In some engagements, we had precisely zero impact, as the team proved either too closed-minded as a whole, too motivated to self-medicate without us being able to find a concrete means of adding value to their existing process, or had goals that proved too large and complex for us to effectively contribute to in the time that we had. The approach didn’t scale; when the housing market collapsed, the contractor-heavy enrollment ensured that the team’s days were numbered.
Again, as much as I admire and appreciate Mark, I respectfully disagree with him on the grounds that this assumes a too-narrow, metrics-focused, culturally-biased assessment of “success”. Thanks to the Test Mercenaries, several dozen full-time engineers spent years thinking about the hardest technical, social, and organizational obstacles to automated developer testing at Google—not just team-by-team, but in the large. My involvement as both a Test Mercenary and Testing Grouplet leader led me to seek the Test Certified-everywhere mission as the unified focus for both groups (plus the nearly-independent Test Certified organization itself), behind which I then successfully convinced Test Engineering/Engineering Productivity to also throw its support. Other Mercs were inspired to create or adopt new tools that could be applied and shared between some of the highest-profile projects in the company, eventually making the tools sharper and driving their adoption as company-standard infrastructure. Distinct personalities with unique technical and social strengths helped drive a company-wide dialogue around Test Certified, testing terminology and methodology, and tool adoption, producing numerous Testing on the Toilet episodes that shaped how Googlers from Mountain View to Sydney think and talk about developer testing, as well as the Revolution Fixit, which went a long way towards removing the “I don’t have time to test” excuse.
Some client teams may have become Test Certified eventually without us, but certainly for many of them, the Test Mercenaries helped bring about the necessary changes much faster than they would have otherwise. Even if a team didn’t advance very far up the TC Ladder, or risked sliding back down after we left, the experiences informed the company-wide discussions and tool/process developments that, in turn, eventually did produce a permanent culture change in favor of automated developer testing.
The 2008 housing market collapse, as terrible as it was, happened at an opportune time in the sense that the Test Mercenaries, as well as Test Certified, the Testing Grouplet, and Test Engineering/Engineering Productivity in general, had reached a tipping point that only became apparent in 2009, when despite the loss of the Mercenaries and a lot of manual testers, developer testing discipline and the tools used to achieve it continued to steadily improve—at the same time the phynancial crisis underscored the necessity of doing “more with less”—resulting in the resounding successes of the October 2009 Forgeability Fixit and the subsequent March 2010 Test Automation Platform (TAP) Fixit. The concept and implementation of TAP was a direct consequence of the Mercenary-inspired Revolution Fixit, and after TAP rolled out company-wide, the “I don’t have time to test” excuse was completely dead. Consequently, nearly every project team at Google was effectively executing at Test Certified Level Three, whether they were officially recognized on the TC Ladder or not, given that practically every team had at least one TAP integration build, and everyone was bound by the implicit cultural policy of keeping their own and everybody else’s builds passing—and this happened only about one quarter behind the schedule of the stated mission.
The Test Mercenaries may not have succeeded according to accepted, measurable criteria, but it seems highly unlikely to me that the sea change whereby automated developer testing became the expected cultural norm could’ve or would’ve happened without the focused intensity of the Test Mercenaries, augmenting the volunteer activism of the Testing Grouplet and Test Certified programs, filling the Testing on the Toilet pipeline with battle-proven material, pointing the Engineering Productivity organization towards the Test Certified program, helping to shape and spread Testing Technology and Build Tools innovations, and setting the precedent for the fully-engaged Software Engineer in Test role. The immeasurable catastrophes avoided and revenue saved made for a priceless experience.
Footnotes

1. The “TC Ladder” was the published roster of teams participating in the Test Certified program and their corresponding TC levels.

2. The relevant bit from the Wikipedia description: “The fifth Labour of Hercules was to clean the Augean stables. This assignment was intended to be both humiliating (rather than impressive, as had the previous labours) and impossible, since the livestock were divinely healthy (immortal) and therefore produced an enormous quantity of dung. These stables had not been cleaned in over 30 years, and over 1,000 cattle lived there.” Unlike the stables, Google had been around less than ten years. Unlike Hercules, it took us more than a day.

3. Russ and Tracy actually had their own consulting company, Pentad Software Corporation, and were subcontracted via Industrial Logic. Both are now full-time Googlers; some would suggest that they are actually a single Googler, RussAndTracy.

5. Refactoring is making changes to the code without changing its function or behavior. This is often done to clean up the code, to make it easier to implement new features, or to make it easier to test. Refactoring a complex system or widely-used library/piece of infrastructure is much more difficult to perform quickly and confidently in the absence of good automated tests.

7. The “Given enough eyeballs, all bugs are shallow” philosophy, aka Linus’ Law, which is true up to a point; but adding (good) automated developer tests to the mix has only proven to make code reviews even more valuable, since the code is usually clearer, and the reviewer can largely trust that the author has written the code to pass the stated tests—and can possibly suggest further tests.

8. After my trip to Europe in 2006, when I was planning to join the Mercenaries but hadn’t yet, my partner-in-crime Ana Ulin and I had excitedly dreamed up a plan to have me stationed in Zürich, working with her (a Software Engineer in Test at the time) to build up the Mercenaries in Europe. Mark, despite being German, gently-yet-firmly shot that idea down, which I have to admit was the right decision; the team hadn’t yet formed, and making it multi-site from the get-go before it gained any practical experience was a bad idea. Plus, Zürich is a long way from Mountain View, reducing visibility, no matter how large an office or how important the projects there. With Fixits, however, I repeatedly adopted a non-Mountain View-centric strategy, a kind of flanking maneuver that involved extensive engagement from so-called “remote” offices, that worked very well.

9. For the non-Americans, this is a baseball reference; it’s also a double entendre. My Keeping It Legal days are squarely behind me. Anything I write that could possibly be interpreted in a prurient fashion, should be.

10. Hat tip to Seth Godin for the “tribe/permission” language here. I’m not a Seth Godin fanatic, and I think he gets a little full of his own ideas sometimes, but sometimes he does talk good, plain sense.

11. Guice is now maintained within Google by ex-Merc Christian Gruber.

12. Contractors were identified by the red background behind their names on their Google badges; the technical access-control difficulties they experienced as well as the implied sense of second-class status led some of them to refer to the “Red Badge of Shame” (in contrast to the Red Badge of Courage). Some of their gripes, especially the technical ones, were legitimate; sometimes I felt they played the victim card a bit. Understandable to some extent, though; class distinctions in general, regardless of setting, often breed resentment.

13. James Whittaker, who joined Google in 2009, nearly two years after Eng Prod’s involvement with Test Certified began, neglected to credit the Testing Grouplet explicitly as originating the Test Certified program in his book, How Google Tests Software, released at about the time of his return to Microsoft in early 2012. The Testing Grouplet, Testing on the Toilet and Fixits receive passing mention in the interview chapter with Mark Striebeck, Neal Norwitz, Tracy Bialik, and Russ Rufer; the Test Mercenaries are not mentioned in the book at all.

14. Michael Chastain, an extremely well-respected C++ expert within Google, once replied to a request for help on an email thread: “Sponge link or it didn’t happen.” That’s one of the highest forms of validation of Sponge’s impact, in my mind.

15. Just moved from the West Village to Brooklyn a few weeks ago; found a place with twice the space for half the rent.

16. David actually wanted it to be known as “Testing Grouplet: The New York Group”, or “TG:TNG”, as a hat tip to “Star Trek: The Next Generation”, but it didn’t seem to catch on as well.

17. The position wasn’t filled, nor was John hired, until after the Test Mercenaries disbanded.
Chronicles Of Depression 2.0: #363: Hollow
To review the point that I made in Chapter 7 of my book “Greenspan’s Bubbles” and in many columns:
In the past expansion, economic growth was almost entirely about real estate. Gross-domestic-product growth, excluding mortgage-equity extraction, was almost nonexistent. In addition, when you consider that 30% to 40% of all jobs were real-estate-oriented, it’s clear how hollow the economy is liable to be going forward.
Emphasis added by me.
Is there anyone out there reading this who can confirm that statistic?
This is perhaps the most important statistic I’ve ever seen cited.
It explains
a lot and ties into the next post I’ll be doing.
I need to give some background here.
One phrase I have heard over and over again in relation to various scenarios is this:
Progress is automatic.
It’s been used to describe the future as depicted by the 1939 World’s Fair.
It’s been used to describe various aspects of Darwinistic beliefs.
It’s been used as the cornerstone of some beliefs in America.
It has, in fact, been the foundation of advocates of pure reason.
But progress is not automatic.
Human history testifies to the fact that civilizations deemed advanced have collapsed. Jared Diamond’s book,
Collapse: How Societies Choose to Fail or Succeed, explores the factors that have wiped out past human societies.
The financial crisis we are facing —
and are still heading towards — is one of those things that has a high probability of bringing on such a societal collapse.
However, that collapse does not have its roots in this recent round of financial rapacity and outright fraud. The seed for the true roots was planted with a noxious idea called deindustrialization. The Wikipedia entry does not do the term justice (nor does the entry cite the seminal book on the topic!).
Deindustrialization falls under the foolish notion of “progress is automatic.”
That is, as a population becomes materially better off, it will also aspire to better jobs. Thus, at some point in the future, the population will become too educated to sustain a manufacturing base. Manufacturing is seen as labor “unworthy” of the educated.
There are several fallacies to that line of thinking. The most dangerous one is that it posits all of a nation’s citizens advancing educationally
at the same rate.
We only have to look around to see that’s not true.
An additional error in that Wikipedia entry is this:
Total industrial employment has been roughly constant at around 30 million people since the late 1970s (though there has been a steady decline since the all-time peak of 31.5 million in 2000).
Wait a minute. What about industrial employment
before 1970?
Before 1970, this nation made its own televisions, radios, and more. Look at the back of any electronic device today and you’ll find Made in China. (The same thing for toys … and much, much more.) When it comes to employment,
this is where economics devolves into “winners and losers.”
Free trade fundamentalists will argue that cheaper goods are beneficial to a nation. They ignore the fact that the cost of these goods comes at the expense of a nation’s workers losing their jobs —
and of potential future employees not having a job waiting for them. Of what benefit are cheaper goods if the nation’s population can’t afford to buy them?
A devastating ongoing consequence of deindustrialization is a growing population of the discontented and discouraged. Facing a bleak future, their allegiance to the society as a whole is weakened and can ultimately descend into a dark mirror image of the sociopathology at the top of the society — a sociopathology based not on the “luxury” of monetary greed, but based on the overwhelming need to
merely survive.
There are two factors today that disguise just how bad things actually are in America:
1) Safety net disbursals by governments (which still don’t prevent a persistent homeless population)
2) The shadow economy based on illegal drugs (which manifests itself in others ways; one being predation upon others)
If those monies — and “opportunities” — were subtracted from the American economy, what would be left as its primary engine of growth?
Would it be what the statistic above cites —
real estate?
If that’s true, then indeed, “Now we’re sitting on the biggest bomb man’s ever made.”
And the true dimensions of this bomb have suddenly
become greater than all past estimates.
See the next post for just how big.
In this issue: "Traces of a Distant Past": The paths our ancestors took out of Africa have been lost in time. However, hints of them have survived in the genetic landscapes of populations created along the way. "The Neuroscience of Dance": Recent brain-imaging studies reveal some of the complex neural choreography behind our ability to dance. "Hands-On Computing": Multi-touch screens could improve collaboration without a mouse or a keyboard. "No-Till: The Quiet Revolution": Because plowing degrades the land, farmers are increasingly turning to a more sustainable alternative. "New Jobs for Ancient Chaperones": With newly recognized roles in cancer and immunity, the heat shock proteins that normally protect cells against stress might become therapeutic allies.
To get started with our ModWindows Cost Calculator, just tell us the types of windows you’d like to replace, and how many you need of each kind. Next, enter your city and state and let the system work its magic!
You’ll get an instant estimate, broken down for both labor and materials. You can even adjust your results to see how different frame materials and window qualities affect your overall costs.
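The calculator's labor-and-materials breakdown can be pictured as a simple lookup-and-sum. The sketch below is purely illustrative: the rates, frame materials, and regional multiplier are hypothetical placeholders, not actual ModWindows pricing.

```python
# Hypothetical sketch of a window-replacement cost estimator like the one
# described above. All dollar figures are illustrative, not real quotes.

# Illustrative per-unit material costs, keyed by (frame material, quality).
MATERIAL_RATES = {
    ("vinyl", "standard"): 300,
    ("vinyl", "premium"): 450,
    ("wood", "standard"): 550,
    ("wood", "premium"): 800,
}

LABOR_RATE_PER_WINDOW = 150          # hypothetical flat labor cost per window
REGION_MULTIPLIERS = {"IL": 1.10}    # hypothetical regional labor adjustment


def estimate(windows, state):
    """Return (materials, labor, total) for a list of (material, quality, count)."""
    materials = sum(MATERIAL_RATES[(m, q)] * n for m, q, n in windows)
    labor = sum(n for _, _, n in windows) * LABOR_RATE_PER_WINDOW
    labor = round(labor * REGION_MULTIPLIERS.get(state, 1.0), 2)
    return materials, labor, materials + labor


# Five standard vinyl windows in Illinois:
print(estimate([("vinyl", "standard", 5)], "IL"))  # (1500, 825.0, 2325.0)
```

Adjusting the frame material or quality keys, as the real tool lets you do, just swaps which row of the rate table feeds the materials total.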
Save and share your estimate to email it to yourself or a friend, or click Connect With a Local Pro to get started on your replacement project today.

Window Replacement in Cicero, Illinois
Hello, winter chill! If you’ve lived in Cicero for a while, you know that the weather here can be intense. With some of the nation’s highest wind speeds, Illinois residents need to be prepared for whatever the elements throw at them–and one of the best ways to do that is to install a new energy-efficient window.
In Cicero, however, there are a couple of requirements you’ll need to meet to keep your new windows safe and up to code. Follow this guide to ensure that everything is above board, and read on for information about how to save, along with some tips for selecting your contractor.
Window Contractor Requirements in Cicero
The city doesn’t necessarily require you to hire a contractor to complete small repairs around your home. However, if your replacement includes major alterations to the surrounding walls, you may be instructed by the Building Department to use a licensed contractor who can verify that everything is constructed according to code.
In fact, unless you are experienced with window repair, you may want to hand over the project entirely to a licensed professional–that way you’ll know that it’s up to spec. When you’re first visiting the town Building Department, you can verify that the contractor you’re considering has no outstanding debts to the city–that’s a good sign that they are reliable and can be trusted to complete your project in a timely and safe fashion. You can also check the contractor’s background by asking to see a few references and following up on them.
Applying for Window Permits in Cicero
In Cicero, in order to start your window replacement, you’ll need to have successfully filed for a building permit. Additionally, if you’ll be hiring a contractor, you’ll need to have that person lined up before you apply–their name and contact information needs to appear on the application.
If you’ll be completing extensive work, you may be asked to provide plans or drawings as well showing what you’ll be doing, as well. A city official should let you know what kinds of additional documentation are necessary when you visit the office to obtain the application. If you want to get a jump start on things, however, you can view the permit online here. The Building Department’s physical location is 4949 W Cermak Rd, Cicero, IL 60804.
Window Requirements in Cicero
Like most towns in the state, Cicero abides by Illinois’s 2012 Energy Code, which sets forth certain restrictions on which products can be used around homes in order to keep energy use reasonable. One of the most important requirements, in terms of your new windows, is the limit on U-factor, or the measure of how well they insulate your home. In Cicero, the U-factor for windows needs to be no greater than 0.32–the lower the rating, the better the insulation.
When you shop for windows, you should see the window’s U-factor listed along with its other specs on the labeling or in the product brochure. But if you want your windows to be as efficient as possible, there’s another part of the label you’ll want to pay attention to: the ENERGY STAR certification. ENERGY STAR is a program run by the EPA that rates products according to their energy use, so when you see that sticker, you know you’ll be saving energy.
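To see why a lower U-factor matters, note that steady-state heat loss through a window is roughly Q = U × A × ΔT (U-factor times window area times the indoor/outdoor temperature difference). The comparison below is a rough sketch with illustrative numbers; the window size and temperatures are assumptions, not part of the code requirement.

```python
# Rough comparison of conductive heat loss through a window at two U-factors.
# Q = U * A * dT: Q in BTU/hr, U in BTU/(hr*ft^2*F), A in ft^2, dT in F.
# The window area and temperature difference below are illustrative only.

def heat_loss_btu_per_hr(u_factor, area_sqft, delta_t_f):
    """Steady-state heat loss through the window assembly."""
    return u_factor * area_sqft * delta_t_f

AREA = 12.0      # roughly a 3 ft x 4 ft window
DELTA_T = 50.0   # e.g. 70 F indoors vs. 20 F outside on a cold Cicero day

old_leaky = heat_loss_btu_per_hr(1.00, AREA, DELTA_T)       # ~600 BTU/hr
code_compliant = heat_loss_btu_per_hr(0.32, AREA, DELTA_T)  # ~192 BTU/hr

print(f"{old_leaky:.0f} vs {code_compliant:.0f} BTU/hr")
```

Under these assumptions, meeting the 0.32 cap cuts the loss through that one window to about a third of what an old U-1.0 single-pane unit would leak, which is the whole point of the state's limit.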
Window Inspections in Cicero
After the work has been completed, it’s time to get it inspected! Most permitted work requires a visit from one of the city’s inspection officials. You can schedule an appointment at the Building Department. On the day of inspection, make sure to have the permit and any related documents displayed on-site for the inspector to review.
Insulation and Window Care in Cicero
In Illinois, insulation isn’t just a good idea–it’s a requirement! The state’s energy code dictates that window frames meet certain insulative measurements as well. That means an R-value rating of at least 20.
You can stretch your heating budget further by taking some additional insulation measures as well. First, you’ll want to make sure your window was properly caulked when it was installed. It’s easy to overlook, but it makes a big difference in terms of heat leaks. The caulk should be applied in a neat bead that makes a smooth barrier between the glass and the wood. Additionally, weatherstripping should be added to the moving parts–that’ll keep your home toasty warm.
Rebates and Incentives in Cicero
The city of Cicero does not offer any rebates or incentives for energy-efficient windows; however, that doesn’t mean that you can’t still reap some savings on them. If you purchase ENERGY STAR-labeled products, you can claim a credit on your federal taxes. Just fill out Form 5695 with the rest of your paperwork–you could earn back up to 10 percent of the cost of the windows, up to $200.
World-class research facilities: “From bench to bedside”
Today, Mr. Normand Rinfret, Director General and CEO of the McGill University Health Centre (MUHC), and Dr. Vassilios Papadopoulos, Executive Director and Chief Scientific Officer of the Research Institute of the MUHC (RI-MUHC), are proud to inaugurate the RI-MUHC's new research facilities at the Glen site, with Mr. Yves Bolduc, Quebec Minister of Higher Education, Research and Science.
“We are proud to inaugurate the Research Institute today, as this represents the first step in the redeployment of MUHC 2015 at the Glen site,” stated Dr. Papadopoulos. “Biomedical and research facilities, combined with renovated labs and redesigned care units at the Glen and the Montreal General Hospital, will redefine how our researchers and students conduct cutting-edge research, with the ultimate goal of advancing 21st-century medicine.”
Today, hundreds of healthcare professionals, members of the research community and industry partners toured the ultramodern facilities, where RI-MUHC researchers, students and staff will be moving at the end of the month.
“The dream of the RI-MUHC at the Glen site has become a reality, and this move represents a historic transformation for Montrealers and Quebecers,” stated Mr. Rinfret. “This new home will be a platform for an integrated approach to research, clinical care and education. Our researchers will continue to foster the RI-MUHC’s world-class reputation here and around the globe.”
The RI-MUHC at the Glen was designed to allow researchers and clinicians to work closely together under the same roof. Pediatric and adult research activities will be combined so that scientists can study the onset and impact of diseases on individuals throughout their lifespan. The complexity of medical problems such as diabetes, cancer and respiratory diseases, amongst others, requires researchers and medical staff to collaborate to better understand these diseases and more easily develop new diagnostic tools, improved therapies, and more strategic approaches to population health.
Research activities will be divided into three pillars:
The Centre for Translational Biology (CTB) will be the hub for fundamental research. Scientists will work in open laboratory units with state-of-the-art equipment and computer systems to develop novel curative compounds. The McConnell Centre for Innovative Medicine (CIM), integrated within the hospital, will specialize in clinical trials and research to transform discoveries into new treatments. The Centre for Outcomes Research and Evaluation (CORE) will include specialists in epidemiological, statistical, economic and biopharmaceutical research who will evaluate the impact of new treatments, diets and environmental factors on health.
“The RI-MUHC's high-tech facilities at the Glen site along with those we are upgrading at the Montreal General Hospital will let us push the boundaries of medicine like never before. We will therefore remain at the forefront of research and excel in our mission to improve health outcomes,” concluded Dr. Papadopoulos.
The Redevelopment Project of the Research Institute of the MUHC required an investment of $210 million for construction and design costs, to which must be added $100 million for research equipment. The funds come from the Government of Quebec ($160 million), a $100 million grant from the Canada Foundation for Innovation (CFI) and a contribution of $50 million from donations to the MUHC foundations through the Best Care for Life campaign.
warc | 201704 |
Every day, on my walk to work through downtown Vancouver, I pass a poster for a road safety campaign. It says “Being hit while jaywalking only happens to other people…” As someone who originates from England, where jaywalking is normal practice on all but the busiest roads, it is something of which I need to take particular notice.
The internet equivalent of jaywalking might be something like peer-to-peer file sharing. If you download a file called “WorldOfWarcraftKeyCrack.exe”, do not be surprised if your anti-virus software detects that it is, in fact, an online gaming password stealer.
However, what does it mean when your anti-virus suddenly alerts on normally legitimate software, from a source you trust?
Golden Rule number 1: What is the nature of the beast? Always read the malware description!
When Sophos products detect malware, the alerts include a handy link to the malware description on our website. Use it! In particular note what type of infection is being reported:
If it is a file-infecting virus then it may indeed be infecting otherwise legitimate software.
W32/Induc-A is no exception to this rule.
As Sophos has already blogged, we have seen over 3000 files infected by the Induc virus.
Furthermore, in the last 24 hours there have been at least 11 cases where customers have submitted samples claiming that we are erroneously detecting legitimate software.
All of them have been genuine infections.
Let me underline this point: We have not had a single false positive on W32/Induc-A, nor are we ever likely to see one. If Sophos says you have a W32/Induc-A infection, we mean exactly what we say.
W32/Induc-A’s infection mechanism makes it even more likely to spread from supposedly legitimate sources.
As was already explained in Richard’s blog article, infected executables do not directly infect other executables. Instead they infect a library module (SysConst.dcu) in the Delphi Development environment. When a software house producing Delphi applications becomes infected in this way, every executable it compiles is infected with the virus.
Internal applications quickly spread the infection to all the company’s developers, while external applications are distributed to customers. Customers may include other Delphi programmers, and thus the virus spreads.
What should I do if I have a W32/Induc-A, W32/Induc-B, Mal/Induc-A or Mal/Induc-B infection?
If you are a customer who has received an application infected with W32/Induc-A or W32/Induc-B, please contact the supplier of the software. Inform them of the infection, and please ask them to contact either Sophos or the technical support of their anti-virus supplier as appropriate. When they have cleaned up their Delphi installation, they should then be able to supply you with clean versions of their software.
If you are a Delphi developer, or if you have Delphi installed and have possibly executed an infected application, then it is not sufficient to simply disinfect infected executables. You will also need to clean your Delphi development environment. The most important part of this procedure is to make sure your anti-virus software can detect infected SysConst.dcu units, and replace these with clean backups. Then recompile clean versions of your software to distribute to your customers.
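As a rough illustration of the hash-check idea, here is a sketch of a helper that flags SysConst.dcu units whose hash is not on a known-clean list. This is a hypothetical script, not a Sophos tool: the clean-hash list below is a placeholder you would populate from pristine Delphi installation media, and it is no substitute for running your anti-virus scanner as described above.

```python
# Sketch: flag Delphi SysConst.dcu units whose hash is not on a known-clean
# list. The hash below is a PLACEHOLDER -- obtain real hashes from pristine
# Delphi media. This complements, and does not replace, an anti-virus scan.
import hashlib
from pathlib import Path

KNOWN_CLEAN_SHA256 = {
    "0" * 64,  # placeholder digest of a pristine SysConst.dcu
}

def find_suspect_units(root):
    """Return paths of SysConst.dcu files whose hash is not known-clean."""
    suspects = []
    for path in Path(root).rglob("SysConst.dcu"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in KNOWN_CLEAN_SHA256:
            suspects.append(path)
    return suspects
```

Any flagged unit should be replaced from a clean backup before recompiling, as described above.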
Of course, you should probably warn your customers about the problem at the same time.
However, we would still like to see more samples of SysConst.dcu, SysConst.bak and SysConst.pas from any Delphi developers potentially affected by this virus, especially if you have customized versions of these units.
Sophos customers needing further assistance with W32/Induc-A, W32/Induc-B, Mal/Induc-A and Mal/Induc-B infections can always contact Sophos technical support.
warc | 201704 |
How Do We Know God?
My last posted article dealt with how we know God. There is so much to say on this topic that it has filled many, many volumes. I’ve been addressing the topic carefully in a brief book I’m writing on stages and transitions of spiritual growth, which you will hear more about in coming days… to be released in just over a month!
Today I want to add just a little to the last post and deal further with a fundamental issue, perhaps the fundamental issue about knowing God: Do we know God through things God chooses to say and demonstrate to us through selected messengers? Or do we know God through a vast array of messengers, not selected out individually by God and promoted by a religious community? Is this aided by our observation of the world – nature, people, spiritual phenomena and such?
These significantly different methods relate directly to the kind of God people believe they (or others before them) have discovered. Now some holding to the first, “special revelation” approach do leave room for some input from the second, “natural revelation” approach. But inevitably, they see the definitive and vital specifics coming from revelation to God’s chosen messengers.
The important thing to notice is this: By deciding that special revelation is the way we know the most important things about God and how to have relationship with God, one has decided a very basic thing about God, perhaps inadvertently, without realizing it.
That person now believes that God does some important intervening in the normal course of human affairs. And not only human affairs, but all natural processes of life and the universe.
What they now have basically ruled out is that God may be known equally in depth or accurately by anyone who doesn’t agree on their idea of who the special messengers of God are and what messages through them are revelations about God. (Of course, there are also matters of interpreting those messages since they are far from clear, regardless which set of scriptures we are speaking of, but that aside.)
So the issue of how we know God is thus immediately and closely tied to what kind of God exists to be known.
Is it a God who “supernaturally” (an important term) breaks into normal affairs, at least on occasion? Or is it a God who only operates within natural processes (although God may have loving, persuasive influence within them in subtle ways that operate continually and thus “naturally”)?
Now let me be quick to add that the latter kind of God, as seen by process theology, is not restricted from having created genuinely choosing (free will) beings, not only as humans but also perhaps as angels, “demons” or in other forms. In other words, the supposed choice set up by the rivalry of supernaturalistic religion (its still-dominant form) and naturalistic science (of pure materialism, its still-dominant form) is BOGUS. But that false either-or choice is what most people believe is their only choice…. Sad, unfortunate! It is something my writing, including part of my upcoming ebook, joins a relatively small number of other Progressive Christians in seeking to change. How about you? Have you felt restricted by this faulty either-or choice? What have you done or plan to do about it?
warc | 201704 |
OFFICE OF THE INSPECTOR GENERAL
SOCIAL SECURITY ADMINISTRATION

SINGLE AUDIT OF THE STATE OF COLORADO
FOR THE FISCAL YEAR ENDED JUNE 30, 2003

December 2004
A-77-05-00004
MANAGEMENT ADVISORY REPORT
Mission
We improve SSA programs and operations and protect them against fraud, waste, and abuse by conducting independent and objective audits, evaluations, and investigations. We provide timely, useful, and reliable information and advice to Administration officials, the Congress, and the public.
Authority
The Inspector General Act created independent audit and investigative units, called the Office of Inspector General (OIG). The mission of the OIG, as spelled out in the Act, is to:
Conduct and supervise independent and objective audits and investigations relating to agency programs and operations.
Promote economy, effectiveness, and efficiency within the agency.
Prevent and detect fraud, waste, and abuse in agency programs and operations.
Review and make recommendations regarding existing and proposed legislation and regulations relating to agency programs and operations.
Keep the agency head and the Congress fully and currently informed of problems in agency programs and operations.
To ensure objectivity, the IG Act empowers the IG with:
Independence to determine what reviews to perform.
Access to all information necessary for the reviews.
Authority to publish findings and recommendations based on the reviews.
Vision
By conducting independent and objective audits, investigations, and evaluations, we are agents of positive change striving for continuous improvement in the Social Security Administration's programs, operations, and management and in our own office.
MEMORANDUM

Date: December 7, 2004
To: Candace Skurnik, Director, Audit Management and Liaison Staff
From: Assistant Inspector General for Audit
Subject: Management Advisory Report: Single Audit of the State of Colorado for the Fiscal Year Ended June 30, 2003 (A-77-05-00004)
This report presents the Social Security Administration's (SSA) portion of the single audit of the State of Colorado for the Fiscal Year ended June 30, 2003. Our objective was to report internal control weaknesses, noncompliance issues, and unallowable costs identified in the single audit to SSA for resolution action.
The Colorado State Auditor performed the audit. Results of the desk review conducted by the Department of Health and Human Services (HHS) have not been received. We will notify you when the results are received if HHS determines the audit did not meet Federal requirements. In reporting the results of the single audit, we relied entirely on the internal control and compliance work performed by the Colorado State Auditor and the reviews performed by HHS.
For single audit purposes, the Office of Management and Budget assigns Federal programs a Catalog of Federal Domestic Assistance (CFDA) number. SSA's Disability Insurance (DI) and Supplemental Security Income (SSI) programs are identified by CFDA number 96. SSA is responsible for resolving single audit findings reported under this CFDA number.
The Colorado Disability Determination Services (DDS) performs disability determinations under SSA's DI and SSI programs in accordance with Federal regulations. The DDS is reimbursed for 100 percent of allowable costs. The Department of Human Services (DHS) is the Colorado DDS' parent agency.
The single audit reported that the State Treasurer did not evaluate the reasonableness of payment clearance patterns when it changed financial institutions. The Cash Management Improvement Act requires that payment clearance patterns be reviewed when a State changes financial institutions to prevent untimely draws of Federal funds and interest liabilities to the Federal government. The corrective action plan indicated that the State Treasurer will calculate new clearance patterns for payments issued by the State when data for a complete fiscal year are available (Attachment A, pages 1 through 3).
We recommend SSA verify that payment clearance patterns were reviewed by the State Treasurer for reasonableness.
The single audit also disclosed the following findings that may impact DDS operations although they were not specifically identified to SSA. I am bringing these matters to your attention as they represent potentially serious service delivery and financial control problems for the Agency.
Controls over the accounting function were weak, specifically, legal spending limits were circumvented, reconciliations were not adequate, expenditure information was not properly reported, and payroll documentation was not adequate (see Attachment B, pages 1 through 3).
Supporting exhibits for year end financial reports were inaccurate (see Attachment B, pages 4 through 7).
Supervisory controls over employee timesheets and payroll were not adequate (see Attachment B, pages 7 and 8).
Please send copies of the final Audit Clearance Document to Shannon Agee and Rona Rustigian. If you have questions, contact Shannon Agee at (816) 936-5590.
Steven L. Schaeffer
Overview of the Office of the Inspector General
The Office of the Inspector General (OIG) is comprised of our Office of Investigations (OI), Office of Audit (OA), Office of the Chief Counsel to the Inspector General (OCCIG), and Office of Executive Operations (OEO). To ensure compliance with policies and procedures, internal controls, and professional standards, we also have a comprehensive Professional Responsibility and Quality Assurance program.

Office of Audit

OA conducts and/or supervises financial and performance audits of the Social Security Administration's (SSA) programs and operations and makes recommendations to ensure program objectives are achieved effectively and efficiently. Financial audits assess whether SSA's financial statements fairly present SSA's financial position, results of operations, and cash flow. Performance audits review the economy, efficiency, and effectiveness of SSA's programs and operations. OA also conducts short-term management and program evaluations and projects on issues of concern to SSA, Congress, and the general public.
Office of Investigations
OI conducts and coordinates investigative activity related to fraud, waste, abuse, and mismanagement in SSA programs and operations. This includes wrongdoing by applicants, beneficiaries, contractors, third parties, or SSA employees performing their official duties. This office serves as OIG liaison to the Department of Justice on all matters relating to the investigations of SSA programs and personnel. OI also conducts joint investigations with other Federal, State, and local law enforcement agencies.
Office of the Chief Counsel to the Inspector General
OCCIG provides independent legal advice and counsel to the IG on various matters, including statutes, regulations, legislation, and policy directives. OCCIG also advises the IG on investigative procedures and techniques, as well as on legal implications and conclusions to be drawn from audit and investigative material. Finally, OCCIG administers the Civil Monetary Penalty program.

Office of Executive Operations

OEO supports OIG by providing information resource management and systems security. OEO also coordinates OIG's budget, procurement, telecommunications, facilities, and human resources. In addition, OEO is the focal point for OIG's strategic planning function and the development and implementation of performance measures required by the Government Performance and Results Act of 1993.
warc | 201704 |
Duration: 00:24:58; Aspect Ratio: 1.333:1; Hue: 268.642; Saturation: 0.217; Lightness: 0.286; Volume: 0.155; Cuts per Minute: 0.360

Summary: War is mostly about waiting. In strategy there is delay. In fear there is hesitation. In engagement there is exit. Languor, boredom, nullity: waiting around for orders, mobilization or attack. Capture and detention, setbacks and impediments, quagmires and fog: in all there is a tendency to confusion and deferral. Far from the politics of decision and the friend/enemy divide, waiting reminds us of all that is uncertain or undecidable in war. And if, by other means, war is politics, it signals the importance, perhaps the virtue, of patience in political life. Waiting, in other words, is a register of all that cannot be won or lost in war. Above all, it is a tonality of experience triggered by a sense of finitude, by the expectation that war will end. But what becomes of this expectation when war becomes permanent, perpetual or infinite? In a war without limits of time or place, what is the sense of waiting?
warc | 201704 |
Climate Change and Discounting the Future: A Guide for the Perplexed

David A. Weisbach
University of Chicago - Law School; Center for Robust Decisionmaking on Climate & Energy Policy (RDCEP)
Cass R. Sunstein
Harvard Law School; Harvard University - Harvard Kennedy School (HKS)
August 12, 2008
Reg-Markets Center Working Paper No. 08-19
Harvard Public Law Working Paper No. 08-20
Harvard Law School Program on Risk Regulation Research Paper No. 08-12
Abstract:
Some of the most important disagreements about how aggressively to respond to the threat of climate change turn on the choice of the discount rate. A high discount rate implies relatively modest and slow reductions; a low discount rate implies immediate and dramatic action. The debate between the two sides reflects a disagreement between the positivists, who argue for a market rate, and the ethicists, who urge that the positivist approach violates the duty of the present to the future. We argue that the positivists are largely right, and that the question of discounting should be separated from the question of the ethical duties of the present. Discounting is a means of taking account of opportunity costs, and a refusal to discount may well hurt, rather than help, future generations. Nonetheless, it is also possible that cost-benefit analysis with discounting will impose excessive harms on future generations. If so, the proper response is to make investments that will help those generations, not to refuse to discount. We also explore several questions on which the ethicists' legitimate objections require qualification of the positivists' arguments, justifying a low discount rate for climate change policy.
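As a back-of-the-envelope illustration of why the discount rate drives the disagreement described in the abstract, consider the present value of a fixed future climate damage. The rates and dollar figure below are illustrative assumptions of mine, not numbers taken from the paper:

```python
# Toy present-value calculation: the same future damage, discounted at a
# low ("ethicist"-style) and a high (market-style) rate. Figures are
# illustrative only.
def present_value(future_cost, rate, years):
    return future_cost / (1 + rate) ** years

damage = 1_000_000_000_000          # $1 trillion of damage, 100 years out
pv_low = present_value(damage, 0.014, 100)   # low discount rate
pv_high = present_value(damage, 0.055, 100)  # high (market) discount rate
# The low rate values avoiding that damage at roughly $250 billion today;
# the high rate values it at under $5 billion -- hence immediate, dramatic
# action versus modest, slow reductions.
```

The factor-of-fifty gap between the two present values is exactly the policy gap the abstract describes.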
Date posted: August 14, 2008; Last revised: October 19, 2014
warc | 201704 |
A guest blog by Joshua Weitz, School of Biology and Physics, Georgia Institute of Technology

Summary
This is a short, well sort-of-short, story of the making of our paper: “A neutral theory of genome evolution and the frequency distribution of genes,” recently published in BMC Genomics. I like the story-behind-the-paper concept because it helps to shed light on what really happens as papers move from ideas to completion. It's something we talk about in group meetings but it's nice to contribute an entry in this type of forum. I am also reminded in writing this blog entry just how long science can take, even when, at least in this case, it was relatively fast.

The pre-history
The story behind this paper began when my former PhD student, Andrey Kislyuk (who is now a Software Engineer at DNAnexus) approached me in October 2009 with a paper by Herve Tettelin and colleagues. He had read the paper in a class organized by Nicholas Bergman (now at NBACC). The Tettelin paper is a classic, and deservedly so. It unified discussions of gene variation between genomes of highly similar isolates by estimating the total size of the pan and core genome within multiple sequenced isolates of the pathogen Streptococcus agalactiae.
However, there was one issue that we felt could be improved: how does one extrapolate the number of genes in a population (the pan genome) and the number of genes that are found in all individuals in the population (the core genome) based on sample data alone? Species definitions notwithstanding, Andrey felt that estimates depended on details of the alignment process utilized to define when two genes were grouped together. Hence, he wanted to evaluate the sensitivity of core and pan genome predictions to changes in alignment rules. However, it became clear that something deeper was at stake. We teamed up with Bart Haegeman, who was on an extended visit in my group from his INRIA group in Montpellier, to evaluate whether it was even possible to quantitatively predict pan and core genome sizes. We concluded that pan and core genome size estimates were far more problematic than had been acknowledged. In fact, we concluded that they depended sensitively on estimating the number of rare genes and rare genomes, respectively. The basic idea can be encapsulated in this figure:
The top panels show gene frequency distributions for two synthetically generated species. Species A has a substantially smaller pan genome and a substantially larger core genome than does Species B. However, when one synthetically generates a sample set of dozens, even hundreds of genomes, then the rare genes and genomes that correspond to differences in pan and core genome size do not end up changing the sample rarefaction curves (seen at the bottom, where the green and blue symbols overlap). Hence, extrapolation to the community size will not necessarily be able to accurately estimate the size of the pan and core genome, nor even which is larger!
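For readers unfamiliar with rarefaction curves, here is a minimal sketch of the bookkeeping behind them. This is my own illustration (the function name and the set-of-gene-families representation are assumptions): real pipelines must first cluster genes into families by sequence alignment, which is precisely the step whose parameters the analysis above probes.

```python
# Sketch of pan/core rarefaction: pan-genome (union of gene families) and
# core-genome (intersection) sizes as sampled genomes are added in order.
# Each genome is represented as a set of gene-family labels.
def rarefaction(genomes):
    """Return (pan_sizes, core_sizes) after each genome is added."""
    pan, core = set(), None
    pan_sizes, core_sizes = [], []
    for genes in genomes:
        pan |= genes
        core = set(genes) if core is None else core & genes
        pan_sizes.append(len(pan))
        core_sizes.append(len(core))
    return pan_sizes, core_sizes
```

The point of the figure above is that two populations with very different true pan and core sizes can yield nearly identical sample curves, because rare genes and rare genomes seldom enter the sample.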
As an alternative, we proposed a metric we termed “genomic fluidity,” which captures the dissimilarity of genomes when comparing their gene composition.
The quantitative value of genomic fluidity of the population can be estimated robustly from the sample. Moreover, even if the quantitative value depends on gene alignment parameters, its relative order is robust. All of this work is described in our paper in BMC Genomics from 2011: Genomic fluidity: an integrative view of gene diversity within microbial populations.
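As a hedged paraphrase of the statistic (my own sketch; the 2011 paper gives the formal definition), genomic fluidity can be computed as the fraction of gene families found in only one genome of a pair, averaged over all pairs of sampled genomes:

```python
# Minimal sketch of genomic fluidity: for each pair of genomes, the
# fraction of gene families unique to one of the two, averaged over all
# pairs. Genomes are represented as sets of gene-family labels.
from itertools import combinations

def genomic_fluidity(genomes):
    ratios = [
        len(a ^ b) / (len(a) + len(b))  # unique families / total families
        for a, b in combinations(genomes, 2)
    ]
    return sum(ratios) / len(ratios)
```

Identical genomes give a fluidity of 0, and completely disjoint gene sets give a fluidity of 1, which is what makes the metric's relative ordering robust to alignment parameters.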
However, as we were midway through our genomic fluidity paper, it occurred to us that there was one key element of this story that merited further investigation. We had termed our metric “genomic fluidity” because it provided information on the degree to which genomes were “fluid,” i.e., comprised of different sets of genes. The notion of fluidity also implies a dynamic, i.e., a mechanism by which genes move. Hence, I came up with a very minimal proposal for a model that could explain differences in genomic fluidity. As it turns out, it can explain a lot more.

A null model: getting the basic concepts together
In Spring 2010, I began to explore a minimal, population-genetics style model which incorporated a key feature of genomic assays: that the gene composition of genomes differs substantially, even between taxonomically similar isolates. Hence, I thought it would be worthwhile to analyze a model in which the total number of individuals in the population was fixed at N, and each individual had exactly M genes. Bart and I started analyzing this together. My initial proposal was a very simple model that included three components: reproduction, mutation and gene transfer. In a reproduction step, a random individual would be selected, removed and then replaced with one of the remaining N-1 individuals. Hence, this is exactly analogous to a Moran step in a standard neutral model. At the time, what we termed mutation was actually representative of an uptake event, in which a random genome was selected, one of its genes was removed, and then replaced with a new gene, not found in any other of the genomes. Finally, we considered a gene transfer step in which two genomes would be selected at random, and one gene from a given genome would be copied over to the second genome, removing one of the previous genes. The model, with only birth-death (on left) and mutation (on right), which is what we eventually focused on for this first paper, can be depicted as follows:

[Figure: schematic of the birth-death (left) and mutation (right) steps]

Birth-death decreases genomic fluidity on average, because after a reproduction event the two genomes involved share exactly the same set of genes. Likewise, gene transfer (in the original formulation) also decreases genomic fluidity on average, but the decrease is smaller by a factor of 1/M, because only one gene is transferred. Finally, mutation increases genomic fluidity on average, because a mutation event occurring at a gene which had before occurred in more than one genome introduces a new singleton gene in the population, hence increasing dissimilarity. The model was simple, based on physical principles, was analytically tractable, at least for average quantities like genomic fluidity, and moreover it had the right tension. It considered a mechanism for fluidity to increase and two mechanisms for fluidity to decrease. Hence, we thought this might provide a basis for thinking about how relative rates of birth-death, transfer and uptake might be identified from fluidity. As it turns out, many combinations of such parameters lead to the same value of fluidity.
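The birth-death and mutation steps described above can be sketched as a toy simulation. This is my own illustration, not the authors' code, and the parameter values are arbitrary:

```python
# Toy simulation of the neutral model: N genomes of exactly M genes each,
# with Moran birth-death steps and "mutation" (uptake of a brand-new gene
# found nowhere else in the population).
import random

def step(population, mu, next_gene):
    """One event: with probability mu a mutation, otherwise birth-death."""
    if random.random() < mu:
        genome = random.choice(population)
        slot = random.randrange(len(genome))
        genome[slot] = next_gene          # replace a gene with a novel one
        return next_gene + 1
    dead = random.randrange(len(population))
    parent = random.choice([i for i in range(len(population)) if i != dead])
    population[dead] = list(population[parent])  # exact copy of the parent
    return next_gene

def simulate(N=20, M=50, mu=0.3, events=5000, seed=0):
    random.seed(seed)
    population = [list(range(M)) for _ in range(N)]  # identical start
    next_gene = M
    for _ in range(events):
        next_gene = step(population, mu, next_gene)
    return population
```

Running this with different mu values reproduces the tension in the text: mutation pushes genomes apart, while birth-death homogenizes them, and many (mu, birth-death) combinations yield the same average fluidity.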
This is common in models, and is often referred to as an identifiability problem. However, the model could predict other things, which made it much more interesting.

The making of the paper
The key moment when the basic model, described above, began to take shape as a paper occurred when we began to think about all the data that we were not including in our initial genomic fluidity analysis. Most prominently, we were not considering the frequency at which genes occurred amongst different genomes. In fact, gene frequency distributions had already attracted attention. A gene frequency distribution summarizes the number of genes that appear in exactly k genomes. The frequency with which a gene appears is generally thought to imply something about its function, e.g., “Comprising the pan-genome are the core complement of genes common to all members of a species and a dispensable or accessory genome that is present in at least one but not all members of a species” (Laing et al., BMC Bioinformatics 2011; emphasis is mine). But does one need to invoke selection, either implicitly or explicitly, to explain differences in gene frequency?
As it turns out, gene frequency distributions end up having a U-shape, such that many genes appear in 1 or a few genomes, many in all genomes (or nearly all), and relatively few occur at intermediate levels. We had extracted such gene frequency distributions from our limited dataset of ~100 genomes over 6 species. Here is what they look like:
And, when we began to think more about our model, we realized that the tension that led to different values of genomic fluidity also generated the right sort of tension corresponding to U-shaped gene frequency distributions. On the one-hand, mutations (e.g., uptake of new genes from the environment) would contribute to shifting the distribution to the left-hand-side of the U-shape. On the other hand, birth-death would contribute to shifting the distribution to the right-hand side of the U-shape. Gene transfer between genomes would also shift the distribution to the right. Hence, it seemed that for a given set of rates, it might be possible to generate reasonable fits to empirical data that would generate a U-shape. In doing so, that would mean that the U-shape was not nearly as informative as had been thought. In fact, the U-shape could be anticipated from a neutral model in which one need not invoke selection. This is an important point as it came back to haunt us in our first round of review.
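Tallying a gene frequency distribution from a set of genomes is straightforward; here is a small sketch of mine (function name and set representation are assumptions) that makes the U-shape discussion concrete:

```python
# Sketch of a gene frequency distribution G(k): the number of gene
# families present in exactly k of the n sampled genomes. U-shaped data
# have large G(1) (rare/accessory genes) and large G(n) (core genes).
from collections import Counter

def gene_frequency_distribution(genomes):
    occurrences = Counter()
    for genes in genomes:
        for g in set(genes):
            occurrences[g] += 1
    dist = Counter(occurrences.values())
    return [dist.get(k, 0) for k in range(1, len(genomes) + 1)]
```

The argument in the text is that a neutral model with the right balance of uptake (filling the left arm) and birth-death (filling the right arm) can reproduce such a distribution without invoking selection.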
So, let me be clear: I do think that genes matter to the fitness of an organism and that if you delete/replace certain genes you will find this can have mild to severe to lethal costs (and occasional benefits). However, our point in developing this model was to try and create a baseline null model, in the spirit of neutral theories of population genetics, that would be able to reproduce as much of the data with as few parameters as possible. Doing so would then help identify what features of gene compositional variation could be used as a means to identify the signatures of adaptation and selection. Perhaps this point does not even need to be stated, but obviously not everyone sees it the same way. In fact, Eugene Koonin has made a similar argument in his nice paper, Are there laws of adaptive evolution: “the null hypothesis is that any observed pattern is first assumed to be the result of non-selective, stochastic processes, and only once this assumption is falsified, should one start to explore adaptive scenarios.” I really like this quote, even if I don't always follow this rule (perhaps I should). It's just so tempting to explore adaptive scenarios first, but it doesn't make it right.
At that point, we began extending the model in a few directions. The major innovation was to formally map our model onto the infinitely many alleles model of population genetics, so that we could formally solve our model using the methods of coalescent theory for both cases of finite population sizes and for exponentially growing population sizes. Bart led the charge on the analytics and here's an example of the fits from the exponentially growing model (the x-axis is the number of genomes):
Trying to publish the paper
We tried to publish this paper in two outlets before finding its home in BMC Genomics. First, we submitted the article to PNAS using their new PNAS Plus format. We submitted the paper in June 2011 and were rejected with an invitation to resubmit in July 2011. One reviewer liked the paper, apparently a lot: “I very much like the assumption of neutrality, and I think this provocative idea deserves publication.” The same reviewer gave a number of useful and critical suggestions for improving the manuscript. Another reviewer had a very strong negative reaction to the paper. Here was the central concern: “I feel that the authors' conclusion that the processes shaping gene content in bacteria are primarily neutral is significantly false, and potentially confusing to readers who do not appreciate the lack of a good fit between predictions and data, and who do not realise that the U-shaped distributions observed would be expected under models where it is selection that determines gene number.” There was no disagreement over the method or the analysis. The disagreement was one of what our message was.
I still am not sure how this confusion arose, because throughout our first submission and our final published version, we were clear that the point of the manuscript was to show that the U-shape of gene frequency distributions provide less information than might have been thought/expected about selection. They are relatively easy to fit with a suite of null models. Again, Koonin's quote is very apt here, but at some basic level, we had an impasse over a philosophy of the type of science we were doing. Moreover, although it is clear that non-neutral processes are important, I would argue that it is also incorrect to presume that all genes are non-neutral. There's lots of evidence that many transferred genes have little to no effect on fitness. We revised the paper, including and solving alternative models with fixed and flexible core genomes, again showing that U-shapes are rather generic in this class of models. We argued our point, but the editor sided with the negative review, rejecting our paper in November after resubmission in September, with the same split amongst the reviewers.
Hence, we resubmitted the paper to Genome Biology, which rejected it at the editorial level after a few week delay without much of an explanation, and at that point, we decided to return to BMC Genomics, which we felt had been a good home for our first paper in this area and would likely make a good home for the follow-up.
A colleague once said that there should be an r-index, where r is the number of rejections a paper received before ultimate acceptance. He argued that r-indices of 0 were likely not good (something about if you don't fall, then you're not trying) and an r-index of 10 was probably not good either. I wonder what's right or wrong. But I'll take an r of 2 in this case, especially because I felt that the PNAS review process really helped to make the paper better even if it was ultimately rejected. And, by submitting to Genome Biology, we were able to move quickly to another journal in the same BMC consortium.

Upcoming plans
Bart Haegeman and I continue to work on this problem, from both the theory and bioinformatics sides. I find this problem incredibly fulfilling. It turns out that there are many features of the model that we still have not fully investigated. In addition, calculating gene frequency distributions involves a number of algorithmic challenges to scale up to large datasets. We are building a platform to help, to some extent, but are looking for collaborators who have specific algorithmic interests in these types of problems. We are also in discussions with biologists who want to utilize these types of analyses to solve particular problems, e.g., how the analysis of gene frequency distributions can be made more informative with respect to understanding the role of genes in evolution and the importance of genes to fitness. I realize there are more such models out there tackling other problems in quantitative population genomics (we cite many of them in our BMC Genomics paper), including some in the same area of understanding the core/pan genome and gene frequency distributions. I look forward to learning from and contributing to these studies. | 15,107 | 6,407 | 0.000157 |
warc | 201704 | By replicating the complex micron- and nanometer-scale photonic structures that help give butterfly wings their color, researchers have demonstrated a new technique that uses biotemplates for fabricating nanoscale structures that could serve as optical waveguides, optical splitters and other building blocks of photonic integrated circuits.
Using a low-temperature atomic layer deposition (ALD) process, materials scientists at the Georgia Institute of Technology produced aluminum oxide (alumina) replicas of wing scales from a Morpho peleides butterfly, a bright blue insect native to the rain forests of Central and South America. The artificial wing scales faithfully replicated the physical features and optical properties of the natural wing scales that served as templates.
“We can never come close to the richness of the structures that nature can make,” said Zhong Lin Wang, Regents’ Professor in the Georgia Tech School of Materials Science and Engineering. “We want to utilize biology as a template for making new material and new structures. This process gives us a new way to fabricate photonic structures such as waveguides.”
The work has been reported in the American Chemical Society journal Nano Letters.
To create their artificial structures, Wang and colleagues Xudong Wang and Jingyun Huang deposited uniform layers of alumina onto butterfly wing scales one Angstrom at a time using the ALD process. (Huang was a visiting scientist from Zhejiang University, China). They were able to precisely control the thickness of the coating with the number of deposition cycles to which each wing scale template was subjected.
After the deposition, the coated scales were heated to 800 degrees Celsius to crystallize the alumina – and burn off the original butterfly wing scale. The resulting polycrystalline alumina was stronger than the original amorphous material deposited with the ALD process.
The artificial butterfly wing scale is a three-dimensional structure that retains the features of the original. That includes hollow tubular structures that split off at regular intervals, providing the potential for use as optical waveguides and optical splitters – and even as microfluidic or microreactor devices.
“Owing to the excellent uniformity of the alumina film, both the large-scale arrangement of the wing scales and the nanometer-scale periodic structures are perfectly preserved after this vigorous template removal process,” the authors wrote. “The alumina replicas of the wing scales exhibit the same shape, orientation, and distribution as their ‘parent’ scales.”
Butterfly wing colors are produced by a combination of pigments and reflection from photonic structures. “If you examine the wing scale, you see all of the intricate micron-scale and nanometer-scale features that determine the optical properties,” Wang noted. “From a physical point of view, this is a very regular photonic structure with regular gaps that produce the bluish color.”
The artificial wing scales produced by the researchers also reflect bluish light, though the color is of slightly longer wavelength than that of the original butterfly. That’s because the chemical pigments that contribute to the original butterfly color are no longer present, and – Wang surmises – because the researchers had to dry the wing scales prior to deposition, which likely altered the size of their photonic structures.
Wang and his colleagues discovered that because the thickness of the alumina coating controlled the size and periodicity of the photonic structures, increasing the thickness shifted the reflected light toward the red portion of the spectrum. For instance, by increasing the coating thickness from 10 to 40 nanometers, the color reflected by the alumina wing scales shifted from the original blue to green, yellow, orange and eventually pink, Wang noted.
The complex nature of the structures would be impossible to create with any other process, he said. “This could provide a new way to make nanostructures that are replicated from biology,” he said. “It allows us to fabricate truly tubular, three-dimensional interconnected nanostructures in a one-step process.”
The atomic layer deposition process could potentially be used with other materials such as titanium oxide, and to replicate other biologically-inspired structures.
“As long as there is a void that the vapor phase can penetrate, an entire structure can be replicated using the ALD process,” Wang said. “Regardless of what the substrate is and what the three-dimensional shape is, you can control it to the Angstrom level.”
Next on the agenda may be the water strider, an insect that uses unique hydrophobic feet to skim gracefully across the surface of water. Wang would like to study the possibility of replicating the micron-scale structures of the insect’s feet, but he has found that obtaining samples may be difficult.
“I was trying to catch one of them, but they are very quick,” he admitted. “I almost fell into the water.”
The research was supported by the Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF) and the U.S. National Institutes of Health (NIH). The Day Butterfly Center at Callaway Gardens in Pine Mountain, Ga., provided the Morpho peleides butterfly specimen.
Source: Georgia Institute of Technology
Explore further: Intelligent synthetic materials that respond to external stimuli | 5,603 | 2,427 | 0.000426 |
warc | 201704 | Both in last week’s Boston bombings and last weekend’s earthquake in China’s Sichuan province, mobile-phone networks were quickly overwhelmed as people rushed to call family and friends. In both cases, users took to services that use bandwidth more efficiently, like text messaging, microblogging and instant-messaging apps.
Voice calls consume about eight times as much bandwidth as text messages. Moreover, they must pass through the network in real time. Text messages, on the other hand, are slivers of data that use relatively little bandwidth, and line up until a break in the network traffic lets them slip through. The same is true of microblogging services such as Twitter, Sina Weibo and Tencent Weibo, and instant messaging apps like WhatsApp or Tencent’s WeChat.
So it’s no surprise that, as the dust settled at the finish line of the marathon in Boston, carriers asked subscribers to avoid making calls and use text messages instead. Some used Twitter to broadcast to a wider range of people.
It was similar in Sichuan. Bloomberg reports that the government of Chengdu, the provincial capital, posted a message on Sina Weibo asking people to use WeChat or Weibo to communicate. Telecoms firms followed suit: China Unicom also posted on Sina Weibo asking users to reserve phone lines for emergency use only, and to use texting, microblogging or instant messaging apps instead (links in Chinese), while China Telecom did the same in a text message.
Their promotion of WeChat in particular could be seen as ironic. Egged on by China’s three biggest mobile operators, the government is now considering making WeChat’s parent company, Tencent, pay carriers a fee for riding on its service. The sore spot isn’t bandwidth but “signaling.” As long as the app is open on a mobile device, the device pings nearby cell towers every so often, taking up signaling capacity. As China Mobile, by far the largest operator, has called for Tencent to pay a form of “rent,” its complaints have focused mainly on WeChat’s constant use of signaling resources, which drives up operating costs.
The earthquake episode suggests that this gripe might not be totally off-base. Telecom expert Xiang Ligang told Sohu Tech News that users of IM apps like WeChat might overload signaling resources in the area around the earthquake (link in Chinese), and that users should temporarily switch to microblogging apps instead. | 2,485 | 1,207 | 0.000853 |
warc | 201704 | While folks in esnl’s own state of California think of El Niño as a bearer of exceptional rains, we often forget that the same Pacific Ocean currents that inundate the Golden State exert a contrary effect on Southern Asia and Africa.
So the same global weather pattern that brought drought relief to a parched California threatens millions of African children with starvation, chronic hunger, and all the ill effects wrought by malnutrition.
From the United Nations News Center:
One of the strongest El Niño events ever recorded has placed the lives of 26.5 million children at risk of malnutrition, water shortages and disease in ten countries in Eastern and Southern Africa, the United Nations Children’s Fund (UNICEF) has reported.
“Children face protection risks as families and communities move in search of work, food, water and grazing land for animals. Children are also finding it difficult to stay in school, due to hunger and/or lack of water,” UNICEF noted in a study on the Eastern and Southern Africa region.
UNICEF added that it found that more than one million children are in need of treatment for severe acute malnutrition. Moreover, water shortages remain a key concern, with many health facilities and schools in critical need of improved water supplies and sanitation facilities to enable the continuity of services.
El Niño is the term used to describe the warming of the central to eastern tropical Pacific that occurs, on average, every three to seven years. It raises sea surface temperatures and impacts weather systems around the globe so that some places receive more rain while others receive none at all, often in a reversal of their usual weather pattern.
In Southern Africa in particular, drought is making life even more precarious for children affected by HIV, according to the UNICEF study.
The UN children’s agency found that governments and partners have been responding since 2015, but the scale of the crisis has outstripped the coping capacities of communities and the resources of the governments in the region, putting decades of development gains at risk.
Urgent investment is still required because the crisis is likely to continue well into 2017, UNICEF said. It could also be further compounded by the coming La Niña, which would bring more erratic weather conditions.
In the first months of 2016, UNICEF said it has reached 155,000 children with treatment for severe acute malnutrition; 2.69 million people with clean water; 82,000 children with protection services; and 100,000 people with HIV education and services.
To provide a comprehensive emergency response, however, UNICEF still needs $127 million of its $226 million goal.
According to the UN Office for the Coordination of Humanitarian Affairs (OCHA), more than 60 million people are expected to be impacted by El Niño’s extreme weather. The humanitarian fallout in certain areas will include increased food insecurity due to low crop yields and rising prices; higher malnutrition rates; devastated livelihoods; and forced displacement. | 3,127 | 1,535 | 0.000666 |
warc | 201704 | Dating….no, it’s not a lonely-hearts club for archaeologists! This week we covered how archaeologists date sites and artifacts. From the use of stratigraphy and superimposition to provide relative dating, to C-14 and other techniques to assign more specific dates, we covered the basics of “pre-historic” and “historic” as it pertains to the artifacts of the Whispering Woods site. We also talked about dating systems (BCE/CE, BP, & BC/AD).
At Whispering Woods, most of the prehistoric (Native American) artifacts found date from the Middle to Late Woodland period, roughly 0 – 1650 CE. Anything historic is non-Native American and can be dated according to style or technology. For instance, our tin-glazed pottery dates from the late 1600’s – mid 1700’s; our musket ball dates to around 1860; and then there are plenty of “modern” items such as bottles and cans.
But what do all these dates mean in the broader context of human history and culture? What was going on in Ancient Rome when our native Whispering Woods people were making pottery and projectile points?
To illustrate this, our class mapped out a time line of “human history”, and more importantly, put that timeline to scale. One centimeter equaled ten years. So 0 – 2015CE was a little more than 6ft long. 5,000BCE – 0 was 15 feet long! Of course human culture goes back much farther – 10,000BCE is the emergence of agriculture – but we were running out of hallway. Each student was assigned a continent or a culture (Greece/Rome, Western Europe before 1600, Western Europe after 1600, the Americas, Asia, the Vikings [Tovah’s personal favorite], Mesopotamia & Egypt, and so on). Students had to bring in 20-30 dates from their assigned culture and plot them, along with a representational image, onto the timeline.
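As a quick sanity check of the scale arithmetic (the helper function below is hypothetical, and assumes 1 cm per 10 years, which is the scale the quoted lengths work out to):

```python
CM_PER_FOOT = 30.48

def timeline_length_feet(years, years_per_cm=10):
    """Length of a span of years on the hallway timeline, in feet."""
    return (years / years_per_cm) / CM_PER_FOOT

print(round(timeline_length_feet(2015), 1))  # 0 CE to 2015 CE: about 6.6 feet
print(round(timeline_length_feet(5000), 1))  # 5000 BCE to 0: about 16.4 feet
```

That matches the "little more than 6ft" and roughly fifteen feet described above.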
Our students quickly learned that 20-30 dates was a struggle – not to find, but to limit to. Also a struggle was the convergence of so many events in the last 300 years. So much happened in such a small space. Students left with a sense of how events in different cultures overlapped as well as the exponential activity as they approached the present.
Our final project was a 30ft long timeline that was both fun and enlightening to construct. Now, where to hang it?? | 2,349 | 1,241 | 0.000844 |
warc | 201704 | In many countries the working-age population now constitutes the highest share seen in decades. That will not be the case in the upcoming years. The European “peak workforce” that this blog has discussed earlier is not unique for Swedes, Germans, and Italians. In fact, in most of the world’s richest countries, the dependency ratio (the number of people younger than 15 and older than 64 per person in working age) has been shrinking for the past half century. More people have, relatively speaking, had to provide for fewer people. Until now. As the graph shows, the dependency ratios will now start to increase, and do so substantially over the coming five decades.
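The arithmetic behind the dependency ratio is straightforward; the population counts below are invented purely for illustration:

```python
def dependency_ratio(under_15, over_64, working_age):
    """People younger than 15 plus people older than 64, per working-age person."""
    return (under_15 + over_64) / working_age

# A hypothetical country of 10 million people:
# 1.8M children, 2.2M seniors, 6.0M of working age.
print(round(dependency_ratio(1.8e6, 2.2e6, 6.0e6), 2))  # 0.67
```

An aging population raises the numerator while shrinking the denominator, which is exactly the squeeze described here.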
These 42 countries are the member states of the G20 and/or the OECD (the EU is excluded from the G20 because it is not a country). Between 1960 and 2010, only three of these states (Japan, Sweden and Germany) experienced a growing dependency ratio. In economic-demographic terms those decades were thus on average favorable. In thirty-nine countries the working-age population grew faster than the number of children and seniors. Between 2010 and 2060, in sharp contrast, only two countries (India and South Africa) are expected to face declining dependency ratios. This is related to the fact that both India’s and South Africa’s populations are predicted to continue to grow in the coming decades. The remaining forty countries are likely to have to deal with severe demographic troubles.
This expected demographic pattern is worrisome for several reasons. While increasing longevity is inherently a good thing, it will increase the pressure on pension schemes and healthcare systems. An increasing dependency ratio means that unless employment is increased – through a higher statutory retirement age, earlier average entry into the labor market, or other measures – fewer will have to provide for more. Will this be the end of social security as we know it?
Another troubling outlook is debt. Greece, Italy, Portugal, Ireland, Spain, France, the United Kingdom, Canada, and the United States all piled up their debt while the dependency ratio in each country was falling, and population on average was growing. How that debt should be repaid now that fertility rates are slowing down and the dependency ratios are set to increase is a question that is as mind-boggling as it is alarming.
There is a demographic cliff ahead. Are we going to jump or at least try to climb down?
Simon Hedlin | 2,486 | 1,258 | 0.000809 |
warc | 201704 | For a couple of summers during college I worked at a summer camp aimed towards giving inner city children experiences in the outdoors. Many of our campers came to us through social workers. My summers at this camp brought a lot of joys and challenges. You never knew what a camp session might bring.
At the start of each camp session, we would collect the camper's bug sprays and sun screen and lock it up for safe keeping between uses. One week a camper brought their bug spray in a mason jar, covered with saran wrap. The camper didn't know what it was, just that their grandmother gave it to them. I would guess someone had some DDT left over from the 60's and missed the whole DDT ban in 1973.
We've come a long way in the ingredients put into our products, but we've still got a long way to go. When I'm at the store looking for bug spray I usually buy something with DEET, because even if I don't know exactly what DEET is, I know that it will work and save my husband from a panic attack (read the story here). But there's got to be better options, right? When looking up what DEET actually is, here is the gist of what I found.
DEET is generally safe when used in small amounts (less than 30% concentration) and only on skin (source). However, it is not recommended for kids - DEET should be used on children only in small amounts because of the risk of seizures; long exposure can lead to seizures, and ingestion can lead to severe reactions (source). Other articles I read confirmed these ideas. So there's got to be another option, right? We didn't find very many, so we made our own!
Our solution is safe to leave around your children and spray on infants. We've come up with an all natural replacement for your bug spray - and it works! My husband performed the first test on himself, spraying it on only one arm, and sitting in his brother's mosquito haven. He didn't last long without spraying his other arm (I opted out of being a part of this test). The results were great! Not only was he not bothered by mosquitoes, he actually smelled good (the beauty of essential oils).
Since my husband's test, I (and others) have also tried the product and can attest to its effectiveness. Besides how well it works, I love how it doesn't feel like I'm being overtaken by a poisonous gas (like when someone has sprayed you from head to toe and you have to escape the cloud for air).
We love people, we love the Earth, and we're all about giving people safer options and better products to use that benefit both. That's why our product comes with a 100% satisfaction guarantee. So check it out :)
Katie Veldkamp | 2,633 | 1,362 | 0.000738 |
warc | 201704 | There's no mystery, really, to the climate change deniers. Many who voted for Trump are older in age. They won't be around long enough to experience the worst effects of climate change, so they basically don't care.
They care about abstract debt, though. At least they say they do.
Not to change the subject but many climate deniers I talk to name our federal debt as the #1 issue for the wellbeing of our country, not the environment, not climate change, nor greenhouse gases. Perhaps because it's been drilled into their heads by TV, radio and other news sources for years.
There are no wealthy interests doing the same for climate change, at least not nearly to the same extent.
I read a story about a dog who chewed through an EVSE cable. The dog was fine. The cable was toast, though.
The J1772 standard is rather safe, because no power is applied until a signal is detected. Moreover GFCI outlets are mandatory for applications where an EVSE would be installed.
I worry far more about a fire starting in my gas water heater than any kind of electrocution from my EVSE.
When (if) that day comes, the market will look completely different than it does today.
EVs now are expensive because they are new. You can't find any on the used market older than 5 years.
You can't find _any_ cars on the used market less than 5 years old that don't cost thousands.
Even still, we bought a used 2012 Leaf for $8k. Not a lot of money. We aren't low income but we aren't rich, either.
I'm going to bet you've never driven a Volt. Bought mine for about $25k after incentives, which is a pretty common price for a new car, especially one as loaded with features as mine.
They really are quite nice to drive. Very quiet and smooth. Very low maintenance. Chevy probably has made a few cars in their 100+ years that could be considered a "pile of crap" but this is not one of them.
Also, I have a degree in math, and have made detailed calculations of my total cost of ownership. There are some assumptions baked in like future maintenance costs and electricity costs that can be difficult to predict, but at least electricity prices do not fluctuate nearly as wildly as gasoline does.
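For the curious, the core of that per-mile fuel comparison fits in a few lines. All the numbers below are illustrative assumptions, not my actual figures:

```python
def ev_cost_per_mile(kwh_per_mile=0.31, usd_per_kwh=0.13):
    # Electricity cost per mile for an EV (assumed Volt-like efficiency).
    return kwh_per_mile * usd_per_kwh

def gas_cost_per_mile(usd_per_gallon=3.00, mpg=35.0):
    # Fuel cost per mile for a comparable gasoline car.
    return usd_per_gallon / mpg

print(round(ev_cost_per_mile(), 3))   # 0.04
print(round(gas_cost_per_mile(), 3))  # 0.086
```

Maintenance, depreciation, and incentives matter too, but even this crude sketch shows why the electricity side of the ledger is the easier one to predict.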
Let me guess--your boss drives a Tesla?
Teslas have required frequent maintenance because TMC is in the process of figuring out how to manufacture cars. If you buy a battery electric vehicle from an established maker you won't have these problems. My Volt and LEAF are both virtually maintenance free.
Should have also mentioned--most electric vehicles will never need a battery replacement. H2 vehicles on the other hand must have their tanks replaced due to regulations.
Hydrogen power is a dead end. Nearly all of it is derived from fossil fuels, there are too few stations, it is difficult to store and expensive to compress. It will be a market failure.
Get used to the electric future, it is coming.
Try that with their servers. Their Linux driver support is extremely lagging.
Even today if you download new drivers for an MD array, they bundle Java 6. Which was end of life over 3 years ago and is incapable of interoperating with modern TLS implementations. If you care about securing your systems and use Dell drivers, disable as much as you can and live by the CLI.
Let's set some ground rules for debate, m'kay?
"Spending on social programs is highly correlated with keeping people in poverty."
Do you have any evidence for this claim? Studies? Data? Facts?
The information I have shows a strong correlation to the opposite: https://upload.wikimedia.org/wikipedia/commons/8/8d/The_Antipoverty_Effect_of_Government_Spending_Vector_Graph.svg
"...because we can print money..."
The government doesn't "print" money to spend; it issues currency. Printing presses and paper money exist these days to accommodate small transactions. When is the last time you bought a car with cash in hand? Or your employer paid you in cash?
I'm picking a nit, but it's important to be precise with the terminology. Or you lose credibility.
"You don't understand economics."
Umm... you don't know me, nor what I read, or studied, or what my credentials are. You're making a broad assumption based on a Slashdot comment.
In general, if I don't feel qualified to post in Slashdot I stay quiet. I don't need to spread any more misinformation than there already is.
My statement about public spending is a verifiable fact. There are no theoretical bounds to spending given a fiat currency in the post-Bretton Woods era. The gold standard is long gone.
"Is it any wonder the more we pay for welfare, the less likely people are to get off of welfare?"
You raise the example of welfare, which I did not. It is one possible type of social support among many. (Personally I favor the Job Guarantee--provide work to all those who cannot find work.)
To examine this issue in depth you need to look at the causes of poverty. If an individual is unemployed with no prospect for work, welfare can provide sustenance, but without employment they may never escape from poverty. But if you provide work, public or private, you have a path to independence. For a single parent raising children without support (too common in this day and age), child care and education may be the key. However these have to be accessible, if not by private means, then through social support.
They don't change major version numbers, ever. New features may be patched in as long as the risk is minimal. That's what they call "major". Maybe they should have worded it "noteworthy changes" instead.
Unless you're running a desktop, who cares? My CentOS servers don't run Wayland, X, or any kind of graphical desktop.
PayPal didn't back off, the PCI Council did. The PCI DSS standard previously offered an exemption for existing sites that could not easily deprecate TLS 1.0 that was to expire June 2016. Now that has been extended 12 months, and PayPal is following suit.
Oh I'm worried about debt. I'm worried about my auto loan, though that will be paid off soon enough. I'm more worried about my children's student loans and the long-term impact on their finances (postponing retirement savings, home buying etc.)
Did you mean the "public debt"? I don't worry about that, I worry about politicians who use it as an excuse to cut social programs and ensure those who live in poverty remain in poverty. The US is a sovereign nation with a fiat currency. They literally issue currency to cover federal spending, and match it by issuing bonds on the open market to create the illusion they are "borrowing" to cover their spending.
In other words, the risk of the public debt to our grandchildren is no greater than the risk on our generation due to spending from the era our grandparents were alive (including massive spending during WWII which fostered a generation of prosperity).
It's bad enough our politicians and their paid economists start lies about the debt. It's worse when responsible citizens spread the lies.
Regardless, if Earth is too damaged to be inhabitable in 500 years, I think humans will have far greater worries on their hands than political boundaries. Stop fretting over the debt and who is in office, and work instead on FIXING THE PROBLEM.
Thank you.
Yes, it can. But then you no longer have a ZEV. You still have to worry about NOx emissions.
The number of arguments is unimportant unless some of them are correct. -- Ralph Hartley | 7,670 | 3,721 | 0.000271 |
warc | 201704 | If you’re running a young health tech business, here’s a resource you’ll want to know about. The healthcare industry giant athenahealth—a provider of cloud-based electronic health record, billing, patient engagement, and care coordination services with a market cap of $4.4 billion—wants to help entrepreneurs with innovative ideas aimed at “driving connectivity and innovation across the continuum of care.”
The company, whose “More Disruption Please” events have convened over 2,500 health tech innovators and hackers since 2010, established a More Disruption Please Accelerator at its Watertown, Mass., headquarters in June. It is accepting applications on a rolling basis from startups that are committed to “open, interoperable, and disruptive technology.” Technologies must be provider-facing solutions and companies should have at least a beta version (cloud-based products preferred), pilot customers or active users, and a solid working team. Athenahealth says it is focused on high-potential early- or growth-stage companies that are committed to keeping at least half of their operation in the Boston area.
Smart Scheduling, the accelerator’s first portfolio company, gets mentorship from athenahealth’s in-house experts and partners, as well as access, through a developer portal, to the company’s APIs and exposure to its network of more than 55,000 health care providers. Smart Scheduling CEO Chris Moses calls those benefits “critical to the scalability and success” of his technology, which improves access to care by predicting doctors’ office no-shows and reducing scheduling errors.
Athenahealth says portfolio companies also get “substantial” seed funding in the form of a convertible note, free office space at the Watertown campus, introductions to its network of venture capitalists and angel investors, and direct feedback from a Physician Advisory Board and super users. During an 8-12 month residency, startups also have access to mentors who are experts in product strategy, development, marketing/PR, business development/sales, recruiting, culture, design, UX/UI, finance, government and regulatory issues.
The accelerator aims to bring “entrepreneurialism to health care through a tailored program that offers portfolio companies customized resources and programming to meet their individual needs,” says Kyle Armbrester, the company’s VP of business development. He says athenahealth “is deepening its commitment to creating a robust market for open health care technology—something doctors and patients demand to improve the quality of care.”
Learn more here about the More Disruption Please Accelerator eligibility criteria. | 2,792 | 1,377 | 0.000762 |
warc | 201704 | You’ve no doubt heard about those towns in Maine that have declared food sovereignty. Well, here’s something at least a little bit in the same direction happening right here in that second rate socialist country, Canada! (Believe it or not, that’s what our Prime Minister is reported to have called it.)
“The City of Richmond is poised to join a growing number of B.C. municipalities that oppose the cultivation of genetically modified crops and plants within their boundaries.
A resolution has been working its way through city hall since June 2010, when Arzeena Hamir of the Richmond Food Security Society and April Reeves of GE Free B.C. pitched councillors on proposed wording that would keep Richmond free of genetically engineered trees, plants and crops.
“We got a call a few days ago from city staff saying they are finally ready to write the report,” said Hamir. “It’s been lost in the legal department for nearly two years, but the resolution is expected to come to council in May.”
Richmond councillor Harold Steves said staff were struggling with the question of how to deal with several farmers in Richmond already growing GE corn.
Opponents say crops such as canola that are engineered to survive pesticide applications lead to excessive use of chemical weed controls. They also worry that engineered genetic material will mix with conventional and organic crops and that foods made with the products of genetically engineered soy and corn may generate unforeseen allergic reactions in consumers.
If Richmond council passes a resolution opposing genetically engineered crops it would join a growing patchwork of B.C. municipal governments to have taken the step. | 1,729 | 929 | 0.00111 |
warc | 201704 | Tue, Feb 2, 2016
“The experts agreed that a causal relationship between Zika infection during pregnancy and microcephaly is strongly suspected, though not yet scientifically proven," said Dr Margaret Chan.
The World Health Organization on Monday declared the Zika virus, which is linked to birth defects, a global public health emergency, even though the link has not yet been scientifically proven.
This move will generate funding for research to establish whether the virus which is transmitted by mosquitoes is responsible for microcephaly (babies born with small heads and a vast majority of underdeveloped brains).
“In assessing the level of threat, the 18 experts and advisers looked in particular at the strong association, in time and place, between infection with the Zika virus and a rise in detected cases of congenital malformations and neurological complications,” Dr Margaret Chan, the director general of the WHO said during a news conference in Geneva.
“The experts agreed that a causal relationship between Zika infection during pregnancy and microcephaly is strongly suspected, though not yet scientifically proven. All agreed on the urgent need to coordinate international efforts to investigate and understand this relationship better,” Dr Chan added.
The virus, which has spread to over 20 countries in Latin America, was first detected in 1947 among monkeys in Uganda’s Zika Forest during yellow fever research.
This is the fourth time that the international organization has declared a "public health emergency of international concern," following the H1N1 pandemic in 2009, the resurgence of polio in May 2014, and the Ebola outbreak in August 2014.
The disease, which the WHO has described as spreading "explosively" through the Americas, has mild symptoms, including skin rash and headache. Further, the WHO estimates that by the end of the year the virus will have infected up to 4 million people and reached most of the hemisphere.
The 18-member committee agreed that the "cluster of microcephaly cases and other neurological disorders" meets the conditions for a public health emergency of international concern. They said the situation calls for a coordinated international response to minimize the threat in affected countries and reduce the risk of further international spread.
"A coordinated international response is needed to improve surveillance, the detection of infections, congenital malformations, and neurological complications, to intensify the control of mosquito populations, and to expedite the development of diagnostic tests and vaccines to protect people at risk, especially during pregnancy," Chan said.
Further, the Committee found no public health justification for restrictions on travel or trade to prevent the spread of Zika virus.
They called for protective measures to control mosquito populations and the prevention of mosquito bites in at-risk individuals, especially pregnant women.
Kajuju Murori is an enthusiastic writer with a bias towards development stories that ignite positive change among individuals in the society.
| 3,326 | 1,532 | 0.000668 |
warc | 201704 | A key part of research into healthy aging has been finding ways to keep the brain healthy and active into old age. Most research in the past on this matter has tended to focus on the changes in brain function that occur much later in life. There is comparatively little research done on how brain function changes during the midlife period and even less on how these effects might compare to those observed later on. A research team from McGill University in Canada has taken this gap to heart and begun looking at the brain of middle-aged individuals. What they found is that one of the central ideas behind aging and memory may be using the wrong perspective.
The study consisted of 112 adults ranging from 19 to 76 years old. Participants were shown a series of faces and then asked to recall information about how those faces were positioned, where in the order a face might have appeared, and so on. A functional MRI was used to assess brain activity during this process.
The MRI results showed that young adults made more use of their visual cortex during the tasks, suggesting an increased focus on perceptual details. Middle-aged and older adults, however, had lower levels of visual cortex activity. Instead, their medial prefrontal cortex was used—the part of the brain that is known to be involved with personal history and introspection.
Although memory can certainly be impaired by dementia, the idea that aging impairs memory in general may not be the right way of looking at things. Though the study was small, its results suggest that aging does not so much cause a decline in memory as a change in what the brain considers “important” information. It is a matter of prioritizing different parts of an experience, in other words.
The research possibilities these findings open up still need to be explored, and the study itself may need replication to confirm results. However, there is some support for the ideas it presents. Mindfulness meditation is known to lead to better cognitive aging, a fact that aligns with the idea that learning to focus on external (instead of internal) information could help middle-aged and older adults with memory.
Source:
“Middle-age memory decline a matter of changing focus,” McGill University web site, July 12, 2016; https://www.mcgill.ca/newsroom/channels/news/middle-age-memory-decline-matter-changing-focus-261683, last accessed July 13, 2016. | 2,446 | 1,221 | 0.00083 |
warc | 201704 | Don't Breeze Past This Item in General Electric's Q3 Earnings
Investors were clearly quite happy with General Electric's third-quarter earnings, sending shares to their highest level since September 2008. Perhaps it was the growth of backlogs to a record $229 billion, or perhaps it was the 11% growth in profits for its industrial divisions.
Regardless of the exact cause, how did the company's wind power offerings affect earnings? Let's take a look.
A gust of good news
A total of 407 wind turbines were shipped for the quarter. This figure was 40% lower than the same period last year, but the company is still confident in wind's potential. During the earnings call, management recognized how wind is benefiting the division: "Most of the order strength really for Power & Water has been around our aeroderivatives units have been very strong, and our wind units have been incredibly strong. So in the third quarter we had 477 orders in the developed markets on wind. That is up 390 units, and that explains most of the strength around wind..." This mitigates the adverse effect that wind had on the division earlier in the year. The fact that the division did recognize a 9% increase in profits doesn't hurt either.
Some recent news releases substantiate management's belief in the potential profitability of wind. Earlier in the week, GE reported that it had completed the installation and begun the testing of 20 GE 2.5-103 wind turbines for a 50MW wind-energy project in the Moldavian region of Romania. In addition to the turbines, GE has committed to supporting the operation of the wind farm by signing a 10-year full-service agreement with its partner on the project, local utility GDF SUEZ Energy Romania.
Demonstrating that it is dedicated to its wind turbine customers, GE revealed a novel way to increase a wind farm's output (on its 1.5-77 turbines) by 5%, which in turn results in a 20% increase in profit per turbine. PowerUp is a customized software-enabled platform that takes into account environmental conditions and adjusts turbine performance accordingly. According to the company, "Wind farm operators now have the option to participate in a continuous PowerUp platform to further increase their output as new software and hardware upgrades are introduced."
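The claim that a 5% output gain yields a 20% profit gain per turbine is a classic operating-leverage effect: if the extra output from a software upgrade costs essentially nothing to produce, the entire revenue lift falls through to profit. A minimal sketch, using assumed per-turbine figures — the 25% baseline profit margin is a hypothetical value chosen to make the arithmetic work out, not a number from GE:

```python
# Hypothetical figures to illustrate operating leverage: a 5% revenue
# lift at zero marginal cost, against an assumed 25% profit margin.
revenue = 1_000_000.0          # assumed annual revenue per turbine ($)
costs = 750_000.0              # assumed costs, i.e. a 25% profit margin
profit_before = revenue - costs

uplift = 0.05                  # 5% output/revenue increase from the upgrade
profit_after = revenue * (1 + uplift) - costs  # no added cost assumed

profit_gain = profit_after / profit_before - 1
print(f"profit gain: {profit_gain:.0%}")   # → 20%
```

Under these assumptions the relationship is exact: a 5% revenue gain against a 25% margin is a 20% profit gain, and at thinner margins the leverage would be larger still.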
How strong is the wind blowing for the competition?
One of GE's main competitors in the wind turbine market is Siemens AG , which has recently struck some deals on the other side of the world from Romania. It will be providing about 150MW of wind turbines for the grand Renewable Energy Park in Ontario, Canada, with plans calling for Siemens to install 67 of its 2.3 MW wind turbines. Farther south in Peru, Siemens will also be supplying 11 wind turbines for a 32MW onshore project.
Like GE, Siemens is benefiting from an increase in orders. Nearly doubling year over year, the improvement in wind power order intakes was primarily due to the orders from several large offshore wind-farms in Europe, C.I.S., Africa, and the Middle East. However, profits for the first nine months of 2013 have been disappointing—only €126 million ($172.45 million), a 26% decrease from the same period in 2012. Profit margins also suffered, dropping from 4.7% for the first nine months of 2012 to 3.6% for the first nine months of 2013.
This company is on the right track
Perhaps it's the fact that the company doesn't make wind turbines, or maybe it's the fact that nearly 60% of its revenue is tied to the railroad industry. Whatever the reason, Trinity Industries is not closely identified with the wind power industry. However, the company fashions itself as the leading manufacturer of structural wind towers in North America. Although not a direct competitor, GE is also in the business of making wind towers; it purchased Wind Tower Systems LLC in 2011 so that it could build towers more cheaply.
The Energy Equipment Group, one of Trinity's five business segments and the one that includes the wind tower business, had a successful second quarter for 2013: revenues of $152.5 million compared to revenues of $130.7 million in the same quarter of 2012. Operating profit increased to $14.3 million compared to $4 million in the same quarter last year due to manufacturing challenges that negatively affected the 2012 results. Having received structural wind tower orders of $22 million in the quarter, the company's backlog increased to $643 million as of June 30, 2013.
To put things in context, second-quarter 2013 revenues for the company increased 7% to $1.1 billion, as compared to revenues of $995.5 million for the same quarter of 2012. Trinity also reported an operating profit of $183.4 million in the second quarter of 2013, a 20% increase compared to an operating profit of $152.5 million for the same quarter last year.
The Foolish conclusion
GE's most recent earnings report was well received among investors—shares were up over 3.5% on the day they reported. What went largely unreported, though, were the developments in the company's wind energy business. Although financial results for the quarter were disappointing, management seems confident that there is the potential for gains in the near future. Investors who are unconvinced that GE may soon succeed in the wind space may want to consider other industry leaders: Siemens AG and Trinity Industries.
The article Don't Breeze Past This Item in General Electric's Q3 Earnings originally appeared on Fool.com.
Scott Levine has no position in any stocks mentioned. The Motley Fool owns shares of General Electric Company. Try any of our Foolish newsletter services free for 30 days. We Fools may not all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy.
Copyright © 1995 - 2013 The Motley Fool, LLC. All rights reserved. | 6,523 | 3,040 | 0.000331 |
warc | 201704 | WASHINGTON -- Housing starts and permits fell in August, but upward revisions to the prior month's data suggested the housing market continued to gradually improve.
Groundbreaking declined 14.4 percent to a seasonally adjusted annual 956,000-unit pace, the Commerce Department said on Thursday. July's starts were revised to show a 1.12-million unit rate, the highest level since November 2007, instead of the previously reported 1.09-million unit rate.
Economists polled by Reuters had forecast starts slipping to a 1.04-million unit rate last month.
Housing is clawing back after suffering a setback following a spike in mortgage rates last year. It, however, remains constrained by a relatively high unemployment rate and stringent lending practices by financial institutions.
A survey Wednesday showed homebuilder sentiment hit its highest level in nearly nine years in September and builders reported a sharp pick-up in buyer traffic since early summer.
Groundbreaking for single-family homes, the largest part of the market, fell 2.4 percent in August to a 643,000-unit pace. That followed a hefty 11.1 percent increase in July.
Starts for the volatile multifamily homes segment tumbled 31.7 percent to a 313,000-unit rate in August.
Last month, permits fell 5.6 percent to a 998,000-unit pace. July's permits were revised slightly up to a 1.06-million unit rate. Economists had expected them to slip to a 1.05-million unit pace in August.
Permits for single-family homes fell 0.8 percent to a 626,000-unit pace in August. Permits in the U.S. South, where more than half of single-family construction occurs, hit their highest level since April 2008.
Permits for multifamily housing declined 12.7 percent to a 372,000-unit pace.
Does Your Home Live Up to the American Average?
Housing Starts Fall; Prior Month's Data Revised Higher
In 2013, the median lot size of a new sold single-family house was 8,596 square feet, or just under 0.2 acres. While that might not seem like a lot for you suburban homeowners, a regional breakdown shows that the small average size isn't due to urban inhabitants alone. The Northeast enjoys the largest average lot, at 13,052 square feet, while the less densely populated South and West lay claim to just 8,649 square feet and 6,796 square feet, respectively.
From a footprint of 1,650 square feet in 1978, the average American home has grown 50 percent, to 2,478 square feet. Yet tough times seem to be squeezing our expansionary attitude. Although new single-family homes sold in 2013 clocked in at a median 2,478 square feet, single-family homes completed in 2013 amounted to just 2,384 square feet. Homebuilder confidence has plummeted into pessimism in the last few months, hinting that the housing market's road to recovery might be rougher than expected.
While birth rates have held relatively steady for the past 40 years, everyone apparently needs more elbow room. The share of homes with four or more bedrooms has jumped from 27 percent in 1978 to 51 percent in 2013. And where would a bedroom be without a bathroom? While just 8 percent of 1978 homes had three or more baths, 37 percent of homes now fall in that category.
From 2008 to 2013, both the share of homes with four or more bedrooms and the share of homes with three or more bathrooms have jumped 10 percentage points, while median square footage is up 10.9 percent for the same period.
If there's one strong sign of new housing demand, it's home prices. After nose-diving during the Great Recession to a median sales price of just $216,700, home prices have been roaring back up. In 2013, the median sales price for a new single-family home was $268,900. But for those on the housing hunt, don't be discouraged. Home prices today still don't hold a candle to costs in 2006, according to the well-regarded Case-Shiller Home Price Index. In 2006, the index topped 200 before plummeting to less than 140, and current rates put the index just above 170.
It is America, after all. Our industrialized nation was built on the back of Henry Ford, and America is in no danger of breaking its automobile addiction. In 2013, a whopping 300,000 of the 429,000 new single-family homes sold included a two-car garage. And 98,000 new homes included a three-car garage -- the highest amount since 2007. Of all new homes built, only 10,000 failed to include a garage or carport.
American homebuyers are building bigger homes than ever before. But if there's one thing the recent recession has shown us, bigger isn't always better. Although 30 percent of Americans believe real estate is the best long-term investment, homeownership isn't for everyone. There are plenty of reasons to spend less or invest elsewhere -- and leave keeping up with the Joneses to Mr. and Mrs. Smith. | 4,812 | 2,296 | 0.000438 |
warc | 201704 | A solid infrastructure and a well-maintained building are the key to keeping residents glued to your property for a long time. Regularly maintaining the property and making continuous upgrades are the specific things that will keep current residents happy and attract new ones. From a fresh coat of paint to upgraded appliances,…
Hiring the best property management service in Kansas City is a proven way to introduce sustainability into your property operations. Making regular property upgrades and inspections helps property owners ward off unexpected maintenance costs. According to a recent study, new homebuyers are becoming more inclined to seek out or at least prefer environmentally friendly…
Generally, it takes a little time and money to attract and retain good tenants. Finding and building a long-term relationship with those loyal people who pay their bills on time, keep up to date with any maintenance they are responsible for, and don’t cause any problems, is the dream of every landlord. In order…
With the holiday season around the corner, it’s time for us to protect our homes and businesses from cold and wet weather conditions. The colder months can bring a series of potential risks to rental properties and could lead to costly repairs and maintenance. So, protecting properties against the perils of winter is…
Hiring a professional property management service is one of the biggest and toughest decisions you will make as a landlord. Property management companies are a huge asset to your business and add significant value to your investment. What Does a Property Manager Exactly Do? A property manager is responsible for everything that goes on related…
Proper maintenance and management of properties is an often neglected part of the industry, and this negligence may be due to the following factors:
– Lack of preventive maintenance
– Insufficient funds to maintain the building
– Lack of a building maintenance standard
– Non-availability of replacement parts and components
While considering these factors, people should… | 2,218 | 1,101 | 0.000937 |
warc | 201704 | Paced by a recovering market for nonresidential projects and expanding housing activity, billings at U.S. architecture firms increased 11 percent between 2002 and 2005 to reach $28.7 billion annually. The total construction value of projects that architecture firms directly designed approached $360 billion, accounting for almost three percent of overall U.S. Gross Domestic Product. These findings are from The American Institute of Architects (AIA) Business of Architecture: 2006 AIA Firm Survey, which is conducted every three years to examine issues related to business practices of AIA member-owned architecture firms. The study also revealed continued improvement in diversity in the profession and an increase in the number of “green” design projects.
“While the residential design category posted the strongest gains in share of firm activity during this period, the institutional market – led by the health care and education sectors – remains the largest source for architecture services,” said survey co-author, AIA Chief Economist Kermit Baker, PhD, Hon. AIA. “State and local governments were the leading architecture clients, followed closely by developers/construction companies. The most common project delivery method remains traditional design-bid-build, which accounts for nearly 60 percent of project activity at architecture firms.”
Top 5 sectors served by architects in 2005
Health care: 14.3%
Office: 11.7%
Education (K-12): 11.1%
Multifamily residential: 10.7%
Education (college/ university): 7.7%
(Percent of firm billings)
Diversity continues to increase within profession
Women currently comprise 26 percent of all architecture staff, up from 20 percent in 1999, and the percentage of minority architecture staff has risen from 9 to 16 percent over the same period.
Baker added, “Of particular note, women and minority architects have both made advances in leadership positions. Women principals and partners at firms have quadrupled from 4 percent in 1999 to 16 percent in 2005. Minority architects have also increased their share as principals and partners across the spectrum of firm sizes.”
Green architecture grows in popularity
Due to rising energy costs and growing concerns over the impact that construction activity has on the environment, there has been a rise in the use of sustainable (“green”) design principles. In 2005, just over one-third of firms with nonresidential projects and a quarter of firms designing residential projects characterized some of their projects as green.
Percent of firms with green projects:
Nonresidential construction: 34%
Residential construction: 25%
Residential remodeling: 22%
Additional details are available in the survey relating to fees and profitability, range of services offered, international work, marketing practices, IT expenditures, liability insurance, and continuing education at U.S. architecture firms. The survey is available at no charge to AIA members and can be ordered by calling Information Central at 800-242-3837, option 1.
About The Business of Architecture: The 2006 AIA Firm Survey
The survey was researched and compiled by the AIA department of market research. The survey data were weighted to reflect the population proportions of AIA member-owned firms in terms of number of firms in each of six size categories, as well as their geographic distribution in terms of the nine census regions. | 3,466 | 1,665 | 0.000615 |
warc | 201704 |
By Mike Baer
Representatives from payroll service providers that deposit taxes on behalf of thousands of clients asked Internal Revenue Service officials Sept. 1 about steps to prevent the recurrence of erroneous tax deposit penalty notices that were sent to clients after at least two federal tax holidays.
John Myett, director of government affairs at ADP, LLC, told IRS officials in a teleconference Sept. 1 that the erroneous penalty notices related to changed holiday deadlines were a troubling development. The notices were sent to affected clients after Monday holidays this year for Emancipation Day on April 18 and Memorial Day on May 30.
A notice saying that a programming error resulted in IRS systems not recognizing changes in deposit due dates created by Memorial Day was issued Aug. 4 by the IRS. Notices sent to employers because of the error were rescinded and corrected, the agency said. A similar statement was issued after erroneous penalty notices were sent following the Emancipation Day holiday.
Myett was concerned about the changed deposit due dates for Labor Day on Sept. 5 and suggested that the IRS issue a statement explaining that “actions are being taken” to prevent future errors.
The IRS penalty department reviewed the issue for the remaining holidays this year and believes the problem was resolved, said Scott Mezistrano of the agency's industry engagement and strategy group. He noted that the error did not occur after this year's Independence Day holiday and that he had no information as to the total number of employers affected by the erroneous notices.
Another payroll service provider representative said that after the erroneous penalty notices, clients calling IRS for explanations and help were told that their service providers had made late payments. The IRS explanation was incorrect, the representative said.
A communication of the issue was sent to IRS call-center employees, said Mezistrano, who apologized that misinformation about the problem could possibly taint the credibility of the service providers with clients that rely on payroll tax deposit services.
Myett asked the IRS for an explanation of steps taken to resolve the issue.
For the Emancipation Day error, IRS said that employers receiving a CP161 notice with a failure-to-deposit penalty as a result were to receive follow-up correspondence indicating they do not owe a penalty. Those receiving a CP276B notice regarding an incorrect deposit that said IRS waived the penalty would not receive any follow-up correspondence.
For the Memorial Day error, IRS said those who received a CP161 notice would receive a new notice of adjustment CP210/220 stating the account was corrected and they do not owe a penalty. Employers receiving a CP276B notice that said the IRS waived the penalty would not receive any follow-up correspondence.
To contact the reporter on this story: Mike Baer at mbaer@bna.com
To contact the editor responsible for this story: Michael Trimarchi at mtrimarchi@bna.com
Copyright © 2016 The Bureau of National Affairs, Inc. All Rights Reserved.
| 5,423 | 2,304 | 0.00044 |
warc | 201704 | In May 2010, I wrote an article for this magazine about Vermont Yankee, the troubled nuclear power plant near that state’s southern border. Vermont’s Senate had just voted to block its license (the Nuclear Regulatory Commission renewed it anyway), and I wondered what this meant for New England. For good or ill, our region seemed to be resolutely ignoring the “nuclear renaissance” sweeping America.
Remember those heady days? Just three years ago, as citizens watched the Iraq War slog on and Deepwater Horizon spew oil into the Gulf of Mexico, public support for home-grown nuclear power climbed to a record high. In his 2010 State of the Union address, President Obama pledged to build a new generation of nuclear power plants, and later announced $8.33 billion in loan guarantees for two reactors in Georgia. The future of nuclear power in America seemed bathed in a golden glow.
A lot can change in three years, eh?
Now Vermont seems more bellwether than backwater. In August, Entergy Corp., which owns Vermont Yankee, said it will close the plant in 2014, making it the fifth American reactor whose retirement has been announced over the past year. Granted, a total of four nuclear reactors are under construction in Georgia and South Carolina, and Obama, in his June 2013 Climate Action Plan, still offered support for emerging nuclear technologies. But enthusiasm from the White House (and other quarters) seems dampened: I didn’t hear any ringing calls for more nuclear power in this year’s State of the Union address. In much of the United States, the nuclear renaissance seems to be falling victim to the three F’s: fracking, funding, and, to a much lesser extent, Fukushima.
Let’s talk about Fukushima first. When Japan’s Fukushima Daiichi nuclear plant melted down in March 2011, critics of Vermont Yankee quickly pointed out similarities between the plants. Both were about the same age and used Mark 1 boiling water reactors. Furthermore, the Vermont plant had been dogged by a couple of problems — a partial cooling tower collapse in 2007, a radioactive tritium leak in 2010 — that lowered citizens’ confidence. Vermont-based environmentalist Bill McKibben told me his state’s plant was “a Fukushima in waiting.”
Fukushima galvanized the already strong antinuclear movement in Vermont, creating an increasingly unhappy political environment. While this alone didn’t kill Vermont Yankee, it certainly didn’t help. But interestingly, Fukushima had little effect on nuclear enthusiasm in the rest of the country. In March 2011, just before the Fukushima disaster, a Gallup Poll indicated that 57 percent of Americans favored nuclear power. A year later, after Fukushima, that percentage was exactly the same.
So if fear isn’t killing the nuclear renaissance, what is? It’s the other F’s: fracking and funding. Fracking, properly known as hydraulic fracturing, is when water, sand, and chemicals are pumped into a well to crack rocks and release natural gas. Fracking, while controversial, has helped to dramatically lower the cost of natural gas over the past five years. Meanwhile, the slower economy has lowered demand for electricity. Nuclear energy, while still generating cheap, reliable electricity (or, rather, cheap once the plant is paid off) has become both less competitive and less needed.
And the natural gas won’t stop flowing anytime soon. The US Energy Information Administration expects our natural gas production to increase 44 percent by 2040. Building a nuclear reactor costs billions of dollars, and investors don’t fork over that much cash without a good promise of future profit. And that’s where the third F comes in.
By funding, I really mean regulation and subsidies. (OK, it’s a stretch, but I wanted three F’s.) In regulated energy markets, like many Southern states have, plant investors know they will be able to sell nuclear-generated power at a competitive rate. In deregulated states, like those in New England, there is no such guarantee. On the contrary, says Jacopo Buongiorno, an associate professor of nuclear science and engineering at MIT, the market here has been “poisoned” by state and federal subsidies for renewables. “Utilities are forced to buy electricity generated by wind, even though it costs more,” says Buongiorno. “It seems like an unnatural interference in the market.”
Of course, one man’s poison is another’s honey. The closing of Vermont Yankee, McKibben told me, “will offer Vermonters a real chance to change up their energy mix. Hopefully we will unite behind wind and sun.”
It’s hard to foresee the future of nuclear power in New England or the rest of America. Nuclear power may be an awkward fit for our region, but it’s troubling to see so many eggs in the natural gas basket. According to the Energy Information Administration, natural gas fueled 52 percent of the electricity generated in New England in 2012, up from less than 30 percent in 2001. This gas glut can’t always fit through our pipelines, especially in winter, which leads to extreme spikes in our electricity prices, as happened in January and February 2013. As Buongiorno says, if the United States starts exporting more natural gas, or regulators crack down on fracking, nuclear’s prospects may change rapidly.
Who knows, in another three years, the nuclear renaissance may return. But not, it’s safe to say, in Vermont.
Barbara Moran is a freelance science writer in Brookline. Send comments to magazine@globe.com. | 5,676 | 2,707 | 0.000384 |
warc | 201704 | I’m going to do something totally selfish, but also totally benevolent at the same time…
I want to write a short book, about lead generation. But I don’t want to fit the additional writing time into my already cramped schedule.
And so I’m planning the book as a series of short essays, and will be writing them over the next couple weeks. What you’ll be getting will basically be a rough draft of the book, packed with all the juicy content the final version will contain.
But because you’re already on my email list, you won’t have to pay a penny for it.
The way I’m structuring this, it is one consistent narrative that builds piece-by-piece as you go through the book. Each essay will certainly hold value on its own, but they will be most useful if you read every essay, in order.
So… Make sure you read each issue I send you over the next couple weeks, as together they will form some of my best current thinking on lead generation.
(And hint: they will ALSO teach you all about how you can use the “free book” model to generate leads and sell more. There’s a reason folks like Brendon Burchard, Perry Marshall, Ryan Levesque, Dan Kennedy, and so many others — including a ton of “real” businesses outside the marketing space — are using these free book offers. THEY WORK! And I’m going to give you the skinny on how to make them work for you.)
Today’s essay is actually the appendix of the book…
In planning the book, I went back to my most trusted method of getting my ideas out of my head, and into the real world. And so, because I want to save some of the content for the actual essay that’s going to go into the book, here it is!
—
How To Get Your Best Ideas Out Of Your Head And Into The Real World
As someone who has made my living writing and creating content for over a decade now, I’ve had to become an expert at thinking.
More specifically, how to clarify my thinking, organize it, and present it in a way that’s easy for others to understand.
And through enough time and practice, it’s become mostly second nature. I’m able to regularly write 1,000+-word essays off the cuff, with little preparation required. And usually, they make a lot of sense. Occasionally, they even draw praise from readers — including some of my biggest heroes, who are among my “regulars.”
But when it comes time to sit down and create something more complex — a book, a speech, or a seminar, for example — I have to force myself to think.
I’ve tried lots of tools… Sometimes, a pen and a pad work well.
The more we move toward a paperless society, the more scientists are finding that there is actually benefit to writing on paper. It slows us down, forces us to think, and activates completely different areas of the brain than typing. And so, at least in some instances, I try to do at least some of my thinking on paper.
Other times, free typing on my computer works well.
The other day, I had an idea that I wanted to get out of my head, in a way that made at least some sense. So I picked a person with whom I wanted to share that idea, and I wrote them an email. The informality of most email communication allowed me to riff on the idea, without being too worried about grammar or other small details that could hold me up.
A friend who is a partner in a half-billion-dollar-per-year publishing company insists that any significant new business idea be turned into a 1 to 2 page memo within 24 hours. This forces clarification of thinking, and keeps a record of the idea from when it is most fresh.
When I have my direct response copywriter hat on, and am preparing to write a 10,000-word sales letter or script for a client, I will often start by trying to capture the idea in the first 1,000 or so words. This is called the lead, is critical in getting readership and eventual sales, and is enough to get a feel for whether the idea will work or not. It’s hard to replace this free writing as a way of testing ideas for long sales copy.
But when the content will be complex or have many parts, I go back to my most trusted thinking tool… Mind maps are the single best way to get complex thoughts out of your head…
FreeMind is a free piece of mind-mapping software, available for Windows, Mac, and Linux. (For info and to download, visit http://freemind.sourceforge.net/.) I’ve used it long enough now that I don’t remember when I discovered it.
FreeMind is nice because it’s simple and no-frills. It’s not particularly pretty, but once you learn it — an easy proposition — it allows you to “export” and organize your ideas very quickly. (And if you want to go back and “dress up” your mind maps, you can do that with it, too.)
Here’s a screen shot from FreeMind, showing you roughly what a section of the earliest mind map for this book looked like.
As you can see, a mind map is a lot like an outline. But it’s much more flexible.
It’s quick and easy to create each point on the outline (called “nodes” in mind map speak). A “sibling node” is a point at the same level as your current point — to create a sibling, simply hit enter and start typing. A “child node” is one level down — to create a child, hit tab and start typing. Get the hang of it, and suddenly you’re putting down a ton of thought in very little time.
And yet, our thoughts never come out quite right. And this is where mind mapping shines. With a word processor, you’d have to cut and paste, change indentation, and generally do a song and dance to reorganize ideas. With FreeMind, you simply click and drag, and can move nodes all around your mind map.
If you want to go back and add ideas? Well, that’s easy, too. I often start by getting all my high-level ideas out of my head. Then, I go back and start to create child nodes to fill in the details. As you can see in the included screen shot, I wrote general chapter titles, and then wrote in questions that needed to be answered in the chapter.
If you want to write a book or create any other big piece of content, start with a mind map…
It may not be easy at first, if you’re new to mind mapping. You may have to get over the hurdle of getting used to working with the software. But once you know how to do that, do this…
Open up an empty mind map. You’ll see an empty oval in the middle, assuming you’re using FreeMind. That’s called the “root node” — all other nodes branch off of it.
Here you type your title or the big topic area for your book. You can always edit later, so just put something here!
Then, start typing! Hit enter to create a new node (enter on the root node creates a child node — the root has no sibling nodes). (And if all of this is sounding like too much mind map jargon, just go by sight. I’m trying to be accurate with my language, but I don’t think about the node names and relationships when I’m brainstorming.)
Type in an idea, a topic, a question that needs to be answered — and hit enter to submit it. Then hit enter again to create another node, and type in another idea, topic, or question.
Just keep going. The idea initially is to get as many ideas “on paper” as possible.
When you start to slow on the main topics, go back to any you’d like to add notes to. Hit tab to create a child node, type in your thought, and hit enter to submit. Hitting enter again will create a sibling of this child node — a sub-point to the big topic.
If at any point one of your nodes has too many branches or child nodes, you can hide them while you’re not working on them. Double-click on a node with child nodes, and the child nodes will be hidden. Double-click again and they’ll show up again.
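For readers who think in code, the sibling/child/fold mechanics described above map neatly onto a simple tree structure. The sketch below is a toy model of that idea only, not FreeMind’s actual implementation; the class and method names are hypothetical, chosen to mirror the keystrokes (enter = sibling, tab = child, double-click = fold).

```python
# Toy model of the mind-map mechanics described above (NOT FreeMind's code):
# "tab" adds a child node, "enter" adds a sibling, double-click folds a branch.

class Node:
    def __init__(self, text, parent=None):
        self.text = text
        self.parent = parent
        self.children = []
        self.folded = False  # like double-clicking a node to hide its children

    def add_child(self, text):
        # "tab": create a node one level down from the current one
        child = Node(text, parent=self)
        self.children.append(child)
        return child

    def add_sibling(self, text):
        # "enter": create a node at the same level as the current one
        if self.parent is None:
            raise ValueError("the root node has no siblings")
        return self.parent.add_child(text)

    def toggle_fold(self):
        # double-click: hide or show this node's children
        self.folded = not self.folded

    def outline(self, depth=0):
        """Render the map as an indented outline, skipping folded branches."""
        lines = ["  " * depth + self.text]
        if not self.folded:
            for child in self.children:
                lines.extend(child.outline(depth + 1))
        return lines

root = Node("Lead Generation Book")            # the "root node"
ch1 = root.add_child("Chapter 1: What is a lead?")
ch1.add_child("Question: why do leads matter?")
ch2 = ch1.add_sibling("Chapter 2: Free book offers")

print("\n".join(root.outline()))  # prints the map as an indented outline
```

Reorganizing ideas in this model is just moving a node between `children` lists, which is exactly the click-and-drag advantage over cut-and-paste in a word processor.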
You can do this all in one session, or you can build your content through time…
For this book, I sat down in roughly 30 minutes and mapped out the chapters and content I plan to include. Because it was already tumbling around in my head, it was pretty quick and easy to do.
When I did a 3-day seminar on advanced direct response copywriting, I spent months fleshing out my content in a mind map, using years’ worth of notes that were also in mind map form.
When it finally came time to sit down and create PowerPoint slides and handouts from the content, I was able to copy, paste, and edit the content in a matter of days (which was a feat for hundreds of slides’ worth of content).
This makes it super-easy to actually sit down and create your content…
Depending on what you’re creating, your writing ability and capacity, and a number of other factors, you have a lot of methods for turning your mind map into a book or other piece of content.
For this book, I’m actually writing each essay from the mind map you saw in the screen shot above. By planning each chapter in advance, including the questions it needs to answer, I can sit down and quickly crank out each essay or chapter.
On a previous book, I created a mind map then had an interviewer turn it into questions to ask me. She interviewed me for about an hour, then transcribed the interview. This formed a great rough draft I could go back and edit, giving me the guts of a book within a couple hours of my work.
And with the seminar, I never turned the mind map into fleshed out text. Because the content was designed to be delivered spoken, from the front of the room, I simply used the outlines as prompts when giving my talks. I’ve used this same method in other speaking situations, too — including podcasts, teleseminars, webinars, speeches, and more.
Usually the trickiest part is getting your thinking organized on paper, and this is the fastest, most efficient way I can recommend…
For most of us (me included), our thoughts aren’t clearly elucidated when they come pouring out of our heads. They’re fractured and incomplete, lacking detail, and sometimes begging for better structure.
Simply sitting down to stare at a blank page only makes this problem worse.
So use FreeMind, or another piece of mind mapping software. Give it a try. Use it to get the ideas out of your head, and onto “paper.” Once they’re out, look for how you can improve your thinking or the flow. Reorganize. Add points. Fill in details where necessary.
This gives you the skeleton of your book or other piece of content, from which it’s easy to flesh out and add to the content until it’s a finished piece of work.
—
Yours for bigger breakthroughs,
Roy Furr
| 10,876 | 4,835 | 0.000218 |
warc | 201704 | Learn about this topic in these articles:

Process in canning (presterilization): Immediately after exhausting, the lids are placed on the cans and the cans are sealed. An airtight seal is achieved between the lid and the rim of the can using a thin layer of gasket or compound. The anaerobic conditions prevent the growth of oxygen-requiring microorganisms. In addition, many of the spores of anaerobic microorganisms are less resistant to heat and are easily destroyed during...

Production of glass: The sealing of glass to various materials (including glass itself) is keyed to the relationship between glass viscosity and temperature, the differing thermal-expansion characteristics of the components to be sealed, the wetting and adhesion characteristics of molten glass at sealing temperatures, and the chemical durability of glass during service. Hermeticity is often a desired result in...

Work of Bridgman: ...100,000 atmospheres and ultimately reached about 400,000 atmospheres. In this unexplored field, he had to invent much of the equipment himself. His most important invention was a special type of seal, in which the pressure in the gasket always exceeds that in the pressurized fluid, so that the closure is self-sealing; without this his work at very high pressures would not have been possible... | 1,318 | 710 | 0.001417 |
warc | 201704 | Selecting a Discount Fitness Club Network
If your needs assessment indicates that a discount fitness club network (DFCN) would be a useful service for your employees, the next step is to identify a fitness club network that suits your needs. Also, if you have few employee locations, or only one, you may consider an arrangement with one or more specific fitness clubs or chains instead of a large network.
Selecting a DFCN includes the following activities and considerations:
Decide what DFCN qualities are important to your organization. For example, if you want the DFCN to help you with the evaluation, be sure to include that in the selection criteria and agreement. Below are some sample questions to ask that will help determine your selection criteria:
What discount level is of interest to our employees?
Will the employer or employees pay for the service or memberships?
What geographic locations must be included in the network service area to meet the needs of our employees?
Should the service be made available only to regular (e.g., non-contractor, full-time) employees, or should we include other types of employees (e.g., contractors or fellows, retirees, or adult family members)?
What scope of services is required to meet the needs of our employees?
Is it important to require a special Web page and/or telephone number for our employees?
What scope of services is required to meet the needs of our program (e.g., administrative duties, customer service, usage data, promotional programs and materials)?
Will the DFCN provider collect utilization data and provide periodic reports?
Will the fitness club network provider require a minimum number of participating employees?
What quality standards must the network meet?
After you’ve identified key criteria for a DFCN, it is important to seek the assistance of your human resources, contract, or legal department to conduct a proper search for the right DFCN. If you determine that there are no internal procurement or ethical requirements, the following resources might help you identify appropriate DFCNs for consideration:
Internet search
Work site health promotion organizations or coalitions
Your employee benefits manager
Once you have identified all available DFCNs, apply your criteria identified above to select the appropriate DFCN for your agency/company within the procurement and ethic rules and regulations for your organization.
Next Steps
During the planning phase, you can begin developing a communication plan for promoting the launch of your DFCN and maintaining employee interest in it.
Page last reviewed: September 4, 2015. Page last updated: September 4, 2015. | 2,726 | 1,264 | 0.000796 |
warc | 201704 | Applying for a Merchant Account
If you want to start accepting credit cards, you will generally need to apply for a merchant services account. This often involves a credit check to determine not only if you qualify for a merchant account, but how much you can qualify to process each month in payments. If you have bad or no credit, there are still options available to you through a “high risk” merchant services provider like Charge.com. As you process payments each month, the amount of processing volume you have available will increase.
Underwriting
Merchant services providers and their partner banks take on a high level of risk when approving a new merchant. Chargebacks occur when customers dispute activity on their credit or debit cards and request that their credit card provider issue a refund. These refunds are debited against the merchant services provider and its partner banks, creating a lot of potential liability.
Here is a list of items you will generally need in order to open a merchant account:

Corporations and LLCs:
A legal entity – You will need to have a legal entity set up for your business. This can be an LLC, a C-Corp, and so on.
EIN number – An EIN (Employer Identification Number) is what the IRS uses to identify your company.
Business checking account – This will allow the merchant service company to deposit transactions directly into your business checking account.
Once you have these three things, you can then start the application process that your chosen merchant services provider requires. Sole proprietors do not need these things, and instead can sign up using their personal name, Social Security Number, and personal bank account. The next sections will talk about the underwriting process.
Sole Proprietors:
Sole proprietors do not need the three items listed above for Corporations and LLC’s and can open a merchant account under their personal name, can use a Social Security Number as their tax ID number, and can also use a personal checking account to receive deposits.
Things that can hurt your chances of getting approved for a merchant account
Due to the risk merchant service providers and their partnering banks assume when approving merchants for a merchant account, certain factors can make it hard to open a merchant account. Some of these are listed below:
Bad Credit – This is by far the most common reason for a denial; however, there are merchant service providers out there, like Charge.com, that work with high-risk applicants.
New Companies – A newly formed company is a much higher risk than an established company looking to change providers.
Type of Industry – The type of business you are in can also raise a red flag. For instance, industries where there has been a history of abuse by other people can make it hard to obtain a merchant account. Types of industries that have a hard time getting approval include:
Pornographic websites
Guns and ammunition
Online gambling
“Free + shipping” type offers
Multi-level marketing companies
Affiliate marketing companies
Group buying websites
Penny auction websites
Discount memberships
Gym memberships
Membership clubs
Recurring billing products
Nutritional supplements
Expected Transaction Volume – If you apply for a merchant account and indicate that you need a high level of processing volume, your chances of obtaining a merchant account will greatly diminish. The more volume you do, the more likely you are to face a lot of chargebacks, and the more risk you pose to the merchant services provider. Only apply for processing volume based on your expected volume; be conservative, and only ask for the minimum you need. If your volume picks up, you can always apply for more volume later. Build trust with your merchant services provider and you can eventually get to the point where you have virtually unlimited processing volume.
Average Ticket Size – On the typical application, the merchant services provider will want to know what your average ticket size will be. If you indicate a high average ticket price, the chances of approval will be greatly diminished. A larger transaction size means a higher cost if there is a chargeback, and more risk the bank and merchant services provider must take.

Summary
Getting a merchant account isn’t hard if you follow the advice we have offered and have already finished the things included in the “Getting Started” paragraph. Once you get a merchant account, make sure you stay on top of your customer service to minimize the number of chargebacks that you have. Keeping your merchant account in good standing is vital to the long-term success of your business. Failing to maintain good standing with any merchant service provider will make it much harder to obtain a merchant account in the future. Contact Charge.com today to speak to one of our merchant service account specialists!
warc | 201704 | Anyone who buys bottles of Formula 409 or Windex knows it costs a pretty penny for the most popular cleansers. In light of the green movement, companies such as Seventh Generation, BioKleen and Caldrea have developed ecological alternatives to chemically-harsh cleaners, but they cost even more! Check out a health food store and you’ll see that a bottle of natural all-purpose cleaner costs about $8.
Baking soda, salt, and white vinegar, three of the least expensive, most easily available household products, can save you big money on cleaning products. And since they are non-toxic, you’ll be protecting not just your wallet but also the health of your pets and family.
Each of these items may be purchased inexpensively in restaurant sizes (bulk) at warehouse stores. What are some of the ingenious ways these staples can be used to clean your household without breaking your budget? Let’s investigate.
Baking soda
With its odor-absorbing qualities, baking soda sprinkles (or an open box) may be used to deodorize:
trash cans
drains in the sink and bathtub
athletic footwear
cat litter boxes
pet beds
gym bags
carpet
closets
stuffed animals
refrigerators
dishwashers
Dust the bottom of trash cans or the litter box and leave it. An open box works well in the fridge; for other items, dusting them, letting them sit overnight, and then shaking or vacuuming the powder off works well.
The slight abrasiveness of baking soda also makes it an effective and safe cleanser for:
microwaves
the oven
floors
pots and pans
tiles
sinks
tubs
shower curtains
countertops
car exteriors
walls
In most cases, it is less messy and easier to work with when mixed with a bit of water to form a cleaning paste. But, it also may be used by sprinkling it on a damp cleaning cloth.
And, with its alkali properties, it will clean off corrosion and neutralize battery acid, extending the life of the batteries and engines in cars and mowers, as well as the battery compartments in your digital camera, video game player, and TV remote controls.
White vinegar
Vinegar, too, works as a deodorizer, but also as disinfectant. It kills most mold, bacteria and germs. After use, the smell of vinegar will linger, but will fade after a few hours. If it bothers you, you can also soak it with lemon or mint for several days and strain it before using. It’s handy to buy a spray bottle, fill it with half vinegar and half water. Keep it on hand for use on:
ketchup-, tomato-, or mustard-stained clothing (before washing)
grout (use a toothbrush to scrub it out)
windows
faucets
walls
stovetops
counters
the exterior of the fridge and washing machine
the shower stall
mini blinds
linoleum or no-wax vinyl floors
the toilet
sinks
stains in wool carpets
Other ways to take advantage of this economical cleanser include:
Boiling a cup of vinegar in the microwave to get rid of odors and loosen dried food splatters for removal with a sponge. Running it, diluted, through the coffee maker, dishwasher, drains and garbage disposal. (Make sure to run plain water through the coffee maker a couple of times after the cleaning cycle). Adding a cup to rinse water in the washer will remove excess detergent and hard water deposits leaving the clothing cleaner and softer. Removing price tags and decal residue by soaking overnight. Soak hardened paint brushes with vinegar for an hour then simmer, drain and rinse them.
Table Salt
Salt, too, is mildly abrasive and kills germs.
It’s great for removing wine or grape juice stains on carpets or clothing. Pour on fresh wine stains to absorb the liquid. Leave it to dry. Then brush it off or vacuum and repeatedly wash the spot with diluted white wine vinegar.
A good salt scrub helps remove grease from plates and sinks and coffee and tea stains from cups and teapots.
Mix some salt with a bit of water and let it soak overnight on burnt food on pots and pans. Likewise, it can be used to clean up oven drips and juice spills on the bottom of an oven. Sprinkle salt on the spills while they are still warm. Let the oven cool, and scrub them off.
In a pinch, it may be used as dishwashing liquid. Just add a tablespoon into water in your dishpan.
If the bottom of your iron becomes sticky, rub some salt on a damp cloth and iron it a few times. Then wipe the iron’s surface off with plain water.
You may also clean drains regularly to help prevent the buildup of grease, food particles, and bacteria. Just boil salt water and pour it down the kitchen drain.
…………………………………
House cleaning is rarely fun, but if you can make it cheap and healthier for your family, your pets, and yourself, you benefit in two ways. As always, the greater the savings in everyday costs, the more money we have left to pay down debt and save for the future. Economical cleaning is just one option for cutting costs. | 4,936 | 2,396 | 0.000428 |
warc | 201704 | Product Description
There is an urgent need to increase food production to maintain the dietary intake of the growing human population, in an environmentally sustainable manner. This demand for higher crop production also implies a higher demand for fixed nitrogen. Chemically produced nitrogenous fertilizers can provide this nitrogen requirement, but they are expensive to produce, in addition to being harmful to the environment. An inexpensive and eco-friendly alternative source of nitrogen is biological nitrogen fixation. There is wide scope to increase the yield and production of groundnut in these areas through improved efficiency of biological nitrogen fixation, which in turn will also reduce the requirement for nitrogenous fertilizer and ultimately lower the cost of production. The different areas of Saurashtra contain Rhizobium strains well adapted to varying soil and environmental conditions. The Rhizobium strains isolated from these areas will be very effective for evaluation as groundnut inoculants in different areas of Saurashtra, because the inoculated Rhizobium should survive or persist in critical numbers for a long time in the rhizosphere of the groundnut.
SKU: COC90042
Author: Bipin Malviya, Hirenkumar Sherathiya, and V. P. Chovatia
Language: English
Binding: Paperback
Number of Pages: 96
Publishing Year: 2013-09-19
ISBN: 9783659386060
Edition: 1st
Book Type: Microbiology (non-medical)
Country of Manufacture: India
Product Brand: LAP LAMBERT Academic Publishing
Product Packaging Info: Box
In The Box: 1 Piece
Product First Available On ClickOnCare.com: 2015-10-08 | 1,628 | 893 | 0.001123 |
warc | 201704 | Hollywood is not spending nearly enough to promote its movies online.
That is the main conclusion of a new study by MarketCast, a market research firm servicing the global entertainment industry.
The study, titled “The Internet and Moviegoing: A Benchmark Study on Influences and Opportunities,” was funded by Google and prepared in conjunction with the Motion Picture Association of America and Variety magazine. It was conducted in July and August, and its conclusions were based on a survey of approximately 2,100 moviegoers 13 to 49 years old, reached via telephone and the Internet.
Approximately one-third of respondents were classified as “moviegoing infoseekers” who used the Internet to research a movie before deciding to go see it. The remaining two-thirds were classified as “traditional” moviegoers, says David Fleck, industry marketing manager for media and entertainment at Google.
According to the study, television advertising, trailers, and word-of-mouth were the most important sources of “first awareness” of a movie by a potential customer. But in the time between first awareness and a movie ticket purchase, MarketCast found that the Internet plays a crucial role in helping people decide which movie to see.
Internet sources — search engines, general portals, entertainment Web sites, specific movie sites and ticketing sites — were the most influential media for more than one third of Infoseekers when they were deciding which movie to see. Less than half of Infoseekers cited television as the most influential medium in their purchase decisions. By comparison, among Traditional moviegoers, seven percent cited the Internet as the most influential medium and 70 percent cited television.
MarketCast also found that Infoseekers are four times as likely as Traditional moviegoers to get their first awareness of a movie from the Internet (16 percent versus four percent). Moreover, Infoseekers are one third less likely than Traditional moviegoers to get their first awareness of a movie via television (32 percent versus 48 percent), the study showed.
But while the study showed that the Internet was the most influential driver of movie ticket purchases for 17 percent of all respondents, movie companies spend only 2.6 percent of their marketing budgets online, Fleck says.
“Years ago marketers viewed the [Internet] medium itself as the experiment. Now they realize the experiment is the marketing mix” of online, print and TV, Fleck says.
The optimal media mix remains to be figured out, Fleck concedes. Yet, at least for the Internet’s best portion of a marketing budget, he adds, “I think it’s safe to say it is north of 2.6 percent.”
| 3,492 | 1,626 | 0.000634 |
warc | 201704 | SOURCE: Harvard Medical School, news release, July 13, 2015
MONDAY, July 13, 2015 (HealthDay News) -- A species of fat, binge-eating fish share a specific gene mutation with some obese people who are constantly hungry, a new study shows.
The finding could improve understanding about the link between obesity and health, the researchers said.
They discovered the mutation in the MC4R gene of Mexican cavefish, which have adapted to cycles of starvation and food abundance.
The study, published online July 13 in the journal Proceedings of the National Academy of Sciences, offers new insight into how vertebrates evolved to have different metabolisms from one another.
"We all know that people have different metabolisms that lead to their gaining weight under different amounts of eating," senior study author Clifford Tabin, chairman of the department of genetics at Harvard Medical School in Boston, said in a university news release.
"The work with the cavefish gives us an example in a natural setting of why and how metabolisms evolved to be different. Some of the mechanisms we see in the fish may well have implications for human metabolism and therefore human health," he explained.
Cavefish, which are blind, go months without eating by storing huge amounts of fat and burning it more slowly than other fish. While the cavefish are much fatter than other fish, they are healthy.
Learning more about how the cavefish can be so fat but healthy may lead to new ways to help obese people, the researchers noted.
More information
The U.S. National Library of Medicine has more about body weight. | 1,617 | 855 | 0.001182 |
warc | 201704 | Some of the scare headlines read: "Restless leg syndrome can kill you." But in fact, a recent study merely links the syndrome with a higher risk of death. It's what's called an observational study, not one that establishes causation.
In other words, there may be a common underlying cause for the restless legs syndrome and the higher death rate observed in those with the condition.
"This study suggests that individuals with restless legs syndrome are more likely to die early than other people," said study author Dr. Xiang Gao, an assistant professor at Harvard Medical School and an associate epidemiologist at Brigham and Women's Hospital in Boston. "This association was independent of other known risk factors."
Gao and his team are studying a group of women with the syndrome; he said he doesn't know if the findings from the study of men will be similar in women.
But Gao went on to caution: "[T]his is an observational study," not one that establishes a definite cause-effect relationship. The findings were published online June 12 in the journal Neurology.
The findings corroborate a 2007 study, which found that people with the syndrome were twice as likely to have a stroke or heart disease, and that the risk was greatest in those with the most frequent and severe symptoms.
39% higher risk
Gao and colleagues studied nearly 20,000 men and found that those with restless legs syndrome had a 39 percent higher risk of an early death than did men without the condition.
Restless legs syndrome is a common condition that causes people to feel an uncomfortable sensation in their legs when lying down, according to the U.S. National Institute of Neurological Disorders and Stroke (NINDS). The feeling may be a throbbing, pulling or creeping sensation. Restless legs syndrome makes it hard to fall asleep and stay asleep.
While the exact cause is unknown, it does tend to run in families, suggesting that there may be a genetic factor.
The condition has been linked to medical conditions including kidney disease and peripheral neuropathy.
Gao noted that many people with restless legs syndrome have low iron levels but he cautioned that consumers should check with their doctor before taking iron supplements, as too much iron can be dangerous, especially in men.
The study followed 18,500 men for eight years. The average age at the start of the study was 67 and at the beginning of the study, none of the men had kidney failure, diabetes or arthritis.
During the study, nearly 2,800 men died. Comparing those with restless legs syndrome to those without, it was found that those with the condition were 39 percent more likely to have died.
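That "39 percent more likely" is a relative risk, and the arithmetic behind such headline figures is simple. A minimal Python sketch, using hypothetical group counts since the article does not report the actual exposed/unexposed breakdown:

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Ratio of the event rate in the exposed group to the rate in the unexposed group."""
    rate_exposed = events_exposed / n_exposed
    rate_unexposed = events_unexposed / n_unexposed
    return rate_exposed / rate_unexposed

# Hypothetical counts, chosen only so the ratio illustrates "39 percent more likely";
# the study's real per-group numbers are not given in the article.
rr = relative_risk(139, 1000, 100, 1000)
print(f"relative risk = {rr:.2f}, i.e. {(rr - 1):.0%} higher")
```

A relative risk of 1.39 is exactly what "39 percent more likely to have died" expresses, once both groups' death rates are compared head to head.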
Men with restless legs syndrome were more likely to take antidepressant drugs and to have high blood pressure, cardiovascular disease or Parkinson's disease.
Common condition
Restless legs syndrome is a condition that produces an intense, often irresistible urge to move the legs and is a major cause of insomnia and sleep disruption. It affects approximately 10 percent of the U.S. population and about one percent of school-aged children.
In 2007, an international team of researchers identified the first gene associated with the condition.
"We now have concrete evidence that RLS is an authentic disorder with recognizable features and underlying biological basis," David Rye, MD, PhD, professor of neurology at Emory University School of Medicine, director of the Emory Healthcare Program in Sleep, and one of the study's lead authors, said at the time. | 3,545 | 1,695 | 0.000595 |
warc | 201704 | This comes via Andy Lees:
The Chinese 2’s – 5’s yield curve, as measured by swaps, flattened back to 23bpts, making bank lending along the curve less profitable and therefore indicating an imminent credit tightening. Clearly, with China’s regulated system, it is not quite so easy, but
Reuters (Richard Bernstein) highlights that both Brazil’s and India’s sovereign curves actually inverted last week, which invariably precedes a recession for the aforementioned reason. As Reuters says, an inverted curve is not a signal that inflation is the problem but rather that the government has over-tightened. Bernstein said, “The markets are still priced for very rapid unhindered growth, and I just think the probability of that is getting less and less”. Brazil raised base rates for a fourth straight time and indicated more rate hikes on the way. South Korean M2 money supply growth slowed to 3.9% y/y in April, which, as you can see in chart 2, is approaching all-time lows.
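The curve measures discussed above reduce to a subtraction. A minimal sketch, with illustrative rates rather than the actual market quotes, of the spread in basis points and the inversion check:

```python
def spread_bp(short_rate_pct, long_rate_pct):
    """Long-tenor rate minus short-tenor rate, converted from percent to basis points."""
    return (long_rate_pct - short_rate_pct) * 100

def is_inverted(short_rate_pct, long_rate_pct):
    """An inverted curve has short rates above long rates, i.e. a negative spread."""
    return spread_bp(short_rate_pct, long_rate_pct) < 0

# Illustrative rates only: a 23 bp 2s-5s spread (flat but not inverted),
# and a curve where the 2-year yields more than the 5-year (inverted).
print(round(spread_bp(3.77, 4.00)))   # ~23 bp
print(is_inverted(12.45, 12.30))      # inverted
```

A flat-but-positive spread squeezes the profit on borrowing short and lending long; once the spread turns negative, that trade loses money outright, which is why inversion is read as a sign of over-tightening.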
I am concerned that the emerging markets have overheated and are now reversing direction quickly. Policy rates are still low when adjusted for inflation. But, given the economic weakness now manifest across North America and Europe, the risk is clearly to the downside for now.
Below is the South Korean M2 Chart from Andy | 1,336 | 765 | 0.001354 |
warc | 201704 | As far as the Hong Kong tourism industry is concerned, this year's numbers are all looking good and every graph measuring trends is heading in the right direction. The monthly total of inbound tourists is nearing record levels and, with the economy looking brighter, Hong Kong residents are once again making overseas holiday plans rather than staying at home and saving for a rainy day.
"The number of visitor arrivals is expected to increase by 20 percent this year to around 18 million," says Joseph Tung, executive director of the Travel Industry Council of Hong Kong (TIC). "In addition to the steady growth of mainland visitors stimulated by the extension of the Individual Visit Scheme, there are also signs of recovery in the long-haul markets," he notes. Indeed, Hong Kong Tourism Board figures show that March arrivals from Europe and Australia were ahead of pre-SARS levels while those from the Americas and Southeast Asia were nearing full recovery.
Mainland China is considered the key driver of growth for inbound tourism. Of the 4.9 million visitors recorded in the first quarter of 2004, almost 60 percent were from the mainland and the year-on-year increase of 37 percent for this market far exceeds the industry average of 14 percent.
Extending the Individual Visit Scheme to all of Guangdong province from 1 May and to nine more cities in Fujian, Jiangsu and Zhejiang as from 1 July will further increase the momentum. However, those arriving from Guangdong and Fujian will not necessarily benefit travel agencies as much as hoped. "These visitors usually have relatives in Hong Kong and require limited services from agencies and hotels except in arranging local tours," notes Mr Tung.
Going away
The projections for outbound tourism are also positive. Industry forecasts indicate a 10 percent increase in tourist departures this year and 5 percent growth in related revenues. "These figures imply lower prices for package tours," explains Mr Tung. "However, the economic revival and greater convenience with, for example, visa exemptions for SAR passport holders going to Japan, will encourage more outbound trips." Mr Tung points out there has been strong growth for all markets except the US, which is still affected by the threat of terrorism and for which the process of applying for tourist visas has become more stringent.
Hong Kong, as a free port, has long been known as an international shopper's paradise and this is a major attraction for mainland visitors. "They are keen to buy the high-quality products which are available in Hong Kong at reasonable prices," says Mr Tung. Cosmetics, audio and telecom products, video equipment and computers are the most popular items while fashion and jewellery are not far behind.
"The retail industry will gain the most from the influx of mainland tourists," says Mr Tung. "They spend comparatively more of their budgets on shopping and less in restaurants unlike international tourists who tend to spend more on gourmet food."
Quality assurance is essential in gaining the trust and confidence of visitors and sustaining tourist growth. The TIC's "100% refund guarantee scheme" offers purchase protection for tourists. They are entitled to a full refund, from participating shops, if purchases prove unsatisfactory and are returned within 14 days.
Customer-oriented services are also crucial. With the advent of the Individual Visit Scheme, travel agencies must provide value-added services which are better than ever. "Otherwise, visitors will not join tours," says Mr Tung. "Travel agencies should arrange shopping with reliable merchants who provide quality products at attractive prices. They should also offer a range of sightseeing itineraries to cater for people with different backgrounds and interests."
In order to avoid the possible perception that Hong Kong, as a small city, may have a limited number of attractions for visitors, Mr Tung recommends further integration with the Pearl River Delta for the development of tourism on a regional basis.
Service first
Thousands of new employees will be needed to support the current growth. Recruitment is mainly for frontline positions with travel agencies, retailers and hotels. "High academic qualifications are not necessarily required," says Mr Tung, "but a good attitude towards service certainly is."
Language skills are increasingly important and proficiency in Putonghua is now expected of frontline employees in tourism. "There is still room for improvement in this area particularly since Putonghua is the common language spoken by all mainland visitors and they expect fluency in Hong Kong," Mr Tung says.
Requirements for greater overall professionalism in the sector are now being emphasised. For example, a new accreditation system will come into effect this July and will require all tourist guides taking care of inbound visitors to obtain a tourist guide pass from TIC. A similar scheme for guides leading outbound tourist groups was implemented in 1999. Under this, all outbound tour escorts are required to sit a qualifying examination and obtain the relevant certificate. Such measures definitely do much to ensure the high service levels of tourist agencies and promote a positive image of the whole tourist sector in Hong Kong. | 5,323 | 2,508 | 0.000402 |
warc | 201704 | An accurate property valuation is crucial for the timely completion of whatever resolution is being pursued.
Our chartered surveyors have the experience and expertise to assess and value property and non-property assets precisely, and within the timeframe that the insolvency process often demands.
Currell often works with insolvency practitioners on assets across the whole of England and Wales. We implement the agreed strategy for conditioning the asset prior to a sale and then oversee the disposal strategy, either through auction, tender or negotiated sale.
A potentially complex step during the insolvency process is the valuation and sale of non-property assets. These assets could include plant and machinery, business and IT assets, trade stocks, and private and commercial motor vehicles. We are happy to assist with the valuation and disposal of such assets. | 861 | 485 | 0.002074 |
warc | 201704 | Nuisance in construction
Introduction
Private nuisance might be caused by:
Encroachment onto land, for example by trees. Damage to land or buildings. Interference with a party’s enjoyment of the land, for example by generating excessive noise or smells.
For nuisance to be actionable, there must be actual or prospective damage, although this damage need not be physical, and might be demonstrated by encroachment or unreasonable material interference.
This is a matter of balance and degree, depending on how unreasonable the infringement is, the duration of the nuisance and the timing or repetition of the nuisance. This in turn can depend on circumstance, such as the character of a particular location and the types of activities that are carried out there. Activities that constitute nuisance in one location might be acceptable in another.
In Sturges v Bridgman (1879) Thesiger LJ stated, ‘What would be a nuisance in Belgrave Square would not necessarily be so in Bermondsey’.
The courts will also consider the intention of the defendant and whether there was any malice. Liability for nuisance requires that damage is foreseeable, although this does not necessarily constitute negligence.
Under Part III of the Environmental Protection Act 1990 certain matters may constitute statutory nuisances:
Noise; artificial light; odour; insects; smoke; dust; premises; fumes or gases; accumulation or deposit; animals kept in such a place or manner as to be prejudicial to health or a nuisance; and any other matter declared by any enactment to be a statutory nuisance.
Where a local authority establishes that any one of these issues constitutes a nuisance (ie is unreasonably interfering with the use or enjoyment of someone’s premises) or is prejudicial to health, it must generally serve an abatement notice on the person responsible. Failure to comply with the notice could result in prosecution.
Construction
Clearly construction works are a potential cause of nuisance. However, construction is a necessary activity and in the case of Andreae v. Selfridge & Co. Limited (1958) Sir Wilfred Green MR suggested that ‘...in respect of operations of this character, such as demolition and building, if they are reasonably carried on and all proper and reasonable steps are taken to ensure that no undue inconvenience is caused to neighbours, whether from noise, dust, or other reasons, the neighbours must put up with it.’
Reasonable precautions that might be taken to reduce or avoid nuisance in construction include: keeping neighbours informed; providing a help line so that problems can be reported; only working at reasonable times and restricting noisy activities to particular periods; storing fine materials under cover; damping fine materials and roadways; minimising demolition or crushing dust; washing down vehicles; taking care when deciding transport routes; providing hard-surfaced roadways; proper waste management and avoiding burning waste materials; limiting vibration; using well-maintained, quiet machinery; and careful sub-contractor management.
| 4,403 | 2,174 | 0.000466 |
warc | 201704 | We have data centers strategically located in San Francisco, a key connectivity hub for Silicon Valley. If San Francisco were a standalone country, its $535 billion economy would rank 19th in the world.
According to the San Francisco Center for Economic Development, worker productivity in the Bay area is double the national average, producing more patents than anywhere else in the country, and attracting 36 percent of the nation’s total venture capital investment in dollars.
Nearly 1,900 tech companies are located here, representing 45,493 jobs. A September 2013 report published by Bloomberg Technology Summit found that the city of San Francisco led the nation in tech job growth for the past five years rising 51.8 percent from 2007 to 2012. Jobs continue to be plentiful, in spite of the region’s reputation for existing in a bubble of boom and bust, and according to SiliconValley.com more growth is expected even though it has leveled off from 2015.
Digital Realty’s SFR2 facility affords customers access to a world-class data center environment within walking distance of San Francisco’s financial district. Originally built as a military tank assembly plant, the building features solid steel and concrete construction, extensive security capabilities, and 8.6 MW of critical IT power load. The building underwent award-winning upgrades for its friction pendulum system, which isolates the base of the building from its floors, ensuring critical operations are maintained throughout and after a major seismic event. As a result, the building has achieved “essential facility” designation according to applicable building codes. SFR2 also operates as a carrier-neutral facility, enabling access to major carriers, internet exchanges, and cloud hubs within the region. Connectivity is facilitated within its meet me room, atop a neighboring roof with significant line of sight, and via Metro Connect service to other Digital Realty facilities in the Bay Area. As a result, SFR2 provides the ideal infrastructure to support our customer’s mission-critical applications and service offerings.
Located in the heart of San Francisco within walking distance to the central business district, this top-tier data center was renovated in 2000 and is strategically adjacent to the Bay Bridge Consortium Trans Bay Fiber Link. Our five-story, 154,950 sq. ft. facility currently services large telecommunications corporations.
Digital Realty is the leading colocation and interconnection provider within 200 Paul Avenue, one of the premier data center and carrier facilities in Northern California. Acquired in 2006, the facility is seismically rated for protection against earthquakes.
Located south of downtown in one of the largest colocation markets in the U.S., the 200 Paul Avenue data center provides access to dozens of the leading domestic and international carriers, as well as physical connection points to the world's telecommunications networks and backbones.
SFR1’s carrier density makes it an ideal connectivity point for reaching both north within the U.S. and south to our two data center facilities in Santa Clara and our Los Angeles site. In addition, access to both Asia and the East Coast is available from our San Francisco data center. | 3,300 | 1,614 | 0.00063 |
warc | 201704 | Once upon a time I ate little packets of Oatmeal for breakfast. (Actually, once upon a time I ate Cinnamon Toast Crunch for breakfast, but let’s stay on topic).
I believed I was eating something super heart-healthy and organic, but it was also loaded with sugar and packaged in wasteful individual servings. So about a year ago I switched to plain old oats, bought in bulk and microwaved for 90 seconds with some skim milk. It was pretty plain, but enough to keep me sated until lunchtime.
But then I learned a little trick from a Martha Stewart magazine that I have made my own by dumbing-down and now share with you, minus any ideas of Martha Stewart measuring or perfection!
Take a bulk-load of oats and toast them on a baking pan for about 20 minutes in a 350 degree oven.
Cool and then mix in a bowl with any combination of the following items by the fistful:
Brown sugar Cinnamon Flax Seeds Sunflower Seeds Dried fruits Diced Nuts Whatever
Store in airtight container for a week. Probably longer is fine though. When you are ready to eat, mix about a fistful with skim milk and microwave for 90 seconds. If you are anti-micro you can break out the kettle. I can’t give up the microwave no matter how much radiation I am probably absorbing. | 1,263 | 723 | 0.001403 |
warc | 201704 | As one of the first books examining the core issues of accelerated nursing education, this one offers valuable information on the challenges and successes of this educational model."--Doody's Medical Reviews
Accelerated degree programs provide evidence that creativity in nursing program design can facilitate learning experiences that assure competence in the profession while also taking advantage of the knowledge, skills, and experiences the learner brings to our profession. Lessons learned from accelerated nursing programs can be applied in all our programs and enrich the education of professional nurses."
Geraldine Polly Bednash, PhD, RN, FAAN
CEO, AACN
Over the last two decades, an unprecedented pool of nursing students, many with academic degrees and prior work experience, has entered accelerated programs. This is the first volume to examine core issues in accelerated nursing education, such as curriculum innovation, clinical immersion, recruitment and retention of students and faculty, and inter-professional education. It also addresses questions regarding:
how accelerated nursing programs prepare graduates to meet changing health care needs; which curricula and clinical models are best suited to accelerated education; and what teaching strategies and evidence-based practices ensure high-quality results. Key Features:
Discusses enrollment and admission at the BSN and MSN levels Explores curriculum innovation, new teaching methods, and start-up programs Analyzes student retention and progression, with remediation strategies Presents faculty recruitment, retention, and development successes Addresses issues concerning second degree and second career students " | 1,697 | 849 | 0.001186 |
warc | 201704 | The Leadership Blog Industrial What’s your interest?
Back to Leadership Blog
Board Recruiting: When Playing It Safe Is Risky by German Herrera | July 12, 2016
Given the performance pressures public company boards are under these days, I’m not surprised when nominating committees make prior public company board experience a requirement for director candidates. By limiting the pool to known, proven quantities, nominating committees hope to minimize the chances of a board appointment that doesn’t live up to its promise in either perception or reality—thus exposing the board to potential attacks from activist investors, analysts and the media.
The limits of yesterday’s logic
The impulse is understandable but relies on outdated logic. I would argue that in the current environment, it’s sticking to the usual director-candidate suspects that is actually the riskier strategy. Doing so assumes that the company is unaffected by the disruptions that have rearranged entire industries, and that a board with the competencies and experiences that spelled success a decade ago is perfectly equipped to protect shareholder value and advise the CEO in an age of radically new business models and continual innovation.
Many times, in other words, the ideal director candidate isn’t from today’s Fortune 500, but tomorrow’s—and given the pace of change today, that isn’t a synonym for the bottom half of the Fortune 1000. Mark Zuckerberg was running Facebook out of his dorm room a dozen years ago; Uber, which has yet to go public, was only founded in 2009. When companies can catapult from whiteboard to global dominance in a decade or less, you can’t hope to wait until a company is a household name before attempting to recruit one of its leaders for your board—he or she will be fully committed long before then.
The risk and return on forward thinking
Is there risk in looking ahead to spot tomorrow’s boardroom leaders today? Of course. But mitigating that risk is what director recruiting is all about. You don’t need a recruiter to work through a list of Fortune 500 directors. A recruiter generates the most value when he or she can bring to the table someone who isn’t already inundated with directorship opportunities and yet who has the experience and perspective that matches the evolving challenges and opportunities the company faces.
Given how slowly most boards tend to evolve, recruiting someone to a board seat is an exercise in forward thinking. Forward thinking always involves some uncertainty, but that uncertainty can be countered with a thorough understanding of the company’s key strategic issues, the board’s culture and the candidate’s track record and potential. The return for taking that risk is being first in line for a new generation of director candidates who have the experience and mindset to help legacy organizations navigate a radically new set of challenges and opportunities. Playing it safe, on the other hand, is to cling to incrementalism—a recipe for falling behind as boards retool themselves in a period of rapid change.
Contact German at:
german.herrera@egonzehnder.com +1 305 569 1040 | 3,241 | 1,615 | 0.000642 |
warc | 201704 | Authors: Karl Atkin, Neil Lunt, Carl Thompson. Paperback ISBN: 9780702023248. Imprint: Baillière Tindall. Published: 17th February 1999. Page count: 232
Using empirical case studies, this key text explores a range of key practice issues and evaluates the contribution which nursing makes in the community. The text emphasizes, explores and evaluates three themes: the shift from secondary to primary care; establishing needs and priority setting; and the role of evaluation in evidence-based practice. Clearly written by leading commentators in nursing, policy and research, this book provides an evidence-based resource for all community nurses.
Introduction: Evaluating Change In Community Nursing. Section 1: Evaluating Changes To Organisational Contexts. Addressing Cultural Diversity In Health Care. Skill Mix In Primary Care: Shifting The Balance. Prescribing And Community Nursing: An Evaluation Of Practice.
Section 2: Evaluating Changes To Practice Arenas. School Nursing: An Evaluation Of Policy And Practice. Evaluation And Community Psychiatric Nursing. The Emergence And Development Of Practice Nursing: Implications For Future Policy And Practice. Evaluating Learning Disability: Embracing Change. Section 3: Evaluating Changes To Preparation. Mapping Changes In Competencies In Community Adult Nursing. Elements Of Competence. Evidence-Based Nursing, Evaluation And The Role Of The Community Practitioner. No. of pages: 232. Language: English. Copyright: © Baillière Tindall 1999. Published: 17th February 1999. Imprint: Baillière Tindall. Paperback ISBN: 9780702023248
Senior Research Fellow, Ethnicity and Social Policy Research Unit, University of Bradford, UK
Lecturer, Department of Social Policy, University of Massey, New Zealand
Professor (Personal Chair), Department of Health Sciences, University of York, York, UK Senior Lecturer | 1,859 | 937 | 0.001084 |
warc | 201704 | Sadly, according to the American Psychological Association (APA), teenage suicide is the third leading cause of preventable death in the United States (the first is accidents; the second, homicide).
The reasons teens take their own lives can be complex and varied. The causes are often socially motivated, but mental illness actually plays the greater role. Sometimes mental illness is diagnosed, sometimes it is not. Common mental illnesses in teens include depression, schizophrenia, and bipolar disorder. Another top factor is substance abuse. No socio-economic group is exempt from risk: suicides occur in all income brackets and in urban, suburban, and rural communities.
In recent years, prevention efforts have included intensive school education programs, more crisis hotlines, and social networking efforts like the very successful "It Gets Better" campaign, which tries to reach at-risk gay teens, who are five times more likely to attempt suicide than straight teens.
All of these increased efforts have led those involved in psychology and social sciences to compile a list of the most frequent indicators that a teen may be considering suicide. Those indicators are as follows:
Talking About Dying -- any mention of dying, disappearing, jumping, shooting oneself, or other types of self-harm
Recent Loss -- through death, divorce, separation, or a broken relationship; loss of self-confidence or self-esteem; loss of interest in friends, hobbies, or activities previously enjoyed
Change in Personality -- sad, withdrawn, irritable, anxious, tired, indecisive, apathetic
Change in Behavior -- can't concentrate on school, work, or routine tasks
Change in Sleep Patterns -- insomnia, often with early waking, or oversleeping; nightmares
Change in Eating Habits -- loss of appetite and weight, or overeating
Fear of Losing Control -- acting erratically, harming self or others
Low Self-Esteem -- feeling worthless, shame, overwhelming guilt, self-hatred, "everyone would be better off without me"
No Hope for the Future -- believing things will never get better; that nothing will ever change
| 2,190 | 1,223 | 0.000825 |
warc | 201704 |
In Kate Chopin's "The Story of an Hour," the oppression of women isn't super blatant; there is a woman who is unhappy in her marriage, and has a rather unusual reaction to news of her husband's death. She is, after the initial grief and shock, actually overcome with a sense of freedom. This is not because she was abused, or because her husband was an awful tyrant; in fact, as Louise Mallard (the wife) thinks of him, she realizes that "she had loved him, sometimes," and she knew that at the funeral that she would
"weep again when she saw the kind, tender hands folded in death; the face that had never looked save with love upon her, fixed and gray and dead."
So, her husband was kind, and he loved her, showing her love in every glance. However, this did not keep Louise from feeling oppressed. She lived an era when women were born and bred to be married, to serve their husbands, to be mothers and housewives, and to submit their will to the head of the household. So even though Brently Mallard was "kind," Chopin alludes to the fact that his wife was repressed (she had a face "whose lines bespoke repression") and who resented the
"powerful will bending hers in that blind persistence with which men and women believe they have a right to impose a private will upon a fellow-creature" that often came with marriage."
So, the suppression in this story is subtle; it is seen in the stereotypical roles of women that existed in that time period, and in a woman who was not happy playing the role of subservient housewife. Freed from that role unexpectedly, Louise Mallard feels like "a goddess of Victory" as she looks forward to her life as a "free" woman. I hope those thoughts help; good luck!
mustangjbj,
A more precise term for Chopin's "The Story of an Hour" is repression and not oppression.
The first sentence of the story proves to be essential to the end, though during the middle of the story the initial care to protect Mrs. Mallard from the “sad message” seems almost comic. It is usually assumed, too easily, or possibly incorrectly, that Mrs. Mallard’s “storm of grief” is hypocritical.
If you notice that the renewal after the first shock is stimulated by the renewal of life around her
“the tops of trees . . . were all aquiver with the new spring of life”
and that before she achieves a new life, Mrs. Mallard first goes through a sort of death and then tries to resist renewal: Her expression “indicated a suspension of intelligent thought,” she felt something “creeping out of the sky,” and she tried to “beat it back with her will,” but she soon finds herself
“drinking the elixir of life through that open window,”
and her thoughts turn to “spring days, and summer days.” Implicit in the story is the idea that her life as a wife—which she had thought was happy—was in fact a life of repression or subjugation, and the awareness comes to her only at this late stage.
The story has two surprises: the change from grief to joy proves not to be the whole story, for we get the second surprise, the husband’s return and Mrs. Mallard’s death.
The last line (“the doctors . . . said she had died . . . of joy that kills”) is doubly ironic: the doctors wrongly assume that she was overjoyed to find that her husband was alive, but they were not wholly wrong in guessing that her last day of life brought her great joy.
The text clearly says “she had loved him—sometimes.” The previous paragraph in the story calls attention to a certain aspect of love—a satisfying giving of the self—and yet also to a most unpleasant yielding to force.
Kate Chopin's stories of women who lead contradictory and somewhat unsatisfying lives offer wonderful lessons for students about choosing friendships, loves, and acquaintances carefully.
| 4,019 | 1,994 | 0.000526 |
warc | 201704 | A multi-method comparison of Atchafalaya basin surface water organic matter samples
Received for publication January 29, 2008. Surface water organic matter (OM) was isolated from two distinct sites within the Atchafalaya Basin using a combination of XAD-8 and XAD-4 non-ionic macroporous resins and characterized by a suite of analytical methods, including elemental analysis, 13C cross-polarization magic-angle-spinning nuclear magnetic resonance (CP/MAS NMR), attenuated total reflectance Fourier-transform infrared (ATR-FTIR) spectroscopy, luminescence spectroscopy including parallel factor analysis (PARAFAC), and ultraviolet-visible spectroscopy. The major findings of the study are (i) despite the large differences in hydrology, optical properties, iron content, dissolved oxygen, and degree of human exploitation, the spectral and elemental signatures of the hydrophobic acid and transphilic acid fractions of the isolated OM for the different sites were remarkably similar; (ii) the luminescence characteristics of the four studied fractions provided information on the relative contributions from terrestrial and microbial input sources, as well as the degree of humification; and (iii) a detailed analysis of the total luminescence data led to a new dual-excitation model based on quinone exciplexes for long-wavelength emissions. | 1,300 | 716 | 0.0014 |
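Parallel factor analysis, as mentioned in the abstract, decomposes a three-way stack of fluorescence excitation-emission matrices into a small number of trilinear components (per-sample scores plus excitation and emission loadings). A minimal alternating-least-squares sketch on synthetic data follows; the dimensions and factor matrices are invented for illustration, since the actual Atchafalaya EEM data are not reproduced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-way tensor: samples x excitation x emission, built from
# two known rank-1 components (hypothetical spectra, not real EEM data).
I, J, K, R = 6, 20, 30, 2
A_true = rng.random((I, R)) + 0.1   # per-sample component scores
B_true = rng.random((J, R)) + 0.1   # excitation loadings
C_true = rng.random((K, R)) + 0.1   # emission loadings
X = np.einsum('ir,jr,kr->ijk', A_true, B_true, C_true)

def parafac_als(X, rank, n_iter=200, seed=1):
    """Plain alternating-least-squares CP/PARAFAC decomposition."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    for _ in range(n_iter):
        # Each step solves a linear least-squares problem for one factor
        # while the other two are held fixed (the einsum is the MTTKRP).
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

A, B, C = parafac_als(X, R)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.2e}")
```

In practice, EEM studies typically add non-negativity constraints and choose the number of components by split-half validation; established implementations (e.g. the N-way toolbox or tensorly) would be used rather than this bare sketch.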
warc | 201704 | Study title
Can dogs predict seizures?
Institution
Queen's University, Belfast, UK
About the study
Researchers want to explore whether dogs can predict the onset of epileptic seizures.
When will this study be recruiting?
It is recruiting now, until December 2018
What will participants be asked to do?
You will be asked to complete a questionnaire, which should take about 10-20 minutes. Your answers will be treated confidentially, and you do not have to give your name, unless you want to.
The questionnaire asks for information about:
- You and your household
- Your epilepsy
- Whether you own a dog
- Your dog
- How your dog(s) respond to you when you have a seizure
This is an ongoing project. If you would be interested in participating in further studies, you can indicate this at the end of the survey.
Who can take part?
You can take part if you:
Have a medically confirmed diagnosis of epilepsy
Note: you do not have to own a dog to take part.
If you are under the age of 16, you should complete the questionnaire with the consent and help of your parents/guardians.
Who is conducting the research?
Neil Powell, PhD student, under the supervision of Professor Peter Hepper.
Who has reviewed this study?
Approval has been granted by the Queen's University Belfast Psychology Ethics Committee, and indemnity and insurance by Queen's University Belfast.
Interested?
If you prefer to complete a paper version of the questionnaire, please contact Professor Peter Hepper:
Address: School of Psychology,
Queen's University Belfast, BT7 1NN | 1,562 | 823 | 0.001241 |
warc | 201704 | Details
Viridian Pregnancy Omega Oil is a supplement which is targeted at mothers through all stages of pregnancy, from pre-conception through to postnatal care, especially whilst breastfeeding. This supplement contains a set of ingredients which have been carefully selected by experts in mother and baby nutrition, to offer the best chances at conception and to ensure an adequate supply of essential nutrients for healthy development once conception has occurred. The omega oils in this supplement contribute to the normal brain function and vision of both mother and baby. The omega oils are sourced from marine algae, so this product is suitable for vegans.
Additional Information
Special Diet Requirements: No Added Salt, No Added Sugar, Dairy Free, Gluten Free, Vegan, Vegetarian, Wheat Free, Yeast Free
Supplement Age Range: Adult
Supplement Type: Oils
Take 3 teaspoons (15ml) daily with food
Each 5ml teaspoon typically provides:
Organic golden flax seed oil 3094mg, organic hemp seed oil 683mg, rice bran natural oil 410mg, organic avocado oil 228mg, DHA (docosahexaenoic acid) 9mg, cranberry seed oil 46mg
Providing:
Alpha-linolenic acid (omega-3) 1959mg, linoleic acid (omega-6) 1000mg, oleic acid (omega-9) 910mg, DHA per 15ml serving = 273mg
You will receive an e-mail as soon as your items have been dispatched. If your order is delayed for any reason, you will be notified as soon as possible. Delivery to Ireland will take approximately 1-2 working days after dispatch. Orders shipped to the UK can take 2-3 working days after dispatch. Delivery to the island of Ireland will incur a delivery charge of €4.99 when the order value is below €30. Orders shipping to the UK will incur a delivery charge of €5.99 when the order value is below €49. Remember to log in to your Evergreen account to check on your order status. If you require any more details regarding shipping, please contact us at info@evergreen.ie or call us on IRE: 091 753236 or UK: 0845 2994051 during office hours. | 2,037 | 1,076 | 0.000944 |
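The delivery rules stated above (free over a threshold, otherwise a flat fee that depends on destination) can be encoded as a small conditional. This is a hypothetical helper written only from the listed thresholds; the shop's actual checkout logic is not published, and the function name and destination keys are invented for illustration.

```python
def delivery_charge(order_total_eur: float, destination: str) -> float:
    """Delivery fee per the listed rules: free to the island of Ireland
    at or above EUR 30 (else EUR 4.99); free to the UK at or above
    EUR 49 (else EUR 5.99). Hypothetical encoding, not the shop's code."""
    if destination == "ireland":
        return 0.0 if order_total_eur >= 30 else 4.99
    if destination == "uk":
        return 0.0 if order_total_eur >= 49 else 5.99
    raise ValueError("only Ireland and UK destinations are listed")

print(delivery_charge(25.0, "ireland"))  # 4.99
print(delivery_charge(50.0, "uk"))       # 0.0
```

Note the boundary reading: "below €30" is taken as strictly less than, so an order of exactly €30 ships free.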