From Delhi, India:
What do you mean by anxiety neurosis? What are its symptoms? Do people with diabetes or blood pressure have it? What are the remedies?
Anxiety neurosis is an old term used to describe a condition in which anxiety occurs out of proportion to the real danger of a situation and can lead to maladaptive behavior. It is treated with psychotherapy, medication, or both. It does not have to be associated with another illness, but a chronic illness such as diabetes can certainly exacerbate the condition.
Original posting 6 Jun 2000
Posted to Other Illnesses
Last Updated: Tuesday April 06, 2010 15:09:10
This Internet site provides information of a general nature and is designed for educational purposes only. If you have any concerns about your own health or the health of your child, you should always consult with a physician or other health care professional.
This site is published by T-1 Today, Inc. (d/b/a Children with Diabetes), a 501c3 not-for-profit organization, which is responsible for its contents. Our mission is to provide education and support to families living with type 1 diabetes.
© Children with Diabetes, Inc. 1995-2016.
Source: http://www.childrenwithdiabetes.com/dteam/2000-06/d_0d_4wf.htm
There are several irrigation modes that the grower can use.
|Irrigation Mode||Description||The WSN Advantage|
|Manual||Allows the grower to send manual irrigation commands.||Irrigation can be determined remotely based on sensor data.|
|Schedule||Irrigates based on a predefined schedule.||Irrigation nodes can easily be reconfigured and/or moved based on the current crop.|
|Local "Setpoint" Control||Uses the schedule but also checks current soil moisture to determine whether irrigation is needed at that node's location.||Irrigation is determined at each location for precise irrigation control.|
|Global Control||Controls irrigation based on values external to the node, such as a sensor on a different node, a value computed by a growing tool, or a plant science model.||This fully data-driven method of control, which allows feed-forward predictive control, is unique to WSNs.|
|Pulse Types||A sub-mode of the modes listed above, in which irrigation is issued in pulses to give sensors time to react, increase precision, or allow irrigation lines time to recharge.||The ability to pulse at each node allows for localized irrigation control.|
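The "Local Setpoint" and "Pulse Types" modes described in the table can be sketched in a few lines of code. This is an illustrative sketch only: the function names, the 30% moisture setpoint, and the equal-length pulses are assumptions for demonstration, not part of the actual CMU/USDA system.

```python
def should_irrigate(scheduled_now, soil_moisture_pct, setpoint_pct=30.0):
    """Local "setpoint" mode: follow the schedule, but skip irrigation
    when soil moisture at this node is already at or above the setpoint."""
    return scheduled_now and soil_moisture_pct < setpoint_pct

def pulse_plan(total_minutes, n_pulses):
    """Pulse sub-mode: split one irrigation event into equal pulses so
    sensors have time to react and lines have time to recharge."""
    return [total_minutes / n_pulses] * n_pulses
```

For example, a node whose schedule calls for irrigation but whose sensor reads 42% moisture would skip its slot, while a 30-minute event could be issued as three 10-minute pulses.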
Source: http://frc.ri.cmu.edu/project/sensorwebs/basestation.php
As debate stirs over whether Americans are willing to create and print items using 3-D printers in their homes, one Chinese design and engineering firm is printing houses.
Massive 3-D printers have been used to construct 10 full-sized homes in China in just 24 hours, according to WinSun, a private Chinese firm.
WinSun’s assembly line of four printers, each about 33 feet wide and 22 feet high, sprays a mixture of quick-drying cement and construction waste to print the walls in layers, according to the Xinhua state news agency.
Because the material is inexpensive and there are no labor costs, each house can be printed for less than $5,000, Xinhua reported.
Ma Yihe, who designed the printers, says creating 3-D houses is cheaper and better for the environment because the machines use recyclable materials. Ma wouldn’t discuss in detail the technology used to print the homes, Xinhua reported.
Earlier this year, USC professor Behrokh Khoshnevis was testing a 3-D printer that could build a 2,500-square-foot house in 24 hours using a technology called “contour crafting.” Khoshnevis told MSN.com the technology could be used to provide affordable housing in impoverished parts of the world as well as quickly create shelter in the wake of natural disasters.
Here’s a closer look at contour crafting:
Source: http://blogs.marketwatch.com/themargin/2014/04/30/chinese-design-firm-builds-10-homes-using-3-d-printing/
Acute cytomegalovirus (CMV) infection is a condition caused by a member of the herpesvirus family.
Alternative names: CMV mononucleosis; cytomegalovirus (CMV)
Infection with cytomegalovirus (CMV) is very common. The infection is spread by:
Most people come into contact with CMV in their lifetime, but typically only individuals with weakened immune systems become ill from CMV infection. Some otherwise healthy people with acute CMV infection develop a mononucleosis-like syndrome.
In the U.S., CMV infection most commonly develops between ages 10 and 35. Many people are exposed to CMV early in life and do not realize it because they have no symptoms. People with a compromised immune system can have a more severe form of the disease.
CMV is a type of herpes virus. The virus remains in your body for the rest of your life. If your immune system becomes weakened in the future, the virus may reactivate and cause symptoms.
Less common symptoms include:
Your health care provider will perform a physical exam and feel your belly area. Your liver and spleen may be tender when they are gently pressed (palpated). You may have a skin rash.
Special lab tests such as a CMV DNA serum PCR test may be done to check for substances in your blood that are produced by CMV. Other tests such as a CMV antibody test may be done to check your body’s response to the CMV infection.
Other tests include:
Most patients recover in 4 to 6 weeks without medication. Rest is needed, sometimes for a month or longer, to regain full activity levels. Painkillers and warm salt-water gargles can help relieve symptoms.
Antiviral medications are usually not used in people with normal immune function.
Fever usually goes away in 10 days, and swollen lymph glands and spleen return to normal in 4 weeks. Fatigue may linger for 2 to 3 months.
Throat infection is the most common complication. Rare complications include:
Call for an appointment with your health care provider if you have symptoms of acute CMV infection.
Go to the emergency room or call the local emergency number (such as 911) if you have sharp, sudden pain in your left upper abdomen. This could be a sign of a ruptured spleen, which requires emergency surgery.
CMV infection can be contagious if the infected person comes in close or intimate contact with another person. You should avoid kissing and sexual contact with an infected person.
The virus may also spread among young children in day care settings.
When planning blood transfusions or organ transplants, the CMV status of the donor can be checked to avoid passing CMV to a recipient who has not had CMV.
Source: http://www.northside.com/HealthLibrary/?Path=HIE+Multimedia%5C1%5C000568.htm
"It’s time to stop being so “amazed” at things that are just part of the technological and cultural landscape of life in the 21st century. It’s not “amazing” that computers can edit video, manage numbers or manipulate digital images. It’s not “amazing” that mobile phones can stream live video or GPS your current position. It’s not “amazing” that you can make phone calls to the other side of the planet at no cost. None of these things are really “amazing” any more… they just “are”. To be “amazed” at this sort of stuff is to fail to recognise the invisible role that technology plays in all our lives these days. To anyone working in education, working with young people, you need to realise that simple tasks performed with technology are not something to be “amazed” at, marveled at and gushed over. For our students, the use of technology as the enabler for such tasks seems as natural as breathing air.
I was in another meeting with some students and a teacher the other day, and the teacher was trying to show the kids about a Ning they’d had set up for a class project. The teacher was all effusive, gushed about the Ning’s “amazing” features and wanting to show the students all the “amazing” things it could do… “Look! You can use it to leave messages for each other!”, she said excitedly. One of the students confided to me later “I can’t believe how worked up she was getting about that Ning… it’s just a blog. It’s like Facebook. Of course we know how to use it.” It reminded me of that wonderful line from The Hitchhiker’s Guide to the Galaxy by Douglas Adams, where the people of Earth were considered a bit of a joke for being “so amazingly primitive that they still think digital watches are a pretty neat idea.”"

Here is my own response to Chris (as left in the comments):
I think it is important to realize that everyone has their own personal journey into this computing age. As change agents it is important to realize that just because something isn't amazing to us does not mean that it is not amazing to another person. I think the reason more people don't jump into technology is the condescension of those already in technology.
If someone thinks it is amazing, the first thing to do is to put their hand on the mouse and then let them try it. Then it travels from amazing to "I can do it." And when it becomes non-amazing and part of what they do on a daily basis, then, we have enacted positive change and helped people transform.
We have to help these things move from amazing to normal for people and as long as people are thinking these things are amazing we aren't making it real enough for them to think that they can do it!
Please be kind to the others you work with - they are probably great people who just aren't there yet and need some encouragement. And when the light bulb goes on, we all gush on and get excited, and then we move on and just use it. Newbies need some kindness and grace from those who know more, and they should not be made to feel like dummies. How will those you help feel if they read this post - will they feel appreciated and accepted, or will they feel like you're on the inside looking down at their stupidity? They aren't stupid; they might just be newbies, and starting at all is, in itself, a HUGE accomplishment, because each person needs to start somewhere.
It is also amazing that we have the ability to build our own social networks, set these things up, and do it all for free. There are a lot of amazing things computers can do -- but does that mean that separating conjoined twins or the stars in the sky aren't more amazing? I guess it is sort of like how the Eskimos have so many words for snow -- perhaps we need more words for how it feels when something is really, really cool.
I do think Chris is right that it is about time for many of these things to move from "amazing" status to just what we do. But we're not there yet AT ALL. The fact is that these things really are amazing to a whole lot of people.
Honestly, I find my iTouch's ability to coach me or help me manage my Christmas list or my calendar pretty amazing, because it has made my life better, and I remember just a year ago when I didn't have it. I also find Twitter's ability to connect me to everyone else all over the place pretty amazing as well - since we didn't have this ability around 4 years ago.
Do I find it as amazing as the sparkle in my young son's eyes as we play with the cat? Or as seeing my Mom serve Thanksgiving yesterday when we didn't know last year if she'd live through 2008? Gosh, no - those things are transcendent.
I think we just have to give people time. Love them, encourage them, help them and also teach a true patience when newbies are just learning something because truly we're all newbies and gush on about something new to us that is old hat for someone else.
Chris is a great guy and I'm sure as he helped these people that he was just thinking inside himself: when, when is this going to move on and be something everyone knows how to do? Why am I the only one doing this? Why can't they see that this is no big deal? We all feel this.
And yet, we have to temper how we feel with the reality that a lot of good people in education out there are really just now starting to understand these tools, and patience and helpfulness when they are ready are great assets in our desire for change.
In sports, I would do anything for a coach who was kind, loving, and encouraging. The most arrogant coach I ever had coached the sport I loved the most (basketball), and that is the one I quit, because he was so frustrated that we didn't get things that were old hat to him but very new to us. Coaches gotta keep coming back to the fundamentals when new people are on their team.
Progress is being made - keep it going. Help teachers connect themselves using an RSS reader and Twitter - that is a great first step to helping them connect and learn themselves without being so dependent upon a few people at their school.
And one side note: when these things are no longer amazing, they no longer carry a premium price tag for the people who can help them with it -- so if you're an IT person worth your salt, you'd better be working in the realm of amazing to the other folks at your school, that IS what they pay you for.
Finally, it is all about learning and helping students learn. To me, when my students in 8th grade first make a video, they think it is amazing but by the time they are done with tenth grade - videos are just what you do and are part of what they know how to do. We should be part of transforming these important tools, skills, and knowledge from amazing to just what we do for the students and the teachers -- although this is something we will have to continue to do as long as new humans are being born on this planet.
Source: http://coolcatteacher.blogspot.com/2009/11/from-amazing-to-normal-taking-journey.html
Toilets are a popular research subject. They’re an essential part of modern life. Everybody has one, everybody uses them every day, and everybody has an opinion.
Manufacturers are constantly researching new designs, products, and technologies. To be approved for market, fixtures must go through rigorous tests in which the flushing mechanism is deployed thousands of times.
This toilet model, made by Niagara Conservation, flushes by a tipping bucket rather than a conventional flapper. Photo source: Aquacraft, Inc., by permission.
Several current research projects are evaluating the effectiveness of new toilet models as well as customer satisfaction with these products. Sponsored by the U.S. Environmental Protection Agency and two water providers (East Bay Municipal Utility District and the City of Tampa Water Department), these studies measure water use in single-family homes before and after the installation of high-efficiency fixtures, including toilets and clothes washers. Results from these studies should be available in 2002.
There are ongoing research efforts into toilet performance and toilet flapper durability being conducted by a number of independent laboratories. The Metropolitan Water District of Southern California bench tests toilets and flappers to ensure that they meet performance standards. The National Association of Home Builders also has a bench testing lab for toilets.
For information on toilet research you can visit the following web sites:
Source: http://www.h2ouse.org/action/details/action_element_contents.cfm?actionID=99DD5AC4-FC6D-4635-A35D-EE0C98C7B847&elementID=5812B5A5-E0BE-4D14-A202C8DAE8CE491F&parentPage=Take%20Action%7C/action/index.cfm
In its nearly 70 years, the United Nations has won many awards, including the Nobel Peace Prize several times for its work on peacekeeping, climate change, children, refugees and more.
But lesser known is that the UN has been decorated by the film industry – with an Oscar! In 1947, the UN short film “First Steps” won the Academy Award for Documentary Short Subject.
The 10-minute piece about the treatment of children with disabilities takes us on the journey of one young boy as he learns to walk – first to move his legs, then to stand and then finally to take his first steps.
Several aspects of the film are now outdated and over the last decades the UN has supported the full participation and inclusion of persons with disabilities in all aspects of society, including through the Convention on the Rights of Persons with Disabilities.
Yet the film’s overall theme of achieving a life of dignity for all is one that continues to guide the work of the UN.
The Oscar still resides at UN Headquarters in New York and recently took a quick tour of the building in honour of this weekend’s Academy Awards ceremony.
Oscar even paid a visit to Secretary-General Ban Ki-moon who wished luck to this year’s nominees!
The Academy Award also posed with several UN staff, including Deputy Secretary-General Jan Eliasson.
- Find out more about the UN’s work for persons with disabilities.
- Check out the UN’s YouTube channel for more film and video from around the world.
Source: http://blogs.un.org/blog/2014/02/28/oscar-at-the-un/
February 20, 2010 | Posted by Barry Arrington under Intelligent Design
Science Daily reports on new work examining cellular motors:
Life’s smallest motor — a protein that shuttles cargo within cells and helps cells divide — does so by rocking up and down like a seesaw, according to research conducted by scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and Brandeis University.
The researchers created high-resolution snapshots of a protein motor, called kinesin, as it walked along a microtubule, which are tube-shaped structures that form a cell’s “skeleton.” The result is the closest look yet at the structural changes kinesin proteins undergo as they ferry molecules within cells.
“We see for the first time how kinesin’s atomic-scale moving parts allow it to pull itself and its cargo along a microtubule,” says Ken Downing, a biophysicist with Berkeley Lab’s Life Sciences Division. He conducted the research with postdoctoral fellow Charles Sindelar, now at Brandeis University. “We found that there is a pivot point, where the kinesin motor attaches to the microtubule, which acts like a fulcrum and causes kinesin to rock up and down like a seesaw as it moves along the microtubule,” adds Downing. Their research is reported in the online early edition of the Proceedings of the National Academy of Sciences.
The first-ever glimpse of kinesin’s seesaw motion offers key insights into one of life’s most fundamental processes. Fueled by an energy-giving compound called ATP, kinesin proteins motor along microtubules like trains on a railroad track, towing cargo to various locations within cells and assisting in cell division. Microtubules are a cylindrical weave of proteins found throughout cells that serve as cellular scaffolding.
Writing in PNAS (www.pnas.org/cgi/doi/10.1073/pnas.0915158107, subscription required), Franck Fournio and Carolyn A. Moores state “Thus the cell’s nanomachines have evolved to use ATP only when they can couple it to essential work.”
Incredible. The language of teleology is, as always, inescapable. Let’s count the words in these two brief quotations that imply design: “motor” “railroad track” “machine” “shuttle” “seesaw” “cargo” “ferry” “fueled” “towing” “scaffolding”
They even admit they are dealing with MACHINES. Machines are designed for a purpose. It takes a staggering amount of blind faith in materialist philosophy to believe machines somehow “evolved” through unguided natural processes. Yet Fournio and Moores say “machines” evolved with no hint of irony or the slightest qualm regarding the fact that they appear to have absolutely no idea how such a thing could have happened (or if they do, they certainly do not share it with us).
Source: http://www.uncommondescent.com/intelligent-design/nanomachine-evolved/
Image courtesy of Jeroen de Vries, University of Twente
If you want to store information for a long time—like several thousand years—your best bet is still etching in stone. All the newcomers, dating back to paper and moving up through magnetic storage and DVDs, are fragile and perishable. To send a note to a million years in the future, it takes tougher stuff than what we have readily available.
That’s why the QR code above was etched into tungsten and coated in silicon nitride. Hopefully, the message it contains can outlast humanity itself. Huge aspirations, to be sure, but it’s just one step in the Human Document Project, a new group that aims to preserve a document about mankind for a million years.
“If you do nothing with the purpose of storing information for a long time, what will be stored and what will be found will be just a matter of coincidence,” explains Dr. Miko Elwenspoek, a professor at the University of Twente in the Netherlands. “In a few thousand years lots of things will be found, but what will be found, there will be no control over... What will be left will be architecture and everything in stone and not even all metal—everything in steel will be gone. The basis of our culture will drift somewhere, and during this drift lots of things will be lost.”
Elwenspoek is part of the Human Document Project, which he says isn’t a formal organization yet, but is rather “just a loose group of enthusiastic people from all over the world.” They’ve had two symposia, one in Germany and one at Stanford, with another coming up in England. They're meetings where people who work independently in their labs in their spare time can come together and exchange and develop ideas on how to preserve culture as a “gift to our far and away descendants.”
As you might imagine, there are huge practical barriers to preserving anything for a thousand millennia. Physical objects are subject to decay and destruction, and digital objects even more so. Even though we’re always saying things like, “Everything on the internet is permanent,” that adage is only really true on a human timescale, not on a geologic one—cultures drift, global catastrophes strike. Link rot is already afflicting the internet, which, far from being self-contained, relies on servers, unbroken connections and power grids.
The ideal super long-term data storage system should be able to survive without losing content; it should be self-evidently decodable for someone who doesn’t speak any currently-existent or known language, and it needs to be stored somewhere where it can eventually be found.
Jeroen de Vries, a PhD candidate at the University of Twente, headed the team that designed the QR code-bearing disk that, they say, can still be readable a million years from now. That's 25 times longer than the oldest known cave drawings. And de Vries and his team want to spare our distant descendants the mystery that surrounds prehistoric drawings, whose purpose, authors, and meanings are lost to time.
A close up of the storage device's tiny QR codes.
They chose tungsten because it is an extremely tough metal, and one that holds its shape well, even in extreme heat, as they proved by cooking the disk like bacon. They coated it in silicon nitride to toughen it up further. To prove just how densely they could pack the data on the disk, they not only etched on the QR code, they made it out of smaller QR codes.
“The QR is not important itself,” Elwenspoek, who also worked on the project, told me. “More important is that we are able to make dots, of a very small size. The [lifespan] of the dots is a very long time. That’s the main point. The QR codes are just for demonstration.”
While fairly ubiquitous now, the QR code isn't ideal, because it still needs to be decoded somehow. Elwenspoek explained why choosing a language, and a sufficiently dense encoding, for the message is one complex part of a complex problem.
“You first need a guide into the code,” Elwenspoek said. “Of course no one will be able to speak any existing language. Even in a few thousand years it’s improbable that anyone will speak English or Chinese—or understand it."
“So first you’ll need to teach them a language, then you can explain how the system works—how to crack the code, how to build a machine to read the code,” he explained.
The mechanism for reading the code could be anything from an optical microscope up to an electron microscope “for a really high density of data,” de Vries said.
“If we want high data density, we’ll probably move to something like binary or a binary language,” said de Vries. “But one of the first things that the human document project disk should do, is teach how to read the disk. It could be by images, like the record that was sent on Voyager.”
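The binary encoding de Vries mentions, whose first job is to teach the reader how to read it, can be illustrated with a toy round trip: a message becomes a grid of binary "dots," and a reader who has deduced the 8-bits-per-row convention can recover it. This is purely illustrative and is not the project's actual encoding scheme.

```python
def message_to_dots(msg, width=8):
    """Encode an ASCII message as a grid of binary "dots", one byte per row."""
    bits = [int(b) for ch in msg.encode("ascii") for b in format(ch, "08b")]
    return [bits[i:i + width] for i in range(0, len(bits), width)]

def dots_to_message(rows):
    """Recover the message, assuming the reader has worked out the
    8-bit-per-row convention (the hard part, as Elwenspoek notes)."""
    bits = "".join(str(b) for row in rows for b in row)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("ascii")
```

The encoding is trivial once the convention is known; the project's real challenge is making that convention self-evident to a reader with no shared language.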
The golden records that were loaded onto Voyager, which launched in 1977, stand as one of the most well-known efforts to preserve the human record for future cultures. They're still riding outward from Earth, bearing instructions on how to listen to the “Sounds of Earth” for any extra-terrestrials that find it.
The Human Document Project is more focused on storing something on Earth, something that will “only” last until complex life becomes impossible on Earth. (Elwenspoek pegged that at around 500 million years or so from now.) They’re looking into geologically stable places on Earth to store that future disk, considering burying something on the Moon, and even thinking about stashing something in a stable part of the solar system for safe keeping.
It’s an incredibly compelling problem—one that leads to solutions so creative they border on bizarre. Taking a totally different approach, Canadian poet Christian Bok wanted to translate a poem into DNA and splice it into a bacterium, which would reproduce and preserve the writing until the Sun swallowed the Earth. The openness and complexity are what drew Elwenspoek to the field.
“First of all thinking of these things is just fun,” he said. “It’s very complex; you have to talk to many different people with different backgrounds—linguists, scientists, computer scientists. The choice of what to preserve needs people who know literature and music and culture so that’s a fun part.”
Fun aside, Elwenspoek is also motivated by trouble he sees in the long-term: a world heavy on people and getting heavier, short on resources and getting shorter, and with enough nuclear warheads around that conflicts can turn catastrophic.
I jokingly responded that with all the pressure, they’d better hurry up and get this human document going. “That’s what I’m saying,” was his straight-faced reply.
“So there is enough concern that there will come a major catastrophe within the next few thousand years and this could destroy our culture,” he explained. “In 10,000 years, something terrible could happen.”
Source: http://motherboard.vice.com/blog/the-project-to-preserve-humanitys-data-for-a-posthuman-future
Despite the abundance of earthworms in soils all around the world, there is a lack of information concerning the geographical distribution of many lumbricid species. Researchers from eight European countries have collected information on earthworm communities to map the biodiversity of these invertebrates and to put soil conservation on the political agenda.
Even though we cannot see them, there are numerous, unevenly distributed organisms which are living in the top most layer of the Earth's crust. The soil, considered a "forgotten resource", is home to more than a quarter of our planet's biodiversity. One single gram of healthy soil contains millions of organisms, one of the reasons why 2015 was named the International Year of Soils.
"In 2015, several initiatives were organised with the aim of bringing some justice to this system (edaphic environments) that we step on every day and feed on, and that makes it possible for the forests, meadows and crop fields, among others, to function properly," says María Jesús Briones to SINC, a researcher at the University of Vigo and one of the authors of a study that, for the first time, gathered information on the biodiversity of one of the groups of terrestrial invertebrates that has a significant impact on soils: the Lumbricidae, or earthworms.
During the study, published in the journal Applied Soil Ecology, scientists from eight different countries, including Spain, created the first large-scale European map of earthworm abundance and diversity in addition to distribution maps of widespread species such as Aporrectodea caliginosa and Lumbricus terrestris.
In recent years, "the classification of edaphic invertebrates and their distribution patterns have not received priority for funding, meaning that a lot of information from unpublished studies has not been digitised," states the researcher regretfully. In addition, these animals, affected by land use, were not well documented in the records, even though their presence and role in the ecosystem greatly enhance the quality of the soils where they live.
In total, the research team analysed earthworm records from 3,838 locations in eight European countries, and the results were modelled in order to extend them to the rest of the continent. Outside Europe, however, "there is not enough quantitative data concerning the properties of soil to be able to make accurate estimations," stresses Briones.
Completing the map in Spain
France, Ireland and Germany were the top countries for gathering information on the biodiversity of these invertebrates (France alone collected data from 1,423 locations), thanks to access to substantial funding to study the entire territory. On the other hand, there is still a lot to be done in Spain.
In fact, the records from Spain focus on approximately one third of the country's territory: this study included earthworm data from 63 sites representing four provinces in northwestern Spain (Asturias, Leon, Zamora and Salamanca). According to the expert, "the urgency now is to gather more data in order to validate the data we already have".
For this reason, the researcher is currently overseeing a project to collect more data. Specifically, "we are digitising the data from Galicia to complete the northwestern Iberian Peninsula," highlights Briones, who included data from her thesis in this study. The ultimate objective is to make further contributions, even from Portugal and Italy, to obtain detailed maps of the diversity and abundance of the European earthworm.
"The study is the first step to creating a database of European earthworms, which needs to be improved on," point out the authors. Given the environmental importance of earthworms, whose communities reflect the quality of their habitat, the study strives for a better understanding of these invertebrates as well as improved monitoring.
"We hope that studies such as this one put a greater weight on the need to understand the diversity of these invertebrates that are so important to the proper functioning of soils," concludes Briones, who is also involved in another initiative, the Global Soil Biodiversity Atlas, which will soon be published.
Rutgers, Michiel, et al. "Mapping earthworm communities in Europe." Applied Soil Ecology 97 (January 2016): 98-111. DOI: 10.1016/j.apsoil.2015.08.015
Electronics Components: Parallel Resistors
So how do you calculate the total resistance for resistors in parallel on your electronic circuit? Put on your thinking cap and follow along. Here are the rules:
First, the simplest case: Resistors of equal value in parallel. In this case, you can calculate the total resistance by dividing the value of one of the individual resistors by the number of resistors in parallel. For example, the total resistance of two, 1 kΩ resistors in parallel is 500 Ω and the total resistance of four, 1 kΩ resistors is 250 Ω.
Unfortunately, this is the only case that's simple. The math when resistors in parallel have unequal values is more complicated.
If only two resistors of different values are involved, the calculation isn't too bad:

R_total = (R1 × R2) ÷ (R1 + R2)

In this formula, R1 and R2 are the values of the two resistors.

Here's an example, based on a 2 kΩ and a 3 kΩ resistor in parallel:

R_total = (2,000 × 3,000) ÷ (2,000 + 3,000) = 6,000,000 ÷ 5,000 = 1,200 Ω
For three or more resistors in parallel, the calculation begins to look like rocket science:

1 ÷ R_total = 1 ÷ R1 + 1 ÷ R2 + 1 ÷ R3 + . . .

The dots at the end of the expression indicate that you keep adding up the reciprocals of the resistances for as many resistors as you have.

In case you're crazy enough to actually want to do this kind of math, here's an example for three resistors whose values are 2 kΩ, 4 kΩ, and 8 kΩ:

1 ÷ R_total = 1 ÷ 2,000 + 1 ÷ 4,000 + 1 ÷ 8,000 = 7 ÷ 8,000, so R_total = 8,000 ÷ 7 ≈ 1,142.857 Ω
As you can see, the final result is 1,142.857 Ω. That's more precision than you could possibly want, so you can probably safely round it off to 1,142 Ω, or maybe even 1,150 Ω.
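If you'd rather let a computer do the reciprocal arithmetic, the whole calculation fits in a few lines of Python (a sketch; resistance values in ohms):

```python
def parallel_resistance(*resistances):
    """Total resistance of resistors wired in parallel (values in ohms)."""
    # Add up the reciprocal of each resistance, then take the
    # reciprocal of the sum -- the reciprocal rule described in the text.
    return 1 / sum(1 / r for r in resistances)

# Two equal 1 kΩ resistors: half the value of one of them.
print(parallel_resistance(1000, 1000))                   # 500.0
# The 2 kΩ, 4 kΩ, and 8 kΩ example worked in the text.
print(round(parallel_resistance(2000, 4000, 8000), 3))   # 1142.857
```

The same function handles two resistors or twenty, since the reciprocal rule doesn't care how many terms you add.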
The parallel resistance formula makes more sense if you think about it in terms of the opposite of resistance, which is called conductance. Resistance is the ability of a conductor to block current; conductance is the ability of a conductor to pass current. Conductance has an inverse relationship with resistance: When you increase resistance, you decrease conductance, and vice versa.
Because the pioneers of electrical theory had a nerdy sense of humor, they named the unit of measure for conductance the mho, which is ohm spelled backward. The mho is the reciprocal (also known as inverse) of the ohm.
To calculate the conductance of any circuit or component (including a single resistor), you just divide the resistance of the circuit or component (in ohms) into 1. Thus, a 100 Ω resistor has 1/100 mho of conductance.
When circuits are connected in parallel, current has multiple pathways it can travel through. It turns out that the total conductance of a parallel network of resistors is simple to calculate: You just add up the conductances of each individual resistor.
For example, suppose you have three resistors in parallel whose conductances are 0.1 mho, 0.02 mho, and 0.005 mho. (These are the conductances of 10 Ω, 50 Ω, and 200 Ω resistors, respectively.) The total conductance of this circuit is 0.125 mho (0.1 + 0.02 + 0.005 = 0.125).
One of the basic rules of doing math with reciprocals is that if one number is the reciprocal of a second number, the second number is also the reciprocal of the first number. Thus, since mhos are the reciprocal of ohms, ohms are the reciprocal of mhos.
To convert conductance to resistance, you just divide the conductance into 1. Thus, the resistance equivalent to 0.125 mho is 8 Ω (1 ÷ 0.125 = 8).
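The same numbers can be checked by taking the conductance route; this short sketch converts ohms to mhos, adds them up, and converts back:

```python
def to_mhos(ohms):
    # Conductance is the reciprocal of resistance.
    return 1 / ohms

def to_ohms(mhos):
    # ...and resistance is the reciprocal of conductance.
    return 1 / mhos

# Conductances of 10 Ω, 50 Ω, and 200 Ω resistors in parallel.
total_conductance = to_mhos(10) + to_mhos(50) + to_mhos(200)
print(total_conductance)           # 0.125 mho
print(to_ohms(total_conductance))  # 8.0 ohms
```

That final step — reciprocal, sum, reciprocal again — is exactly the parallel resistance formula in disguise.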
It may help you remember how the parallel resistance formula works when you realize that what you're really doing is converting each individual resistance to conductance, adding them up, and then converting the result back to resistance. In other words, convert the ohms to mhos, add them up, and then convert them back to ohms. That's how — and why — the resistance formula actually works.
Remember Watson? The supercomputer made star status when it competed on the game show Jeopardy. Now, IBM and Cleveland Clinic are collaborating to give Watson a new assignment: helping healthcare workers make faster decisions.
The IBM researchers who created Watson will work with Cleveland Clinic clinicians, faculty and medical students to build up the capabilities of Watson's Deep Question Answering technology in the medical field. The goal is to unlock important knowledge and facts buried within huge volumes of medical information.
"Every day, physicians and scientists around the world add more and more information to what I think of as an ever-expanding, global medical library," said C. Martin Harris, M.D., chief information officer of Cleveland Clinic. "Cleveland Clinic's collaboration with IBM is exciting because it offers us the opportunity to teach Watson to 'think' in ways that have the potential to make it a powerful tool in medicine. Technology like this can allow us to leverage that medical library to help train our students and also find new ways to address the public health challenges we face today."
Tapping Watson's Strengths
Instead of trying to memorize everything in textbooks and medical journals -- now acknowledged as an impossible task -- students are learning through doing. In other words, students are taking patient case studies, analyzing them, coming up with hypotheses, and then finding and connecting evidence in reference materials and the latest journals to identify diagnoses and treatment options in the context of medical training.
That's one of Watson's core strengths. As part of the collaboration, medical students will interact with Watson on challenging cases as part of a problem-based learning curriculum and in hypothetical clinical simulations. A collaborative learning and training tool that taps Watson technology will help the students learn the process of navigating the latest content, suggesting and considering a variety of hypotheses and finding key evidence to support potential answers, diagnoses and possible treatment options.
"The practice of medicine is changing and so should the way medical students learn," said Dr. David Ferrucci, IBM Fellow and principal investigator of the Watson project. "In the real world, medical case scenarios should rely on people's ability to quickly find and apply the most relevant knowledge. Finding and evaluating multi-step paths through the medical literature is required to identify evidence in support of potential diagnoses and treatment options."
Students Make Watson Smarter
For their part, students will help improve Watson's language and domain analysis capabilities by judging the evidence it provides and analyzing its answers within the domain of medicine. The collaboration will also focus on leveraging Watson to process an electronic medical record (EMR) based on a deep semantic understanding of the content within an EMR.
IBM expects Watson will get "smarter" about medical language and how to assemble good chains of evidence from available content. Students will learn how to focus on critical thinking skills and how to best leverage informational tools like Watson in helping them learn how to diagnose and treat patients.
"New discoveries and medical breakthroughs are growing our collective knowledge of medicine at an unprecedented pace, and tomorrow's doctors will have to embrace new tools and technology to complement their own knowledge and experience in the field," said James Stoller, M.D., chair of the Education Institute at Cleveland Clinic. "Technology will never replace the doctor, but it can make us better. Our students and faculty are excited to play a role in getting us there."
Lori J. Rachul
NASA Lewis Research Center
FORMER NASA OFFICIAL RECEIVES GUGGENHEIM MEDAL
CLEVELAND, OH -- Abe Silverstein, a leading figure in 20th century aerospace engineering and a former NASA center director, was presented the prestigious Guggenheim Medal today by representatives from the Guggenheim Medal Fund and the American Institute of Aeronautics and Astronautics.
The medal, established in 1927, honors those who have made significant contributions to the advancement of flight. Silverstein joins the distinguished company of previous winners that include Orville Wright, William Boeing, Donald Douglas, James Doolittle, Charles Lindbergh, James McDonnell, Jr. and Clarence "Kelly" Johnson.
Silverstein was selected to receive the award by representatives from the U.S., Canada and six European countries. Silverstein's citation praises his "technical contributions and visionary leadership in advancing technology of aircraft and propulsion performance, and foresight in establishing the Mercury and Gemini manned space flight activities."
"Lewis is an outstanding center because of the contributions made by many dedicated researchers and leaders who have gone before us. Dr. Silverstein stands head and shoulders above all others in terms of contributions in the areas of aeronautics and space. It is for this reason that he is richly deserving of the award," NASA Lewis Director Donald Campbell said.
Silverstein began his career at the National Advisory Committee for Aeronautics' (NACA) Langley Research Center, Hampton, VA, in 1929. There, he helped design and was in charge of the full-scale wind tunnel. He directed significant aerodynamic research that led to higher-speed performance for most of the United States' World War II combat aircraft.
In 1943, Silverstein was transferred to the NACA laboratory in Cleveland where he directed research in the historic Altitude Wind Tunnel that was later named for him. This work led to outstanding improvements in both reciprocating and early turbojet aircraft engines such as the development of supersonic jet afterburners. He also pioneered research on large-scale ramjet engines.
After World War II, Silverstein was instrumental in the development of U.S. supersonic propulsion wind tunnels that supported work on supersonic aircraft. In 1958, he moved to NACA Headquarters in Washington, DC, where he helped create and subsequently directed the efforts leading to the Mercury space flights and established the technical basis for the Apollo program to send U.S. astronauts to the Moon. He also is credited with proposing the name "Apollo" for the lunar landing program.
He returned to Cleveland to become Director of NASA's Lewis Research Center from 1961-1969. Silverstein oversaw expansion of the center and was a driving force behind creation of the Centaur launch vehicle.
- end -
The leaner the air / fuel mixture, the more frugal the engine is. But there are two obstacles that prevent conventional engines from operating in a lean air / fuel mixture:
Today, Lean Burn technology has evolved into Direct Injection, which is basically the former combined with direct fuel injection. Toyota, Mitsubishi and Nissan all concentrate on DI engine development.

Mitsubishi claims the GDI consumes 20 to 35% less fuel, generates 20% less CO2 emissions and delivers 10% more power than conventional engines. How can it be so magical? The following paragraphs will tell you its secret.
Theory of GDI
Gasoline direct injection technology is one of the branches of "Lean Burn Technology". Where it differs from Lean Burn is in the adoption of a direct fuel injection system.
Direct fuel injection has been used in diesel engines for many years, but only recently in petrol engines. Inherently, direct injection has two advantages:

How could Mitsubishi apply direct injection without such problems?
The fuel injector is another new feature. It pumps out the fuel at higher pressure, enabling better pulverisation and a more uniform spread.
Fuel injection takes place in two phases. During the intake stroke, some fuel is "pre-injected" into the combustion chamber, cooling the incoming air (thus improving volumetric efficiency) and ensuring an even fuel / air mixture everywhere.
The Mitsubishi GDI engine has an extraordinarily high compression ratio of 12.5 : 1, perhaps the highest on record for a production petrol engine. The result is higher power output.
How can it prevent combustion knock under such pressure? The secret is the pre-injection process. During compression, the heated air is cooled by the fuel spray, so knocking is less likely to occur.
One of the few drawbacks of the GDI engine is its higher NOx pollutant level. Luckily, a newly developed catalytic converter deals comfortably with it. Nevertheless, the USA and many developing countries cannot benefit from it because their high-sulphur petrol will damage the catalyst.
Also see: The Problem of GDI in Europe
As tested by a UK magazine, the Mitsubishi Carisma GDI did not deliver higher fuel efficiency than competitors with conventional engines, very different from what the company claimed. This was simply not explainable until Renault launched its own direct injection petrol engine recently. In Renault's press release material, there is an implication that "a Japanese design" suffers from the relatively high-sulphur fuel in Europe, which contains 150 ppm compared with Japan's 10-15 ppm (although still a lot lower than that of the US). In Japan the GDI needs a special catalyst to clean the excessive NOx generated under ultra-lean combustion. However, the high-sulphur fuel could "pollute" the catalyst and make it permanently ineffective.
Therefore the European Carisma GDI runs at a much richer air / fuel mixture than its Japanese sisters in order to reduce NOx, hence requiring only a normal catalyst. While the Japanese GDI achieves a fuel / air ratio of 1 : 40 at light load, the European GDI can only reach 1 : 20 or so, compared with a conventional engine's 1 : 14. This greatly reduces fuel efficiency.
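As a rough illustration of what those ratios imply (an assumption-laden sketch: it simply treats fuel flow as air mass divided by the air/fuel ratio, ignoring load, throttling and combustion efficiency), the relative fuel injected per kilogram of air works out as:

```python
# Air/fuel ratios quoted in the text (parts of air per part of fuel).
ratios = {
    "conventional engine": 14,
    "European GDI (light load)": 20,
    "Japanese GDI (light load)": 40,
}

baseline = 1 / ratios["conventional engine"]  # fuel per kg of air, conventional
for name, afr in ratios.items():
    fuel = 1 / afr  # leaner mixture -> less fuel for the same air
    print(f"{name}: {fuel / baseline:.0%} of conventional fuel flow")
```

At light load, then, the Japanese calibration injects roughly a third of the fuel a conventional engine would for the same air, which is where the headline economy claims come from — and the European 1 : 20 calibration gives away half of that advantage.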
Another problem lies in the different testing methods of Japan and Europe. The test carried out by the Transportation Department of Japan was done on a route and under conditions consisting mostly of light-load operation, which suits the GDI's character (at light load the GDI runs in its 1 : 40 lean mode, otherwise in the 1 : 14.5 normal mode). Europe's combined-cycle test requires much more high-load, high-speed operation, thus resulting in mpg figures far worse than Japan's claims.
Renault’s IDE (Injection Direct Essence)
Renault launched the first European direct injection petrol engine. It avoids the troubles encountered by Mitsubishi by implementing in a completely different way.
Instead of pursuing an ultra-lean air / fuel mixture, they adopted ultra-high EGR (Exhaust Gas Recirculation). EGR, as mentioned here before, reduces fuel consumption by reducing pumping loss as well as by reducing the effective engine capacity during light or part load. At the lightest load, Renault's IDE engine enables as much as 25% EGR, compared with a conventional car's 10-15%.
How can the IDE engine run at 25% EGR without failing to combust? Thanks to the direct injection, which is at the center of the cylinder head in place of the spark plug. The latter is relocated to the side nearby, very close to the injector outlet. The Siemens injector sprays high-pressure fuel (at 100 bar, or 1,450 psi) directly into the combustion chamber. As the inclined spark plug sits right in the path of the fuel spray, successful combustion is guaranteed even with 25% exhaust gas in the chamber.
Without precise direct injection, conventional engines pulverise the fuel spray in the induction port, so the mixture enters the combustion chamber uniformly. As a result, it is impossible to concentrate more fuel near the spark plug.
Depending on engine load, the IDE runs at one of three preset EGR ratios, among which the full-load mode has no exhaust gas recirculation at all, for the sake of maximum power. Therefore, like the GDI, running at full load saves no fuel. However, overall Renault claims a 16% reduction in real-world fuel consumption, that is, according to the European test method. Well done.
Another point to note is the enhanced performance. The 1,998 cc engine outputs a solid 140 hp and a class-beating 148 lbft. As a comparison, the non-IDE but variable-valve-timing-equipped version outputs the same 140 hp but merely 139 lbft of torque. Not even the VVT matches the IDE.
The gain in performance is due to the increase of compression ratio to an unusually high 11.5 : 1 (the GDI is even higher, at 12.5 : 1). As in the Mitsubishi, a pre-injection prior to the normal injection helps cool the combustion chamber, thus raising knock resistance and enabling a higher compression ratio.
To reduce the time needed to bring the catalyst to its operating temperature, apart from using a converter and a pre-heated engine, Mercedes also tried to reduce the surface area of the exhaust port by using a single exhaust valve in each cylinder rather than two.
Mercedes 3-valve V6, one of the Ten Best Engines in AutoZine's engine award.
Of course, the drawback is some power loss. Therefore many other technologies were employed to compensate: variable valve timing, a variable intake manifold and twin-spark ignition.
The Pew report "Mapping the Global Muslim Population: A Report on the Size and Distribution of the World's Muslim Population" concluded that the worldwide Muslim population was 1.6 billion and that the majority of Muslims today reside beyond the birthplace of Islam. It is estimated that over 30% of Muslims – the largest share – reside in Hind, which comprises India, Pakistan and Bangladesh.
Contact between the Muslims of Baghdad – the Abbasid Khilafah [800-1300 CE] – and the region took place before the expedition of Muhammad bin Qasim, which has come to be seen as a seminal episode in the region's history books. The Arabs used to visit the coast of Southern India, which then provided the link between the ports of South and South East Asia. After the Arab traders became Muslim, they brought Islam to South Asia. A number of local Indians living in the coastal areas embraced Islam through contact with the Muslim traders.
Over a period of 500 years Islam expanded across the Indus plains. By the 12th century Islam had reached Delhi. The Islamic conquests were different from the previous invasions of the region, as many conquerors did nothing regarding the prevailing customs and in many cases assimilated into the existing architecture. Hind had an abhorrent caste system which differentiated between people on ethnic lines, leading to the supremacy of princely rulers who enslaved many to work in their fields in return for basic wages. As Hindu and Buddhist kingdoms came under the fold of Islam, the Khilafah became a highly centralising force that facilitated the creation of a common legal system that gradually replaced the caste system.
Letters of credit issued in Egypt or Tunisia were honoured in India and in order to create and sustain such an internationally consistent legal system, local and traditional systems of governance were uprooted.
In an analysis of the Muslim conquest of the Indian subcontinent, one researcher highlighted: “Unlike earlier conquerors who assimilated into prevalent social systems, Muslim conquerors retained their Islamic identity and created legal and administrative systems that challenged and destroyed existing systems of social conduct, culture, religious practices, lifestyle and ethics.”[i]
The Muslims when they came to the region introduced a new culture, which was very different from the existing cultures. Muhammad bin Qasim set up an administrative structure that incorporated a newly conquered land, inhabited by non-Muslims. He adopted a conciliatory policy, asking for acceptance of Muslim rule by the natives in return for non-interference in their religious practices, so long as the natives paid Jizya, without forced conversion. Shari’ah law was applied over the people of the region; however, Hindus were allowed to settle their marital disputes according to their own laws. Hindus and Buddhists were inducted into the administration as trusted advisors and governors.[ii] A Hindu, was at one point the second most important member of Muhammed bin Qasim’s administration.[iii]
Islam created a system where political power, law and worship became fused in a manner so as to safeguard the interests of all people. This stability led to the subcontinent to become the hub between the Far East and the Mediterranean. The Khilafah has also been credited for creating the Karkhanas – small factories in the subcontinent. New towns were created that specialised in a particular category of manufactured goods, which led to development and prosperity that had not been seen for centuries.
While the spread of Islam in the Sub-continent is the story of untiring efforts of numerous saints and Sufis who dedicated their lives to the cause of Islam, by the time the Muslims conquered Delhi and established what came to be known as the Delhi Sultanate, Sufi fraternities had come into being and the Sufi influence was far more powerful than it was in earlier days under the Arabs in Sindh.
One Hindu historian described this period as follows: “Throughout its existence the Delhi Sultanate (1205-1526), remained a legal part of the worldwide Muslim empire functioning under the de jure suzerainty of the Abbasid caliphs. Sultans considered themselves the deputies of the caliph and derived their validity of their administrative and legal authority only on the basis of delegation. Since the supreme authority of the community legally remained with the caliph, every king and potentate claimed to exercise governmental power for, and on behalf of the Imam of Islam.”[iv]
In this way many people who were discriminated against due to the caste system embraced Islam.
It was only the weakness that overcame Muslims in understanding Islam that became the cause of their decline as under Islam the people of the region only saw progress.
Today, Hind is widely, though incorrectly, recognised as encompassing India alone. In spite of the apparent economic growth of India today, the region is the most poverty stricken in the world after Africa. The resources of the region have not been marshalled for the people and corruption runs rampant as it was before Islam came to the region.
We also unfortunately see the ancient caste system return to the region which has turned the region into hereditary groups creating huge fault-lines which has led to the ugly head of sectarianism returning to the region when the early Muslims dedicated their lives to removing it.
The Hind region today is a far cry from the heights it achieved in the past.
[i] M. S. Asimov and C. E. Bosworth, 'History of Civilizations of Central Asia,' Vol. IV: The Rise of Islam and Nomadic and Military Empires in Central Asia, Paris: UNESCO Publishing, 1998
[ii] Nicholas F. Gier, FROM MONGOLS TO MUGHALS: RELIGIOUS VIOLENCE IN INDIA 9TH-18TH CENTURIES, Presented at the Pacific Northwest Regional Meeting American Academy of Religion, Gonzaga University, May, 2006 http://www.class.uidaho.edu/ngier/mm.htm
[iii] H. M. Elliot and John Dowson, The History of India as Told by Its Own Historians, (London, 1867-1877), vol. 1, p. 203.
[iv] Shashi S. Sharma, Caliphs and Sultans – Religious ideology and political praxis, pg. 247
A new study in the Proceedings of the National Academy of Sciences sheds light on a question that continues to vex industry executives and policymakers alike: How significant are fugitive methane emissions from oil and gas production?
The study claims that U.S. methane emissions may be as much as 50 percent higher than estimates in the Environmental Protection Agency’s (EPA) annual Inventory – the equivalent to adding 2.3 million cars to the road. Most significantly, the study’s authors assert that methane that is leaked or vented during oil and gas production—aka “fugitive methane”—may be up to five times greater than current estimates. If these results are correct and applicable to oil and gas development nationwide, it would fundamentally alter the scale of the fugitive methane problem and seriously undermine any climate advantage natural gas possesses over coal.
Here, we seek to address some of the biggest questions this new study raises.
What Does the Study Say?
The study’s authors, led by researchers from Harvard University, used atmospheric measurements of methane – a greenhouse gas at least 25 times as powerful at trapping heat as CO2 – from aircraft and stationary towers. Overall, the study found that methane emissions from all U.S. sources—including agriculture, oil and gas development, landfills, and coal mining—are 50 percent greater than estimates from the EPA Inventory, which recently lowered its estimate of methane emissions from oil and gas systems. The study found evidence of even greater discrepancies in individual sectors; methane emissions from livestock, for example, were estimated at twice the level estimated by EPA. But it was the oil and gas sector – the largest source of methane emissions in the United States – where the variance was greatest, and which is the greatest cause for concern.
Measurements of methane were taken in 2007 and 2008 in Texas and Oklahoma, the locus of U.S. oil and gas extraction during that time period. The authors estimated that the region’s methane emissions were 3.7 million metric tons of CO2 equivalent, almost five times greater than the 0.75 million metric tons previously estimated (according to the study, methane emissions from oil and gas operations are 2.3-7.5 times greater than earlier estimates). Furthermore, the methane was highly correlated with propane, a byproduct of oil and gas extraction and refining, which strengthens the researchers’ contention that these methane emissions are coming from oil and gas production.
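The arithmetic behind the study's comparison is easy to reproduce. This sketch uses the figures quoted above and the factor-of-25 warming potential mentioned earlier (which GWP value the authors actually applied is an assumption here):

```python
GWP_CH4 = 25  # CO2-equivalence multiplier for methane, as cited in this article

study_estimate = 3.7   # million metric tons CO2e, measurement-based estimate
prior_estimate = 0.75  # million metric tons CO2e, earlier inventory figure

# "Almost five times greater":
print(f"{study_estimate / prior_estimate:.1f}x the earlier estimate")
# Mass of methane itself implied by the CO2-equivalent figure:
print(f"~{study_estimate / GWP_CH4:.2f} million metric tons of CH4")
```

The ratio comes out at about 4.9, matching the "almost five times greater" in the text; the CO2e-to-methane conversion shows how a modest tonnage of methane carries an outsized climate impact.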
What Are the Implications of this Study? And What Are the Caveats?
If fugitive methane emissions from oil and gas systems are indeed five times greater than previously estimated, that would imply a leakage rate in the range of 5-15 percent of total production. Not only is this a substantial quantity of product vanishing into the air (natural gas is primarily methane), it is also significantly higher than most previous estimates – including from industry – and would reduce or eliminate any advantage natural gas has over coal from a climate standpoint. While natural gas emits roughly half the CO2 of coal at the point of combustion, because methane is such a powerful greenhouse gas, any fugitive methane that escapes during the drilling, processing, or transmission of natural gas serves to lessen that benefit.
However, it’s important to note that much has changed since Harvard researchers took their measurements in 2007 and 2008. The boom in natural gas development due to hydraulic fracturing had not yet begun in earnest; there are now hundreds of thousands of hydraulically fractured wells across the United States. How their emissions compare to those of conventional wells is still an open question. The industry has also changed in important ways in the last five years. We know a lot more about where fugitive methane emissions come from and how to address them. While it’s clear that some companies are taking advantage of best practices to reduce these emissions, there are also now many more companies operating in this space—it’s impossible to determine how many of them are actively reigning in fugitive methane. And while the Harvard researchers plan to repeat their measurements with data from 2012, even these updated estimates may overstate current emissions. EPA rules affecting some of the larger sources of U.S. methane emissions went into effect just this year.
Despite these caveats, this is nonetheless a valuable and important study—one that sheds light on an issue that remains all too murky. The EPA Inventory uses bottom-up estimates and engineering calculations in determining emissions levels. While this data is still the definitive source for all U.S. greenhouse gas emissions, it’s far from perfect. Any direct measurement data helps bring clarity to the confusing world of fugitive methane emissions.
How Does this Study Compare to Other Recent Studies?
Another recent study, led by researchers at the University of Texas at Austin, measured methane emissions from natural gas development. However, the two studies are not entirely comparable for a number of reasons. The most significant is that the UT Austin study looked only at the production stage of natural gas, while the Harvard study used atmospheric measurements to estimate methane emissions from all sources. The UT Austin measurements were also more recent and demonstrate the impact that the aforementioned EPA rules are having on fugitive methane emissions during the production stage. It is worth repeating that, despite widespread reports to the contrary, the UT Austin study does not confirm that EPA estimates of methane emissions from natural gas systems are correct and accurate. Although the study’s estimated leakage rate for production operations was in line with EPA’s estimate, several processes leaked more than previously thought, while another (well completions) emitted much less due to the effectiveness of recent EPA rules.
What More Can Be Done?
Ultimately, more direct measurements are needed to get a better handle on the size and scale of the fugitive methane problem. We do, however, know enough about how to reduce fugitive methane now—for the good of the environment and human health.
In addition to the EPA rules, many states have taken a lead role in addressing air and water pollution from natural gas development. Colorado, for example, has proposed rules that would require companies to monitor new and existing oil and gas infrastructure for leaks, and to repair those leaks within 15 days. While Colorado could still strengthen its rules – for example, by requiring the immediate repair of leaking equipment – the state is demonstrating to both the federal government and other states that natural gas development can be improved without economic harm. With the litany of policies and cost-effective technologies available to governments and industry, we can reduce fugitive methane emissions and take a step toward preventing the worsening impacts of climate change.
- LEARN MORE: Download WRI's publication, Clearing the Air: Reducing Upstream Greenhouse Gas Emissions from U.S. Natural Gas Systems
Description: This is a detail of a map of Florida showing counties, railroads, cities, inland waters, etc. for Madison County. Each color represents a different route. The origin and destination for each route is found on the main map in an Explanation key. Features of this detail include the Aucilla River, Madison, and Westfarm.
Place Names: Madison, Aucilla, Greenville, Hamburg, Madison, Ellaville, Westfarm, Moseley Hall, Harmony,
ISO Topic Categories: inlandWaters, oceans, boundaries, transportation
Keywords: Madison County, physical, historical, political, transportation, physical features, county borders, railroads, inlandWaters, oceans, boundaries, transportation, Unknown,1889
Source: Wm. M. Bradley and Bros., Bradley's atlas of the world for commercial and library reference (Philadelphia, PA: Wm. M. Bradley and Bros., 1889) 240-241
Map Credit: Courtesy of the Special Collections Department, University of South Florida.
In How Learning Works, Susan Ambrose and her colleagues identify the following principle: "How students organize knowledge influences how they learn and apply what they know."
Students often struggle to connect new concepts to what they already know. Concept mapping can help students make sense of complex information by providing a means of organizing and representing knowledge. A visual format can help students gain an overview of the concepts being studied. Concept maps can be used as a tool for learning, for teaching, and for assessment. In this interactive session, you will learn about these different ways to use concept maps in your course.
GDANSK, Poland (AP) — The outbreak of World War II 75 years ago shows why Europe must put an end to the war in Ukraine now, Poland's prime minister said Monday.
Prime Minister Donald Tusk, chosen by EU leaders to be the next president of the European Council, spoke at the Westerplatte peninsula on the Baltic coast, where some of the first shots of World War II were fired on Sept. 1, 1939, by the Nazi warship Schleswig-Holstein. Two weeks later Soviet troops invaded from the east, acting on a Moscow deal with Germany to carve up Poland.
More than five years of brutal, global war followed, taking the lives of almost 60 million people.
"Today, looking at the tragedy of Ukraine at war — because we should use this word — in the east of our continent, we know that September 1939 must not be repeated," Tusk said.
He said the lesson that Europe should draw from its past "must not be a lesson of naive optimism" because the continent's security requires "courage, imagination and resolute action."
Europe's security is the top priority for a NATO summit Thursday in Wales. Mindful of their painful history, Poland and the Baltic nations are calling for a sizeable, permanent presence of NATO troops on their territory, a goal that may prove hard to achieve.
"There is still time to stop all those in Europe and in the world for whom violence, force, aggression are again becoming an arsenal of political activity," Tusk said.
German President Joachim Gauck, also speaking at Westerplatte, called for the 28-nation European Union to "stand together."
"Stability and peace on our continent are in danger again," Gauck said. "We will oppose those who break international law, annex foreign territory and provide military support to breakaway movements in foreign countries."
His comments were a clear reference to Russia's actions in eastern Ukraine.
There are two types of rabbits you're likely to run into: wild and domesticated. Wild rabbits run free through the countryside, while domesticated rabbits are usually kept as pets. Both varieties are herbivores which like to eat small amounts throughout the day. However, because of their different living conditions, the two also have very different diets.
Most domestic rabbit owners provide hay at all times for their pets, but bunnies also love fresh vegetables as treats. Before you throw out unwanted produce such as carrot tops, leftover salad fixings or cabbage, toss it to your grateful bunny instead! Many rabbit owners also grow their own food for their rabbits. Lettuce and carrots are both high-yield plants and bunny favorites. Be careful not to feed too much of a new veggie too soon, though, as rabbits need time to adjust to any changes in their diets. You should also avoid sweet fruits and other sugary foods, which are not part of a rabbit's natural biology. If you have found a baby rabbit or want to know what wild rabbits eat, keep reading for the information you need.
Wild rabbits don't have an owner to feed them, and so must forage for themselves. They sleep in large, community burrows and begin grazing at dawn when they are less likely to be attacked by predators. Wild rabbits nibble grass, preferably all day if they feel safe, and may also eat leafy ground plants and flowers. Because they may be scared away from their food, the first minutes of grazing are the most important for rabbits. They quickly eat whatever is available, usually grass. Once their basic hunger is satisfied, they can afford to be a little more choosy with their dining. So, what do rabbits eat? Clover is a favorite, as well as anything they can nab from a local garden. Almost every gardener knows what it's like to come out in the morning and find that their precious plants have provided a feast for the neighborhood rabbit warren.
All that grass means that a rabbit's diet is very high in fiber. Because of this, it's hard for them to digest everything efficiently. In response, rabbits have developed a rather gross way of reclaiming the nutrients they excrete too quickly: they eat their own droppings! Much like a cow chews cud, rabbits actually excrete a special kind of dropping that is soft and full of plant material. After being processed a second time, the food comes out as the hard pellets you often see on lawns.
If you have found an abandoned baby bunny in the wild, it may be best to put it back where you found it. Rabbit nests are often located in the open on lawns or lightly overgrown areas. The mother returns to the nest twice a day to nurse her kits and care for them. Unless you know the mother is dead, it's likely she will return for her kits later that night. If returning the kit is not an option, call a local wildlife shelter, rescue group or vet. They will be able to take the rabbit in and give it the care it needs. This is especially important if the bunnies still have their eyes closed, as they are very fragile and need professional care. Raising a wild rabbit is very difficult, and it may be better to save yourself the heartache of losing a baby bunny.
If you have a domestic orphaned bunny, it will need intensive care to keep alive. It must be kept warm and hydrated, and will likely need a milk substitute. A local tack store or vet may carry dried rabbit milk, but you will more likely need to buy goat's milk instead. Even the goat's milk sold at a grocery store is fine for use with rabbits. Sit the baby upright before feeding, and only feed it twice a day. This is the same schedule a mother rabbit would follow, so don't be tempted to overfeed the kit. Use a syringe to administer the milk and make sure not to choke the kit with too much at once. Caring for an orphaned bunny may be one of the hardest things you ever do, but if done right the rewards of saving a life can be well worth the effort.
Definition from Wiktionary, the free dictionary
dissociation (plural dissociations)
- The act of dissociating or disuniting; a state of separation; disunion.
- (chemistry) The process by which a compound body breaks up into simpler constituents; said particularly of the action of heat on gaseous or volatile substances.
- the dissociation of the sulphur molecules
- the dissociation of ammonium chloride into hydrochloric acid and ammonia
- (psychology) A defence mechanism where certain thoughts or mental processes are compartmentalised in order to avoid emotional stress to the conscious mind.
- "Project MONARCH could be best described as a form of structured dissociation and occultic integration, carried out in order to compartmentalize the mind into multiple personalities within a systematic framework." —Ron Patton
After completing the Tantrik sadhana Sri Ramakrishna followed the Brahmani in the disciplines of Vaishnavism. The Vaishnavas are worshippers of Vishnu, the "All-pervading", the Supreme God, who is also known as Hari and Narayana. Of Vishnu's various Incarnations the two with the largest number of followers are Rama and Krishna.
Vaishnavism is exclusively a religion of bhakti. Bhakti is intense love of God, attachment to Him alone; it is of the nature of bliss and bestows upon the lover immortality and liberation. God, according to Vaishnavism, cannot be realized through logic or reason; and, without bhakti, all penances, austerities and rites are futile. Man cannot realize God by self-exertion alone. For the vision of God His grace is absolutely necessary, and this grace is felt by the pure of heart. The mind is to be purified through bhakti. The pure mind then remains for ever immersed in the ecstasy of God-vision. It is the cultivation of this divine love that is the chief concern of the Vaishnava religion.
There are three kinds of formal devotion: tamasic, rajasic, and sattvic. If a person, while showing devotion, to God, is actuated by malevolence, arrogance, jealousy, or anger, then his devotion is tamasic, since it is influenced by tamas, the quality of inertia. If he worships God from a desire for fame or wealth, or from any other worldly ambition, then his devotion is rajasic, since it is influenced by rajas, the quality of activity. But if a person loves God without any thought of material gain, if he performs his duties to please God alone and maintains toward all created beings the attitude of friendship, then his devotion is called sattvic, since it is influenced by sattva, the quality of harmony. But the highest devotion transcends the three gunas, or qualities, being a spontaneous, uninterrupted inclination of the mind toward God, the Inner Soul of all beings; and it wells up in the heart of a true devotee as soon as he hears the name of God or mention of God's attributes. A devotee possessed of this love would not accept the happiness of heaven if it were offered him. His one desire is to love God under all conditions — in pleasure and pain, life and death, honour and dishonour, prosperity and adversity.
There are two stages of bhakti. The first is known as vaidhi-bhakti, or love of God qualified by scriptural injunctions. For the devotees of this stage are prescribed regular and methodical worship, hymns, prayers, the repetition of God's name, and the chanting of His glories. This lower bhakti in course of time matures into para-bhakti, or supreme devotion, known also as prema, the most intense form of divine love. Divine love is an end in itself. It exists potentially in all human hearts, but in the case of bound creatures it is misdirected to earthly objects.
To develop the devotee's love for God, Vaishnavism humanizes God. God is to be regarded as the devotee's Parent, Master, Friend, Child, Husband, or Sweetheart, each succeeding relationship representing an intensification of love. These bhavas, or attitudes toward God, are known as santa, dasya, sakhya, vatsalya, and madhur. The rishis of the Vedas, Hanuman, the cow-herd boys of Vrindavan, Rama's mother Kausalya, and Radhika, Krishna's sweetheart, exhibited, respectively, the most perfect examples of these forms. In the ascending scale the glories of God are gradually forgotten and the devotee realizes more and more the intimacy of divine communion. Finally he regards himself as the mistress of his Beloved, and no artificial barrier remains to separate him from his Ideal. No social or moral obligation can bind to the earth his soaring spirit. He experiences perfect union with the Godhead. Unlike the Vedantist, who strives to transcend all varieties of the subject-object relationship, a devotee of the Vaishnava path wishes to retain both his own individuality and the personality of God. To him God is not an intangible Absolute, but the Purushottama, the Supreme Person.
While practising the discipline of the madhur bhava, the male devotee often regards himself as a woman, in order to develop the most intense form of love for Sri Krishna, the only purusha, or man, in the universe. This assumption of the attitude of the opposite sex has a deep psychological significance. It is a matter of common experience that an idea may be cultivated to such an intense degree that every idea alien to it is driven from the mind. This peculiarity of the mind may be utilized for the subjugation of the lower desires and the development of the spiritual nature. Now, the idea which is the basis of all desires and passions in a man is the conviction of his indissoluble association with a male body. If he can inoculate himself thoroughly with the idea that he is a woman, he can get rid of the desires peculiar to his male body. Again, the idea that he is a woman may in turn be made to give way to another higher idea, namely, that he is neither man nor woman, but the Impersonal Spirit. The Impersonal Spirit alone can enjoy real communion with the Impersonal God. Hence the highest realization of the Vaishnava draws close to the transcendental experience of the Vedantist.
A beautiful expression of the Vaishnava worship of God through love is to be found in the Vrindavan episode of the Bhagavata. The gopis, or milk-maids, of Vrindavan regarded the six-year-old Krishna as their Beloved. They sought no personal gain or happiness from this love. They surrendered to Krishna their bodies, minds, and souls. Of all the gopis, Radhika, or Radha, because of her intense love for Him, was the closest to Krishna. She manifested mahabhava and was united with her Beloved. This union represents, through sensuous language, a supersensuous experience.
Sri Chaitanya, also known as Gauranga, Gora, or Nimai, born in Bengal in 1485 and regarded as an Incarnation of God, is a great prophet of the Vaishnava religion. Chaitanya declared the chanting of God's name to be the most efficacious spiritual discipline for the Kaliyuga.
Sri Ramakrishna, as the monkey Hanuman, had already worshipped God as his Master. Through his devotion to Kali he had worshipped God as his Mother. He was now to take up the other relationships prescribed by the Vaishnava scriptures.
by H. Brett Melendy
The Philippine Islands, off the east coast of Asia, are part of the Pacific Ocean's fiery volcanic rim. The Philippine archipelago, consisting of about 7,100 islands, lies along a north-south arc of 1,152 miles. From east to west, its widest dimension is 682 miles. Most islands, large and small, have high mountains, and many are surrounded by coral-reef shorelines.
The Philippines' land area is 115,831 square miles, slightly larger than the state of Nevada. Eleven islands comprise about 95 percent of the land mass of the Philippines, with the two largest, Luzon and Mindanao, accounting for 65 percent of the total. The national capital, Quezon City, and the de facto capital and largest city, Manila, are both situated on Luzon, on which over 25 percent of the country's population lives. Thirty-five percent inhabits the Visayan Islands, a cluster of islands—Samar, Leyte, Bohol, Cebu, Negros, Panay, and Masbate—that lie between Luzon and Mindanao. Cebu has the highest population concentration with more than 400 people per square mile. The country's total population in 1992 was about 67,144,000. Malays are in the majority; major ethnic minorities are Chinese, Americans, and Spanish. Eighty-three percent of the population is Roman Catholic, nine percent is Protestant, and five percent is Muslim. Mindanao has the greatest Islamic concentration.
Climatic conditions, which are about the same throughout the archipelago, help determine the islanders' lifestyle. The climate, both tropical and maritime in nature, usually has high humidity and high temperatures. Monsoons and typhoons, over-riding normal conditions, bring periods of heavy rain. All of these factors have determined where and how Filipinos have cultivated their land. Agriculture, ranging from subsistence farming to export plantations, remains the basis of the islands' economy. Even so, given the mountainous terrain, only about 15 percent of the land is cultivated. Major domestic crops are rice and corn; important export crops are abaca (Manila hemp), copra (dried coconut meat, from which coconut oil is made), pineapple, sugar, and tobacco.
One of the persistent problems for Philippines islanders has been inequitable land distribution. A share tenant system has made most farmers captives of landlords, or caciques. At the time of independence in 1946, over 70 percent of the crops went to the caciques. Share tenancy has brought considerable political and social unrest. Historically, limited economic opportunities tied to tenancy and a high birthrate led to immigration to Hawaii and the mainland United States.
The islands have seen the arrival of different peoples over the centuries leading to the evolution of the present diverse culture. Among the earliest immigrants were the Little People, shorter than five feet tall. They were dark skinned, had Negroid features, and were named Negritoes by the Spanish. They may have arrived about 25,000 years ago, and they lived throughout the islands. In recent decades, they occupied the mountain interiors of Luzon, Mindanao, and Palawan, living in isolation and not mixing with later arrivals.
Sometime between 4000 B.C. and 3000 B.C., the first Indonesians arrived from the Asian continent. A second Indonesian influx occurred about 1000 B.C. and lasted about 500 years. Both waves of Indonesians settled throughout the islands, and over the centuries assimilated with subsequent immigrants. Present-day Ilonggo are one result of tribal intermixing.
The Malays, an Iron Age people, began arriving in the third century A.D. Peak influxes started in the thirteenth century and continued well into the next. The Bontoks, Igorots, and Tinguians are descendants of the Malays. Tribes that in time became dominant were the Visayans, Cebunos, and Ilocanos. European and American colonists discovered some of these groups were "head-hunting pagans." Those Malays who came in the later waves had elements of an alphabet and metal tools. More peaceful than earlier arrivals, they were the ancestors of most present-day Filipino Christians. While considered primitive by Western standards, these Malays were in fact far advanced over the earliest immigrants. During the fourteenth century, Islamic Arab traders arrived; their descendants, the Moros, populated the southern islands and remained militant Muslims.
The Chinese and Japanese have had a major impact in the twentieth century, although trade between the Philippines and South China began to develop as early as the fourteenth century as Chinese emigrants became successful merchants and traders. Descendants of Filipino and Chinese marriages continued this domination of island businesses, gaining economic successes and power. Their virtual monopoly of the nation's big businesses in the twentieth century led some Filipinos, particularly those in urban areas, to resent the Chinese and to engage in occasional hostile activities. Japanese immigration occurred after 1900; emigrants from Japan settled first on the island of Mindanao, and they developed several large abaca plantations. Unlike the Chinese and earlier Malay emigrants, the Japanese remained largely a homogeneous group, rarely intermarrying. At the outbreak of World War II, Japanese could be found throughout the islands, working mostly at such crafts as cabinetmaking and photography.
The first European immigrants did not intend to settle permanently in the Philippines. Spanish settlement proved transitory during the 400 years of Spain's colonial occupation. The first contact between Spain and the Philippines occurred in March of 1521, when Ferdinand Magellan's fleet reached the island of Samar on its circumnavigation of the earth. Magellan claimed the archipelago for Spain and the Catholic church, but Spain did not make his claim official until 1565. The country was named the Philippines in the 1550s after King Philip II of Spain.
In 1565, nine years after ascending to the Spanish throne, Philip II sent a royal governor to the Philippines. The governor, from his first seat of government on Cebu, sent expeditions to other islands and imposed Spanish rule. From the outset, colonial officers exerted forceful and lasting control, using the colonial methods used in the Americas as their model.
From 1565 to 1810 the Acapulco-Manila galleon trade flourished. It connected the Spanish empire in Latin America with the Asian market via the Philippines. Manila served as the entrepôt to the China trade route. Gold bullion was extracted by the Spanish in Latin America and exchanged for silk, spices, and tea in the East. The galleon trade provided the first opportunity for native Filipinos to leave the islands as members of the crews aboard the Spanish ships.
As royal governors gained greater dominion over the islands, they moved the colonial capital to Manila, with its superior harbor. Endorsing European ideas of mercantilism and imperialism, Spain's monarchs believed that they should exercise their power in the Philippines to enrich themselves. In the course of almost four centuries, Spanish settlers and their descendants in the islands came to own large estates and to control the colonial government.
The Catholic church, supported by the colonial powers, controlled large areas of land and held a monopoly on formal education. The church and the Spanish language were major Spanish cultural institutions imposed upon Filipinos. By 1898, over 80 percent of the islanders were Catholics. Most young Filipinos, migrating to Hawaii and the mainland before World War II, came from Catholic backgrounds.
The Spanish, in installing an autocratic imperialism that alienated Filipinos, created a class society and a culture that many Filipinos later tried to imitate. Some of the Spanish, who made the islands their home, married Filipinos; the descendants of these marriages were known as mestizos . By the nineteenth century, mestizos had inherited large areas of agricultural lands. This Filipino upper class found that the lighter their skin color, the easier it became to mingle with Europeans and Americans. They also learned to control local politics through power and corruption. This economic-political dominance came to be known as caciquism.
Local revolts against Spanish imperial corruption, caciquism, racial discrimination, and church abuse began late in the nineteenth century. These first revolts called for reform of the economic-political system but not for independence. An early leader, Jose Rizal, who formed La Liga Filipina (the Filipino League), called for social reform. After the Spanish banished Rizal, more radical leaders emerged. When Rizal returned to the islands, the Spanish colonial government arrested, tried, and executed him in 1896, thus unwittingly creating a martyr and national hero.
Twenty-seven-year-old Emilio Aguinaldo became the next leader of the insurrectionists— now fighting openly against the Spanish. In 1898, Aguinaldo conferred with American officials in Hong Kong and Singapore. He was led to understand that the Filipinos would become allies with the United States in a war against Spain, the anticipated outcome of which would be an independent Philippine nation. Admiral George Dewey and Consul General E. Spencer Pratt, with whom Aguinaldo met, later denied that they had made such a promise. In 1898, the United States declared war against Spain, and as a result of the ensuing Spanish-American War, the United States went to war with the Philippines. The war took more than one million Filipino lives and 6,000 American lives. The Treaty of Paris, approved on February 6, 1899, made the United States an imperial power and started a 47-year relationship with the Philippines.
Filipinos, following Aguinaldo's lead, protested the arrival of American imperialism, and the insurrection first launched against the Spanish continued. After annexation of the Philippines by the United States, the U.S. Army fought to quell uprisings throughout the islands. With his capture on March 23, 1901, Aguinaldo advised his followers to swear allegiance to the United States. On July 4, 1902, the Army declared the insurrection to be at an end, even though the Moros, who had become largely independent under Spanish rule, continued to fight until 1913.
U.S. President William McKinley sent several commissions to the Philippines even as the U.S. Army fought the Filipinos. William Howard Taft, president of the Philippine Commission, began installing American control on September 1, 1900. A year later, he became the first governor-general of the Philippines. Between 1901 and 1913, American officials, controlling executive, legislative, and judicial offices, rebuilt the islands' government from the village to the national level. An elected lower house, the Philippine Assembly, soon participated in national affairs. Both the judicial system and the civil service, modeled after American counterparts, replaced the Spanish system.
Undoubtedly, the great American impact came in education, with primary schools set up in most communities and high schools in each province. Nationwide vocational schools and teacher colleges were established, as was the University of the Philippines in Manila, founded in 1908 as the capstone of the islands' education program. Religious freedom was guaranteed, and government support of the Catholic church as the state religion ended. Most of the provincial colleges remained under Catholic control with a curriculum reflecting the church's traditional education. A major cause of Filipino unrest under Spanish imperialism was church-controlled Friar lands. To ease this crisis, the United States bought about 400,000 acres from the Catholic church. This land was then sold, mostly to former tenants at low prices and with easy payment terms.
While American administrators tended to be benevolent authorities, Filipinos still desired independence. From the outset of American rule, the leaders of the Nacionalista party called for immediate independence. From 1907 on, the Nacionalistas gained and held control of elective offices in villages, provinces, and the Philippine Assembly. A small number of wealthy party members, drawn from among large landowners, used caciquism to control the Nacionalista party. Early major political leaders were Sergio Osmena and Manuel Quezon. By 1917, these two men had concentrated national political power under their absolute control. Most immigrants to the United States and the Territory of Hawaii were Nacionalistas.
In 1916, U.S. President Woodrow Wilson, committed to making the Philippines an independent nation, supported passage of the Jones Act, which promised that the Philippines would be free as soon as a stable government was established. The act provided that during a transitional period, executive power would remain with an American appointed governor-general while Filipinos elected members to the Assembly and to the newly established Senate. The Jones Act helped Osmena's and Quezon's political machine entrench itself. In 1921, with the election of a Republican administration in the United States, independence was no longer strongly advocated, as Republican governor-generals insisted that the islands were not ready to be set free.
During the late 1920s, concerns over the large influx of Filipinos into the West Coast of the United States and falling agricultural prices for certain American commodities led to agitation that called for change in the relationship between the islands and the United States. American farmers wanted an end to free trade of commodities from the islands while exclusionists wanted to stop Filipino immigration. These two political forces began calling for Philippine independence.
In December 1931, Congress passed the Hare-Hawes-Cutting bill, which was intended to grant independence to the islands after a ten-year period. It then overrode President Herbert Hoover's veto, and the bill became law. The new law provided that American goods would be imported into the islands duty free, while Philippine goods exported to the United States would be subject to increasing tariff rates during these ten years. During this period, Filipino immigration would be limited to an annual quota of 50, and general United States immigration laws would apply. The Philippine national legislature had to approve the act, but in October 1933, Quezon-led forces rejected the proposal, which had the backing of Osmena and Manuel Roxas adherents. Quezon then led a delegation to Washington to negotiate with the new American president, Franklin Roosevelt.
Quezon obtained only a slight modification of the Hare-Hawes-Cutting Act; key issues relating to the island economy and immigration to the United States remained unchanged. At the end of the ten-year transition period, the United States was to withdraw its forces from all military and naval bases, something that did not actually happen until the 1990s. The Tydings-McDuffie Act, signed into law on March 23, 1934, promised independence after ten years and created the Commonwealth of the Philippines. The Philippine legislature approved this act on May 1, 1934, and a year later the Filipino people approved a constitution.
At the first presidential election in September 1935, Filipinos elected Quezon as president and one of his major rivals, Osmena, as vice president. With their inauguration on November 15, 1935, the Commonwealth of the Philippines came into being, although many Filipinos were ambivalent about the prospect of complete independence. While independence appealed to their sense of nationalism, the hard economic fact was that the islands depended upon tariff-free American markets. Many felt that, in due course, imposition of a tariff upon Philippine products could be disastrous.
With the Tydings-McDuffie Act, independence was scheduled for 1946, but the Japanese conquest of the islands in 1942 brought a wartime hiatus to the commonwealth. The Quezon government fled, first to Australia with General Douglas MacArthur and then to the United States, where Quezon continued to serve as the commonwealth's president until his death in 1944.
Manuel Roxas, elected in April 1946, became the first president of the Republic of the Philippines when U.S. President Harry Truman proclaimed the islands' independence on July 4, 1946.
The new republic struggled to nationhood during the turmoil of the postwar years. Communist-dominated Huks soon confronted Roxas' government with armed resistance in an internal war that lasted until 1954. Huk is short for Hukbalahap, from Hukbo ng Bayan Laban sa Hapon, the People's Anti-Japanese Army. Since independence in 1946, urban and rural violence has continued; election days in the Philippines are marked by many deaths. Under the leadership of Ramon Magsaysay, who succeeded Elpidio Quirino, the republic by 1955 came to be seen as a sturdy bastion of democracy in the Far East, one that the United States hoped would be a model for other Asian countries.
In 1965, Ferdinand Marcos was elected president. When several groups employed terrorist tactics and the Moros continued to fight for their independence, Marcos, declaring martial law in September of 1972, seized dictatorial powers. This state of affairs lasted fourteen years. Early in 1973, Marcos proclaimed a new constitution, naming himself as president. In 1978, he gave his wife, Imelda, extensive powers to control national planning and development. In the face of growing political repression, many of Marcos's political opponents found it expedient to leave the country as cronyism was elevated to the national level. Marcos lifted the decree of martial law in 1981 and turned political power over to the national legislature. He was then elected to another six-year term as president.
Following the 1983 assassination of Benigno S. Aquino Jr., a leading rival of Marcos, political unrest and violence became commonplace until 1986, when Marcos fled the country, and Corazon Aquino, Benigno Aquino's widow, was declared president. The end of the Marcos era did not bring political and economic calm to the nation, however; unsuccessful coups against the government have continued and the national economy has remained weak. Additionally, widespread poverty and communism have posed threats to the unstable central government.
Since the end of Mrs. Aquino's presidency in 1992, there have been two peaceful transitions of power through the process of elections. Under presidents Fidel Ramos and Joseph Estrada the communist rebellion and the Muslim rebellion have been severely weakened and the Philippines has made substantial economic strides.
Filipino arrivals in the Territory of Hawaii and the United States mainland came in three waves. The earliest, from 1903 to 1935, brought many young men to enroll in American universities and colleges and then return to the Philippines. Also during this time, plantation workers arrived to work in Hawaii from 1906 to the 1930s, with a parallel movement occurring along the Pacific Coast during the 1920s—an immigration that lasted until enactment of the Tydings-McDuffie Act in 1934. A much smaller influx to American shores occurred following World War II. The third and largest immigration wave arrived after passage of the 1965 Immigration Act. Since 1970 the Philippines has been surpassed only by Mexico in the number of immigrants coming to the United States.
The first Filipino immigrants came to the United States seeking higher education. Governor-General Taft's administration prepared an educational plan, the Pensionado Act, to send promising young Filipinos to institutions of higher learning in the United States. Beginning in 1903, a group of 100 students left for the United States, and by 1910 all had returned. Once home, these new college graduates were met with confusion and jealousy by fellow Filipinos and with hostility by American colonials. However, these men came to play key roles in agriculture, business, education, engineering, and government.
Other students followed; a later estimate indicated that between 1910 and 1938 almost 14,000 Filipinos had enrolled in educational institutions throughout the United States. Most came as independent students, apart from the Pensionado program. Many of these hopefuls became overwhelmed by the high cost of living, inadequate academic preparation, insufficient language skills, and an inability to determine what level of American schooling suited their preparation. These Filipinos soon found themselves trapped as unskilled laborers. Those who succeeded in graduating from major universities returned to the Philippines to take their places with Pensionados as provincial and national leaders.
A chance encounter in 1901 between a trustee of the Hawaiian Sugar Planters Association (HSPA) and a band of Filipino musicians en route to the United States led the planter to speculate about Filipinos as potential plantation workers, for he felt that these musicians had a "healthy physique and robust appearance." Even before 1907, Hawaii had begun looking to the island of Luzon for other pools of unskilled labor. During 1907 some 150 workers were sent to Hawaii. Two years later, with Chinese, Japanese, and Koreans now banned from immigrating to the United States, the HSPA returned to the Philippines, looking for workers. The Bureau of Census reported that there were 2,361 Filipinos in Hawaii in 1910. Recruiting efforts after 1909 centered on the Visayan Islands, Cebu in particular, and Luzon's Tagalogs.
In 1915 recruiters focused on Luzon's northwestern Ilocano provinces: Ilocos Norte, Ilocos Sur, and La Union. Immigrants from the neighboring provinces of Pangasinan, Zambales, and Cagayan accounted for about 25 percent of those counted as Ilocanos. The Ilocanos, suffering greatly from economic hardship and overpopulation, proved willing recruits. The HSPA awarded a three-year labor contract to Filipinos migrating to Hawaii; this paid their passage to Hawaii and guaranteed free subsistence and clothing. If they worked a total of 720 days, they received return passage money. A worker who broke his contract faced no penalty beyond forfeiting all guarantees, including his return passage. Plantation owners found the Ilocanos to be the "best workers," and poverty in their provinces provided a stimulant for out-migration. By 1935, young single Ilocano men were the largest Filipino ethnic group in Hawaii.
According to census figures, the Filipino population in Hawaii climbed from 21,031 in 1920 to 63,052 in 1930, but dropped to 52,659 by 1940. The decline in the number of Filipinos during the late 1930s is attributable to the return of many to the Philippines during the Depression years and to others seeking greener pastures on the West Coast. The high point of immigration to Hawaii occurred in 1925, when 11,621 Filipinos arrived in Honolulu. At that point, the HSPA closed active recruiting in the Philippines, relying upon self-motivation to maintain the influx of workers.
In 1910, only 406 Filipinos lived on the United States mainland. The largest group, of 109, lived in New Orleans, the remnants of a nineteenth-century settlement of Filipino sailors who came ashore at that port city, married local women, and found jobs. The state of Washington had 17 and California had only five. In 1920, 5,603 Filipinos lived along the West Coast or in Alaska. California then had 2,674 Filipinos while Washington had 958. The northeastern United States had the second-largest number: 1,844.
The 1920s saw dramatic changes as California's Filipino population, mostly single young men, increased by 91 percent; over 31,000 Filipinos disembarked at the ports of San Francisco and Los Angeles. In 1930, there were 108,260 Filipinos in the United States and the Territory of Hawaii. California had 30,470, and this number rose to 31,408 by 1940. Washington had 3,480 in 1930 and 2,222 in 1940. Apart from the West Coast and Hawaii, the next largest concentration was in New York, which in 1930 had 1,982 and 2,978 in 1940. Many of these Filipinos experienced significant racial discrimination.
Emigrants in the second wave left the Philippines in increasing numbers during the late 1940s and 1950s. This group included war brides, the "1946 boys," and military recruits. War brides were Filipina women who had married American GIs stationed in the Philippines. After the passage of the War Brides Act of 1946, an estimated 5,000 brides came to the United States. Contracted workers called the "1946 boys," or Sakadas, numbered 7,000 and were a major component of the second wave. They were the last large group of agricultural laborers brought to Hawaii by the sugar planters, who imported them in an effort to break the first interracial, territory-wide strike organized by the International Longshoremen's and Warehousemen's Union (ILWU). The Filipino workers supported the ILWU strike, which resulted in the first major victory for Hawaii's agricultural workers. Filipinos who came to the United States through the U.S. military were another component of the second wave. A provision of the 1947 US-RP Military Bases Agreement allowed the Navy to recruit Filipino men for its mess halls. During the same year President Truman ended racial segregation in the military, and Filipinos replaced African Americans in mess halls. By the 1970s, more than 20,000 Filipinos had entered the United States through the U.S. Navy.
Internal conditions in the new republic contributed to many moving from the islands to the United States. By 1960 Hawaii, which had become a state a year earlier, had 69,070 Filipinos, followed closely by California with 65,459. The two states together accounted for 76 percent of all Filipinos living in the United States. The Pacific Coast states had 146,340 (83 percent of the total), while the East and the South had slightly more than 10,000 each and the Great Lakes states had 8,600. Included in these census numbers were second-generation Filipino Americans.
Changes in American immigration law in 1965 significantly altered the type and number of immigrants coming to the United States. Unlike pre-war immigrants who largely worked as unskilled laborers in West Coast and Hawaiian agriculture and in Alaska's salmon canneries, the third wave was composed of larger numbers of well-educated Filipinos between the ages of 20 and 40 who came looking for better career opportunities than they could find in the Philippines. This highly skilled third-wave population had a command of the English language, allowing them to enter a wide range of professions.
Unlike earlier arrivals, most of the Filipino immigrants after 1970 came to the United States without intending to return to the Philippines. In 1970, 343,060 Filipinos lived in the United States; in 1980, the number was 781,894, with 92 percent of these living in urban areas. By 1990, the number of Filipinos had reached 1,450,512. The West, as reported in the 1990 Census, had 991,572, or 68.4 percent of the Filipinos. The other three areas, Northeast, Midwest, and South, ranged from 8.8 to 12.5 percent. California in 1990 had the largest Filipino population, almost 50 percent of the total; Hawaii had fallen to second place. Every state in the union had a Filipino population. Florida, Illinois, New York, New Jersey, Texas, and Washington had Filipino populations in excess of 30,000.
From the outset of their arrival in Hawaii and the Pacific Coast, Filipinos, as a color-visible minority, encountered prejudice and discrimination as they pursued their economic and educational goals. One major problem for Filipinos prior to 1946 was the issue of American citizenship.
From 1898 to 1946, Filipinos, classified as American nationals, could travel abroad with an American passport and could enter and leave the United States at will, until the Tydings-McDuffie Act limited the number entering as immigrants to 50 a year. For most Filipinos, the path to American citizenship before 1946 was closed by the United States Supreme Court's 1925 decision, Toyota v. United States, which declared that only whites or persons of African descent were entitled to naturalization. Those Filipinos, however, who had enlisted and served three years in the United States Navy, Marine Corps, or Naval Auxiliary Service during World War I and who had received an honorable discharge could apply for citizenship. In 1946, Congress passed a law that permitted Filipinos to qualify for American citizenship.
The inability to acquire citizenship, besides being a social stigma, carried serious economic and political implications. Since most states required citizenship to practice law, medicine, and other licensed professions and occupations, Filipinos were barred from these fields. They had no recognized voice of protest to speak on their behalf.
Throughout the Depression years of the 1930s, Filipinos found it difficult to qualify for federal relief. Although the Works Progress Administration ruled in 1937 that Filipinos were eligible for employment on WPA projects, they could not receive preference since they were not citizens. During the 1920s and 1930s, Filipinos living on the Pacific Coast encountered prejudice and hostility that erupted in open discrimination and race riots. A sagging economy made assimilation difficult if not impossible.
At the height of discrimination in California, the California Department of Industrial Relations published in 1930 a biased study, Facts about Filipino Immigration into California, claiming that Filipinos posed economic and social threats. On the West Coast, Filipinos were frequently denied service in restaurants and barbershops and were barred from swimming pools, movies, and tennis courts. They found that their dark skin and imperfect English marked them, in the eyes of whites, as being different and therefore inferior. White Californians presented several contradictions that confused Filipinos. Farmers and certain urban enterprises welcomed them because they provided cheap labor. However, discriminatory attitudes relegated them to low-paying jobs and an inferior social existence. Consequently, many other Californians criticized the Filipinos' substandard living conditions and attacked them for creating health problems and lowering the American standard of living. Faced with discrimination in real estate, Filipinos were forced into "Little Manilas" in California cities. Filipinos in cities such as Chicago, New York, and Washington, D.C., also clustered together.
Discrimination against Filipinos has persisted into the late twentieth century, but civil rights legislation, affirmative action, and equal opportunity laws have improved the daily lives of most Filipinos who have arrived in recent decades. A perhaps unexpected form of discrimination for immigrants arriving after 1965 was the hostility that they met from second-generation Filipinos who saw the new arrivals as snobs and upstarts who were benefitting from advances made by the older group. At the same time, more recent Filipino immigrants have treated their older compatriots with disdain, considering them the equivalent of "hillbillies."
During the 1990 Census, Filipinos reported a median income of $46,698, while the median income for the entire United States was $35,225. This can be attributed to the ongoing stream of highly educated and highly skilled Filipinos from the Philippines and to second and third generation Filipino Americans finishing college.
The Filipinos who came to Hawaii and the West Coast during the 1920s and 1930s sought a range of leisure-time activities to relieve the monotony of unskilled labor. As a result of the recruitment tactics of the agribusiness industry in Hawaii and on the West Coast, the pre-World War II Filipino community was made up mostly of single, uneducated men with few or no relatives in the United States. These men attended and enjoyed spectator sports, bet on prize fights and wrestling matches, and gambled at poker, blackjack, and dice. During the 1930s they increased the profits of Stockton gambling operators and prostitutes by about $2 million annually. Gambling, dance halls, and prostitution gave credence to white Americans' complaints that Filipinos were immoral and lawless. Many in California traveled to Reno, Nevada, looking for the proverbial "pot of gold." Pool halls in the "Little Manilas" provided both recreation and gambling. Cockfighting, a major source of entertainment and gambling, was imported from the Philippines. The fighting of cocks, although illegal, continues to attract Filipinos in Hawaii and on the mainland.
Filipino Americans, like other immigrants, brought with them cuisine from their native country. As in much of Southeast Asia, rice is the basic staple. Three favorite foods are lumpia, kare-kare, and chicken and pork adobo. Lumpia is an egg roll—a lumpia wrapper filled with pork, shrimp, cabbage, beans, scallions, and bean sprouts and fried in peanut oil. Kare-kare is a peanut-flavored stew of oxtail and beef tripe mixed with onions and tomatoes. Chicken and pork adobo consists of these two meats boiled in vinegar and soy sauce and flavored with garlic and spices. This dish is then served over rice.
First-wave Filipinos incurred severe health problems as they aged. One illness that seemed almost endemic was gouty arthritis, coupled with an excessive amount of uric acid in the blood. Doctors have speculated that a genetic characteristic makes these Filipinos unable to tolerate the American diet. Unmarried men also had a high rate of venereal disease. Complicating these health problems was the fact that these men did not or could not obtain regular health care while they were still in good health.
There is evidence, according to a study conducted in Hawaii, that Filipino women have a higher rate of heart disease and circulatory problems than does that state's general population. The same study noted that Filipino men suffered more from lateral sclerosis than other men did. Other diseases of high incidence were liver cancer and diabetes. The more highly educated third-wave Filipinos know the value of good health care and have utilized the medical services available to them.
The official languages in the Philippines are Pilipino (a derivative of Tagalog) and English. Linguists have identified some 87 different dialects throughout the country. At the time of Philippine independence, about 25 percent of Filipinos spoke Tagalog, the language of central Luzon. About 44 percent spoke Visayan; Visayans in the United States generally spoke Cebuano. The language most commonly spoken by Filipinos in Hawaii and the United States mainland is Ilocano, although only 15 percent of those in the Philippines speak this language. The coming of the third wave of Filipinos brought more Tagalog speakers. However, the high number of university graduates in the third wave communicated easily in English. Others, however, did not know English or spoke it poorly. In Hawaii, social service centers taught English by showing Filipinos how to shop in supermarkets and how to order in restaurants.
The distinct migration patterns of the Filipinos have led to unique community dynamics. The vast majority of the first wave of Filipinos migrating to Hawaii and the West Coast, as noted, were single young men. Only a very few married and had families in the United States. The dream that most Filipinos never realized—of returning to the Philippines—led in time to disillusionment as these young men grew old, trapped as unskilled laborers. Many of these "birds of passage" sent money to the Philippines to help their families pay taxes, buy land, finance the education of relatives, or meet obligations owed by the Philippines' family alliance system.
Relatively few Filipinos of the first wave who returned to the Philippines came from the West Coast. Many more from Hawaii's plantations were able to do so. Those who did return were called Hawayanos. In comparison to those in their Philippine villages, they had a degree of affluence. Filipino American philanthropy aimed mostly to benefit relatives in the Philippines. Filipinos sent funds to their families in Philippine barrios. Several mayors of villages in Ilocos Norte reported that about $35,000 a month was received through the pension checks of returned Ilocano workers and from remittances sent by third-wave immigrants. During the Marcos regime the Philippine government offered inexpensive airfares and incentives to foster return visits by recent immigrants, who in turn furnished information about life in America and provided money, as had earlier immigrants, to pay taxes, buy land, and finance college education.
While some Americans believed that Filipinos of the first wave were headhunting savages, others feared that they were health hazards because of a meningitis outbreak in the early 1930s. However, the greatest concern came from the attention that these young men lavished on white women. Given that in 1930 the ratio of Filipino males to females was fourteen to one, it was only natural that the men would seek companionship with white women. Young men frequented taxi-dance halls (where white girls, hired to dance with male customers, were paid ten cents for a one-minute dance) during the 1920s and 1930s, seeking female companionship. Many white citizens believed that meetings between the young Filipinos and white women, whose morals were assumed to be questionable, led to inappropriate behavior by these men. In addition to these urban dance halls, "floating" taxi-dancers followed the Filipino migrant workers from California's Imperial Valley to the central and coastal valleys. Coupled with white hatred of Filipino attention to white women was an economic motive—the fear of losing jobs to the migrant labor force.
Filipino Americans came from a society where families, composed of paternal and maternal relatives, were the center of their lives. The family provided sustenance, social alliances, and political affiliations. Its social structure extended to include neighbors, fellow workers, and ritual or honorary kinsmen, called compadres. All of these people were welded together by this compadrazgo system. Through this system, which stemmed from the Roman Catholic church's rituals of weddings and baptisms, parents of a newborn child selected godparents, and this in turn led to a lifelong interrelated association. This bound the community together while excluding outsiders. Given the tightly knit villages or barrios, the compadrazgo system created obligations that included sharing food, labor, and financial resources. This system assured the role of the individual and demanded loyalty to the group.
To offset the absence of kin in the Philippines or to compensate for the lack of Filipina immigrants, Filipino Americans sought out male relatives and compadres from their barrios to cook, eat, and live together in bunk houses. Thus they formed a surrogate family, known as a kumpang, with the eldest man serving as leader of the "household." In addition, Filipino Americans compensated for the lack of traditional families by observing "life-cycle celebrations" such as baptisms, birthdays, weddings, and funerals. These celebrations took on a greater importance than they would have in the Philippines, providing the single Filipino men without relatives in the United States the opportunity to become part of an extended family. Such new customs became an important part of the Filipino American strategy to adapt to the new world and culture in the United States.
A few Filipinos in California married Filipinas or Mexicans, while those living in Hawaii married Filipinas, Hawaiians, Puerto Ricans, or Portuguese. These women who married Filipinos in mixed marriages came from cultures whose value systems were similar to those of the men. However, large weddings, common in the barrios, did not occur because of the lack of family members. The birth of a child saw the duplication of the compadrazgo system. The rite of baptism gave an opportunity for those of the same barrio to come together for a time of socializing. As many as 200 sponsors might appear to become godparents, but there was not the same sense of obligation as there was in the Philippines. Marriages and funerals were also occasions that brought Filipino Americans together to renew their common ties.
Recent immigrants, unlike the agricultural workers of the 1920s and 1930s, have moved to major metropolitan areas of the United States, finding that urban areas provided better employment opportunities. They came with their families or sent for them after becoming established in the United States. These recent arrivals also brought with them the barrio familial and compadrazgo structures. Having complete families, they found it much easier to maintain traditional relationships. Those in the greater New York area settled in Queens and Westchester County in New York and in Jersey City, Riverdale, and Bergen County in New Jersey. A part of New York City's Ninth Avenue became a Filipino center, with restaurants and small shops catering to Filipinos' needs. Unlike the West Coast, however, there was no identifiable ethnic enclave. Outsiders saw these East Coast Filipinos merely as part of the larger Asian American group. They were largely professionals: bankers, doctors, insurance salesmen, lawyers, nurses, secretaries, and travel agents.
Filipinos have organized community groups representing a wide range of concerns, but the tendency to fragment has made it difficult to present a common front on issues of mutual concern. Organizations may be based upon professions or politics, but most have evolved from a common religion, city or barrio, language, school, or church in the Philippines. In 1980 California had more than 400 cultural and social organizations representing Filipinos.
First-wave Filipinos in California, finding white society closed to them, organized three major fraternal organizations: Caballeros de Dimas-Alang, Legionarios del Trabajo, and Gran Oriente Filipino (Great Filipino Lodge). The first, organized in 1921, honored Jose Rizal, the Philippine national hero (his pen name while writing revolutionary tracts was Dimas-Alang). This fraternal lodge at one time during the 1930s had 100 chapters throughout the United States and was one of many that commemorated Rizal's execution on Jose Rizal Day, December 30. Legionarios del Trabajo, organized in San Francisco in 1920, originated in the Philippines. Centered in the Bay City, it had about 700 members, some of whom were women. Filipinos established Gran Oriente Filipino in San Francisco in 1924. At one time it had 3,000 members in 46 states and in the Territories of Alaska and Hawaii. All lodges sponsored beauty pageants and dances in their various communities. Such pageants continue, and now often include a Mrs. Philippines pageant.
Besides these formal organizations, Filipinos gather with others from their province for ritualistic and religious ceremonies and festivals. Most Filipinos, from the first wave of immigrants, were either nominal or practicing Roman Catholics, and in the United States, they participated in church celebrations. Some Filipinos have, however, become members of evangelical churches.
As first-wave Filipinos grew old and remained in California, various organizations started looking after their welfare. Caballeros de Dimas-Alang, working with federal and city agencies, built the Dimas-Alang House in San Francisco to care for elderly and low-income Filipinos. The United Farm Workers Organizing Committee established the Paulo Agbayani Retirement Village near Delano for older Filipino field workers. As younger Filipinos worried about the fate of these aging agricultural workers, the organization Pilipino Bayanihan built in Stockton in 1972 the largest federally funded community of its kind; branches followed in Tulare County, Coachella, Brawley, and Ventura. Pilipino Bayanihan hoped to meet the needs of the unemployed, the underemployed, and senior citizens.
The vast majority of Filipino Americans are Roman Catholic, although about five percent are Muslim. Both Roman Catholicism and Islam, however, are heavily influenced by a belief in the intervention of spirits, reminiscent of religious beliefs that existed in the Philippines prior to European and Asian settlement. Because the majority of early Filipino immigrants to the United States were single males, few Catholics attended church with any regularity. Once families began settling in the United States, however, religion became a central component of family and community life.
First-wave Filipinos came primarily "to get rich quick"—by Philippine standards—and return to their home provinces to live in affluence. Thus their goal was not to adjust to life in the United States but to find high-paying jobs. They faced severe handicaps because of limited education and job skills, inadequate English, and racial prejudice.
Some found ready but low-paying employment as Pacific Coast migratory field hands and cannery workers. Others were employed in the merchant marine, the United States Navy, and Alaska's salmon canneries. Compared to Philippine wages, agricultural workers' pay seemed high. The workers, however, became ensnared in these jobs due to the higher cost of living in the United States. Consequently, many of the young Filipinos grew old in California, unable to fulfill their dream of returning to their homeland.
California agriculture, with its specialty crops, relied on migratory field workers. From the Imperial Valley to the Sacramento Valley, farmers sought cheap field labor to harvest their crops. Filipino and Mexican workers dominated in harvesting asparagus, cantaloupes, citrus fruits, cotton, lettuce, potatoes, strawberries, sugar beets, and tomatoes. Filipinos returned annually to work as members of an organized work gang headed by a padrone who negotiated contracts with growers. The padrone supervised the gang's work and provided housing and meals, charging a fee against wages. These gangs followed the harvest season north from California into Oregon's Hood River Valley and Washington's Wenatchee Valley. As late as the 1950s, Filipinos provided the largest number of migrant workers for western agriculture.
Migrant jobs ended after the harvest season. Filipinos then moved to cities in the late fall and winter in search of employment. But most usually had to return to the fields in the spring. By 1930, Los Angeles, San Francisco, Stockton, and Seattle each had "Little Manilas," as discriminatory real estate covenants restricted Filipinos to congested ghettos. The number living in these racial enclaves varied depending on the time of year, with the population highest in the winter months. A few Filipinos catered to their countrymen's needs—barbershops, grocery stores, pool rooms, dance halls, restaurants, and auto-repair garages. Others found employment in hotel service jobs, working as dishwashers, bellhops, and elevator operators. Some worked in various unskilled restaurant jobs or as houseboys.
Second-generation Filipino Americans, descendants of immigrants of the 1920s and 1930s, worked in unskilled and skilled jobs. California trade unions remained closed to them, keeping them out of many industrial jobs. Second-generation Filipinos in Hawaii found employment on plantations and in the islands' urban centers. Unions there became open to all Asians during the New Deal years. Many who immigrated to the United States after 1970 with limited education entered the unskilled labor market and soon found themselves joining second-generation Filipinos on welfare.
Declining market prices for agricultural produce in the late 1920s and during the Great Depression of the 1930s seriously affected the Filipinos. As migrant workers saw their wages fall lower and lower, they threatened strikes and boycotts. Given the American Federation of Labor's antipathy to non-white workers, minority workers, such as Filipinos, sought to organize ethnic unions. In 1930, an Agricultural Workers Industrial League tried without success to organize all field workers into a single union. California's Monterey County saw two short-lived unions emerge in 1933—the Filipino Labor Supply Association and the Filipino Labor Union.
The Filipino Labor Union, utilizing the National Industry Recovery Act's collective bargaining clause, called on the Salinas Valley lettuce growers to recognize the union. The lettuce workers struck, leading to violence, white vigilante action, and defeat for the workers time and time again. The Filipino labor movement generally failed during the Depression years and well into the 1950s as growers used strikebreakers and court injunctions to quash union activities.
During the 1920s many Filipinos spent summer seasons in salmon canneries in the Pacific Northwest and Alaska. Again, Filipinos worked in labor gangs under a contractor for seasonal work lasting three or four months. In 1928 there were about 4,000 Filipinos employed in Alaskan canneries but at low wages. Wages remained in dispute each season. This conflict continued until 1957 when Seattle's Local 37 of the International Longshoremen's and Warehousemen's Union (ILWU) became the sole bargaining voice for cannery workers in California, Oregon, and Washington.
In 1959, the AFL-CIO formed the Agricultural Workers Organizing Committee (AWOC) to organize grape pickers in California's lower San Joaquin Valley. About the same time, Cesar Chavez founded the National Farm Workers Association (NFWA). Both unions were ethnically integrated, but Larry Itliong led the largely Filipino AWOC union. Itliong, born in the Philippines in 1914, campaigned during the 1960s to improve the lot of Filipinos and other minorities. Other Filipino union leaders were Philip Vera Cruz, Pete Velasco, and Andrew Imutan.
Both AWOC and NFWA spent their initial energy recruiting members. In 1965, the unions protested the low wages being paid to grape pickers. On September 8, at the height of the picking season, AWOC struck against 35 grape growers in the Delano, Kern County, area. Domestic pickers, including Filipinos and Mexicans, demanded $1.40 an hour plus 20 cents a box. They argued that domestic pickers were receiving $1.20 an hour while Braceros, under a United States Department of Labor order, received $1.40. Chavez's NFWA joined the strike, which lasted for seven months.
In August of 1966 AWOC and NFWA joined forces to end any unnecessary conflict between themselves. The merger resulted in the formation of the United Farm Workers Organizing Committee (UFWOC). Some major grape growers recognized this union as the bargaining agent for workers in the vineyards. Itliong was instrumental in securing three contracts with a $2.00 minimum wage for field workers. The battle between the growers and their workers continued as the UFWOC challenged California's agriculture strongholds.
Filipinos were also instrumental in Hawaii's labor union movement. The key figure during the 1920s was Pablo Manlapit (1892-1969), who organized the Filipino Federation of Labor and the Filipino Higher Wage Movement. His organizations ran headlong into the Hawaiian Sugar Planters Association (HSPA), which refused to meet the Filipinos' demands. This led to a 1920 sugar strike that lasted about six months. To rebuild his union, Manlapit continued to organize Filipinos as they arrived from the Philippines. A second confrontation between Manlapit's followers and plantation owners caused a strike in 1924 which resulted in a bloodbath in Hanapepe, Kauai, where sixteen workers and four policemen were killed. During the 1930s, the Filipinos' ethnic union, Vibora Luviminda, failed to make headway against the powerful HSPA. The ILWU started organizing dock and plantation workers in the 1930s and gained economic and political power after World War II. An important ILWU president was Filipino Carl Damasco. Another key labor leader was Pedro dela Cruz, born in Mindanao. He was a leading spokesman for the workers on the island of Lanai who worked in Dole's pineapple fields.
By 1980, Filipinos constituted 50 percent of the Hawaii branch of the ILWU. Agricultural workers were not the only union members; Filipinos also formed 40 percent of the Hotel and Restaurant Workers' Union.
Many of those Filipinos arriving during the 1970s and after created a "brain drain" for the Philippines. By 1980, the Philippines had replaced all European countries as the leading foreign provider of accountants, engineers, nurses, physicians, teachers, and technical workers. It is noteworthy that the Philippines have had a higher number of college and university graduates per capita than any other country. In the early 1970s, one-third of all immigrants seeking licensure in the United States were Filipino, and many found employment easy to obtain. Such was often not the case for physicians, pharmacists, dentists, lawyers, and teachers. These professionals ran into the highly protective bureaucratic screens that had been enacted by western state legislatures in earlier years. A Filipino dentist, who had served in the United States Navy for eight years, found it took him three years to gain a California license. A physician, licensed in the Philippines in 1954, had been in practice for 16 years before moving to Hawaii, where he was denied a license and forced to take a job as a janitor in a drive-in restaurant. He eventually found employment as a meat cutter. His employer thought he "was very good at separating the meat from the bone." Those professionals who settled in eastern and middle states found it easier to start careers because these states had less stringent laws or had reciprocity agreements.
By the 1990s, with affirmative action and equal opportunity programs, the lot of Filipino American professionals improved greatly, and they were able to employ their talents in the skills for which they were trained. Doctors and nurses found ready employment once they gained certification. In most urban areas with a high concentration of Filipino businessmen, Filipino chambers of commerce were organized. The purpose of such organizations was to stimulate business, but these chambers also provided support groups for small businessmen.
During the Depression years, discrimination against Filipinos led to efforts by exclusionists to bar further emigration from the Philippines. Some Filipino organizations, concerned about the economic hardships confronting their fellow countrymen, suggested a program of repatriation to the Philippines. Several members of Congress tried to enact a repatriation measure, but did not gain much support until Representative Richard Welch of San Francisco introduced his repatriation bill. This bill provided that the federal government would pay repatriation expenses of those wishing to return to the Philippines. These repatriates could only return to the United States as one of the annual quota of 50 immigrants. When this program ended in 1940, 2,190 of the 45,000 Filipinos living in the United States had elected to be repatriated. Many who took this opportunity for free transportation across the Pacific were university graduates who had already planned to return to assume leadership roles in the Philippines.
Repatriation did not satisfy California's exclusionists, who attempted to demonstrate that Filipinos were taking scarce jobs. However, Los Angeles County reported that of the 12,000 Filipinos who lived in the county in 1933, 75 percent could not find work. At the time, they were not eligible for federal relief programs. During the Depression, not only did Filipinos face legal discrimination in obtaining licenses to practice their professions, but they found that restrictive housing covenants prohibited them from living where they wished. During the New Deal era, Filipinos registered for relief projects only to be denied positions by the Civil Works Administration. In 1937, the United States Attorney General restated that Filipinos were American nationals and thus eligible for employment on Works Progress Administration projects. However, they could not receive preference because they were not citizens.
Filipinos found that miscegenation laws denied them the right to marry white women. In 1901, the California legislature had enacted a law forbidding whites to marry blacks, Mongolians, or mulattos. In the early 1930s, California Attorney General U. S. Webb ruled that Filipinos were Mongolians, but since his opinion did not have the force of law, it was up to each of the 58 county clerks to make his/her interpretation as to the racial origin of Filipinos. By 1936, Nevada, Oregon, and Washington had enacted laws prohibiting marriages between Filipinos and whites. Consequently, white women became common-law wives. In 1948, the California Supreme Court ruled in Perez v. Sharp that the miscegenation law violated individual civil rights, thus freeing Filipinos to marry whomever they pleased.
During World War I, some Filipinos enlisted in the United States Navy and the Marine Corps. Men who had served three years and had received an honorable discharge could apply for American citizenship, and several did so. Following the Japanese attack on Pearl Harbor and the Philippines in 1941, which triggered America's involvement in World War II, Filipinos tried to volunteer for military service and/or work in defense factories. Existing law had no provisions for enlisting nationals, and Filipinos were likewise denied employment in war industries. However, given the need for Army personnel, Secretary of War Henry Stimson on February 19, 1942, announced the formation of the First Filipino Infantry Battalion, which began training at Camp San Luis Obispo in California. It was activated on April 1, 1942, but in July the Army reformed the unit as the First Filipino Regiment. A few weeks later, President Franklin Roosevelt issued an executive order that opened the way for Filipinos to work in government and in war industries. He also ordered a change in the draft law, reclassifying Filipinos from 4-C to 1-A, making them eligible for Army service.
The First Filipino Regiment, after training in several California Army posts, transferred to Camp Beale near Marysville, California. The citizenship of the troops remained a major issue. On February 20, 1943, Army officers on Camp Beale's parade grounds administered the oath of allegiance, granting citizenship to 1,000 Filipinos. Many in the First Regiment believed that citizenship gave them the right to marry their common-law wives, thus providing family allowances and making these women their federal insurance beneficiaries. An appeal of the miscegenation law fell upon deaf ears, leading the regimental chaplain and the Red Cross to obtain emergency leaves so that couples could travel to New Mexico to become legally married before the regiment went overseas.
A second Army unit, the Second Filipino Infantry Battalion, was formed in October 1942 and reorganized in March 1944, training at Camp Cooke, California. This battalion and the First Infantry were sent to Australia and fought in New Guinea before landing in the southern Philippines. The First Infantry Regiment also went to Australia and then to New Guinea. They fought in Mindanao, the Visayan Islands, and northern Luzon. From the First Infantry Regiment came the First Reconnaissance Battalion, organized in 1944, to undertake pre-invasion intelligence in Luzon. Some 1,000 went ashore from submarines to work undercover as civilians.
The First Filipino Infantry Regiment earned a reputation for fighting bravely and with honor, closely paralleling the record of the more widely known Japanese American 442nd Regimental Combat Team. At the war's end, 555 soldiers returned to the United States, 500 of whom reenlisted; 800 of the regiment remained in the Philippines. Altogether, more than 7,000 Filipinos served in the United States Army.
The United States Navy began early to recruit Filipinos in the Philippines, Hawaii, and the mainland. By the end of World War I, about 6,000 Filipinos had served in the Navy or the Marine Corps. During the 1920s and 1930s, enlistments totaled about 4,000. However, the only billet open to these men was mess steward, for the Navy had determined during World War I that this was the best assignment for Filipinos. During World War II, the Navy continued its mess-boy policy and denied these men the opportunity to secure other ratings and privileges.
In 1970, over 14,000 Filipinos served in the Navy. Most had sea duty as personal valets, cabin boys, and dishwashers. Captains and admirals had Filipino stewards assigned directly to them. Others worked at the White House, the Pentagon, the United States Naval Academy, and at naval bases. At the same time, the Navy discovered that its ships' galleys had become "Filipino ghettos." The Navy then provided opportunities for a few to train for other ratings. Some 1,600 Filipinos gained new assignments. The Navy continued to recruit mess stewards in the Philippines. Of the some 17,000 Filipinos in the Navy in 1970, 13,600 were stewards. Those in the Navy did not complain quite as much as did outsiders. The steward's entry-level pay of $1,500 equaled the salary of a lieutenant colonel in the Philippine Army. Naval service was an important way for Philippine nationals to gain American citizenship.
James Misahon was a prominent administrator at the University of Hawaii and served as the chairperson of the 1969 Governor's Statewide Conference on Immigration in Hawaii. Many other Filipinos are active in public and higher education.
Two prominent authors of earlier years were Manuel Buaken, who wrote I Have Lived with the American People and Carlos Bulosan, author of America Is in the Heart.
Several Filipinos have entered politics and won election to office. Those in Hawaii have had the most success, in part because of large Filipino enclaves and because of their strength in the ILWU, a strong arm of the Democratic Party in Hawaii. In 1954, the Democratic Party gained control of the legislature and won the governorship in 1962; Democrats have controlled Hawaii's politics ever since. Between 1954 and the winning of statehood in 1959, three Filipinos were elected to the House of Representatives. Peter Aduja, born in Ilocos Sur, received his education in Hilo, Hawaii, and graduated from the University of Hawaii before completing his law degree at Boston University. He was elected to one term in 1955 but was defeated in his bid for a second term. After statehood, Aduja was elected to three terms, starting in 1966. Bernaldo D. Bicoy, another of the five Filipino lawyers in the Territory of Hawaii, was elected in 1958. He was defeated in 1959 for a seat in the new state senate, but he won election for one term to the House in 1968. The third pioneer Filipino legislator was Pedro dela Cruz of Lanai, a longtime ILWU labor leader who was first elected to the House in 1958 and served 16 years until his defeat in 1974. In his later years in the House, he served as vice speaker. During the 1970s Emilio Alcon and Oliver Lunasco served as representatives from the island of Oahu.
Alfred Laureta became Hawaii's first Filipino director of the Department of Labor and Industrial Relations. Born in Hawaii on May 21, 1924, he graduated from the University of Hawaii in 1947 and then received his law degree from Fordham University. Governor John Burns appointed him to the directorship in 1963 and then in 1969 appointed him judge of Hawaii's Circuit Court One. He then became the first federal judge of Filipino ancestry. Since then Benjamin Menor and Mario Ramil were appointed to the Hawaii Supreme Court. In 1999, there were five judges in Hawaii and two in California.
In 1974 Benjamin Menor, born in the Philippines on September 27, 1922, became the first Filipino appointed to the Hawaii State Supreme Court. He migrated to Hawaii in 1930 and graduated from the University of Hawaii in 1950, later earning his law degree from Boston University. After practicing law in Hilo, he served for a time as Hawaii county attorney. In 1962 he was elected to the Hawaii State Senate, becoming the first Filipino in the United States to be elected as a state senator.
Two other Filipino firsts also occurred in Hawaii. In 1975, Eduardo E. Malapit, who had served several terms on the Kauai County Council, was elected mayor of Kauai. In 1990, Benjamin Cayetano, a member of the Hawaii legislature, was elected lieutenant governor of Hawaii.
Only a few Filipinos have achieved political success outside of Hawaii. In California, Maria L. Obrea has served as Los Angeles municipal judge; G. Monty Manibog served as mayor of Monterey Park; Leonard Velasco held the same office in Delano. Glenn Olea was a councilman in the Monterey Bay community of Seaside.
Most American sports enthusiasts remember Roman Gabriel, who gained national recognition as quarterback for the Los Angeles Rams football team.
For a good list of Filipino media, try the Kang & Lee list at http://www.asian.mediaguide.com/filipino/fm.html .
From the early 1920s to the late 1980s, several Filipino newspapers were published, although their existence was generally short-lived. In Hawaii, the Kauai Filipino News became the Filipino News in 1931. In California, early papers were the Philippine Herald of 1920, the Commonwealth Courier of 1930, and the Philippine Advocate of 1934. In 1930, the Philippine Mail began publishing in Salinas, California. It succeeded the Philippine Independent News, started in Salinas in 1921. The Philippine Mail is still published in Salinas, making it the oldest Filipino newspaper in the United States. Over the years, it has reported news from the Philippines as well as news stories about Filipinos in the United States. In the 1990s, Filipino publications included the Philippine News, printed in South San Francisco, the Filipinas Magazine of San Francisco, and The Philippine Review of Sacramento and Stockton, California.
Bi-weekly newspaper for Filipino communities. Most widely-read periodical for Filipinos in the United States.
Address: Tri-Media Center Building, 4515 Eagle Rock Boulevard, Los Angeles, California 90041.
Telephone: (323) 344-3500.
Fax: (323) 344-3501.
Contact: Rene Ciria-Cruz, Editor.
Address: Filipinas Publishing, Inc., 655 Sutter Street, Suite 333, San Francisco, California 94102.
Telephone: (415) 563-5878; or (800) 654-7777.
Online: http://www.filipinasmag.com .
Magazine founded in 1992. Covers Filipino American interests and affairs.
Weekly newspaper for the Filipino community with six U.S. editions.
Contact: Alex A. Esclamado, Publisher.
Address: 371 Allerton Avenue, San Francisco, California 94080.
Telephone: (415) 872-3000; or (800) 432-5877.
Fax: (415) 872-0217.
Online: http://www.philippinenews.com .
Filipino American National Historical Society.
Gathers, maintains, and disseminates Filipino American history.
Contact: Dorothy Cordova, Director.
Address: 810 18th Avenue, Room 100, Seattle, Washington 98122.
Telephone: (206) 322-0203.
Fax: (206) 461-4879.
Asian American Studies Center at the University of California, Los Angeles; Asian American Studies Department at the University of California, Davis; the Oakland Museum History Department in Oakland, California; and the Social Science Research Institute of Hawaii at the University of Hawaii in Honolulu.
A Close-Up Look at Worms under the Microscope
In this microscope lesson, we will perform a simple and interesting science experiment using the microscope. We will use both a low power and a high power microscope. Unlike some microscope experiments in which we only have to raid our kitchen for samples, this time we are going to get our fingers dirty obtaining our next sample. We are going to dig for worms as part of our field nature experiment.
First, we are going to look for a tubifex. A tubifex is a thread-like worm that measures from half an inch to one and a half inches in length. They live on the mud floor of ponds. To obtain one, we have to look carefully at the mud on the bottom of the pond. You will notice that there seems to be a red tinge among the brown. If you look at it closely, you will see that these are actually numerous tubifex.
The tubifex builds and lives in a tube. It buries its head inside the tube while its tail sticks up into the surrounding mud. This worm feeds on decaying organic matter at the bottom of the mud and ejects its waste from its tail.
Let us gather the mud where a tubifex lives and separate the worm from its habitat. We can observe a tubifex under a low power binocular stereo microscope, and we will see that the worm is actually colorless and transparent. The bright red hue that we see is due to the worm's blood. When many tubifex are gathered together, they give the mud a colored appearance.
Tubifex belong to the same group as earthworms. Like earthworms, they are divided into rings or segments that can be seen more clearly under a simple child microscope. On each segment, we can see short hair-like bristles arranged in a single row along the sides of the worm's body. They also have curved spines, called foot spines, that the worms use in locomotion. These foot spines may be hard to spot because they retract, so it is advisable to compress the worm between a microscope slide and its cover slip. If we look at them under a high power compound light microscope, we will see that these foot spines appear to be curved or forked.
There are also aquatic worms that have so many bristles that they are called bristleworms. One species, called Nais, is commonly found in mud, on algae, or on the leaflets of water plants. Under a low power microscope, they look yellowish or whitish in color, and we can see how the bristles aid them in moving about. Another species that can be observed under a kid microscope is the Dero. The dero measures no more than a quarter of an inch and has a broad end that looks like a funnel.
There are many more worms that we can observe using a child microscope. Among the most common ones, found on submerged stems and leaves, are flatworms. Flatworms belong to the group Turbellaria because of the turbulent lashing of the cilia that are responsible for their movement. To view the cilia, however, we will need a high power compound microscope. If we only need to see the effect of the cilia's movements, a one inch objective lens can be used to observe the currents in the water created by the cilia's motion. We will see small objects being swept away by the rapid motion.
One of the most common flatworms goes by the name of Planaria. If we look at these leech-like creatures under a student microscope, we will see that they are velvety black in color and have slender bodies. They have broad heads and pointed tails and measure about a quarter of an inch long. Another kind of flatworm is the Dendrocoelum. Under a child educational microscope, they look smaller compared to the planaria and have a creamy white color.
These worms are among the many others that we can obtain and observe under a child microscope. While they are interesting as well as educational to examine, we have to remember that these are still dirty little creatures that can harm our health. Let us remember to wash our hands after the experiment.
According to the most reliable information obtainable, the first school in
Manistee was taught by Mrs. PARSONS in the year 1852 near CANFIELD's Mill
at the mouth of the Manistee River. But little can now be learned respecting
this pioneer school. It was doubtless a very primitive affair with small
attendance and was supported by private contributions. A short time afterwards
a school house was built on the northwest corner of Spruce and First streets,
where the first public school was established. It was a plain, unpainted,
one room building with crude interior furnishings, such as may still be seen
in some remote country districts. A row of pine board benches ran along the
inner walls with open desks in front. Attached to these was another row of
seats without desks, for the smaller children, who were thus always kept
within easy annoying distance of the older pupils occupying the back seats.
Movable benches without arms or backs were used for recitation purposes.
Some of Manistee's well-known citizens taught in this building.
Mrs. T.J. RAMSDELL had charge of the school in 1861 and for her services
received the munificent sum of twenty-eight dollars per month. Her sister
taught during the following year and it was in the early sixties that T.J.
RAMSDELL taught for a short time. The annual school meeting had voted to
employ a man teacher. It transpired that Mr. Ramsdell was the only available
man in the entire district, so he consented to wield the rod during a three
months' term. Miss TIBBITTS, now Mrs. Dr. FAIRFIELD, also taught here. While
the building was still used for school purposes, the stumps which had been
left underneath when it was built, began to appear above the floor. As stumps
do not grow, the presumption is that Manistee's first public school house
had not a very durable foundation. The First Congregational church, which
now owns the costliest edifice in the city, was then homeless and held its
Sunday services and week-day prayer meetings in the little school house.
As political meetings were also held there, it appears that there was once
a time in our history when public education, politics and religion could
dwell amicably together under the same roof. But notwithstanding the fact
that a private school had meanwhile been established near the present
site of A.O. WHEELER's residence, the little school house at last became
inadequate and on May 10th, 1865, a public meeting was held for the purpose
of selecting a location for a new building. The present site of the Central
school was chosen and a four-room brick building ordered built. The contract
for its construction was afterward let to T.J. RAMSDELL, who completed it.
School opened in a new building in 1867 with D. CARLETON as principal. At
that time it was thought that this building would accommodate the children
of the entire city for many years, but in 1870 we find its capacity doubled
by the addition of four more rooms and supplementary schools established
in the first and third wards. A year later a single room building was erected
in the fourth ward near the foot of RIETZ's hill. Charles HURD was then superintendent.
My intimate acquaintance with the schools began in 1872 when I assumed the
superintendency. They remained in my charge four years, during which time
they were thoroughly organized and graded. The first courses of study, a
uniform set of text books and regulations for the guidance of teachers and
pupils were adopted and printed, and in 1874 the first annual report containing
complete and useful statistics was published in pamphlet form and gratuitously
distributed among the school patrons. Music, as a distinct branch of study,
was introduced into the schools in 1873. Mrs. D.H. BUTLER was the first special
music teacher. This was about the time of the great temperance crusade which
resulted in materially increasing the library fund. About five hundred dollars
were invested in books which was the beginning of the school library. Frequent
additions were made until at the time of its destruction by fire it numbered
over 2,000 volumes. High school courses of study were adopted in 1874 and
the first pupils graduated in 1879. There were only two in the graduating
class, viz. Nora B. BULLIS and Kittie P. EDINGTON. Upon my retirement Charles
HURD again became superintendent, retaining the position two years. He was
succeeded by David BEMISS, who was followed in 1881 by Webster COOK, who
remained four years, then Mr. GILLETT had charge for one year, which takes
us to Mr. JENNING's administration.
There are now nearly four times as many pupils and six and one-half times
as many teachers as in 1874, and the amount paid the high school teachers
alone is greater than the total received by the entire corps of teachers
twenty-five years ago. Truly we have grown, but growth has been so quiet
and gradual that we only realize its extent when confronted with strong contrasts
between periods separated by years of time. Since the erection of the 4-room
building in the second ward in 1866 our capacity has increased sixteen-fold.
Every new building or addition has seemed to be the last, yet the demand
still is for more room. Passing the quarter century in review we are reminded
of some critical periods through which our schools have passed. The great
temperance movement in the early seventies threatened to invade them, but
the good judgment and continued firmness of those in authority preserved their
non-partisanship and carried them safely through that memorable struggle.
The religious contest of later years seemed still more threatening but was
finally settled without materially interfering with the progress of the schools.
On the whole the people have reason to rejoice over the past and present
prosperity and good management of the schools and should continue to heartily
support those having them in charge.
Imagine you were a very clever ant, living on a large log, floating in a big lake…
…a very large, deep, cold lake.
Being a not incurious, clever ant, you contemplate the lake in its infinite and insurmountable vastness.
Surely knowing what lies on the lake, or even beyond the lake (if such can be conceived) is not feasible.
Then, you notice that the lake has ripples.
Ripples, and waves, and swirls.
So, being a methodical sort of ant, you start measuring the ripples and waves and swirls, and you get your student ants to measure them also.
You measure lots of waves, very very carefully.
Being a very clever ant, you realise that you can figure out something about the lake!
For example, you realise that while it is a very deep lake, it is not infinitely deep.
In fact you can, eg by measuring some of the swirls, and where the waves break, figure there are shallows in some parts of the lake.
You can also measure your drift speed in the lake, relative to some arbitrary standard of rest for the bulk water.
You then make a tremendous discovery: the lake has a boundary!
It is as if you are on a great circular lake, surrounded by steep hard cliffs.
The waves bring you information from the edge of the lake!
Oh, you also, incidentally, discover from the bends in the ripples on the lake, that there are other floating logs and structures on the lake, far away, but much closer than the edge.
That could be useful.
Surely this is it. There can be no more known.
Even the most diligent student ant could never swim to the edge of the lake to explore the cliff, it is too far. They’d barely make it to the nearest floating log. By the time they got back their advisor would be retired, their candidacy stale, a caution on their matriculation.
They’d never get to be doctor ants.
No point swimming those waters…?
But, then you realise that very very very careful measurements of lots more ripples and waves and swirls might, just might, tell you something about the cliffs!
The cliffs themselves will interact with the water waves differently depending on the height, and slope and composition of the cliff stuff. Why, the cliffs themselves might propagate internal waves which interact with the lake waves, carrying information about what, if anything, lies beyond the cliff edge.
So you send your best observant to the quietest most isolated tip of the log, and have them listen to the ripples and waves and swirls for a long long time.
And, you find that, indeed, the subtlest swirls of the waves do in fact tell you something about the cliffs: they are tall cliffs, but they are not infinitely tall; they appear to plateau and extend back for a considerable distance, but, if you measure carefully, there is, maybe, a hint of a rise to mountains in those hinterlands, taller than the cliffs, but not infinitely so.
Maybe the lake came from waters that flowed when the snow in those mountains melted a long time ago, as once conjectured by a wise theorant.
There is something beyond the lake!
This opens up a new world of speculation for even the most conservative doctor ants: maybe, just maybe, beyond those mountains, are other lows in the landscape, holding other lakes!
Big lakes, small lakes.
Round lakes, lochs, lakes through which rivers flow onto other lakes, lakes of all sorts of varied shapes and even different chemistries, and, perhaps, even oceans.
And on those various other lakes and rivers and seas, there might be other log-like things floating along, and, perhaps, on some of those logs are clever little ant-like creatures watching the ripples and waves and swirls in the waters and wondering if there are, anywhere else, other ants sitting on their logs, watching the ripples and waves and swirls in their waters.
The ants’ world just got a lot bigger.
Of Particular Significance… – good blog analysis of BICEP2 results
Asylum seeker and refugee support
In most countries a person must apply for asylum before they are recognised as a refugee. An asylum seeker is someone who arrives in a new country and makes an asylum application. It is then up to the Government to decide if their claim meets the definition of a refugee, as below.
The 1951 United Nations Convention on Refugees defines a refugee as someone who has fled their country due to 'a well founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion'.
Hampshire has relatively few refugee families. This means that many of them are isolated and very vulnerable. Schools will need to be flexible in their approach and provide a planned induction programme. This may be in conjunction with Hampshire Social Care if the child concerned is unaccompanied.
Please note that government law and guidance regarding asylum seekers and refugees are under constant review.
Support available from EMTAS
Hampshire EMTAS can
provide detailed guidance on asylum seeker/refugee issues including immigration, admission to schools/colleges, teaching and learning, pastoral advice, resources, useful contacts.
provide a bilingual assistant to translate or interpret.
give you practical teaching support, including an early profiling assessment of the pupil's previous education and current achievement levels to enable early and correct placement.
provide advice on the best course of action regarding GCSE choices/other courses when admitting pupils into Year 10 or Year 11.
provide in-class support for the child/young person.
provide training for school staff.
Hampshire EMTAS and the Virtual School for Children in Care have jointly produced a set of Frequently-Asked Questions (428 kB) in relation to Asylum-Seeking children and young people. Download the document.
Entitlement to compulsory education
Children/young people who are asylum seekers or refugees have the right to free education. This includes young people aged 16-19 who are entitled to attend school sixth forms or Further Education (FE) colleges.
Schools must by law treat any application from a child/young person seeking asylum in the same way as any other application. The fact that the child/young person may speak little or no English does not matter.
Free school meals
If your family has been issued with vouchers from the National Asylum Support Service (NASS) then your child is entitled to free school meals (and milk, where provided).
Schools may be able to provide you with some school uniform items for your child.
At the discretion of Hampshire Local Education Authority school uniform grants may be available.
Please ask the school to obtain a bilingual dictionary for use in school.
If you have a problem or are worried about your child's education, please speak to the class teacher or form tutor.
Continue to use your own language at home. Children who speak their first language well will become better speakers and users of English. Talk to your child about their lessons. This will help your child to understand more.
Information for unaccompanied children and young people
You can listen to sections of the 'Welcome to Hampshire' booklet being read aloud in English.
Asylum process introduction
Asylum process screening
Asylum process first meeting
Asylum process interview
Asylum process decision
Asylum process appeal
Working in the UK
Sport and recreation facilities
Contacting family and friends
A Viable Alternative in Electric Power Production
Caterpillar offers durable and reliable solutions for the use of biogas. The initial investment can yield significant long-range savings, in addition to being environmentally responsible. Biogas, which is derived as a byproduct of many agricultural, food processing and industrial processes, is used today as a fuel source for engine generators.
By tapping into biogas resources with proven Caterpillar technology, agricultural biogas and industrial biogas projects result in the:
- Reduction of carbon emissions through the destruction of naturally occurring methane
- Reclamation of valuable land space normally utilized for purifying organic wastewater
- Elimination of odor and pest issues that are caused by the decomposition of organic material
How It Works
Consisting primarily of methane and carbon dioxide, biogas is produced through the anaerobic decomposition of organic waste. Instead of allowing uncontrolled decomposition of these wastes and the release of gases into the atmosphere, they are contained in an oxygen-deprived environment such as a covered lagoon or aboveground steel tank. From there, methane is extracted and burned to generate electricity or heat.
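As a rough illustration of the energy arithmetic involved in burning extracted methane for power, the sketch below estimates continuous electrical output from a biogas stream. The heating value and the 40% generator-set efficiency are illustrative assumptions, not Caterpillar figures.

```python
# Rough estimate of electrical output from a biogas stream.
# Assumed values (not from the source): methane lower heating value
# of about 9.97 kWh per cubic metre, and ~40% electrical efficiency
# for the engine-generator set.

METHANE_LHV_KWH_PER_M3 = 9.97   # assumed lower heating value of methane
ELECTRICAL_EFFICIENCY = 0.40    # assumed engine-generator efficiency

def electrical_output_kw(biogas_flow_m3_per_h: float,
                         methane_fraction: float) -> float:
    """Estimate continuous electrical output (kW) from a biogas flow."""
    methane_flow = biogas_flow_m3_per_h * methane_fraction   # m^3 CH4 per hour
    fuel_power_kw = methane_flow * METHANE_LHV_KWH_PER_M3    # thermal power, kW
    return fuel_power_kw * ELECTRICAL_EFFICIENCY

# Example: 500 m^3/h of biogas at 60% methane content
print(round(electrical_output_kw(500, 0.60), 1))  # 1196.4 kW
```

The methane fraction input also shows why engines designed for variable methane content matter: a drop from 60% to 50% methane cuts output by a sixth at the same gas flow.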
Because of the impurities and inconsistencies in biogas, pretreatment is generally required. However, using a gas engine specifically designed to operate on biogas can reduce the amount of investment in pretreatment. Cat® low-energy fuel generator sets are engineered to handle fuel with variations in methane content typical of biogas operations.
Cat biogas engines typically include:
- A crankcase ventilation pump to eject potentially acidic blow-by gases
- Specially designed aftercooler cores, cylinder heads, main bearings and connecting rod bearings that are hardened against corrosive elements
- Differentiated cooling systems to operate at elevated jacket water temperatures to prevent condensation of contaminants
Cat switchgear with PLC-based controls is supplied to parallel the generator to the local electrical grid to export renewable power at a premium to the local electric utility. For more remote installations, the power produced can be consumed by the facilities on site.
Why did the City create a Stormwater Management Utility?
The City of Santa Cruz is required by the federal Environmental Protection Agency and the state Department of Water Resources to address water pollution associated with stormwater runoff from streets and properties in the City. Studies indicate that stormwater runoff is a major contributor of pollutants to the San Lorenzo River and Monterey Bay. In addition to pollution control requirements, the City faces significant flood control commitments for the San Lorenzo River Flood Control Project.
The City created the Stormwater Management Utility and established utility fees to help pay for the City's share of costs for flood control projects and stormwater pollution prevention. Total costs to implement the San Lorenzo Flood Control Project are estimated at over $66 million, shared between the federal government, state and City, with the City's share of costs estimated at around $4.4 million. These costs include the reconstruction of four bridges, levee raising, river landscaping, and the Laurel Street Extension/Third Avenue Riverbank Stabilization Project. Estimated costs for stormwater pollution abatement are over $650,000 per year.
How does the City address stormwater pollution?
The City implements programs to monitor stormwater for pollutants, improve stormwater system maintenance, and provide educational activities to individuals, businesses and agencies impacting stormwater. The City has a Stormwater Ordinance that establishes standards for keeping stormwater clean. Best management practices for specific areas such as retail, industrial, and construction activities were developed and are codified by the Stormwater Ordinance. These activities support the goal of the City to minimize the pollutants from the City storm drain system entering Monterey Bay National Marine Sanctuary. Stormwater utility fees and Clean River, Beaches and Ocean tax (Measure E) funds pay for these activities.
What is the Citywide Stormwater Management Fee?
A stormwater management fee is charged to each property within the Santa Cruz city limits. The stormwater fee is based on the average stormwater runoff from various land use types within the City and on property size. Property land uses which have a high percentage of area covered by buildings and pavement generate more stormwater runoff than vacant land or uses with less coverage. Therefore those properties with more impermeable area are charged a higher user rate per acre.
Parcels with single-family residences are charged a flat amount of $21.24. All other rates are based on acreage. A vacant parcel is charged at $5.28 per acre. An average commercial parcel would be charged around $85 per year, based on the $261.09/acre rate. Citywide fees fund stormwater pollution reduction programs and the City's share of bridge improvements on the San Lorenzo River.
City Stormwater Management utility fees are billed on the property tax statements mailed by the Santa Cruz County Tax Collector. The fees should be paid with property taxes. Fees collected by the County are then turned over to the City.
What will the Flood Levee Improvement Project do?
The U.S. Army Corps of Engineers and the City of Santa Cruz have agreed to jointly fund a project to raise the height of the San Lorenzo River levees from one to five feet, depending on location, and to restore riparian habitat along the levees. The San Lorenzo River Flood Control and Restoration Improvements project has been in the works since 1978, and has cost over $22 million. The City's share of this cost is $1.1 million.
The improvements will be built in several phases depending on Congressional funding. Congress approved $4.8 million in the Fiscal Year 2000 federal budget and construction of Phase I began in August, 1999. Phase I of the project extends from Highway 1 to Water Street and Soquel Avenue Bridge to Riverside Bridge. A small section of the levee was raised in 1998 near Highway 1 behind the new Gateway Shopping Center. Phase II covers Water Street to Soquel and Riverside to the River mouth, and construction is scheduled to begin in the fall of 2000. The State Legislature in 2000 passed legislation authorizing State assistance to the project. This assistance will cover a large percentage of the required City share of the Corps portion of the project.
A second part of this project was replacement of the Riverside Avenue Bridge, the northern two lanes of Water Street Bridge, the Soquel Avenue Bridge, and retrofit of the Broadway/Laurel Bridge. The new structures are higher and allow freer flow of flood waters. Bridge construction is funded by a separate Federal program and the City's share has been financed from the citywide stormwater fees, since all City residents benefit from the improved bridges.
The third part of the project is the Laurel Street Extension/Third Street Riverbank Stabilization. This project constructed a natural rock form wall along this section of the river to prevent the collapse of these streets into the river. Vegetation will be planted along the toe of the wall adjacent to the river to provide shade for fish and other wildlife. The project cost $6.2 million and the City's share after federal and state assistance was $120,000.
The primary purpose of the project is to reduce flood damage and loss within the City of Santa Cruz 100-year floodplain. According to the Federal Emergency Management Agency (FEMA), the December 1955 flood caused over $40 million in damage. The U.S. Army Corps of Engineers estimates that today a 100-year flood in the downtown area would cause $86 million in damage.
Which properties are affected by levee improvements?
All parcels which lie within the 100-year flood plain of the San Lorenzo River within the Santa Cruz city limits receive increased flood protection from the levee improvements. About 1600 parcels lie within the floodplain.
What are the flood insurance benefits to affected parcels?
Most properties within the 100-year floodplain are required by the Federal Emergency Management Agency (FEMA) to purchase flood insurance. Annual premiums for a single-family home range from $200 to $300. In 2002, FEMA recognized that the levees are already providing increased flood protection and granted an interim A99 flood zone designation to most parcels in the 100-year floodplain. This designation allows flood insurance premiums to be reduced by 40%. Property owners can get more information on this change on the City website under Flood Insurance Premium Reduction or by contacting their insurance agent. Once the river levee is completely finished, the City will apply to FEMA to have the 100-year floodplain boundary revised. This should remove the FEMA flood insurance requirements for these properties. Most property owners will save money when this happens; they will pay less in annual stormwater fees for the levee project than they now pay for flood insurance premiums.
What is the flood levee improvement fee?
Those parcels which lie within the 100-year floodplain are charged a stormwater utility fee to pay the City's share of the flood levee project. This fee is in addition to the existing citywide stormwater utility fee which is financing bridge improvements and citywide flood management issues. Both fees were approved by the City Council in 1994. The fees are based on the average amount of stormwater runoff for the various land uses and on parcel size.
Land Use Type          Rate
Vacant land            $21.85/acre
Single-family parcel   $87.86/year
Commercial parcel      $1,079.50/acre
The City Flood Levee Improvements utility fees are billed on the property tax statements mailed by the Santa Cruz County Tax Collector. The fees should be paid along with annual property taxes. Fees collected by the County are turned over to the City.
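To make the fee arithmetic concrete, here is an illustrative calculator combining the citywide rates quoted earlier ($21.24 flat for single-family, $5.28/acre vacant, $261.09/acre commercial) with the levee rates for floodplain parcels. Only the dollar rates come from the article; the function itself and the example parcel sizes are hypothetical.

```python
# Sketch of the stormwater fee arithmetic. Rates are from the article;
# the structure and example parcel sizes are illustrative only.

CITYWIDE_RATES = {
    "single_family": ("flat", 21.24),
    "vacant": ("per_acre", 5.28),
    "commercial": ("per_acre", 261.09),
}
LEVEE_RATES = {  # additional fee for parcels in the 100-year floodplain
    "single_family": ("flat", 87.86),
    "vacant": ("per_acre", 21.85),
    "commercial": ("per_acre", 1079.50),
}

def annual_fee(land_use: str, acres: float, in_floodplain: bool = False) -> float:
    """Total annual stormwater fee for one parcel, in dollars."""
    def charge(rates):
        kind, rate = rates[land_use]
        return rate if kind == "flat" else rate * acres
    total = charge(CITYWIDE_RATES)
    if in_floodplain:
        total += charge(LEVEE_RATES)
    return round(total, 2)

print(annual_fee("single_family", 0.15))        # 21.24 (flat, acreage ignored)
print(annual_fee("single_family", 0.15, True))  # 109.1 (21.24 + 87.86)
print(annual_fee("commercial", 0.33))           # 86.16 (roughly the "average
                                                # commercial parcel" figure)
```

A 0.33-acre commercial parcel at $261.09/acre comes to about $86, matching the "around $85 per year" figure for an average commercial parcel quoted above.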
For more information contact:
Public Works Principal Management Analyst
City of Santa Cruz Public Works Department
809 Center Street, Room 201
Santa Cruz, CA 95060
19 August 2013

The Truth about the Texas Lizard “Conservation” Plan

Posted by: Ya-Wei Li, Policy Advisor, Endangered Species Conservation

Crane County, Texas is a land peppered with oil and gas wells, connected by arteries of pipelines and dirt roads. It’s one of the top counties for oil and gas production in Texas. It’s also where the dunes sagebrush lizard is trying to persist amidst all the mayhem.

Last June, the U.S. Fish & Wildlife Service decided that it no longer needed to list the lizard under the Endangered Species Act, partly because it had signed a conservation plan (called the Texas Habitat Conservation Plan) for the lizard with the Texas Comptroller of Public Accounts. Earlier this year, we explained why the plan can’t protect the lizard. For one reason, it doesn’t describe how landowners will protect the species from oil and gas development, off-road vehicle use and other activities that can squish lizards. Without this information, the Service has absolutely no idea whether the plan will live up to its promises.

The Comptroller certainly believes it will. Ever since it signed the plan in April 2012, it’s been reporting every month to the Service that not a single acre of enrolled habitat has been disturbed. That’s right, nothing across over 138,640 acres in some of the most productive oil and gas counties in Texas. Sound too good to be true? We thought so too, so we launched our own investigation.

My colleague, Andy Shepard, and I compared aerial images taken immediately after an area was enrolled in the Texas plan, with images taken four and then thirteen months later. What we discovered were multiple instances of habitat destruction that the Comptroller was required to report to the Service, but never did. We’re talking about new oil drilling pads, dirt roads and land clearings. You can see all the before and after images in our newly-released report and supplemental presentation, which show the images in higher quality.
As an example, the images below show three oil well pads, each about 400 feet wide, created after the plan went into effect.

[Before and after aerial images: ©Defenders of Wildlife]

A website that collects Texas oil and gas permitting information (www.texas-drilling.com/crane-county) places a red dot on all the areas in Crane County with recently issued oil and gas drilling permits. As expected, a dot appears over each of these well pads. Only a few hundred feet away, we found more new oil pads and roads that apparently don’t exist – at least, according to the Comptroller they don’t.

The story gets even better. You might be wondering how the Comptroller keeps track of which areas are destroyed by oil and gas development. A non-profit organization called the Texas Habitat Conservation Foundation was created to collect and report this information to the Comptroller, and to administer other crucial parts of the Texas Plan. As it turns out, however, this so-called “conservation” foundation is directed solely by lobbyists for the Texas Oil and Gas Association.

Legally, every time oil and gas developers disturb lizard habitat, they are required to pay a fee under the Texas plan to offset the impacts of their development activity. If no disturbance is reported, then there are no fees to pay. It’s hard not to suspect a conflict of interest here.

[Photo: Dune sagebrush lizard, ©Mark L. Watson]

Monetary suspicions aside, without knowing how much habitat is destroyed, the Service can’t effectively ensure that the lizard is protected. Under the Texas plan, habitat destruction is capped at one percent of the total habitat for the species within the first three years. If this limit is exceeded, the Service will likely have to reconsider listing the lizard. But without a system to verify the Comptroller’s claims about habitat disturbance, the Service might not know if and when this one percent limit is exceeded.
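As a back-of-the-envelope illustration of why accurate reporting matters, the sketch below checks cumulative reported disturbance against a one percent threshold. The cap in the plan applies to the species' total habitat; here the 138,640 enrolled acres stand in for that total, and the monthly disturbance figures are hypothetical.

```python
# Illustrative check of the plan's disturbance cap. The 138,640-acre
# figure is the enrolled acreage mentioned above, used as a stand-in
# for total habitat; monthly disturbance numbers are hypothetical.

TOTAL_ACRES = 138_640
CAP_FRACTION = 0.01  # destruction capped at 1% in the first three years

def cap_exceeded(monthly_disturbed_acres) -> bool:
    """Return True once cumulative reported disturbance passes the cap."""
    return sum(monthly_disturbed_acres) > TOTAL_ACRES * CAP_FRACTION

# If zero disturbance is reported every month, the cap is never
# triggered -- regardless of what aerial imagery shows.
print(cap_exceeded([0.0] * 13))       # False
print(cap_exceeded([500, 400, 600]))  # True: 1,500 acres > 1,386.4-acre cap
```

The point of the sketch is the first call: a cap enforced only through self-reported numbers can never trip if the reports are all zeros.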
We hope our investigation sends a signal far and wide that conservation plans like this one, where the Service is completely in the dark about plan compliance, simply don’t work. At the most basic level, a plan shouldn’t leave the public or the Service in the dark about how it will protect a species – especially if the plan is used as an excuse to avoid listing an imperiled species. And if a plan relies on self-reporting, it shouldn’t be performed solely by lobbyists for the industry that poses the greatest threat to the species. Fortunately, tools like GIS mapping allow us to shed light on some of the darkest corners of how states try to avoid listing an endangered species.

About the author: Ya-Wei (Jake) Li, Director of Endangered Species Conservation, is an environmental lawyer who specializes in endangered species law, policy, and science. He focuses on improving how the Endangered Species Act is implemented, so that it becomes more effective and efficient at conserving wildlife.
Over the Holiday season, Santa Claus sees his face plastered around the world, from commercials for Coca-Cola in the U.S, to reprising his role as Dun Che Lao Ren, or ‘Christmas Old Man,’ in China. But did Santa Claus even exist? KPCC’s Patt Morrison spoke with Adam C. English, author of ‘The Saint Who Would Be Santa Claus: The True Life and Trials of Nicholas of Myra’ to find out how Saint Nicholas was transformed into a worldwide phenomenon.
It’s difficult to think of Christmas without thinking about Santa Claus: the plump, white-bearded jolly man in a red suit, who brings gifts to well-behaved kids on his reindeer-led sleigh. But the story behind the real Santa Claus isn’t quite so refined.
Santa Claus is an interpretation of sorts of St. Nicholas, an actual Saint and Greek Bishop with a remarkable story. In his new book The Saint Who Would Be Santa Claus, Adam C. English pulls together an historical portrait, based on documents, archaeology, and legend, of a charitable bishop with a passion for social justice, and an important role in perhaps Christianity’s most important period: the conversion of the Roman Empire.
St. Nicholas served as the Bishop of Myra, in present-day Turkey, and was renowned for his commitment to the impoverished citizens of his town. The legend of Santa Claus and his gifts derived from Nicholas giving away his possessions to impoverished families. St. Nicholas was also involved in critical historical events that saw Christianity emerge as a dominant world religion. He played a key role in destroying the temple of Artemis in Myra, a pivotal event in the effort to erase Rome’s pagan past and cement its Christian future. He also attended the Council of Nicaea, one of the seminal events in early Christianity, and the birthplace of the Nicene Creed.
This interview originally aired on KPCC's Take Two
St. Nicholas as an activist:
“That was some of the most surprising bits for me to discover, to really sort of reshape the jolly elf of generosity that we’ve come to imagine him to be, and to see that he was also a man of public action. Yes, there are stories about him stopping the beheading of three innocent men, and him going all the way to the capitol to petition for lower taxes, and stopping grain ships to barter for grain for the starving people of Myra. So he worked for the people, and did much more than simply give gifts.”
The reasons for St. Nicholas' early popularity:
“Some of the earliest stories about Nicholas tell him getting on board ships with the sailors, rolling up his sleeves, and going to work, helping with the oars and the ropes. And so he was immensely popular with them. I think that’s one of the reasons for the early spread of his story and his fame. Sailors are taking icons and images and statues, and of course stories, with them up rivers and across seas, and everywhere they’re going, of course, they’re telling the stories of St. Nicholas. So very quickly he becomes a very popular saint throughout Europe.”
Adam English on his hopes of the impact of St. Nicholas:
“My real hope is that learning more about St. Nicholas really enriches our own family and Santa Claus traditions. I don’t want to simply say no to Santa Claus, but I really want to say yes to St. Nicholas. What Nicholas does, is challenge us to broaden our horizons. The Santa Claus traditions of gift giving focus on our immediate, intimate family. What St. Nicholas does, and what families do by bringing St. Nicholas into the home, is to broaden beyond the walls of the family. St. Nicholas gave to those whom he did not know, and did not love, those in the most need, and that is really something that can be added into family celebrations of Christmas, giving gifts not only to their family members whom they know and love, but to those who are in need whom they do not know.”
By 2020, there will be nearly a million "smart" parking spaces around the world, according to Navigant Research. More municipalities and corporations are adopting the technology for a variety of reasons, chief among them congestion. The systems typically consist of sensors installed in parking lots that alert drivers with compatible interfaces when spaces open up, and they can also signal police when a car has overstayed its metered time.
As Navigant notes, some 30% of a given city's traffic gridlock is caused by drivers circling the blocks, looking for a spot. Smart parking aims to eliminate both the unpleasantness and the arterial clog in the city's traffic flows.
IEEE Spectrum's report on the smart system installed at the Baltimore/Washington International airport, the first to utilize smart parking tech, offers a good look at how these systems work in a parking garage-type setting:
"The airport installed a smart parking system for its hourly and daily garages, which combine to offer 13,200 parking spaces. Sensors embedded in each parking space at BWI detect whether the space is occupied, with that information fed into a central parking management system." But in this case, the system operates under the assumption that the driver doesn't have an app or installed tech to guide them to the spaces. Instead, the garage itself shepherds drivers towards open spots.
"As drivers approach BWI on their way to departing flights, they see signs showing the availability of parking at the airport’s garages," the report explains. "As a passenger enters a garage, signs indicate the total number of parking spaces available and the number on each level. At the levels, there are additional signs that tell the passenger how many spaces are available per row. A light over each space indicates whether it is available: green for open, red for occupied."
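The aggregation described above, per-space sensors feeding a central system that drives per-level and per-row signage, can be sketched in a few lines. This is a minimal illustrative model: the class, method, and field names are invented for the example and are not BWI's actual software.

```python
from collections import defaultdict

class ParkingGarage:
    """Toy model of a sensor-driven parking system (illustrative only)."""

    def __init__(self):
        # occupancy[(level, row, space)] = True if a car is present
        self.occupancy = {}

    def register_space(self, level, row, space):
        self.occupancy[(level, row, space)] = False

    def sensor_update(self, level, row, space, occupied):
        # Each embedded sensor reports the state of its own space.
        self.occupancy[(level, row, space)] = occupied

    def free_counts(self):
        # Aggregate free spaces per level and per (level, row) for the signs.
        per_level, per_row = defaultdict(int), defaultdict(int)
        for (level, row, _), occupied in self.occupancy.items():
            if not occupied:
                per_level[level] += 1
                per_row[(level, row)] += 1
        return dict(per_level), dict(per_row)

garage = ParkingGarage()
for row in range(2):
    for space in range(3):
        garage.register_space(1, row, space)

garage.sensor_update(1, 0, 0, True)  # a car parks in level 1, row 0
per_level, per_row = garage.free_counts()
print(per_level[1])      # 5 free spaces remain on level 1
print(per_row[(1, 0)])   # 2 free spaces remain in row 0
```

A real deployment would add the per-space green/red light, which needs no aggregation at all: each light simply mirrors its own sensor's state.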
It's not only convenient, it's efficient. Jonathan Dean, who works for the Maryland Aviation Administration, told IEEE that ”Surface lots and other parking facilities must close at 75 percent to 80 percent of capacity, because at that point they essentially become full. At BWI, we can run to virtually 100 percent capacity.”
So that's how smart parking works in a garage—it's great for airports or other locales that require mass parking, like museums, stadiums, or exhibition centers. Where the real improvement stands to be made, however, is in on-street parking. Unfortunately, that's trickier business.
The New York Times recently rounded up some of the ongoing efforts in the arena:
"Smart-parking technology for on-street spaces is expensive, and still in its early stages. The largest examples are pilot projects with costs covered primarily by grants from the federal Department of Transportation. In San Francisco, the SFpark pilot project uses sensors from StreetSmart Technology for 7,000 of the city’s 28,000 meters. In Los Angeles, LA Express Park has installed sensors from Streetline for 6,000 parking spots on downtown streets."
These programs are linked to smartphone apps, and the cities encourage drivers to tune in. And while the Times is right that these efforts are expensive right now, parking is actually a huge industry: it employs 1 million people and rakes in $27 billion a year. There's room for real competition over who can provide the smartest and least painful services. Not to mention an incentive for cities to install the smartest tech to maximize ticket-fee revenues.
All of this shows we're moving towards a world where scavenging for a parking place will increasingly be a thing of the past—both because smart parking tech is improving and becoming more ubiquitous, and because car ownership itself is finally stalling out. Navigant says we may have hit "peak car ownership." All of which is good news; it means a shrinking carbon footprint (driving is the second biggest emitter worldwide) and less wasted time. If there's one thing we can all agree on about the future, it's that we don't want to spend it looking for places to park.
CHIN& 123 Chinese III • 5 Cr.
Further expands functional language ability in spoken and written Chinese. Students practice sounds and tones, vocabulary, and grammatical constructions; work with both traditional and simplified characters; and practice using Chinese in authentic situations. Continues to develop understanding of Chinese culture. Prerequisite: CHIN& 122 or permission of instructor.
After completing this class, students should be able to:
- Identify time descriptive words.
- Write Chinese character equivalents of those time words.
- Summarize with few errors a short passage on an everyday topic.
- Present a two minute speech in Chinese and convey ten facts about their culture with few errors.
- Read and comprehend a passage written in Chinese characters about someone’s culture.
- Use a Chinese word processor and write a passage about someone’s culture that has fewer than four errors.
- Compose a calligraphic work in cursive form applying traditional aesthetic principles.
The Garden Route is a popular stretch of the south-eastern coast of South Africa. It stretches from Heidelberg in the Western Cape to the Storms River which is crossed along the N2 coastal highway over the Paul Sauer Bridge in the extreme western reach of the neighbouring Eastern Cape. The name comes from the verdant and ecologically diverse vegetation encountered here and the numerous lagoons and lakes dotted along the coast. It includes towns such as Mossel Bay, Knysna, Oudtshoorn, Plettenberg Bay and Nature's Valley; with George, the Garden Route's largest city and main administrative centre.
It has an oceanic climate, with mild to warm summers, and mild to cool winters. It has the mildest climate in South Africa and the second mildest climate in the World, after Hawaii, according to the Guinness Book of Records. Temperatures rarely fall below 10°C in winter and rarely climb beyond 28°C in summer. Rain occurs year-round, with a slight peak in the spring months, brought by the humid sea-winds from the Indian Ocean rising and releasing their precipitation along the Outeniqua and Tsitsikamma Mountains just inland of the coast.
To the Virgins, to Make Much of Time (Gather ye rosebuds) Theme of Mortality
Mortality is a fancy word for the inevitability of death. "To the Virgins" talks about the death of a flower, the setting of the sun (another kind of death, and a metaphor for human life), about how getting older means getting closer to death, and about the possibility of a sort of living death, exemplified in the poem's nightmarish vision of an unmarried life. Even though death is everywhere, we can still make the most of what time we have.
Questions About Mortality
- Are you afraid of death? Do you have a "bucket list" (i.e., a list of things to do before you die)?
- Why is the poem so obsessed with death?
- What is the speaker's attitude toward death?
- What other works of literature portray death in the same way as this poem?
- Does the poem ever hint at rebirth? In which lines?
Chew on This
Although the speaker emphasizes the brevity of life, and thus the importance of acting during one's "prime," both the sun (stanza 2) and the flowers (stanza 1) suggest the possibility of rebirth or second chances. The rosebush will grow new flowers, and the sun will rise again.
"To the Virgins" describes both literal deaths (the flower) and a number of figurative ones (the setting of the sun; the "tarrying" of the last stanza), and it remains undecided about which is worse.
John Bowlby (1907 - 1990) was a psychoanalyst (like Freud) and believed that mental health and behavioral problems could be attributed to early childhood.
Bowlby’s evolutionary theory of attachment suggests that children come into the world biologically pre-programmed to form attachments with others, because this will help them to survive.
Bowlby was very much influenced by ethological theory in general, but especially by Lorenz’s (1935) study of imprinting. Lorenz showed that attachment was innate (in young ducklings) and therefore has a survival value.
Bowlby believed that attachment behaviors are instinctive and will be activated by any conditions that seem to threaten the achievement of proximity, such as separation, insecurity and fear.
Bowlby (1969, 1988) also postulated that the fear of strangers represents an important survival mechanism, built in by nature. Babies are born with the tendency to display certain innate behaviors (called social releasers) which help ensure proximity and contact with the mother or attachment figure (e.g. crying, smiling, crawling, etc.) these are species-specific behaviors.
During the evolution of the human species, it would have been the babies who stayed close to their mothers that would have survived to have children of their own. Bowlby hypothesized that both infants and mothers have evolved a biological need to stay in contact with each other.
These attachment behaviors initially function like fixed action patterns and all share the same function. The infant produces innate ‘social releaser’ behaviors such as crying and smiling that stimulate caregiving from adults. The determinant of attachment is not food but care and responsiveness.
Bowlby suggested that a child would initially form only one attachment and that the attachment figure acted as a secure base for exploring the world. The attachment relationship acts as a prototype for all future social relationships so disrupting it can have severe consequences.
1. A child has an innate (i.e. inborn) need to attach to one main attachment figure (i.e. monotropy).
Although Bowlby did not rule out the possibility of other attachment figures for a child, he did believe that there should be a primary bond which was much more important than any other (usually the mother).
Bowlby believes that this attachment is different in kind (qualitatively different) from any subsequent attachments. Bowlby argues that the relationship with the mother is somehow different altogether from other relationships.
Essentially, Bowlby (1988) suggested that the nature of monotropy (attachment conceptualized as being a vital and close bond with just one attachment figure) meant that a failure to initiate, or a breakdown of, the maternal attachment would lead to serious negative consequences, possibly including affectionless psychopathy. Bowlby’s theory of monotropy led to the formulation of his maternal deprivation hypothesis.
The child behaves in ways that elicit contact or proximity to the caregiver. When a child experiences heightened arousal, he/she signals their caregiver. Crying, smiling, and locomotion are examples of these signaling behaviors. Instinctively, caregivers respond to their children's behavior, creating a reciprocal pattern of interaction.
2. A child should receive the continuous care of this single most important attachment figure for approximately the first two years of life.
Bowlby (1951) claimed that mothering is almost useless if delayed until after two and a half to three years and, for most children, if delayed till after 12 months, i.e. there is a critical period.
If the attachment figure is broken or disrupted during the critical two year period the child will suffer irreversible long-term consequences of this maternal deprivation. This risk continues until the age of five.
Bowlby used the term maternal deprivation to refer to the separation or loss of the mother as well as failure to develop an attachment.
The underlying assumption of Bowlby’s Maternal Deprivation Hypothesis is that continual disruption of the attachment between infant and primary caregiver (i.e. mother) could result in long term cognitive, social, and emotional difficulties for that infant. The implications of this are vast if this is true, should the primary caregiver leave their child in day care, whilst they continue to work?
3. The long term consequences of maternal deprivation might include the following:
• reduced intelligence,
• increased aggression,
• affectionless psychopathy
Affectionless psychopathy is an inability to show affection or concern for others. Such individuals act on impulse with little regard for the consequences of their actions. For example, showing no guilt for antisocial behavior.
4. Robertson and Bowlby (1952) believe that short term separation from an attachment figure leads to distress (i.e. the PDD model).
They found three progressive stages of distress:

- Protest: the child cries, screams and protests angrily when separated from the attachment figure.
- Despair: the child's angry protest subsides; they appear calmer but remain withdrawn and upset.
- Detachment: if the separation continues, the child begins to engage with other people again, but may reject the caregiver on their return.
5. The child’s attachment relationship with their primary caregiver leads to the development of an internal working model (Bowlby, 1969).
This internal working model is a cognitive framework comprising mental representations for understanding the world, self and others. A person’s interaction with others is guided by memories and expectations from their internal model which influence and help evaluate their contact with others (Bretherton, & Munholland, 1999).
Around the age of three these seem to become part of a child’s personality and thus affects their understanding of the world and future interactions with others (Schore, 2000). According to Bowlby (1969) the primary caregiver acts as a prototype for future relationships via the internal working model.
There are three main features of the internal working model: (1) a model of others as being trustworthy, (2) a model of the self as valuable, and (3) a model of the self as effective when interacting with others.
It is this mental representation that guides future social and emotional behavior as the child’s internal working model guides their responsiveness to others in general.
John Bowlby believed that the relationship between the infant and its mother during the first five years of life was most crucial to socialization. He believed that disruption of this primary relationship could lead to a higher incidence of juvenile delinquency, emotional difficulties and antisocial behavior.
To test his hypothesis, he studied 44 adolescent juvenile delinquents in a child guidance clinic.
Aim: To investigate the long-term effects of maternal deprivation on people in order to see whether delinquents have suffered deprivation. According to the Maternal Deprivation Hypothesis, breaking the maternal bond with the child during the early stages of its life is likely to have serious effects on its intellectual, social and emotional development.
Procedure: Between 1936 and 1939 an opportunity sample of 88 children was selected from the clinic where Bowlby worked. Of these, 44 were juvenile thieves and had been referred to him because of their stealing. Bowlby selected another group of 44 children to act as 'controls' (individuals referred to the clinic because of emotional problems, but who had not committed any crimes).
On arrival at the clinic, each child had their IQ tested by a psychologist, who also assessed the child's emotional attitudes towards the tests. At the same time a social worker interviewed a parent to record details of the child's early life (e.g. periods of separation). The psychologist and social worker made separate reports. A psychiatrist (Bowlby) then conducted an initial interview with the child and accompanying parent (e.g. diagnosing affectionless psychopathy).
Findings: Bowlby found that 14 of the young thieves (32%) showed 'affectionless psychopathy' (they were not able to care about or feel affection for others); none of the control group were affectionless psychopaths. Of the 'affectionless psychopaths' in the thief group, 86% had experienced a long period of maternal separation before the age of 5 years (they had spent most of their early years in residential homes or hospitals and were not often visited by their families).
Only 17% of the thieves not diagnosed as affectionless psychopaths had experienced maternal separation. Only 2 of the control group had experienced a prolonged separation in their first 5 years.
Conclusion: Bowlby concluded that maternal separation/deprivation in the child's early life caused permanent emotional damage. He diagnosed this as a condition and called it affectionless psychopathy. According to Bowlby, this condition involves a lack of emotional development, characterised by a lack of concern for others, lack of guilt and an inability to form meaningful and lasting relationships.
Evaluation: The supporting evidence that Bowlby (1944) provided was in the form of clinical interviews of, and retrospective data on, those who had and had not been separated from their primary caregiver.
This meant that Bowlby was asking the participants to look back and recall separations. These memories may not be accurate. Bowlby also designed and conducted the experiment himself, which may have led to experimenter bias, particularly as he was responsible for making the diagnoses of affectionless psychopathy.
Another criticism of the 44 thieves study was that it concluded affectionless psychopathy was caused by maternal deprivation. This is correlational data and as such only shows a relationship between these two variables. Indeed, other external variables, such as family conflict, parental income, education etc. may have affected the behavior of the 44 thieves, and not, as concluded, the disruption of the attachment bond. Thus, as Rutter (1972) pointed out, Bowlby’s conclusions were flawed, mixing up cause and effect with correlation.
The study was vulnerable to researcher bias. Bowlby conducted the psychiatric assessments himself and made the diagnoses of affectionless psychopathy. He knew whether the children were in the 'theft group' or the control group. Consequently, his findings may have been unconsciously influenced by his own expectations. This potentially undermines their validity.
Bifulco et al. (1992) support the maternal deprivation hypothesis. They studied 250 women who had lost their mothers, through separation or death, before they were 17. They found that loss of the mother through separation or death doubles the risk of depressive and anxiety disorders in adult women. The rate of depression was highest in women whose mothers had died before the child reached the age of 6.
Bowlby’s (1944, 1956) ideas had a great influence on the way researchers thought about attachment and much of the discussion of his theory has focused on his belief in monotropy.
Although Bowlby may not dispute that young children form multiple attachments, he still contends that the attachment to the mother is unique in that it is the first to appear and remains the strongest of all. However, on both of these counts, the evidence seems to suggest otherwise.
Critics such as Rutter have also accused Bowlby of not distinguishing between deprivation and privation: the complete lack of an attachment bond, rather than its loss. Rutter stresses that the quality of the attachment bond is the most important factor, rather than just deprivation in the critical period.
Bowlby used the term maternal deprivation to refer to the separation or loss of the mother as well as the failure to develop an attachment. Are the effects of maternal deprivation as dire as Bowlby suggested?
Michael Rutter (1972) wrote a book called Maternal Deprivation Re-assessed. In the book, he suggested that Bowlby may have oversimplified the concept of maternal deprivation. Bowlby used the term 'maternal deprivation' to refer to separation from an attached figure, loss of an attached figure and failure to develop an attachment to any figure. These each have different effects, argued Rutter. In particular Rutter distinguished between privation and deprivation.
From his survey of research on privation, Rutter proposed that it is likely to lead initially to clinging, dependent behavior, attention-seeking and indiscriminate friendliness, then as the child matures, an inability to keep rules, form lasting relationships, or feel guilt. He also found evidence of anti-social behavior, affectionless psychopathy, and disorders of language, intellectual development and physical growth.
Rutter argues that these problems are not due solely to the lack of attachment to a mother figure, as Bowlby claimed, but to factors such as the lack of intellectual stimulation and social experiences which attachments normally provide. In addition, such problems can be overcome later in the child's development, with the right kind of care.
Many of the 44 thieves in Bowlby’s study had been moved around a lot during childhood, and had probably never formed an attachment. This suggested that they were suffering from privation, rather than deprivation, which Rutter suggested was far more deleterious to the children. This led to a very important study on the long term effects of privation, carried out by Hodges and Tizard (1989).
Bowlby's maternal deprivation hypothesis is, however, supported by Harlow's (1958) research with monkeys. He showed that monkeys reared in isolation from their mothers suffered emotional and social problems in later life. The monkeys never formed an attachment (privation) and as a result grew up to be aggressive and had problems interacting with other monkeys.
Konrad Lorenz (1935) supports Bowlby's maternal deprivation hypothesis as the attachment process of imprinting is an innate process.
Bowlby assumed that physical separation on its own could lead to deprivation, but Rutter (1972) argues that it is the disruption of the attachment rather than the physical separation that matters. This is supported by Radke-Yarrow et al. (1985), who found that 52% of children whose mothers suffered from depression were insecurely attached. This figure rose to 80% when this occurred in a context of poverty (Lyons-Ruth, 1988). This shows the influence of social factors. Bowlby also did not take into account the quality of the substitute care: deprivation can be avoided if there is good emotional care after separation.
There are implications arising from Bowlby's work. As he believed the mother to be the central caregiver and that this care should be given on a continuous basis, an obvious implication is that mothers should not go out to work. There have been many attacks on this claim.
Bifulco, A., Harris, T., & Brown, G. W. (1992). Mourning or early inadequate care? Reexamining the relationship of maternal loss in childhood with adult depression and anxiety. Development and Psychopathology, 4(03), 433-449.
Bowlby, J. (1944). Forty-four juvenile thieves: Their characters and home life. International Journal of Psychoanalysis, 25(19-52), 107-127.
Bowlby, J. (1951). Maternal care and mental health. World Health Organization Monograph.
Bowlby, J. (1952). Maternal care and mental health. Journal of Consulting Psychology, 16(3), 232.
Bowlby, J. (1953). Child care and the growth of love. London: Penguin Books.
Bowlby, J. (1956). Mother-child separation. Mental Health and Infant Development, 1, 117-122.
Bowlby, J. (1957). Symposium on the contribution of current theories to an understanding of child development. British Journal of Medical Psychology, 30(4), 230-240.
Bowlby, J. (1969). Attachment and loss: Vol. 1. Attachment. New York: Basic Books.
Bowlby, J. (1980). Loss: Sadness & depression. Attachment and loss (vol. 3); (International psycho-analytical library no.109). London: Hogarth Press.
Bowlby, J. (1988). Attachment, communication, and the therapeutic process. A secure base: Parent-child attachment and healthy human development, 137-157.
Bowlby, J., and Robertson, J. (1952). A two-year-old goes to hospital. Proceedings of the Royal Society of Medicine, 46, 425–427.
Bretherton, I., & Munholland, K.A. (1999). Internal working models revisited. In J. Cassidy & P.R. Shaver (Eds.), Handbook of attachment: Theory, research, and clinical applications (pp. 89– 111). New York: Guilford Press.
Harlow, H. F., & Zimmermann, R. R. (1958). The development of affective responsiveness in infant monkeys. Proceedings of the American Philosophical Society, 102, 501-509.
Hodges, J., & Tizard, B. (1989). Social and family relationships of ex‐institutional adolescents. Journal of Child Psychology and Psychiatry, 30(1), 77-97.
Lorenz, K. (1935). Der Kumpan in der Umwelt des Vogels. Der Artgenosse als auslösendes Moment sozialer Verhaltensweisen. Journal für Ornithologie, 83, 137-215.
Lyons-Ruth, K., Zoll, D., Connell, D., & Grunebaum, H. E. (1986). The depressed mother and her one-year-old infant: Environment, interaction, attachment, and infant development. In E. Tronick & T. Field (Eds.), Maternal depression and infant disturbance (pp. 61-82). San Francisco: Jossey-Bass.
Radke-Yarrow, M., Cummings, E. M., Kuczynski, L., & Chapman, M. (1985). Patterns of attachment in two-and three-year-olds in normal families and families with parental depression. Child development, 884-893.
Rutter, M. (1972). Maternal deprivation reassessed. Harmondsworth: Penguin.
Rutter, M. (1979). Maternal deprivation, 1972-1978: New findings, new concepts, new approaches. Child Development, 283-305.
Rutter, M. (1981). Stress, coping and development: Some issues and some questions. Journal of Child Psychology and Psychiatry, 22(4), 323-356.
Schaffer, H. R. & Emerson, P. E. (1964). The development of social attachments in infancy. Monographs of the Society for Research in Child Development, 29 (3), serial number 94.
Schore, A. N. (2000). Attachment and the regulation of the right brain. Attachment & Human Development, 2(1), 23-47.
Tavecchio, L. W., & Van Ijzendoorn, M. H. (Eds.). (1987). Attachment in social networks: Contributions to the Bowlby-Ainsworth attachment theory. Elsevier.
Weisner, T. S., & Gallimore, R. (1977). My brother's keeper: Child and sibling caretaking. Current Anthropology, 18(2), 169.
McLeod, S. A. (2007). Bowlby's Attachment Theory. Retrieved from www.simplypsychology.org/bowlby.html
Some risk factors you cannot control; these are called uncontrollable risk factors and include:
Age- the older you get, the more likely you are to develop or progress heart disease.
Gender- Earlier in life, men are at a greater risk than women for having a heart attack or stroke. After menopause, women’s risk rises greatly.
Race/Ethnicity- African Americans typically have higher blood pressure than Caucasians which can put them at a greater risk for heart disease.
Previous Heart Attack or Stroke- If you have already had a heart attack or stroke, you are at a higher risk for having a second heart attack or stroke.
Family History- You may be at a higher risk for a heart attack or stroke if your brother, father or grandfather had one before the age of 55 or if your sister, mother or grandmother had one before the age of 65.
|Like Vikings, our children are explorers in wood shop.|
And so the willingness of those in a culture, or in a tribe, or in a classroom to take risks in learning and creative acts is also a genetic trait that is encouraged through the migration of folks into new areas where they mate with those who may be like-minded in their creative inclinations. Eureka Springs can serve as an example. We are now number 8 in the yearly poll of cities that serve as arts destinations, ahead of Taos, New Mexico and just two cities behind Santa Fe. So folks come here to buy art, but also with the intention of making art, and it makes our town an ever more creative place to hang out.
Here, folks have gathered at some risk to do art, and the inclination to take on tasks that offer creative exploration is a form of genetically reinforced creativity, just as they describe in National Geographic with regard to cane toads and explorers. And our students are particularly creative.
Our 1st, 2nd and third grade students are studying the Vikings, their culture, artifacts, conquests and explorations. And so in honor of the Vikings, I offered "creative day" to these students. I allowed them to make whatever they wanted. And creative day is always a trial and adventure for me, too.
Today my 7th, 8th and 9th grade students worked on their 9 legged bench.
Make, fix and create...
The chimney effect is the natural phenomenon that occurs when the density difference between a hot and a cold air column creates a natural flow through a chimney. Learn more about why this works.
You can see the tall flue gas stacks in all the power plants. The function of the stack is to disperse at a great height the hot gases, emissions and particulates that leave the boiler. At these heights the pollutants disperse in a very large area so that ground level concentrations are within permissible levels not harmful for humans or vegetation.
Chimneys were in use from the times of the Roman Empire. Chimneys and fireplaces are a common household item in countries with a cold climate. It serves the dual function of removing the hot gases out of the house at the same time bringing in fresh air to the fireplace for combustion.
Flue gas stacks higher than 250 meters are common nowadays for larger power plants. The tallest stack currently is 420 meters in Kazakhstan. Many factors like terrain, dispersion pattern, plume heights, adjacent tall structures, and population density determine the height of the stack.
There is a natural phenomenon associated with the chimney or the flue gas stack: the natural flow of air up the chimney. This is called the 'chimney effect' or the 'stack effect'. This effect is found not only in chimneys but also in tall buildings.
What is the Chimney (or Stack) Effect?
The gas temperature inside the flue gas stack is around 140° C. The outside ambient air temperature is around, say, 30° C. Consider these as two air columns connected at the bottom. The high-density, heavier cold air will always push the low-density, lighter hot gases up. This causes the natural flow of gases up the flue gas stack. The pressure difference that pushes the hot gas up the flue gas stack or the chimney is the 'chimney or stack effect'.
You can feel the effect if you stand near the doors or openings at the bottom of a stack or at open door of an elevator shaft. Depending on the height it can be gentle draught or heavy suction. This is the chimney or stack effect.
In numerical terms this can be represented as:

Chimney effect = 353 x Chimney height x [1 / Ambient temperature - 1 / Stack gas temperature]

where the chimney effect is in mm of water column, the chimney height is in metres, and both temperatures are absolute, in kelvin (K). The ambient term comes first so that a stack hotter than its surroundings gives a positive draught.
For a thermal power plant with a stack height of 250 meters the effect could be around 77 mm of water column. In thermal power plants the stack effect aids the Induced draft fans in removing the hot flue gases from the furnace and dispersing them at the top of the stack.
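As a quick check of the formula, here is a small Python sketch (the function name is my own) that reproduces the 250-metre example:

```python
def stack_effect_mm_h2o(height_m, stack_temp_c, ambient_temp_c):
    """Natural draught ('chimney effect') in mm of water column.

    Implements the formula from the article:
        effect = 353 * H * (1/T_ambient - 1/T_stack)
    with H in metres and both temperatures converted to kelvin.
    """
    t_stack = stack_temp_c + 273.15    # flue gas temperature, K
    t_ambient = ambient_temp_c + 273.15  # outside air temperature, K
    return 353.0 * height_m * (1.0 / t_ambient - 1.0 / t_stack)

# A 250 m stack with 140 deg C flue gas and 30 deg C ambient air:
draught = stack_effect_mm_h2o(250, 140, 30)
print(round(draught, 1))  # ~77.5, matching the "around 77 mm" figure
```

Note that the draught grows linearly with stack height but only with the difference of the reciprocal temperatures, which is why very tall stacks are the main lever for increasing natural draught.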
In tall buildings this effect can create problems for the air-conditioning system. In deserts, where the outside temperature is higher than the cool interior of a building, the effect is reversed.
Shellfish can cause severe allergic reactions (such as anaphylaxis). Therefore it is advised that people with shellfish allergy have quick access to an epinephrine auto-injector (such as an EpiPen®, Auvi-Q™ or Adrenaclick®) at all times. This allergy usually is lifelong. Approximately 60 percent of people with shellfish allergy experienced their first allergic reaction as adults. Shrimp, crab and lobster cause most shellfish allergies. Finned fish and shellfish do not come from related families of foods, so being allergic to one does not necessarily mean that you must avoid both. To prevent a reaction, strict avoidance of shellfish and shellfish products is essential. Always read ingredient labels to identify shellfish ingredients.
There are two kinds of shellfish: crustacea (such as shrimp, crab and lobster) and mollusks (such as clams, mussels, oysters and scallops). Reactions to crustacean shellfish tend to be particularly severe. If you are allergic to one group of shellfish, you might be able to eat some varieties from the other group. However, since most people who are allergic to one kind of shellfish usually are allergic to other types, allergists usually advise their patients to avoid all varieties. If you have been diagnosed with a shellfish allergy, do not eat any shellfish without first consulting your doctor.
To prevent a reaction, strict avoidance of shellfish and shellfish products is essential. Always read ingredient labels to identify shellfish ingredients. In addition, avoid touching shellfish, going to the fish market, and being in an area where shellfish are being cooked (the protein in the steam may present a risk).
The federal Food Allergen Labeling and Consumer Protection Act (FALCPA) requires that all packaged food products sold in the U.S. that contain shellfish as an ingredient must list the specific shellfish used on the label.
Read all product labels carefully before purchasing and consuming any item. Ingredients in packaged food products may change without warning, so check ingredient statements carefully every time you shop. If you have questions, call the manufacturer.
As of this time, the use of advisory labels (such as “May Contain”) on packaged foods is voluntary, and there are no guidelines for their use. However, the FDA has begun to develop a long-term strategy to help manufacturers use these statements in a clear and consistent manner, so that food-allergic consumers and their caregivers can be informed as to the potential presence of major allergens.
Avoid foods that contain shellfish or any of these ingredients:
- Crawfish (crawdad, crayfish, ecrevisse)
- Lobster (langouste, langoustine, Moreton bay bugs, scampi, tomalley)
- Shrimp (crevette, scampi)
It is important to note that mollusks are not considered major allergens under FALCPA and may not be fully disclosed on a product label.
Your doctor may advise you to avoid mollusks or these ingredients:
- Clams (cherrystone, geoduck, littleneck, pismo, quahog)
- Limpet (lapas, opihi)
- Sea cucumber
- Sea urchin
- Snails (escargot)
- Squid (calamari)
- Whelk (Turban shell)
Shellfish are sometimes found in the following:
- Cuttlefish ink
- Fish stock
- Seafood flavoring (e.g., crab or clam extract)
Keep the following in mind:
- If you have seafood allergy, avoid seafood restaurants. Even if you order a non-seafood item off of the menu, cross-contact is possible.
- Asian restaurants often serve dishes that use fish sauce as a flavoring base. Exercise caution or avoid eating there altogether.
- Shellfish protein can become airborne in the steam released during cooking and may be a risk. Stay away from cooking areas.
- Carrageenan, or "Irish moss,” is not shellfish. It is a red marine algae that is used in a wide variety of foods, particularly dairy foods, as an emulsifier, stabilizer, and thickener. It appears safe for most individuals with food allergies.
- Allergy to iodine, allergy to radiocontrast material (used in some radiographic procedures), and to shellfish are not related. If you have an allergy to shellfish, you do not need to worry about cross reactions with radiocontrast material or iodine.
Thirteen-year-old Raihana Ahmadi is at the heart of sweeping changes to education in Afghanistan.
When the ninth-grade biology student points to a plastic model of the human heart, she is not only sharing a lesson with her female classmates, but also demonstrating the importance of a quality education, especially for girls, in this country.
Watching the class at Kabul’s Sorya School, principal Naseema Saberi says: “This is my dream in life. I have always wanted to educate and empower the young women of Afghanistan, so they can serve the younger generation of this country and make it a better place.”
Saberi says her wish is coming true, thanks to the World Bank’s Education Quality Improvement Program, or EQUIP, co-financed by the Afghanistan Reconstruction Trust Fund (ARTF). The objective of the program is to increase access to education, particularly for girls, through school grants, teacher training, curriculum development, and community involvement.
Through EQUIP, more than 1,600 schools are being constructed or rehabilitated in Afghanistan. Girls’ enrollment has increased to 2.7 million from less than 200,000 in 2002, and boys’ attendance to about 4.4 million from less than a million.
Friends of education
Now a bustling brick complex in the capital city’s west end, the Sorya School offers classes to 6,000 students, both boys and girls, who attend in two daily shifts. But not long ago, the building, which stood for almost 50 years, was simply a mass of rubble. Civil war and subsequent Taliban rule in the 1990s destroyed the country’s education system, and strictly forbade girls’ attendance.
“If you had come here back then, you would have seen nothing, not a chair to sit on, no books, no sign of anything,” recalls Saberi.
During this dark time, Saberi was so determined to continue teaching that she ran a “secret school” for girls. Today, some of these young women are university professors and teachers in their own right, she says.
“But if they had been caught, they would have been killed,” observes Abdul Ghafar Moarefdost, who currently has six granddaughters at Saberi’s school. The literal translation of Moarefdost’s name is “friend of education.”
“No one can ask me if education is important,” says Moarefdost, laughing. “It is essential in our lives that not only boys, but also girls, go to school.” Today, he is a teacher of sports medicine at a Kabul university, but he values time spent on the Sorya School’s advisory “shura” of 16 community members including parents, elders, and two students. They meet weekly to discuss the cases of girls still left behind, and other issues facing the school.
Psychol Sci. 2010 Sep 3. [Epub ahead of print]
Overheard Cell-Phone Conversations: When Less Speech Is More Distracting.
Emberson LL, Lupyan G, Goldstein MH, Spivey MJ.
Psychology Department, Cornell University.
Why are people more irritated by nearby cell-phone conversations than by conversations between two people who are physically present? Overhearing someone on a cell phone means hearing only half of a conversation: a "halfalogue." We show that merely overhearing a halfalogue results in decreased performance on cognitive tasks designed to reflect the attentional demands of daily activities. By contrast, overhearing both sides of a cell-phone conversation or a monologue does not result in decreased performance. This may be because the content of a halfalogue is less predictable than both sides of a conversation. In a second experiment, we controlled for differences in acoustic factors between these types of overheard speech, establishing that it is the unpredictable informational content of halfalogues that results in distraction. Thus, we provide a cognitive explanation for why overheard cell-phone conversations are especially irritating: Less-predictable speech results in more distraction for a listener engaged in other tasks.
By Betsy McKay
When it comes to the flu, the government shutdown was particularly ill-timed. Its flu trackers were furloughed just as the virus’s traditional season was getting underway.
That means no "FluView," a weekly report that public health officials and doctors rely on to track whether flu is circulating heavily. Published by the Centers for Disease Control and Prevention, it shows the number of flu cases, hospitalizations and deaths nationwide and is based on data the CDC collects and analyzes from more than 2,700 outpatient health-care providers around the nation, as well as from labs, hospitals, and other sources.
Now, a private company is stepping in. “We were uncomfortable with the prospect of no national surveillance,” says Josh Gray, vice president of athenaResearch, a unit of athenahealth Inc. in Watertown, Mass. “It’s going dark at a critical period when flu is starting to ramp up.”
So athenahealth, a provider of electronic health records and other technologies, is putting out weekly flu reports of its own, based on data from about 600,000 patient visits a week to a network of about 15,000 primary care providers in 49 states, Mr. Gray says. Using cloud-based software, “we can track what’s going on in their practices in near real time,” he says.
The reports are released on Wednesdays, with the latest one here.
The news so far is good. Only about 4.4 out of 10,000 patients who visited doctors were diagnosed with the flu, meaning no outbreak yet. And the proportion of patients getting their flu shots is climbing.
The hope, Mr. Gray says, is both to identify potential flu outbreaks early and to encourage people to get their flu shots, since rising numbers of flu cases tend to drive people to get their vaccinations.
Athenahealth uses a different data set than the CDC; it relies on flu diagnoses recorded by providers in claims submitted to insurers, while the CDC uses clinical observations reported by providers. But the company says its flu data of previous years follows similar patterns to those of the CDC data.
“The CDC is the gold standard for flu surveillance,” Mr. Gray says. Still, “with the emergence of cloud we think that has opened up a lot of opportunities for the private sector to help with disease surveillance.”
The CDC declined to comment on the athenahealth data, but welcomed the effort. Other groups have conducted flu surveillance before, with some publishing their results, said spokeswoman Barbara Reynolds. “We’re always interested in what the outcomes are,” she said.
Officials in charge of the Soviet manned spaceflight program (Ustinov, Keldysh, Korolev, Rudnev and Moskalenko) sent a letter to the Central Committee on 10 November 1960 asking for permission to resume the flight testing of the Vostok spacecraft (4). The first Vostok-3A spacecraft, i.e. the version intended for manned flight was indeed launched from Baikonur carrying the dogs Pchelka and Mushka at 0726 UT on 1 December 1960. The spacecraft weighed 4563 kg and entered an orbit at i = 64.97o and 172-238 km. This was the type of orbit intended to be used for manned flights in order to permit natural decay after ten days in case the retrorocket malfunctioned. Soviet media announced that the spacecraft carried a transmitter "working in the telegraphic transmitting regime of varying duration" on 19.995 MHz (5). Soviet ground stations had twelve telemetry reception and orbit determination sessions during the flight (14).
Now that the Cold War is over we know that U.S. Intelligence used the Radio Research Laboratories of the Japanese Ministry of Posts and Telecommunications to monitor Soviet manned flights by listening to 19.995 MHz. The Japanese did indeed pick up Sputnik 6 on the launch revolution. But first there was a strong jammer starting at 0727 UT, i.e. a minute after launch. As the jammer was switched off the Japanese picked up telemetry on 19.995 MHz as the spacecraft reached its first northern apex at 65 N and 125 E. In (15) the authors note with satisfaction that they picked up the signal before the Soviet announcement of the launch and that they noted the Doppler shift as a strong indication that they were listening to a spacecraft.
The monitoring station of Sweden's Telecommunications Agency near Enköping did indeed pick up the beacon signals on 19.995 MHz at 1023 UT, three hours after launch (6), and then further until 1937 UT on 1 December, when they stopped abruptly. They were heard again for a few minutes starting at 2238 UT (7). Wire services also reported (1) that "undecipherable signals were picked up by the U.S. Army Signal Corps on a frequency of 19.995 Megacycles". This must have been done at Fort Monmouth, New Jersey, where signals on 19.995 MHz were picked up at 0816 a.m. (1316 UT) on December 1, according to (2).
Soviet media announced that because the "descent went along an unplanned trajectory the satellite spaceship ceased to exist when entering the dense layers of the atmosphere" (5). With hindsight we may say that this statement was entirely true, but gave the impression that the spacecraft had been destroyed because of natural causes - overheating and not a deliberate explosion. In (8), of course, the truth is stated: "The programme of the space flight was executed, but due to a failure in the control system of the TDU the descent was in an unforeseen area and the SA (landing apparatus) had to be blown up."
Interestingly, Korolev himself in (14) states that "after the separation [of the descent appartus] one more session of telemetry information was received from the instrument compartment of the ship before it re-entered the dense layers of the atmosphere".
If retro-fire occurred at 0822 UT and the craft continued for 1.5 revolutions, i.e. 2 hours 15 minutes, the re-entry occurred at about 1035 UT over the Pacific. By that time the descent apparatus had separated from the instrument compartment. Interestingly, the monitoring station of Sweden's Telecommunications Agency near Enköping reported that it had picked up last signals from the spacecraft on 19.995 MHz, fading out at 1015-1017 UT (13) near the northern apex of the orbit. This could be part of that extra communications session that Korolev mentions! The fact that this communications session occurred at all was proof that the de-orbit had failed.
In (15) the authors report that signals on 19.995 MHz ceased at 1020.11 UT when the spacecraft was at 62 N and 105 E.
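The "1.5 revolutions, i.e. 2 hours 15 minutes" figure can be sanity-checked from the orbit quoted earlier (172 x 238 km). The sketch below assumes a mean Earth radius of 6371 km and the standard gravitational parameter for Earth; both constants are my assumptions, not values from the article.

```python
import math

MU = 398600.0        # Earth's gravitational parameter, km^3/s^2 (assumed)
R_EARTH = 6371.0     # mean Earth radius, km (assumed)

# Orbit of Korabl-Sputnik-3: 172 x 238 km altitude
a = R_EARTH + (172.0 + 238.0) / 2.0            # semi-major axis, km
period_s = 2.0 * math.pi * math.sqrt(a**3 / MU)  # Kepler's third law
print(round(period_s / 60.0, 1))               # ~88.4 minutes per revolution

# 1.5 revolutions after retro-fire at 0822 UT:
elapsed_min = 1.5 * period_s / 60.0
print(round(elapsed_min))                      # ~133 minutes, i.e. ~1035 UT
```

The result, roughly 2 hours 13 minutes, agrees well with the article's estimate that re-entry occurred at about 1035 UT.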
The station at Enköping did not pick up telemetry during the closest approach to Sweden on the orbit on which the retro-rocket was fired. However, the station picked up the 19.995 MHz beacon twice during the two passes closest to Sweden before the retro-fire orbit, i.e. at 0540 UT and 0710 UT (3). Intriguingly, the station reported that at 1100 UT a strong radio jammer started transmitting on the beacon frequency and continued to do so until 1142 UT - as if to hide the fate of the spacecraft? (13). It is interesting that jamming also took place promptly after launch. (All three receptions made at Enköping in the morning of 2 December 1960 are marked by light red ellipses in the map below.)
The way the destruct system worked was that if the re-entry g-loads did not occur within a prescribed time, the destruct system would be activated. Nominally, the descent apparatus would have been separated from the instrument compartment promptly after retro-fire. If the descent apparatus had been destroyed by on-board explosives, the instrument compartment was probably still intact.
Why did the retro-fire impulse not provide sufficient delta-v to bring the spacecraft down? Available sources are extremely vague on this matter. The infrared vertical sensor was not used on this flight to orient the ship for retro-fire. Instead the sun orientation system was used. In a separate article I have examined the orientation of the sun relative to the orbital path of all Vostok and Vostok test flights. The really intriguing thing about the flight of Korabl-Sputnik-3 is that the sun was almost perpendicular to the orbital path at retro-fire! So, the TDU-1 would not have changed the speed of the craft at all, merely deflected it a little bit! So, maybe the sketch of how the solar orientation system worked that is shown in the article "Vostok retro-fire attitude" does not show how this system worked, or the mission planners for this mission had made a horrible mistake and miscalculated the solar orientation for establishing the correct retro-fire attitude! Opinions are invited as to how this mystery can be solved!
Kamanin's diaries start on 17 December 1960, regrettably just after the launch of Sputnik-6. Kamanin speaks about a "spherical object" to be launched in the second half of December 1960, obviously referring to the unsuccessful launch on 22 December. Kamanin writes that the ship will contain the dogs Zhemchuzhnaya and Zhulka and "all the small items like the 3rd ship".
The "sphere " was indeed launched from Baikonur at 0745 UT on 22 December 1960 and did not reach orbit. The Blok-E stage of the launch vehicle shut down at T+432 sec. The craft reached an apogee of 214 km, re-entered the atmosphere and landed 3500 km downrange, 70 km south of Tura village in the independent district of Evenkiyskiy in the Krasnoyarsk region. Nevertheless it landed safely and the dogs were still alive. Bearings came from the object from Krug stations (powerful direction-finders of of the air force) around Tashkent. Also Moscow and Krasnodar picked transmissions from the object's "Peleng" P-37 beacon transmitter on the frequency 10.003 MHz (interestingly, half the value for the AM voice frequency of the Vostoks 20.006 MHz). The beacon signals on 10.003 MHz were a carrier that was "ON" for 0.5 seconds and "OFF" for 1.5 seconds..(9)
Actually, the landing capsule should not have survived the launch vehicle failure, because there was a system to blow up the capsule in the event of an unscheduled or unintentional recovery - just as had been the case for Sputnik-6 as described above. If, despite this, the capsule survived re-entry, there was another destruct timer designed to destroy the capsule 60 hours after landing. After the launch failure, mission control was surprised to receive reports from a Krug direction-finding station of beacon signals from the capsule near the place expected to have been the impact point. Now, the capsule and the dogs had to be found within 60 hours, or they would both be blown to smithereens. Twelve specialists were flown in an Il-14 from Baikonur to Krasnoyarsk, where they would transfer to a smaller plane that could land at airports nearer the capsule. Experts on the self-destruct package also flew to Krasnoyarsk from Kuybyshev and Leningrad, where they joined up with the group from Baikonur. When the team finally reached the capsule, the temperature was -45 C and more than half of the 60 hours had elapsed. The dogs, miraculously, were alive. The successful landing had been caused by lucky chance. Some cables in the umbilical between launch vehicle and spacecraft had been burned and shorted in such a way that the self-destruct system did not trigger. Also, the ejection seat (where the dogs were) had been triggered before the hatch, causing the ejection seat to jam against the hatch, which was later jettisoned. Thus, the dogs were still in the capsule. (10)
Another source, A.V. Pallo, head of search-and-rescue for Vostok within Korolev's organization, described (11) how planes found the capsule and how a helicopter air dropped a small team to disarm the destruct package and the firing circuits for the parachute of the ejection seat. The disarming team sank to their waists in snow and had to get help from planes to show them the direction of the spacecraft on the taiga. These two engineers could not hear the dogs and the little window on the ejection seat was covered with hoarfrost. Despite knocking on the walls of the ejection seat cabin there was no sign of life. All personnel had to return to Tura by helicopter because the short daylight period was over. An anxious Sergei Korolev called Pallo on HF radio to find out about the dogs. The next morning the crew, including a veterinarian, flew back to the capsule. As they removed the capsule they could hear the dogs bark. The veterinarian wrapped the dogs in his sheepskin coat and flew with them back to Tura and then on to Moscow.
Interestingly, Kamanin and Pallo differ as to the exact location of the landing point. Kamanin writes that is was "70 km south of Tura", while Pallo says "60 km west of Tura". Both sites have been marked in the map above.
In 2010, a Russian web site gave the landing co-ordinates as 63° 42' N 99° 50' E. (Thanks to Alexander Koval for sending the link.)
Main Entry: ¹raise
Inflected Form(s): raised; rais·ing
1 : to cause to rise <raise a window> <raise dust>
2 a : AWAKE, AROUSE <enough noise to raise the dead> b : to recall from or as if from death c : to stir up : INCITE <raise a rebellion>
3 a : to set upright by lifting or building <raise a monument> b : to lift up <raise your hand> c : to place higher especially in rank : PROMOTE <was raised to captain> d : HEIGHTEN, INVIGORATE <raise the spirits>
4 : COLLECT <raise funds>
5 a : to look after the growth and development of : GROW <raise hogs for market> <raise corn> b : BRING UP, REAR <raise a child> <was raised in the city>
6 : BRING ABOUT <raised a laugh>
7 : to bring to notice <raise an issue>
8 a : to increase the strength of <don't raise your voice> b : to increase the amount of <raise the rent> c : to increase a bid or bet
9 : to make light and airy <raise dough>
10 : to multiply a quantity by itself a specified number of times <raise two to the fourth power>
11 : to bring into sight on the horizon by approaching <raised land at last>
12 : to cause to form on the skin <raise a blister>
- rais·er noun
- raise eyebrows : to cause surprise or mild disapproval <raised eyebrows by turning down the award>
- raise the bar : to set a higher standard <new software that raises the bar for competitors>
synonyms RAISE, LIFT, HEAVE, HOIST mean to move from a lower to a higher place or position. RAISE often suggests a suitable or intended higher position to which something is brought <raise the flag to the top of the pole>. LIFT suggests a bringing up especially from the ground and may also suggest the need for exertion in order to pick up something heavy <lift some boxes onto the table>. HEAVE suggests lifting with great effort or strain <heave those bales of hay onto the truck>. HOIST often suggests the use of pulleys to increase the force applied in raising something very heavy <hoist the crates onto the ship>.
Diaphyseal bone does not exist without an outer cover of cortex in its natural state. Thus it is intuitively physiologic to seal the end of the bone following amputation, and techniques have been refined for performing an osteo-periosteal bone cap over the end of diaphyseal bone. However, even without a surgical osteo-periosteal flap, the end of the bone naturally heals by formation of bone callus and fibrous tissue. When a periosteum cuff is available it may be sutured over the end of the bone, but excessive use of periosteal strips can cause problems. As occasionally seen in traumatic amputations, or when the periosteum is circumferentially peeled off the bone before sectioning, the residual periosteal strips can slowly form irregular bone spikes. These spikes or bone spurs can cause painful pressure points for the amputee. The surgeon should be aware of this potential problem in order to minimize its occurrence.
The standard protocols for skin closure in any other surgery also apply to closing the wound following an amputation. Dead space should be eliminated and drain systems used when necessary. When closing the wound, opposing tissue layers are sewn under physiologic tension, and care must be taken so that the final closure is neither too light nor too loose. As with all surgery, careful judgement is necessary in the selection of suture and closure technique, and the amputation surgeon must be aware of the options and differences between various techniques. Many patients have only marginal blood supply and the utmost surgical care and technique is required to maximize their wound healing potential.
If primary closure of the wound is not advisable, amputation should be carried out in two or more stages. An initial amputation may be done to provide adequate drainage of infection. This is the recommended course for a preliminary open ankle disarticulation involving a septic patient with a severely infected, non-salvageable diabetic foot. Patients presenting with such a scenario are frequently febrile and bacteremic. The initial open amputation helps to control the infection, eliminate the bacteremia and provide a safer wound environment for a definitive amputation at a later date. Leaving the bone long and avoiding transecting the muscle bellies minimizes the post-operative swelling and edema that often complicates mid-diaphyseal open amputations. When left long, the bone can act as an internal splint, protecting the remaining soft tissue. This will facilitate the later definitive amputation.
Oftentimes a contaminated, open amputation is the result of the original traumatic injury. Contaminated amputations can be treated in a similar fashion to other open amputations. As always, first and foremost the amputation is performed with consideration as to how it will eventually be shaped and closed. Often in trauma cases there is an intermediate zone of tissue. This zone usually requires time to either recover or demarcate, and multiple secondary surgeries can be required before it becomes evident whether the involved tissue is viable or must be removed.
Open amputations are not guillotine amputations. In the past the term ‘guillotine amputation’ was commonly used, but both this wording and the particular technique it describes should be avoided. In times of war, guillotine amputation was used to avoid infection. All the different tissues were transected at the same level, much as a guillotine blade would sever a limb. In a guillotine amputation, no flaps were fashioned, no muscle for myodesis was retained and no fasciocutaneous closure was planned. The post-operative plan following guillotine amputation was not to perform a secondary closure, but instead to apply skin traction, daily dressing changes and prolonged wound care. Distal healing with skin traction resulted in fragile, thin, distal coverage that has poor durability. An eventual revision would often be performed many months later. The guillotine technique is no longer recommended. Even in instances of grave trauma, open amputation with a thoughtful plan for closure is a better option.
The general principles of primary amputation also apply to revision amputation. Revision is necessary if the primary amputation fails to heal, or else if the residual limb is unsatisfactory for prosthetic fitting. Revision may also be necessary if the residual limb does not serve the patient’s functional requirements. With advances in prosthetic devices and interfaces, limbs once historically difficult to fit can now be accommodated for reasonably well. Unfortunately, many modern-day amputations are still poorly done, and these will either develop complications during the healing process, or else require revision surgery at a later date. Better education, more research, and additional refinement of surgical technique are the ways to avoid unnecessary revision amputations.
Revision amputation for pain issues is a viable option only when the etiology of the patient's pain is clearly identified. Such pain problems that are amenable to surgical treatment include redundant tissue, in-folded skin, painful scars, bone prominence, bone spurs, heterotopic ossification, failure of myodesis, distinct and identifiable symptomatic neuromas and some chronic skin conditions, such as epidermoid cysts and chronic skin breakdown or ulceration. Surgery specifically for the treatment of phantom pain, without clear pathologic etiology, has not been successful.
We like to think of ourselves as open-minded, but we're not. The problem is not that, once we've found a solution to a problem, we refuse to think of alternatives. It's that we don't even realize that there are alternatives to consider.
If you remember someone having a name "like Megan," it's going to be hard to shake the actual name out of your head. If you think that the diabolical Count is the murderer in an Agatha Christie mystery, it's going to be hard to think of anyone else committing the crime even if he's shown to be innocent. If you're pretty sure you left your keys in that one old jacket you have, you're going to keep circling back to it because it's hard to think of other places your keys might be.
That's not the congruence bias. The congruence bias so completely dominates our minds that we can't even realize there are alternative theories. We can't find the real solution because we're not looking for it. One researcher tested this by giving people lists of numbers that followed a certain rule. (The numbers given were 2, 4, and 6, and they were simply ascending numbers.) People assumed that they were even numbers, or numbers that increased by two, which is a perfectly understandable guess, and not an example of bias. The bias came when people were told that their guess regarding the rule was wrong. Instead of thinking of alternate solutions, they began re-wording their guess. The problem couldn't be with the concept they'd thought of, just the way they expressed it.
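The 2-4-6 task described above is easy to simulate. The sketch below is illustrative (the probe triples and function names are my own, not from the study); it shows why purely confirmatory tests never falsify the "even numbers increasing by two" hypothesis:

```python
def true_rule(triple):
    """The experimenter's hidden rule: any strictly ascending numbers."""
    a, b, c = triple
    return a < b < c

# A biased tester only proposes triples consistent with their own
# hypothesis ("even numbers increasing by two") -- positive tests.
hypothesis_probes = [(2, 4, 6), (8, 10, 12), (20, 22, 24)]
print(all(true_rule(p) for p in hypothesis_probes))  # True: every probe "confirms"

# A disconfirming probe -- one the tester's hypothesis says should
# fail -- is what actually exposes the broader rule.
print(true_rule((1, 2, 3)))  # True: so "even, +2" was never the rule
```

Because every positive test succeeds, the tester's hypothesis is never challenged; only a probe designed to fail under that hypothesis can reveal the real, more general rule.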
A bigger problem with the congruence bias is that testing it and asking questions often won't help. One study put forward a plausible but incorrect hypothesis about a complicated social issue, and backed it up with a list of questions and answers. Subjects looked over the hypothesis and the answers, and rated the questions and the answers that seemed to confirm the hypothesis as very important, while downplaying the questions and the answers that cast doubts on it. As for testing, people tend to set up tests that will yield a positive answer if the hypothesis is right, not ones that will disprove it if it's wrong. They also don't set up tests that might indicate the validity of alternate theories. We want to be told we're right so much, we don't even think we could be wrong.
The congruence bias might have real world repercussions. I remember posters all over town which featured a picture of a homeless alcoholic man sitting dejectedly on a street corner and said, "It doesn't always end here." The lower third of the poster featured a fresh-faced teen smoking a joint and said, "But it often starts here." Studies had found that a lot of drug users and homeless people once smoked pot. Therefore pot is a "gateway drug." No further hypotheses needed.
Spitzer Leads NASA's Great Observatories To Uncover Black Holes, Other Hidden Objects
Astronomers unveiled the deepest images from NASA's new Spitzer Space Telescope today and announced the detection of distant objects -- including several supermassive black holes -- that are nearly invisible in even the deepest images from telescopes operating at other wavelengths.
Dr. Mark Dickinson, of the National Optical Astronomy Observatory, Tucson, Ariz., principal investigator for the new observations, said, "With these ultra-deep Spitzer images, we are easily seeing objects throughout time and space, where the most distant known galaxies lie. Moreover, we see some objects that are completely invisible, but whose existence was hinted at by previous observations from the Chandra and Hubble Observatories."
Seven of the objects detected by Spitzer may be part of the long-sought population of "missing" supermassive black holes that powered the bright cores of the earliest active galaxies. The discovery completes a full accounting of all the X-ray sources seen in one of the deepest surveys of the universe ever taken.
This detective story required the combined power of NASA's three Great Observatories -- the Hubble Space Telescope, Chandra X-ray Observatory and Spitzer Space Telescope. Each observatory studies different wavelengths, from high-energy X-rays with Chandra, through visible light with Hubble, and into the infrared with Spitzer. Together, these telescopes yield far more information than any single instrument.
All three telescopes looked as far as 13 billion light-years away, toward a small patch of the southern sky containing more than 10,000 galaxies, in a coordinated project called the Great Observatories Origins Deep Survey (GOODS). Chandra images detected more than 200 X-ray sources believed to be supermassive black holes in the centers of young galaxies. Extremely hot interstellar gases falling into the black holes produce the X-rays.
Hubble's Advanced Camera for Surveys revealed optical galaxies around almost all the X-ray black holes. However, seven mysterious X-ray sources remained for which there was no optical galaxy. Dr. Anton Koekemoer of the Space Telescope Science Institute, Baltimore, Md., discovered these sources and has three intriguing possibilities for their origin: "The galaxies around these black holes may be completely hidden by thick clouds of dust absorbing all their light, or may contain very old, red stars. Or some could be the most distant black holes ever observed -- perhaps as far as 13 billion light-years." If so, all their optical light would be shifted to very long infrared wavelengths by expansion of the universe.
Scientists eagerly awaited the Spitzer images to solve this puzzle. Because Spitzer observes at infrared wavelengths up to 100 times longer than those probed by Hubble, Spitzer might be able to see the otherwise invisible objects. Indeed, the very first Spitzer images of these objects, obtained earlier this year, immediately revealed the telltale infrared glow from the host galaxies around all the missing X-ray black holes.
Three of Koekemoer's galaxies are extremely "red," or bright, in infrared. The Spitzer data, together with new images at shorter infrared wavelengths from the Very Large Telescope at the European Southern Observatory, indicate that the galaxies around these black holes could be heavily obscured by dust, and perhaps more distant than other known dust-obscured galaxies. Some of the other objects, however, have quite different colors, and are even more intriguing. "Their colors may be consistent with objects more distant than any now known," said Dickinson, who cautioned that additional Spitzer observations later this year will help confirm what kind of objects these might be.
Old Galaxies Shine in Infrared: In another study using the same Spitzer data, Dr. Haojing Yan of the California Institute of Technology, Pasadena, Calif., studied 17 unusual galaxies near the Hubble Ultra Deep Field. This small patch of sky within the GOODS area was recently the target for the deepest optical images ever taken with Hubble's Advanced Camera. The Deep Field optical images, released in March 2004, reach more than five times fainter than the GOODS Hubble data. But even with that phenomenal sensitivity, two of the 17 Spitzer-selected objects remain completely invisible in optical light, while the others are only faintly detected. Yan finds that these galaxies get steadily brighter at longer wavelengths, and seem to be more distant cousins of the so-called "Extremely Red Objects," known from previous deep surveys. Most are distant galaxies that are red because they are either old or dusty. These new Spitzer-identified objects, however, appear to lie farther away to a time when the universe was only two billion years old.
"These objects could be the remnants of the first stars -- the very first galaxies formed in the earliest stages of the universe," said Yan. Most galaxies that we see today formed their stars gradually over a long period of time. But these 17 objects seem to be "old before their time," perhaps almost as old as the universe itself at that early epoch. "If we indeed are seeing the direct, 'pure' descendants of the first stars, this would make a thrilling story," says Yan. Further Spitzer observations at longer wavelengths, planned for later this year, should help decide whether these objects are red because they are old, or because they are young and actively forming stars enveloped in dust.
Black Holes In Hiding: Using Hubble and Chandra data, Dr. Meg Urry, a GOODS astronomer at Yale University, New Haven, Conn., and her team suggest that most accreting black holes are hidden at visible wavelengths, even in the early universe. Few such hidden black holes had previously been found at such large distances, despite theoretical arguments for their existence. They were missed because their visible radiation is so dim they look like faint, ordinary galaxies. "With the new Spitzer data these very luminous, distant objects are easily visible," said Urry. "The great sensitivity of the new Spitzer infrared cameras, combined with the superb spatial resolution of Chandra, means that finding all of the black holes that are powered by infalling gas is now possible."
Urry's team is using data from the three space observatories to take a census of the supermassive black holes that formed two to five billion years after the big bang. Most of these active galactic nuclei are hidden by dust, which absorbs visible and some X-ray light but emits strongly at infrared wavelengths. "The Spitzer GOODS observations verify that large numbers -- perhaps three-quarters -- of the obscured active galactic nuclei were indeed present in the early universe. The longer-wavelength Spitzer data still to come will reveal even more shrouded active galactic nuclei," said Urry, "including some, missed by X-ray observations, which look like ultraluminous infrared galaxies."
NASA's Jet Propulsion Laboratory, Pasadena, Calif. manages the Spitzer Space Telescope, with science operations conducted at Caltech. The Space Telescope Science Institute, Baltimore, Md. is operated by the Association of Universities for Research in Astronomy, Inc. for NASA, under contract with the Goddard Space Flight Center, Greenbelt, Md. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for NASA's Office of Space Science, Washington. Northrop Grumman of Redondo Beach, Calif., formerly TRW, Inc., was the prime development contractor for the observatory. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass.
Electronic images and additional information are available at:
Building Powered By Algae Growing On Its Facade
The BIQ house will use microalgae to generate renewable energy for the building while also providing shade.
The BIQ house in Germany, which was designed for the International Building Exhibition in Hamburg, features a ‘bio-adaptive façade’ that uses microalgae to generate renewable energy and provide shade. The zero-energy house is currently under construction and will be the first real-life test for the new façade system.
Algae in the bio-reactor façades grow faster in bright sunlight to provide more shade. The bio-reactors power the building by capturing solar thermal heat and producing biomass that can be harvested. The BIQ house was designed by Splitterwerk Architects, in collaboration with Colt International, Arup, and SSC. Arup’s Europe Research Leader, Jan Wurm, said:
To use bio-chemical processes for adaptive shading is a really innovative and sustainable solution so it is great to see it being tested in a real-life scenario. As well as generating renewable energy and providing shade to keep the inside of the building cooler on sunny days, it also creates a visually interesting look that architects and building owners will like.
The building is due to be completed in March 2013, and it will allow scientists, engineers, and builders the opportunity to assess the full potential of the system as a green alternative.
Calculations by Rice theoretical physicist Boris Yakobson, Assistant Professor Feng Ding of Hong Kong Polytechnic and their collaborators showed that substrates not only of diamond but also of nickel could chemically bind the edge of a strip of graphene nanoribbon. Because the contact is so slight, the graphene walls retain nearly all of their inherent electrical or magnetic properties.

Journal of the American Chemical Society - Upright Standing Graphene Formation on Substrates
We propose integrating graphene nanoribbons (GNRs) onto a substrate in an upright position whereby they are chemically bound to the substrate at the basal edge. Extensive ab initio calculations show that both nickel (Ni)- and diamond-supported upright GNRs are feasible for synthesis and are mechanically robust. Moreover, the substrate-supported GNRs display electronic and magnetic properties nearly the same as those of free-standing GNRs. Due to the extremely small footprint of an upright GNR on a substrate, standing GNRs are ideal building blocks for synthesis of subnanometer electronic or spintronic devices. Theoretically, standing GNR-based microchips with field-effect transistor (FET) densities up to 10^13 per cm2 are achievable.
Yakobson and Ding calculated a theoretical potential of putting 100 trillion graphene wall field-effect transistors (FETs) on a square-centimeter chip.
That potential alone may make it possible to blow past the limits implied by Moore's Law -- something Yakobson once discussed with Intel founder Gordon Moore himself.
"We met in Montreal, when nano was a new kid on the block, and had a good conversation," said Yakobson, Rice's Karl F. Hasselmann Chair in Engineering and a professor of materials science and mechanical engineering and of chemistry. "Moore liked to talk about silicon wafers in terms of real estate. Following his metaphor, an upright architecture would increase the density of circuits on a chip -- like going from ranch-style houses in Texas to skyscraper condos in Hong Kong.
"This kind of strategy may help sustain Moore's Law for an extra decade," he said.
A sheet of material a fraction of a nanometer wide is pretty pliable, he said, but the laws of physics are on its side. Binding energies between carbon in the diamond matrix and carbon in graphene are maximized at the edge, and the molecules bind strongly at a 90-degree angle. Minimal energy is required for the graphene to stand upright, which is its preferred state. (Walls on a nickel substrate would be angled at about 30 degrees, the researchers found.)
Yakobson said the walls could be as close to each other as 7/10ths of a nanometer, which would maintain the independent electronic properties of individual nanoribbons. They could potentially be grown on silicon, silicon dioxide, aluminum oxide or silicon carbide.
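As a rough sanity check of the quoted figure of 10^13 FETs per square centimeter: with walls as close as 0.7 nm, that density implies each FET occupies only about 10 nm² of chip area. The along-wall FET pitch below is an assumed illustrative value, not something given in the article:

```python
# Back-of-envelope check of the quoted 1e13 FETs per square centimeter.
wall_pitch_nm = 0.7    # wall-to-wall spacing stated in the article
fet_pitch_nm = 14.0    # assumed spacing of FETs along each wall (illustrative)

nm_per_cm = 1e7
walls_per_cm = nm_per_cm / wall_pitch_nm           # ~1.4e7 walls across 1 cm
fets_per_cm_along_wall = nm_per_cm / fet_pitch_nm  # ~7.1e5 FETs per wall per cm
density = walls_per_cm * fets_per_cm_along_wall    # FETs per cm^2
print(f"{density:.1e}")   # ~1.0e+13
```

So an along-wall pitch on the order of a dozen nanometers would be enough to reach the stated density.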
The research illustrated differences between walls made of two distinct types of graphene, zigzag and armchair, so-called because of the way their edges are shaped.
Sheets of graphene are considered semimetals that have limited use in electronics because electrical current shoots straight through without resistance. However, armchair nanoribbons can become semiconductors; the thinner the ribbon, the larger the band gap, which is essential for transistors.
Zigzag nanoribbons are magnetic. Electrons at their opposing edges spin in opposite directions, a characteristic that can be controlled by an electric current; this makes them suitable for spintronic devices.
In both cases, the electronic properties of the walls can be tuned by changing their height.
The researchers also suggested nanowalls could become nanoarches by attaching opposing ends of a graphene ribbon to the substrate. Rather than lie flat on the diamond or nickel surface, the energies at play along the binding edges would naturally force the graphene strip to rise in the middle. It would essentially become a half-nanotube with its own set of potentially useful properties.
Precisely how to turn these two-dimensional building blocks into a three-dimensional device presents challenges, but the payoff is great, Yakobson said. He noted that the research lays the groundwork for subnanometer electronic technology.
5 pages of supplemental information
Proposed steps for standing GNR formation on a substrate: (i) synthesize a transition-metal pattern on a substrate using lithography; (ii) use the patterned metal as a catalyst to grow a layer of graphene on its surface; (iii) cut the top part of the patterned metal and grown graphene to the desired height by ion-beam etching; (iv) chemically etch away the transition metal to leave the predesigned standing-GNR pattern with controlled height for device applications.
Citrus: the new anti-stroke food?
Eating citrus-rich foods may help reduce your risk of the most common type of hemorrhagic stroke, according to new research from the American Academy of Neurology.
The study's findings will be presented at the American Academy of Neurology's 66th Annual Meeting in Philadelphia between April 26 and May 3.
The authors note that while hemorrhagic stroke is less common than ischemic stroke, the former is often more deadly.
New risk factor potentially identified
Researchers analyzed 65 people who had experienced an intracerebral hemorrhagic stroke, or blood vessel rupture, inside the brain. These subjects were then compared to 65 healthy individuals. All of the study participants were tested for vitamin C levels in their blood. Forty-one percent had normal levels of vitamin C, 45 percent had depleted levels of the vitamin, and 14 percent were considered deficient in vitamin C.
Results showed that stroke patients, on average, were more likely to have depleted levels of vitamin C.
"Our results show that vitamin C deficiency should be considered a risk factor for this severe type of stroke, as were high blood pressure, drinking alcohol and being overweight in our study," said study author Stéphane Vannier, M.D., with Pontchaillou University Hospital in Rennes, France. "More research is needed to explore specifically how vitamin C may help to reduce stroke risk. For example, the vitamin may regulate blood pressure."
Vannier also noted that vitamin C can have other beneficial effects, like assisting in collagen creation. Foods like oranges, peppers, papaya, broccoli, and strawberries are all rich in this key vitamin, which, according to other studies, may also help to prevent heart disease.
Source: American Academy of Neurology
1 This figure, originally designed for the seal of the Committee for the Abolition of the Slave Trade, in October, 1787, had a powerful influence in kindling anti-slavery sentiment in Great Britain, and was, with its direct and pathetic appeal, no less an inspiration and incentive to the American abolitionists. (See Clarkson's “History of the slave trade,” Chapter XX.)
3 How thoroughly the prohibition was disregarded can be judged from the fact, that although the law required the forfeiture to the Government of all slaves illegally imported after 1807, the Register of the Treasury was obliged to confess, in 1819, that of more than a hundred thousand thus introduced up to that time, not one had been forfeited. Frequent record of the capture of slavers by English vessels was made in the Genius.
What is the Best Angle For Solar Panels?
Producing electricity with photovoltaics is most efficient when the panels face the sun directly; however, there is no single, global best angle at which to install solar panels.

Calculating the optimal angle depends on the latitude of the location where they're being installed.

According to military and government OEM solutions provider Ok Solar, for a house situated at 0-15° latitude, the best angle for solar panels is 15°. For houses situated at 25-30° latitude, add 5° to the local latitude; for 30-35°, add 10°; for 35-40°, add 15°; and for houses situated at more than 40°, add 20°.
The best angle for your location will also change between summer and winter, when the sun is higher or lower in the sky in your location.
Once you've calculated the best angle for your solar panels, you've got a few options for how to achieve it.
- You can get static framing made to measure to suit the optimal angle. Going for your optimal winter angle will make your year-round production more consistent, but having a sub-optimal summer angle will mean you’re not using your panels to full capacity in the summer.
- A better option for static panels is to get frames that are manually adjustable, so you can raise the angle during winter and lower it during summer.
Solar tracker systems are a more expensive but effective option for ensuring your solar panels have the best angle all day long, every day of the year. Solar trackers constantly change the tilt of your panels to face the sun and follow it through the day.
Installing solar on your roof is useful: not only does it take advantage of unused space, you also make use of the roof's pitch to achieve an optimal angle. Panels can be directly installed to the roof, or spacers can be used to adjust the angle.
For ground-mounted panels, installation at the correct angle will yield around a 15% increase over panels installed flat.
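The bracketed Ok Solar rule quoted above can be captured in a small function. This is a sketch of the rule as stated; the text gives no bracket for the 15-25° range, so the function simply uses the local latitude there (an assumption, flagged in the code):

```python
def panel_tilt(latitude: float) -> float:
    """Fixed-mount tilt (degrees) from site latitude, following the
    bracketed Ok Solar rule quoted in the text.

    The text gives no bracket for 15-25 degrees; this sketch assumes
    the tilt simply equals the latitude there.
    """
    lat = abs(latitude)
    if lat <= 15:
        return 15.0
    if lat <= 25:
        return float(lat)   # assumption, not from the source
    if lat <= 30:
        return lat + 5.0
    if lat <= 35:
        return lat + 10.0
    if lat <= 40:
        return lat + 15.0
    return lat + 20.0

print(panel_tilt(12))   # 15.0
print(panel_tilt(38))   # 53.0
```

A manually adjustable frame would then swing a few degrees above this value in winter and below it in summer.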
OkSolar. Angle of Orientation for Solar Panels & photovoltaic modules. Retrieved from http://www.oksolar.com/technical/angle_orientation.html
Renewable Energy World. Solar Trackers: Facing the Sun. Retrieved from http://www.renewableenergyworld.com/rea/news/article/2009/06/solar-trackers-facing-the-sun
Mary Calvagna, MS
Here are some ways to reduce your risk of gestational diabetes:
The National Academy of Science’s Institute of Medicine makes the following recommendations regarding weight gain during pregnancy:
*These values are based on body mass index (BMI)—the ratio of your weight in kilograms to your height in meters squared. Recognize that these values are for Caucasians and may not apply to Asians, who have smaller body frames and a different percentage of body fat.
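The BMI formula mentioned above is straightforward to compute (a minimal sketch; the category cut-offs used by the guidelines are not reproduced here, and as noted they vary by population):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))   # 22.9
```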
Besides increasing your risk for gestational diabetes, excessive weight gain during pregnancy is also a risk factor for retaining excess weight post-pregnancy. It should be noted that the subject of recommended pregnancy weight gain remains somewhat controversial and that some feel that the above guidelines are too high. Talk with your doctor about what range of weight gain is right for you.
Even before pregnancy begins, nutrition is a primary factor in the health of the mother and the baby. Besides lowering your risk of gestational diabetes, eating a healthy diet lowers your and your baby's risk of serious complications during and after pregnancy. A healthy diet is one that is low in saturated fat and rich in fruits, vegetables, and whole grains.
Talk to your doctor about whether you should take probiotic supplements to reduce your risk of gestational diabetes.
If you smoke, talk to your doctor about ways to quit to reduce your risk of gestational diabetes.
Participating in a regular exercise program can lower your risk of developing gestational diabetes by helping you maintain a healthy weight. But, it is very important that you discuss exercise with your doctor before you begin.
Choose exercises that do not require your body to bear any extra weight. Good examples are:
When you are exercising, be sure to stay hydrated. Drink plenty of fluids, even if you are not thirsty. If your body temperature goes up too high, it can be dangerous for your baby.
Avoid contact sports or vigorous sports. Also, avoid any exercises that increase your risk of falls or injury.
Chung S, Song MY, et al. Korean and Caucasian overweight premenopausal women have different relationship of body mass index to percent body fat with age.
J Appl Physiol.
Mottola MF. The role of exercise in the prevention and treatment of gestational diabetes mellitus.
Cur Sports Med Rep. 2007;6:381-386.
Standards of Medical Care in Diabetes 2006 III. Detection and diagnosis of gestational diabetes mellitus.
Tieu J, Crowther CA, et al. Dietary advice in pregnancy for preventing gestational diabetes mellitus.
Cochrane Database Syst Rev. 2008;16(2):CD006674.
Yun S, kabeer NH, et al. Modifiable risk factors for developing diabetes among women with previous gestational diabetes.
Prev Chronic Dis.
3/17/2014 DynaMed's Systematic Literature Surveillance
http://www.ebscohost.com/dynamed: Luoto R, Laitinen K, et al. Impact of material probiotic-supplemented dietary counselling on pregnancy outcome and prenatal and postnatal growth: a double-blind, placebo-controlled study. Br J Nutr. 2010. Jun;103(12):1792-1799.
10/13/2014 DynaMed's Systematic Literature Surveillance
http://www.ebscohost.com/dynamed: Zhang C, Tobias DK, et al. Adherence to healthy lifestyle and risk of gestational diabetes mellitus: prospective cohort study. BMJ. 2014 Sep 30;349.
Last reviewed September 2015 by Andrea Chisholm, MD
Research Shows Dolphins Call Each Other By Unique Names
Researchers found that dolphins, known to have sophisticated communication systems, each appear to have their own names. Not Fred or Diane. Instead, whistles that are unique to each individual dolphin.
BBC news reports that researchers looked into a group of bottlenose dolphins, recording their sounds and then playing them back to gauge reactions.
Dr. Vincent Janik, from the University of St. Andrews in Scotland, said about their findings, “We played signature whistles of animals in the group, we also played other whistles in their repertoire and then signature whistles of different populations – animals they had never seen in their lives.”
When the dolphins heard their own specific whistle, they responded back with the same sound. Dr. Janik continued, “Most of the time they can’t see each other, they can’t use smell underwater, which is a very important sense in mammals for recognition, and they also don’t tend to hang out in one spot, so they don’t have nests or burrows that they return to.”
Not being able to rely on their other senses, sound is vital for them to communicate and locate one another in the water.
The abstract for the study states, “This study provides compelling evidence that a dolphin’s learned identity signal is used as a label when addressing conspecifics. Bottlenose dolphins therefore appear to be unique as nonhuman mammals to use learned signals as individually specific labels for different social companions in their own natural communication system.”
That puts them in the same category as humans and adds a second species to the list of animals who label each other individually.
Swamp White Oak
- Deciduous tree, medium-sized, to 75 ft (23 m) tall, pyramidal when young, open crown, rounded, short trunk; bark light grayish-brown, scaly, fissured with flat ridges with age. Leaves alternate, simple, 12-17 cm long, widest above the middle, tapering to a wedge-shape base, rounded shallow lobes, glossy green above, gray or almost white and pubescent below. Fruit (acorns) 20-30 mm long, solitary or in pairs, stalks 2-10 mm long, cup covered with swollen scales, enclose a third or half of the nut.
- Sun to part shade, best in moist, well-drained, acid soils, reportedly has some tolerance to drought and urban conditions.
- Hardy to USDA Zone 4 Native range from Quebec, Pennsylvania, west to Wisconsin, south to Georgia and Arkansas.
- bicolor: two-colored, a reference to the contrast between the upper and lower leaf surfaces.
- Oregon State Univ. campus: west Market Place West.
This multi-disciplinary database provides full text for more than 4,600 journals, including full text for nearly 3,900 peer-reviewed titles. PDF backfiles to 1975 or further are available for well over one hundred journals, and searchable cited references are provided for more than 1,000 titles.
Databases and Digital Media Services
Features thousands of cross-referenced entries, covering the entire spectrum of African-American history over the past 500 years.
Spans more than 500 years of political, military, social, and cultural history to cover the American experience.
Offers fast access to more than 600 Native American groups and over 15,000 years of American Indian culture and history.
The Library's subscription to this popular genealogy website is limited to users within the Library itself. Please visit us!
Presents the full scope of world history from prehistory through the 1500s, with special topic centers on key civilizations and regions.
Find repair and diagnostic information for your car in our easy searchable database! Begin your search by selecting the year, make and model and download easy-to-read instructions as well as car care tips and up-to-date safety alerts. YouTube Tutorial.
This weekly paper has online content available going back to 2009.
Virtually every course of study — from history to science to literature — is ultimately tied to the study of people. Biography in Context delivers outstanding research support with nearly a million biographical entries spanning history and geography. Biography in Context is a curriculum-aligned resource that offers media-rich content in context that's updated daily to meet the needs of today’s user. YouTube Tutorial.
Find expert criticisms of the works and lives of many great authors, access topics on literary themes, movements and genres, and watch video clips of many full-length classic plays.
Earlier in this book I called it EM-gravity, and I will retain that name, even though much of gravity is due to the magnetic charge of magnetic monopoles.
When the Maxwell Equations are taken to be the axioms over all of physics, then gravity is the smallest EM attraction possible, and the reason we see gravity on cosmic bodies is because they hold so much mass, and this mass is proportional to the magnetic monopoles they contain. So the old Newton-gravity is just the Maxwell Equations' attraction of one magnetic monopole for another magnetic monopole. The force law is the same as the Coulomb force law, only the relative strength is 10^40 weaker. Magnetic monopoles come in two types, a north and a south monopole, and all four possible combinations are an attractive force, whether north to north, north to south, south to north, or south to south. It may be the case that unlike magnetic monopole charges have a tiny bit more attraction force than the like charges, but it is the Maxwell Equations that will determine that.
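As a purely illustrative sketch (not an endorsement), the force law asserted in this paragraph, Coulomb's inverse-square form scaled 10^40 weaker, can be written out. The 1e-40 factor and the monopole interpretation are the post's own assertions, not established physics; the function names are mine:

```python
# Illustration of the post's claim only: "EM-gravity" keeps Coulomb's
# inverse-square form but is 10^40 weaker. The 1e-40 scale and the
# monopole interpretation are the author's assertions, not established
# physics; COULOMB_K is the standard Coulomb constant.
COULOMB_K = 8.9875517923e9  # N*m^2/C^2

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Magnitude of the standard Coulomb force between two charges (N)."""
    return COULOMB_K * q1 * q2 / r ** 2

def em_gravity_force(q1: float, q2: float, r: float) -> float:
    """The post's claimed monopole-monopole attraction: the Coulomb
    form scaled by 1e-40, attractive for all charge pairings."""
    return 1e-40 * coulomb_force(q1, q2, r)
```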
Now in the Universe at large, we see more than just the plain old Newton Gravity, for we see solid-body-rotation. Here are a few examples of solid-body-rotation:
(1) barred spiral galaxies (2) many spiral galaxies (3) Saturn's Rings (4) Jupiter's Red-Spot (5) resonance of satellites with parent body such as the Jovian moons or Mercury with Sun
Now the theory of General Relativity by Einstein is crackpot nonsense and fake physics. Einstein never made the Maxwell Equations the axioms of physics, and he pays a dreadful price for that omission, because General Relativity is nothing but science fiction and utter nonsense. The precession of Mercury's perihelion is not explained by General Relativity but by solar radiation pressure. If GR were viable, then there would be a Venus precession and an Earth precession, but those two planets are far enough away from the direct solar radiation pressure that there is no precession to speak of.
An easy experimental proof that GR is physics fakery is this experiment: Bubble Chamber tracks of the neutron versus the proton, and of the chemical elements versus an ionized chemical element. If GR were true, then no particle, not even photons of high energy, would have a straight-line track in the bubble chamber. The only tracks in a bubble chamber that are curved are those of particles with a magnetic or electric charge imbalance.
GR is reduced to the statement: "Mass bends Space and Matter follows the Curvature of bent Space." When we put that statement into the Maxwell Equations we see that it does not hold up, and Einstein should have sought to inject GR into the Maxwell Equations, but Einstein was never really a bright enough physicist. If you put GR into the Maxwell Equations, you have to alter the statement of GR to be this: "Charge bends Space and Matter follows the Curvature of that bent Space." The charge could be either electric charge, dipole magnetic charge, or monopole charge. Einstein is the typical physicist of the 20th century who comes up with theories of physics to please his own idiosyncratic wishes of whatever he falls in love with, such as an elevator in space that imitates the force of gravity. And sadly enough, almost all the other physicists of the 20th century followed Einstein in how they put together a new theory -- idiosyncratic pet loves, rather than taking an axiom system as the heart of physics and having everything new put to the test of whether the axioms allow or disallow the new proposed theory.
You see, for me and future physicists, we no longer have to be clever in dreaming up a new theory. We only have to be super cognizant of the Maxwell Equations and how they work. And if we run into some new physics phenomenon such as superconductivity or superfluidity or BEC, we do not do what Einstein, or Bardeen, Cooper, and Schrieffer, or Higgs, or Hawking did, by dreaming up new mechanisms that were their own pet loves at the time. No, we do not do that crank crackpot nonsense. What we do as real physicists is simply pull out the Symmetrical Maxwell Equations as axioms and see if they permit and allow the new phenomenon, and how they permit and allow that phenomenon.
Back to gravity.
Now it is an awful shame that we recently sent a flyby satellite mission to Saturn and have not proven to what percentage the Rings are in solid-body rotation. Is it 90% solid body? Is it 50% solid-body rotation? Who knows. And did the satellite take enough photos that we can find out the percentage of solid-body rotation of those ice-crystal Rings? If not, then we wasted a mission to Saturn, because the most important question of Saturn is how much solid-body rotation its Rings display.
Earlier I listed the Red Spot of Jupiter as a solid-body-rotation phenomenon, though the literature says the Red Spot is a weather phenomenon. I can believe it is partly a weather aspect, but it is also a magnetic aspect of loosely held particles, much like the ice of Saturn's Rings, and those magnetic particles are displaying solid-body rotation.
Now some moons and the planet Mercury display orbital resonance. And Newton-gravity is really ill-equipped to explain resonance. Resonance is part of the Maxwell Equations theory, as a feature just before you reach solid-body rotation. The planets around the Sun are in Newtonian gravity, not in solid-body rotation, but if we moved closer to solid-body rotation, we would first have the planets in a harmonic resonance with the Sun. The Rings of Saturn are in solid-body rotation, and at an earlier stage in their orbit those rings were in a resonance mode; as the rotation went on further, the resonance turned into solid-body rotation.
Now it is a shame the Rings of Saturn have no bar to them, as in barred spiral galaxies, where we can watch and see that the bars are in solid-body rotation. So barred spiral galaxies are 100% solid body and others less than 100%. So why would stars have magnetic and electric charges to cause solid-body rotation? They are not ice crystals like Saturn's Rings, so what are stars in barred spiral galaxies? I can only guess that in older galaxies there is a huge supply of iron present, and in 14 billion years of evolving, those stars reached an evenness or uniformity of magnetism and electricity, so that, like the Coulomb force itself, the rotation is that of a vinyl phonograph record of the 1960s.
Google's archives are top-heavy in hate-spew from search-engine-bombing. Only Drexel's Math Forum has done an excellent, simple, and fair archiving of AP posts for the past 15 years, as seen here:
Dextrose is a reducing sugar. The reducing power of a sugar is measured by its ability to reduce solutions of alkaline copper sulphate (Fehling’s solution) to cuprous oxide. The dextrose equivalent (DE) of pure dextrose is defined as 100. Expressed as a percentage of the reducing value of pure dextrose and calculated on a dry weight basis, the total reducing value of a starch hydrolysate is referred to as its DE.
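The DE arithmetic in this definition can be sketched as follows; the function and the sample figures are illustrative only, not a standard assay procedure:

```python
def dextrose_equivalent(reducing_sugars_as_dextrose_g: float,
                        dry_solids_g: float) -> float:
    """DE: total reducing value expressed as a percentage of the
    reducing value of pure dextrose (DE 100), on a dry-weight basis."""
    return 100.0 * reducing_sugars_as_dextrose_g / dry_solids_g

# Pure dextrose: all dry solids reduce as dextrose, so DE = 100.
print(dextrose_equivalent(50.0, 50.0))   # 100.0
# A hypothetical hydrolysate with less reducing power:
print(dextrose_equivalent(18.0, 100.0))  # 18.0
```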
The classic browning in food systems is due to the interaction of reducing sugars and acidified protein compounds. Due to its active aldehyde groups, dextrose is a powerful reducing sugar and promotes rapid buildup of browning.
At temperatures below 55 °C (131°F), dextrose crystallizes from concentrated aqueous solutions in the monohydrate form, in which each dextrose crystal contains 1 molecule of water of crystallization per molecule of dextrose (Dx-monohydrate). Above 55 °C (131°F), the anhydrous form crystallizes, in which the dextrose crystal contains no water of crystallization.
With its pleasant, clean, sweet and cooling taste, dextrose has been used for years as a sweetener in a wide range of food applications. Dextrose is one of the sweetest of the starch-derived sugars. On a scale on which sucrose is assigned a sweetness value of 100, dextrose is rated at 75.
Its sweetness is influenced by a variety of factors such as temperature, acidity, salts, flavoring materials, sweetener concentration and the nature of other sugars present. Unlike sucrose, dextrose is not subject to the process known as inversion, and therefore its degree of sweetness does not change.
Dextrose and sucrose are often used together to control and balance sweetness and total solids. When dextrose and sucrose are combined, they exhibit a synergy. At a 40 percent replacement level, for example, the apparent relative sweetness of dextrose could be as high as 90.
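As a first-order check of the figures above, a simple weighted average of the blend already lands near the quoted value; real blends can deviate because of the synergy described. The helper below is a hypothetical sketch, not a formulation tool:

```python
def blend_sweetness(fractions_and_scores):
    """First-order estimate: fraction-weighted relative sweetness of a
    sweetener blend (sucrose = 100). Real sucrose/dextrose blends can
    show synergy beyond this simple average."""
    return sum(frac * score for frac, score in fractions_and_scores)

# 40% of the sucrose replaced by dextrose (sucrose = 100, dextrose = 75):
estimate = blend_sweetness([(0.60, 100.0), (0.40, 75.0)])
print(round(estimate, 1))  # 90.0
```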
Heat of solution
The heats of solution of dextrose monohydrate (-105.5 J/g) and of anhydrous dextrose (-59.3 J/g) differ greatly from that of sucrose (-16.1 J/g). Hence, the heat required to dissolve dextrose monohydrate is more than six times greater than for sucrose. Consequently, when eating food containing dextrose in crystalline state, there is a distinct cooling sensation in the mouth. The perception of sweetness is shortened and flavor enhancement is improved.
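A hedged sketch of the cooling-effect arithmetic, using the J/g figures quoted above (the function name and sign convention are mine, not Cargill's):

```python
# Heats of solution quoted in the text (J/g); negative values mean
# dissolution absorbs heat from the surroundings (endothermic).
HEAT_OF_SOLUTION_J_PER_G = {
    "dextrose_monohydrate": -105.5,
    "dextrose_anhydrous": -59.3,
    "sucrose": -16.1,
}

def heat_absorbed_joules(sugar: str, grams: float) -> float:
    """Heat drawn from the surroundings (e.g., the mouth) when `grams`
    of crystalline sugar dissolves -- the source of the cooling effect."""
    return -HEAT_OF_SOLUTION_J_PER_G[sugar] * grams

print(round(heat_absorbed_joules("dextrose_monohydrate", 10.0), 1))  # 1055.0
print(round(heat_absorbed_joules("sucrose", 10.0), 1))               # 161.0
```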
Crystalline dextrose is readily soluble in water but only slightly in ethanol and hardly soluble in other organic solvents. At temperatures higher than 55°C (131°F), dextrose is more soluble than sucrose.
In addition, at any given specific temperature, there is an optimum sucrose-dextrose saturation ratio that raises total solubility above that of the individual components.
Dextrose, because of its low molecular weight, has the capacity to decrease the freezing point. At a 30% concentration, the freezing point of a dextrose solution is 2°C lower than that of a comparable sucrose solution - crucial in the production and consumption of ice-cream.
The freezing point depression factor (FPDF) is typically used for calculations in the ice-cream industry. The FPDF factor for sucrose is 1.00 compared to 1.90 for dextrose.
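A minimal sketch of how these FPDF factors are typically applied, assuming the common ice-cream convention of scaling each sweetener's mass to a sucrose-equivalent mass; the names are illustrative:

```python
# Freezing point depression factors relative to sucrose, from the text.
FPDF = {"sucrose": 1.00, "dextrose": 1.90}

def sucrose_equivalent_mass(blend_grams: dict) -> float:
    """Sucrose-equivalent mass of a sweetener blend for freezing-point
    calculations: each sweetener's mass is scaled by its FPDF."""
    return sum(grams * FPDF[name] for name, grams in blend_grams.items())

# Replacing 30 g of 100 g sucrose with dextrose increases the
# sucrose-equivalent mass, i.e., it depresses the freezing point more:
print(round(sucrose_equivalent_mass({"sucrose": 100.0}), 1))                   # 100.0
print(round(sucrose_equivalent_mass({"sucrose": 70.0, "dextrose": 30.0}), 1))  # 127.0
```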
Because it is a monosaccharide, dextrose is the ideal carbohydrate source for yeast fermentation in baking and brewing. The fermentation begins immediately and proceeds rapidly. Dextrose provides energy to the cell to produce many by-products in addition to carbon dioxide and ethanol. Dextrose is also used in lactic acid fermentation processes in the pickling and meat industries.
Dextrose is often used in combination with sugar or other sweeteners. It acts to shorten the sweetness perception and enhance the original food flavor.
Dextrose is a reducing sugar and improves, in comparison with sucrose, the inhibition of oxidative degradation, thus increasing color stabilization. This can help to extend the shelf life of food products.
Dextrose monohydrate and anhydrous dextrose are available in a variety of particle size distributions and granulometries to provide ease and stability of blending. Coarse dextrose products are well suited where flowability and dust minimisation matter.
Some Cargill products are only approved for use in certain geographies, end uses, and/or at certain usage levels. It is the customer's responsibility to determine, for a particular geography, that (i) the Cargill product, its use and usage levels, (ii) the customer's product and its use, and (iii) any claims made about the customer's product, all comply with applicable laws and regulations.
Astronauts in Hard Hats
Outer space is a heck of a place to build a home. Just ask the astronauts charged with assembling the International Space Station. For starters, it's alternately frigidly cold and dark as ink, then blisteringly hot and bright as a noontime desert—a climatic flip-flop that takes place every three-quarters of an hour. Below, see and hear six astronauts tell what it's like to "translate" (spacewalk) and otherwise cope with the largest construction project ever undertaken in orbit. For capsule biographies of the astronauts, click on their names.
Working in space
"Building the space station is not an ordinary construction job. It's very different, and it's different from building a house or a skyscraper, in that we're working in a very hostile environment, a very isolated environment. So, first of all, we're not as efficient working in these big, bulky spacesuits. We're time-constrained, up to eight hours at a time. We're constrained on the amount of tools and weight we can carry to space. So there it's somewhat unique.
"But part of the problem is, we have to have it all figured out before we show up in space. It's got to be all figured out, while on a construction site, if you're building a house and you cut a two-by-four to a certain length and you go to install it and it doesn't work, well, it might be a little inefficient, but basically you toss that board, you go cut another one, and you're off and running. If we show up in space, and now that piece that we're putting on doesn't fit, we're out of luck. That may have just delayed the next series of missions. We may have to add another mission now to bring up the right component, because they're all linked. So it's very complicated."
"When you go up there for construction, not only do you have to bring the material to do the construction and all the equipment to do the outfitting, but you have to bring every single little tool that you might require, every infrastructure that you require to build the actual station. And then you have to bring every single thing you need to sustain the lives of the construction workers, the crew members, that go up there, because there is absolutely nothing. As I often say to people, unfortunately, there is no hardware store around the corner, so if you forgot a particular type of washer or you don't have the right screwdriver, well, you can't go back and get it."
"The tasks themselves are actually rather mundane. Although the environment is rather exciting, we were hooking up power and data cables between Zarya, the Russian control module, and Unity, the American-built node. So we were hooking up data cables, power cables, we were putting out sun shades, cleaning things up, preparing for the next missions. So on some level, the tasks are the mundane tasks that go into building something—to making a laboratory, an office building, ready for people to inhabit it and do work in it.
"That's what's fascinating about it. It is the environment in the end which is the challenge and not the mundane tasks. It's being able to go outside and to have only two people at a time typically, and to work in a vacuum with temperature extremes from -200°F to +200°F, and [going] from daylight, and 45 minutes later it's dark, 45 minutes later it's day."
"There is a plan as the space station grows to take up a large space station manipulator arm, and once it is on the station, that will be able to help us with the construction....This new space station arm is able to walk, it can literally walk from one end to another. The beauty of that is that, as the station grows and becomes very large, you'd have a hard time designing any arm to do everything you wanted it to do. So what's been designed into the station is, either end of the arm can attach itself to any point on the station. So ideally you attach to an area where you need to work, and if a week later or days later you need to work in another area of the station, you can walk the arm around the station to position it to a more usable space to work."
"What we call 'translating' around the space station, people call spacewalking. But, of course, you can't actually walk in space. There's nothing for your feet to walk on, in the sense of gravity holding you down to walk. What we end up doing is actually more of a space crawl. It's like climbing something that's going straight up. That is, you use your hands a lot, in fact, almost entirely, in order to move around the space station. So you grab ahold of one handrail, then you grab ahold of the next one. You let go of the last one, you grab the one you're on, you grab the next one. And that's literally how you do it, from one after the other handrail to wherever it is you're going.
"There is one other way, and we call it the Elevator. That is, if you get on the end of the robotic manipulator system, on the end of the arm, it can actually take you wherever it is you need to go."
"One of the things we have to do while we're on our spacewalks is loosen a lot of bolts. And that's complicated in space because you have to react to all the torques. That's the technical way to put it, but if you've ever been in a swimming pool and worked on something, you know that if you turn something one way, it's going to turn you the other. If you're on ice and you push on something, you obviously go the other way, and in space, of course, that's true in all three dimensions.
"So if you have an electric drill as we did, and loosen a bolt, and you're not tied down with your feet or your hands, you can feel it trying to turn you the other way. So we have to react to all the torques ourselves, and there are two ways to do it: one with your feet in a foot restraint, or free-floating."
"[A] lot of the tasks we have to do free-float. You're really working hard. You might watch an EVA [Extra Vehicular Activity, or spacewalk] and think, "Gosh, these guys are moving slow, and they're not working that hard." Inside the suit, you're constantly flexing all of your muscles to keep control. A good example: If you start translating [spacewalking], and then you go to stop—you put on the brakes with your hand—you're going to pitch up. Your feet are going to tend to fly up over your head. So you have to constantly sense any rates that your body might be getting on it, then put in a force to null that rate. So when you go to stop, you're stopping with one hand and you're pushing up with the other, so you don't pitch out of control."
Inside a spacesuit
"Training in the Neutral Buoyancy Lab [a swimming pool used for instruction before spaceflight] is very good training. It's harder in some ways than actually going outside, because of the gravity effects in the pool. Once we're in the pool, they put us in a spacesuit, a real spacesuit, and pressurize us ... and that's what really makes a lot of spacewalking difficult, because your hands are in this balloon. And you know that if you want to bend a balloon, it takes work. So every time you close your fingers or open your hands, you're actually working against the suit...The training in the tank is very important because it doesn't pay to fight the suit. The suit will win, so you have to learn how to be one with the suit."
"Different tasks can be more tiring than others. But just the entire experience of being in the suit, which is pressurized, can be very, very tiring, because any movements—you're sort of like the Michelin Man, you know, you're puffed up in this suit, which means that any movements you make with your hands or your arms, they're all against pressure. It's like, you're pressurized, and you're a little puffed up, so there's a lot of resistance.
"The way the suit is designed, it's very protective, because, of course, you don't have anything out there. You're in your own little spacecraft, actually, when you're wearing this suit. So as a result of all the protections, it's kind of stiff. So essentially what they're doing is lifting weights for six to seven hours. And so obviously if the task is very intensive with their hands, then they're lifting even more weight. So there are times built in for them to rest. But from an endurance and a strength standpoint, it's an incredible workout."
"The American suits will have a SAFER backpack on. It's called a Simplified Aid for EVA Rescue, and it's a fancy acronym for saying that if for some reason your two tethers have come loose and the worst has happened and you've floated away from where you've been attached to the station, you can activate this SAFER, and it's got some nitrogen jets that you can use to fly yourself back to the station and grab hold and get yourself retethered."
"If we got a small hole [in our spacesuit], our oxygen would start coming out of the spacesuit, and the pressure would start to drop. If the pressure drops too far, then you'll get the bends, and if it drops below about 2.5 psi, then you don't have enough oxygen to maintain useful consciousness.
"Now, we have a device called the Secondary Oxygen Pack on the back of our backpack, and it's got a whole lot of oxygen in it, and it would be able to support a small leak for—depends on how small it is—but for at least 30 minutes. So for a small leak, we'd have our secondary-oxygen system kick in, and we'd be heading back to the airlock and repressurizing, and we'd be fine. A big leak—like a big hole in the suit, say a half-inch hole or something like that, or if your glove blew off—would pretty much be a real bad day for you."
Better be tethered
"Your safety tether is like a reel. It's a wire tether on a spring-loaded spool that tends it, and that is attached to you and the space station at all times, because if you should ever let go of the station, the tether will pull you back in, or you can grab it and pull yourself back in. If you didn't have that tether and you let go, even if you're two inches away from the station, you can't swim over there. There's no water to create forces against. So it's critical from a safety perspective that you be tethered at all times."
"It is a challenge, because if you let go of something, it floats away. So you have to be very careful in how you tether. If you let go of the structure, you would float away, so you have to be very careful, make sure your safety tethers are secure."
"One of the things that you learn to do, in addition to processing the task that you're working on and thinking two or three steps ahead, you're also constantly thinking about where your tether is, where your buddy is, where his tether is, and where the airlock is. So that in any point in time, if something should go wrong, if he should get a leak in his suit, I know where he is, and I'm going over there, and I'm going to help him get back to the airlock."
© | Updated November 2000
Few consequences of aging seem scarier than a shrinking brain.
But brain shrinkage happens to most people … and greater shrinkage is linked to higher risk for Alzheimer’s and other forms of dementia.
Earlier, we reported on several studies linking higher blood levels of omega-3s – or regular use of fish oil supplements – to better cognitive health and less age-related brain shrinkage.
(For more on those studies, and links to related reports, see "Brain Benefits of Fish Bolstered by MRI Study”, "Fish Oil Aided Size and Health of Aging Brains”, "Brain Decline Deterred by Omega-3s & Vitamins”, and "Fish Oil Lowers Cortisol and Body Fat Levels”.)
The long-chain omega-3s also found in seafood – specifically DHA – account for about 40 percent of the fatty acids in the brain, where they play essential structural and functional roles.
(The body can make DHA from plant-source omega-3s, but only very inefficiently. This conversion provides just enough DHA to maintain normal – not necessarily optimal – brain, eye, and immune function.)
DHA is concentrated near the synapses, where critical communications between brain cells (neurons) occurs.
A new study in older women adds more hard evidence that diets rich in seafood-source omega-3s help preserve brain volume and function.
These findings strongly suggest that diets rich in seafood-source omega-3s should help deter or delay dementia.
Fish oil may slow brain shrinkage linked to dementia
The new study involved 1,111 women in the Women's Health Initiative Memory Study, and comes from University of South Dakota scientists.
Each volunteer's omega-3 (EPA and DHA) levels were measured at the outset of the memory study.
Eight years later – when the women’s ages averaged 78 – they underwent MRI scans to measure their brain volume.
The women with higher levels of omega-3s had slightly larger brain volumes.
Specifically, the brains of the women whose omega-3 levels measured double the average were 0.7 percent larger than their peers'.
Even more importantly, the brain advantage seen in the women with higher omega-3 levels was greatest in the hippocampus region, which plays key roles in memory formation and recall.
Among the women with higher omega-3 levels, the hippocampus/memory region was 2.7 percent larger than the same brain area in women with lower omega-3 levels. (In Alzheimer's disease, the hippocampus begins to atrophy even before symptoms appear.)
According to lead author James V. Pottala, PhD, "… the results suggest that the effect on brain volume is the equivalent of delaying the normal loss of brain cells that comes with aging by one to two years.” (AAN 2014)
Omega-3s and brain decline: The picture is mostly positive but mixed
The record of research on omega-3s and dementia is mixed, with some positive findings and other studies showing no benefit.
For example, the same University of South Dakota team recently published the results of a six-year study among 2,157 women (Ammann EM et al. 2013).
The women had normal cognitive capacities at the outset, and received annual cognitive tests for a median of 5.9 years.
Their omega-3 (DHA + EPA) blood levels and cognitive capacities were measured at the start of the trial, and annually thereafter for about six years.
After adjusting the results to account for the known effects of various personal characteristics, no significant differences were found between women in the high and low DHA + EPA levels … either at the time of the first annual cognitive tests or over time.
As the authors wrote, "We did not find an association between RBC [red blood cell] DHA + EPA [omega-3] levels and age-associated cognitive decline ...”. (Ammann EM et al. 2013)
But as the authors of a recent evidence review concluded, the overall picture is encouraging (Cederholm T et al. 2013):
Animal studies have been consistently positive, with rodents getting omega-3s over long periods showing three big advantages: 1) less buildup of the amyloid proteins linked to Alzheimer’s disease; 2) less brain shrinkage in the hippocampus/memory region; 3) better cognitive performance over time.
Most epidemiological studies have linked higher fish intakes or omega-3 DHA blood levels to reduced rates of age-related cognitive decline.
The results of clinical trials in healthy older people have been mixed. Some small, short-term trials have detected positive effects in older adults who were cognitively healthy or had only mild cognitive impairment at the outset. In others, no advantages were seen among the participants taking fish oil vs. placebo capsules.
Omega-3 supplements have not produced significant benefits in people already diagnosed with Alzheimer’s disease … though the leading drugs don’t help much either.
The negative outcomes of most of the clinical trials published to date may be misleading, because, as the authors pointed out, "the treatment periods may have been too short.” (Cederholm T et al. 2013)
And, given the positive evidence to date, the authors suggest following "the general CDC dietary recommendations of 2-3 fish meals per week or the equivalent intake of long chain omega-3 fatty acids, particularly DHA.” (Cederholm T et al. 2013)
Unsurprisingly, it may well be that omega-3s just can't do it all alone ... as we’ll explain.
Omega-3s need help from plant-rich, whole food diets
The apparent anti-dementia effect of omega-3s varies greatly based on the presence or absence of a gene variation called ApoE4, which is linked strongly to increased risk for Alzheimer's and heart disease.
Omega-3s seem to bring more benefit in people who don’t carry the ApoE4 gene variation (Huang TL et al. 2005; Whalley LJ et al. 2008).
But diets rich in omega-3 DHA may bring serious brain benefits to ApoE4 carriers who also cut back on sugars, starches, and omega-6 fatty acids (from cheap vegetable oils) and get plenty of antioxidant-rich vegetables and fruits (Henderson ST et al. 2004; Florent-Béchard S et al. 2007; Johnson EJ et al. 2008).
The standard American diet suffers from a pro-inflammatory excess of omega-6 fats from cheap vegetable oils (corn, soy, cottonseed, sunflower, safflower).
In combination with sugars, refined starches, and sedentary lifestyles, this omega-6 overload raises the risk of obesity, diabetes and heart disease, which are major risk factors for cognitive decline and Alzheimer's disease … particularly in people carrying the ApoE4 variation.
Thus, to get maximum brain benefits from seafood-source omega-3s, everyone – especially ApoE4 carriers – must cut back on refined carbs and omega-6 fats, eat ample amounts of whole, antioxidant-rich plant foods, and get active!
Aguilar CA, Talavera G, Ordovas JM, et al. The apolipoprotein E4 allele is not associated with an abnormal lipid profile in a Native American population following its traditional lifestyle. Atherosclerosis. 1999 Feb;142(2):409-14.
American Academy of Neurology (AAN). Can fish oil help preserve brain cells? January 22, 2014. Accessed at http://www.eurekalert.org/pub_releases/2014-01/aaon-cfo011514.php
Ammann EM, Pottala JV, Harris WS, Espeland MA, Wallace R, Denburg NL, Carnahan RM, Robinson JG. ω-3 fatty acids and domain-specific cognitive aging: secondary analyses of data from WHISCA. Neurology. 2013 Oct 22;81(17):1484-91. doi: 10.1212/WNL.0b013e3182a9584c. Epub 2013 Sep 25.
Cederholm T, Salem N Jr, Palmblad J. ω-3 fatty acids in the prevention of cognitive decline in humans. Adv Nutr. 2013 Nov 6;4(6):672-6. doi: 10.3945/an.113.004556.
Eto M, Saito M, Okada M, et al. Apolipoprotein E genetic polymorphism, remnant lipoproteins, and nephropathy in type 2 diabetic patients. Am J Kidney Dis. 2002 Aug;40(2):243-51.
Florent-Béchard S, Malaplate-Armand C, Koziel V, et al. Towards a nutritional approach for prevention of Alzheimer's disease: biochemical and cellular aspects. J Neurol Sci. 2007 Nov 15;262(1-2):27-36.
Harris WS, Pottala JV, Varvel SA, Borowski JJ, Ward JN, McConnell JP. Erythrocyte omega-3 fatty acids increase and linoleic acid decreases with age: observations from 160,000 patients. Prostaglandins Leukot Essent Fatty Acids. 2013 Apr;88(4):257-63. doi: 10.1016/j.plefa.2012.12.004. Epub 2013 Jan 31.
Henderson ST. High carbohydrate diets and Alzheimer's disease. Med Hypotheses. 2004;62(5):689-700.
Huang TL, Zandi PP, Tucker KL, et al. Benefits of fatty fish on dementia risk are stronger for those without APOE epsilon4. Neurology. 2005 Nov 8;65(9):1409-14.
Jofre-Monseny L, Minihane AM, Rimbach G. Impact of apoE genotype on oxidative stress, inflammation and disease risk. Mol Nutr Food Res. 2008 Jan;52(1):131-45.
Johnson EJ, McDonald K, Caldarella SM, et al. Cognitive findings of an exploratory trial of docosahexaenoic acid and lutein supplementation in older women. Nutr Neurosci 2008;11:75–83.
Kalaria RN, Maestre GE, Arizaga R, et al. Alzheimer's disease and vascular dementia in developing countries: prevalence, management, and risk factors. Lancet Neurol. 2008 Sep;7(9):812-26. Epub 2008 Jul 28.
Kivipelto M, Rovio S, Ngandu T, et al. Apolipoprotein E epsilon4 Magnifies Lifestyle Risks for Dementia: A Population Based Study. J Cell Mol Med. 2008 Mar 4;12(6B):2762-71. Epub 2008 Feb 8.
Laitinen MH, Ngandu T, Rovio S, et al. Fat intake at midlife and risk of dementia and Alzheimer's disease: a population-based study. Dement Geriatr Cogn Disord. 2006;22(1):99-107. Epub 2006 May 19.
Messier C. Diabetes, Alzheimer's disease and apolipoprotein genotype. Exp Gerontol. 2003 Sep;38(9):941-6.
Pottala JV et al. Higher RBC EPA + DHA corresponds with larger total brain and hippocampal volumes. Neurology 10.1212. Published online before print January 22, 2014, doi: 10.1212/WNL.0000000000000080. Accessed at http://www.neurology.org/content/early/2014/01/22/WNL.0000000000000080.short
Whalley LJ, Deary IJ, Starr JM, et al. n-3 Fatty acid erythrocyte membrane content, ApoE varepsilon4, and cognitive variation: an observational follow-up study in late adulthood. Am J Clin Nutr. 2008 Feb;87(2):449-54.
A nautical chart is one of the most fundamental tools available to the mariner. It is a map that depicts the configuration of the shoreline and seafloor. It provides water depths, locations of dangers to navigation, locations and characteristics of aids to navigation, anchorages, and other features.
The nautical chart is essential for safe navigation. Mariners use charts to plan voyages and navigate ships safely and economically. Federal regulations require most commercial vessels to carry nautical charts while they transit U.S. waters.
Since the mid-1830s, the U.S. Coast Survey (a NOAA predecessor agency) has been the nation’s nautical chartmaker. NOAA's Office of Coast Survey is still responsible for creating and maintaining all charts of U.S. coastal waters, the Great Lakes, and waters surrounding U.S. territories.
There's a slight problem with the article, because the two basic definitions of the noun, "small body of water" and "collective effort" are not etymologically related at all. The first is from Old English and beyond that its origins are disputed. The other meaning comes from Latin pullus "young of an animal" via French poule "hen, stake", related to the Indo-European root pau- "few, little". How is this best solved?
I found an etymology for pool: http://books.google.com/books?id=hsRISNLSSHAC&pg=PT654&lpg=PT654 22.214.171.124 22:56, 21 August 2013 (UTC)
Etymology 1 is fully logical, only it is difficult to see how Breton POL could also be borrowed, like the Irish word. Whether the Welsh and Cornish words are borrowed from English is also disputable. A Celtic root is most likely, due to these considerations. However, compare Cornish BAL (ore mine) with the Proto-Indo-European presented in the main entry.Werdna Yrneh Yarg (talk) 18:03, 18 August 2015 (UTC)Andrew
means 'Absolutely not; means 'Exceedingly unlikely'; means 'Very dubious'; means 'Questionable'; means 'Possible'; means 'Probable'; means 'Likely'; means 'Most Likely' or *Unattested; means 'Attested'; means 'Obvious' - only used for close matches within the same language or dialect, at linkable periods.
Andrew H. Gray 21:41, 4 November 2015 (UTC)Andrew
The following information has failed Wiktionary's verification process.
Failure to be verified may either mean that this information is fabricated, or is merely beyond our resources to confirm. We have archived here the disputed information, the verification discussion, and any documentation gathered so far, pending further evidence.
Do not re-add this information to the article without also submitting proof that it meets Wiktionary's criteria for inclusion. See also Wiktionary:Previously deleted entries.
Rfv-sense:A group of nations for the purpose of a knockout tournament.
I don't think it is necessarily a group of nations, a group of teams would do too. I am unsure if it is called a knockout tournament while teams/nations are still playing in pools, perhaps the knockout stages are described as "direct knockout" or whatever. --126.96.36.199 17:53, 6 November 2013 (UTC)
- Looks like a straightforward mistake. Doesn't refer to the nations; doesn't have to be nations can be cities or people or animals (or objects I suppose, anything!) It refers to the group and not for a knockout tournament, but for a competition (I can't think who you'd use this outside of a competition, anyone?) Mglovesfun (talk) 18:00, 6 November 2013 (UTC)
- I think it is often used for the initial stages of what will later become a knockout tournament. Also note that, at least sometimes, but I think often, the word "group" is used when the teams aren't national teams. Even other words are used when the pools are split geographically (e.g. to save travel time/expenses). --188.8.131.52 20:00, 6 November 2013 (UTC)
- Should be changed to: (sports): A group of teams in the round-robin stage of a tournament. Purplebackpack89 (Notes Taken) (Locker) 18:17, 6 November 2013 (UTC)
- I think there are always several pools, but I'm not sure. I think that it's quite possible to have several round-robin stages. --184.108.40.206 20:00, 6 November 2013 (UTC)
- I think in most cases, there are at least two pools, Pool A and Pool B (the only exception I can think of is the medal round of the 1980 U.S. Hockey tournament). And, yes, there are some competitions that have two round robin stages Purplebackpack89 (Notes Taken) (Locker) 05:20, 8 November 2013 (UTC)
- RFV failed on the sense; only one quotation provided, to be seen at revision history. --Dan Polansky (talk) 08:52, 10 May 2014 (UTC)
5 Ways to Start Storing Water
Having a store of clean drinking water is one of the most important components of resiliency-building and emergency preparedness. It has been said many times that a person can live for three weeks without food but only three days without water. There are many emergency situations and events that can cause your everyday water supply to become unavailable or undrinkable. An unexpected natural disaster or severe storm will typically wreak havoc on the power grid. When widespread power outages occur, water may not flow from the tap, or treatment plants may not be working to properly treat and deliver safe water. It is events like these that we all must prepare for. Here we explore ways to ensure you are prepared to meet these types of situations without much effort or concern.
Keep in mind the recommendation that you have one gallon of water per person per day for emergency situations. I personally try to store three gallons per person for my family to ensure that we have water both for drinking and for all the other activities in life that require water (cooking, dishwashing, sanitation, and water for the animals and plants).
Although there are many things to consider when developing a water storage plan, taking a few simple, low-cost steps to build up a store of water is easier than you think. The following ways to store water will hopefully provide you with the insights and methods to make the process a whole lot easier.
A few main considerations to observe before getting into the methods of storing water:
- Water can be stored for a very long time if prepared properly (6 months to 5 years, depending on the preparation method).
- Water is heavy and can take up a lot of space (one U.S. gallon = 8.34 pounds; 5-gallon container = 41.7 pounds plus the container).
- It is recommended that you have two gallons of water per person per day. Try to store a minimum of a 3-day supply. So for a family of four, that would be 24 gallons of water.
- Light and air are not good for water. For long term storage, always try to use opaque, airtight containers and store them in cool, dark spaces.
- To prevent water from growing bacteria and other bugs, use water preserver or bleach solution.
- It is recommended that you rotate your water once a year for freshness and check on your water supply every month to ensure that leaks or contamination have not occurred.
- Water containers can be stored in many different places such as closets, underneath beds, etc. Get creative, but keep in mind that a leaking container can give you a whole lot more headache than you planned for. Be mindful of possible water damage issues.
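The arithmetic behind these guidelines is easy to script. A minimal sketch, using the two-gallon-per-person figure from the checklist above (function names are illustrative, not from any particular tool):

```python
def storage_needed(people, gallons_per_person_per_day=2.0, days=3):
    """Gallons of water to store for an emergency supply."""
    return people * gallons_per_person_per_day * days

def storage_weight_lbs(gallons):
    # One U.S. gallon of water weighs about 8.34 lb (container not included).
    return gallons * 8.34

gallons = storage_needed(4)            # family of four, 3-day supply
print(gallons)                         # 24.0 gallons
print(round(storage_weight_lbs(gallons), 1))  # 200.2 lb -- why placement matters
```

Running the numbers like this also makes the weight consideration concrete: a modest 3-day supply for a family of four already weighs about 200 pounds, which is worth knowing before you stack containers on a shelf.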
We will now cover five different ways to store water and build more resiliency into our lives.
New Store-Bought Bottled or Packaged Water
I have found that the easiest and quickest way to establish a baseline of water storage is simply to buy larger-sized bottled water. Anywhere from 1 gallon to 2.5 gallons can be purchased for very little, and you can get a few days' supply of water in one shopping trip. Keep in mind that this type of storage usually is not re-usable and needs to be replaced every 1-2 years. As a first step though, having 12 gallons (2 cases) of water set aside can give you great peace of mind. As you either expand or get more durable solutions, you can donate or distribute this water to folks who may be not as prepared as you are in time of need. Some additional thoughts with this method:
- Bottled water is available in almost any location in a variety of sizes. (1 - 2.5 gallons)
- Gallon-sized jugs can cost as little as $1, sometimes less, and can come in cases of 6. The heavy-duty cardboard boxes stack easily and protect the jugs from rupture and light exposure.
- Low upfront cost when buying bigger sizes (10 gallons for just over $11.00)
- Lighter and easier to transport. Good for emergency grab-and-go needs.
- Easy to store in cabinets and other small spaces.
- Durability of bottles for long-term storage is questionable (would need to protect the floor from leaks if storing under a bed)
- Chemicals may leak from plastic containers during long-term storage (seek out BPA-free bottles if possible). It's not recommended to reuse bottles for drinking water.
- Not easy to stack one-gallon jugs and larger sizes
- Increased waste/recycling considerations
Glass Canning Jars
Though most resources do not recommend using glass jars for water storage, since jars are more breakable than their plastic counterparts, I find that having a few gallons in ½-gallon canning jars is a practical and convenient way to have easily accessible water for short-term water outages.
We keep three gallons of filtered water available in the pantry (stored in the original half-gallon case with extra cardboard padding between the bottles) and rotate them in and out by using them in the car as part of our everyday carry car kit (one gallon in the car at all times). The water is always fresh and there is no need to worry about chemical leaching from the container in hot conditions.
- Easy to clean and sterilize
- Completely re-useable and can be used for other foods when not being used for water
- Won't leach any chemicals
- Easy to see how much is left in each container
- Easy to pour
- Jars are more breakable and can chip
- Expensive for the volume being stored
- 1/2 gallon jars can't be frozen for freezer backup like plastic bottles can
Heavy-Duty, Thick, Polyethylene Containers
Most folks who begin to learn about storing water will come across containers specifically designed for water storage and transport on the medium scale. These containers are made of polyethylene food-grade plastic and designed to withstand stacking and storage for long periods of time. They usually come in volumes of 3.5 to 7 gallons. They are opaque and can be used and re-used multiple times. Most are BPA-free and have less risk of leaching chemicals into the water during storage.
I have used many versions of these containers, and my current recommendation is the WaterBrick System. They come in a 3.5 gallon size that makes them easier to carry and pour from. As another advantage, their wide mouth allows for them to be used to easily store dry goods instead of water if desired, making them a dual-purpose container. The shape of the waterbricks also makes them easier to fit in tight places (under the bed, for instance).
- Most containers/systems are stackable
- Durable thicker plastic (can be bumped and dropped without bursting)
- Most come with spout/dispensing options
- Able to transport in grab-and-go situations
- Container cost can add up quickly when trying to achieve larger volumes
- Very heavy when full – (30 to 40+ lbs)
Bathtub Water Bladder
The WaterBob is a relatively new and innovative way to safely store a large amount of water when a natural disaster is anticipated or an emergency situation is in progress. You place the bladder in the tub and fill it with clean water while the supply is available and reliable. Then dispense the water with the included siphon pump. The bladder system can hold up to 100 gallons of drinkable water for short-term durations (not meant for long-term storage). The only issue with this method is that you need to be prepared in advance and aware of when a possible emergency is going to happen so you can fill the WaterBob before the water supply is compromised.
- Inexpensive and easy to setup
- Easy to store in the bathroom where you will need it.
- Large volume for multi-day supply of fresh water.
- Disposable - can only be used once
- Water should only be stored for 4 weeks
- May be difficult to extract the water if the siphon pump fails
Outside Storage: Rain Barrels/Bulk Tanks/Pools & Ponds
This method requires the most planning and infrastructure investment, but also provides the largest volume available and a longer duration of use when water is in short supply. If you are using rain barrels and bulk tanks for rainwater harvesting, keep in mind that the water will need to be filtered and purified before use as potable water. Review the following articles by Peak Prosperity members about how they manage and set up their bulk water systems: Rainwater Harvesting and Water Storage: An Example of Resiliency Building.
- Holds large volumes of water (55 – 5000+ gallons of water)
- Allows for rainwater harvesting and other free natural sources
- Easier access for both indoor and outdoor uses (supply for a garden, animals, fire protection, etc.)
- Will require purification and filtration to ensure safety for drinking
- Usually requires additional infrastructure (solid foundations, pumping, piping, weather protection)
- Not easily moved
- Publicly visible water storage alerts folks of a water source during long-duration outages
I hope these ways of storing water have shown you that it is easy to take the steps necessary to build water resilience into your life and prepare for the unexpected. Start small and build up your supply. And please post your own ideas and experiences in our comments section below, and share other ways you are storing water "just in case." Investing a relatively small amount of effort toward water storage can mean great peace of mind and improved success during an emergency.
The stark reality of World War 1's impact on Dunedin families is made clear in a city street map marked with hundreds of crosses, at the Toitu Otago Settlers Museum.

About 1900 combatants from the present Greater Dunedin area were killed during that war and several thousand others were wounded.

The map is a striking part of a new exhibition, ''Dunedin's Great War'', which opens at the museum today.

Monday is the 100th anniversary of the outbreak of World War 1, on August 4, 1914.

The exhibition provides a survey of Dunedin and its people during the war, and runs until May 3.

Museum exhibitions developer William McKee yesterday discussed a large street map of central Dunedin, based on 1917 street names, which he had developed, showing the home addresses of many of the city's Great War casualties.

''It really shows that the whole city suffered,'' he said.

Colour-coded crosses show which years casualties - deaths and significant injuries - occurred, with many in 1914, the first year, and many also in 1918, the final year.

Dunedin historian and museum curator Sean Brosnahan said depicting the experiences of Dunedin people during the war was a ''solemn duty'', which museum staff took very seriously.

The map developed by Mr McKee showed not only that people of all walks of life and in every part of the city had been affected by the war, but also other signs of social change between 1917 and now.

There had recently been a trend towards more people living in central city apartments, but many people, including single men, also lived in the city centre near the Octagon before World War 1, in hotels and boarding houses, he said.
Many of the explanations of UTF-8 discuss encoding of code points on Code
Planes 1-16 using the intermediate concept of surrogates as in UTF-16. I
believe that this is both unnecessary and misleading, as UTF-8 is
fundamentally a direct 21-bit encoding scheme, as may be seen in the
attached document. So, I believe that the concept of surrogates is not
relevant for UTF-8 encoding on Code Planes above the BMP.
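The point that no surrogate intermediate is needed can be shown directly: a UTF-8 encoder packs the 21-bit scalar value straight into 1-4 bytes. A minimal sketch (illustrative only; it does not validate its input by rejecting surrogate code points or values above U+10FFFF):

```python
def utf8_encode(cp: int) -> bytes:
    """Encode a Unicode scalar value (up to 21 bits) directly as UTF-8."""
    if cp < 0x80:                       # 1 byte: 0xxxxxxx
        return bytes([cp])
    elif cp < 0x800:                    # 2 bytes: 110xxxxx 10xxxxxx
        return bytes([0xC0 | (cp >> 6),
                      0x80 | (cp & 0x3F)])
    elif cp < 0x10000:                  # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
        return bytes([0xE0 | (cp >> 12),
                      0x80 | ((cp >> 6) & 0x3F),
                      0x80 | (cp & 0x3F)])
    else:                               # 4 bytes, for planes 1-16 (up to U+10FFFF)
        return bytes([0xF0 | (cp >> 18),
                      0x80 | ((cp >> 12) & 0x3F),
                      0x80 | ((cp >> 6) & 0x3F),
                      0x80 | (cp & 0x3F)])

# U+1D11E MUSICAL SYMBOL G CLEF (Plane 1) -- encoded with no surrogate step:
assert utf8_encode(0x1D11E) == b"\xf0\x9d\x84\x9e"
```

Note that the supplementary-plane code point goes straight from its scalar value to four bytes; surrogates only arise in UTF-16, where the same code point must first be split into a high/low pair.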
This is a slightly different explanation of how UTF-8 works, written by me
for the Ultracode(r) bar code spec (Ultracode encodes all of Unicode 3
directly). If any Unicodotti find any errors in it... please let me know!
Clive P Hohberger, PhD
VP, Technology Development
& Director of Patent Affairs
Zebra Technologies Corporation
333 Corporate Woods Parkway
Vernon Hills IL 60061-3109 USA
Voice: +1 847 793 2740
FAX: +1 847 793 5573
Cellular: +1 847 910 8794
From: Theodore H. Smith [mailto:email@example.com]
Sent: Wednesday, May 29, 2002 7:12 AM
Subject: How is UTF8, UTF16 and UTF32 encoded?
I need to know exactly how UTF8, UTF16 and UTF32 is encoded. I heard
that UTF32 can have surrogates, so I can't just expect them
to be scalar values.
Having a nice detailed and clear explanation would help, with
plenty of examples and effects of the encoding and all kinds of
things to make it easier to understand would help.
Or perhaps I'm just reacting to the confusion of the UniCode
website and its not that hard to understand and a simple definition
would do? But the first idea certainly wouldn't hurt.
-- Theodore H. Smith - Macintosh Consultant / Contractor. My website: <www.elfdata.com/>
This archive was generated by hypermail 2.1.2 : Wed May 29 2002 - 15:48:50 EDT
GIS Standards and Interoperability
GIS technology is evolving beyond the traditional GIS community and becoming an integral part of the information infrastructure in many organizations. The unique integration capabilities of a GIS allow disparate data sets to be brought together ("integrated") to create a complete picture of a situation. GIS technology illustrates relationships, connections, and patterns that are not necessarily obvious in any one data set, enabling organizations to make better decisions based on all relevant factors. Organizations are able to share, coordinate, and communicate key concepts among departments within an organization or among separate organizations using GIS as the central spatial data infrastructure. GIS technology is also being used to share crucial information across organizational boundaries via the Internet and with the emergence of Web services.
To fully realize the capability and benefits of geographic information and GIS technology, spatial data needs to be shared and systems need to be interoperable. GIS technology provides the framework for a shared spatial data infrastructure and a distributed architecture. Esri has developed its products based on open standards to ensure a high level of interoperability across platforms, databases, development languages, and applications. Esri is also committed to supporting and actively leading efforts to integrate interoperability and standards in its commercial software products.
The Value of Being Open
An open GIS system allows for the sharing of geographic data, cooperation of different GIS technologies, and integration with other non-GIS applications. It is capable of operating on different platforms and databases and can scale to support a wide range of implementation scenarios from the individual consultant or mobile worker using GIS on a workstation or handheld device to enterprise implementations that support hundreds of users working across multiple regions and departments. An open GIS also exposes objects that allow for the customization and extension of functional capabilities using industry-standard development tools.
A state chief information officer, for example, would expect an enterprise GIS solution to provide a spatial data warehouse supporting shared spatial data and services across multiple agencies such as transportation, environmental protection, natural resources, state police, and information technology (IT). Each agency might also have a local database to update and maintain the framework data for which the agency is responsible and provide an e-government portal for public access. Today's "always on" availability requirements and the growing security considerations also dictate that any GIS solution operates in clustered, high-availability environments and be easily replicated to remote backup server locations.
Esri has a large team of people involved in each of the phases of developing open standards including creating standards, reviewing standards, and integrating standards into our products. Esri also works with a number of standards organizations and directly participates in the creation, review, and introduction of industry standards. Esri's efforts are focused on two major areas:
GIS Data and Technology Interoperability
Many organizations need a GIS capable of integrating services and data from multiple sources and in different formats. Esri's technology and products support this level of interoperability, and its active role in the development of open standards has helped ensure that Esri data can be easily accessed by other technologies and applications. Esri products support numerous data converters and direct read access of more than 40 formats including Spatial Data Transfer Standard (SDTS), Vector Product Format (VPF), imagery, CAD files, digital line graph (DLG), and TIGER. Of equal importance, Esri systems enable organizations to share GIS services and communicate across different vendor implementations. An open, distributed, and networked GIS architecture provides the framework for sharing data and services.
Interoperability of GIS Technology With Other Technologies and Systems
Esri has also given great attention to the relationship between GIS and the rest of the IT infrastructure. For our users, this means compatibility and interoperability with major enterprise systems such as enterprise resource planning (ERP), customer relationship management (CRM), database management systems (DBMS), work management systems, decision support systems, and others.
GIS software is increasingly used in large multiuser environments in which spatial data is accessed using a variety of platforms and devices from relational database management systems residing on a wide assortment of servers and operating systems. To be open, therefore, a GIS must support platform-independent solutions implemented in heterogeneous environments composed of different server hardware; operating systems; networks; databases; development tools; and desktop, Web, and mobile clients.
Standards Are a Process Resulting in Interoperability
It is important to recognize that standards must support working GIS systems and be practical to implement. They must support users' requirements for interoperability. Esri applies a phased engineering approach to its technical work on standards.
In order to be successful, GIS interoperability is also heavily influenced by, and must fit within, the broader computing industry standards efforts. Technology, such as operating systems, commodity hardware, DBMS, and the Internet, certainly influences interoperability work of the GIS industry. For example, consider the recent development of Web services standards and their potential influence on GIS.
The Web Services Framework
Web services are a new framework of technology and standards for computing. Web services will provide the means to connect a network of distributed computing nodes, which includes a range of devices such as servers, workstations, desktop clients, and lightweight "pervasive" clients (e.g., phones, PDAs), in a loosely coupled fashion. Web services standards are the first attempt at building a foundation through which computers and devices interact to form a greater computing whole, accessed from any other device on the network. It is also important to recognize that Web services are not just for the Internet; they are the next evolution in distributed computing.
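As a concrete illustration of the kind of open, standards-based request such a service consumes, the sketch below assembles an OGC WMS GetMap request as a plain query string. The endpoint and layer names are hypothetical, and this is a generic OGC example rather than anything specific to Esri's products:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, size,
                   srs="EPSG:4326", fmt="image/png"):
    # Build an OGC WMS 1.1.1 GetMap request. Because the interface is an
    # open standard, any compliant client or server can produce/consume it,
    # regardless of which vendor's GIS sits behind the endpoint.
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical state-government map server and layer names:
url = wms_getmap_url(
    "https://gis.example.gov/wms",
    ["roads", "hydrography"],
    bbox=(-71.2, 42.2, -70.9, 42.4),
    size=(800, 600),
)
```

The interoperability point is that nothing in the request reveals, or depends on, the server's internal database schema or file formats; the standard interface is the only contract between the systems.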
A Web services architecture supports the integration of information and functionality maintained in a distributed network via a registry. This architecture is appealing to organizations, such as local governments, that have entities or departments that independently collect and manage spatial data (e.g., roads, pipes, surveys, land records, administrative boundaries). At the same time, many of the functions of a local government require these data sets to be integrated. The use of Web services (a connecting technology) coupled with GIS (an integrating technology) can efficiently support this need. The result is that the various layers of information can be dynamically queried and integrated, while at the same time the custodians of the data can maintain this information in a distributed computing environment.
Looking Forward With Web Services
The GIS community has been pursuing open interoperability for many years, and the solutions to achieving this goal have changed with the development of new technologies. As GIS technology continues to evolve, the question that many organizations are asking today is, "What is the best long-range solution for application/system interoperability?" Esri believes the answer is Web services, an area in which Esri is focusing much of its research and development.
Web services allow GIS users to publish spatial data and functionality to integrate GIS with systems external to the GIS. Web services help avoid many issues and complications that interoperability at the database and application levels can cause. GIS users can manage their data using the best methods and tools of their commercial GIS in whatever database environment they choose, yet publish selected capabilities using an open Web services framework that enables disparate applications to communicate. Web services is a collaborative computing model that enables existing computing nodes and software to work in peer-to-peer relationships. Web services allow server-to-server as well as client-to-server interoperability of data and functionality.
GIS vendors, such as Esri, use relational DBMSs with specific schemas and methods, as well as specialized file formats, to optimize the performance capabilities of their tools. Web services allow each GIS vendor to build and distribute its own GIS products using the best available technology and methods, while at the same time enabling the technology to interoperate with a wide range of external systems, without compromising the design and implementation of the core technology. The result for the GIS user is a distributed GIS computing framework that maximizes performance and functionality internally and interoperability externally.
Web Services and Distributed GIS
This loosely coupled Web services architecture provides a new and promising solution for implementation of complex collaborative applications needed in a distributed GIS. In some ways, the integration of GIS and Web services simply means that GIS can be more extensively implemented, and people will be able to take mapping, data, and geoprocessing services from many servers and integrate them to solve new problems using a common Web service-based environment. Unique to GIS-based Web services is the ability to not only connect and interoperate but also to integrate data using the unique properties that are inherent within GIS itself (i.e., data integration and fusion based on geographic location).
Web services will help to enable many of the shared visions for GIS that have been formulated throughout the last decade to be realized.
Esri has made major investments in the development and implementation of open GIS standards, not only to serve our own customers but also to promote sharing geographic data across all GIS platforms. We believe our continuing investments in Web services will result in the most open and interoperable GIS solution ever deployed. Esri constantly looks to its customers for feedback regarding the value of its initiatives and is especially interested in how our customers are leveraging our investments in interoperability to meet their GIS needs and solve real-world problems.
For further information on standards and to download a white paper on this topic, visit www.esri.com/standards.
In 1673, Harvard was facing a succession crisis.
Harvard had been established just 37 years earlier, and, in the winter of 1673, it faced one of the most serious threats in its early history. Increase Mather had just returned from England after a stint there advocating on behalf of Puritan freedoms when the Harvard Corporation, the University's highest governing body, appointed him one of its members. After the resignation of Leonard Hoar, the Corporation asked Mather to take on the position of president. Mather politely declined.
Born in 1639 to Richard and Katherine Mather, Increase Mather followed in the footsteps of his family’s religious heritage. The Mather clan was part of a group of prominent and influential families in the first colonies of New England. Richard Mather, an Oxford scholar and minister, assisted in the Puritan journey to the new world and subsequent establishment of the Massachusetts Bay Colony.
Although Increase Mather would go on to play a crucial role in the development of Harvard College, in 1673, Mather was less than willing to lead the University. Mather was the second oldest of five brothers. In 1651, at the age of 12, he began at Harvard, intending to carry on his father’s spiritual work. After graduating in 1656, Mather began his work in the church, preaching his first sermon at 18.
Mather left Massachusetts in 1657 to study in Ireland, where he attained a master's degree from Trinity College and continued his work as a chaplain at Guernsey until 1661. He returned to America the following year to pursue a more significant religious post and began preaching to the Second Church of Boston upon his arrival. Back in Massachusetts, he married Maria Cotton and was ordained in 1664. Increase strongly opposed the popular practice of the Half-Way Covenant, which sought to lower the requirements of church membership, and took part in a campaign to reestablish and reform New England from the evils of liberalism. In his stringent Puritanism, Mather was a representative of the College's early religious heritage, and his ideological conviction would feature heavily in his later life when he found himself embroiled in the Salem witch trials.
When Mather finally agreed to take on the presidency, his ambivalence toward the job was clear. Harvard’s top brass had made a point of wanting a president who lived on campus, but Mather refused to move to Cambridge. As president, he spent a total of three months in close proximity to the University during his eight years in office.
Not surprisingly, Mather’s energies often seemed focused on issues far removed from the confines of Cambridge. During Mather’s tenure as president, King James II of England attempted to reform New England’s colonial economy and flush out the Puritan influence, making the College a primary area for the debate and contest over American colonial identity. Mather’s time as president was widely criticized for the time he spent lobbying against the King’s revocation of the charter of Massachusetts (a document that established the legality of the College itself).
But when his attention turned toward Harvard, Mather worked to Puritanize much of his alma mater’s curriculum and rules. According to historian Samuel Morrison, Mather restored instruction in Greek and Hebrew and emphasized the use of Biblical and Christian writings in ethics courses. He also rewrote college laws requiring students to reside in dormitories and have regular attendance at meals and in lectures.
By 1691, the sun had begun to set on Mather's tenure as president. In that year, his last in office, he had racked up enormous debts while traveling. In the process, he had lost much of his authority and control in the church and at Harvard. The College's General Court presented him with an ultimatum: move to Cambridge or resign the presidency. He acquiesced, only to move back to Boston six months later. With that, his time as president ended.
At this time, the accusations against Salem women as so-called "witches" had just begun, and Mather left to restrain the court in Salem. Mather came under criticism for his delay in using his considerable moral authority against the trials, as he supported the jury that finally decided to hang 19 innocent people in 1692. During this time and after the hangings, however, Mather wrote many sermons relating his distrust of spectral evidence to convict witches. His own daughter was even accused of witchcraft at one point, which resulted in his immediate decision to advocate against the execution of "witches." In one of his most famous works, "Cases of Conscience Concerning Evil Spirits," he argued that "better ten witches go free than the blood of a single innocent be shed," a sentiment later echoed in what became known as Blackstone's formulation.
The word perpetuity means the property of being perpetual, or lasting forever. The perpetuity of an eternal flame means that it will burn forever, while an ordinary candle flame will eventually extinguish.
First appearing in the 15th century, the noun perpetuity derives from the Latin word perpetuus meaning "continuing throughout." It can mean the quality of being perpetual, continuing forever, or everlasting. If a person sent into exile from their native country is never allowed to return, they have been banished "in perpetuity."
Scholars from different fields have joined forces to reexamine every aspect of the Hebrew Bible. Their research, carried out in universities and seminaries in Europe and America, has revolutionized our understanding of almost every chapter and verse. But have they killed the Bible in the process?
In How to Read the Bible, Harvard professor James Kugel leads the reader chapter by chapter through the "quiet revolution" of recent biblical scholarship, showing time and again how radically the interpretations of today's researchers differ from what people have always thought. The story of Adam and Eve, it turns out, was not originally about the "Fall of Man," but about the move from a primitive, hunter-gatherer society to a settled, agricultural one. As for the stories of Cain and Abel, Abraham and Sarah, and Jacob and Esau, these narratives were not, at their origin, about individual people at all but, rather, explanations of some feature of Israelite society as it existed centuries after these figures were said to have lived. Dinah was never raped -- her story was created by an editor to solve a certain problem in Genesis. In the earliest version of the Exodus story, Moses probably did not divide the Red Sea in half; instead, the Egyptians perished in a storm at sea. Whatever the original Ten Commandments might have been, scholars are quite sure they were different from the ones we have today. What's more, the people long supposed to have written various books of the Bible were not, in the current consensus, their real authors: David did not write the Psalms, Solomon did not write Proverbs or Ecclesiastes; indeed, there is scarcely a book in the Bible that is not the product of different, anonymous authors and editors working in different periods.
Such findings pose a serious problem for adherents of traditional, Bible-based faiths. Hiding from the discoveries of modern scholars seems dishonest, but accepting them means undermining much of the Bible's reliability and authority as the word of God. What to do? In his search for a solution, Kugel leads the reader back to a group of ancient biblical interpreters who flourished at the end of the biblical period. Far from naïve, these interpreters consciously set out to depart from the original meaning of the Bible's various stories, laws, and prophecies -- and they, Kugel argues, hold the key to solving the dilemma of reading the Bible today.
How to Read the Bible is, quite simply, the best, most original book about the Bible in decades. It offers an unflinching, insider's look at the work of today's scholars, together with a sustained consideration of what the Bible was for most of its history -- before the rise of modern scholarship. Readable, clear, often funny but deeply serious in its purpose, this is a book for Christians and Jews, believers and secularists alike. It offers nothing less than a whole new way of thinking about sacred Scripture.
James L. Kugel is Starr Professor of Hebrew Literature at Harvard University, and a regular visiting Professor of Biblical Studies at Bar-Ilan University in Israel. He is the author of a number of books of biblical scholarship, including How to Read the Bible (2007), for which he won the National Jewish Book Award for best book, The Great Poems of the Bible (1999), and The Bible As It Was (1997). In 2001, Kugel was awarded the prestigious Grawemeyer Prize in Religion. He lives in Jerusalem, Israel, and in Cambridge, Massachusetts.
Biomass Definition Debated
Two biomass streams that attract considerable attention and argument are the biomass harvested from federal forests and municipal solid waste (MSW). The intensity of the debate makes sense as these represent some of the largest sources of potential biomass. There are more than 190 million acres of federal forest, and the U.S. generated more than 250 million tons of MSW in 2006.
Federal forests have long been protected with good reason, but a recent proliferation of pine beetles has changed the landscape of that debate and millions of acres of national forests have been turned into giant tinder boxes. According to an article in the Colorado Independent, Colorado alone has more than 2 million acres of forest that have been decimated by the pine beetle and towns such as Vail and Frisco are looking to their now grey hills as a potential source of renewable power. Clearly, the British thermal units (Btus) from these infested trees will be released at some point. The question is, will those Btus be captured or will they be released during potentially devastating forest fires? In a nod to this eventuality, the committee amended the bill to allow biomass resources from national forests "that are removed to reduce hazardous fuels, to reduce or contain disease or insect infestation, or to restore ecosystem health" to qualify.
It is less clear where the committee stands on MSW. While MSW is excluded from official biomass classification with the phrase "but not municipal solid waste," there is language earlier in the bill that suggests that advanced technologies to produce energy from MSW would be allowed under a "Qualified Waste to Energy" provision. The language in this section aims to steer the industry away from straight incineration and favors gasification or pyrolysis.
Either of these streams could find themselves outside of the definition of biomass once again, but the biomass portions of HR 2454 appear to be the product of thoroughly argued policymaking.
Tim Portz is a business developer with BBI International's Community Initiative to Improve Energy Sustainability. Reach him at firstname.lastname@example.org or (651) 398-9154.
March 5 is National Multiple Personality Day!
Today is a great day to bring to light the often-stigmatized issue of mental illness. Multiple Personality Disorder, now known as Dissociative Identity Disorder (DID), is a mental disorder in which a person exhibits two or more distinct and recurring identities.
National Multiple Personality Day is a day to spread awareness, but also to celebrate and embrace the slight multiple personalities we all have. So, talk to yourself without fear of being judged today and may your personalities live in harmony.
Jan 2013 Issue
BrailleWise® aircraft toilet
Making air travel easier for visually impaired people
Braille is a tactile writing system used by the blind and those with serious vision impairment. It was invented in 1824 by Louis Braille, who went blind as a toddler after an accident. Even though he could not see, he was desperate to read. Drawing inspiration from a military night-writing code, the 15-year-old schoolboy developed an alphabet of raised dots that allowed him to read and complete his education.
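Braille's system uses a cell of six dot positions, each either raised or flat, giving 64 possible patterns. As a sketch not drawn from the article, the conventional dot numbering maps cleanly onto the Unicode Braille Patterns block, where dot i sets bit 2^(i-1) above U+2800; the small letter table below is illustrative and covers only the first few letters.

```python
# Standard Braille dot patterns for the first few letters (dots are
# numbered 1-3 down the left column, 4-6 down the right).
LETTER_DOTS = {
    "a": [1],
    "b": [1, 2],
    "c": [1, 4],
    "d": [1, 4, 5],
    "e": [1, 5],
}

def braille_char(dots):
    """Render a list of raised dots (1-6) as a Unicode Braille cell."""
    bits = 0
    for d in dots:
        bits |= 1 << (d - 1)  # dot i maps to bit 2**(i-1)
    return chr(0x2800 + bits)

def to_braille(word):
    return "".join(braille_char(LETTER_DOTS[ch]) for ch in word)

print(to_braille("bead"))  # → ⠃⠑⠁⠙
```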
Like Braille himself, people with visual disabilities work hard to adjust to life without sight. In total darkness and unknown places they can hardly orient themselves. Every day they face much discomfort when getting around, using public transport and toilets. Therefore, the School of Design at The Hong Kong Polytechnic University has recently designed a new aircraft lavatory especially for them, providing an organized system for reading Braille and other tactile information. This unconventional design, called BrailleWise®, gives clear guidance for quickly finding and using lavatories on planes. With BrailleWise®, visually impaired people can now enjoy greater independence and comfort when using toilets.
Braille toilet signs are not a common sight on planes, and even if they exist, they can only be found next to an amenity. But BrailleWise® goes about it differently. Beams are put up around a lavatory compartment showing simple directions. A beam with signs in Braille letters for all functions shows visually impaired users where they can find amenities such as toilet rolls. The tactile signs on the beam show the names of the amenities along with upward or downward arrows pointing to their actual locations.
Once in a cabin lavatory, a visually impaired person can instantly feel the presence of the Braille beams at waist level. Running his/her fingers down the beam, a user can quickly locate a wanted function such as the toilet bowl, the flush handle and the wash basin. With good bearings, one can move around freely and independently with greater confidence without relying on a guide. He/she does not need to feel around and risk touching the toilet seat anymore, which is often covered in filthy stains.
Travelling and sightseeing are great ways to connect with people. The leader of the Public Design Lab in the School of Design, Prof. Michael Siu, wanted to make public toilets accessible and comfortable so that visually impaired people would face fewer struggles on the go. "Using the toilet in public places is not that straight-forward for the visually impaired. Finding their way around in unfamiliar territory is a big challenge for them. That's why they would usually avoid using public toilets by not eating and drinking. But it is not healthy," said Prof. Siu, who has been working with his fellow researchers and the Hong Kong Blind Union since 2000 on products that cater to the special needs of the visually impaired. "Their disability shouldn't take away their social life and exclude them from society," said Prof. Siu.
The modern, chic-looking design blends seamlessly into the décor of the cabin lavatory. It means a lot to the visually impaired who work very hard as self-supporting and contributing members of the society and want minimal obstructions to the people around.
BrailleWise® is a simple and economical solution that can transform any public toilet into a barrier-free space in no time. Many design awards have already gone to this invention, including the Diamond Award in the 2012 Successful Design Awards in China and GOOD DESIGN Award in the United States. BrailleWise® also received the Runner-up Prize at the Crystal Cabin Award 2012 held in Germany, the only international prize for aircraft innovations, alongside world-renowned aviation companies as winners including Almadesign, B/E Aerospace & Teague, C&D Zodiac, Lufthansa Systems and TTF Aerospace. This prestigious award is a seal of approval that BrailleWise® maximises autonomy and well-being for disabled air passengers through excellence in design.
All areas of Washington State are vulnerable to severe weather. A severe storm is an atmospheric disturbance that results in one or more of the following phenomena: strong winds and large hail, thunderstorms, tornados, rain, snow, or other mixed precipitation. Typically, major impacts from a severe storm are to transportation and loss of utilities. Most storms move into Washington from the Pacific Ocean.
Funk & Wagnalls New World Encyclopedia
Don't know anything about a topic? Start here and find some general articles to get you started. This is suitable for upper elementary and older, as it does not include pictures.
Science Reference Center
Topics covered include biology, chemistry, earth & space science, environmental science, health & medicine, history of science, life science, physics, science & society, science as inquiry, scientists, technology and wildlife. Content is correlated to state and national curriculum standards.
Compiled by John McDaris and Kendra Murray at SERC.
This collection presents links to visualizations of volcanoes and volcanic processes. Visualizations include general depictions of igneous processes as well as work done on real volcanos on Earth and around the solar system.
Browse the complete set of Visualization Collections. If you have comments or additional resources to add to the page, use our Feedback box to let us know.
Earthscope - Augustine Visualization (This site may be offline.)
Augustine is a stratovolcano located in Cook Inlet southwest of Anchorage, AK. This page from the Earthscope project presents a map of the island, a photo gallery, and two 360-degree panoramas of the island (one interactive and the other a movie).
Savage Earth Animation: Volcanic Eruption (more info)
PBS presents an annotated flash animation of a stratovolcano. The text simply describes the features of the volcano, its characteristic eruption, and details about pyroclastic flows.
How Volcanoes Work: Dynamics of a Plinian Eruption (more info)
A QuickTime animation of a Plinian eruption in cross-section view. A variety of pressure surfaces exist within the magma column beneath the erupting volcano and within the eruption column above it. The site describes the nature of these pressure surfaces and explains the dynamic processes associated with the eruption model. It also includes a 3D fly-by animation of the eruption model.
Volcano Animations: Mt. Kilauea and Mount Etna (more info)
A series of animations that depict geohazards, including earthquakes and volcanoes. The animations stress how Earth is constantly undergoing change. Scroll to the bottom of the page and click on "volcano animation" links.
Video Clips on Volcano Live
This site provides eruption video footage of various volcanoes. From multiple sources, these clips are small and often grainy, but contain stunning footage of active eruptions.
Video Clips of the 12 May 1996 Pyroclastic Activity: Pyroclastic Flow Meets the Atlantic Ocean! Montserrat Volcano Observatory (more info)
For centuries, volcanologists have debated about the fate of pyroclastic flows that come in contact with bodies of water. On 12 May 1996, on the tiny island of Montserrat, a series of three pyroclastic flows cascaded off the unstable dome and raced towards the Atlantic Ocean and were captured on film.
Igneous Rock Crystallization Animation (more info)
This Flash animation contains three separate movies, each exhibiting the formation of igneous rocks in a different environment: a) rocks forming from a deep magma chamber, where the slow cooling of magma results in large interlocking crystals; b) rocks forming from a pyroclastic flow, with a combination of large and small crystals; and c) rocks with small crystals, created from fast-cooling lava.

Sulfur Volcano on Io (more info)
A 3D QuickTime simulation of Io's Pillan Patera sulfur volcano.
GRIGSBY, JAMES BOSTON
GRIGSBY, JAMES BOSTON (1878–1953). James Boston Grigsby, African-American insurance executive and civil rights activist in Houston, was born in Macon, Noxubee County, Mississippi, to Henry and Mariah (Dismuke) Grigsby on February 18, 1878. James had at least three brothers and two sisters; he was called Jimmy as a child. Very early in the twentieth century, Grigsby was living in Houston. By 1910 he was married to Bessie Arzelia Rose Grigsby, and they were living in Houston’s Third Ward.
On July 7, 1908, the American Mutual Benefit Association was incorporated. It was organized in Houston and under the directorship of James Grigsby. By the late 1920s he was president and treasurer of that company. In 1929 Grigsby and others in Houston founded the Gibraltar Life Insurance Company. He also was president of the American Mutual Insurance Company and, at one point, an executive at the Atlanta Life Insurance Company in Houston. His wife Bessie worked there also. For many years, the couple lived on Hadley Avenue in Houston.
In 1920 Grigsby was chosen as a delegate to the National Republican Convention. The same year, he was a candidate on the “Black and Tan” ticket for the position of Harris County tax collector. In July 1928 Grigsby and O. P. DeWalt filed a lawsuit in federal district court in an effort to prevent the exclusion of blacks from “lily-white” Texas Democratic primary elections (see WHITE PRIMARY). Both Grigsby and DeWalt told the judge that they had become Democrats a few years earlier. The national NAACP did not join the suit because of a possible conflict of interest. The suit was dismissed by Judge J. C. Hutchison, Jr. Both DeWalt and Grigsby were persuaded to stop pursuing the matter in the courts. Grigsby founded the Business Men’s Luncheon Club in Houston in 1929, but the club was short-lived.
In March 1939 Grigsby filed as a candidate for trustee of the Houston Independent School District’s (HISD) board. The following night, a huge cross was burned in his front yard, an act that was thought to have been taken by the Ku Klux Klan. In 1939–40 Grigsby and Benjamin J. Covington complained that their names had been used without permission on printed material being distributed by the National Colored Democratic Association (NCDA) that incorrectly associated them with Texas politician John Nance Garner’s campaign for the U.S. presidency. In 1944 Grigsby helped organize, in Dallas, the Texas League of Democratic Voters to help stimulate statewide African American voting. Grigsby was named secretary of the organization.
James Grigsby died in Houston on May 13, 1953, at the age of seventy-five. He had been a resident of Houston for half a century. He had no children. He was buried in Cemetery Beautiful in Houston.
The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article.Handbook of Texas Online, Robert J. Duncan, "Grigsby, James Boston," accessed June 28, 2016, http://www.tshaonline.org/handbook/online/articles/fgrig.
Uploaded on July 29, 2014. Published by the Texas State Historical Association.
Is Tomorrow the End?
Source Newsroom: Indiana University
Newswise — INDIANAPOLIS -- Is the world coming to an end Dec. 21, 2012? According to some, the Mayan calendar predicts such will be the case.
Indiana University-Purdue University Indianapolis anthropologist Larry Zimmerman, Ph.D., discusses end-of-the-world theories in a class at IUPUI titled “Lost Tribes, Sunken Continents, and Ancient Astronauts: Pseudoscience, Fractured Science, and the Past.”
“We cover end-of-the-world predictions, because they are so common in human history,” Zimmerman said.
Believers in the Mayan Doomsday claim really don’t understand the Mayan calendar system, the professor said.
“The Mayan calendar was based on 394-year cycles called baktuns. The 13th baktun since the date of the Mayan creation story 5,126 years ago ends Friday. Then we just start the 14th baktun. A friend, colleague, and Mayan expert, Rosemary Joyce, likens it to a car odometer rolling over, which is a terrific analogy . . . The Mayan Doomsday got picked up by New Agers, who were very active in predicting the end in the 60s-70s. The tourism industry in Mexico liked it, and the internet helped spread it quickly to almost everywhere. Even a few Maya liked the attention it brought, but the vast majority of the 6 million Maya (yes, they have not disappeared!) have just ignored it.”
End-of-the-world misinformation includes the use of the Aztec calendar stone as the Mayan calendar, Zimmerman said.
The Aztec and Maya are separated by both geography and time. The Maya live in the Yucatan, Guatemala, Belize, Honduras, and the more southern Central American countries. The Aztecs were in central Mexico and flourished from roughly the early 1400s until the Spanish conquest in 1521.
“Yes the Aztecs had a calendar, and it was similar to the much more sophisticated Mayan calendar. But what we know of the Mayan calendar doesn’t come from a calendar stone,” the professor said.
Larry Zimmerman, Ph.D., is professor of anthropology and museum studies and the Public Scholar of Native American Representation (a shared position with the Eiteljorg Museum of American Indians and Western Art) in the School of Liberal Arts at IUPUI. His research interests include North American archaeology, indigenous and community archaeology, Native American issues, cultural and intellectual property, and archaeology of the contemporary world. Zimmerman teaches museum ethics, Indigenous People and museums, issues in cultural heritage and fantastic archaeology at IUPUI.
During each workshop, students rotated through three activities that reflected engineering courses offered through the Project Lead the Way engineering program at the high school.
The activities included:
• fischertechnik, in which students built a model and then wrote a computer program to make the model move;
• soldering, in which students assembled electronic components to make a reaction tester;
• three-dimensional modeling, in which students used a 3-D modeling program to design a game and then created it with equipment in the technology department.
Baker has held the workshop every year since 2003. The high school’s technology department designed the workshop to encourage females to think about pursuing engineering courses when they enter high school, as well as to consider a career in engineering or another field in math or science. Currently, about 20 percent of the students enrolled in Project Lead the Way courses at the high school are females.
Robots are supposed to help us humans across different dimensions of life, such as serving as a useful bellhop whenever human resources run scarce at a hotel, or perhaps rendering your job obsolete if you flip burgers to earn a little extra income on the side. This particular robot, however, goes on the offensive if you decide to touch it. Nicknamed the 'Scare Bear,' it can deliver a jolt measured at a whopping 25,000 volts to anything that touches it.
Forget the Care Bears: this 'Scare Bear' robot is not going to send out any shiny rainbow beams from its torso. Instead, it warns those around it by blinking its eyes, rotating its body and head from side to side, and letting off a screaming siren to warn all in its path.
The 25,000 volts of electricity will be delivered via a swinging chain, and the creation of this robot has been attributed to a Turkish farmer who has gotten sick and tired of having real bears mess up his crops. 46-year-old farmer and inventor Mustafa Karasungur’s creation is not all doom and gloom, however, as it can also double up as a mobile power station when it comes to juicing up depleted batteries or to power a light.
|
<urn:uuid:2603d519-14e7-4cc3-8ab6-680750c6637a>
|
CC-MAIN-2016-26
|
http://www.ubergizmo.com/2014/08/scare-bear-robot-could-knock-you-out-with-a-25000-volt-punch/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00066-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.970354
| 292
| 2.734375
| 3
|
Five U.S. academic institutions, including Arizona State University, will share parts of a rare meteorite that exploded in a fireball over California last year.
The Smithsonian Institute's Field Museum said Wednesday that the meteor dates to the early formation of the solar system 4 to 5 billion years ago. They used a CT scan to determine the meteor's age and chemical composition.
According to researchers, it was probably about the size of a minivan when it entered the Earth's atmosphere in April 2012.
The Smithsonian cut the meteorite into five sections that will go to five institutions: The Field Museum in Chicago; the Smithsonian's National Museum of Natural History in Washington; the American Museum of Natural History in New York; Arizona State University; and the University of California-Davis.
Scientists plan to use the pieces for research.
|
<urn:uuid:d462a7a5-f66a-4bdf-a73e-2b189a6239f5>
|
CC-MAIN-2016-26
|
https://radio.azpm.org/s/15665-asu-to-get-piece-of-meteorite/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00131-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.895261
| 166
| 3.40625
| 3
|
Local and regional dynamics of Succisa pratensis
Land use change is considered to be one of the biggest threats to global species diversity. In Sweden, abandonment of grazing is one of the most common reasons for decline in species richness in semi-natural grasslands. Today semi-natural grasslands often occur as more or less isolated fragments. For species that benefit from grazing, the result is a smaller area of suitable habitat, higher extinction risks, and a lowered ability to colonize new areas. Succisa pratensis is a long-lived perennial plant that benefits from grazing and is common in Swedish semi-natural grasslands. I have assessed the performance of Succisa pratensis at various spatial and temporal scales in a Swedish rural landscape. I performed demographic matrix modelling of populations at grazed and ungrazed sites. A regional level was then added by incorporating data collected from a large number of populations and habitat types into the matrix models, and extinction risks over 50 years were calculated. A dynamic metapopulation model was created, and the regional dynamics, in terms of colonisations resulting from long-distance dispersal and population extinctions, were examined. The effects of management history were incorporated into the model by using historical maps. In addition, I made an analysis of the impact of management history on the distribution and performance of four grassland species, using vegetation maps from 1945 and 2001. The local dynamics of Succisa pratensis were negatively affected by abandonment of grazing. Recorded population sizes were ten times higher in grazed sites than in ungrazed ones. The turnover rate of the system was estimated at about one extinction or colonisation per year. Both the simulation study and the analyses of vegetation maps suggested a pronounced legacy of management history for Succisa pratensis in the study landscape. Overall, the results of this thesis demonstrate the importance of management history for species in the rural landscape.
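Demographic matrix modelling of the kind described can be sketched in a few lines. The stage structure and vital rates below are illustrative assumptions for a grazed population, not values from the thesis:

```python
# Hypothetical 3-stage projection matrix (seedling, juvenile, flowering adult)
# for a grazed population; all rates are made up for illustration.
A = [
    [0.00, 0.00, 2.50],   # row 1: seedlings produced per adult (fecundity)
    [0.30, 0.40, 0.00],   # row 2: seedling survival, juvenile stasis
    [0.00, 0.35, 0.90],   # row 3: juvenile-to-adult transition, adult survival
]

def project(matrix, stages):
    """One year of population projection: n(t+1) = A @ n(t)."""
    return [sum(matrix[i][j] * stages[j] for j in range(3)) for i in range(3)]

# Iterate until the stage distribution stabilises; the ratio of successive
# totals then approximates lambda, the asymptotic growth rate (the dominant
# eigenvalue of A). lambda > 1 means growth, lambda < 1 means decline.
n = [100.0, 20.0, 10.0]
for _ in range(50):
    prev_total = sum(n)
    n = project(A, n)
lam = sum(n) / prev_total
```

Under abandonment of grazing one would lower the fecundity and transition rates; a lambda below 1 then translates, in stochastic versions of such models, into elevated long-term extinction risks of the kind the thesis estimates.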
Source Type:Doctoral Dissertation
Keywords:NATURAL SCIENCES; Biology; plant ecology
Date of Publication:01/01/2005
|
<urn:uuid:2162fff2-3e0b-4aff-aeac-aa6c780bdbee>
|
CC-MAIN-2016-26
|
http://www.openthesis.org/documents/Local-regional-dynamics-Succisa-pratensis-420592.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00166-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.936265
| 416
| 2.953125
| 3
|
A February 28, 1800, act provided for the taking of the second census of the
United States, which included the states and territories northwest of the Ohio
River and Mississippi Territory. The guidelines for the 1800 enumeration
followed those of the first enumeration, except that the work was to be carried
on under the direction of the Secretary of State.
The enumeration was to begin, as in 1790, on the first Monday in August, and
conclude in 9 calendar months. The marshals and secretaries were required to
deposit the returns of their assistants, which were to be transmitted to the
Secretary of State (not the President, as in 1790), on or before September 1,
1801.
- District of Columbia
- Indiana Territory1
- Mississippi Territory1
- New Hampshire
- New Jersey1
- New York
- North Carolina
- NorthWest Territory1
- Rhode Island
- South Carolina
The following "district" censuses were lost:
Georgia, Indiana Territory, Kentucky, Mississippi Territory, New Jersey,
Northwest Territory and Tennessee. Unfortunately, there are no known
substitutes. It is suggested that genealogists consult other records for
information concerning ancestors found within these locations around 1800.
Most of the Suffolk County schedules were lost, including all of
Boston. The only existing census records for this county are those of Hingham and Hull.
Boston researchers may want to consult the 1798
US Direct Tax for Boston.
Found Within the 1800 Census
- Name of Head of Household
- Name of the county, parish, township, town, or city where the family
resided
- Number of free white males and free white females in specific age
categories
- Number of all other free persons (by sex and color) (not Native American)
- Name of a slave owner and number of slaves owned by that person
Strategy for the 1800 Census
The 1800 census takes up where the 1790 census left off, but provides more
specific information on both the location of the family and the
composition of the family.
- Establishing the Composition of a Family
While it does not provide names or exact ages, the 1800 census does
provide an idea of the composition of each family. In it you can find the
number of members of the family, their approximate age, and their sex. By
using other resources, such as vital records, wills, and land records, you
can establish further details on each person in the household and compile
further information such as their exact name and birth, marriage, and death dates.
- Tracking the Head of Household
The 1800 census provides the name of the head of household. This will
be useful for tracking the family in future censuses.
- Location of the Household
As in all censuses, the location of the household at the time the
census was taken becomes a valuable tool for further research, allowing you
to concentrate on records of that time period in that particular location.
The 1800 census will provide you the exact county, parish, township, town,
or city where the family resided.
It is possible to identify relatives by looking at the census entries for
your ancestor's nearest neighbors. However, in certain cases, the
census was rewritten so that the entries appear in alphabetical order.2
- Slave Research
Slaves were identified only by the number held in each household.
There were a total of 887,612 slaves enumerated in the 1800 census of the
United States.3 Researchers who have identified
a slave holder of a possible ancestor should then consult probate or tax
records, which may identify specific individuals.
- Native American Research
It is possible to find your Native American ancestor in the 1800
census only if they were residing in an area being taxed. If this is the
case, then your ancestor would be enumerated like any other tax-paying citizen.
1800 Census Forms
Source: A Guidebook of American Genealogy, Revised Edition, Edited by
Loretto Dennis Szucs and Sandra Hardgreaves Luebking, 1997. Ancestry, Inc.,
Salt Lake City, Utah.
- Inter-University Consortium for Political and Social
Research. Study 00003: Historical Demographic, Economic, and Social Data:
U.S., 1790-1970. Anne Arbor: ICPSR.
- Online Census Membership Programs
- A Comparison of
- Ancestry's 1800 Census Images (requires membership $$$)
- Genealogy.com's Census Images (requires membership $$$)
- Online Census Directories
|
<urn:uuid:203084d2-3c42-4fd6-919a-be7e9921d06b>
|
CC-MAIN-2016-26
|
http://www.gengateway.com/census/1800_census.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00054-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.928682
| 958
| 3.609375
| 4
|
Like many American military aircraft of the period the Consolidated Catalina actually gained its first combat experience in British hands. The Air Ministry purchased a single example of the PBY-4 (as the commercial Model 28-5) in 1939, and in July 1939 the aircraft flew across the Atlantic to the Marine Aircraft Experimental Establishment at Felixstowe, Suffolk to undergo tests. Even though these tests were cut short by the outbreak of the Second World War, the RAF still decided to place an order for the Catalina. The first of around 700 Catalinas to enter RAF service arrived early in 1941, and entered service with Nos.209 and 240 Squadrons of Coastal Command.
Sources differ on who was responsible for the use of the name Catalina, with both the RAF and Consolidated being given the credit. Catalina Island is off the coast of California, close to Los Angeles, and not too distant from Consolidated at San Diego. The RAF did prefer to give American aircraft names that reflected their country of origin, while Consolidated also named their aircraft. In either case the RAF knew the aircraft as the Catalina from 1939, while the US Navy did not adopt the name until 1 October 1941, when the vast majority of existing types of service aircraft were given names.
Coastal Command in Home Waters
The Catalina first saw active war service with RAF Coastal Command, before the United States entered the Second World War, but it was never present in British waters in large numbers. Seven squadrons operated the Catalina from Britain and two briefly from Iceland, giving the often quoted total of nine Coastal Command squadrons. However of these squadrons six used the Catalina with Coastal Command for less than a year, and only No.210 Squadron retained the Catalina in Britain from its introduction in 1941 until the end of the war. At no point were there more than three squadrons operating the Catalina from Britain.
Despite this limited level of use, the Catalina squadrons did produce some noteworthy achievements with Coastal Command. On 26 May 1941 it was a Catalina from No.209 Squadron that located the Bismarck, after the Navy lost radar contact with the German battleship, and Catalinas of No.240 Squadron shadowed her until surface ships regained contact. This was a rare example of the Catalina acting as the “eyes of the fleet” for the Royal Navy, a role that was normally performed by land based aircraft. Towards the end of the war, on 7 May 1945, a Catalina of No.210 squadron sank the 196th and last U-boat claimed by Coastal Command.
India and the Far East
The Catalina was far more important on overseas stations, where its ability to operate from any suitable stretch of water was much more important than in Britain. Nine squadrons would operate the Catalina from bases in India and Sri Lanka, flying anti-submarine patrols, convoy escort and air sea rescue missions over the Indian Ocean, as well as dropping agents on the coasts of occupied Burma and Malaya.
No.205 Squadron was equipped with the Catalina early in 1941, and at the time of the Japanese attack was operating from Singapore and Sri Lanka. The squadron was then sucked into the fighting in the Dutch East Indies, suffering heavy losses before eventually reaching Australia in March 1942 with only two aircraft remaining. The squadron was then moved back to India to reform, and spent the rest of the war involved in the same routine as the newer Catalina squadrons.
Six Catalina squadrons operated from bases around the coast of Africa, four in East Africa and two in West Africa. These squadrons performed the same mix of anti-submarine, convoy escort and air-sea rescue missions as the Indian based squadrons, but in an area where there was relatively little enemy activity. As a result the routine for the crews in these squadrons was one of long periods of dull routine patrols over vast expanses of empty ocean, interrupted with sudden bursts of activity.
The designation Catalina I was given to 100 PBY-5s purchased directly by the RAF. The Catalina I was given British equipment, including six Vickers machine guns – one in the nose, one in the rear tunnel and a twin gun on a manual mounting in each of the blister windows.
The Catalina IA was the RAF designation for fourteen Model 28-5 AMCs produced in November-December 1941 for the RCAF.
The Catalina IB was the designation given to 225 aircraft built by Consolidated as the PBY-5B. Of these aircraft 60 were retained by the US Navy, leaving 165 for the RAF. The change of designation from Catalina I or PBY-5 was probably due to the start of lend-lease – the Catalina Is purchased directly by the RAF had not needed a US Navy designation, but all lend-lease equipment had to have an official American designation.
The designation Catalina II was given to 7 PBY-5s purchased directly by the RAF. They carried slightly different equipment to the Catalina I, although of the three squadrons to operate the Catalina II, only one did not use it alongside the Mk.I.
The designation Catalina IIA was given to 36 PBV-1s built by Canadian Vickers, and identical to the PBY-5.
The designation Catalina IIIA was given to eleven PBY-5As, the only examples of the Amphibian version of the Catalina to enter RAF service. They came from a batch of PBY-5As delivered between December 1941 and March 1942, and spent most of their RAF career operating as part of a trans-Atlantic ferry service.
The designation Catalina IVA was given to 97 PBY-5s.
The designation Catalina IVB was given to 194 (193 in some sources) Boeing of Canada PB2B-1 Catalinas purchased by the RAF. The PB2B-1 was identical to the PBY-5. Boeing actually built 240 examples of this aircraft between July 1943 and October 1944, of which 34 went to New Zealand, 7 to Australia and 5 to the US Navy.
The designation Catalina V was reserved for the PBN-1 Nomad, but none of that aircraft entered RAF service, and so the designation was never used.
The Catalina VI was the RAF designation for the Boeing of Canada produced PB2B-2, which was effectively a PBY-6A but without the landing gear. Sources differ on the number of Catalina VIs produced, ranging between 59 and 67, with the biggest part of the difference being accounted for by disagreement on one block of serial numbers. All sources agree that 47 of these aircraft were used by the RAAF.
|
<urn:uuid:40bb48ce-83dc-4f1d-ad72-8a386d1f8a91>
|
CC-MAIN-2016-26
|
http://historyofwar.org/articles/weapons_PBY_catalina_RAF_service.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00061-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.972264
| 1,375
| 3.1875
| 3
|
The Atmos clocks don't need to be wound up. They get all the energy they need to run from small temperature changes in the surrounding environment, and can run for years without human intervention.
Its power source is a hermetically sealed capsule containing a mixture of gas and liquid ethyl chloride, which expands into an expansion chamber as the temperature rises, compressing a spiral spring; with a fall in temperature the gas condenses and the spring slackens. This motion constantly winds the mainspring. A variation in temperature of only one degree in the range between 15 and 30 degrees Celsius is sufficient for two days' operation.
Some cool side angles from Dje of Watchprosite
A variety of rare antique and vintage Atmos clocks
Some vintage Atmos advertising - via Atmosdam
The Atmos clock was invented by Neuchâtel engineer Jean-Léon Reutter (1899- 1971). From his youth, he wanted to produce a clock that could be wound by atmospheric fluctuations, and in 1928 he succeeded. Reutter’s patent was first licensed to a French company who exploited it until 1935. Subsequently, it was purchased by Jaeger-LeCoultre. via Antiquorum
|
<urn:uuid:08472c9b-b7e4-432b-b6e9-38313a6ec323>
|
CC-MAIN-2016-26
|
http://watchismo.blogspot.com/2008/07/big-changes-in-atmosphere-jaeger.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00078-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.911024
| 317
| 2.84375
| 3
|
Several liberal Washington columnists have recently commented that many members of Congress seem unaware of President Obama’s willingness to cut Social Security and Medicare. The President has certainly spoken publicly about his readiness to make these cuts, even mentioning during one of the campaign debates his intention to “tweak” Social Security, and he was much more specific during the 2011 deficit reduction negotiations.
With many in Congress now seriously discussing the need for deficit reduction, it is clear that some of them also have their eyes on Social Security. They see Social Security as part of the deficit problem.
This misunderstanding is exactly why the purpose of the Social Security Trust Fund must be explained to Congress and the American people. Social Security may now be running a small deficit (i.e. taxes collected are less than benefits being paid), but this was foreseen many years ago, and the Social Security Trust Fund was created for the purpose of protecting benefits during a deficit.
Congress passed a law in 1939 which required Social Security to loan any surplus funds (i.e. funds collected in taxes but not needed to pay benefits at that time) to the U.S. Treasury. The Treasury would spend the funds borrowed on other government programs, but would pay the money back when needed by Social Security for benefit payments.
Social Security ran a surplus every year from 1937 through 1956. However, from 1957 through 1962, there was a series of deficits and the Trust Fund’s assets were reduced from $22.5 billion to $18.3 billion as the Treasury paid back the money that was needed.
In 1975, Social Security once again began a period of deficits, and by 1983 the Trust Fund had been reduced from $37 billion to $19.7 billion.
Legislation passed in 1983 significantly affected the size of the Trust Fund, deliberately creating very large surpluses that were intended to fund deficits that were seen in the distant future. Never before had the annual surplus been much more than $4 billion. The 1984 surplus was more than $7 billion, and quickly the surpluses reached tens of billions, then more than $100 billion each year. By the second term of George W. Bush, it was approaching $200 billion.
The Washington politicians loved spending the surplus. They became addicted to having the money for their favorite subsidy and pork-barrel programs.
But then, as had been foreseen in the 1980’s, the surpluses turned into deficits. Not only was there no longer a surplus to spend, but it was time to start paying the money back to Social Security.
In 2010, the Treasury had to repay about $8 billion, increasing to $11 billion in 2011. But in 2012 it increased to more than $50 billion, with projections that it would be between $60 and $70 billion for each year, 2013 through 2017.
The Washington politicians panicked. They could only continue funding their favorite programs by running the biggest peacetime deficits in U.S. history, and the American people were demanding that deficits be reduced.
How could the politicians keep funding their pet programs? The biggest pot of money available was the Social Security Trust fund, about $2.6 trillion. If they could cut Social Security, they would not need to pay back that $60 to $70 billion per year. Instead, they could keep that money for other purposes.
Congress and the President face a choice. They can cut the programs they value the most – such as Obama’s “green energy” boondoggles – or they can cut Social Security instead, refusing to pay back the money from the Trust Fund.
We must demand that Social Security be fully funded, and that other programs be cut.
|
<urn:uuid:bfd2adbf-1828-4560-ae58-a010ff64bbbb>
|
CC-MAIN-2016-26
|
http://www.conservativeusa.org/issues/social-security-trust-fund-threatened
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00014-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.983172
| 760
| 2.53125
| 3
|
Quantum physics deals with the characteristics of matter and energy at subatomic levels. The properties exhibited by particles in the realm of quantum physics and by those in the realm of classical physics are the same. The differences between the two arise because, in classical physics, it is possible to make approximations and explain characteristics without introducing a substantial degree of inaccuracy.
For example, the uncertainty principle in quantum mechanics states that it is physically impossible to know both the position and the momentum of a particle at the same time. The position of a particle is measured by bouncing electromagnetic radiation off it and measuring the time taken by the radiation to return. If one desires to measure the position of a particle with a higher degree of accuracy, it is essential to use radiation with a lower wavelength. As the wavelength of electromagnetic radiation is decreased, there is an increase in frequency and consequently an increase in energy. When the radiation bounces off the particle, a part of the energy is transmitted to the particle, which alters the momentum of the particle. The attempt to accurately measure the position of a sub-atomic particle therefore itself changes the particle's momentum randomly and by a very large extent.
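The trade-off just described is conventionally summarised by the Heisenberg relation together with the photon momentum formula (standard textbook forms, added here for illustration):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}, \qquad p_{\text{photon}} = \frac{h}{\lambda}
```

Halving the probe wavelength λ doubles the momentum the photon can transfer, so a sharper position measurement necessarily disturbs the particle's momentum more.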
In classical physics, effects like the one just described can be ignored, as the momentum of a large particle is affected to a very insignificant extent irrespective of the frequency of radiation being used to measure its position.
Classical physics is essentially the same as quantum physics. The differences in characteristics between the two are due to the fact that, because of the extremely small mass and size of the particles, it is not possible to make the approximations that are allowed in classical physics.
|
<urn:uuid:ff3e3a3d-7ae7-4a83-b43d-c5297b4969c7>
|
CC-MAIN-2016-26
|
http://www.enotes.com/homework-help/explain-characteristics-quantum-world-quantum-429257
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00114-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.952868
| 362
| 3.40625
| 3
|
Global earth observations may be instrumental in achieving sustainable development, but to date there have been no integrated assessments of their economic, social and environmental benefits.
The objective of the EC sponsored project “Global Earth Observation – Benefit Estimation: Now, Next and Emerging” (GEOBENE) is to develop methodologies and analytical tools to assess societal benefits of GEO in the domains of: Disasters, Health, Energy, Climate, Water, Weather, Ecosystems, Agriculture and Biodiversity.
The assessment will be carried out using quantitative and qualitative methods and data. The 36-month project, led by IIASA’s Forestry Program, aims to draw policy conclusions from the modeling exercise to support the implementation of international agreements.
This project is funded by the European Commission
|
<urn:uuid:aafaa790-9d18-4dd1-a990-aab69ca79436>
|
CC-MAIN-2016-26
|
http://www.geo-bene.eu/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00045-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.889749
| 162
| 2.640625
| 3
|
In this cross-sectional, retrospective study, the bone mineral content (BMC) and density (BMD) of the whole skeleton, upper limbs, lower limbs, femoral neck, and lumbar vertebrae were measured using dual photon absorptiometry and the results compared in healthy young males involved in: weight-lifting, running, cross-training, or recreational exercises. When adjusted for body weight, the upper limb BMD was highest in those engaged solely in weight-lifting, (mean 1.021, SE 0.019, and 95% CI 0.981-1.061) and lowest in runners (mean 0.908, SE 0.019 and 95% CI 0.869-0.946). These differences were significant (P = 0.0004). There were no significant differences in upper limb BMD between weight-lifters and cross-trained athletes and between runners and those engaged in recreational exercises. Significant differences in BMD were observed between weight-lifters and recreational athletes (P = 0.001) and between cross-trained athletes and runners (P = 0.03). No other significant differences were observed. These data suggest that healthy, young, adult males reporting a history of intensive weight-lifting had significantly greater bone mass of the upper limb bones than those reporting a history of non-weight-lifting exercises. These results imply a specific versus generalized effect of mechanical load on bones of the skeleton.
© 1994 The American College of Sports Medicine
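The reported intervals can be related to the means and standard errors with a normal-approximation 95% confidence interval. This is a sketch: the 1.96 multiplier is my assumption, and the abstract's slightly wider interval (0.981-1.061) for the same mean and SE suggests the authors used a t-based calculation:

```python
# 95% confidence interval from a reported mean and standard error, using the
# normal approximation (z = 1.96). Input values are the weight-lifters'
# weight-adjusted upper-limb BMD from the abstract: mean 1.021, SE 0.019.
def ci95(mean, se, z=1.96):
    return (mean - z * se, mean + z * se)

lo, hi = ci95(1.021, 0.019)   # roughly (0.984, 1.058)
```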
|
<urn:uuid:77929293-32f5-4e39-bdde-45acd0f13323>
|
CC-MAIN-2016-26
|
http://journals.lww.com/acsm-msse/Abstract/1994/07000/Regional_differences_in_bone_density_of_young_men.12.aspx
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00183-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.960675
| 310
| 2.921875
| 3
|
Posted 3 years ago on March 3, 2013, 2:30 p.m. EST by ProblemSolver
This content is user submitted and not an official statement
Here is why :
The government sells bonds to pay off its debts.
How are bonds purchased? From money that is already in circulation.
As bonds are sold, the debt increases. The money received from the sale of bonds is redistributed back into circulation. This does not increase the amount of money in circulation, but it does increase the size of the debt.
As the size of the debt increases, more bonds are sold to cover the debt interest. Again money is taken out of circulation to purchase bonds and then redistributed back into circulation. And again, the debt increases but the money in circulation does not.
This is a runaway system: with each bond-sale cycle the debt increases but the amount of currency in circulation remains the same, placing a heavy burden on the taxpayer to pay the ever-increasing debt interest with the same amount of taxable resources.
This system is not self correcting.
Governments should not be allowed to sell bonds.
If governments need money, taxes should be their only resource, or the printing-out-of-thin-air method can be used.
Yes, printing out of thin air in this system will create inflation.
Inflation is good for those locked into debt.
Here is why:
When locked into debt, your payments remain the same, but as inflation increases, so do wages. Therefore, as your wages increase and your payments remain the same, the debt-burden ratio is reduced. If you were earning $50 per day when you went into debt and your payments were $10 per day, and now, after inflation and wage increases, you earn $100 per day while your payments are locked in at $10 per day, inflation actually eases the stress load of your payments.
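The arithmetic in this example can be made explicit (a minimal sketch of the post's point, not a financial model):

```python
# A fixed nominal debt payment consumes a shrinking share of income as
# inflation pushes nominal wages up. Figures are the post's own example.
def payment_burden(daily_wage, daily_payment):
    """Fraction of the daily wage consumed by the fixed debt payment."""
    return daily_payment / daily_wage

before = payment_burden(50.0, 10.0)    # 20% of the wage at the start
after = payment_burden(100.0, 10.0)    # 10% after wages double with inflation
```

The payment's real burden halves even though its nominal size is unchanged.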
|
<urn:uuid:280eed0b-85d4-4426-9030-b8cbdc559973>
|
CC-MAIN-2016-26
|
http://www.occupywallst.org/forum/the-current-system-is-not-self-correcting/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00156-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.954303
| 385
| 3.5625
| 4
|
Population of China
Date: 05/13/97 at 01:12:40
From: Julie
Subject: continuous growth

I tried figuring this problem out and I think I got the answer, but I don't know how I got it. Can you please help me and tell me if I am right? Can you also explain how to get the answer? Here's the problem:

Given birth rates and death rates, the population of China is assumed to be growing continuously. If the population growth rate is 4.3 percent and the current population of the country is 1.27 billion people, what year will the population reach 2 billion?

I got year 2008 and I don't know how. I'd really appreciate it if you could help me. Thanks.
Date: 05/13/97 at 05:49:53
From: Doctor Mitteldorf
Subject: Re: continuous growth

Dear Julie,

You can solve this problem by "brute force" by multiplying 1.27 times 1.043, and continuing to multiply the resulting numbers by 1.043 as many times as it takes to get to 2. You find that the number passes 2 after 11 multiplications, so 1997 + 11 = 2008 is the right answer. Is this what you did?

There's a faster way to do this problem, but you need to know about logarithms. Take the log of 2 and subtract the log of 1.27:

log(2) = .3010
log(1.27) = .1038
log(2) - log(1.27) = .1972

This tells you that the log has to increase by .1972 as the population increases. But each year, the log of the population increases by:

log(1.043) = 0.01828

Now you can just divide .1972 by 0.01828 to get 10.8, so the population will pass 2 billion in 10.8 years. This logarithm method works because multiplying two numbers corresponds to adding their logarithms.

-Doctor Mitteldorf, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
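Both methods in the answer can be checked with a short script (a sketch; the figures come from the problem statement):

```python
import math

# Brute force: multiply by the yearly growth factor until the population
# passes 2 billion (start: 1.27 billion, growth 4.3% per year).
pop, years = 1.27, 0
while pop < 2:
    pop *= 1.043
    years += 1
# years comes out to 11, so 1997 + 11 = 2008.

# Logarithm shortcut: solve 1.27 * 1.043**t = 2 for t.
t = (math.log(2) - math.log(1.27)) / math.log(1.043)   # about 10.8 years
```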
© 1994-2015 The Math Forum
|
<urn:uuid:390d139a-e9a6-4e01-8d0b-8685016caa8d>
|
CC-MAIN-2016-26
|
http://mathforum.org/library/drmath/view/55546.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00012-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.931464
| 460
| 2.78125
| 3
|
A spectacularly well-preserved sea monster that once prowled the oceans during the Cambrian Period has been unearthed in China.
The 520-million-year-old creature, one of the first predators of its day, sported compound eyes, body armor and two spiky claws for grabbing prey.
The fossils of the new species were so well preserved that the nervous system and parts of the brain were still clearly defined.
Before the Cambrian Period, which lasted between 543 million and 493 million years ago, most life resembled simple algae and stationary jellyfishlike creatures, but during the Cambrian explosion, a period of rapid evolution when biodiversity exploded, swimming sea creatures with compound eyes, jointed legs and hard exoskeletons emerged.
The period also saw the rise of an iconic group of shrimplike creatures known as anomalocaridids. These ancient sea monsters were the top predators of the Cambrian seas, and sported bladed body armor and a cone-shaped mouth made of concentric plates. Some of the biggest of these bizarre creatures could grow to be up to 6 feet (1.8 meters) long.
But most anomalocaridid specimens paleontologists found have been poorly preserved, making it difficult to know precisely where they fit in the tree of life, said study co-author Peiyun Cong, a researcher at Yunnan University in China.
Some scientists thought anomalocaridids belonged to a group that split off before the most recent common ancestor of all living arthropods, while others thought the animals were part of a group called chelicerates that includes spiders and scorpions. Still others thought anomalocaridids had converged upon similar features to those of modern arthropods but didn't evolve from the same lineage, Cong said in an email.
In the last several years, the researchers unearthed three spectacularly preserved specimens of a new species of anomalocaridid in fossil sediments in China. The sediments had frozen these creatures in time so perfectly that the entire nervous system, as well as the gut and some muscles, were still visible.
The creature, dubbed Lyrarapax unguispinus, was about 6 inches (15 centimeters) long. "The three known specimens may represent immature stages of the animal, so it might be larger," Cong wrote in an email to Live Science.
L. unguispinus had a tail that looked a bit like that of a lobster, and two giant pincers for grasping prey. As it grew, the creature molted, shedding its outer cuticle.
— Tia Ghose, Live Science
646-3 GAMES FOR TEACHING SCIENCE AND MATH CONCEPTS
Major concepts: Cells and Energy - photosynthesis and Cellular Respiration
f. Students know usable energy is captured from sunlight by chloroplasts and is stored through the synthesis of sugar from carbon dioxide. g. Students know the role of the mitochondria in making stored chemical-bond energy available to cells by completing the breakdown of glucose to carbon dioxide.
Cells and Energy Jeopardy: this game is used as a review for the cells and energy unit. A class of 32-40 students will be divided into five groups of 5-8 students. Each group will take turns choosing a question and answering it, following Jeopardy rules.
Students will be prepared to take the unit test that will cover photosynthesis and respiration.
Major concepts: natural selection, adaptations
Students will be able to collect data, enter it into an Excel document, analyze the data, and represent it in graphs. Students will also write a paragraph explaining the graph representation.
Rude software causes emotional trauma
My PC is ignoring me
Scientists at the University of California, Los Angeles (UCLA) have discovered computers can cause heartache simply by ignoring the user. When simulating a game of playground catch with an unsuspecting student, boffins showed that if the software fails to throw the ball to the poor student, he is left reeling from a psychological blow as painful as any punch from a break-time bully.
Matthew Lieberman, one of the experiment's authors and an assistant professor of psychology at UCLA explains that the subject thinks he is playing with two other students sitting at screens in another room, but really the other figures are computer generated. "It's really the most boring game you can imagine, except at one point one of the two computer people stop throwing the ball to the real player," he said.
The scientists used functional magnetic resonance imaging (fMRI) to monitor brain activity during a ball-tossing game designed to provoke feelings of social exclusion. Initially the virtual ball is thrown to the participating student but after a short while the computer players lob the ball only between themselves. When ignored, the area of the brain associated with pain lights up as if the student had been physically hurt.
Being the class pariah is psychologically damaging and has roots deep in our evolutionary past. "Going back 50,000 years, social distance from a group could lead to death and it still does for most infant mammals," Lieberman said.
The fact that this pain was caused by computers ignoring the user suggests interface designers and software vendors must work especially hard to keep their customers happy, and it's not surprising that failing and buggy software is so frustrating. If software can cause the same emotional disturbance as physical pain, it won't be long before law suits are flying through the courts for abuse sustained at the hands of shoddy programming. ®
Going outside the box: skills development, cultural change and the use of on-line resources.
Computers and Education, 47(3) pp. 316–331.
Using an academic library has always been a crucial part of studying in higher education, but this has presented problems for independent learners taking distance education courses. In the past, Open University (UKOU) students received almost everything needed to successfully complete any course. Nowadays, growth in Internet use enables learners to go 'outside the box' to locate resources that might be relevant for their studies. Although such resources are increasingly being included as components of UKOU courses, the extent to which students use them varies enormously between courses. Data from a large-scale survey is examined and a number of explanatory factors are considered in an attempt to account for this variability. It is argued that students' use of on-line 'external' resources is closely related to the pedagogic design of courses and to assessment requirements, not merely to the increased availability of information sources on the World Wide Web.
New Zealand History/Politics
Politics in the Twentieth Century
Political Parties and Key Policies of the Twentieth Century
At the turn of the century, the Liberals, New Zealand's first modern political party, were in power as the Government. The Liberals created a 'family farm' economy, by subdividing large estates and buying more Maori land in the North Island. New Zealand gained strong economic ties with Britain, exporting farm produce and other goods. Under Liberal, New Zealand started to form its own identity, and due to this, New Zealand declined to join the Australian Federation of 1901.
The Liberals were defeated in the 1912 election by the Reform Party, and never fully recovered. William Massey, the leader of the Reform Party, had promised state leaseholders they could freehold their land, a promise that helped win the election. Under the Reform Party, New Zealand entered World War I, aiding Britain.
New Zealand had prosperous years at the end of the 1920s, and so was hit hard by the Great Depression of the 1930s. The Conservative coalition government failed to get New Zealand out of the Depression, which led to the rise of the Labour Party in 1935.
Under Labour, New Zealand's economy slowly recovered. The Reserve Bank was taken over by the state in 1936, spending on public works increased, and the State Housing Programme began. The Social Security Act 1938 increased the state of welfare dramatically.
With the outbreak of World War II in 1939, the New Zealand Government again chose to support Britain with troops. New Zealand also chose to fight in Korea in the early 1950s.
In 1945, Peter Fraser played a significant role in the conference that set up the United Nations, but the Labour Government was losing support. In 1949, the National Party became the Government of New Zealand. In the 1960s, the National Government sent troops to Vietnam to keep on side with the United States, despite protests, but this didn't hinder New Zealand's support of the National Party, and National ruled New Zealand until 1984 with only two exceptions.
New Zealand's culture remained based on Britain's through the 1960s, and the economy was still mainly made up of exporting farm produce to Britain. However, when Britain joined the European Economic Community in 1973, New Zealand no longer had an assured market for farm products.
After the second oil shock of 1978, the National Government tried to fix the problem with new industrial and energy initiatives and farm subsidies. The economy faltered in the 1980s when the fall of oil prices made these schemes unsound. Inflation and unemployment went up as a result.
The National Government of 1990-99 passed the controversial Employment Contracts Act which opened up the labour market, but diminished the power of trade unions.
In 1996, a new voting system was introduced, Mixed Member Proportional Representation, which allowed minority or coalition Governments to become the norm, but the National and Labour parties still remained dominant.
Prime Ministers of the Twentieth Century
| Name | Term in Office | Party |
| --- | --- | --- |
| Joseph Ward | 6 August 1906 - 28 March 1912 | Liberal |
| Thomas Mackenzie | 28 March 1912 - 10 July 1912 | Liberal |
| William Massey | 10 July 1912 - 10 May 1925 | Reform |
| Francis Bell | 10 May 1925 - 30 May 1925 | Reform |
| Gordon Coates | 30 May 1925 - 10 December 1928 | Reform |
| Joseph Ward (2nd time) | 10 December 1928 - 28 May 1930 | United (Liberal) |
| George Forbes | 28 May 1930 - 6 December 1935 | United (Liberal) |
| Michael Joseph Savage | 6 December 1935 - 27 March 1940 | Labour |
| Peter Fraser | 27 March 1940 - 13 December 1949 | Labour |
| Sidney Holland | 13 December 1949 - 20 September 1957 | National |
| Keith Holyoake | 20 September 1957 - 12 December 1957 | National |
| Walter Nash | 12 December 1957 - 12 December 1960 | Labour |
| Keith Holyoake (2nd time) | 12 December 1960 - 7 February 1972 | National |
| Jack Marshall | 7 February 1972 - 8 December 1972 | National |
| Norman Kirk | 8 December 1972 - 31 August 1974 | Labour |
| Hugh Watt (Acting) | 31 August 1974 - 6 September 1974 | Labour |
| Bill Rowling | 6 September 1974 - 12 December 1975 | Labour |
| Robert Muldoon | 12 December 1975 - 26 July 1984 | National |
| David Lange | 26 July 1984 - 8 August 1989 | Labour |
| Geoffrey Palmer | 8 August 1989 - 4 September 1990 | Labour |
| Mike Moore | 4 September 1990 - 2 November 1990 | Labour |
| Jim Bolger | 2 November 1990 - 8 December 1997 | National |
| Jenny Shipley | 8 December 1997 - 5 December 1999 | National |
Finding Figurative Language in The Phantom Tollbooth
- Grades: 6-8
- Lesson Plan Type: Standard Lesson
- Estimated Time: Four to eight 40-minute class sessions
MATERIALS AND TECHNOLOGY
- Computers with Internet access and word processing software
- Electronic version of the Figurative Language Chart for each student (and answer key)
- One copy of The Phantom Tollbooth by Norton Juster (Random House, 1993) for each student
- Welcome to the Doldrums PowerPoint presentation
Students will read Chapters 1 and 2 of The Phantom Tollbooth before beginning this activity. In Chapter 2, the main character visits an imaginary land where he gets stuck "in the doldrums." This phrase will be the focus of the motivational part of this lesson.
Medical Emergencies: How to Anticipate and Prepare for Injuries
by Albert Tyldesley, ISI Safety Committee Chair
Skating rinks offer activities that can lead to personal injuries. The characteristics of hockey, figure skating, even general public skating are such that the prudent person would expect some injuries to occur. How rink staff handles these medical emergencies is increasingly important as increasing numbers of customers consider litigation a natural extension of personal injury.
The types of injuries caused from skating accidents vary. However, there are patterns that rink managers should be aware of. Wrist injuries are common in all categories of skating. Hockey players get hit across the wrist with sticks, and skaters attempt to break a fall by putting their hands out in front of them. Sprained and broken wrists are a common injury in skating rinks. Shoulder and elbow injuries are also common for the same reasons and are more difficult to deal with than a wrist injury.
Head and spine injuries might not occur with great frequency, but when they do this type of injury requires skilled medical evaluation and cautious handling. The knee joint suffers injury in all categories of skating and is very difficult to evaluate. Lacerations that cause bleeding occur on a daily basis in most skating rinks. Emergency medical personnel refer to all of these injuries as trauma injuries. The significance of these injuries is the possibility of additional medical problems caused by the injury that may not be seen or understood by untrained rink staff. Shock is a common problem associated with trauma injuries.
Having a staff member trained in first aid on duty at all times should be a goal of all rink managers. What constitutes a "trained" staff member? Because the law varies from state to state, it is difficult to give a single answer to this question. The medical community and local public safety officials have established an Emergency Medical Services (EMS) system that is accepted by every state. Emergency medical care within this system is provided by Emergency Medical Technicians. EMTs are rated by letters which designate the level of care to which they are trained. An EMT-P is a paramedic, the highest level.
EMTs are found in every community and have become the standard first aid provider in many athletic venues. EMTs are covered by the "good Samaritan law," which protects care givers from lawsuits.
First aid training and certification offered by organizations such as the Red Cross may be useful for minor cuts and bruises but are of questionable value for serious injuries. While it’s nice to have a doctor in the house, it’s also important to remember that many doctors and nurses are not trained in emergency procedures. Other titles such as athletic trainer may or may not cover you from a legal standpoint. If you are providing emergency medical coverage with anyone other than an EMT, you might want to check state law or with your legal counsel to see if they meet state requirements. You should also establish ground rules for medical staff who enter your rink with visiting teams.
In most communities across the U.S., emergency medical services are provided by the local fire department. Off-duty firefighters make excellent part-time employees in skating rinks. You have the benefit of employees who work with mechanical equipment every day and are easy to train, plus you have an EMT on duty at the rink.
Sending rink employees to be trained as EMTs is possible. However, the course can run for more than six months and EMT training is usually provided by local community colleges, hospitals, or fire departments in conjunction with the area Emergency Medical Services system.
Many rinks hire EMTs to provide medical coverage at high profile events such as college and high school hockey games. Should you find that most of the injuries in your rink occur during public skating sessions or perhaps during senior hockey games, you might consider retaining EMTs at those times.
EMTs are also capable of providing first aid training to your staff. Basic first aid courses should be provided to all skating rink employees. Understanding what not to do in case of injury can be as important as knowing what to do.
Every skating rink should have several employees trained in CPR. This life saving skill can be taught at the rink by local instructors. A basic first aid course should be presented to your entire staff, especially skate guards, once per year by a qualified teacher.
Documentation on how an injury occurred and the services provided the victim by rink staff can be very important, should litigation follow. All employees need to know how to handle an injury from patient care to filling out an injury report. Check with your insurance company for information on incident reports, or see the 2000 edition of the ISI Instructor Manual for guidelines for a safe skating environment, rink liability information, a sample incident report, a skate at your own risk waiver, and emergency first aid information.
Every skating rink should have a room dedicated to providing care to people with injuries. The first aid room must be clean, stocked with the correct medical supplies, be accessible to the ice and should have an outside door for patient removal to an ambulance. Walking into a dirty, dust-covered first aid room filled with unrelated equipment and rink supplies does not convey a good image. Empty supply cabinets or absence of first aid supplies will not only embarrass you but can delay patient care. What medical supplies you keep on hand will be determined by the level of care your staff is qualified to provide. Unqualified employees using medical equipment they are not trained or certified in may result in a lawsuit.
Listing medical supplies that should be kept in the first aid room is difficult and may legally differ from community to community. You might check with your local EMS authorities on appropriate medical supplies and equipment. Standard items such as adhesive bandages, slings, medical tape, latex gloves, gauze bandages, sterile water, etc. are usually safe and acceptable at all levels of patient care.
Any staff member dealing with an open cut or wound MUST wear latex gloves. This is for their protection as well as the patient’s. Aspirin is commonly found in first aid rooms but cannot (by law) in most states be dispensed to children. You can sell aspirin in the snack bar, but you cannot offer it to a child who is hurt.
Skating rink first aid rooms may have advanced medical equipment that can only be used by certified personnel. Backboards, oxygen, splints, pen lights, slings and other such equipment can be on hand for qualified people to use in an emergency. Should you have an injury on the ice and qualified employees present, it’s possible to remove the injured person from the ice prior to the arrival of the fire department. Correct patient removal by qualified personnel allows superior patient care and saves the rink time but must be acceptable to your local EMS provider.
Training rink employees in first aid procedures must always include how to call for help. Access to telephones to call the fire department must be available when the rink is open. Emergency numbers should be posted at the phone. Did you know that every pay phone must accept a 911 call without a coin?
Emergency procedures should be anticipated in your facility. Hiring independent medical experts, training employees, and providing a first aid room are the responsibility of the rink manager.
* Editor’s Note: To order a copy of the 2000 edition of ISI’s Instructor Manual, call 972-735-8800, extension 213. This 74-page book contains a wealth of essential information.
A diverse group of bees pollinate blueberries in North Carolina. During the past three years, one of the focuses in our lab has been to understand the relative importance of the different bees for blueberry pollination.
When we initially set out to assess pollinators in blueberries, our goals were both to determine which species were present and how important each species was for fruit production. This second goal, identifying which bees are the “best” blueberry pollinators, is more complicated than it might seem.
In a recently published paper, recent graduate Shelley Rogers highlights five criteria for evaluating pollinator performance: abundance, per-visit efficiency, activity pattern, visitation rate and species interactions.
Honey bees were stocked at all of the sites we visited, and not surprisingly, were the most abundant species present. Southeastern blueberry bees were also present in relatively large numbers across sites and were notable in their abundance early in the bloom period. Bumble bees, carpenter bees, and “small native bees” (the most common of which was Andrena bradleyi) were less abundant. Osmia cornifrons, commonly referred to as the hornfaced bee or orchard bee, was very abundant at one site in western North Carolina, but absent from the other three sites.
A single visit by a small native bee, which could not be distinguished to species on the wing, resulted in the highest seed set, but single visits by bumble bees and hornfaced bees also resulted in high seed set. Both honey bee and southeastern blueberry bee visits produced relatively few seeds.
Honey bees were less abundant on cloudy, cool, and windy days, but other bee groups were not negatively impacted by weather.
We did not quantify visitation rate at our sites, but other studies have measured this variable. Southeastern blueberry bees had the shortest handling time of all blueberry pollinators (see Rogers et al. for references), meaning they could potentially visit more flowers in a shorter period. Casual observations at our sites suggest that some of the small native bees may have very long handling times. We often observed them crawling inside blueberry corollas while foraging.
What's the best bee?
So which bees are the best blueberry pollinators? It depends on how “best” is defined. Small native bees and bumble bees produce the most seeds in a single visit, but there are fewer of them in blueberry fields. Honey bees are abundant, but they are less active when the weather is poor and produce fewer seeds in a single visit.
Southeastern blueberry bees are active early in the year and abundant across sites, and although they produce fewer seeds in a single visit, their visits are fast. Because no single bee species produces high numbers of seeds, is present in large numbers (either naturally or through man-made augmentation), is active under all weather conditions, and visits lots of flowers quickly, we need a diversity of bees to produce the most blueberries.
Our future research activities will focus on relating bee diversity to crop value and production practices so that we can make recommendations to growers both as to what bees they should foster on their farm as well as what management practices to avoid and enhance.
Insulin hormones from the pancreas regulate the concentration of blood sugar, or glucose, in the body. When blood glucose rises, insulin is released into the blood to utilize the glucose for energy. People with diabetes have difficulty producing insulin; thus, their blood glucose levels remain high. High glucose levels in the body can negatively affect a number of internal organs. Fasting blood sugar is the amount of glucose in your body after you've not eaten for at least 8 hours. Measuring your fasting blood sugar is a way to check for diabetes.
1. Purchase a traditional blood sugar testing kit.
- Visit your local pharmacy to obtain a testing kit. Most kits contain lancets (testing needles), a lancing device, testing strips, and a meter to read the results. Different brands will vary as to how much blood must be obtained and how long the test results take to display.
2. Refrain from eating at least 8 hours before measuring your fasting blood sugar.
- Go to bed at around 10 or 11 p.m. the night before you test your blood sugar. When you wake up around 7 or 8 a.m. the next day, you will have fasted for at least 8 hours without having to stop yourself from eating throughout the day.
3. Perform the test.
- Assemble the lancing device. This might involve inserting a lancet into the holder.
- Prick your fingertip with the lancing device.
- Collect the blood from your fingertip onto one of the testing strips. The strip should indicate where to place the blood. A small droplet of blood should be adequate for testing, but you should consult the manual that came with the testing kit to determine how much blood is needed.
- Place the testing strip into the meter as the instruction booklet indicates. This typically involves inserting the end of the strip with blood into a slot on the meter.
4. Read the results.
- If the meter reads a level that is 100 mg per dL or fewer, your blood glucose levels are normal. A measurement of 100 mg per dL means that there are 100 milligrams of glucose for every deciliter of blood in your system.
- If the meter reads somewhere between 100 mg per dL and 125 mg per dL, the measurement does not indicate whether or not you have diabetes. Although normal ranges fall below 100 mg per dL, a reading above 100 mg per dL does not necessarily mean your glucose levels are always abnormal.
- If the meter reads 126 mg per dL or more, your blood glucose levels are high.
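The reading thresholds above amount to a simple decision rule. As an illustrative sketch only (the function name is my own, and interpreting any reading is ultimately your doctor's job, not a script's):

```python
def classify_fasting_glucose(mg_per_dl):
    """Classify a fasting blood glucose reading (mg/dL) using the
    thresholds described above. Illustration only -- not medical advice."""
    if mg_per_dl <= 100:
        return "normal"
    elif mg_per_dl < 126:
        return "inconclusive - discuss retesting with your doctor"
    else:
        return "high"

print(classify_fasting_glucose(92))    # normal
print(classify_fasting_glucose(110))   # inconclusive - discuss retesting with your doctor
print(classify_fasting_glucose(130))   # high
```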
5. Contact your doctor.
- Report your blood glucose levels to your doctor. If your concentration was between 100 mg per dL and 125 mg per dL, your doctor might want to test your levels again in the office. If your concentration was higher than 126 mg per dL, your doctor will undoubtedly recommend another test.
6. Repeat fasting blood glucose tests if you are worried about acquiring diabetes.
- A high reading on a fasting blood sugar test is typically the first sign of diabetes or prediabetes. Being aware of your glucose levels can prevent health complications in the future.
- Many blood glucose testing kits allow for collecting blood in a location other than your fingertip. Although this might be convenient, your fingertip provides the most accurate glucose levels in your body.
- Do not use a lancet more than once. Use it, then throw it away in a safe receptacle.
Things You'll Need
- Blood sugar testing kit
|
<urn:uuid:a4925e61-7313-4628-bd76-261d0d358776>
|
CC-MAIN-2016-26
|
http://www.wikihow.com/Measure-Fasting-Blood-Sugar
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00079-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.882717
| 727
| 3.25
| 3
|
1.) Take a tour of her secret annex here:
Please take some notes on what the annex looked like.
a.) What were the living conditions like for Anne Frank?
b.) How might someone survive those hardships?
c.) Draw a picture of the floor plan.
2.) Pick two Anne Frank Quotes and analyze them. What do they mean? Do you agree with this quote? Use the form I have provided. You can type your answers.
3.) Take a look at pictures of Anne Frank's diary.
Read some passages from her diary. Then you must create your own diary entry.
Choose from the topics below:
What If? What if you were forced from your home, separated from your family, put in a cattle car, and dropped off at a concentration camp? How would you cope with the situation? Do you think something like the Holocaust could happen in America today? Why or why not?
Describe how you would feel if the government forced you to wear a symbol on your clothes EVERYDAY to let people know that you belonged to a certain race, religion, etc.