Our new state-of-the-art cardiac catheterization lab is at the heart of the Via Christi Heart and Vascular Center. Our lab is equipped with the Siemens Coroskop system, one of the most recognized names in heart care.

Benefits of the Catheterization Procedure
- Allows physicians to assess how well the heart is pumping
- Assists physicians in locating and treating blockages in coronary arteries
- Shows physicians the function of heart valves
- May be used to assess and treat blockages in leg arteries

How the Procedure Works
Prior to the procedure, an IV will be started and the patient will be given a mild sedative to aid relaxation and comfort; it will not put the patient to sleep. The patient remains awake throughout the procedure in order to follow the doctor's instructions and alert staff to any discomfort or problems. Once in the catheterization lab, nurses and technicians prepare for the procedure by placing EKG electrodes on the chest, shaving and cleaning the groin area or arm with antiseptic solution, and covering it with sterile towels and sheets. The cardiologist or surgeon injects the groin or arm with a numbing medication, much like that received at a dentist's office. After this medication has taken effect, the doctor makes a small puncture into the blood vessel with a small needle. A larger IV is then placed and the catheter is inserted. The physician watches the movement of the catheter by x-ray. Some pressure may be felt at the insertion site, but the patient will not be able to feel the catheter inside the body.

Once the catheter has been guided to the heart, the contrast material (or dye, as it is sometimes called) is administered through the catheter. When this occurs, the patient may feel hot or flushed for a short time. This is a normal reaction to the dye and not a cause for concern. There may be several injections of dye, and the x-ray equipment may be moved around during the procedure. This is necessary to get different views of the patient's heart and coronary arteries. The dye in the coronary arteries appears on the x-ray as a dark line. A disruption of the dark line may signify an area of plaque build-up inside the wall of the artery. During this same procedure, dye is injected into the heart's pumping chamber to see how well the heart muscle is contracting and how well the valves are working. Pressure measurements are also taken at this time and are interpreted by a computer. The entire procedure should take only 1-2 hours.

Once the catheterization is complete, the catheter is removed and firm pressure or a vessel closure device, along with a tight dressing, is put in place. After the catheterization, the patient is returned to a recovery area or a patient room, where Via Christi staff continue with observation and any follow-up treatment. Recent advances in closure devices have made recovery time after catheterization much shorter, and many patients go home the same day.

Treatment of a Blockage
In some cases the catheterization procedure reveals that fatty deposits known as plaques have collected along the walls of a patient's arteries, narrowing them and making it difficult for blood to pass through. If a blockage is noted, our physicians may use one of three methods to improve blood flow in the artery.

Angioplasty: During an angioplasty procedure, a catheter with a small balloon at the tip is advanced into the blocked artery. When the catheter reaches the narrowed area, the balloon is inflated. This stretches the artery and flattens the fatty deposits against the artery's walls, increasing blood flow.

Stent: A stent is a small device placed in a coronary artery to keep it open. It is a permanent implant that remains in the artery. By keeping the artery open, the stent improves blood flow and relieves symptoms of coronary heart disease.
Drug-Eluting Stent: Just like a standard stent, this is a small device placed in an artery to keep it open. This stent also releases medication that prevents the regrowth of fatty deposits, or plaques, along the artery walls, in turn keeping the artery open for blood to pass. It is a permanent implant that remains in the artery.
Caesarean birth greatly increases a baby's chances of developing allergies, research has shown. Babies delivered by C-section are five times more likely than those born 'naturally' to become allergic to triggers such as dust mites and pets, said Dr Christine Cole Johnson of the Henry Ford Hospital, Detroit, US. Babies could be left vulnerable by missing exposure to bacteria on the journey through the birth canal, the study found. 'This further advances the hygiene hypothesis that early childhood exposure to micro-organisms affects the immune system's development and onset of allergies,' Dr Johnson said. 'We believe a baby's exposure to bacteria in the birth canal is a major influencer on their immune system.' Dr Johnson's team studied 1,258 newborn babies and assessed them when they were one month, six months, one year and two years old. By two years of age, babies born by C-section were much more likely to have developed allergies to triggers in the home, such as the droppings of house dust mites and dander, or dead skin, shed by dogs and cats. An estimated 21 million UK adults have at least one allergy. Ten per cent of children and adults under the age of 45 have two or more allergies. Maureen Jenkins, director of clinical services at the charity Allergy UK, said: 'During a natural birth the baby travels slowly down the birth canal, where it ingests normal bacteria, which has been shown to aid a healthy immune response and protect against allergy. 'In the case of a Caesarean section, the baby has no contact with the birth canal. Instead it is immediately removed into a sterile environment, meaning the chances of developing allergy could be heightened.'
The Surgeons Hall
On 1st July 1505 the Barbers and Surgeons of Edinburgh successfully petitioned the Town Council to be enrolled among the Incorporated Crafts of the Burgh. They were granted a Seal of Cause which conferred certain privileges, including a monopoly for the Barber Surgeons on distilling and selling aqua vitae within the Burgh of Edinburgh. It is thought that the Barber Surgeons needed aqua vitae to preserve the parts of the human body which they needed for dissection, since they were also granted a special dispensation in the 1505 Seal of Cause to have one body of a hanged criminal each year for dissection. It is unknown how many additional bodies were "purchased" for dissection illegally. The Barber Surgeons guarded their monopoly very zealously. Some of the earliest documents preserved in the College's Archive relate to the Incorporation (as the College was constituted before 1778) taking out prosecutions against people in Edinburgh distilling illegally; those caught were required by magistrates to pay fines to the Barber Surgeons. After 1612, aqua vitae does not feature in the College's Records again. It is possible that, with the growth of ale brewing in Edinburgh, a cheaper and perhaps safer preservative spirit than aqua vitae became available, and the surgeons simply allowed their monopoly of distilling and selling aqua vitae to lapse. Aqua vitae is only mentioned again in 1700, when Alexander Monteath, Deacon (whom we would nowadays call President), petitioned the Scottish Parliament "that the art discovered by him to draw spirits from malt equal in goodness to true French Brandie may be declared manufactory with the same privileges and immunities as are granted to other manufactories." This seems to be a rare instance of the Barber Surgeons realising the commercial potential of distilling. Nothing further came of it.
Coconut - food

|Image: A cross section painting of a coconut shows the layers within the fruit.|

Desiccated coconut is the washed, steamed, shredded and dried meat used in sweets, baking, savoury dishes and as a snack food. The oil is used for cooking in India, and to make margarine, ice creams and sweets. Oil can be processed from fresh coconut or, more often, by pressing dried coconut meat, known as copra. Ball copra is an Indian speciality produced by slow drying, de-husking and shelling of the whole nut. It is used to prepare sweets offered during religious and cultural events. Coconut water from the seed cavity is sweet, and is now commercially extracted and preserved as a drink.

Palm hearts and sap

|Image: Chunks of coconut sugar, known as jaggery, made from the sap.|

As with many other palms, the heart is a delicacy. It is the tender, young apex at the top of the stem, also known as palm cabbage. Coconut palms yield one of the heaviest palm hearts, which can weigh in at up to 12 kg. A sweet sap, known as toddy, or neera in India, is tapped from unopened flowering branches. To collect the sap, the base of the flowering branch is bashed with a mallet and a small slit is made in the skin covering the branch. A container is placed beneath the slit to collect the fluid that oozes out. This can be boiled to give a rich palm sugar, known as jaggery or gur. The sap is also fermented into an alcoholic wine which, in turn, can be distilled into a strong liquor called arak. Palm vinegar is produced as a by-product of palm wine.
Notes from the Trenches: Going Public with Social Media

Public archaeology is about talking with people: students, adults, construction workers, public officials, really anyone who will listen. These conversations take place during outreach events, excavation site tours, hands-on activities, or lectures. Traditional approaches to public archaeology require that the audience be physically present and that the archaeologists get their message across within the time limit of the event. Engagement opportunities last anywhere from a few minutes to a few hours and conclude with a few follow-up questions and, sometimes, an exchange of business cards. The chances that the audience retains the central messages conveyed during the event depend on their memories, notes, and handouts.

Over the past few years, social media have developed into a worldwide phenomenon. On its face, discussing archaeology through digital technologies is not new. Listservs have been used to exchange information between professionals since 1986 (Hirst 2001). Arizona State University began hosting HISTARCH in 1994; it reached 1,463 subscribers in 2010 (L-Soft 2010). During the Levi Jordan public archaeology project, Carol McDavid successfully engaged stakeholders using a website that she constructed (McDavid 2004:50). Two differences between these technologies and social media are the diversity of web platforms available and the speed with which information is exchanged. Web 2.0 is a label used to refer to social media technologies: blogs, social bookmarking sites, photo- and video-sharing communities, and platforms such as Facebook and MySpace (Agichtein et al. 2008:183). Collaborative by nature, social media produce user-generated content that is created, exchanged, and accessed on a variety of devices (Kaplan and Haenlein 2010:61). Currently, Facebook has over 500 million active users, with 250 million users accessing their accounts from a mobile device.
On average, Facebook users create 90 pieces of content every month (Facebook 2011). The diversity of platforms and the amount of content can appear daunting, but social media is becoming a trend that places nonusers in a minority category. Archaeologists must engage in social media to maintain relevance in an increasingly technological society. However, there is no reason for us to reinvent the wheel. In the larger social media ecosystem, archaeological professionals are just beginning to experiment with technologies that marketing, entertainment, and other fields have been using for years.

Rebecca Whitham, the Public Relations Coordinator for Woodland Park Zoo, wrote about her institution's rationale for moving into social media. The zoo started a Twitter account to reach minority populations who accessed the Internet through mobile phones. In her work, Whitham realized that social media users are not a homogenous group (Whitham 2010:9). User behaviors range from creator (submitting photos or blog content) to critic (leaving comments on various types of content) to simple spectator. Whitham and others encourage an approach to social media in which the user consciously chooses an approach and adapts it based on feedback from measurement of web traffic (Whitham 2010:9). When used in conjunction with a critical approach to public archaeology, both the audience and the quality of engagement increase.

Many archaeologists now engage the public using social media. The following three examples were selected for critique. For the purpose of a quick comparison, use of only one social media platform, Facebook, was examined. World Diggers Day was a widespread event created through Facebook in February 2011 by Lawrence Shaw, a postgraduate student at the University of Birmingham, to encourage people involved in archaeology all over the world to show their support for their profession. Participants were asked to change their profile pictures to an image of Indiana Jones or Lara Croft.
While problems arise from the association with these fictitious adventurers, the general public found them relatable, and their use garnered a lot of attention. Over 12,000 people from more than 30 different countries participated in this media event. Current posts on the World Diggers Day page provide links to professional archaeology blogs, Twitter feeds from archaeology conferences, and other genuine archaeology materials.

Facebook pages build and strengthen a community of volunteers and stakeholders in archaeological projects. Archaeology in the Community (AITC) is a nonprofit organization directed by Dr. Alexandra Jones that uses a Facebook page to share its mission statement and goals, photos, and upcoming events. The page connects members of AITC with the public and with other archaeologists who follow their work.

Using social media platforms means giving up a degree of control and devising a plan to deal with unexpected content. For example, the Society for Historical Archaeology Annual Conference page on Facebook received comments and inquiries from a treasure hunter prior to the 2010 conference at Amelia Island, Florida. Administrators temporarily suspended the page and switched to an invitation-only event listing. Since that time the page has reopened for public participation. SHA ultimately decided to keep the page public but holds participants to the SHA ethical guidelines. Some may be turned away, but the move struck a balance between engaging the public and upholding the institution's core principles. Increasingly, social media is becoming the public face of institutions, one that requires ongoing maintenance.

The PEIC does not advocate that every archaeologist participate in social media. A web presence with minimal content is worse than no presence at all. However, we recognize the potential to reach new audiences through social media and support those immersed in this form of outreach.
The use of social media in public archaeology entails both advantages and disadvantages, with the above examples showing the potential for both positive and negative effects depending on how we portray the field of archaeology. Public archaeologists must strike a balance between educating the public and maintaining professional standards for research, excavation, and preservation.

References
Agichtein, Eugene, Carlos Castillo, Debora Donato, Aristides Gionis, and Gilad Mishne
2008 Finding High-Quality Content in Social Media. In Proceedings of the International Conference on Web Search and Web Data Mining, M. Najork, A. Z. Broder, and S. Chakrabarti, editors, pp. 183-194. New York: ACM.
Facebook
2011 Statistics. http://www.facebook.com/press/info.php?statistics. Accessed May 1, 2011.
Hirst, K. Kris
2001 Articulations: Chatting with Archaeologists, March 11th Chat: Anita Cohen-Williams. http://archaeology.about.com/library/chat/blchatcohenwilliams.htm. Accessed May 1, 2011.
Kaplan, Andreas M., and Michael Haenlein
2010 Users of the World, Unite! The Challenges and Opportunities of Social Media. Business Horizons 53:59-68.
L-Soft International, Inc.
2010 Historical Archaeology. http://www.lsoft.com/scripts/wl.exe?SL1=HISTARCH&H=LISTS.ASU.EDU. Accessed May 1, 2011.
McDavid, Carol
2004 From "Traditional" Archaeology to Public Archaeology to Community Action: The Levi Jordan Plantation Project. In Places in Mind: Public Archaeology as Applied Anthropology, P. A. Shackel and E. J. Chambers, editors, pp. 35-56. New York: Routledge.
Whitham, Rebecca
2010 Finding Your Place in Social Media. Connect (February):8-9.
Last modified: 2005-11-05 by phil nelson
Keywords: japan

Civil and State Flag/Ensign: A red disk on a white field. The disk is known as the Hinomaru, a mon representation of the sun. It can be used on land and sea by civilians and government (excluding the military).

Ground Self-Defense Force (War Flag): A hinomaru with red rays extending outward. A gold border lies partially around the edge.

Naval Ensign (War Ensign): A hinomaru set towards the hoist, with 16 red rays.

Although the hinomaru has been a symbol and flag of Japan for centuries (and unofficially a national flag since 1868), it was not officially adopted as the flag of Japan until 1999. The flag's dimensions were set at 2:3. Following World War II, use of the War Ensign, which had been flown by the Japanese Navy since 7 October 1889, was discontinued as part of the Treaty of San Francisco until 30 June 1954, when it was readopted by the Japanese Maritime Self-Defense Force. Its proportions are 2:3. The War Flag was adopted 30 June 1954 as the flag of the Japanese Ground Self-Defense Force. Its proportions are approximately 8:9.

Flag of Japan by António Martins. War flag and ensign by Željko Heimer.
Parent-Child Interaction Therapy

Category: Promising Programs | Type: Delinquency & Recidivism | Outcomes: 5.1% reduction in recidivism

Parent-Child Interaction Therapy (PCIT) is an empirically supported treatment for conduct-disordered young children that places emphasis on improving the quality of the parent-child relationship and changing parent-child interaction patterns. In PCIT, parents are taught specific skills to establish a nurturing and secure relationship with their child while increasing the child's prosocial behavior and decreasing negative behavior. This treatment focuses on two basic interactions: Child-Directed Interaction (CDI) is similar to play therapy in that parents engage their child in a play situation with the goal of strengthening the parent-child relationship; Parent-Directed Interaction (PDI) resembles clinical behavior therapy in that parents learn to use specific behavior management techniques as they play with their child.

PCIT draws on both attachment and social learning theories to achieve authoritative parenting. Attachment theory asserts that sensitive and responsive parenting provides the foundation for the child's sense of knowing that he or she will be responded to when necessary. Thus, young children whose parents show greater warmth, responsiveness, and sensitivity to the child's behaviors are more likely to develop a secure sense of their relationships and more effective emotional and behavioral regulation. For this reason, in the first phase of PCIT parents learn the Child-Directed Interaction (CDI), which aims to restructure the parent-child relationship and provide the child with a secure attachment to his or her parent. Social learning theories emphasize the contingencies that shape the interactions of conduct-disordered children and their parents.
Patterson’s coercion theory provides a transactional account of early conduct-disordered behavior in which child conduct problems are inadvertently established or maintained by the parent-child interactions. Thus, in the second phase of PCIT parents learn the Parent-Directed Interaction (PDI), which specifically addresses these processes by establishing consistent contingencies for child behavior. Treatment goals include: - An improvement in the quality of the parent-child relationship or, in residential treatment centers and foster homes, the caregiver-child relationship - A decrease in child behavior problems with an increase in prosocial behaviors - An increase in parenting skills, including positive discipline - A decrease in parenting stress PCIT was initially targeted for families with children ages 2-to-7 with oppositional, defiant, and other externalizing behavior problems. It has been adapted successfully to serve physically abusive parents with children ages 4-to-12. PCIT may be conducted with parents, foster parents, or others in a parental/caretaker role. Caregiver and child must have regular, ongoing contact to allow for daily homework assignments to be completed. For more Information or to find Technical Assistance, visit: Parent-Child Interaction Therapy (PCIT) International PCIT Training Center CAARE Diagnostic and Treatment Center UC Davis Children’s Hospital References and/or Published Evaluations PCIT outcome research has demonstrated statistically and clinically significant improvements in the conduct-disordered behavior of preschool age children: After treatment, children’s behavior is within the normal range. Studies have documented the superiority of PCIT to waitlist controls and to parent group didactic training. 
In addition to significant changes on parent ratings and observational measures of children's behavior problems, outcome studies have demonstrated important changes in the interactional style of the fathers and mothers in play situations with the child. Parents show increases in reflective listening, physical proximity, and prosocial verbalization, and decreases in sarcasm and criticism of the child after completion of PCIT. Outcome studies have also demonstrated significant changes on parents' self-report measures of psychopathology, personal distress, and parenting locus of control. Measures of consumer satisfaction in all studies have shown that parents are highly satisfied with the process and outcome of treatment at its completion. For a summary of PCIT and information about the future research directions of PCIT see: Zisser, A., & Eyberg, S. M. (2010). Treating oppositional behavior in children using parent-child interaction therapy. In A. E. Kazdin & J. R. Weisz (Eds.), Evidence-based psychotherapies for children and adolescents (2nd ed., pp. 179-193). New York: Guilford. For evaluation studies of the effectiveness of PCIT, please see: Boggs, S. R., Eyberg, S. M., Edwards, D., Rayfield, A., Jacobs, J., Bagner, D., & Hood, K. (2004). Outcomes of parent-child interaction therapy: A comparison of dropouts and treatment completers one to three years after treatment. Child & Family Behavior Therapy, 26(4), 1-22. Chaffin, M., et al. (2004). Parent-child interaction therapy with physically abusive parents: Efficacy for reducing future abuse reports. Journal of Consulting and Clinical Psychology, 72, 500-510. Harwood, M., & Eyberg, S. M. (2004). Effect of therapist process variables on treatment outcome for parent-child interaction therapy. Journal of Clinical Child and Adolescent Psychology, 33, 601-612. Hood, K. K., & Eyberg, S. M. (2003). Outcomes of parent-child interaction therapy: Mothers' reports on maintenance three to six years after treatment.
Journal of Clinical Child and Adolescent Psychology, 32, 419-429. Nixon, R. D. V., Sweeny, L., Erickson, D. B., & Touyz, S. W. (2003). Parent-child interaction therapy: A comparison of standard and abbreviated treatments for oppositional defiant preschoolers. Journal of Consulting and Clinical Psychology, 71, 251-260. Eyberg, S. M., Funderburk, B. W., Hembree-Kigin, T., McNeil, C. B., Querido, J., & Hood, K. K. (2001). Parent-child interaction therapy with behavior problem children: One- and two-year maintenance of treatment effects in the family. Child & Family Behavior Therapy, 23, 1-20. Schuhmann, E., Foote, R., Eyberg, S. M., Boggs, S., & Algina, J. (1998). Parent-child interaction therapy: Interim report of a randomized trial with short-term maintenance. Journal of Clinical Child Psychology, 27, 34-45.
- Has this program been replicated at other sites? If so, how many and where are they? Yes, in many sites throughout the United States, as well as in Australia, Canada, England, Hong Kong, Russia, and The Netherlands.
- Is there a formal curriculum or program guidelines in place? What is the approximate cost for these materials? Assessment instruments and scoring forms, as well as the step-by-step clinician guide, are needed for training (Hembree-Kigin & McNeil, 1995). Manuals for detailed implementation of the treatment program, coding of sessions, and handouts for use in treatment complement the guide. Assessment procedures and instruments include:
- Semi-structured intake interview
- Child Behavior Checklist (parent form)
- Eyberg Child Behavior Inventory
- Parenting Stress Index (short form)
- Dyadic Parent-Child Interaction Coding System
- Sutter-Eyberg Student Behavior Inventory (as appropriate)
PCIT concludes with a post-treatment evaluation. In most cases, the pre-treatment assessment procedures are repeated, including parent reports, teacher report, child report, and direct observation measures.
The Dyadic Parent-Child Interaction Coding System observations are repeated at the end of the last discipline coaching session. Parents also complete a parent-report measure of consumer satisfaction. Selected measures (such as the Parenting Stress Index) can be completed at booster sessions to assist in tracking maintenance of behavioral improvements or for long-term follow-up of treatment.
- What kind of training and technical assistance is available for this program? There are a number of settings within the Network that are available for training, such as the University of Oklahoma Health Sciences Center, the Trauma Treatment Training Center (Cincinnati Children's Hospital), the University of Florida, and the University of California, Davis, CAARE Center. Please see www.pcit.org for more information about non-network trainings and other resources. The training is for mental health professionals with a minimum of a master's degree in psychology or a related field. It involves 40 hours of direct training with ongoing supervision and consultation for approximately the next four to six months. The latter can be accomplished through conference calls, videotapes, and distance-learning technology. Competency criteria are assessed at the completion of the 40-hour training, with fidelity checks throughout the supervision and consultation period.
- Once the program has been implemented, can an organization obtain assistance with fidelity monitoring or quality assurance? Session-by-session protocols and fidelity checklists filled out by the therapist and parent are essential. During the four to six months of supervision and consultation, the session-by-session protocols and fidelity checklists should be reviewed on a continual basis.
- Can an organization obtain assistance with data collection or measurement of outcomes?
There are variations among training and technical assistance providers, however, the University of California, Davis CAARE Center’s training program includes 16-hours of didactic training for clinicians, clinical supervisor, home visitors, and school-based personnel. This training covers an overview of PCIT, training on assessment and use of standardized measures, introduction to PCIT protocol, practice in the relationship enhancement component of PCIT, and the application of these techniques to maltreated and at-risk populations. - Is a risk assessment tool typically used to identify referrals for this program? If so, which one? Risk assessment tools are not needed. Appropriate referrals are children between the ages of 2-7 years who are exhibiting some challenging behavioral issues. PCIT is most effective with young children and parents who want to improve their relationship with their children. - Other considerations: Implementation involves two rooms, one for treatment, and one for observations and coaching. Generally this is accomplished through use of a one-way mirror system, “bug in the ear” device, video camera, and monitor, although in-room therapist coaching is also possible. The therapist is extremely active and directive during the sessions and must be able to commit to the family for up to 22 sessions. The therapist should have a referral network in place to address issues not covered by PCIT.
For the undergraduate student wishing to devote only one quarter to a course in epidemiologic methods. Description of ways in which variation in disease occurrence is documented and how that variation is studied to understand causes of disease. Offered: WSp.

This course is a basic introduction to epidemiology. The uses of descriptive and analytic epidemiology are presented. Key concepts include: classification of disease, definitions of incidence and prevalence, uses of rates, rate adjustment, outbreak investigation, study design, cohort studies, case-control studies, experimental studies, life tables, and screening.

Student learning goals
- To describe, define and apply basic concepts of disease transmission and occurrence
- To recognize appropriate data sources for epidemiologic study
- To define, compute and interpret measures of disease occurrence to solve problems and design epidemiologic studies
- To distinguish between random and systematic error in the interpretation and design of epidemiologic studies
- To recognize and apply appropriate research study designs and to compare and contrast their individual advantages

General method of instruction
Class will be evenly divided between lecture and discussion. The professor will provide the lectures, and the TAs are responsible for the discussions. The discussion material is based on exercises that the student completes prior to coming to class. The exercises are designed to highlight the main issues covered in the weekly lectures. A basic course in biology and statistics would be helpful but not essential.

Class assignments and grading
Textbook readings. Discussion exercises require application of epidemiologic thinking and simple calculations. Grades are based on a combination of homework, discussion section quizzes, a midterm, and a final examination. The midterm and final are multiple-choice examinations.
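The measures named above (prevalence, incidence, rate adjustment) reduce to simple arithmetic. As a minimal sketch with hypothetical numbers (not taken from the course materials), the standard textbook definitions look like this:

```python
def prevalence(existing_cases, population):
    """Proportion of the population with the disease at a point in time."""
    return existing_cases / population

def incidence_rate(new_cases, person_years):
    """New cases per unit of person-time at risk."""
    return new_cases / person_years

def direct_age_adjusted_rate(stratum_rates, standard_weights):
    """Directly standardized rate: a weighted average of stratum-specific
    rates, using a standard population's age distribution as weights."""
    assert abs(sum(standard_weights) - 1.0) < 1e-9  # weights must sum to 1
    return sum(r * w for r, w in zip(stratum_rates, standard_weights))

# Hypothetical data: 50 existing cases in a town of 10,000 people.
print(prevalence(50, 10_000))        # 0.005
# 20 new cases observed over 4,000 person-years of follow-up.
print(incidence_rate(20, 4_000))     # 0.005 per person-year
# Rate adjustment: crude rates of 0.002 (young) and 0.010 (old),
# standardized to a population that is 70% young and 30% old.
print(round(direct_age_adjusted_rate([0.002, 0.010], [0.7, 0.3]), 4))  # 0.0044
```

Adjustment matters because two populations with identical age-specific rates can have very different crude rates if their age structures differ; standardizing to one reference population makes the rates comparable.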
By Cris Carl, Networx

Attracting hummingbirds and butterflies to your garden can not only bring a sense of delight, but also tie you into something greater. "You can enjoy the full circle in your garden. Our plants are part of a global eco-system," said Sarah Mary Gerchman, assistant manager of Annie's Garden and Gift Center in Amherst, Mass. Gerchman gave the example of how hummingbirds eat aphids. "Aphids are often seen as immediate pests in gardens. So you can enjoy beautiful flowers, and attract hummingbirds which help protect the plants," she said. An additional note on garden pest control: at a Vegetable Entomologist Workshop in Dallas, pest control experts said to "avoid broad-spectrum insecticides to conserve natural enemies" like hummingbirds.

Flowers and tips to attract hummingbirds to your garden

Gerchman said that hummingbirds are attracted to tubular-shaped flowers. For colder climates such as New England, she said one of the best choices is Trumpet Vine, as hummingbirds are attracted to both the shape and the bright red color. "What a lot of people don't know is that hummingbirds don't have a sense of smell. People think they are attracted to flowers with a strong fragrance. They are actually attracted to the colors -- bright reds, pinks, and oranges," she said. Gerchman recommended the Mandevilla plant for warmer, more tropical climates, though they can be grown anywhere if they are brought inside for the winter. "They are a big draw for hummingbirds because the flowers are the perfect shape for their beaks," said Gerchman. "Also, if you have a big ugly fence you want to hide, Trumpet Vines are aggressive climbers. They have beautiful flowers and foliage," she added. Gerchman added that you can plan your hummingbird garden with perennials or annuals, and you can also incorporate shrubs. An added bonus of strategically planting shrubs is that they can help cut your summer electrical costs by shading your house.
- Recommended perennials include Aster, Bee Balm, Day Lily, Foxglove, Globe Thistle, Hollyhock, Lupine, Milkweed, and Phlox. - Annuals include Cleome, Fuchsia, Impatiens, Petunia, Salvia, Snapdragon, and Zinnia. - Vines to attract hummingbirds include Honeysuckle, Morning Glory, and Scarlet Runner Bean.
Evolution is change over time. Under this broad definition, evolution can refer to a variety of changes that occur over time—the uplifting of mountains, the wandering of riverbeds, or the creation of new species. To understand the history of life on Earth, though, we need to be more specific about what kinds of changes over time we're talking about. That's where the term "biological evolution" comes in. Biological evolution refers to the changes over time that occur in living organisms. An understanding of biological evolution—how and why living organisms change over time—enables us to understand the history of life on Earth.

The key to understanding biological evolution lies in a concept known as descent with modification. Living things pass on their traits from one generation to the next. Offspring inherit a set of genetic blueprints from their parents. But those blueprints are never copied exactly from one generation to the next. Little changes occur with each passing generation, and as those changes accumulate, organisms change more and more over time. Descent with modification reshapes living things over time and biological evolution takes place.

All life on Earth shares a common ancestor. Another important concept relating to biological evolution is that all life on Earth shares a common ancestor. This means that all living things on our planet are descended from a single organism. Scientists estimate that this common ancestor lived some 3.5 to 3.8 billion years ago and has since given rise to all living things that have inhabited our planet. The implications of sharing a common ancestor are quite remarkable and mean that we're all cousins—humans, green turtles, chimpanzees, monarch butterflies, sugar maples, parasol mushrooms and blue whales.

Biological evolution occurs on different scales. These scales can be roughly grouped into two categories: small-scale biological evolution and broad-scale biological evolution.
Small-scale biological evolution, better known as microevolution, is the change in gene frequencies within a population of organisms from one generation to the next. Broad-scale biological evolution, commonly referred to as macroevolution, refers to the progression of species from a common ancestor to descendant species over the course of numerous generations.
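The "change in gene frequencies from one generation to the next" that defines microevolution can be made concrete with a toy simulation. A sketch of pure genetic drift (one of several mechanisms of microevolution; the population size and starting frequency are hypothetical):

```python
import random

def drift(p, pop_size, generations, seed=1):
    """Simulate one allele's frequency under pure genetic drift:
    each generation, 2N allele copies are drawn at random (binomially)
    from the parent generation's allele pool."""
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    freqs = [p]
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        freqs.append(p)
    return freqs

# Start at 50% frequency in a diploid population of 50, run 20 generations.
trajectory = drift(p=0.5, pop_size=50, generations=20)
print(trajectory[0], trajectory[-1])  # starts at 0.5; ends wherever chance takes it
```

Even with no selection at all, the frequency wanders each generation, which is exactly the generation-to-generation change the definition describes.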
When you go on an orienteering course, you need to have some ways to measure things. Some of the methods used require you to have something of known length for comparison. A personal measurement log will help you with this. Record some common measurements before you go out – arm span, arm reach, hand span, index finger length, foot length, wrist to elbow, and height. The important part is to be consistent in how you spread your hand or exactly where you measure. Then when you need to measure something on the course, you can choose the personal measurement which best fits your need. You should also determine your pace length. Go to a large space of known distance; 100 meters is considered optimal. Count the number of paces you need to walk that distance in a normal manner. Now divide the distance by the number of paces you took to cover it. That is your pace length. Boy Scouts can use this with First Class Requirement 2: Using a map and compass, complete an orienteering course that covers at least one mile and requires measuring the height and/or width of designated items (tree, tower, canyon, ditch, etc.) Printable copy of Personal Measurement Log
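The pace-length calibration described above is simple arithmetic: divide the known distance by the paces taken, then multiply by a pace count to estimate an unknown distance. A small sketch (the pace counts here are hypothetical examples):

```python
def pace_length(known_distance_m, paces):
    """Pace length = a known distance divided by the paces taken to cover it."""
    return known_distance_m / paces

def estimate_distance(pace_len_m, paces):
    """Distance estimate on the course: paces walked times pace length."""
    return pace_len_m * paces

# Calibrate over a measured 100 m course, then estimate an unknown distance.
my_pace = pace_length(100, 125)              # 0.8 m per pace
print(round(estimate_distance(my_pace, 240), 1))  # 192.0 m
```

Recalibrating on terrain similar to the course (uphill paces are shorter than track paces) keeps the estimate honest.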
One of the best historical sights in Myanmar is Bagan, formerly known as Pagan, offering stunning views at sunrise and sunset. Over 13,000 pagodas once sprawled over this dry land during the golden age of the 11 great kings (approximately 1044-1287); this era ended with the threat of attacks by Kublai Khan from China, and the region was deserted. Today there are fewer than 3,000 pagodas. The existing village of Bagan boasts a museum, market, restaurants, lacquerware workshops and an impressive temple, all within easy access. The Bagan area covers approximately 40 sq kilometres (15 sq miles), housing dozens of open temples. Among the tourist attractions are the Shwegugyi Temple (constructed in 1311 and known for its intricate stucco carvings), the Gawdawpalin Temple (severely damaged by an earthquake in 1975, but still one of the most attractive of the Pagan temples) and the Thatbyinnyu Temple (the highest temple in Bagan). This ancient royal city is teeming with palaces, temples, pagodas and stupas and is the principal centre of Buddhism and Burmese arts, even though Mandalay has experienced many bad fires which have destroyed several buildings. Visitors can find gold-leaf industries, stone-carving workshops and various great craft markets in Mandalay. Taking its name from Mandalay Hill (rising about 240 metres or 787 feet to the northeast of the palace), the city was established in 1857 by King Mindon. The old wooden palace buildings at Amarapura have been relocated and rebuilt. Sites of importance include the massive Shweyattaw Buddha (located near the hill, with its finger pointing towards the city), the Eindawya Pagoda (constructed in 1847 and covered in gold leaf), the Shwekyimyint Pagoda (housing the original Buddha image sanctified during the Bagan period by Prince Minshinzaw) and the Mahumuni Pagoda or ‘Great Pagoda’ (containing the revered Mahumuni image).
Cased in gold leaf over the years by Buddhists, this image was brought from Arakan in 1784, though it is believed to be much older. The foundation, moat and large walls are all that is left of the once marvellous Mandalay Palace, at one time an enormous walled city (mostly of timber construction) rather than a palace. It was burnt down in 1942. A large-scale model depicts what it must have been like. The Shwenandaw Kyaung Monastery was once part of the palace complex, which King Mindon and his chief queen used as an apartment. Like the palace, the wooden building was at one time attractively gilded. There are some intricately carved panels inside and also a photograph of the Atumashi Kyaung Monastery, destroyed in 1890 by fire. The remains can be seen to the south of the Kuthodaw Pagoda, called ‘the world’s biggest book’ because of the 729 marble slabs that encircle the central pagoda; they are engraved with the whole Buddhist canon. Mandalay houses many older, deserted capital cities. Sagaing, once the capital of a Shan kingdom, has attractive pagodas at Aungmyelawka, Kaunghmudaw and Tupayon. In the 15th century, Ava became the kingdom’s new capital and remained so well into the 19th century, when the kingdom disappeared; the old city walls can still be traced. Mingun, a river trip from Mandalay, has the Mingun Bell, believed to be the largest uncracked, hung bell in the world. It was cast in 1790 by King Bodawpaya and was meant to be hung in his huge pagoda, which was never completed because of the king’s death in 1819. The foundation of the pagoda alone is approximately 50 metres or 165 feet high. In 1783, Bodawpaya founded Amarapura, south of Mandalay. The city is well-known for its silk weaving and cotton.
Concentration and Activity of Lapatinib in Vestibular Schwannomas

Tumors can grow on the auditory nerves and can cause hearing loss. A common type of tumor that does this is a vestibular schwannoma (VS), or acoustic neuroma. These tumors are not cancerous. Most often, people have only one VS. Occasionally, people have more than one VS and may have a condition called neurofibromatosis type 2 (NF2). Because VS can cause hearing loss, many people with VS will have treatment to preserve their hearing. This treatment usually involves surgery or radiation therapy. There are risks to these procedures, and sometimes they do not work to prevent hearing loss. Because surgery and radiation have risks and are not able to help everyone with VS, other methods of treatment are being explored. One area of exploration is looking to see if there is a drug that can be taken that might prevent the VS from growing larger and causing hearing loss, and might possibly even cause the VS to shrink in size. This study is exploring whether a drug that is approved by the FDA and is currently used to treat breast cancer might also work to treat VS. This study will measure the amount of drug that travels from the bloodstream and arrives at the tumor. This drug is safe and has few side effects. If this drug is shown to reach the tumor, it might be used in the future to treat VS without needing surgery or radiation. This study is recruiting people who are having surgery for VS. If you are going to have surgery to treat a VS, you may be eligible to participate.

Study Design:
- Allocation: Non-Randomized
- Endpoint Classification: Pharmacokinetics Study
- Intervention Model: Parallel Assignment
- Masking: Open Label
- Primary Purpose: Basic Science

Official Title: Exploration and Estimation of Intratumoral Concentration and Activity of Lapatinib in Vivo in Vestibular Schwannomas

Outcome measures:
- To assess steady-state lapatinib plasma concentrations at the time of surgical resection, 10 (+3) days after oral dosing.
(Time Frame: one year; Designated as safety issue: No)
- To assess whether lapatinib can reach a minimum tumor concentration level of >3 uM in VS after oral dosing. (Time Frame: one year; Designated as safety issue: No)
- To assess the level of ErbB2 and EGFR phosphorylation and activity of downstream signaling effectors in VS. (Time Frame: one year; Designated as safety issue: No)
- To assess markers of tumor proliferation and cell death in VS after exposure to lapatinib. (Time Frame: one year; Designated as safety issue: No)
- To explore the difference in the concentration of lapatinib achieved in NF2-related versus idiopathic VS. (Time Frame: one year; Designated as safety issue: No)
- To perform NF2 gene mutation analysis via exon scanning and MLPA, as well as protein expression analysis, in all VS and explore differences between sporadic and NF2-related VS. (Time Frame: one year; Designated as safety issue: No)

Study Start Date: June 2009
Estimated Primary Completion Date: December 2012 (final data collection date for primary outcome measure)

Experimental arm (lapatinib): Subjects will receive lapatinib (1500 mg by mouth per day; other name: Tykerb) for 10 days prior to surgery for vestibular schwannoma resection.
No Intervention (control): Control subjects will not receive any intervention prior to surgery for vestibular schwannoma resection.

Neurofibromatosis type 2 (NF2) is a rare autosomal dominant genetic disorder with an incidence of approximately 1/40,000. The most common tumor type in NF2 is vestibular schwannoma, and the majority of NF2 patients develop progressive hearing loss in adolescence or young adulthood due to bilateral vestibular schwannomas (VS). In addition to hearing loss, VS can cause significant morbidity, and in some cases mortality, due to brain stem compression. Currently, the only accepted modality for treatment of VS in patients with NF2 is surgical resection.
Although surgical resection is effective at tumor reduction, it is often associated with morbid complications such as hearing loss, facial palsy, CSF leaks, chronic headache and infection. In addition, the tumors often recur after surgery. Radiation therapy (RT) has been proposed as an alternative; however, its safety in the NF2 population has not been established and there is concern about long-term efficacy. For a distinct population of NF2 patients, surgery and RT are not feasible and no additional therapy is currently available. Hence, a systemic therapy is needed. Sporadic VS are common, with roughly 3,000 new cases per year in the United States and a growing incidence in recent years. These tumors cause unilateral hearing loss, tinnitus, and vertigo. The primary treatment modality for these tumors is surgical resection or radiosurgery. Surgery is associated with the same complications listed above for NF2-related VS. Hence, RT is often offered in place of surgery. Although considered safe in sporadic VS, it may not have good long-term efficacy and may complicate future procedures. Again, a systemic therapy that could control tumor progression, obviating the need for an invasive procedure, is needed. As the understanding of tumor molecular biology continues to advance, there are an increasing number of attractive targets for VS growth inhibition. EGFR and ErbB2 have been identified as important targets for VS. In a study of 21 sporadic and 17 NF2-related VS samples, both EGFR and ErbB2 were found to be upregulated in the majority of tumors. In addition, an anti-ErbB2 monoclonal antibody reduced schwannoma cell proliferation in vitro. Collectively, this data suggests that abnormal signaling via EGFR and ErbB2 is a major contributor to tumor growth and progression in both sporadic and NF2-related VS, and that inhibition of this signaling pathway can result in decreased tumor growth.
Although agents targeting these pathways are commercially available, there is little pre-clinical data to assist in prioritizing which agents to advance to clinical trials. Given the relative rarity of the disorder and the enormous patient, financial and time commitments an efficacy study requires, there is a need to carefully select agents for testing that have the best chance of success. In this trial, we propose to assess the delivery of lapatinib, a commercially available inhibitor of ErbB2 and EGFR, to VS via tissue sampling at the time of clinically indicated surgery. Demonstrating that lapatinib reaches meaningful intratumoral concentrations is important data to recommend this drug above other small-molecule inhibitors for efficacy trials for VS. The primary objective is to determine the steady-state concentration of lapatinib in VS in patients with NF2 and in patients with sporadic VS. Patients who are planning to have surgical resection of their tumor for clinical indications will be given lapatinib for 10 days prior to resection. At the time of resection, VS tissue will be assessed for drug concentration and molecular markers of drug activity. Demonstrating that lapatinib reaches meaningful concentrations within VS would support selecting this agent for investigation in efficacy studies for VS, and tissue-based molecular studies will provide corollary information about the behavior of VS in general and about lapatinib specifically in VS tissue. This may further our understanding of the pathophysiology of VS, the similarities and differences between NF2-related and sporadic VS, and inform the design of subsequent efficacy trials.
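The trial's >3 uM tumor-concentration criterion is stated in molar units, while drug assays often report mass per volume. A hedged sketch of the conversion, assuming a molecular weight of roughly 581 g/mol for lapatinib free base (an illustrative value, not taken from the protocol):

```python
# Hedged sketch (not from the protocol): convert a measured lapatinib
# concentration from ug/mL to micromolar, then compare against the trial's
# >3 uM tumor-concentration criterion. Assumed molecular weight below.
LAPATINIB_MW_G_PER_MOL = 581.06  # assumed value for lapatinib free base

def ug_per_ml_to_um(conc_ug_per_ml):
    """1 ug/mL = 1 mg/L, so uM = (ug/mL) * 1000 / (g/mol)."""
    return conc_ug_per_ml * 1000.0 / LAPATINIB_MW_G_PER_MOL

def meets_threshold(conc_ug_per_ml, threshold_um=3.0):
    """True if the measured concentration exceeds the molar threshold."""
    return ug_per_ml_to_um(conc_ug_per_ml) > threshold_um

print(meets_threshold(1.0))  # False: about 1.72 uM, below the 3 uM threshold
print(meets_threshold(2.0))  # True: about 3.44 uM
```

This is only a unit-conversion illustration; the actual assay methodology and reporting units are defined by the study protocol.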
Contact: Latoya Stewart, BA, firstname.lastname@example.org

United States, California
- House Research Institute, Los Angeles, California, United States, 90057 (Recruiting)
  Contact: Roberta Leyvas, 213-273-8025, email@example.com
  Principal Investigator: William H Slattery, MD; Sub-Investigator: Marco Giovannini, MD

United States, Maryland
- Johns Hopkins Hospital, Baltimore, Maryland, United States, 21287 (Recruiting)
  Contact: Latoya Stewart, BA, 410-614-9916, firstname.lastname@example.org
  Principal Investigator: Jaishri O Blakeley, MD; Sub-Investigator: John Niparko, MD

United States, Massachusetts
- Massachusetts General Hospital, Boston, Massachusetts, United States, 02114 (Recruiting)
  Contact: Teresa Alati, 617-726-0160, email@example.com
  Principal Investigator: Scott Plotkin, MD

United States, Missouri
- Washington University Medical Center, St. Louis, Missouri, United States, 63110 (Recruiting)
  Contact: Lisa Ochsner, firstname.lastname@example.org
  Principal Investigator: David Tran, MD

United States, New York
- New York University Medical Center, New York, New York, United States, 10016 (Recruiting)
  Contact: Iyore Ayanru, 212-263-9945, email@example.com
  Principal Investigator: Matthias Karajannis, MD; Sub-Investigators: Jeffrey Allen, MD; J T Roland, MD; John Golfinos, MD; Pamela Roehm, MD; David Zagzag, PhD
- Weill Cornell Medical College, New York Presbyterian Hospital, New York, New York, United States, 10065 (Recruiting)
  Contact: Kerry Maleska, RN, CPNP, 212-746-3276, Kam9123@NYP.org
  Principal Investigator: Kaleb Yohay, MD

United States, Ohio
- Ohio State University Medical Center, Columbus, Ohio, United States, 43210 (Recruiting)
  Contact: Beth Miles-Markley, MS, 614-366-9244, firstname.lastname@example.org
  Principal Investigator: D. Bradley Welling, MD; Sub-Investigators: Abraham Jacob, MD; Edward E Dodson, MD

Principal Investigator: Jaishri O Blakeley, MD (Johns Hopkins University)
The Civilian Conservation Corps: Roosevelt's Tree Army in Maryland
Part III: 100th Anniversary CCC Plaque Dedication and Personal Recollections

On Sept. 17, 2006, nine veterans of Roosevelt's Civilian Conservation Corps joined DNR Secretary Ron Franks for the commemoration of their efforts with a new plaque at Gambrill State Park.

Remarks by DNR Secretary C. Ronald Franks

The Civilian Conservation Corps -- truly one of the most spectacularly successful public works projects in American history -- was born of the despair of the Great Depression in the 1930s. It was the era of soup kitchens, Hoovervilles, and “The Grapes of Wrath.” Part of President F.D. Roosevelt’s famous “Hundred Days” legislation to get the nation back on its feet was the Emergency Work Act, one aspect of which established the CCC. The CCC recruited millions of young, unemployed men across the nation to perform conservation work in forests, parks, on waterways and even on private property, to reclaim the nation’s natural resource base. Government could work amazingly fast in those days: FDR signed the Emergency Work Act on March 27, 1933, and the first camp opened in Virginia only 21 days later! By the first of July, 270,000 enrollees were serving in 1,300 camps across the nation. The CCC boys were organized into 200-man companies and assigned to work camps across the nation. The U.S. Army provided discipline, camp officers, quarters, food, and medical care. Federal, state and local authorities designated work projects, trained the boys to do the work, and provided oversight of the work. The boys themselves received discipline, hearty food, clothing, shelter, medical care, educational opportunities, and, most importantly of all, a sense of hope and purpose. Their monthly pay was $30 -- $25 of which was sent home to their families. These pay disbursements alone had a significant impact on the national economy.
By the time World War II superseded the CCC, nearly 3 million young men had served, and they had enhanced millions of acres of natural resources and historic sites across the nation. Maryland benefited hugely from the CCC. Over 30,000 CCC boys served in our state, at over 60 camps. Together they:
- built 274 bridges;
- constructed 3,500 erosion check dams;
- planted four and a half million trees;
- improved over 60,000 acres of forest stands;
- and reduced fire hazards on over 23,000 acres.

The CCC boys also built the first major state park facilities in Maryland:
- Herrington Manor (cabins and lake)
- Swallow Falls (pavilions, trails, camp sites)
- Big Run
- New Germany (cabins, lake, pavilions)
- Gambrill (all you see here, including this overlook where we are standing, was built by the CCC)
- Elk Neck (cabins)
- Fort Frederick (the fort's walls were restored and support facilities built)
- Washington Monument (the monument was reconstructed, picnic pavilions and support facilities were built)
- Patapsco Valley (trails and pavilions)
- Pocomoke (a public fishing pier)

If ever a debt of gratitude was owed by a present generation to a past one, our nation and state owe a huge one to the work of the Civilian Conservation Corps.

- Address delivered by C. Ronald Franks, Secretary, Maryland Department of Natural Resources, Gambrill State Park, Sept. 17, 2006

Park Service Veterans Gather for Reunion
By Geoffrey D. Brown

FREDERICK — Clarence Simmons, 88, stood a few yards from Gambrill State Park's Frederick Overlook and admired the stone he hauled almost 60 years ago. Maryland's parks are what they are in large part due to the work of almost 40,000 young men who swept into the state from 1933 to 1942 and labored in dozens of camps, restoring forests, building shelters and cabins and fire towers, blazing trails and fighting fires. On Sunday, nine veterans of President Franklin D.
Roosevelt's storied Civilian Conservation Corps returned for the commemoration of their efforts with a new plaque at Gambrill State Park. Fifteen had been invited. One had died since the invitations were sent. The ceremony was part of the Maryland Park Service's 100th anniversary celebration. Joining the ceremony also were family members of Maryland's first State Forester, Fred W. Besley, who was among a handful of conservationist pioneers nationwide. Mr. Besley had a staff of 25 at most before the CCC boys arrived. The "tree army" of unemployed boys and young men earned $30 a month to start — $25 of it sent home to family — in Roosevelt's jobs program, which aimed to restore the land and put unemployed men to work. "It was the greatest shot in the arm the young forestry department could have had," said Offutt Johnson, a retired state naturalist who worked closely with Frederick County officials to establish and improve parks in the county. Mr. Simmons, who now lives in Hagerstown, worked in several state parks, and drove a truck carrying rock to construction sites, and ferrying workers to and from the camp on what is now Old Camp Road. Joe Bianchini, 85, of Mount Rainier, visited with his wife Anita, and they both recalled their own service to the country. Mr. Bianchini, who grew up in the Bronx, New York City, took the train with 500 CCC boys and wound up in Idaho, where he repaired roads. Ms. Bianchini was a riveter at an airplane factory during World War II. John Patrick Curley, 89, of Spring Ridge spent six years in eight camps and learned what was to be his lifelong trade as an operating engineer, running heavy construction equipment. Joseph Decenzo, 88, of Clinton, Md., was a camp clerk, became a leader at the Sligo, Pa. camp at a whopping $45 a month, and was a star on camp baseball, softball and basketball teams. Keith Paugh, 81, of Middle River, Md., was a truck driver at the New Germany, Md. camp, and made $36 a month as an assistant leader. 
"I didn't spend all the money, either, did you?" Mr. Paugh asked Mr. Decenzo. "No, I didn't either," Mr. Decenzo said. A movie cost 15 cents, a pack of cigarettes a nickel. George Smith, 81, of Bowie, showed off a small, battered tin frame with a black-and-white photo of himself in his CCC uniform, aged 18. The photo and frame cost a dime. Mr. Smith joined the CCC while still in high school in June of 1941. In September he went back to school. That December, the Japanese bombed Pearl Harbor, and in January 1942 Mr. Smith joined the Navy. In Maryland, the CCC built 274 bridges, installed 3,500 check dams to preserve trails, planted 4.5 million trees, and improved over 60,000 acres of park land. "All you see here, including this overlook where we're standing, was built by the CCC boys," C. Ronald Franks, secretary of the Maryland Department of Natural Resources, told the gathering. "Our nation and our state owe a debt of gratitude to the CCC."

Note: The above article, "Park service veterans gather for reunion," by Geoffrey D. Brown, News-Post Staff, is reprinted here with permission of the Frederick News-Post and Randall Family, LLC, as published on September 18, 2006. Maryland DNR and the Centennial Committee would like to thank Mr. Brown for his thorough coverage of this event.

New Germany Remembers the CCC
By Bill Martin

One bright, sunny morning in June 1933 a convoy of covered stake-body and dump trucks appeared at New Germany. They carried tents, a field kitchen, water purification equipment, clothing, tools and a cadre of regular army enlisted men from Fort Meade. New Germany was designated Civilian Conservation Corps (CCC) Company 326, S-52. The trucks would be used to transport the company of CCC enlistees to the temporary site. All equipment and supplies were off-loaded into the field and the covered trucks were dispatched to Meyersdale, PA. The men had traveled from Baltimore via the B&O Railroad. The convoy returned in late afternoon with a bedraggled crew.
Some of the men had never been out of the city before and were unprepared for camp life. The minimum age for enlistment was 18, but in the camp at New Germany there were frequently lads of 14 and 15. Enlistment was for six months but could be extended. The first problem was providing shelter for 150 men. Piles of canvas lying on the ground, their new homes, had to be assembled before they could sleep. A field kitchen was set up and supper served. Bedding was issued. Eight to a tent, the men were bedded down for the night. In addition to the usual complement of 120 men, there was a first sergeant, supply sergeant, mess sergeant and company clerk. Officers included a company commander, adjutant and doctor. These were reserve officers called to active duty by Congress. The tent city included squad tents, officers' tents and a supply tent. The orderly room and dispensary tents were then set up for operation. The field kitchen was in the center. All meals were served from that facility. Slit trenches were used initially and later outside toilets were built. All water was trucked into the camp. Drinking water was available in Lister bags throughout the camp. The only bath facility was the lake. After several days of orientation, the boys were issued clothing and equipment. This included a mess kit - two flat pans hinged together, a knife, fork and spoon, a solid aluminum cup and a canteen. Each boy was issued blue dungaree trousers, coats and a round blue hat as a work uniform. Olive drab (OD) shirts and pants completed the dress uniform. An overseas hat and overcoat were issued later in the year. Other wardrobe items included a raincoat, galoshes, ties, gloves, shoes, socks, underwear and towels. At the onset everything was one size (too big). It took some time to fit each individual. But everyone had something to wear. The first big project was to build platforms for the tents. The tents were erected on the platforms.
When stretched tightly and tied down, a tent was very cozy inside. A space heater was installed in each tent. The first winter at New Germany was spent in tents. The field kitchen served three meals each day. Kerosene stoves were used for cooking. A large gasoline generator in the woods east of the tent city provided electricity. It supplied power for the refrigerators, officers' tents, dispensary and orderly room. Enlistees used candles and kerosene lanterns in the squad tents. At mealtime, the entire company would assemble in a line at the field kitchen. Meals were served cafeteria style. In inclement weather, mess kits were carried back to the tents. At other times, the CCC boys ate under the shade of the nearest tree. The quality and quantity of the food served from the field kitchen were outstanding. Most boys never ate any better at home. Most of the food was locally produced. After eating, the boys sterilized their kits. They scrubbed their kits in a can of boiling soapy water and then double-rinsed them in a can of clear boiling water. Periodically they scoured the kits with sand to shine them. Woe unto anyone found with a dirty mess kit. One of the first permanent buildings was the mess hall. Construction began in the fall of 1933. The building was 200 by 30 feet, with the kitchen about halfway along the east side. At the north end, the supply room held staples and canned goods and a refrigerated compartment. The south end was the officers' mess. The kitchen was modern. Hotel ranges fired with coal were used for cooking and baking. A serving line formed on the east side of the mess. Picnic tables were used in the hall. A generator supplied lights. Heat came from three large space heaters. After several months of operation, the camp commander enlisted local men as trainers and supervisors. This category of enlistees, known as local experienced men (LEMs), was permitted to live at home.
They earned several dollars more per month, wore uniforms and were subject to the same rules and regulations as the other enlistees.

Next came the construction of permanent quarters. Local carpenters supervised construction of six 80 by 30 foot barracks on poles east of the recreation hall. There were three barracks on each side, with a company area and flag pole in the center. These barracks were not occupied by the company members until late spring 1934. The winter spent in tents acquainted everyone with the hardships of winter. By early summer of 1934, the majority of the permanent buildings had been completed and were in use.

The recreation hall was the center of camp life. The rec hall is little changed from when it was built. There were several pool tables and table tennis games. Books and writing materials were available. Gambling was prohibited, but there was usually some sort of card game being played. The canteen was located in the alcove that now houses the snack bar. It was open daily and catered to personal needs. North of the recreation hall was a combination bath house/toilet. This heated facility was welcome after almost a year without showers or toilet facilities.

With most of the buildings completed, the CCC boys formed crews to build roads. These included the road from the top of Savage Mountain to the High Rock Tower. The timber used to build the cabins and picnic shelters was cut and sawed by Sam Otto on his sawmill. Fire control was another duty during fire season. In the winter, most of the roads were shoveled by the CCC boys. Snow plows did not venture onto the back roads. Camp personnel eventually built a wooden snow plow pulled by a small tractor.
The summer of 1934 saw the rebuilding of the breast of the lake. It was just a jumble of rocks, stumps and logs. The lake was completely drained and the existing breast completely removed. The present earthen breast works were built along the spillway. The lake was drained several times between 1934 and 1938 for the purpose of stump and log removal. The bigger fish were taken to Sam Otto's pond. It was not unusual to catch brown and rainbow trout from the catch pen below the spillway that were 30 inches in length. Most other fish were allowed to enter Poplar Lick and eventually made their way into the Savage River.

Most boys gave a day's work for a day's pay. Foremen did not work anyone beyond his ability. Malingerers, gold brickers and malcontents were 'fired' from their jobs and returned to camp. They cleaned out grease traps in the kitchen or joined the "honey bucket brigade" cleaning out the toilets. After several days they were overjoyed to return to their job on the road or in the woods.

Periodically, a vaudeville show would perform at the camp. This was usually well attended by both camp personnel and local citizens. The generators were shut down after 9 p.m. One was kept running to provide lights for fire exits and the orderly room. "Lights out" was strictly observed. Some individuals circumvented this policy by hiding under the covers with a flashlight to read or write letters.

Church services were held at the rec hall on Sunday providing there was a chaplain available. He was known as "Holy Joe." If no chaplain was available, those who wished to attend church were taken to local churches. Movies were shown in the rec hall several times each week. They were free to the camp personnel. Local citizens were charged five cents. If you didn't have the nickel, you were welcome anyway. Saturday night was a special night. This was Liberty Run night. The boys were loaded into the covered trucks and taken to Frostburg or Lonaconing.
Being close to a body of water was a great temptation to many of the boys. Some of them could not swim, so in the summer of 1934, and every summer thereafter, water safety courses and swimming instruction were given to anyone who was interested.

In 1938, CCC Company 326 at New Germany was disbanded. All men and equipment were moved to Meadow Mountain camp, S-68.

Note: This story originally appeared in the Fall 1990 issue of Parkline. Billy Martin grew up working at the newly-created New Germany State Park with his father, the first state forester at Savage River State Forest. After World War II, Martin worked at New Germany and Patapsco before reenlisting in the Air Force, from which he retired in 1965. He returned to the Grantsville area and beginning in 1985 he served as a contractual employee 10 months out of the year at New Germany. He volunteered his time during the other two months and inherited the job of historical interpreter.

Part I - A National Perspective
Part II - A Maryland Perspective
Impact assessment research: use and misuse of habituation, sensitisation and tolerance in describing wildlife responses to anthropogenic stimuli

Bejder, L., Samuels, A., Whitehead, H., Finn, H. and Allen, S. (2009) Impact assessment research: use and misuse of habituation, sensitisation and tolerance in describing wildlife responses to anthropogenic stimuli. Marine Ecology Progress Series, 395. pp. 177-185. Open access, no subscription required.

Studies on the effects of anthropogenic activity on wildlife aim to provide a sound scientific basis for management. However, misinterpretation of the theoretical basis for these studies can jeopardise this objective and lead to management outcomes that are detrimental to the wildlife they are intended to protect. Misapplication of the terms ‘habituation’, ‘sensitisation’ and ‘tolerance’ in impact studies, for example, can lead to fundamental misinterpretations of research findings. Habituation is often used incorrectly to refer to any form of moderation in wildlife response to human disturbance, rather than to describe a progressive reduction in response to stimuli that are perceived as neither aversive nor beneficial. This misinterpretation, when coupled with the widely held assumption that habituation has a positive or neutral outcome for animals, can lead to inappropriate decisions about the threats human interactions pose to wildlife. We review the conceptual framework for the use of habituation, sensitisation and tolerance, and provide a set of principles for their appropriate application in studies of behavioural responses to anthropogenic stimuli. We describe how cases of presumed habituation or sensitisation may actually represent differences in the tolerance levels of wildlife to anthropogenic activity.
This distinction is vital because impact studies must address (1) the various mechanisms by which differing tolerance levels can occur; and (2) the range of explanations for habituation- and sensitisation-type responses. We show that only one mechanism leads to true behavioural habituation (or sensitisation), while a range of mechanisms can lead to changes in tolerance.

Publication Type: Journal Article
Murdoch Affiliation: Centre for Fish and Fisheries Research
Copyright: (c) Inter-Research 2009
The Optics Jones program displays a traveling electromagnetic wave. The default electromagnetic wave is right-circularly polarized, but this polarization can be changed by specifying the components of the wave's Jones vector using the input fields. Jones is an Open Source Physics program written for the teaching of optics. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the optics_jones.jar file will run the program if Java is installed. Other optics programs are also available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Optics. Please note that this resource requires at least version 1.5 of Java.

%0 Computer Program
%A Simov, Kiril
%A Christian, Wolfgang
%D May 1, 2008
%T Jones Program
%7 1.0
%8 May 1, 2008
%U http://www.compadre.org/Repository/document/ServeFile.cfm?ID=7179&DocID=363

Disclaimer: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
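The Jones-vector idea the program is built on can be checked numerically. Below is a minimal sketch, separate from the OSP program itself, that builds a right-circular Jones vector and sends it through an ideal horizontal linear polarizer; note that the sign convention for circular polarization varies between texts, so the vector used here is one common choice, not necessarily the program's.

```python
import numpy as np

# Jones vector for right-circular polarization under one common
# convention: E = (1/sqrt(2)) * [1, -i]. Other texts flip the sign.
rcp = np.array([1, -1j]) / np.sqrt(2)

# Jones matrix for an ideal horizontal linear polarizer.
horizontal_polarizer = np.array([[1, 0],
                                 [0, 0]])

out = horizontal_polarizer @ rcp

# Intensity is |Ex|^2 + |Ey|^2; a circular state loses half its
# intensity through any ideal linear polarizer.
intensity_in = np.sum(np.abs(rcp) ** 2)
intensity_out = np.sum(np.abs(out) ** 2)
print(intensity_in, intensity_out)  # approximately 1.0 and 0.5
```

Changing `rcp` to any other normalized Jones vector (linear, elliptical) and multiplying by the appropriate Jones matrix reproduces what the program animates.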
What Part do Women Play in Sustaining the World? Yolí Sánchez Neyoy, Guest Contributor

A sexual and reproductive health advocate’s work is never done. Although the 45th Session of the UN Commission on Population and Development just ended, it is already time to raise our voices in preparation for Rio+20, the United Nations Conference on Sustainable Development that will convene in Brazil in June. One hot button issue that is popping up on blogs and social media as the conference approaches is the need for women’s sexual and reproductive health to be represented in world leaders’ commitments toward sustainable development. But what exactly is the role sexual and reproductive health and rights plays in sustainable development? And why should we put the emphasis on women?

Let me begin with an aside: most of us who advocate for sexual and reproductive health and rights (SRHR) are not monothematic in our perspectives or politics. We recognize that humanity has a number of diverse needs and that there are many challenges to overcome in the world. Even though our work appears to be focused on one particular topic, we understand that a healthy world requires not only SRHR, but also sufficient resources for the growing population and care for the environment. All of these goals will be achieved through cross-issue collaboration—not silos—which is why Rio+20 is such an important once-in-a-generation event.

From an SRHR perspective, a healthy and productive population must include women who are empowered to make informed choices regarding their health and reproduction. It also includes teenage girls who are able to avoid unwanted pregnancies and births, young people who know how to protect themselves from sexually transmitted infections like HIV, girls who are encouraged to and supported in getting an education, and societies in which women are not subjected to gender-based violence while securing food and water for their families.
We cannot afford to ignore the connection between SRHR, women’s rights, environmentalism, and sustainability. We need global leaders to make these critical and interconnected issues a top priority on the Rio+20 agenda. If you, like me, are out there fighting for an outcome that truly promotes sustainable development, tell the decision makers attending the United Nations Conference on Sustainable Development to give women’s rights and SRHR their rightful place on the Rio+20 agenda. As IPPF/WHR Regional Director Carmen Barroso wrote in Grist, “Women hold up half the sky, as the old Chinese proverb says, and they must be protagonists in the next chapter of the world’s aspirations for a sustainable future.” Yolí Sánchez Neyoy is a Mexican sexual and reproductive health activist who makes her contribution to the world as youth involvement officer at dance4life.
Significance and Use

4.1 This test method evaluates the ability of coated fabrics to withstand a prescribed bend at an established low temperature. Fabrics coated with polymeric materials are used in many applications requiring low temperature flexing. Data obtained using this test method may be used to predict in-use behavior only in applications in which the conditions of deformation are similar to those specified in this test method. This test method has been found useful for specification purposes but does not necessarily indicate the lowest temperature at which the material may be used.

1.1 Fabrics coated with rubber or rubber-like materials display increased stiffening when exposed to decreasing ambient temperatures. This test method describes a simple pass/fail procedure whereby material flexibility at a specified low temperature can be determined. Failure is indicative of unacceptability of the coated fabric for use at that temperature.

1.2 The values stated in SI units are to be regarded as standard. The values given in parentheses are for information only.

1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific precautionary statement see 8.1.

2. Referenced Documents (purchase separately)

The documents listed below are referenced within the subject standard but are not provided as part of the standard.

D751 Test Methods for Coated Fabrics

Keywords: bend test; coated fabrics; flexibility; low temperature; low temperature bend test; rubber-coated fabrics; subnormal test temperature

ICS Number Code 59.080.40 (Coated fabrics)

ASTM International is a member of CrossRef.
Martin's Big Words: The Life of Dr. Martin Luther King, Jr. (edition 2007) by Doreen Rappaport, Bryan Collier (Illustrator)

Amazon.com Review (ISBN 0786807148, Hardcover): In this elegant pictorial biography of Martin Luther King Jr., author Doreen Rappaport combines her spare, lyrical text with King's own words for an effective, age-appropriate portrayal of one of the world's greatest civil rights leaders. From King's youth, when he looked up to his preacher father and vowed one day to "get big words, too," to his death at a garbage workers' strike ("On his second day there, he was shot. He died."), Rappaport imbues the story with reverence. Acclaimed artist Bryan Collier depicts his subject with stunning watercolor and collage illustrations, balancing glorious recreations of stained glass windows with some of the more somber images of peace marchers and the famous bus that pitched Rosa Parks into the civil rights movement. A brief chronology and bibliography provide additional resources for readers. Here is an exquisite tribute to a world hero. (Ages 4 and older) --Emilie Coulter (retrieved from Amazon Thu, 14 Apr 2011 12:03:07 -0400)

A short biography of Dr. Martin Luther King, Jr. (summary from another edition)
Uniform Convergence of Power Series

So, what’s so great right now about uniform convergence? As we’ve said before, when we evaluate a power series we get a regular series at each point, which may or may not converge. If we restrict to those points where it converges, we get a function. That is, the series of functions converges pointwise to a limiting function. What’s great is that for any compact set contained within the radius of convergence of the series, this convergence is uniform!

To be specific, take a power series $\sum_{n=0}^\infty a_n z^n$ which converges for $|z| < R$, and let $K$ be a compact subset of the disk of radius $R$. Now the function $|z|$ is a continuous, real-valued function on $K$, and the image of a compact space is compact, so $|z|$ takes some maximum value on $K$. That is, there is some point $z_0 \in K$ so that for every point $z \in K$ we have $|z| \leq |z_0|$. And thus we have $|a_n z^n| \leq |a_n z_0^n|$ for all $z \in K$. Setting $M_n = |a_n z_0^n|$, we invoke the Weierstrass M-test — the series $\sum M_n$ converges because $z_0$ is within the disk of convergence, and thus evaluation at $z_0$ converges absolutely.

Now every point within the disk of convergence is contained in some compact set (closed disks are compact, so pick a radius slightly less than the distance from the point to the boundary of the disk of convergence), within which the convergence is uniform. Since each term is continuous, the uniform limit will also be continuous at the point in question. Thus inside the radius of convergence a power series evaluates to a continuous function.

This gives us our first hint as to what can block a power series. As an explicit example, consider the geometric series $\sum_{n=0}^\infty z^n$, which converges for $|z| < 1$ to the function $\frac{1}{1-z}$. This function is clearly discontinuous at $z = 1$, and so the power series can’t converge in any disk containing that point, since if it did it would have to be continuous there. And indeed, we can calculate the radius of convergence to be exactly $1$.

It’s important to note something in this example. For $|z| < 1$, we have $\sum_{n=0}^\infty z^n = \frac{1}{1-z}$, but these two functions are definitely not equal outside that region.
Indeed, at $z = 2$ the function clearly has the value $-1$, while the geometric series diverges wildly. The equality only holds within the radius of convergence.
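To make the M-test step concrete for this example: on any closed disk of radius $r < 1$ the bounding series is itself geometric, so the convergence there is uniform. A short worked verification (the choice of $z_0$ on the boundary circle is just the general argument specialized to this series):

```latex
% Uniform convergence of \sum z^n on the closed disk |z| <= r, for r < 1.
% Here z_0 is any point with |z_0| = r, so M_n = |z_0^n| = r^n.
For $|z| \le r$ we have $|z^n| \le r^n$, and
\[
  \sum_{n=0}^{\infty} M_n = \sum_{n=0}^{\infty} r^n = \frac{1}{1-r} < \infty,
\]
so by the Weierstrass M-test the series $\sum_{n=0}^{\infty} z^n$ converges
uniformly on the closed disk of radius $r$. Since $r < 1$ was arbitrary,
this covers every compact subset of the open unit disk --- but not the open
disk itself, since the bound $\frac{1}{1-r}$ blows up as $r \to 1$.
```

This also shows why no uniform statement is possible on the whole open disk: the partial sums get arbitrarily far from the limit near the boundary.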
Name: _____________________________  Period: ___________________________

This test consists of 5 short answer questions, 10 short essay questions, and 1 (of 3) essay topics.

Short Answer Questions
Directions: Answer the question with a short answer.
1. Who talks about opening a tea shop?
2. Which of the following statements does Beatrice make about her father?
3. What is the subject of the phone call that Beatrice receives in Act 1?
4. What is Beatrice's final line in the play?
5. After the Science Fair, Beatrice's demeanor can best be described as:

Short Essay Questions
Directions: Answer the questions with a short paragraph.
1. Briefly describe Beatrice's recurring nightmare.
2. What is the business scheme that Beatrice is preoccupied with in Act 2?...

This section contains 1,280 words (approx. 5 pages at 300 words per page)
Ask questions about projects relating to: computer science or pure mathematics (such as probability, statistics, geometry, etc...).
Moderators: MelissaB, kgudger, Ray Trent, Moderators

My science fair topic is mathematical sequences found in music. I used a frequency chart from the website www.techlib.com and wrote down the frequencies for the notes in the first eight measures of three different selections of music (one classical, one rock, and one Irish). I tried to analyze the data I collected and couldn't find any mathematical sequences whatsoever. But, I'm pretty sure there's supposed to be mathematical sequences and I think I'm just analyzing the data wrong. Is there an easier form of measurement to figure out mathematical sequences found in music?

- Posts: 1
- Joined: Thu May 05, 2011 4:18 pm
- Occupation: student, 8th grade
- Project Question: My topic is mathematical sequences found in music. I measured the frequencies of notes using the website www.techlib.com and I didn't find any patterns. Am I analyzing my data wrong?
- Project Due Date: May 9th, 2011
- Project Status: I am conducting my experiment

Return to Grades 6-8: Math and Computer Science

Who is online
Users browsing this forum: No registered users and 2 guests
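One pattern worth checking in frequency data like that described above: in standard equal-temperament tuning, note frequencies form a geometric sequence rather than an arithmetic one — each semitone multiplies the frequency by 2^(1/12). So differences between successive note frequencies will not look like a simple pattern, but ratios will. A quick sketch (the A4 = 440 Hz reference is the usual convention):

```python
# Equal-temperament note frequencies form a geometric sequence:
# f(n) = 440 * 2**(n/12), where n is the number of semitones from A4.
A4 = 440.0

def note_freq(semitones_from_a4):
    """Frequency in Hz of the note n semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Thirteen notes spanning one octave upward from A4.
freqs = [note_freq(n) for n in range(13)]

# Successive RATIOS are all the same constant, 2**(1/12), which is the
# signature of a geometric sequence; successive DIFFERENCES are not constant.
ratios = [b / a for a, b in zip(freqs, freqs[1:])]
print(round(freqs[12], 1))  # one octave up: 880.0
print(round(ratios[0], 4))  # 1.0595
```

So dividing each frequency in a chart by the previous chromatic note's frequency, rather than subtracting, is the easier form of measurement for exposing the sequence.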
Why Animal Rights at schools?

The widespread violence in our society is a source of concern for the Israeli public, and for the Israeli education system in particular. Cruelty toward animals is a form of violence that is not uncommon among youth and children, and educating against it is of great importance. This importance becomes clearer when thinking about the close link between animal abuse and violence towards other human beings: cruelty to animals goes hand in hand with cruelty to people, and reducing one will reduce the other.

Caring for animals is one of the most evident characteristics of a solid value system. It involves helping a fellow creature that cannot express gratitude or pay back the favor. Acting for the welfare of animals develops in young people sensitivity and independent thought, because they have to cope with behaviors that few people stop to think about, and fewer still take action against. Despite the general consensus against animal abuse, there is a lot of ambiguity as to what animal abuse is and who the abusers are. Anonymous wishes to look deeply – together with the pupils – into these subjects, and to suggest to these young people ways to help animals.

What we offer

In order to promote protection of animals in Israel and following the many requests we have received, we started to give lectures in schools, youth movements, army units etc. We have been doing it for several years now, and in 2009 alone we offered 78 lectures to a total of 12,700 young people. In these lectures we wish to encourage regarding animals as fellow creatures whose needs must be considered, and who are not mere tools for the use of man. The framework of the lectures is fixed, but we adapt the content and style of the lecture to the age and cultural background of the audience.
Each lecture is designed to be joint learning by the lecturer and the pupils about our attitude towards animals, and an attempt to think together and find practical solutions in order to create a better world for all of us. The lectures are coordinated by Ariel Tsovel, a qualified teacher who studies the relations between humans and animals at Tel Aviv University. The team of lecturers is small and select and comprises veteran Anonymous activists who have acquired a lot of experience in talking to groups of young people and other audiences. We offer an introductory lecture that can be held in front of large groups – from a single class to whole age groups. The lectures are free of charge except for travel expenses, which are to be paid by the school. The school can order a series of lectures, or lectures that are tailored to specific curricular subjects. We can also provide the school with information stalls at events like Green Day. In case the students show special interest, we will be glad to assist them in writing essays on the subject, producing a newspaper, organizing events dedicated to animal rights etc.

The introductory lecture

What are animal rights? Is cruelty to animals rare? How can we help the animals? The lecture is intended to answer these questions methodically, with the aid of slides of text and photos, and with the participation of the audience. The recommended duration of the lecture is two academic hours (90 minutes), but we also have a shorter version (45 minutes). The technical aids we need are: a projector connected to a laptop, or a slide projector or a VCR. Here are the main questions that are raised in the lecture:

What are animal rights? When talking about animal rights we usually mean rights that stem from animals' ability to feel pain and to suffer. Animals suffer when they are hurt, neglected, or denied their need for a varied environment, activity, companionship and family. These rights are regarded as moral rights.
Some of these rights have been recognized by Israeli law, as well as by many other countries.

Is cruelty to animals rare? Cruelty to animals is regarded in our society as a crime, but most of the cruelty happens away from the public eye, and on many occasions with financial support from the public. These day-to-day events that only few people are aware of are not regarded as cruelty - until they are exposed in broad daylight. This happened in the case of circus animals, chickens in battery cages, snatching animals from the wild etc.

How can we help the animals? Knowing the facts about cruelty to animals and exposing such cases to the public is an essential first step. Only when the facts are known can actions take place. Following the actions, public pressure builds up, starting with a consumer boycott – many people stopped buying cosmetics that were tried on animals, and many people choose to buy free-range eggs. Along with the boycott, pressure is applied on the authorities to enforce the existing laws or to change them. More often than not the exposure and the public pressure were initiated by high school students, who brought on impressive results. The ban on force-feeding of geese, for example, is mainly the result of such activity. Achievements like these frequently start with seemingly humble activities among friends or in the child's neighborhood.

Our lecture program is quite extensive, yet very effective in making children and teenagers reconsider their views on animal rights and what they can do to help them. To keep it going at that rate, and to be able to further extend it, we need your support. You can help us in our educational efforts with your donations. Click here for more information.

Translated by Yehudit Openheimer
MOLECULAR BIOLOGY AND GENOMICS OF FOODBORNE PATHOGENS

Location: Produce Safety and Microbiology Research

Title: Carvacrol and cinnamaldehyde inactivate antibiotic-resistant Salmonella enterica in buffer and on celery and oysters

Authors: Ravishankar, Sadhana; Zhu, Libin; Reyna-Granados, Javier; Law, Bibiana; Joens, Lynn

Submitted to: Journal of Food Protection
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: September 25, 2009
Publication Date: February 20, 2010

Citation: Ravishankar, S., Zhu, L., Reyna-Granados, J., Law, B., Joens, L., Friedman, M. 2010. Carvacrol and cinnamaldehyde inactivate antibiotic-resistant Salmonella enterica in buffer and on celery and oysters. Journal of Food Protection. 73(2):234-240.

Interpretive Summary: Salmonella enterica on contaminated foods is one of the leading causes of gastrointestinal foodborne illness. The emergence of antibiotic-resistant strains of this pathogen is of concern to food processors, including the produce, poultry, and oyster industries. To help overcome this problem, we are participating in collaborative studies with colleagues at the Department of Veterinary Science and Microbiology at the University of Arizona in Tucson on new ways to inactivate antibiotic-resistant foodborne pathogens with the aid of safe, food-compatible, plant-derived compounds and plant extracts. In the present study, we screened 23 Salmonella isolates for resistance against the following 7 antibiotics: amoxicillin/clavulanic acid, ampicillin, cefoxitin, chloramphenicol, streptomycin, trimethoprim/sulfamethoxazole, and tetracycline. Two resistant and two susceptible strains from this group were each exposed to the following plant compounds: carvacrol, the main ingredient in oregano plant essential oil, and cinnamaldehyde, the main ingredient in cinnamon plant essential oil.
Both carvacrol and cinnamaldehyde inactivated both antibiotic-resistant and nonresistant Salmonella strains in solution and on two foods, celery and oysters. The present study with Salmonella extends our previous study on the inactivation of antibiotic-resistant Campylobacter jejuni strains published in 2008 in the Journal of Food Protection, Volume 71, pages 1145-1149.

Salmonella enterica is one of the leading causes of gastrointestinal foodborne illness. The emergence of antibiotic-resistant strains of this pathogen is of concern to food processors, including the produce, poultry, and oyster industries. The objective of this research was to identify the potential antimicrobial activities of two plant-derived compounds, cinnamaldehyde and carvacrol, against antibiotic-resistant strains of S. enterica in phosphate buffer (PBS) and on contaminated celery and oysters. Twenty-three isolates were screened for resistance to 7 antibiotics. Two resistant and two susceptible strains were chosen for the study. Different concentrations of cinnamaldehyde and carvacrol (0.1, 0.2, 0.3 and 0.4% v/v) were added to cultures of S. enterica with populations of 10^4 CFU/ml. These were mixed well and incubated at 37°C. Samples were taken at 0, 1, 5 and 24 h, diluted, plated for enumeration, incubated at 37°C and counted after 24-48 h. Both cinnamaldehyde and carvacrol showed complete inactivation of S. enterica in PBS at 0.3 and 0.4% concentrations at all time points tested. No survivors were detected at 5 h of sampling with 0.2% concentration of both antimicrobials. Cinnamaldehyde at 0.1% showed no survivors after 5 or more hours, while survivors were seen for some strains with 0.1% carvacrol. These results and additional data based on dipping celery and oysters contaminated with ~7 logs CFU in 1% solutions of the antimicrobials for 10 min or 1 h followed by storage for 3 d at 4°C suggest that carvacrol and cinnamaldehyde have the potential to inactivate antibiotic-resistant S. enterica in liquid and solid foods at concentrations of 0.1% and higher.
Bewl Water and Bedgebury Forest

The reservoir of Bewl Water is the largest body of freshwater in south-east England, covering more than 300ha, and was completed in the mid-1970s. It has become an important venue for sailing enthusiasts and fly fishermen and consequently suffers from considerable disturbance, particularly in summer. However, the reservoir is irregularly shaped with many quiet inlets and there is a nature reserve of the Sussex Wildlife Trust in the southern part. The water is rather deep and marginal vegetation is not well-developed and these two factors would normally lessen the lake's attractiveness to birds. Despite this a good range of species occurs both on the water and around the banks and well over 200 bird species have been recorded. The reservoir is set in a landscape of farmland and woodland patches with dense scrub in some parts. Close to Bewl Water is a large area of conifer plantations known as Bedgebury Forest which includes the famous National Pinetum, a collection of more than 200 species of conifer and probably the best-known site for Hawfinch in Kent. Although the plantations have a limited birdlife there are areas of chestnut coppice and more open heathlike patches with a greater variety of species.

Notable Species

Although passage periods and winter are undoubtedly the best times for birds at Bewl Water, Great Crested Grebe and Little Grebe are present all year as well as Canada Goose, Mallard, Common Pochard and Tufted Duck with Common Teal also sometimes present through the summer. In recent years the numbers of Great Cormorant have increased and this is now virtually a resident species. In woods and farmland around the reservoir there are Common Kestrel and Eurasian Sparrowhawk, Tawny Owl, Green Woodpecker and Great Spotted Woodpecker. Summer visitors include Turtle Dove, Spotted Flycatcher and warblers such as Willow Warbler, Chiffchaff, Whitethroat, Blackcap and Garden Warbler.
Like other reservoirs in southern England, Bewl Water is now a regular stop-over site for passage Osprey, with at least four appearing most autumns and one or two each spring. Northern Hobby is another regular passage raptor but other species are rare. Garganey is also a rare but regular passage visitor. Late autumn brings Dunlin, Common Snipe and Grey Plover. Scarcer passage waders include Whimbrel, Eurasian Curlew and godwits. Common Tern and a few Black Tern also appear on both passages, with small numbers of Arctic Tern and Little Gull in spring. In November a few Dark-bellied Brent Goose may drop down to the reservoir from the flocks which overfly the area. Passerine migrants include such species as Northern Wheatear and Whinchat.

Winter at Bewl Water brings Eurasian Wigeon, Gadwall and Common Teal to join the thousands of Canada Goose, Mallard and Eurasian Coot. Common Pochard and Tufted Duck occur in smaller numbers with a few Northern Shoveler, Northern Pintail and Common Goldeneye. Severe weather sees the arrival of Goosander and Smew, and other rare visitors may then include Red-throated Diver and Red-necked Grebe, with perhaps Black-throated Diver or the scarcer grebes. Scoters and Red-breasted Merganser have also been recorded, as well as a Nearctic vagrant, Ring-necked Duck.

Rarities noted in the area include Black-eared Wheatear and, probably the most surprising of all, a Blackpoll Warbler at Bewl Water in December 1994. Two-barred Crossbill has been recorded at Bedgebury Forest.
Birds you can see here include:

Red-throated Diver, Little Grebe, Great Crested Grebe, Red-necked Grebe, Great Cormorant, Great Bittern, Grey Heron, Mute Swan, Canada Goose, Dark-bellied Brent Goose, Eurasian Wigeon, Gadwall, Common Teal (rare in summer), Mallard, Northern Pintail, Garganey, Northern Shoveler, Common Pochard, Tufted Duck, Common Goldeneye, Smew, Goosander, Ruddy Duck, Eurasian Sparrowhawk, Osprey, Common Kestrel, Northern Hobby, Common Pheasant, Water Rail, Common Moorhen, Common Coot, Little Ringed Plover, Common Ringed Plover, Northern Lapwing, Dunlin, Common Snipe, Eurasian Woodcock, Whimbrel, Eurasian Curlew, Common Redshank, Common Greenshank, Green Sandpiper, Common Sandpiper, Little Gull, Black-headed Gull, Common Gull, Lesser Black-backed Gull, Herring Gull, Great Black-backed Gull, Common Tern, Arctic Tern, Black Tern, Stock Dove, Common Wood Pigeon, Eurasian Collared Dove, European Turtle Dove, Common Cuckoo, Little Owl, Tawny Owl, Long-eared Owl, Eurasian Nightjar, Common Swift, Common Kingfisher, Eurasian Green Woodpecker, Great Spotted Woodpecker, Lesser Spotted Woodpecker, Eurasian Skylark, Sand Martin, Barn Swallow, Northern House Martin, Tree Pipit, Meadow Pipit, Yellow Wagtail, Pied Wagtail, Grey Wagtail, Common Wren, Dunnock, European Robin, Whinchat, European Stonechat, Northern Wheatear, Eurasian Blackbird, Fieldfare, Song Thrush, Redwing, Mistle Thrush, Lesser Whitethroat, Common Whitethroat, Garden Warbler, Blackcap, Common Chiffchaff, Willow Warbler, Goldcrest, Firecrest, Spotted Flycatcher, Long-tailed Tit, Coal Tit, Blue Tit, Great Tit, Eurasian Nuthatch, Eurasian Treecreeper, Common Jay, Common Magpie, Eurasian Jackdaw, Rook, Carrion Crow, Common Starling, House Sparrow, Chaffinch, Brambling, European Greenfinch, European Goldfinch, Eurasian Siskin, Eurasian Linnet, Lesser Redpoll, Common Crossbill, Eurasian Bullfinch, Hawfinch, Yellowhammer, Reed Bunting

Access and Facilities
Bewl Water lies on the border of East Sussex and Kent and is well signposted from the A21, the main London to Hastings road. There is a car park, information centre, toilets and other facilities, and leaflets are available showing various walks around the reservoir. A complete circuit involves a walk of more than 20 km, so most visitors prefer shorter walks. An alternative is to return to the A21 and head south, turning off towards Ticehurst. About 3 km west of Ticehurst, turn off the B2099 onto Wards Lane and park in the quarry. From here a short walk leads to a hide at the nature reserve and shoreline paths.

Other Sites Nearby

A morning's birding at Bewl Water can be conveniently followed by an afternoon visit to Bedgebury Forest. The pinetum is one of the best sites for Hawfinch in the south-east and, although numbers have declined in recent winters, usually 30-50 birds can be seen flying in to roost in the cypress trees between late November and early March. In addition to Hawfinch, the area attracts Brambling, Lesser Redpoll and Siskin in winter. Long-eared Owl also occurs in winter. Breeding species of Bedgebury Forest include Eurasian Sparrowhawk, Tawny Owl and all three British woodpeckers. An evening visit in summer may produce European Nightjar and Woodcock, both of which breed. Other breeding species include a variety of warblers, tits and finches, and Common Crossbill sometimes nests after an irruption year.

For Bedgebury Forest, leave the A21 on the B2079 towards Goudhurst and after about 2 km park in the public car park. The entire area is crossed by a network of paths and rides which can be explored freely. The Pinetum is signposted and there is an entrance fee.

Content and images originally posted by Steve
Marine water quality objectives for NSW ocean waters The introduction of the Marine Water Quality Objectives is part of the NSW Government's program to set water quality objectives for all its major waterways. In 1999, water quality objectives for NSW rivers and estuaries were introduced in 31 catchments. To complement these, the Government has developed a set of Marine Water Quality Objectives for NSW ocean waters - a key initiative under the Government's Coastal Protection Package announced in June 2001. The aim of the Marine Water Quality Objectives is to simplify and streamline the consideration of water quality in coastal planning and management. This will ensure that the values and uses that the community places on ocean waters are recognised and protected, now and into the future. The Marine Water Quality Objectives are intended for communities, local councils, Catchment Management Authorities and state agencies to use in catchment management and land use planning activities. They cover the catchment areas of the: While the Marine Water Quality Objectives are not regulatory or mandatory, they are a useful tool for strategic planning and development assessment processes. For example, they will provide local councils with agreed guideline levels for water quality when considering coastal development assessments. Page last updated: 22 March 2012
The idea of games has existed since the dawn of the human race. Video games, however, are a relatively new concept, so new that we as a culture are still trying to create the "rules" of video games. This places us in a cultural lag: we want and need to expand the role games play in our everyday lives, but we also want to restrict the content or story of the games themselves, which puts video game developers in a difficult position.

When we look back at the history of video games, it is striking that it can be dated as far back as the 1940s: in 1947, Thomas T. Goldsmith and Estle Ray Mann filed a United States patent request for an invention they described as a "cathode ray tube amusement device." However, video gaming would not reach mainstream popularity until the 1970s, when arcade video games began with titles such as "Pong" and "Space Invaders". Soon after the release of the arcade machines, gaming consoles like the Odyssey were released, and then home computers were introduced to the general public. Since then, video gaming has become a popular form of entertainment and a large part of modern culture. There are currently considered to be eight generations of video game consoles, with the sixth, seventh and eighth concurrently ongoing, and the ninth just around the corner. (History of Video Games)

Gaming has come a long way from its humble beginnings. The industry has shifted from producing games for the "hardcore" gamer, the stereotypical thirty-year-old man who lives with his mom and paints models, to producing games for the "casual" gamer, an audience that ranges from three-year-olds to ninety-year-olds. This shift has transformed the gaming community: gaming has gone from a select few to everyone. Anyone can play a video game; it is no longer just the guy who can't get a girl playing video games. It's girls and guys playing together.
This has taken the gaming community from a very negative perception, of "losers in their mom's basement," to a very positive one, that "everyone games in some way." Almost all gamers own at least one of the following: an Xbox, a PlayStation, a Wii, a smartphone, a computer, or a tablet device such as the iPad. Each of these devices offers a different gameplay experience, and they also suggest where an individual falls on the scale from hardcore to casual. Most hardcore gamers own an Xbox or a PlayStation, or both, whereas more casual gamers may own anything from a smartphone to a Wii. However, this in no way limits what "level of gamer" an individual may be.

The gaming community has created many slang terms, most used primarily while playing video games, such as "n00b," a derogatory term for a beginner, or "pwn," meaning to 'own,' 'shut down,' or 'destroy.' An example of these words in action would be: "I totally just pwned that n00b!"

Many gamers participate in regular "video game nights," where a group of anywhere from two to ten gamers comes together under the same roof to have a good time, each bringing different snack foods, sodas, or games to the table. In contrast to these friendly group activities, it is not hard to find a gamer playing online with others, screaming, swearing, and threatening his or her competition. This has caused many problems in the video game community, with some "threats" actually made real. Incidents such as this are often widely publicized, and the media attention often leads many people to believe that video games are bad for the youth of the nation.

When we look at the gaming culture, it is easy to see that it is not without its flaws. However, no culture can claim to be perfect. The gaming community is always trying to innovate and change for the better, whether that means safer online play for a child or better graphics for a new game. The gaming culture itself, however, very rarely changes.
National Organization for Rare Disorders, Inc.

It is possible that the main title of the report Arachnoid Cysts is not the name you expected. Please check the synonyms listing to find the alternate name(s) and disorder subdivision(s) covered by this report.

Arachnoid cysts are fluid-filled sacs that occur on the arachnoid membrane that covers the brain (intracranial) and the spinal cord (spinal). There are three membranes covering these components of the central nervous system: dura mater, arachnoid, and pia mater. Arachnoid cysts appear on the arachnoid membrane, and they may also expand into the space between the pia mater and arachnoid membranes (subarachnoid space). The most common locations for intracranial arachnoid cysts are the middle fossa (near the temporal lobe), the suprasellar region (near the third ventricle) and the posterior fossa, which contains the cerebellum, pons, and medulla oblongata. In many cases, arachnoid cysts do not cause symptoms (asymptomatic). In cases in which symptoms occur, headaches, seizures and abnormal accumulation of excessive cerebrospinal fluid in the brain (hydrocephalus) are common. The exact cause of arachnoid cysts is unknown. Arachnoid cysts are classified according to location.

NIH/National Institute of Neurological Disorders and Stroke
P.O. Box 5801
Bethesda, MD 20824

MUMS National Parent-to-Parent Network
150 Custer Court
Green Bay, WI 54301-1243

Genetic and Rare Diseases (GARD) Information Center
PO Box 8126
Gaithersburg, MD 20898-8126

Arachnoid Cyst Awareness Network
616 Corporate Way
Valley Cottage, NY 10989-2050

For a Complete Report

This is an abstract of a report from the National Organization for Rare Disorders (NORD). A copy of the complete report can be downloaded free from the NORD website for registered users.
The complete report contains additional information including symptoms, causes, affected population, related disorders, standard and investigational therapies (if available), and references from medical literature. For a full-text version of this topic, go to MyD-H, the Dartmouth-Hitchcock patient portal. You must be a registered MyD-H user for the Lebanon, Manchester, or Nashua locations to access this site. The information provided in this report is not intended for diagnostic purposes. It is provided for informational purposes only. NORD recommends that affected individuals seek the advice or counsel of their own personal physicians. It is possible that the title of this topic is not the name you selected. Please check the Synonyms listing to find the alternate name(s) and Disorder Subdivision(s) covered by this report This disease entry is based upon medical information available through the date at the end of the topic. Since NORD's resources are limited, it is not possible to keep every entry in the Rare Disease Database completely current and accurate. Please check with the agencies listed in the Resources section for the most current information about this disorder. For additional information and assistance about rare disorders, please contact the National Organization for Rare Disorders at P.O. Box 1968, Danbury, CT 06813-1968; phone (203) 744-0100; web site www.rarediseases.org or email email@example.com Last Updated: 4/23/2008 Copyright 1994, 2002, 2004 National Organization for Rare Disorders, Inc. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
Doing history forces us to make choices about the scale of the history with which we are concerned. Take the analogy suggested by the maps above. Are we concerned with Asia, China, or Shandong? Or in historical terms, are we concerned with the whole of the Chinese Revolution; the base area of Yenan, or the specific experience of a handful of villages in Shandong during the 1940s? And given the fundamental heterogeneity of social life, the choice of scale makes a big difference to the findings (post). Historians differ fundamentally around the decisions they make about scale. William Hinton provides what is almost a month-to-month description of the Chinese Revolution in Fanshen village – a collection of a few hundred families (Fanshen: A Documentary of Revolution in a Chinese Village). The book covers a few years and the events of a few hundred people. Likewise, Emmanuel Le Roy Ladurie offers a deep treatment of the villagers of Montaillou; once again, a single village and a limited time (Montaillou: The Promised Land of Error). Diane Vaughan offers a full study of the fateful decision to launch the Challenger space shuttle (The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA). She hopes to shed light on high-risk technology decision-making through careful study of a single incident. 
These histories are limited in time and space, and they can appropriately be called “micro-history.” At the other end of the scale spectrum, William McNeill provides a history of the world (A World History) and a history of the world’s diseases (Plagues and Peoples); Massimo Livi-Bacci offers a history of the world’s population (A Concise History of World Population); Jared Diamond offers a history of the interrelationships between the Old World and the New World through the medium of weapons and disease (Guns, Germs, and Steel: The Fates of Human Societies); and Goudsblom and De Vries provide an environmental history of the world (Mappae Mundi: Humans and their Habitats in a Long-Term Socio-Ecological Perspective: Myths, Maps and Models). In each of these cases, the historian has chosen a scale that encompasses virtually the whole of the globe, over millennia of time. These histories can certainly be called “macro-history.” Both micro- and macro-history have important shortcomings. Micro-history leaves us with the question, “how does this particular village shed light on anything larger?”. And macro-history leaves us with the question, “how do these grand assertions about causality really work out in the context of Canada or Sichuan?”. The first threatens to be so particular as to lose all interest, whereas the second threatens to be so general as to lose all empirical relevance to real historical processes. There is a third choice available to the historian, however, that addresses both points. This is to choose a scale that encompasses enough time and space to be genuinely interesting and important, but not so much as to defy valid analysis. This level of scale might be regional – for example, G. William Skinner’s analysis of the macro-regions of China (post). It might be national – for example, a social history of Indonesia (M. C. Ricklefs, A History of Modern Indonesia Since c. 1200). 
And it might be supra-national – for example, an economic history of Western Europe. The key point is that historians in this middle range are free to choose the scale of analysis that seems to permit the best level of conceptualization of history, given the evidence that is available and the social processes that appear to be at work. And this mid-level scale permits the historian to make substantive judgments about the “reach” of social processes that are likely to play a causal role in the story that needs telling. This level of analysis can be referred to as “meso-history,” and it appears to offer an ideal mix of specificity and generality. Here are a few works that represent the best of meso-history: R. Bin Wong, China Transformed: Historical Change and the Limits of European Experience; Kenneth Pomeranz, The Great Divergence: China, Europe, and the Making of the Modern World Economy; and Charles Tilly, Coercion, Capital and European States: AD 990 - 1992. Wong and Tilly define their scope in terms of supra-national regions. Pomeranz argues for a sub-national scale: comparison of England's agricultural heartland with the Yangzi region in China. Each pays close attention to the problem of defining the level of scale that works best for the particular task. And each does a stellar job of identifying the concrete social processes and relationships that hold this regional social system together. Both macro- and meso-history fall in the general category of "large-scale" history. So let's analyze this conception of history. Large-scale history can be defined in these terms. 
- The inquiry defines its scope over a long time period and/or a large geographical range;
- the inquiry undertakes to account for large structural characteristics, processes, and conditions as historical outcomes;
- the inquiry singles out large structural characteristics within the social order as central causes leading to the observed historical outcomes;
- the inquiry aspires to some form of comparative generality across historical contexts, both in its diagnosis of causes and its attribution of patterns of stability and development.

Several genres of historical writing fit this description:

- History of the “longue durée”—accounts of the development of the large-scale features of a particular region, nation, or civilization, including population history, economic history, political history, war and peace, cultural formations, and religion
- Comparative history—a comparative account, grounded in a particular set of questions, of the similarities and contrasts of related institutions or circumstances in separated contexts, e.g. states, economic institutions, patterns of agriculture, property systems, and bureaucracies. The objective is to discover causal regularities, test existing social theories, and formulate new social theories
- World history—accounts of the major civilizations of the world and their histories of internal development and inter-related contact and development
As gas prices rise, more attention is again being paid to making transport trucks more fuel efficient. New trucks with hybrid or electric drive may be an option. For existing fleets, however, the quickest payback probably comes from making the vehicles more aerodynamic. Most trailer trucks have tall, squared-off ends which produce a lot of drag and make the truck less efficient. Manufacturer ATDynamics is now producing an origami-like product called TrailerTail that can be attached to the trailer in order to improve fuel efficiency and economy. "TrailerTail delivers 6.6% fuel savings at 65 mph according to SAE type II J1321 third-party tests and is compatible with all major dry van and refrigerated trailer configurations." ATDynamics has several components that can improve the aerodynamic efficiency of tractor-trailer trucks. Ideas like this have been explored in recent years with university research and studies at Lawrence Livermore National Laboratory. These products are now becoming commercially available, and will likely be seen more often in the coming years.
Building Envelope/Building Science Cellulose Insulation (Build It Green fact sheet, .pdf) Cellulose insulation is made from recycled paper that is applied as either loose fill into attics and closed wall cavities or damp-sprayed into open wall cavities. Due to its recycled content and potentially higher energy and acoustic performance, cellulose is an environmentally preferable product. Cotton insulation comes in batts that are comparable to fiberglass in ease of installation, fire resistance, and energy efficiency. However, it has better sound dampening qualities and avoids some of the potential health problems of fiberglass. Fiberglass Insulation (Build It Green fact sheet, .pdf) Fiberglass insulation is composed of commonly found minerals-primarily silica-that are spun from a molten state into fibers. From an environmental perspective, there are some significant drawbacks to fiberglass. However, some manufacturers have made noteworthy strides to address some of the problems. "Green" fiberglass insulations are made with recycled content materials and may have better indoor air quality properties than conventional fiberglass. Insulation and Air Sealing (Build It Green fact sheet, .pdf) Insulation and a tight exterior seal are two of the most important components of a home's protection against outside conditions, often called its "thermal envelope". This envelope consists of all six sides of the home, the four walls, roof, and foundation. All envelope components interact as a system to affect the flow of heat, air, moisture, and sound into or out of a home. The better the thermal envelope performs, the better the health and comfort of occupants and the lower their utility and maintenance bills. 
Passive Solar Design, Part 1 (Build It Green fact sheet, .pdf) Passive solar design is the process of creating a home that provides both shelter and comfort year-round while responding to regional climate conditions and minimizing dependence on energy-consuming mechanical systems. The goal is to build and occupy a home that a) utilizes solar heat gain in the winter to warm the interior of a home, b) controls solar heat gain in the summer, and c) facilitates daylighting, natural ventilation, and nighttime cooling to keep a home comfortably cool in the summer. Passive Solar Design, Part 2 (Build It Green fact sheet, .pdf) Moving on from Passive Solar Design, Part 1, the interior of the home, surfaces and building materials must be carefully chosen and strategically placed to perform as absorbers and thermal mass storage of solar heat gain in winter and convective cooling breezes in summer. In addition, window coverings for preventing heat loss in winter and heat gain in summer are recommended. Finally, fans and controls for air distribution from one room to another can provide supplemental heating and cooling to the entire home. Radiant Barriers (Build It Green fact sheet, .pdf) A radiant barrier system (RBS) is comprised of a sheet of reflective foil placed next to an air space, the combination of which discourages radiant heat transfer. In a hot climate, an RBS properly installed beneath a roof blocks up to 95% of the heat transfer from the roof to the attic insulation, resulting in a cooler living space and less cooling load. Wall systems (Build It Green fact sheet, .pdf) Today, wood framing is the most common construction method for residential and small scale commercial buildings. However, environmental concerns, and volatile fuel and lumber prices are driving the quest for high performance building envelope systems such as Structural Insulated Panels (SIPs) and Insulated Concrete Forms (ICFs). In addition, natural disasters throughout the U.S. 
and large payouts for insurance companies are motivating builders to consider more robust and durable building materials. Water Management (Build It Green fact sheet, .pdf) The underlying principle of water management is to layer materials from roof to foundation in such a way that water is always directed downward and outward from the building. Good water management practices require good drainage details. The typical building envelope is subject to water entry in numerous locations. Keeping water out of a building envelope is the primary line of defense against mold and a necessary condition for durability. Windows (Build It Green fact sheet, .pdf) Inefficient windows can account for 9% of all residential energy consumption. Energy performance in windows can be improved through multiple panes of glass, low conductivity gasses (Argon and Krypton) between panes to boost R-value, and low emissive (Low-E) coatings of tin and silver oxide to block radiant heat gain. Window frame materials such as wood, fiberglass, composites, vinyl, and metal, are also a consideration for conductivity and environmental impact. Performance Fenestration & Case Studies (pdf) Inefficient windows and doors are a major contributor to heat loss. This report reviews the different types of high performance windows and doors and their impact on energy savings. Building Science Consulting BSC is a Boston-based building science consulting firm. Their web site offers technical resources for a variety of climates and building situations. Information is applicable to building professionals and home owners. Building Science Consulting (Unvented Roof Systems) BSC has developed several unvented roof systems for hot-dry and hot-humid climates. This link contains information on regions where these types of roofs are appropriate, and technical drawings for the various systems. California Building Climate Zone Map Depicts the 16 climate zones within California.
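As a rough illustration of why window performance matters for the heat flow described in the fact sheets, here is a small Python sketch of the standard steady-state conduction formula Q = U × A × ΔT, where U (the U-factor) is the inverse of R-value. The U-factor values below are illustrative assumptions, not figures from the fact sheets:

```python
def heat_loss_watts(u_factor, area_m2, delta_t):
    """Steady-state conductive heat loss through a window.

    u_factor: overall heat transfer coefficient in W/(m^2*K)
    area_m2:  window area in square metres
    delta_t:  indoor-outdoor temperature difference in kelvin
    """
    return u_factor * area_m2 * delta_t

# Assumed, typical-order U-factors for a 1.5 m^2 window at a 20 K
# indoor-outdoor difference (values chosen for illustration only):
single_pane = heat_loss_watts(5.8, 1.5, 20)   # ~174 W
low_e_double = heat_loss_watts(1.8, 1.5, 20)  # ~54 W
print(single_pane, low_e_double)
```

Because the loss is proportional to U, cutting the U-factor by a factor of three cuts conductive heat loss by the same factor, which is why multiple panes, low-conductivity gas fills, and Low-E coatings pay off.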
Cool Roofing Materials Database Energy-efficient roofing systems can reduce roof temperatures significantly during the summer, and thus reduce the energy requirements for air conditioning. The purpose of this Cool Roofing Materials Database is to assist with the selection of roofing materials which reflect, or otherwise reject, the sun's radiant energy, before it penetrates into the interior of the building. Efficient Windows Collaborative EWC members have made a commitment to manufacture and promote energy-efficient windows. This site provides unbiased information on the benefits of energy-efficient windows, descriptions of how they work, and recommendations for their selection and use.
Coffee may taste good and get you going in the morning, but how will it affect your health? A growing body of research shows that coffee drinkers, compared to non-drinkers:

• Are less likely to have type 2 diabetes, Parkinson's disease, and dementia.
• Have fewer cases of certain cancers, heart rhythm problems, and strokes.

Some experts say there is certainly much more good news than bad news in terms of coffee and health. But, they say, coffee isn't proven to prevent those conditions. Researchers don't ask people to drink or skip coffee for the sake of science. Instead they ask them about their coffee habits. Those studies can't show cause and effect. It's possible that coffee drinkers have other advantages, such as better diets, more exercise, or protective genes. So there isn't solid proof. But there are signs of potential health perks – and a few cautions. Here is a condition-by-condition look at the research.

TYPE 2 DIABETES

One expert calls the data on coffee and [type 2] diabetes "pretty solid," based on more than 15 published studies. He says that the vast majority of those studies have shown a benefit of coffee on the prevention of diabetes, and that there is also evidence that decaffeinated coffee may have the same benefit as regular coffee. How might coffee keep diabetes at bay? "It's the whole package," says one expert. Antioxidants – nutrients that help prevent tissue damage caused by molecules called oxygen-free radicals – have been pointed to. Coffee has a very strong antioxidant capacity. Coffee also contains minerals such as magnesium and chromium, which help the body use the hormone insulin, which controls blood sugar. In type 2 diabetes, the body loses its ability to use insulin and regulate blood sugar effectively. It's probably not the caffeine, though. Studies allow researchers to safely say that the benefits are not likely to be due to caffeine.

HOLD THE CAFFEINE?
Just because coffee contains good stuff, it does not necessarily follow that it's good for us. It has not really been shown that coffee drinking leads to an increase in antioxidants in the body. We know that there are antioxidants in large quantities in coffee itself, especially when it's freshly brewed, but we don't know whether these antioxidants appear in the bloodstream and in the body when a person drinks it. Those studies have not been done. Regular coffee, of course, also contains caffeine. Caffeine can raise blood pressure, as well as blood levels of the fight-or-flight chemical epinephrine (also called adrenaline).

HEART DISEASE AND STROKE

Coffee may counter several risk factors for heart attack and stroke. First, there's the potential effect on type 2 diabetes risk; type 2 diabetes makes heart disease and stroke more likely. Beyond that, coffee has been linked to lower risk of heart rhythm disturbances (another heart attack and stroke risk factor) in men and women, and, for women, coffee may mean a lower risk of stroke. In 2009, a study of 83,700 nurses enrolled in the long-term Nurses' Health Study showed a 20% lower risk of stroke in those who reported drinking two or more cups of coffee daily, compared to women who drank less coffee or none at all. That pattern held regardless of whether the women had high blood pressure, high cholesterol levels, or type 2 diabetes.

PARKINSON'S AND ALZHEIMER'S DISEASES

For Parkinson's disease, the data have always been very consistent: higher consumption of coffee is associated with decreased risk of Parkinson's. That seems to be due to caffeine, though exactly how that works isn't clear. Coffee has also been linked to lower risk of dementia, including Alzheimer's disease.
A 2009 study from Finland and Sweden showed that, out of 1,400 people followed for about 20 years, those who reported drinking 3 – 5 cups of coffee daily were 65% less likely to develop dementia and Alzheimer’s disease, compared with nondrinkers or occasional coffee drinkers.
CANCER
The evidence of a cancer protection effect of coffee is weaker than that for type 2 diabetes. But for liver cancer, the data seem to be very consistent. All of the studies have shown that high coffee consumption is associated with decreased risk of liver cirrhosis and liver cancer. As interesting a finding as it is, it’s not clear how it might work. Again, this research shows a possible association, but like most studies on coffee and health, it does not show cause and effect.
PREGNANCY
In August 2010, the American College of Obstetricians and Gynecologists (ACOG) stated that moderate caffeine consumption – less than 200 mg per day, or about the amount in 12 ounces of coffee – doesn’t appear to have any major effect on miscarriage, premature delivery, or fetal growth. But the effects of larger doses are unknown, and other research shows that pregnant women who drink many cups of coffee daily may be at greater risk for miscarriage than non-drinkers or moderate drinkers. Again, it’s not clear whether the coffee was responsible for that.
CALORIES, HEARTBURN, AND URINE
You won’t break your calorie budget on coffee – until you start adding the trimmings. A 6-ounce cup of black coffee contains just 7 calories. Add some half and half and you’ll get 46 calories. If you favor a liquid nondairy creamer, that will set you back 48 calories. A teaspoon of sugar will add about 23 calories. Drink a lot of coffee and you may head to the bathroom more often. Caffeine is a mild diuretic – that is, it makes you urinate more than you would without it. Decaffeinated coffee has about the same effect on urine production as water. Both regular and decaffeinated coffee contain acids that can make heartburn worse.
See you next week. Copyright 2012 Dominica News Online, DURAVISION INC. All Rights Reserved. This material may not be published, broadcast, rewritten or distributed.
Brain atrophy, also called cerebral atrophy, can develop when there is loss of grey matter (brain cells) or white matter (brain cell connections). Brain atrophy causes shrinkage of the brain and an overall reduction in brain size. Brain atrophy is more common with increasing age and several medical conditions such as epilepsy, strokes, Alzheimer’s dementia, multiple sclerosis, cerebral palsy, traumatic brain injury, Huntington’s chorea, and Parkinson’s disease. Brain atrophy can cause cognitive deficits that range from mild memory loss to severe dementia and aphasia. We have previously discussed the finding that physical fitness in midlife delays the onset of chronic medical conditions in later life and that physical activity decreases inflammatory markers associated with cardiovascular disease and aging. Recent research provides evidence that physical activity can protect against brain atrophy in older age. Researchers, led by Dr. Alan J. Gow from the University of Edinburgh, have found that physical activity was associated with less brain atrophy and white matter lesions in an elderly population. The results of their study were published online in the journal Neurology. The researchers studied the association between brain atrophy and physical activity in a longitudinal study using 691 study participants enrolled in the Lothian Birth Cohort 1936 study. The researchers used self-reported leisure and physical activity at age 70 years and correlated it with structural brain changes using brain imaging at age 73 years. The researchers found that physical activity was associated with less atrophy and white matter lesions, but not with socialization. The authors wrote, “Physical activity was associated with higher [fractional anisotropy], gray and [normal-appearing white matter] volumes, lower [white matter lesion] load, and less brain atrophy 3 years later. 
An effect on atrophy, gray matter volume, and [white matter lesion] load from the computational measures, and rated atrophy, remained after inclusion of age, sex, social class, prior cognitive ability, and self-reported health measures”. The authors also wrote, “Indeed, reduction in cardiovascular risk profile is one of the key mechanisms proposed as underlying the effect of physical activity on cognitive aging. The possibility that physical activity is a proxy for better general health should not be overlooked…The indicative benefit of physical activity deserves further study, including randomized control trials of physical interventions, to rule out alternative causal mechanisms whereby physical activity may indicate better general health, including lower cardiovascular risk, which is itself associated with fewer [white matter lesions] and less atrophy”. The authors concluded, “The neuroprotective effect of physical activity is supported by the current analyses, in agreement with ‘indications that regular exercise promotes the structural … integrity of the CNS and, thereby, counteracts age-related decline.’ These indicative findings are important to those developing interventions designed to reduce or delay cognitive decline in the elderly, although ultimately a causal effect can only be demonstrated in randomized control trials of physical activity. There was, however, no support for a beneficial effect of more intellectually challenging or socially orientated activities”. The health benefits of exercise and physical activity are well documented. As this study shows, physical activity may help to prevent and delay the onset of brain atrophy in the elderly. This study identifies an association between exercise and decreased white matter lesions and brain atrophy, but does not prove a cause and effect relationship. It may be that individuals who are engaged in exercise are healthier overall and thus have less brain atrophy.
Future studies should work to identify a possible mechanism for this finding. Regardless, it’s always a good idea to engage in regular exercise and physical activity. Alan J. Gow et al. “Neuroprotective lifestyles and the aging brain: Activity, atrophy, and white matter integrity” Neurology published online October 23, 2012 vol. 79 no. 17 pages 1802 – 1808.
Tuesday, January 20, 2009 how to: grow hope I've been trying to include my children in my excitement and anticipation about this Inauguration Day. We've had lots of discussions about hope and new opportunities. We decided to do a little project in honor of this significant day in U.S. history. This is a symbolic reminder of the power of planting seeds, caring for them, and watching them grow. It's also an easy activity for any age. -For instructions on how to grow paper whites in recycled soup cans, please review this December post. -Choose a word you would like to grow and plant a bulb for each letter. -Stamp, embroider, or glue your letters to the front of each can. -Nurture your word and watch it grow!
It is possible to look back on an action you have taken with a lens of 'well, I wish I had done that differently'. You can take the point of view that hindsight sometimes shows opportunities for having done something better, but you did the best you could at the time. You can move forward with lessons learned and without any regret. Sometimes, though, the feeling of regret takes over. Perhaps the action you took had a significant impact, and the outcome was far removed from what you wanted. Maybe people or things you cared about were impacted. Perhaps an opportunity was lost. Regret can drag you down. It eats away at the energy and drive you need to keep moving forward with the best you have to offer. That in itself is a compelling reason for clearing it away when you become aware of it. How can you be at peace so you can move on and not be dragged down? Here are three things that I find helpful: - Choose an attitude about the situation that works for you. Such as, 'I meant well, but I made a mistake. I won't do it again.' You could focus on what you learned from it. - If someone was impacted by your actions, you can choose to communicate with them about it. You could let them know that you understand your actions affected them, apologize, and commit to something for the future. - Write down some thoughts, such as what you learned from the situation. To quote the Dalai Lama: 'When you lose, don't lose the lesson'. Much learning comes from things that you would do differently with hindsight. Be grateful for the opportunity you had to learn.
Also indexed as: Trans Fatty Acids
Margarine was developed in the late 1800s as an inexpensive alternative to butter. Typically margarine is made from one or more partially hydrogenated vegetable oils (soy, corn, sunflower, or safflower), but it may also contain animal fats.
Trans fats are also found in packaged baked goods, crackers, and chips; most processed foods contain partially hydrogenated soybean, coconut, or palm oil.
Vegetable shortening is created by the complete hydrogenation of vegetable oil. Because the hydrogenation process is complete, the shortening contains very few trans fats.
Copyright 2013 Aisle7. All rights reserved. Aisle7.com The information presented in the Food Guide is for informational purposes only and was created by a team of US-registered dietitians and food experts. Consult your doctor, practitioner, and/or pharmacist for any health problem and before using any supplements, making dietary changes, or before making any changes in prescribed medications. Information expires June 2014.
Recently I taught a volunteer class to professional engineers on MATLAB. Two of the most requested items of interest were:
1. How do I read an excel file?
2. How do I do curve fitting?
We address the first question here. It is easy to read an excel file with the xlsread command, but what do you do with it once the file has been assigned? So we took a simple example of an excel spreadsheet where the first column consists of a student number and the second column has the examination scores of the students. You are asked to find the highest score. It is better to download (right click and save target) the program, as single quotes in the pasted version do not translate properly when pasted into the m-file editor of MATLAB, or you can read the html version for clarity and sample output.
%% READING AN EXCEL SPREADSHEET IN MATLAB
% Language : Matlab 2008a
% Authors : Autar Kaw
% Last Revised : December 12, 2010
% Abstract: This program shows you how to read an excel file in MATLAB
% The example has student numbers in first column and their score in the
% second column
disp('This program shows how to read an excel file in MATLAB')
disp('Authors : Autar Kaw')
disp('Last Revised : December 12, 2010')
% We have two column data and it has headers in the first row.
% That is why we read the data from A2 to B32.
% (the filename below is a placeholder; variable names such as tab,
% nrows and studentnumber are reconstructed from the surrounding comments)
tab = xlsread('grades.xlsx', 'A2:B32');
disp('The data read from the excel spreadsheet is')
disp(tab)
% Finding the number of rows and columns
[nrows, ncols] = size(tab);
% Assigning the scores to a vector called score
score = tab(:, 2);
% Using the max command to find the maximum score
% HW: Write your own function "max"
maxscore = max(score);
% Finding which student got the highest score
% HW: What if more than one student scored the highest grade??
for i = 1:1:nrows
    if score(i) == maxscore
        studentnumber = tab(i, 1);
    end
end
fprintf('Student Number# %g scored the maximum score of %g', studentnumber, maxscore)
This post is brought to you by
- Holistic Numerical Methods: Numerical Methods for the STEM undergraduate at http://numericalmethods.eng.usf.edu,
- the textbook on Numerical Methods with Applications available from the lulu storefront,
- the textbook on Introduction to Programming Concepts Using MATLAB, and
- the YouTube video lectures available at http://numericalmethods.eng.usf.edu/videos
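One of the homework questions above asks what happens if more than one student scored the highest grade. As a sketch of one way to handle ties, the search loop can be replaced with MATLAB's find command, which returns the row positions of all tying scores. This assumes the spreadsheet data has already been read into a matrix (called tab here, with student numbers in column 1 and scores in column 2) and the scores copied into a vector called score; the variable names are illustrative only:
% Using find to get the row positions of ALL students
% who tied for the maximum score
maxscore = max(score);
index = find(score == maxscore);
% Printing every student number that achieved the maximum
for k = 1:length(index)
    fprintf('Student Number# %g scored the maximum score of %g\n', ...
        tab(index(k), 1), maxscore)
end
Because find returns a vector of indices, this prints one line per tying student, rather than keeping only the last match as the simple loop does.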
American English File Level 1 Teacher Presentation Tool
- ISBN: 978-0-19-477485-7
- Binding: CD-ROM
With texts and topics that make learners want to speak, American English File is the course that gets students talking. It gives you full skills coverage with a clear focus on pronunciation, plus wide-ranging support and resources too. Resources include Test Generator CD-ROMs, DVDs, Multi-ROMs, and websites. The highly popular teacher's site has extra lesson ideas and resources for you to download.
Where to order
Contact Oxford directly to place your order and for information and advice on any of our materials.
Part of the American English File series
- Great texts that motivate students to talk
- Four-skills syllabus with a clear focus on pronunciation
- Level-specific features to address learners' different needs
- Test Generator CD-ROMs
- Online support, resources, and lesson ideas (Teacher Link)
"Texts must be interesting enough for students to want to read them in their own language. Otherwise, how can we expect students to want to read them in English?"
This is the authors' golden rule when they choose texts and topics for every level of American English File. It ensures you get material that learners will enjoy reading and will want to talk about. Texts such as 'Could you live without money?' really help to generate opinion and discussion. They create a desire to communicate.
Having created the desire, American English File then helps you to develop learners' communication skills. One way it does this is with a strong focus on pronunciation. Research shows that poor pronunciation is a major contributor to breakdowns in communication (Jennifer Jenkins: The Phonology of English as an International Language). American English File integrates pronunciation into every lesson - the 'little and often' approach. But that's not all.
The unique English File Sounds Chart puts a picture to each sound in the phonetic alphabet, so learners find it easier to remember the sounds and, ultimately, improve their pronunciation.
By Cornelia Lee, PsyD, Judith Gault, PhD, Emily Crocker, MS, Tracey Leedom, MS, and Amy Akers, PhD
A gene is the basic unit of heredity. Genes are made from DNA, the building block of life, and carry the information for creating proteins, the functional units that carry out a particular characteristic or function. When a gene mutates, it changes from its natural state and can cause an illness. Genetic mutations can be inherited from your parent (and therefore occur in every cell in your body) or acquired by single cells in your body throughout a lifetime. Cerebral cavernous malformations (cavernous angiomas) can occur either sporadically, or they may run in families and be inherited due to a genetic mutation. With familial cavernous malformations, a mutation of a specific gene has occurred in every cell of your body. While it is still not known why sporadic lesions form, it is believed that acquired genetic mutations occur in just one cell in your body and cause the formation of a sporadic cavernous angioma. You may have one cavernous malformation and have no other family members with the illness. It is believed that a majority of those diagnosed with the illness fall into this category. The cause of sporadic cavernous malformations is not known. However, it is thought that a solitary cavernous malformation can be formed when a single cell has two specific mutations, or changes in both copies of a particular gene. As that cell replicates and divides, it goes on to form the cavernous malformation. A solitary cavernous malformation may be present at birth or may form later. If you have a sporadic cavernous malformation, it is likely that your children would have no greater chance of having the illness than anyone in the general population. In certain instances, individuals with the sporadic form of the illness have more than one cavernous malformation.
This can be true if the individual has a developmental venous malformation (also known as a venous anomaly or venous angioma) or if they have undergone radiation treatments in the brain or spinal cord. As MRI technology has improved, there are also more cases in which small blood vessel leakage associated with aging is interpreted as the development of a new cavernous malformation when it is not. If you have more than one cavernous malformation and don't appear to have a family history of the illness, consulting a knowledgeable physician, possibly combined with genetic testing, is an appropriate approach to determining whether you have the hereditary form. Familial cavernous malformations are caused by a single gene mutation in any one of at least three different genes: CCM1, CCM2, or CCM3. If you have familial cavernous malformation, this illness may run in your family or you may be the first in your family to have the illness. You may have just one cavernous malformation, but you are likely to have multiple cavernous malformations. Familial cavernous malformation is a hereditary illness with an autosomal dominant pattern. This means that only one parent must have the illness for it to be passed on to offspring. Statistically, if you have the familial form of the illness and you have a child with someone who does not, your child will have a 50% chance of having the illness. If you are the first in your family to have multiple cavernous malformations, you are likely to be the first in your family to have a familial mutation. This puts your risk of passing on the illness to your children at 50%. Familial cavernous malformations are caused by a genetic mutation found in every cell in your body, rather than a mutation in a single cell (sporadic cases). We each have two copies of any gene. When one copy mutates and no longer functions correctly, the other copy is a backup that will perform the same function.
However, the backup must work perfectly to avoid any problems caused by the original mutation. Because of naturally occurring, random genetic mutations, this is almost never the case for every cell in the body. In the case of familial cavernous malformation, a mutation on the first copy of the gene causes it to stop functioning. Intermittent but naturally occurring problems (acquired mutations) with the backup gene copy in some cells cause the formation of cavernous malformations. Wherever the backup gene fails, a cavernous malformation develops. As a result, if you have familial cavernous malformations you are likely to have more than one malformation. It is thought that almost everyone with the familial form will eventually have multiple cavernous malformations. To date, three genes have been identified that cause the familial form of cavernous malformation. The first gene was identified in 1999 and was named CCM1 (for cerebral cavernous malformation 1). Subsequently, CCM2 was identified in 2003, and CCM3 was found in 2005. Each of these genes was named ‘CCM' because, when it was identified, it was a novel gene with an entirely unknown function. Since their discoveries, researchers have been working to determine the function of these genes, and why mutation of any one causes onset of cavernous malformation. About 40% of familial cavernous malformation is caused by mutations in the CCM1 gene. Additionally, this is the gene responsible for most of the cases of familial multiple cavernous malformation in Hispanic families. In fact, most Hispanics with a specific CCM1 mutation (the Common Hispanic Mutation) are thought to share a common ancestor that can be traced back at least 17 generations. CCM1 is responsible for creating the CCM1 protein, also called KRIT1, or Krev interaction-trapped 1 protein. This protein is considered to be important for basic life development: mice that are mutant for both copies of CCM1 die very early in development, prior to birth.
The exact function of the KRIT1 protein is not known, but it is believed to play a role in determining and maintaining the structure of endothelial cells in blood vessels in the brain. The second gene is called CCM2 and controls the production of a protein named malcavernin. The malcavernin protein is also an essential protein for life: it is needed for cardiovascular development and to maintain the structure of blood vessels. Nearly 40% of familial cavernous malformation can be linked to a CCM2 mutation. Approximately half of affected individuals in the United States who have a CCM2 mutation have a specific mutation that deletes a majority of the CCM2 gene. The third gene, CCM3, is responsible for creating a protein called Programmed Cell Death 10, or PDCD10. The name of the protein refers to this gene's function in regulating cell survival. How this function pertains to cavernous malformation illness remains unknown; however, recent evidence suggests that CCM3 also functions to control the structure of blood vessels. Mutations in the CCM3 gene account for nearly 10% of familial cases of the illness. About 10% of families with a history of cavernous angioma have no mutations in any of the known CCM genes. Thus, there remains a possibility that a 4th CCM gene may be discovered in the future. For more information on these three genes, please visit the Genetics Home Reference. Genetics Home Reference is a service of the National Library of Medicine. These are the links:
CCM1 (KRIT1): http://ghr.nlm.nih.gov/gene=krit1
CCM2 (malcavernin): http://ghr.nlm.nih.gov/gene=ccm2
CCM3 (PDCD10): http://ghr.nlm.nih.gov/gene=pdcd10
Clinical genetic testing, the only kind of testing that can be used for diagnosis, is available for all three currently known genes. See our Genetic Testing page to find specific laboratories that have been approved to perform these tests. Because not all of the genes have been identified, genetic testing cannot rule out a familial mutation.
However, if a mutation is identified, it becomes very easy and economical to test other family members. Whether to have genetic testing is a very personal decision. Please make sure that you have a knowledgeable genetic counselor or physician to help guide you. Many researchers working on cavernous malformations are focused on genetic issues; this work seems to hold the most promise for future understanding and an eventual cure. The current focus is on identifying the precise functions of the proteins created by the genes. Please see our newsletter and our blog for ongoing information about genetic discoveries in this area. To find general information on genetics, visit GeneTests or the Genetics Home Reference.
Akers AL, Johnson E, Steinberg GK, Zabramski JM, Marchuk DA. Biallelic somatic and germline mutations in cerebral cavernous malformations (CCMs): Evidence for a two-hit mechanism for CCM pathogenesis. Hum Mol Genet. 2009 Mar 1;18(5):919-30.
Bergametti F, Denier C, Labauge P, Arnoult M, Boetto S, Clanet M, Coubes P, Echenne B, Ibrahim R, Irthum B, Jacquet G, Lonjon M, Moreau JJ, Neau JP, Parker F, Tremoulet M, Tournier-Lasserve E; Societe Francaise de Neurochirurgie. Mutations within the programmed cell death 10 gene cause cerebral cavernous malformations. Am J Hum Genet. 2005 Jan;76(1):42-51.
Borikova AL, Dibble CF, Sciaky N, Welch CM, Abell AN, Bencharit S, Johnson GL. Rho kinase inhibition rescues the endothelial cell cerebral cavernous malformation phenotype. J Biol Chem. 2010 Apr 16;285(16):11760-4.
Craig HD, Gunel M, Cepeda O, Johnson EW, Ptacek L, Steinberg GK, Ogilvy CS, Berg MJ, Crawford SC, Scott RM, Steichen-Gersdorf E, Sabroe R, Kennedy CTC, Mettler G, Beis MJ, Fryer A, Awad IA, Lifton RP. Multilocus linkage identifies two new loci for a Mendelian form of stroke, cerebral cavernous malformation, at 7p15-13 and 3q25.2-27. Hum Mol Genet. 1998;7:1851-1858.
Gault J, Shenkar R, Reckseik P, Awad IA.
Biallelic somatic and germline CCM1 truncating mutations in cerebral cavernous malformation lesion. Stroke. 2005 Apr;36(4):872-4.
He Y, Zhang H, Yu L, Gunel M, Boggon TJ, Chen H, Min W. Stabilization of VEGFR2 signaling by cerebral cavernous malformation 3 is critical for vascular development. Cell Biol. 2010 Apr;3(116).
Hsu F, Rigamonti D, and Huhn S. Epidemiology of cavernous malformations. In: Awad I and Barrow D, eds. Cavernous Malformations. Park Ridge, Ill.: American Association of Neurological Surgeons; 1993:13-23.
Liquori CL, Berg MJ, Siegel AM, Huang E, Zawistowski JS, Stoffer T, Verlaan D, Balogun F, Hughes L, Leedom TP, Plummer NW, Cannella M, Maglione V, Squitieri F, Johnson EW, Rouleau GA, Ptacek L, Marchuk DA. Mutations in a gene encoding a novel protein containing a phosphotyrosine-binding domain cause type 2 cerebral cavernous malformations. Am J Hum Genet. 2003 Dec;73(6):1459-64.
Liquori CL, Berg MJ, Squitieri F, Leedom TP, Ptacek L, Johnson EW, Marchuk DA. Deletions in CCM2 are a common cause of cerebral cavernous malformations. Am J Hum Genet. 2007 Jan;80(1):69-75.
Pagenstecher A, Stahl S, Sure U, Felbor U. A two-hit mechanism causes cerebral cavernous malformations: complete inactivation of CCM1, CCM2, or CCM3 in affected endothelial cells. Hum Mol Genet. 2009 Mar;18(5):911-8.
Stockton RA, Shenkar R, Awad IA, Ginsberg MH. Cerebral cavernous malformations proteins inhibit Rho kinase to stabilize vascular integrity. J Exp Med. 2010 Apr 12;207(4):881-96.
Whitehead KJ, Chan AC, Navankasattusas S, Koh W, London NR, Ling J, Mayo AH, Drakos SG, Marchuk DA, Davis GE, Li DY. The cerebral cavernous malformation signaling pathway promotes vascular integrity via Rho GTPases. Nat Med. 2009 Feb;15(2):177-84.
Zawistowski JS, Stalheim L, Uhlik MT, Abell AN, Ancrile BB, Johnson GL, Marchuk DA. CCM1 and CCM2 protein interactions in cell signaling: implications for cerebral cavernous malformations pathogenesis. Hum Mol Genet. 2005 Sep 1;14(17):2521-31.
This page was last updated 1/06/11
Australian Journal of Educational Technology 1989, 5(2), 132-160.
In this paper, the criteria for selecting modern learning technologies are discussed and it is suggested that four teaching/learning activities might form the basis for selection, combined with a number of types of conceptual representations. The most important aspects for a designer are the match between the learning task and its ability to be presented or manipulated by the learner using a decreasing range of information technologies.
Fifteenth century Europeans 'knew' that the sky was made of closed concentric crystal spheres, rotating around a central earth and carrying the stars and planets. That 'knowledge' structured everything they did and thought, because it told them the truth. Then Galileo's telescope changed the truth. (Burke, 1986, p.9)
Over the past three years we have seen major changes in the information technologies. With the advent of the most recent computers such as the NeXT workstation, we are presented with a black box which enables words, numbers, visuals, sounds, dictionaries, thesauri, and external events to be controlled, manipulated and represented to the user in a variety of forms, often simultaneously, and also to other users linked into a network. Over this period, significant developments have also occurred in conceptualising research into the use of media in education and training. It is this relationship which forms the basis of this paper; the discussion will be divided into three main elements: technology, instructional design, and some ways of bridging the cultures and selecting modern media. We live in a world where ideas and manipulations can be achieved simply with tools such as computers and computer-controlled robots; the challenge for instructional designers is to recognise the possibilities and employ technologies through which the learner can manipulate the ideas, concepts and even physical skills being taught.
In the past, where media have been selected for learning, the algorithms often focussed upon the simple identification of attributes: motion versus still, colour versus black and white, projected versus opaque, etc. (see, for example, Kemp, 1977 & Romiszowski, 1981). With the sophistication of today's learning technologies, these rather simple conceptions are no longer adequate. The choices are most often within one medium rather than between a variety of media forms. The classification schemes are difficult to use when you are looking at combinations of forms within the one lesson presentation. To achieve better use of information technologies the instructional designer needs more than a simplistic grasp of the possibilities of the technology. The movement towards more integration of systems and technologies has provided an interesting environment for designers. It is becoming less necessary to learn about the diversity of different hardware systems as they start to adopt common user-interfaces and employ one or two formats for delivery. By way of a simple example, the new disk drives available with the latest Macintosh computers can read and write Apple II, Macintosh and IBM, high and low density formats - one drive suits all! Thus conceptualising anything in narrow hardware terms will not address the concepts to be learned and the cognitive requirements of the task. This approach has always been limited by the availability of the necessary equipment, but such a limited conception of technology should not be the driving force for developing instructional programs for the next decade. The cost of hardware is decreasing, and the number of elements required to form a useful workstation is also declining.
The workstation concept, which has grown with the advent of the word processor and the microcomputer, on which most are based, has enabled the presentation and manipulation of concepts in ways previously only possible with combination of media forms or more sophisticated computer systems. This power of manipulation and presentation of ideas has not gone unnoticed by such proponents for the use of technology in the teaching of mathematical ideas (Kaput, 1986; Papert, 1980; Pea, 1987). Foremost among these enthusiasts has been Seymour Papert, who generated some interesting challenges for educators with his book Mindstorms just over eight years ago. Since those first challenges, the technologies, which enable the manipulation and generation of ideas, have also developed. Four to five years ago the Macintosh burst onto the scene and provided the user with a graphic interface as a standard. The user was then able to manipulate concepts visually and more intuitively than had been available on mainframes or under the mnemonic operating systems of some personal computers. The provision of these powerful tools has enabled concepts to be understood more completely and learned more efficiently. Understanding the integral calculus of LOGO can lead to complex mathematical ideas in an intuitive context well before the student has progressed to levels of formal operational thought. Dealing with pictorial representations has also enabled the designer to present complex concepts in forms that are seductively simple to the learner. Shapes can be stretched and distorted by manipulating a "mouse" attached to "handles" of the figure. The latest graphics drawing tools use tangential line "handles" to change curvature and create complex smooth figures. Technologies, and particularly information technologies, are at a point where they can easily integrate a variety of components into one device. 
With the increasing power of small systems there are also other trends which predict a greater integration of technologies and a corresponding reduction in the currently separate hardware/communications technologies. Nicholas Negroponte, Director of the Massachusetts Institute of Technology Media Lab, has described the situation as a series of overlapping circles. Using figure 1, he indicated to senior executives in the communications industries that their strategic planning for the future should take into account the convergence of technologies, and that their products would increasingly become interchangeable and 'playable' on the one computer-based system. Figure 1: Converging technology industries (Negroponte, in Brand, 1988) An excellent example of where Negroponte's conception would lead is epitomised through recent developments in personal computers, such as Steve Jobs' NeXT computer, where a number of information storage devices are combined, quite literally, into one black box. These developments can prove a boon to the designer in that more senses can be employed in the learning interaction between the learner and the technology. At the same time, however, they raise instructional design challenges about the way the interaction should be developed. In studies of technology-as-hardware, student learning has not been enhanced by the hardware alone; other factors, in particular the design of the learning materials using the technology, have been more important (Clark, 1983; Johnson et al, 1988; Salomon, 1979). During the 1970s and 1980s, numerous authors wrote about the technology-as-process approach to curriculum design (Reiser, 1987; Percival and Ellington, 1988).
In a recent summary, Percival and Ellington (1988) outline the changing major concerns of the approach. Within this context, any technology might be described as a mediator between the three human components of the interaction: the subject matter/content expert, the instructional designer and the learner. Technology, on its own, is inanimate and lifeless; the human manipulation of the interaction creates the power of the technology for learning. The link between the original expert and the learner can be considered to be mediated through the attributes of the technology employed and the skills of the instructional designer (who incidentally may also be the teacher or instructor). The content organisation and the attributes of the technology the designer employs to present the ideas will help or hinder the learner's comprehension of them (Salomon, 1979). Learners, in turn, have their own individual understandings or conceptual sets which they apply to the presented materials to achieve mastery of the knowledge and information presented. Engelbart (1988) illustrated the concept when he described the attributes of a hypermedia (note 1) environment (Figure 2) which augments human capabilities. His thesis is that most human capabilities are composites; any "example capability" can be thought of as a combination of the human-system and the tool-system capabilities, given the human skills and knowledge to employ these systems. It is this last element - the knowledge to employ the systems - which is a major variable in technology adoption.
Figure 2: Extending the capabilities of the individual through technology (Engelbart, 1988) In order to demonstrate how the instructional designer and the learner can use appropriate technology to improve skills, conceptual understanding and the process of communication of ideas, it becomes important to examine current conceptions of how technology might be employed and what skills are required of both instructor and learner. Many of those coming to terms with technology in higher education are representative of these groups. Greater emphasis is being placed on learner involvement in learning, and demands are being made for a broader knowledge base. Thus learners are being compelled to venture into areas which were once the realms of specialists. For example, work has been undertaken with interactive videodiscs (note 2) where students can explore databases of realistic situations in the security of the classroom, and the technology enables them to become involved and make decisions. These decisions can be about key issues such as chemical experimentation or future employment, and the interaction can occur without fear of failure (Scriven & Adams, 1988; Ambron & Hooper, 1988). Word processing and literature searching are two common examples of increasing technology use as an extension of human capabilities. Traditionally, assignments were handwritten, or an author employed a typist to create a respectable assignment presentation. The proliferation of word processors has changed that. Assignments must now be at least typewritten, preferably word processed and spellchecked, and, in some instances, be presented with integrated illustrations and graphics laid out using a page layout program. Hard copies are not always required either. Some instructors request assignments to be submitted on disk or, in the case of distance education, assignments can be downloaded via a modem or placed on a bulletin board.
In the area of literature searching, the contents of the school or institution's library sufficed or, if not, a researcher made an appointment with the "on-line search" specialist librarian to conduct a (rather costly) literature search. The advent of databases on CD-ROM (note 3) has enabled a "do it yourself" approach. This easier and cheaper alternative is encouraging academia to incorporate a more comprehensive review of the literature in areas which were once the kingdom of the textbook. Realistically though, not everyone employs technology in achieving a goal, and many teachers, while using a technology at a basic functional level, do not think in terms of its potential to assist human thought and concept development (Office of Technology Assessment, 1988; Roblyer, et al, 1988). From the work at the MIT Media Lab and the growing awareness of integrating technologies such as CD-ROM, CD-I (note 4) and DV-I (note 5), there are predictions that not only will the future classroom be well equipped, but these systems will also allow home use at reasonable cost. The move over the next few years will be to publish and present knowledge in these technologies (see, for example, Bitter, 1988; Hativa, 1986; Hedberg, 1989). Information technology-based teaching materials are often confined to the role of a sophisticated presentation device. However, with existing applications software, there is the opportunity for the student to use such packages for knowledge generation as well as knowledge presentation (see, for example, Hedberg, 1988a). Frustration and anxiety are a part of daily life for many users of computerised information systems. They struggle to learn command language or menu selection systems that are supposed to help them do their job. Some people encounter such serious cases of computer shock, terminal terror, or network neurosis that they avoid using computerised systems.
These electronic-age maladies are growing more common; but help is on the way! While new and exciting aspects of information technology and its use are constantly being brought to the attention of the higher education community, the human-technology interface seems to have attracted attention in education only in recent years (e.g. Barrett and Hedberg, 1987; Shneiderman, 1987). This issue becomes more important when considered in the light of the problems faced by teachers, as learners, when they attempt to understand and use the technology as a tool. In summarising the state of technology adoption by teachers, the Office of Technology Assessment (1988) found that interactive technologies take more time and effort to learn than many other curricular innovations, and that their use made teaching a bit tougher, at first. The choice of an appropriate technology for learning might focus on these issues if more general use is to be made of the technology by teachers. ...the diverse use of computers in homes, offices, factories, hospitals, electric power control centers, hotels, banks, and so on is stimulating widespread interest in human factors issues. Human engineering, which is seen as the paint put on at the end of a project, is now understood to be the steel frame on which the structure is built (Shneiderman, 1987, p. v). Several participants had never used a computer before. Not only was the idea of data storage on a small disc unfamiliar to them, so too was the means of accessing the disc. Some of the most prohibitive factors were the necessity of knowing specific identifying words, the need to press specific keys for the generation of particular information, and the methods of correcting errors in typing or input. At a deeper level, several participants were willing to accept the first instance of information which appeared on the screen, without checking for details or the appropriateness of the response.
They firmly believed that the computer could not err (even if the error was in the human input), and that therefore the information must be correct. Besides the need for keyboard skills, which created a barrier to effective use of the technology, many participants concentrated more upon following correct procedures than upon the information being presented. Optimistic assumptions about teachers' ability to use technology frequently cause problems with the instructional strategies in which the technology is employed. A related problem has been that some current application software appears to the novice user to have been written by those "in the know". Although most applications programs incorporate "help" mechanisms (approximately twenty-two screens of help were found in one database program), these resources are beyond the grasp of the novice user, or of one unfamiliar with the "language" of how to get to, and be able to read, the "Help" file. The most important catchcry of the computer-based education enthusiasts has been learner control. However, while there are numerous studies indicating its importance for motivation and efficient learning, its actual implementation in courseware often amounts to little more than lip service. To take control over their learning experience with technology, learners still need to understand how the software they are using works and where they stand in their performance, so that they can make informed decisions about where to venture next. The current enthusiasm for Hypercard (note 6) as a medium for exploration is based on the ability of the keen learner to choose a path and enjoy the options. At any moment the student can review where they have been and jump directly to a particular screen (through the "recent" review function); this degree of flexibility and graphic summary of progress has either not been possible before in courseware or has simply been too difficult to include.
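The kind of "recent" review function described above can be sketched in a few lines. The Python below is a hypothetical illustration of the idea (a reviewable trail of visited screens with direct jumps), not Hypercard's actual mechanism, which was implemented in HyperTalk; the screen names are invented.

```python
# Hypothetical sketch of a "recent" review function: the system records each
# screen the learner visits, shows the trail on request, and allows a direct
# jump back to any previously visited screen.

class NavigationHistory:
    def __init__(self):
        self.trail = []          # ordered list of screens visited

    def visit(self, screen):
        """Record a screen as the learner moves through the courseware."""
        self.trail.append(screen)

    def recent(self, n=8):
        """Return the last n distinct screens, most recent first."""
        seen, out = set(), []
        for screen in reversed(self.trail):
            if screen not in seen:
                seen.add(screen)
                out.append(screen)
            if len(out) == n:
                break
        return out

    def jump(self, screen):
        """Jump directly to a previously visited screen; the jump itself
        becomes part of the trail, so progress remains reviewable."""
        if screen not in self.trail:
            raise ValueError(f"{screen!r} has not been visited")
        self.visit(screen)
        return screen

history = NavigationHistory()
for s in ["intro", "fractions-1", "fractions-2", "quiz", "fractions-1"]:
    history.visit(s)

print(history.recent())        # most recent distinct screens, newest first
print(history.jump("intro"))   # learner returns directly to the start
```

The point of the sketch is the learner-control argument of the text: because the trail is always available, the learner can decide where to venture next from a summary of where they have been.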
While its impact has not been fully explored, the opportunity for a "hyperview" of their learning sequence does enable learners greater control of what and how some things can be learned. An extensive summary of the hypermedia options becoming available has been provided by Ambron and Hooper (1988), and this challenges the developers of computer-based software to conceive of different formulations of instructional sequences in place of the routine drill and practice, tutorial, simulation and problem solving strategies of the past. There is a growing realisation that the forms of software presentation can now adapt to the modes of representation and learning styles preferred by individual learners. Visual learners can convert data tables into graphical forms; haptic learners can use robotics to see, touch and feel the meanings of computer commands and their effect on an object. The link between formal logic structure and physical representation can be explored in terms of a functional relationship. In the mathematics curriculum it is possible, with software such as Geometric Supposer or Function Builder, to investigate and manipulate ideas in a one-to-one relationship. A change in the mathematical function will be shown by a change in its graphical representation, and modifying the graphical representation will produce a corresponding change in the function. On a more concrete level, the work by Papert and his colleagues with Lego LOGO also enables this link to be investigated (Papert in Brand, 1988). The use of the technology is not purely a function of the availability of equipment; it is also a problem of understanding the technology as a tool for thinking. While it might never be expected that all teachers will use the technology as a tool for everyday knowledge generation and presentation, special groups such as mathematics and science teachers do have some conceptual advantages in using the technology from a discipline point of view. However, that alone is not sufficient.
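The one-to-one relationship between a function and its graphical representation described above can be illustrated crudely in code. The text plot below is a hypothetical stand-in for a graphics window and assumes nothing about how Function Builder or similar packages actually work: the point is only that changing the function immediately changes the graph.

```python
# Hypothetical sketch: a "graph" that is recomputed directly from the
# function, so a change in the function is a change in its representation.

def text_plot(fn, xs, width=40):
    """Plot fn over xs as rows of asterisks (a crude stand-in for a
    graphics view); column position encodes the function value."""
    ys = [fn(x) for x in xs]
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1            # avoid division by zero for constants
    rows = []
    for y in ys:
        col = int((y - lo) / span * (width - 1))
        rows.append(" " * col + "*")
    return rows

xs = range(-5, 6)
for row in text_plot(lambda x: x * x, xs):      # a parabola...
    print(row)
print()
for row in text_plot(lambda x: 2 * x + 1, xs):  # ...replaced by a straight line
    print(row)
```

Swapping the lambda is the textual equivalent of the learner's manipulation: the representation follows the function with no extra steps.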
In describing the effectiveness of an interactive videodisc mathematics lesson, Carnine, et al (1987) emphasised the importance of the instructional design in the materials, which made them more effective than teachers working by themselves with a computer. These authors emphasised that instructional design skills were as important as the provision of the curriculum materials. Even with less sophisticated materials, such as the production of class handouts, there are new skills involved in the preparation of printed curriculum materials using the skills of typist, graphics composer and page layout compositor. The microcomputer has required a re-working of tasks and roles. The availability and accessibility of this technology has enabled individuals to work directly with the material which is going to be used in the teaching process. The immediacy and closeness with which individual authors can work on their material has meant that high quality materials can be presented quickly and designed to improve learning and increase their effectiveness. Taking account of students' prior conceptions. One of the key elements in the materials designed was the deliberate linking, by means of the technology, of previous learning to scientific method and theory, so that the materials created an environment in which new data and phenomena could be transformed from naive understanding into more lasting and sophisticated ideas. In many projects technology enabled students to work with their own levels of understanding and with representations of knowledge with which they were comfortable. Integrating directed instruction and inquiry learning. A concern with instructional strategy led the team to apply a different approach to those previously advocated by the proponents of microworlds (Papert in Brand, 1988).
The mix in instructional strategy was to overcome the problems of extremely open-ended environments which, they believed, rarely led to students reconstructing concepts that mathematicians had taken centuries to devise. By designing materials which employed technology in a hybrid of direct instruction and inquiry learning, teachers helped students develop and test their own ideas. Commercially available software was employed in this type of activity. Teaching how knowledge is generated. One ETC project, the Nature of Science Project, used a variety of resources to produce an understanding of scientific thinking within the context of specific phenomena. An interactive videodisc was used to investigate several "black box" problems. With this technology a series of conjectures could be investigated without expensive experimental equipment, and the results of each manipulation of variables could be easily demonstrated. When this introduction to the experimental method was combined with real experimentation, students moved away from narrow beliefs about science to understand that it originates in the mind of the scientist and that it involves persistent examination of ideas. These concepts about teachers and teaching strategies are not unique to this series of projects. The work at the MIT Media Lab and its associated elementary school has created similar environments for learning, with success for learners at different levels of ability. The outcome of all such activity has been to re-examine the roles the teacher and technology can play: no longer can the teacher simply relinquish the presentation to an audiovisual device; the teacher must take an active role in supporting the inquiry. As to the other problem, of insufficient curriculum software, many writers have promoted the use of templates for applications software (Hedberg, 1988a).
What is more important is the structure of the exercise and the ability of the student to change elements in the model. When the choice of appropriate hardware is linked with potential software, great advances can be made at very little cost and with little time spent in software development. Hypercard and Linkway are two programs which enable users (whether they be teachers or students) to design a series of experiences which can present ideas and manipulate them cheaply, with the minimum of programming effort. Further, as it is possible to exchange software produced on these systems, the cost of running a range of curriculum materials is the cost of the disk. Recently, Club Mac released a CD-ROM of all its software; only one copy would be needed at each school, as most of the material is in the public domain. Further, simple authoring software is becoming available in this format, allowing teachers or typists to input tests and experiences which can be quickly modified. Compatibility issues. Over the years most educational systems, whether they be State Education Departments, universities or individual schools, have sought to simplify the process of compatibility by insisting on one or two machines. This is becoming less and less of a major problem. With bulletin boards it is a simple matter to copy files from one computer to another, and software is often written in languages, such as "C", which enable its transportability. This trend, when matched with the growing capability of reading and writing magnetic media from any of the three main systems (IBM, Apple II or Macintosh) and the links between major mainframe and micro manufacturers (e.g. Digital and Apple), would indicate that there should be little real need to constrain purchases to unified hardware requirements.
Laurillard (1987) has spoken of the development of multifaceted design models, and Hedberg (1988a) has mentioned the use of templates as simple ways of linking the use of technology to regular tools which are in common (preferably daily) use by the learner. Such a concept needs first to examine the reasons for using technology in the teaching/learning process. For example, the use of the simple device of a spreadsheet with a prepared mathematical model allows at least three levels of processing. First, a learner may type their own numbers into a prepared pro forma; the package will calculate according to the prepared algorithms, and changes in different elements will show a relationship between inputs and results. Changing the inputs allows the learner to model different results based on the input assumptions. A second level might involve the translation of the numbers into another form of representation, such as a chart. This second level may have been already prepared by the instructor, with the links simply updated as the learner changes the numbers in the first pro forma, or the learner might use the links between spreadsheet and charting routines to clarify or further investigate relationships (especially if they are a visual learner). A third level would enable the learner to change the underlying assumptions on which the analysis is based - the learner might decide to investigate the algorithms devised for the relationships between inputs and results. By changing the formulae, the learner can extend beyond the interaction designed by the subject matter expert and the instructional designer. At both the second and third levels, the learner is manipulating the technology to generate knowledge rather than simply watching its presentation. Thus the technology allows the student to extend his or her understanding beyond the original intents.
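The three levels of processing can be illustrated outside any particular spreadsheet package. The compound-interest model below is a hypothetical stand-in for a prepared pro forma; the formulas and cell names are invented for the sketch.

```python
# Sketch of the three levels of spreadsheet use: (1) change the inputs,
# (2) re-represent the results as a chart, (3) change the formula itself.

def make_model(formula):
    """A 'template': a prepared algorithm the learner may later replace."""
    def evaluate(**inputs):
        return formula(**inputs)
    return evaluate

# Level 1: the learner types numbers into a prepared pro forma.
interest = make_model(lambda principal, rate, years:
                      principal * (1 + rate) ** years)
print(interest(principal=100, rate=0.05, years=10))

# Level 2: the same numbers translated into another representation (a crude
# text chart), updated automatically as the inputs change.
for year in range(1, 6):
    value = interest(principal=100, rate=0.05, years=year)
    print(f"year {year:2d} | {'#' * int(value - 100)} {value:.2f}")

# Level 3: the learner changes the underlying assumption - here replacing
# compound with simple interest - and compares the outcomes.
simple = make_model(lambda principal, rate, years:
                    principal * (1 + rate * years))
print(simple(principal=100, rate=0.05, years=10))
```

At the first level only the keyword arguments change; at the third the learner replaces the formula passed to `make_model`, which is the textual analogue of editing the spreadsheet's formulae rather than its cells.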
Recent work has tried to reassess the functions of technology in terms of the type of tools required for different types of learning activities. Consider Table 1, where four key activities for teaching and learning are described: knowledge generation, knowledge presentation, knowledge communication and information management. The instructional designer needs first to focus on the underlying learning activity, and then to define a link between the concept presentation and how the students must work with the information to produce their own understanding of the ideas and issues. Foremost in this design concept is the idea of allowing the student to manipulate the concepts directly, and not having the presentation totally circumscribed by the designer, who might decide to present information in a single conceptual model. Thus the model presented here is concerned with two basic functions of a technology for learning: the teaching/learning activity and the form of knowledge representation. Additionally, because learning may occur at a time or distance remote from the tutor, knowledge must also be communicated with others. The communication of results, questions and corrections between tutor and learner, or amongst students, is of particular interest, and technology can influence and assist the quality of this interaction. As mentioned previously, bulletin board software can be used to generate insights beyond the prepared brief of the designed materials. The last teaching/learning activity illustrated in the model indicates the important management function involved in all materials to be used in learning. Personal productivity software, when linked together, can provide a useful organising force for tutor, designer and student, especially for time management or idea generation. Each of the four teaching/learning activities can use technology in a variety of forms, and each different form is appropriate or needed for particular ideas or concepts to be understood by the learner.
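The two-dimensional model (teaching/learning activity against form of representation) can be sketched as a simple matrix. The entries below are illustrative examples only, not the contents of the author's Table 1.

```python
# Sketch of the two-dimensional design model: teaching/learning activities
# crossed with forms of representation. Cell entries are hypothetical.

activities = ["knowledge generation", "knowledge presentation",
              "knowledge communication", "information management"]
forms = ["text", "sound", "still image", "moving image", "3D object control"]

matrix = {
    ("knowledge generation", "text"): "word processor",
    ("knowledge generation", "still image"): "drawing package",
    ("knowledge presentation", "moving image"): "interactive videodisc",
    ("knowledge communication", "text"): "bulletin board",
    ("information management", "text"): "personal database",
}

def tools_for(activity):
    """List candidate tools, by form of representation, for one activity."""
    return {form: tool for (act, form), tool in matrix.items()
            if act == activity}

print(tools_for("knowledge generation"))
```

The designer's first decision, in the terms of the text, selects a row (the activity); the second selects the forms of representation the learner will work with.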
Using current information technology we are no longer constrained to the simple verbal form. Mixtures of sound, music, words, pictures or moving sequences can be integrated into each teaching/learning activity. With computer control of external devices, it is possible to manipulate objects in three dimensional space and to link them with graphical or numerical representations. Richey (1986) emphasised that instructional design has been distanced from teachers when she opened her book: Planning instructional programs and materials has been separated from the jobs of those who actually deliver the instruction in a growing number of situations .... The dichotomy between instruction and instructional design ... is ... influenced by different theoretical orientations and different practice histories (Richey, 1986, p.2) Producing materials can occur through enthusiastic teachers, through teacher educators or, as demonstrated by the ETC example at Harvard, through a collaborative approach of both. Models of instructional design abound in the literature, and most of the recent attempts to link technology with practice have simplified the process and reduced the complexity of previous behavioural prescriptions. Emphasis is upon structuring the curriculum so that it can be represented by simple "epitomes" (see Reigeluth's elaboration theory, 1987) and graphical links between concepts and motivating environments (Reiser, 1987). Many organisations which must manage the production of learning resources operate on the just-in-time method for their generation. The cost of inventories, the complexity of multi-media storage, and the deterioration of electronic media with poor storage and time have meant that many curriculum packages are produced on demand. These factors do not necessarily require a centralised production source. Most reasonably large organisations already possess the infrastructure to produce materials without the need for further bureaucratic centralisation.
In fact, the notion is generally antagonistic to trends of development in information technology and to the way in which people adapt and implement new technology. However, there is a definite need to assist with the identification of good products, which are often hidden in a growing mountain of alternatives. Instant access to information about, and evaluations of, packages, together with cheap copies of their associated documentation, can be made available through public bulletin boards and/or distributed through CD-ROM or other large database storage technology. Propinquity is also a major factor in producing a product. The fact that the subject matter expertise, the design expertise and a computer are frequently within walking distance of each other will help the production of materials in ways not envisaged in the traditional bureaucracies of curriculum development centres. However, it is very unlikely that any economies can be achieved without some coordinated curriculum development of quality, undertaken with an eye to the appropriate technology for the learning task. It is difficult to predict future hardware formats and the most appropriate technology in which to develop resources. At the moment, the push is to use pre-recorded formats (usually optically encoded) such as CD-ROM, although the recently released NeXT computer uses an optical read-write system holding about 250 megabytes. WORM (write once, read many) technology enables data to be written once on optical media and then read many times. Entire manufacturing plants are run on WORM technology: no paper is generated; everything is added and changed in centralised filing systems. However, most current projects have considered interactive videodisc, which requires less change to existing systems of recording and distribution.
Publishing companies are considering CD-I (digital, interactive, multimedia systems) as a potential device for the distribution of interactive training, reference books, albums, home learning and do-it-yourself learning, either with or without the computer (in the latter case the technology would be built into the system). Some commercial companies promise DV-I with up to 75 minutes of full screen video and 3D motion pictures (see discussions in Bitter, 1988; Scriven and Adams, 1988). Whatever the final hardware choice, the growing trend toward file conversion and similar magnetic media formats will probably continue for the next few years. This development alone will enable the exchange of software between the major systems.

I - drill and practice/tutorial. Recent educational software has provided instruction for both student and teacher, and it supports activities which are seen as important by the instructor (see, for example, Geometric Supposer [Schwartz & Yerushalmy, 1985] and The Voyage of the Mimi [Gibbon in Ambron & Hooper, 1988]).

II - simulation and new forms of representation. The design of "intelligent" software does not necessarily mean a move to more complex artificial intelligence systems; it could mean simply using the ideas of good game design, which engages students by providing fantasy, creativity and challenge (Malone, 1981). Simulations should be open-ended and allow students to generate knowledge rather than merely manipulate the parameters (Hedberg, 1989b; Goldenberg, 1988). Extending the range of experience through the use of peripherals such as CD-ROM and videodisc should be seen as commonplace rather than as a special event. The work undertaken with the only Australian videodisc system produced specifically for schools (Steele, 1988) has demonstrated that the systems can work.
However, it does require the vision of educational departments, intelligent interactive media design, and a small additional investment in a distribution technology which is more robust and of higher quality than anything currently available. The move away from traditional conceptions of what educational software might present involves, with hypermedia, greater control for teachers and greater modifiability of the software (Hativa, 1986). Early concepts of software saw instructional strategies being clearly defined and fixed within each software package. Recent systems have also included artificial intelligence components which enable strategies to be more closely matched to the learning style (Criswell, 1989). Even without artificial intelligence components, the move into Hypertalk language structures has enabled greater flexibility in design and in the use of environments. Certainly, the addition of interactive videodisc and CD-ROM is a simple task and one that extends the capabilities of the software design (see Fielded and Steele, 1988; Ambron and Hooper, 1988; Hedberg, 1985). Throughout the preceding discussion, there have been a number of examples which indicate that media can provide a unique and useful contribution to a concept presentation. Of particular interest are abilities such as linking multiple representations of a concept, and linking physical demonstrations, through robotics or hypermedia, to their theoretical counterparts. Simplistic software design or thoughtless use of computer graphing in classrooms may further obscure some of what we already find difficult to teach. On the other hand, thoughtful design and the use of graphing software presents new opportunities to focus on challenging and important mathematical issues that were always important to our students but were never accessible before. (Goldenberg, 1988, p.
135) Many of the popular accounts of the work of Seymour Papert have included descriptions where one student suddenly became the "expert" for some time and, for one brief shining moment, was looked up to by their fellow students (Papert, 1980; Papert in Brand, 1988). The environment provided by Lego LOGO and some multimedia software packages can provide for the social aspects of learning. Improved student performance was experienced in a videodisc based lesson on fractions. Carnine et al (1987) put this effect down to a number of factors, especially the carefully selected curriculum and the teaching strategies, which fostered high levels of student engagement and success. The teaching strategies employed included a concern for example selection, an explicit teaching strategy and discrimination practice to reinforce the concepts. Carnine et al claimed that the instructional design of the videodisc was critical in the development of improved student learning. All too often, they felt, the use of inappropriate elements of design in poorly conceived materials interfered with or contradicted the intent of the curriculum. Importantly in their study, they were concerned with the use of the technology with a group, based on the research summarised by Bangert, Kulik and Kulik (1983), which found there were often stronger effects for group learning than when the same materials were used individually. Representational correspondence can also be used to effect when dealing with difficult-to-grasp concepts such as the notion of a variable. With well designed software it is possible to create new concepts using both abstract and concrete models (Goldenberg, 1988; Janvier, 1987). There are a number of unresolved questions about the use of windows in educational software, especially how the user comprehends how different windows relate and how consistent the interpretation is.
Consider, for example, overlapping windows versus tiled windows (non-overlapping segments of one screen): often it is easier to understand what is happening if a number of things which occur simultaneously always appear in the same part of the screen. This means a more expensive screen system, and certainly a higher resolution one. Many of these issues have not been investigated with non-expert audiences, the research on human factors to date being largely related to business and military applications. To improve the learning experience, software that enables the learner to have control over more than parameters is to be preferred. Students need to be able to control the underlying function as well as the parameters which might be the subject of a constrained set of experiences (Goldenberg, 1988; Kulik & Bangert-Downs, 1983-1984). A few years ago, the Curriculum Development Centre in Canberra was interested in a small package which simulated the fishing village economy of a Pacific island. The materials were designed to include a number of graphics, but the interaction was purely a matter of setting the values of three parameters and watching the wealth of the community and the size of the fishing fleet change as the parameters varied. Students were not able to examine the functions on which these relationships depended - a short-sighted design. It would have been just as easy to use a spreadsheet template and allow the students to change values, as well as the functions, and view the outcomes in a graphical or numerical form. This approach is possible using commercial spreadsheet programs at a fraction of the cost of distributing specially coded software written in BASIC and running on only the one computer. Thus designing a spreadsheet template would have taken less time, and could have been more easily adapted for different packages and computers. Other presentation factors in computer-based material, such as the speed of execution, may hide the development of the idea.
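The difference between a parameters-only simulation and an open template can be sketched as follows. The economic relationships below are invented for illustration and bear no relation to the original package's model; the point is only that the functions, not just the parameter values, are exposed to the learner.

```python
# Hypothetical sketch of a fishing-village economy as an open template:
# the learner can change the parameters, and (unlike the original package)
# can also replace the functions relating catch, stock and wealth.

def run_village(years, fleet, fish_stock, price,
                catch_fn=None, regrowth_fn=None):
    """Simulate the village. Default relationships stand in for the fixed
    functions of a closed package; passing catch_fn or regrowth_fn lets
    the learner investigate the model itself."""
    catch_fn = catch_fn or (lambda fleet, stock: min(stock, 10 * fleet))
    regrowth_fn = regrowth_fn or (lambda stock: int(stock * 1.1))
    wealth, history = 0, []
    for _ in range(years):
        catch = catch_fn(fleet, fish_stock)
        fish_stock = regrowth_fn(fish_stock - catch)
        wealth += catch * price
        fleet += wealth // 100          # simple (invented) reinvestment rule
        history.append((fleet, fish_stock, wealth))
    return history

# Parameter level: vary only the starting values.
baseline = run_village(years=5, fleet=3, fish_stock=1000, price=2)

# Function level: replace the catch function to model a harvesting limit.
capped = run_village(years=5, fleet=3, fish_stock=1000, price=2,
                     catch_fn=lambda fleet, stock: min(stock // 100,
                                                       10 * fleet))
print(baseline[-1], capped[-1])
```

A student restricted to the parameters can only rediscover the designer's assumptions; a student who can rewrite `catch_fn` can test assumptions of their own, which is the distinction the text draws.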
The speed with which an object is drawn or an equation solved has often led to an emphasis on the Gestalt rather than the incremental development of the idea (Goldenberg, 1988; Schoenfeld, 1987). Some software packages have had to slow down the presentation of information so that the developmental steps can be shown. Scale, another difficult concept, can sometimes be confused in poorly executed software. It can be difficult for some students to determine the difference between a change in scale and "zooming" into a section of an object, where the scale is not changed, only its representation on the screen. This problem can be further complicated by multiple windows, as mentioned above. Although changes in scale are easily achieved with computers, there can be confusion between zooming in on a scale and actually changing the scale (ETC, 1988; Goldenberg, 1988). Scale can also be complicated by a simple change of screen size. With some computer systems, the same representations on different screen sizes will appear at different sizes, and there is no continuity of experience. Some computers enable a fixed-size screen representation, leading to consistency in scale representation across different sized screens. One of the interesting concepts that computers enable learners to manipulate is the idea of the finite versus the infinite. With the technology, even the best representation is still composed of finite pixels, and there are always jumps between elements. Consider the restructuring of knowledge which is required to develop an electronic encyclopedia (Kreitzberg & Shneiderman, 1988). The hypermedia approach to materials design that the new technology allows creates some interesting problems for someone who previously "thumbed through" a book. Electronic media require multiple indexes to point to the information. The student cannot easily browse in the traditional sense.
Browsing is possible in that several of the programs now available allow a browse function which rapidly scans each "card" in a database, and the user can click to stop the process at any time. The technique is really limited to looking at some sample items and small databases, but some users not at ease with the technology have been known to sit and watch them all in order to find just one relevant item! Students require multiple points of access, and tolerance of spelling mistakes, to find appropriate information. The problems of information retrieval are not insignificant, but the storage cost of multiple and idiosyncratic indexes is not beyond possibility with CD-ROM and other technologies. If the instructional designers are excited, then there is the chance some of that excitement and creative energy will be communicated to those who learn from the materials they design. Bangert-Downs, R. L., Kulik, J. A., & Kulik, C. L-C. (1985). Effectiveness of computer-based education in secondary schools. Journal of Computer-Based Instruction, 12(3), 59-68. Barrett, J. & Hedberg, J. G. (Eds.) (1987). Using Computers Intelligently in Tertiary Education. Sydney: ASCILITE. Bitter, G. G. (1988). CD-ROM technology and the classroom of the future. Computers in the Schools, 5(1/2), 23-34. Brand, S. (1988). The media lab. New York: Penguin. Bright, G. W. (1987). Computers for diagnosis and prescription in mathematics. Focus on Learning Problems in Mathematics, 9(2), 29-41. Bright, G. W. (1989a). Teaching mathematics with technology: Logo and geometry. Arithmetic Teacher, 36(5), January, 32-34. Bright, G. W. (1989b). Teaching mathematics with technology: Numerical relationships. Arithmetic Teacher, 36(6), February, 56-58. Brod, C. (1984). Technostress: The human cost of the computer revolution. Reading, MA: Addison-Wesley. Burke, J. (1986). The day the universe changed. Boston: Little, Brown and Company. Carnine, D., Engleman, S., Hofmeister, A., & Kelly, B. (1987).
Videodisc instruction in fractions. Focus on Learning Problems in Mathematics, 9(1), 31-52. Clark, C. M. (1988). Asking the right questions about teacher preparation: Contributions of research on teacher thinking. Educational Researcher, 17(2), 5-12. Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459. (Citation included in Kerr re relative effectiveness of media based education) Clark, R. E. (1985). Confounding in educational computing research. Journal of Educational Computing Research, 1(2), 137-148. (Citation included in Bitter re effectiveness of CBT) Criswell, E. (1989). The design of computer-based instruction. New York: Macmillan. Educational Technology Center (1988). Making Sense of the Future: A position paper on the role of technology in Science, Mathematics and Computer Education. Cambridge, MA: Harvard Graduate School of Education. Engelbart, D. C. (1988). The augmentation system framework. In S. Ambron & K. Hooper (Eds.), Interactive Multimedia: Visions of multimedia for developers, educators, and information providers. Redmond, WA: Microsoft Press. Fielden, K. & Steele, J. (1988). Hypercard and interactive video. In J. Steele & J. G. Hedberg (Eds.), EdTech'88: Designing for learning in industry and education. Belconnen, ACT: AJET Publications. pp43-50. http://cleo.murdoch.edu.au/gen/aset/confs/edtech88/fielden.html Goldenberg, E. P. (1988). Mathematics, metaphors and human factors: Mathematical, technical and pedagogical challenges in the educational use of graphical representation of functions. Journal of Mathematical Behaviour, 7(2), 135-173. Hativa, N. (1986). The microcomputer as a classroom audiovisual device: The concept, and prospects for adoption. Computer Education, 10(3), 359-367. Hedberg, J. G. (1985). Designing interactive videodisc materials. Australian Journal of Educational Technology, 1(2), 24-31. http://www.ascilite.org.au/ajet/ajet1/hedberg2.html Hedberg, J. G. (1988a).
Technology, Continuing Education and Open Learning or Technology 1 - Bureaucracy 0. In J. Steele & J. G. Hedberg (Eds.), Designing for Learning in Industry and Education. Canberra: Australian Society for Educational Technology, pp90-94. http://cleo.murdoch.edu.au/gen/aset/confs/edtech88/hedberg.html Hedberg, J. G. (1988b). Designing Ask the Workers...: Teams and conceptualisation. In J. Steele (Ed.), Ask the Workers...: Evaluation. Sydney: Australian Caption Centre. pp17-35. Hedberg, J. G. (1989a). CD-ROM: Expanding and shrinking resource-based learning. Australian Journal of Educational Technology, 5(1), 56-75. http://www.ascilite.org.au/ajet/ajet5/hedberg1.html Hedberg, J. G. (1989b). The relationship between technology and Mathematics Education: Implications for Teacher Education. In Department of Employment, Education and Training, Discipline Review of Teacher Education in Mathematics and Science. Vol 3. Canberra: Australian Government Publishing Service, pp103-137. Hedberg, J. G. & McNamara, S. E. (1985). Matching Feedback and Cognitive Style in Visual CAI Tasks. Paper presented to the Annual Conference of the American Educational Research Association, Chicago, May. Hedberg, J. G. & McNamara, S. E. (1989). The Human-Technology Interface: Designing for distance and open learning. Educational Media International, 26(2), 73-81. Jackson, P. W. (1986). The practice of teaching. New York: Teachers' College Press. Johnson, D. L., Maddux, C. D. & O'Hair, M. M. (1988). Are we making progress? An interview with Judah L Schwartz of ETC. Computers in the Schools, 5(1/2), 5-22. Johnson, J. L. (1987). Microcomputers and secondary school mathematics: A new potential. Focus on Learning Problems in Mathematics, 9(2), 5-17. Kaiser, B. (1988). Explorations with tessellating polygons. Arithmetic Teacher, 36(4), December, 19-24. Kaput, J. J. (1986). Information technology and mathematics: Opening new representational windows.
Journal of Mathematical Behaviour, 5(2), 187-207. Kaput, J. J. (1987). Translational processes in mathematics education. In C. Janvier (Ed.), Problems of Representation in the Teaching and Learning of Mathematics. Hillsdale, NJ: Lawrence Erlbaum Associates. pp19-26. Kemp, J. E. (1977). Instructional Design: A Plan for unit and course development. (2nd ed.) Belmont, CA: Fearon-Pitman. Kerr, S. T. (1989). Teachers and technology: An appropriate model to link research with practice. Paper presented to the Annual Conference of the Association for Educational Communications and Technology, Dallas, TX, February 1st to 5th. Kreitzberg, C. B. & Shneiderman, B. (1988). Restructuring knowledge for an electronic encyclopedia. Paper presented to the International Ergonomics Association, 10th Congress, Sydney, August 1st to 5th. Kulik, J. A. & Bangert-Downs, R. L. (1983-1984). Effectiveness of technology in precollege maths and science teaching. Journal of Educational Technology Systems, 12(2), 137-158. Laurillard, D. (1987). Interactive Media: Working methods and practical applications. London: John Wiley. Nation's future depends on reform of mathematics education. (1989, February 8th). Report on Education Research, pp. 3-4. Office of Technology Assessment. (1988). Power on! New tools for teaching and learning. Washington, DC: US Government Printing Office. Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books. Pea, R. (1987). Cognitive technologies for mathematics education. In A. H. Schoenfeld (Ed.), Cognitive Science and Mathematics Education. Hillsdale, NJ: Lawrence Erlbaum Associates. pp89-122. Pea, R., Soloway, E. & Spohrer, J. C. (1987). The buggy path to the development of programming expertise. Focus on Learning Problems in Mathematics, 9(1), 5-30. Percival, F. & Ellington, H. (1988). A Handbook of Educational Technology. 2nd ed. London: Kogan Page. Reiser, R. A. (1987). Instructional technology: A history. In R. M.
Gagne (Ed.), Educational technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum. pp11-48. Richey, R. (1986). The theoretical and conceptual bases of instructional design. New York: Kogan Page. Roblyer, M. D., Castine, W. H. & King, F. J. (1988). Assessing the impact of computer-based instruction: A review of recent research. Computers in the Schools, 5(3/4), 11-149. Romiszowski, A. J. (1981). Designing Instructional Systems. London: Kogan Page. Salomon, G. (1979). Interaction of media, cognition, and learning. San Francisco: Jossey-Bass. Schoenfeld, A. H. (Ed.) (1987). Cognitive science and mathematics education. Hillsdale, NJ: Lawrence Erlbaum. Schwartz, J. & Yerushalmy, M. (1985). The geometric supposers. Pleasantville, NY: Sunburst Communications. Scriven, M. & Adams, K. (1988). Evaluation: The educational potentialities of videodisc. In J. Steele (Ed.), Ask the Workers...: Evaluation. Sydney: Australian Caption Centre. pp51-97. Shneiderman, B. (1982). Fighting for the user. Bulletin of the American Society for Information Science, 9(2), 27-29. Shneiderman, B. (1987). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Reading, MA: Addison-Wesley. Steele, J. & Hedberg, J. G. (Eds.) (1988). Designing for Learning in Industry and Education. Belconnen, ACT: AJET Publications. http://cleo.murdoch.edu.au/aset/confs/edtech88/edtech88_contents.html Steiglitz, E. L. & Costa, C. H. (1988). A statewide teacher training program's impact on computer usage in the schools. Computers in the Schools, 5(1/2), 91-98. Trollip, S. R. & Alessi, S. M. (1988). Incorporating computers effectively in classrooms. Journal of Research on Computing in Education, 21(1), 70-81. Author: John Hedberg was asked to prepare a paper on technology and learning Mathematics and Science for the recently completed Discipline Enquiry. This paper is a refocussing of the ideas to the general problems of selecting media for instructional tasks.
He can be contacted at the Professional Development Centre, University of NSW, PO Box 1, Kensington NSW 2033. Please cite as: Hedberg, J. G. (1989). Rethinking the selection of learning technologies. Australian Journal of Educational Technology, 5(2), 132-160. http://www.ascilite.org.au/ajet/ajet5/hedberg2.html
The Cospas-Sarsat satellite constellation is composed of search and rescue satellites in low Earth orbit (LEOSAR) and geostationary orbit (GEOSAR).

LEOSAR Satellite Constellation

The nominal system configuration is four satellites: two Cospas and two Sarsat. Russia supplies two Cospas satellites placed in near-polar orbits at 1000 km altitude and equipped with SAR instrumentation at 406 MHz. The USA supplies two NOAA meteorological satellites placed in sun-synchronous, near-polar orbits at about 850 km altitude, and equipped with SAR instrumentation at 406 MHz supplied by Canada and France. Each satellite makes a complete orbit of the Earth around the poles in about 100 minutes, traveling at a velocity of 7 km per second. The satellite views a "swath" of the Earth approximately 6000 km wide as it circles the globe, giving an instantaneous "field of view" about the size of a continent. When viewed from the Earth, the satellite crosses the sky in about 15 minutes, depending on the maximum elevation angle of the particular pass.

GEOSAR Satellite Constellation

The GEOSAR constellation comprises satellites provided by the USA (GOES series), India (INSAT series) and EUMETSAT (MSG series).
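The LEOSAR orbit figures quoted above are mutually consistent, which a quick Kepler's-third-law check confirms. This is an illustrative aside, not part of the Cospas-Sarsat documentation; it assumes an idealized circular orbit and standard values for Earth's mean radius and gravitational parameter:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def circular_orbit(altitude_m):
    """Return (period in minutes, speed in km/s) for a circular orbit
    at the given altitude above Earth's surface."""
    a = R_EARTH + altitude_m                            # orbital radius
    period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)  # Kepler's third law
    speed_ms = math.sqrt(MU_EARTH / a)                   # circular orbital speed
    return period_s / 60.0, speed_ms / 1000.0

# The two LEOSAR altitudes quoted above
for alt_km in (1000, 850):
    t_min, v_kms = circular_orbit(alt_km * 1000)
    print(f"{alt_km} km: period ~{t_min:.0f} min, speed ~{v_kms:.1f} km/s")
```

Both altitudes give a period close to 100 minutes and a speed close to 7 km/s, matching the figures in the text.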
The Chicago Public Library offers the following explanation of the flag's symbolism, which was originally designed (two stars) by lawyer turned writer, reporter and drama critic, Wallace Rice (1859-1939), and adopted in 1917:

The Municipal Flag of Chicago consists of three White stripes separated by two stripes of Blue with four Red six-pointed stars on the center stripe of White.

The White Stripes:
- Top White Stripe represents the North side of the city.
- Center White Stripe represents the West side of the city.
- Bottom White Stripe represents the South side of the city.

The Blue Stripes:
- Top Blue Stripe represents Lake Michigan and the North Branch of the Chicago River.
- Bottom Blue Stripe represents the South Branch of the Chicago River and the Great Canal.

The Red Stars:
- The First Red Star represents Fort Dearborn (added by City Council in 1939). The Points of the First Red Star Signify:
- The Second Red Star represents the Chicago Fire of October 8-10, 1871. The Points of the Second Red Star Signify: Esthetics [original spelling by Rice]
- The Third Red Star represents the World's Columbian Exposition of 1893. The Points of the Third Red Star Signify History of the Area: Great Britain 1763, Northwest Territory 1798, Indian Territory 1802, Illinois Statehood 1818
- The Fourth Red Star represents the Century of Progress Exposition of 1933 (added by City Council in 1933). The Points of the Fourth Red Star Signify: World's Third Largest City, City's Latin Motto, "I will" Motto, Great Central Market

For more information on the Flag of Chicago, see T. E. Whalen's impressive bibliography, beginning with the 1892 "Tribune" offer of $100 for the best suggestions of "municipal colors": The Municipal Flag of Chicago. Also recommended: Flags of the World and MUNICIPAL FLAG OF THE CITY OF CHICAGO from the Eastland Memorial Society.
Sidebar: Wallace Rice, a prolific writer with eclectic interests, was a member of the Illinois State Historical Society, the Chicago Historical Society, the Cliffdwellers, the Society of Midland Authors, the Stage Guild, and the Playwrights Theater. For examples of his writing, see Internet Archive. My thanks to Gregory Tejeda at Chicago Argus for posing the question on his blog.
Dementia is a loss of brain function that occurs with certain diseases. Alzheimer's disease (AD) is one form of dementia that gradually gets worse over time. It affects memory, thinking, and behavior.

Senile dementia - Alzheimer's type (SDAT); SDAT

Causes, incidence, and risk factors

You are more likely to get Alzheimer's disease (AD) if you:
- Are older. However, developing AD is not a part of normal aging.
- Have a close blood relative, such as a brother, sister, or parent with AD.
- Have certain genes linked to AD, such as the APOE epsilon4 allele.

The following may also increase your risk, although this is not well proven:
- Being female
- Having high blood pressure for a long time
- History of head trauma

There are two types of AD:
- Early onset AD: Symptoms appear before age 60. This type is much less common than late onset. However, it tends to get worse quickly. Early onset disease can run in families. Several genes have been identified.
- Late onset AD: This is the most common type. It occurs in people age 60 and older. It may run in some families, but the role of genes is less clear.

The cause of AD is not clear. Your genes and environmental factors seem to play a role. Aluminum, lead, and mercury in the brain are no longer believed to be a cause of AD.

Dementia symptoms include difficulty with many areas of mental function, including:
- Emotional behavior or personality
- Thinking and judgment (cognitive skills)

Dementia usually first appears as forgetfulness. Mild cognitive impairment (MCI) is the stage between normal forgetfulness due to aging and the development of AD. People with MCI have mild problems with thinking and memory that do not interfere with everyday activities. They are often aware of the forgetfulness. Not everyone with MCI develops AD.
Symptoms of MCI include:
- Difficulty performing more than one task at a time
- Difficulty solving problems
- Forgetting recent events or conversations
- Taking longer to perform more difficult activities

The early symptoms of AD can include:
- Difficulty performing tasks that take some thought, but used to come easily, such as balancing a checkbook, playing complex games (such as bridge), and learning new information or routines
- Getting lost on familiar routes
- Language problems, such as trouble finding the name of familiar objects
- Losing interest in things previously enjoyed, flat mood
- Misplacing items
- Personality changes and loss of social skills

As AD becomes worse, symptoms are more obvious and interfere with your ability to take care of yourself. Symptoms can include:
- Change in sleep patterns, often waking up at night
- Delusions, depression, agitation
- Difficulty doing basic tasks, such as preparing meals, choosing proper clothing, and driving
- Difficulty reading or writing
- Forgetting details about current events
- Forgetting events in your own life history, losing awareness of who you are
- Hallucinations, arguments, striking out, and violent behavior
- Poor judgment and loss of ability to recognize danger
- Using the wrong word, mispronouncing words, speaking in confusing sentences
- Withdrawing from social contact

People with severe AD can no longer:
- Understand language
- Recognize family members
- Perform basic activities of daily living, such as eating, dressing, and bathing

Other symptoms that may occur with AD:
- Swallowing problems

Signs and tests

A skilled health care provider can often diagnose AD with the following steps:
- Complete physical exam, including neurological exam
- Asking questions about your medical history and symptoms
- A mental status examination

A diagnosis of AD is made when certain symptoms are present, and by making sure other causes of dementia are not present.
Tests may be done to rule out other possible causes of dementia, including:
- Brain tumor
- Chronic infection
- Intoxication from medication
- Severe depression
- Thyroid disease
- Vitamin deficiency

In the early stages of dementia, brain image scans may be normal. In later stages, an MRI may show a decrease in the size of different areas of the brain. While the scans do not confirm the diagnosis of AD, they do exclude other causes of dementia (such as stroke and tumor). However, the only way to know for certain that someone has AD is to examine a sample of their brain tissue after death. The following changes are more common in the brain tissue of people with AD:
- "Neurofibrillary tangles" (twisted fragments of protein within nerve cells that clog up the cell)
- "Neuritic plaques" (abnormal clusters of dead and dying nerve cells, other brain cells, and protein)
- "Senile plaques" (areas where products of dying nerve cells have accumulated around protein)

There is no cure for AD. The goals of treatment are:
- Slow the progression of the disease (although this is difficult to do)
- Manage symptoms, such as behavior problems, confusion, and sleep problems
- Change your home environment so you can better perform daily activities
- Support family members and other caregivers

Medicines are used to help slow down the rate at which symptoms become worse. The benefit from these drugs is usually small. You and your family may not notice much of a change. Before using these medicines, ask the doctor or nurse:
- What are the potential side effects? Is the medicine worth the risk?
- When is the best time, if any, to use these medicines?

Medicines for AD include:
- Donepezil (Aricept), rivastigmine (Exelon), and galantamine (Razadyne, formerly called Reminyl). Side effects include stomach upset, diarrhea, vomiting, muscle cramps, and fatigue.
- Memantine (Namenda). Possible side effects include agitation or anxiety.
Other medicines may be needed to control aggressive, agitated, or dangerous behaviors. Examples include haloperidol, risperidone, and quetiapine. These are usually given in very low doses due to the risk of side effects, including an increased risk of death.

It may be necessary to stop any medications that make confusion worse. Such medicines may include painkillers, cimetidine, central nervous system depressants, antihistamines, sleeping pills, and others. Never change or stop taking any medicines without first talking to your doctor.

Some people believe certain vitamins and herbs may help prevent or slow down AD.
- There is no strong evidence that folate (vitamin B9), vitamin B12, and vitamin E prevent AD or slow the disease once it occurs.
- High-quality studies have not shown that ginkgo biloba lowers the chance of developing dementia. DO NOT use ginkgo if you take blood-thinning medications like warfarin (Coumadin) or a class of antidepressants called monoamine oxidase inhibitors (MAOIs).

If you are considering any drugs or supplements, you should talk to your doctor first. Remember that herbs and supplements available over the counter are NOT regulated by the FDA.

For additional information and resources for people with Alzheimer's disease and their caregivers, see Alzheimer's disease support groups.

How quickly AD gets worse is different for each person. If AD develops quickly, it is more likely to worsen quickly. Patients with AD often die earlier than normal, although a patient may live anywhere from 3 - 20 years after diagnosis. The final phase of the disease may last from a few months to several years. During that time, the patient becomes totally disabled. Death usually occurs from an infection or organ failure.
- Abuse by an over-stressed caregiver
- Loss of muscle function that makes you unable to move your joints
- Infection, such as urinary tract infection and pneumonia
- Other complications related to immobility
- Falls and broken bones
- Harmful or violent behavior toward self or others
- Loss of ability to function or care for self
- Loss of ability to interact
- Malnutrition and dehydration

Calling your health care provider

Call your health care provider if someone close to you has symptoms of dementia. Call your health care provider if a person with AD has a sudden change in mental status. A rapid change may be a sign of another illness. Talk to your health care provider if you are caring for a person with AD and you can no longer care for the person in your home.

Although there is no proven way to prevent AD, there are some practices that may be worth incorporating into your daily routine, particularly if you have a family history of dementia. Talk to your doctor about any of these approaches, especially those that involve taking a medication or supplement.
- Consume a low-fat diet.
- Eat cold-water fish (like tuna, salmon, and mackerel), rich in omega-3 fatty acids, at least 2 to 3 times per week.
- Reduce your intake of linoleic acid found in margarine, butter, and dairy products.
- Increase antioxidants like carotenoids, vitamin E, and vitamin C by eating plenty of darkly colored fruits and vegetables.
- Maintain a normal blood pressure.
- Stay mentally and socially active throughout your life.
- Consider taking nonsteroidal anti-inflammatory drugs (NSAIDs) like ibuprofen (Advil, Motrin), sulindac (Clinoril), or indomethacin (Indocin). Statin drugs, a class of medications normally used for high cholesterol, may help lower your risk of AD. Talk to your doctor about the pros and cons of using these medications for prevention.

In addition, early testing of a vaccine against AD is underway.
- USGS project to understand coastal evolution and modern beach behavior; to identify and model the physical processes affecting coastal ocean circulation and sediment transport; and to identify sediment sources and construct a regional sediment budget.
- Topics in Coastal and Marine Sciences provides background science materials, definitions, and links to give a common context for users from a variety of backgrounds. Coastal erosion was chosen as the first topic.
- Study addresses questions and concerns related to limited sand resources along the Louisiana shelf and their implications for long-term relative sea-level rise and storm impacts, using newly acquired geophysical and vibracore data.
- Home page for Coastal and Marine Geology with links to topics of interest (sea level change, erosion, corals, pollution, sonar mapping, and others), the Sound Waves monthly newsletter, field centers, regions of interest, and a subject search system.
- Interactive map server to view and create maps using available coastal and marine geology data sets of the offshore and coastal U.S. and the Gulf of Mexico. Links to available data and metadata that can be downloaded.
- Information on video and still photography used to supplement laser altimetry measurements of the coast. The photography is used for recognizing geomorphic and cultural features impacted by storms. Links to photo collections of hurricanes and El Niño.
Research has long shown an association between low folate levels and depression, particularly depression that’s more severe and less responsive to medical treatment. (Folate is a water-soluble B vitamin in its natural form. Folic acid is the synthetic version found in supplements.) Folate is critical in the development of the human nervous system, so pregnant women must take folic acid supplements. People who abuse alcohol, people with certain illnesses, and those who take a number of different medications are at risk for folate deficiencies, which can present with a variety of cognitive, emotional, and behavioral symptoms. Doctors may check folate levels as part of an initial workup of depression.
<urn:uuid:5501b2c8-51e6-4f6a-8896-690cf555e081>
CC-MAIN-2013-20
http://blogs.psychcentral.com/bipolar/category/bipolar-depression/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948342
138
2.734375
3
John C. MacInnis is a member of the Embedded and Communications Group at Intel. Courtesy Intel Corporation. All rights reserved.

Embedded systems using the Intel architecture must include a firmware stack that initializes CPU cores, memory, I/O, peripherals, and graphics, and provides runtime support for operating systems. While Intel architecture-based PC designs typically use a full BIOS solution as a firmware stack, many embedded systems are designed with a more optimized firmware layer known as a boot loader. The job of the boot loader is to quickly initialize platform hardware and boot the system to an embedded real-time operating system (RTOS) or OS. Until recently, many embedded operating systems were designed to boot the device and enable all the drivers and networking on the board with no power management per se. As Intel architecture expands into more differentiated types of embedded systems, power management becomes increasingly important, both for reducing electricity costs and for maximizing battery life in mobile systems. OS-directed Power Management (OSPM) using Advanced Configuration and Power Interface (ACPI) methodology provides an efficient power management option. For system developers, an ACPI design can help yield full power management control with quick time to market and cost savings. It offers flexibility by pushing state machine management and policy decisions to the OS and driver layer. The OS makes policy decisions based on system use, applications, and user preferences. From a maintenance and support perspective, patches, updates, and bug fixes are better managed at the OS and driver layer than in the firmware.

A Note About Firmware Terminology: Since the first IBM clones in the early 1980s, the PC BIOS has been the predominant firmware layer in most Intel architecture system designs, commonly referred to as x86.
It has been observed that many Embedded Intel Architecture product designers have unique requirements not always completely satisfied by the standard PC BIOS. This article uses the terms "firmware" and "boot loader" to denote the distinct differences between a PC BIOS and the hybrid firmware required for many of today's embedded systems.

Dynamic System Power Management

Many types of embedded systems built on Intel architecture are necessarily becoming more power-savvy. Implementing power management involves complex state machines that encompass every power domain in the system. Power domains can be thought of globally as the entire system, individual chips, or devices that can be controlled to minimize power use, as illustrated in Figure 1.

Power and Thermal Management States

G0, G1, G2, and G3 signify global system states physically identifiable by the user:
G3 -- Mechanical Off
G2 -- Soft Off
G1 -- Sleeping
G0 -- Working

S0, S1, S2, S3, and S4 signify different degrees of system sleep states invoked during G1.

D0, D1, ..., Dn signify device sleep states. ACPI tables include device-specific methods to power down peripherals while preserving Gx and Sx system states; for example, powering down a hard disk, dimming a display, or powering down peripheral buses when they are not being used.

C0, C1, C2, C3, and C4 signify different levels of CPU sleep states. The presumption is that deeper sleep states save more power at the tradeoff cost of longer latency to return to full on.

P0, P1, P2, ..., Pn signify CPU performance states while the system is on and the CPU is executing commands (the C0 state).

T0, T1, T2, ..., Tn signify CPU throttled states while the CPU is in the P0 operational mode. Clock throttling is a technique used to reduce a clock's duty cycle, which effectively reduces the active frequency of the CPU. Throttling is used mostly for thermal control; it can also be used for purposes such as controlling fan speed.
Figure 2 shows a basic conceptual diagram of a clock throttled to a 50 percent duty cycle.

Power Consumption and Battery Life

Power consumption is inversely related to performance, which is why a handheld media player can play 40 hours of music but only 8 hours of video. Playing video requires more devices to be powered on, as well as more computational CPU power. Since battery life is inversely proportional to system power draw, reducing power draw by 50 percent doubles the remaining battery life.

System PM Design: Firmware/OS Cooperative Model

In Intel architecture systems, the firmware has unique knowledge of the platform's power capabilities and control mechanisms. From development cost and maintenance perspectives, it is desirable to maintain the state machine complexity and decision policies at the OS layer. The best approach for embedded systems using Intel architecture is for the firmware to support the embedded OS by passing up control information unique to the platform while maintaining the state machine and decision policies at the OS and driver layer. This design approach is known as "OS-directed power management," or OSPM. Under OSPM, the OS directs all system and device power state transitions. Employing user preferences and knowledge of how devices are being used by applications, the OS puts devices in and out of low-power states. The OS uses platform information from the firmware to control power state transitions in hardware. ACPI methodology serves a key role both in standardizing the firmware-to-OS interface and in optimizing power management and thermal control at the OS layer.
<urn:uuid:5d933415-b14e-4e7b-827a-39a726fe3c25>
CC-MAIN-2013-20
http://www.drdobbs.com/architecture-and-design/implementing-firmware-for-embedded-intel/222600829
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.918354
1,088
2.9375
3
IEP - Coal Utilization By-Products: Current Regulations Governing Coal Combustion By-Products

Database of State Regulations Affecting Disposal and Utilization of Coal Combustion By-Products: A Summary Provided by the National Energy Technology Laboratory and the American Coal Ash Association

Coal Combustion By-Products (CCBs) are generated when coal is used to generate electricity and power industrial processes. Tens of millions of tons of these materials are produced each year. Many uses of these byproducts are possible, but currently most of them wind up in landfills. Previous work at the National Energy Technology Laboratory (NETL) identified regulatory issues as one factor preventing more widespread reuse of CCBs. CCBs are generally regulated by state authorities, and the various states have developed widely differing rules. This web site was developed as one way to help CCB generators, users, and regulators share information across state boundaries. This site contains summary information on current regulations in each state, drawn from the American Coal Ash Association's biannual report "State Solid Waste Regulations Governing the Use of Coal Combustion Byproducts." In addition, contact information for individuals with regulatory responsibility in each state is provided. Regulations and personnel are subject to change, however; information in this site is for informational purposes only and may not include the most recent regulations or all individuals involved in permitting a given site or use. CCB generators and users are encouraged to work with their state authorities.
<urn:uuid:24f06fba-917c-41cd-9f6e-f1df97aa19d9>
CC-MAIN-2013-20
http://www.netl.doe.gov/technologies/coalpower/ewr/coal_utilization_byproducts/states/stateregs.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928718
318
2.6875
3
Seattle City Symbols

On March 17, 2003, the City Council designated the Great Blue Heron as the official City Bird. Resolution 30586 notes that the designation "will raise public awareness of ... habitat requirements of this species, and foster public stewardship for its continued existence..." The Seattle Audubon Society sponsored a yearlong campaign and public contest to select the City Bird. In voting that took place at nature centers, City parks, and in school classrooms, the Heron defeated its nearest rival, the common Crow, by a margin of two to one.

The Council passed Resolution 28207 on July 16, 1990, adopting an official City Flag. The Flag was designed by Councilmember Paul Kraabel. The Resolution called for a white and teal blue/green flag with a stylized portrait of Chief Sealth ringed by the words Seattle, City of Goodwill and undulating white lines, representing the waves in Puget Sound flowing from the center to the left edge. Only three copies of the flag were made.

Ordinance 32137, approved November 19, 1913, established the dahlia as the City's official flower and requested that the Park Board of the City plant and cultivate the flower in suitable quantities to make effective displays in the City parks.

Seattle has two official city slogans. Resolution 14456, adopted October 7, 1942, established Seattle as The City of Flowers. The Resolution requested and urged citizens to plant and cultivate a wide variety of flowers to further beautify the City. On July 16, 1990, the City Council passed Resolution 28207 designating Seattle The City of Goodwill. The latter resolution was adopted prior to the opening of the Goodwill Games, an international sporting competition held in Seattle during the summer of 1990.

The current official corporate Seal was adopted in 1937 by passage of Ordinance 67033. The Seal includes an imprint of the profile of Chief Sealth in the center of a circle.
On the upper outer edges of the circle and partially encircling the imprint are the words, CORPORATE SEAL OF THE, and in a smaller circle under the aforementioned words and above the imprint are the words CITY OF SEATTLE. Beneath the portrait is the year 1869 signifying the date the City was incorporated. Included in the outer circle, beneath the portrait, are two cones from an evergreen tree and what appear to be two salmon. The Seal was patterned after a model designed by artist/sculptor James A. Wehn of Seattle. The Seal was cast by Richard Fuller, director of the Seattle Art Museum. In May 1909 Arthur O. Dillon petitioned the City Council to adopt "Seattle the Peerless City" as the City Song. The Finance Committee recommended the petition be granted providing Mr. Sawyer (a member of Council) sings the song for the Council. The City Council subsequently granted Dillon's petition.
<urn:uuid:92973d16-43da-4d31-89bc-424aefda0576>
CC-MAIN-2013-20
http://www.seattle.gov/cityarchives/Facts/symbols.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945585
583
2.78125
3
Melanoma starts in the color-producing cells of the skin and may develop in an existing mole or may occur as a new mole. Early diagnosis and treatment can lead to a complete cure, while advanced forms are likely to have a poor outcome. Advanced melanoma can spread to lymph nodes as well as other areas in the body, typically the lungs, liver, and brain. - A family history of melanoma – Having someone in your family with melanoma increases your risk tenfold. - Fair skin, light eyes, and a tendency to freckle – The risk of getting melanoma is 1 in 50 for whites, 1 in 200 for Hispanics, and 1 in 1,000 for blacks. - A large number of moles, especially unusual appearing moles. - History of frequent sun exposure, especially in childhood. - History of sunburns. - Decreased immune system, such as transplant patients and patients with HIV/AIDS. Sunlamps and tanning beds may increase your risk of melanoma, especially if they cause sunburn. - Men are most likely to develop melanoma on the head, neck, and trunk. - Women are most likely to develop melanoma on the legs and arms. - A – Asymmetry: One half of the mole does not look like the other half. - B – Border: The outline of the mole is irregular. - C – Color: More than one color can be seen, such as brown, black, red, blue, and white. - D – Diameter: A mole larger than 6 mm (1/4 inch), which is roughly the size of a pencil eraser. - E – Evolving: Changes in the mole over time. Once a month, you should perform a self-exam to look for signs of skin cancer. It is best to perform the exam in a well-lit area after a shower or bath. Use a full-length mirror with the added assistance of a hand mirror when necessary. Using a hair dryer can help you examine any areas of skin covered by hair, such as your scalp. - In front of a full-length mirror, inspect the front of your body, making sure to look at the front of your neck, chest (including under breasts), legs, and genitals. 
- With your arms raised, inspect both sides of your body, making sure to examine your underarms. - With your elbows bent, examine the front and back of your arms as well as your elbows, hands, fingers, area between your fingers, and fingernails. - Inspect the tops and bottoms of your feet, the area between your toes, and toenails. - With your back to the mirror and holding a hand mirror, inspect the back of your body, including the back of your neck, shoulders, legs, and buttocks. - Using a hand mirror, examine your scalp and face. Prognosis and treatment depend on how deep the tumor has grown into the skin. If you have a melanoma that is very thin (less than 1 mm) and has been completely removed with the excision, this may be all the treatment you need. For thicker melanomas, your doctor will probably recommend a biopsy of your lymph nodes to determine if they contain melanoma cells. This is called a sentinel node biopsy. If these lymph nodes do have melanoma cells, you may need to have other lymph nodes surgically removed. If you have lymph nodes that contain melanoma, your doctor will also need to determine if the melanoma has spread to other parts of your body. You may have to have a chest X-ray, a CT scan, an MRI, and/or other tests to determine this. Treatment for melanoma that has spread to the lymph nodes or other parts of the body may include chemotherapy. For patients with melanoma that has metastasized, immunotherapy is another treatment that can help the body's own immune system to destroy cancer cells. Types of immunotherapy include vaccines, cytokines (proteins that boost the immune system), and interferon-alpha. If you have previously been diagnosed and treated for melanoma, you are at increased risk of developing another melanoma, especially in the first 3 years after diagnosis. Therefore, it is essential that you regularly follow up with your doctor to have a thorough skin examination.
<urn:uuid:9958c7ce-6c59-449c-bad4-ed2f1f8501d4>
CC-MAIN-2013-20
http://www.skinsight.com/adult/melanoma.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.923315
1,088
3.546875
4
Going Green: Reusing materials

At a casual glance from the sidewalk, it might look like these old houses are being demolished. But that's not the case: they're being deconstructed so materials and fixtures can be reused or recycled.

“Reuse means I'm going to take this product from this house and, without much modification at all, I'm going to incorporate it into another project,” said Michael Gainer of ReUse Action, Inc. in Buffalo, N.Y. As opposed to recycling: “Recycling is, let's say I have a bunch of busted, broken two-by-fours that I can't reuse. Those I can put by a dumpster, and they can take them to a facility and they grind that up and turn that into mulch.”

This project is dubbed a hybrid deconstruction and begins with what's called a soft skim. “On our first wave we send people in and we pull out all the windows, all the doors, all the cabinets, all the fixtures, everything we believe is valuable enough.”

“Then you disassemble the building into panels, and when you get those panels down you then figure out strategically what valuable material you have in those panels and which of those materials it makes sense to try to go and harvest out of those panels,” said Paul Crovella of SUNY ESF Sustainable Construction Management. “Pick up that panel, lower it to the ground and rip it apart, and get all the two-by-eights and tens and twelves out of there, and the flooring if we can.”

So what's the market for reusing this lumber? Crovella said, “Right now most of this lumber is used for aesthetics and finishes: exterior siding, interior flooring, clapboard, all that kind of stuff. And there's a strong market for that because the supply is so low.”

In this construction project, ESF has contracted for some of the salvaged material to be used in the new building going up on this site.
Gainer said, “And when I talk about life-cycle analysis and closing the loop, that is really it. If we can figure out ways to quickly connect buildings that are coming apart to potential uses in new construction we’ve really taken a big step to making this not something that’s on the edge or out of the ordinary but it’s something that’s very sensible and very mainstream.”
<urn:uuid:df5a1029-1f9f-49bf-95a5-ffaaec0a1600>
CC-MAIN-2013-20
http://ithaca-cortland.ynn.com/content/features/622123/going-green--reusing-materials/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941339
557
2.71875
3
The MOST Theological Collection: Basic Scripture "Chapter 4: Using the Genre Approach to defend Inerrancy" We already saw, in answering Cardinal Koenig's charges, an example of this use of the genre approach. It is highly likely that the narrative parts of Daniel were meant as the edifying narrative pattern. There is apt to be a core of history, but along with it go some rather free additions. Again, the key word is assert or claim. The writer does not assert or claim he is writing pure history. Part of it will fit with history, but he does not assert that the fill-ins are historical. In using the literary genre technique we are not being unfaithful to Scripture. Rather, we are being completely faithful, and using a great means to defend Scripture against attacks. For it is clear that we should try to find out what the inspired writer really meant to say. To find that, we must ask: What did he mean to assert? To ignore that is to impose our own ideas on Scripture. That is being very unfaithful. So the poor misguided Fundamentalists think they are respecting the sacred text, but actually they are not. They are imposing their own ideas on Scripture. Genesis 1-11: When we looked at the first eleven chapters of Genesis we said the genre was that of an ancient story, which still conveys things that really happened. Pope John Paul II, in his series of audiences on Genesis, on November 7, 1979 called this narrative "myth". He explained: "The term myth does not designate fabulous content, but merely an archaic way of expressing deeper content." So we need not say God created in 6 times 24 hours. Still less need we say creation was 4000 years before Christ. That number was reached by adding up ages of patriarchs and others. Centuries ago, St. Augustine knew better. In his City of God 15.7 he noticed that Cain was said to have built a city, and named it for his son Enoch, at the time when Genesis listed only about three men alive. 
He replied that the purpose of the sacred writer was not to mention all humans, but only enough to show the line of descent of the two cities. Exodus: The books that describe the departure from Egypt and the wandering in the desert very probably use something like an epic genre. That genre tells of the great beginnings of a people. The story is basically history, yet has some fill-ins which are a bit fictional, which the writer does not assert really happened. But in spite of this, it is clear that there was an exodus, and not just a revolt of peasants in Canaan who never left there. The story of a great people beginning in slavery is not likely to be invented. But there are new discoveries. It is now certain that Sinai was in Midian--when Moses had to flee Egypt he went to Midian, married the daughter of a priest of Midian, and while watching sheep there saw the burning bush. A team from Wyatt Archeological Research, in the video Presentation of Discoveries, went to the real Sinai and photographed its top, where the rocks are still blackened from the fire at the time of the Ten Commandments. They also found and photographed the twelve pillars erected by Moses at the site. There are more remarkable things in this video. (More controversial: at the start of the video we see the discovery, using ground-penetrating radar, of a large boat with the right dimensions for the ark. The problem is that a high Pentagon officer told me he had been permitted to see the photos made by a U.S. satellite from space, on which the ark is in the open, partly covered with snow, farther up on Mt. Ararat.) Also Larry Williams, in The Sinai Myth (Wynwood Press, NYC, 1990), visited the site of Mt. Sinai in Midian, photographed the blackened top of Sinai, and saw the twelve pillars of Moses. He also engaged the services of George Stevens of Horizon Research, who was able to study the photos taken by the French satellite with infrared.
He was able to see the precise spot where Israel crossed the Gulf of Aqabah, and to trace other parts of their movements in the area. (Further comments below in chapter 10). Joshua vs Judges: These two books seem to contrast. Joshua tells of a great triumphant sweep of conquest; Judges gives a lower key picture of much struggle. The answer lies in the genres: Joshua is part of the epic style; Judges is a more sober narrative on the whole. Jonah: Another fascinating example is in the book of Jonah. God ordered Jonah to preach to Nineveh that He intended to destroy it - of course, if they did not repent. Jonah feared God would actually not destroy it, and thought that then he would seem to be a false prophet. So he boarded a ship headed out into the Mediterranean. Soon a great storm arose. The crew threw overboard much of the cargo to lighten the ship. But the danger was still great. Then one of the sailors remembered that Jonah, when coming on board, had said he was running away from his God. So the sailors came to Jonah and questioned him. Jonah replied that yes, he was the cause. So they should throw him overboard, and then the storm would cease. They did so, and the storm stopped. But a large fish - a whale? - swallowed Jonah, but threw him up on the shore on the third day. Then Jonah decided he had to preach to Nineveh. They all did penance at once in sackcloth and ashes. So God did not destroy the city. What did the sacred writer intend - to write history, or a sort of extended parable? There are difficulties against an historical view. The matter of the fish swallowing Jonah is not too difficult. In February 1891 the ship Star of the East caught an 80-foot sperm whale. But a seaman, James Bartley, was missing. After a search, he was presumed drowned. Yet the next day, when the whale was being cut up, they found Bartley inside, still quite alive. (Cf. Wallechinsky & Wallace, People's Almanac, Doubleday: Garden City, NY, 1975, p. 1339).
Another inconclusive objection comes from the language of the text. It has some words that are later than the supposed date. But we know that the Jews sometimes deliberately updated the language of the ancient texts. So the objection is not strong. But there are more serious difficulties: Jonah 3:3 says, "Now Nineveh was an exceedingly great city, three days' journey in breadth." The remains found there do not show a city that size. A. Parrott (Nineveh and the Old Testament, New York, Philosophical Library, 1971, pp. 85-86) suggests perhaps Nineveh could have referred to a 26-mile string of settlements in the Assyrian triangle. Or else, since people gathered at the city gates, Jonah would speak there. And since there were many gates there, and Jonah would talk much at each, it could have taken three days. On the other hand, no matter what the genre of the book, it surely does teach two major lessons. First, the Assyrians then were considered the world's worst people, because of their deliberate terrorism in war. Yet God showed concern for them. So He must love everyone. Second - and this is not complimentary to us - when prophets went to the original people of God, they had a hard time, suffered much. But the pagan Nineveh welcomes Jonah readily. The Jews knew this: in the late 4th-century Midrash, Mekilta de Rabbi Ishmael (tr. Jacob Lauterbach, Jewish Publication Society of America, Philadelphia, I. p. 7), we read words imagined as said by Jonah: "Since the Gentiles are more inclined to repent, I might be causing Israel to be condemned [by going to Nineveh]." In Jonah 4:11 God says there are more than 120,000 people who do not know their right hand from their left. If one takes the expression to mean babies, it would imply a huge populace. But it could merely mean they did not know the basics of religion. Jonah 3:6 speaks of the king of Nineveh - not the usual Assyrian expression. He was called king of Ashur.
But Jonah might not have used the Assyrian way of speaking. However, we do not know of a king living in Nineveh at the time supposed in the story. Nineveh became the capital under Sennacherib (704-681). It may be objected that Jesus Himself referred to Jonah, and said He was greater than Jonah. But to refer to a well-known story does not amount to asserting the story happened. We could quote Alice in Wonderland to illustrate things, and not think that tale was historical. Actually, this literary use occurs elsewhere in the New Testament, e.g., in 1 Cor 10:4 and Jude 9. Apocalyptic: Besides the narrative parts of the book of Daniel, there are parts in the apocalyptic genre. This genre first appeared in full-blown form about two centuries before Christ and had a run of three or four centuries. In it the author describes visions and revelations - it is not usually clear if he means to assert he had them, or is just using the account as a way of making his points. There are highly colored, bizarre images, secret messages. The original readers knew better than to take these things as if they were sober accounts. (Sadly, some today have taken some of the apocalyptic images about streams of fire etc. as proof there were ancient astronauts who overawed the simple people of the Hebrews. That was foolish, for we must recognize the genre.) For a very strong example of apocalyptic, please read Daniel chapter 7. Touches of Apocalyptic: Now it happens at times that a writer will use some touches of apocalyptic in a work that is on the whole of a different genre. Thus Isaiah 13:10 includes some definitely apocalyptic language in speaking of the fall of Babylon: "For the stars of the sky and their constellations will not show their light, the sun will be dark when it rises, and the moon will not give its light."
In foretelling the judgment on Edom, Isaiah 34:4 said: "All the stars will be dissolved, the sky will roll up like a scroll and all the host of the skies will fall, like withering leaves from the vine, like shriveled figs from their tree." Ezekiel 32:7-8 uses much the same language to prophesy the judgment on Egypt: "When I blot you out, I will cover the skies and will darken their stars. I will cover the sun in a cloud and the moon will not give its light." We cannot help thinking of the language of Matthew 24:29. So we gather that while God surely could make such signs happen at the face value of the text, yet we cannot be sure that He intends to do it: the language of Isaiah and Ezekiel shows such expressions can be merely apocalyptic. The "rapture": This brings us to the question of "the rapture". St. Paul in First Thessalonians 4:13-17 is answering the concern of the people there: Would it not be too bad if we should die before the return of Christ - then the others would get to see Him before we would. Paul replies that it will be as follows: Christ will descend from the sky with a blast of a trumpet. Then the dead in Christ will rise, and after that, "we the living" will be taken to meet Christ in the air. Many fundamentalists say that this event must be different from the last judgment scene which we find in Matthew 25:31-46, in which Christ the Judge is seated on the earth, and has before Him the sheep and the goats. The fundamentalists say: the scene in First Thessalonians takes place in the air - the scene of the last judgment takes place on the earth. So there must be two separate events. So there is a separate rapture, when Christ will suddenly snatch out all good people from this world, leaving only the evil. The good will then reign with Him for 1000 years before the end. The trouble is that they have neglected the genre, as usual. Both passages are clearly using some apocalyptic language.
For in the judgment, all persons of all ages of the world must stand before Christ. The whole globe would not give standing room for that. So it must mean some sort of spiritual revelation of the just judgments of God at the final resurrection. In apocalyptic, we do not make close comparisons, for the whole is loose. So the bumper sticker is wrong, which said: "In case of rapture, this car will be unmanned," and will crash into others. But no problem, only the bad people are left! Just incidentally, many who are not fundamentalist err in thinking that the words "we the living", which come twice, show that Paul must have expected to be alive at the end. So they reject his authorship of Second Thessalonians, in which he very clearly shows he does not expect that. They do that contrary to all the ancient witnesses who say both are by Paul. They reject his authorship for the sake of an expression which is at most ambiguous. Really, many teachers will often say I or we to make something vivid, without intending to give any information about themselves at all. Wisdom literature: This genre is one the Hebrews had in common with other ancient near Eastern peoples. With most peoples it is basically a group of worldly wise counsels, especially for the young, on how to get along in this life. Egypt was specially famed for it, and the Jews may well have gotten ideas in their long stay there. The Egyptian Wisdom of Amenemopet has many parallels to the Old Testament. For example, Proverbs 22:17-18 says: "Incline your ear, and hear the words of the wise, and apply your mind to my knowledge; for it will be pleasant if you keep them within you, if all of them are ready on your lips." Amenemopet says: "Give thy ears. Hear what is said, give thy heart to understand them. To put them in thy heart is worthwhile" (from ANET 421). Many texts of Proverbs and Amenemopet are given in parallel columns in J. Finegan, Light From the Ancient Past, 2d ed. Princeton Univ. Press, 1974, pp.
124-25. We must keep in mind in reading the wisdom literature that only some things are meant as religious principles. Clement of Alexandria, head of the catechetical school at Alexandria in the late 2nd century, tried to set up a counter-attraction to the snob appeal of Gnosticism. So in books II and III of his Paidagogos, he tried for a deeper knowledge of the rules of morality, and gave very detailed rules for how a Christian should do everything: eat, drink, sleep, dress, use sex, and so on. He sometimes supports his injunctions from Scripture. He quotes Ecclesiasticus/Sirach 32:3 & 7, without understanding the genre, in Paidagogos 2. 7. 58: "I believe that one should limit his speech [at a banquet]. The limit should be just to reply to questions, even when we can speak. In a woman, silence is a virtue, an adornment free of danger in the young. Only for honored old age is speech good: 'Speak, old man, at a banquet, for it is proper for you... Speak [young man], if there is need of you, do it scarcely when asked twice.'"

Variant Traditions: There is another kind of seeming error that we can solve by the use of genre and determining what is asserted. In Exodus 14:21-25 we find: "Then Moses stretched out his hand over the sea; and the Lord drove the sea back by a strong wind all night, and made the sea dry land, and the waters were divided. And the people of Israel went into the midst of the sea on dry ground, the waters being a wall to them on their right and on their left." We notice two different explanations: 1) a wind sent by God dried up the sea, 2) the water was like a wall on both sides of them. Clearly these two pictures do not fit. A sea dried up by the wind would be just shallow water - and after the drying, there would be no wall of water on left and right. But we ask: What did the inspired writer really mean to assert? Let us picture him sitting down to write. He has on hand two sources - written or oral - and they do not fit.
He has no means of knowing which is the right one. He decides: "I will let the reader see both." But that means he does not assert both. That cannot be done. What he does assert is this: I found two accounts, and do not know which is right. Here they are.

Another similar case concerns how David came to meet and know King Saul. In chapter 16 of First Samuel, Saul is upset. He asks his servants to find a man skilled at playing a harp to soothe him. They bring David (16:18), "son of Jesse the Bethlehemite, who is skilled in playing, a man of valor, a man of war, prudent in speech." So David enters his service, and becomes armor-bearer to Saul. Saul sends word to David's father saying he wants David to stay in his service. But in chapter 17 the picture is very different. David is feeding his father's sheep. One day his father sent him to bring food to his brothers who were in the army of Saul. David hears of the giant Goliath, and the great reward the king offers to one who will kill Goliath. So David goes to Saul, boasts of having killed lions and bears, and offers to fight Goliath. Saul gives David armor, but David is not used to wearing armor, and discards it. So he gets some stones from the brook and a sling, and kills Goliath. In chapter 16 (verse 18), David is called a mighty fighter, a gibbor. But in chapter 17, after David has killed Goliath, Saul asks his captain Abner who that is. Abner says he does not know (though in chapter 16 David has previously been in the service of Saul). Abner takes David to Saul, holding the head of Goliath. Saul asks who he is. Clearly, the two accounts do not fit together. But we ask again: What did the inspired writer mean to assert? He meant to assert only: I found these two, and do not know which is right. But you can see both of them. He asserts no more than that.

Poetic Genre: In any culture, poetry is apt to use fanciful images and exaggerations. Scriptural poetry does the same.
But if one does not recognize that a passage is poetic, mistakes can result. St. Justin Martyr, in Second Apology 5, shows he believes angels have bodies. We do not blame lack of knowledge of genre for this: there was much hesitation in the patristic age on angels. But in Dialogue with Trypho 57 he says that angels have food in heaven since, "Scripture says that they [the Hebrews in the desert] ate angels' food." Justin does not understand Psalm 78:24, which speaks of bread from heaven, referring to the manna in the desert.

Isaiah 40:2 says Israel has received double for all her sins. Now of course God would not punish twice as much as what was due: we need to recognize Isaiah is a lofty poet, and/or take this as Semitic exaggeration.

Psalm 14:3 has God saying: "All of them have turned, together they have gone astray. There is no one doing good, not one." One might imagine this could apply only to people of the time of composition, but St. Paul in Romans 3:10 cites it as meaning everyone. Again, we need to recall this is poetry. Paul had a different reason for citing it. He was out to prove that if one tries for justification by keeping the law, all are hopeless. To understand this, we need to know St. Paul at times uses a sort of focused view, in which, as it were, he would say: The Law makes heavy demands, but gives no strength. To be under heavy demands without strength of course means a fall. In the focused view (a metaphor, as if one were looking through a tube, and could see only what is framed by the circle of the tube) one does not see the whole horizon. Off to the side, in no relation to the law, divine help was available even before Christ. If one uses it, then the result is quite different. (More on focusing later on.)

Isaiah 64:5 said: "All the deeds we do for justification are like filthy rags." Some, not seeing the poetic nature of the passage, thought all our good deeds are sinful. It is true, there is imperfection in most good things we do.
Yet not everything is a mortal sin. St. Paul says in Philippians 3:6 that before his conversion he kept the law perfectly. Luke 1:6 says the parents of John the Baptist were keeping all the commandments without blame. 2 Timothy 4:6-8 looks forward to a merited crown from the Just Judge.
Whitley County in Indiana is situated in North America. It comprises four important cities to explore: Columbia City, Churubusco, South Whitley and Larwill. Formed in the year 1842, Whitley County has wide historical importance. It was named after the great American warrior Col. William Whitley, who gave his life in the Battle of the Thames on 8th October 1813. To remember his sacrifice forever, the people of Indiana named this land Whitley County. The county is separated into 9 townships: Columbia, Cleveland, Etna-Troy, Jefferson, Richland, Smith, Thorn Creek, Union and Washington. From early on, Whitley people were primarily agriculture-minded because of the large fertile lands of Whitley County. Even today Whitley communities have maintained their connection to agriculture and conduct an agriculture expo for the benefit of farmers. Various county offices of Whitley County are located in Columbia City. Whitley County is becoming a center for business, and hence many residential and commercial complexes have developed in this area in the past years. Even though Whitley County is known chiefly for its lush pathways, Blue River Trails, recreational parks and historical locations, more and more industrial organizations are setting up here; thus employment opportunities in Whitley County are growing simultaneously. The Economic Development Corporation (EDC) of Whitley County works with regional and state partners to support and encourage clients' business development and efforts to reach their goals. Triad Metals International is one such company, which developed in this area and has become a large steel manufacturing unit in the area. Moreover, Whitley County is a fine place to live due to its favorable climatic conditions, warm people and thriving scope for development.
What is actuarial science? Actuarial science is the mathematical science of designing insurance and pension plans to make sure they are maintained on a sound financial basis. Actuaries use statistical data to determine probabilities of insurance claims and retirement. They may set up pension and welfare plans, calculate future benefits, and determine the amount of employer contributions. What is a problem an actuary might work on? Actuaries use probabilities to determine the price charged for insurance that will enable the insurance company to pay all claims and expenses and also make a profit. Who hires actuaries? The Federal government, State governments, and insurance companies. Some actuaries work as consultants, contracting out their services to insurance companies, corporations, unions, government agencies, and attorneys. Where can you study actuarial science in California? UC Santa Barbara's actuarial studies program
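The pricing problem described above can be sketched numerically. The following is a hypothetical illustration, not an actual method used by any particular insurer: the claim probability, average claim size, expense loading, profit margin, and the function names `pure_premium` and `gross_premium` are all invented for this example.

```python
# Hypothetical premium sketch: price a one-year policy so that expected
# claims, expenses, and a profit margin are all covered. All numbers are
# assumed for illustration only.

def pure_premium(claim_probability, average_claim):
    """Expected claim cost per policy (the 'pure premium')."""
    return claim_probability * average_claim

def gross_premium(claim_probability, average_claim,
                  expense_loading=0.25, profit_margin=0.05):
    """Pure premium grossed up for expenses and profit.

    The premium P solves P = pure + (expense_loading + profit_margin) * P,
    i.e. P = pure / (1 - expense_loading - profit_margin).
    """
    pure = pure_premium(claim_probability, average_claim)
    return pure / (1.0 - expense_loading - profit_margin)

# A 1-in-200 chance of a $40,000 claim in the policy year:
pure = pure_premium(1 / 200, 40_000)      # expected claim cost per policy
premium = gross_premium(1 / 200, 40_000)  # grossed up: pure / 0.70
print(round(pure, 2), round(premium, 2))
```

With these assumed numbers, the expected claim cost is about $200 per policy, and grossing up for the assumed 25% expense and 5% profit loadings gives a charged premium of roughly $286.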
Knowledge Base - Technical Articles

HowTo: Create a custom projection file with units of feet or meters in ArcMap

Software: ArcGIS - ArcEditor 8.3, 9.0, 9.1, 9.2, 9.3, 9.3.1; ArcGIS - ArcInfo 8.3, 9.0, 9.1, 9.2, 9.3, 9.3.1; ArcGIS - ArcView 8.3, 9.0, 9.1, 9.2, 9.3, 9.3.1

Instructions provided describe how to create a custom projection file for a projected coordinate system, using linear units in feet or meters, with the tools available in ArcMap.
- Start ArcMap with a new, empty map, and add the data that was created in the custom projected coordinate system.
- Navigate to View > Data Frame Properties > Coordinate System tab, and click on the New button > Projected Coordinate System ...
- In the top box on the New Projected Coordinate System dialog box, name the new system. The name cannot contain spaces, but can include underscores (_). This name is used for the .prj file name. The name entered here should not include the .PRJ extension. The projection parameters for the custom projection file must be obtained from the data source. Refer to Item 6 in the knowledge base article titled, "Projection Basics: What the GIS professional needs to know". A link to this article is included in the Related Information section below.
- Select the appropriate Projection from the drop-down list, and enter the values for the required parameters for that specific projection. The projection engine in ArcGIS Desktop calculates values to fifteen or sixteen significant digits. It is important to retain the decimal point in the values. If the numeric value is a repeating decimal, like .3333, extend the value to make up the required number of significant digits. For example, if the False Easting for the custom projection is 3280833.3333 US survey feet, enter:
- Click on the Select button to select a Geographic Coordinate System (GCS) for the custom projected coordinate system.
To select a suitable geographic coordinate system for the area of interest, see the link in the Related Information section for more information on what GCS and datum should be used for the data. Click OK on the Browse for a Coordinate System dialog box. Click OK again on the New Projected Coordinate System dialog box.
- On the Data Frame Properties > Coordinate System tab, click the Add to Favorites button. This writes a copy of the custom projection file to disk. The custom projection file is listed in the 'Favorites' folder on the Coordinate System tab. The actual location of this 'Favorites' directory is C:\Documents and Settings\<user_name>\Application Data\ESRI\ArcMap\Coordinate Systems. To make the new, custom projection file easily available for defining coordinate systems and projecting data in ArcToolbox, copy the custom projection file from that folder. Make a new folder in C:\Program Files\ArcGIS\Coordinate Systems. Name the new folder 'Custom PRJ Files'. Paste the custom projection file into that folder. This makes the custom projection file more accessible in ArcGIS Desktop, and also ensures against loss of the file.
- Click Apply and OK on the Data Frame Properties dialog box. The new custom projected coordinate system is assigned to the current ArcMap document.
- Add data to the map document, which is in a standard coordinate system, such as State Plane or UTM, or in a geographic coordinate system that: A. has the coordinate system defined, and B. is geographically located in the same area as the data in the custom coordinate system. If the datasets line up, the custom projection file has been created correctly.
- Use the custom projection file to define the coordinate system for the data created in the custom coordinate system, by using the Define Projection tool in ArcToolbox > Data Management Tools > Projections and Transformations.
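For reference, the .prj file produced by the steps above is a plain-text file containing a single well-known-text (WKT) string. The following is a hypothetical sketch of what such a file might contain for a Transverse Mercator system in US survey feet; the system name, the NAD 1983 datum choice, and every parameter value here are invented for illustration, and real values must come from the data source as the steps note:

```
PROJCS["Custom_TM_USfeet",
    GEOGCS["GCS_North_American_1983",
        DATUM["D_North_American_1983",
            SPHEROID["GRS_1980",6378137.0,298.257222101]],
        PRIMEM["Greenwich",0.0],
        UNIT["Degree",0.0174532925199433]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["False_Easting",3280833.333333333],
    PARAMETER["False_Northing",0.0],
    PARAMETER["Central_Meridian",-85.0],
    PARAMETER["Scale_Factor",0.9999],
    PARAMETER["Latitude_Of_Origin",30.5],
    UNIT["Foot_US",0.3048006096012192]]
```

Note how the False_Easting carries the repeating decimal out to the full number of significant digits, and how the final UNIT entry expresses the US survey foot as its length in meters; this UNIT line is what makes the projected coordinate system use feet rather than meters.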
- Projection Basics: What the GIS professional needs to know The following concepts are fundamental to understanding the use of map projections in ArcGIS. 1. Coordinate systems, also known as map projections, are arbitrary designations for spatial data. Their purpose is to provide a common basis for comm...
- What geographic coordinate system or datum should be used for my data? To determine which Geographic Coordinate System datum to use for data, select the appropriate Geographic Coordinate Systems by Area of Use link in the Related Information section below. This document lists Geographic Coordinate Systems (GCS) s...
Last Modified: 5/3/2011
I bet you listen to music. Not everyone does, but a good percentage of the population enjoys it in some form. Music is one of the rare things we all have in common. We listen to it when we're sad or happy or bored. We listen to our favorite songs to make it through a tough day. There are hundreds of genres -- something for everyone. Music is almost impossible to define, but we all know what it is. We don't always appreciate how much music impacts us. A song will inspire us, a band will change the way we dress for a year, and a concert can change our lives. Perhaps if we stop and listen, we'll realize how much music means to us. Contradicting my earlier statement that music cannot be defined, let's try to define it. The Official Webster's Collegiate Dictionary definition is: "The art or science of ordering tones or sounds on succession, in combination, and in temporal relationship to produce a composition having unity and continuity." When I asked friends and family for their definition, I got a better sense of how most people see music. One of the best definitions I received was from a classmate. She said: "Music is an expression of the soul, a way to show feelings and express yourself. Music is a way to connect, it allows us to communicate with others regardless of language or nationality. Music is something that moves us all." A family member explained it in a slightly more scientific way: "Music is the combination of human emotion and experience combined with mathematics and science." I think we can all agree that music isn't something easily defined or formulated because it has such an emotional quality. However, music does have its own math and formula to it. The number of spaces in intervals and the amount of beats in a measure are important in the construction of a song. Music is also made up of the science of sound waves and acoustics. Then why do we think of imagination and soulfulness when we think of music and musicians?
I think it's because music, like any other art form, provides a glimpse into the essence of humanity, or, simply, it helps us understand each other. Music and art are usually seen as luxuries rather than needs. But what if we do need music? What would happen if we lost it? Music unifies us. We identify with people who like the same music, and it creates a common bond. Lifelong friendships can be made from sharing a love of the same artists and genres. Often, musicians create communities like the high school band or orchestra. Kids who feel isolated because they don't do well in sports or academics can turn to playing an instrument as an outlet. Music has given many a sense of purpose and direction in their lives. Musicians also can hold concerts and events for charities. U2 is famous for doing so, helping third-world communities struck by natural disasters. Music can also provide a chance for escape. It could be argued this is not necessary, but what would you do if you had no relief from the stress and strain of everyday life? Of course you can't hide from your problems, but taking time to listen and relax to your favorite song can be revitalizing. Playing an instrument also can be beneficial. Instead of watching two hours of a reality TV show, you can learn a new song. You can join the band or orchestra in your school or a music group out of school. If you feel like it's too late to participate in school music programs, you can always pick up an instrument on your own. It's easy to learn the basics on guitar and piano and then start branching out. Playing an instrument improves finger dexterity in most cases, and gives you a sense of rhythm. Playing guitar has given me a sense of rhythm and improved my ear for music. Now I can count how many measures of four there are in the chorus of a favorite song. After you learn one instrument, it's easier to pick up another, especially if you can read music. 
If you have the proper technology, you can put songs together by layering different parts you've played earlier. However, there's no substitute for playing with other people. Starting a band can be hard, but if you can find people who all get along it's fun. As hard as we try, I don't think we'll ever be able to define music. It's intangible, it moves, it stirs something inside of us. It propels us to greatness. It makes a friend out of a stranger. It gives a community culture and a teenager an identity. Maybe music isn't just enjoyable and exciting, maybe it is necessary. If life were a movie, music would be the continuous soundtrack playing in the background. Emily Coleman is a sophomore at Frontier High School.
Agriculture, Forestry, and Fishing Research at NIOSH (2008)
Board on Agriculture and Natural Resources
Each report is produced by a committee of experts selected by the Academy to address a particular statement of task and is subject to a rigorous, independent peer review; while the reports represent views of the committee, they also are endorsed by the Academy.
The agriculture, forestry, and fishing sectors are the cornerstone of industries that produce food, fiber, and biofuel. The National Institute for Occupational Safety and Health (NIOSH) conducts research in order to improve worker safety and health in these sectors. This National Research Council report reviews the NIOSH Agriculture, Forestry, and Fishing Program to evaluate 1) the relevance of its work to improvements in occupational safety and health and 2) the impact of research in reducing workplace illnesses and injuries. The assessment reveals that the program has made meaningful contributions to improving worker safety and health in these fields. To enhance the relevance and impact of its work and fulfill its mission, the NIOSH Agriculture, Forestry, and Fishing Program should provide national leadership, coordination of research, and activities to transfer findings, technologies, and information into practice. The program will also benefit from establishing strategic goals and implementing a comprehensive surveillance system in order to better identify and track worker populations at risk.
- On the basis of the information provided by the AFF Program, remarks provided by stakeholders, and comments submitted by the public, the committee understands that the AFF Program has not fully engaged its stakeholders.
- The AFF Program appears to have had considerable difficulty in applying the principles of and engaging in surveillance.
- The AFF Program targeted specific populations that it deemed at higher risk than others but omitted certain other populations and fell short in defining the entire population of AFF workers at risk of injury and illness.
- The committee concluded that AFF Program activities or outputs are ongoing and are likely to produce improvements in worker health and safety, and gave the AFF Program an impact score of 3. That score was merited by the fact that the program has made some contributions to worker safety and health, as seen in the success of projects that have affected children, commercial fishermen, and tractor operators.
- The lack of consistent leadership, long-term strategic planning, and periodic review of that course has led to a piecemeal approach to the research program, and the program appears disjointed more often than not.
- The committee assigned the AFF Program a score of 4 for relevance because it found that research has been in high-priority and priority subject areas, and research has resulted in some successful transfer activities.
Robert W Arnold Week 2 Electronic Lab Notebook
Aipotu Part III: Molecular Biology
- Downloaded Aipotu.
- Spent 10 minutes or so getting used to the program, determining how to identify promoters, terminators, exons, introns, etc.
- Pulled up the DNA comparison between the upper gene window and lower gene window in Green 2.
- Upper gene window was blue, lower gene window was yellow, resulting in a combined color of green.
- Found that bases 79 and 80 were not compatible. Upper gene window had AC in 79 and 80 while the lower sequence had GG.
- Switched base number 68 in the lower gene window from an A to a C and then folded the protein.
- This resulted in the lower gene window producing a white color and the combined color of upper and lower being blue. Possible sign of blue being dominant over white.
- The difference between green and blue was the 11th amino acid.
- Determined the difference between blue and yellow was the 10th amino acid in each sequence. The sequence in the blue protein had Tyr coded for by TAC while the yellow had Trp coded for by TGG.
- The difference in the red strain also occurred in the 10th amino acid, which coded for Phe with TTC. The middle base determines whether the protein is blue or red: if it is A, the protein is blue, and if it is T, it is red.
- Red and blue combined create a purple color, and yellow and red create an orange.
- Results show incomplete dominance with the combination of color.
- Genetically mutated a strain of red protein by adding in a Tyr with TAC before the Phe. This caused the strand to become purple. Self-crossing this plant will create a pure-breeding purple organism.
- Interestingly, when a Trp was also added to the purple flower sequence, the flower became black.
- White was determined to be the default color in the absence of a fully functioning chain.
- The DNA sequences differed primarily in the range of bases 78 to 83, around the 10th and 11th amino acids. We will use green as the base color.
Green had an mRNA sequence of 5'-AUGUCUAAUCGGCACAUUUUGUUAGUGUACUGGCGGCAGUAG-3' (note the U's: this is the transcript rather than the DNA strand), which coded for a sequence of N-MetSerAsnArgHisIleLeuLeuValTyrTrpArgGln-C. For example, blue and green only differed by one base at base number 83, where a G was replaced by a T. The other starting colors all had this Cys but also had different bases 78-80 coding for different amino acids. Yellow had TGG coding for Trp, red had Phe coded by TTC, and white had Val coded by GTC.
- No, all white alleles do not have the same sequence. We found 4 or 5 different strains of white throughout our testing of different protein structures. Some different strains we stumbled upon had amino acid sequences of up to 17 or 18 aa long.
- The DNA sequences for all 4 starting colors were identical for the first 9 amino acids of the chain. Green and blue were identical until amino acid 11, where a Cys was switched for a Trp. When compared to green, yellow switched the 10th and 11th amino acids to TrpCys, red switched to PheCys, and the starting white switched to ValCys.
- In order to create a pure-breeding purple organism, we mutated a red protein strain by adding in a Tyr before the Phe at position 10. This caused the strain to become purple. From here, we self-crossed the organism, creating a pure-breeding purple line.
- The changing of the DNA sequence resulted in the changing of colors in each strain. The seemingly minute base changes often caused a radical result, changing the color from, say, green to yellow or to blue. This helped to show that minute changes can have drastic effects on an organism's overall phenotype. Along with this, it was determined that white was the default color for DNA strains that may not be functional due to mutations causing stop codons to come early in the amino acid chain. This was determined by mutating strains early in the chain. In today’s lab, DNA sequences were manipulated and mutated to produce different results.
The mutations affected flower color and resulted in white, red, orange, yellow, green, blue, purple or black. The template strand we used was a green strand with an mRNA sequence of 5'-AUGUCUAAUCGGCACAUUUUGUUAGUGUACUGGCGGCAGUAG-3' and an amino acid sequence of N-MetSerAsnArgHisIleLeuLeuValTyrTrpArgGln-C. From here, the green strain was compared to the other naturally occurring blue, yellow and white strains. It was determined that the differences in the DNA sequence were occurring between bases 78-83, where the 10th and 11th amino acids were being coded. In all of the naturally occurring strains except green, the 11th amino acid was changed to Cys. This Cys in place of Trp caused green to become blue. Then in the 10th amino acid spot, yellow had Trp, red had Phe, and white had Val. While these mutations were seemingly minute (most only involved a change of 2 base pairs), the effects on the color of the flower were drastic. After this was determined, we began to breed different strains to produce the oranges (red and yellow) and the purples (blue and red). It was also determined that white was the default flower color in the case of a long protein or one shortened by an early stop codon. Then came the process of mutation to produce a pure-breeding purple organism. This was accomplished by adding a Tyr amino acid before the Phe in red. This caused the plant to become purple, and then it was self-crossed to produce purple plants. We also found that by adding a Trp before the Tyr in a true purple flower, the flower became black. This assignment was extremely useful. It allowed the students to become familiar with Aipotu, which seems to be a very valuable tool, and also refreshed some topics in genetics and molecular biology that may not have been used in a while. The only real unanswered questions after today would be the issue with the introns and exons. A few times, a part of the intron would be changed and it would cause it to not be an intron anymore.
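The codon-level differences recorded above can be checked with a few lines of code. This sketch is not part of the Aipotu software; it simply applies the standard genetic code to the handful of color-determining codons mentioned in the notebook, and the function name `tenth_amino_acid` is invented for illustration.

```python
# Minimal sketch (not part of Aipotu): look up the amino acids coded by
# the 10th codon of each strain to show how a single-base change swaps
# the color-determining amino acid. Only the codons discussed in the
# notebook are included; assignments follow the standard genetic code.
CODON_TABLE = {
    "TAC": "Tyr",  # blue-type strains
    "TGG": "Trp",  # yellow-type strains
    "TTC": "Phe",  # red-type strains
    "GTC": "Val",  # the starting white strain
}

def tenth_amino_acid(codon):
    """Return the three-letter amino acid coded by a strain's 10th codon."""
    return CODON_TABLE[codon]

# The middle base alone separates blue (A) from red (T):
blue_codon = "TAC"
red_codon = blue_codon[0] + "T" + blue_codon[2]  # single-base change A -> T

print(tenth_amino_acid(blue_codon))  # Tyr
print(tenth_amino_acid(red_codon))   # Phe
```

Swapping one middle base converts TAC (Tyr, blue) into TTC (Phe, red), matching the observation above that seemingly minute base changes produce drastic phenotype differences.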
Saxon Hamwic was situated in an area now covered by the Queensland, Belvidere, Chapel and Crosshouse districts of Southampton. It is bounded by Oxford Avenue to the north, the River Itchen to the east, Marsh Lane and the Itchen Toll Bridge to the south and St. Andrew's Road, Kingsway and St. Mary's to the west. The middle Saxon (c.700-850) town of Hamwic was situated around what is now Northam and St Marys. Hamwic was an important port, and finds of pottery, glass, coins, stone and metalwork point to trading connections with Scandinavia, France, the Low Countries and the Rhineland. Excavations have also shown that many crafts and industries, including pottery making, iron working, lead making, weaving and bone working, were practiced in Hamwic. We can picture Hamwic as a busy, densely settled town of merchants and craftspeople. Hamwic declined towards the end of the 9th century, presumably as a result of economic and political changes brought about, in part, by Viking activity. The excavations at Hamwic have resulted in one of the best collections of Middle Saxon finds in Europe. A selection of Saxon objects is available in a Handling Box, which can be taken out to schools by our Education Officer.
This article was published in Australian Dictionary of Biography, Volume 1, (MUP), 1966 William Buckley (1780-1856), 'wild white man', was born at Marton, near Macclesfield, Cheshire, England, the son of a small farmer. He was reared by his maternal grandfather, who sent him to school and apprenticed him to a bricklayer. He joined the Cheshire Militia, and later the 4th Regiment. Because of his great height, 6 ft 6 ins (198 cm), he became pivot man of his company. In 1799 he served in the Netherlands and was wounded in action. After his return to England, he was convicted at the Sussex Assizes on 2 August 1802 of having received a roll of cloth knowing it to have been stolen, and was sentenced to transportation for life. He was taken to Port Phillip in April 1803 in the Calcutta with a party under Lieutenant-Governor David Collins, and there he and two companions absconded from the camp. Fearful, weary and hungry, they sent signals of distress to the Calcutta from the other side of Port Phillip Bay but these were not noticed. Buckley's friends turned back and were not heard of again. He fed on shellfish and berries, and was befriended by Aboriginals of the Watourong tribe, who believed the big white stranger to be a reincarnation of their dead tribal chief. He learnt their language and their customs, and was given a wife, by whom, he said, he had a daughter. For thirty-two years he lived mostly in a hut that he built near the mouth of Bream Creek on the coast of southern Victoria. Legends have grown up around his name, but a careful investigation of John Morgan, The Life and Adventures of William Buckley (Hobart, 1852), suggests that his account is close to fact. Buckley said there were occasional white visitors to Port Phillip during these years, but he was afraid to give himself up until July 1835, when he overheard the Aboriginals plotting to rob a visiting ship and murder the white intruders. He surrendered to the party under John Wedge at Indented Head. 
At first he had forgotten his own language, but he was identified by the tattoo mark on his arm, and the initials 'W.B.' Wedge, who thought he would be a valuable intermediary, obtained his pardon from Lieutenant-Governor (Sir) George Arthur. John Batman employed him as interpreter at a salary of £50, and he later became government interpreter. But he was confused in his loyalties, and felt that neither the Aboriginals nor the whites trusted him entirely. Unhappy and disillusioned, he left for Hobart in December 1837. He became assistant store-keeper at the Immigrants' Home, and from 1841 to 1850 was gate-keeper at the Female Factory. He retired on a pension of £12 to which the Victorian government added £40 a year. On 27 January 1840 he had married Julia Eagers (also known as Higgins), the widow of an emigrant, at St John's Church of England, New Town. She had two daughters. Buckley died at Hobart on 30 January 1856. He has generally been represented as a person of low intelligence, but his easy assimilation into an unfamiliar way of life may also suggest that he was intelligent, shrewd and courageous. Some authentic portraits exist, including sketches by Wedge, in the State Library of Victoria, and a portrait by Ludwig Becker, later copied by Nicholas Chevalier, which is owned by J. E. Pyke, of Hawthorn, Victoria. Marjorie J. Tipping, 'Buckley, William (1780–1856)', Australian Dictionary of Biography, National Centre of Biography, Australian National University, http://adb.anu.edu.au/biography/buckley-william-1844/text2133, accessed 23 May 2013. This article has been amended since its original publication.
<urn:uuid:6b4883d5-ffaa-4a5f-802c-c420468f26e3>
CC-MAIN-2013-20
http://adb.anu.edu.au/biography/buckley-william-1844
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.989421
863
3.125
3
A frog is a fresh-water amphibian of the family Ranidae, in the Order Anura. They are closely related to toads. The Ranidae are sometimes called the "true frogs" since a few members of other families also have common names including the word "frog." Types and characteristics of frogs Frogs are a diverse group with some 4800 species. Most spend their lives in or near a source of water (water frogs), although tree frogs live in moist environments that are not actually aquatic. The requirement for water becomes most acute for the egg and tadpole stages of the frog, yet here again some species are able to utilize temporary pools and water collected in the axils of plants. Frogs range in size from less than 50 mm to 300 mm in Conraua goliath, the largest known frog. All frogs have horizontal pupils, smooth skin and long legs with webbing between their toes. This family has a bicornuate tongue that is attached at the front. They also have a tympanum on each side of the head, which is involved in hearing. Most frogs have deep, booming calls, or croaks, with some being onomatopoeically represented by the word "ribbet" or "ribbit." Many species of frog secrete toxins from their skin when under threat. These toxins deter predatory animals from eating them, and some are extremely poisonous to humans. Natives of the Amazon region use the skin toxins of poison dart frogs to tip their darts. Distribution and Status Members of this family are found worldwide, but they have a limited distribution in South America and Australia. They do not occur in the West Indies and on most oceanic islands. In many parts of the world the frog population has declined drastically over the last few decades.
Pollutants are one cause of this decline, but other culprits include climatic changes, parasitic infestation, introduction of non-indigenous predators/competitors, infectious diseases, and urban encroachment. Life cycle The life cycle of a frog involves several stages. A female frog lays her eggs in a shallow pond or creek, where they will be sheltered from the current and from predators. The eggs, known as frogspawn, hatch into tadpoles. The tadpole stage develops gradually into an adolescent froglet, resembling an adult but retaining a vestigial tail. Finally the froglet develops into an adult frog. Typically, tadpoles are herbivores, feeding mostly on algae, whereas juvenile and adult frogs are rather voracious carnivores. The red-legged frog, for example, normally reproduces from November to early April because during these months the water is about six or seven degrees Celsius; under these cool conditions, embryonic survival is ensured. Amplexus is the process wherein the male grasps the female while she lays her eggs. At the same time, he fertilizes them with a fluid containing sperm. The eggs are about 2.0 to 2.8 millimetres in diameter and are dark brown. After about six to fourteen days, the eggs hatch between July and September into brown tadpoles that are about three inches long. The tadpoles then gradually lose their tails, grow legs, and change into a juvenile form with adult characteristics. A new frog In 2003, Franky Bossuyt of the Vrije Universiteit Brussel (Free University of Brussels) and S.D. Biji of the Tropical Botanic Garden and Research Institute in Palode, India reported the discovery of a new species of frog so distinct in appearance and DNA that it merited its own new family, the first new family for frogs since 1926. This new species, dubbed Nasikabatrachus sahyadrensis, is dark purple in color, seven centimeters in length, and has a small head and a pointy snout.
Genetically, its closest living relatives are the sooglossids found in the Seychelles. The new species was discovered in the Sahyadri (Western Ghats) Mountains in India.
<urn:uuid:be7828bc-fb93-4257-8736-4ece72418023>
CC-MAIN-2013-20
http://www.fact-index.com/f/fr/frog.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949473
876
3.4375
3
Heat is a major killer in the USA, and those at greatest risk are infants, children, seniors and people with chronic medical conditions. Heat was responsible for more deaths in the USA than any other weather-related cause between 2002 and 2011, says the National Weather Service. In that period, there were 1,185 heat deaths, compared with 1,139 hurricane deaths and 1,075 from tornadoes. June had a record number of high temperatures across the country, and more are predicted for July and August. Knowing how to stay cool can be a life saver, says Jay Dempsey, of the federal Centers for Disease Control and Prevention in Atlanta. People suffer heat-related illness when their bodies are unable to compensate and properly cool themselves. The body normally cools itself by sweating, but sometimes sweating isn't enough. In such cases, body temperature rises rapidly. Very high body temperatures may cause heat exhaustion and heat stroke, and can damage the brain or other organs. "We stress three things during heat waves," says Dempsey. "Staying cool, staying hydrated and staying informed. We also tell people not to rely on a fan to stay cool." He recommends checking on people at high risk at least twice a day. The CDC offers these precautions in the heat: • Find safe places. Air conditioning is the No. 1 protective factor. If you do not have it at home, spend time in shopping malls, movie theaters, libraries or public cooling centers. Cool baths or showers can also help lower body temperature. • Stay hydrated. Increase fluid intake, regardless of activity level. Avoid alcohol, caffeine and drinks with high sugar content because they cause fluids to be depleted more rapidly. Sports drinks help replace minerals and salt lost in sweat. Don't wait until you're thirsty. During strenuous activities, drink 16 to 32 ounces of cool fluids each hour. • Wear light, loose-fitting clothing.
Avoid dark colors, which absorb heat, and 100% cotton, which gets drenched with sweat, adding extra weight that can contribute to a rising body temperature. Fabrics that wick away moisture are best. • Reschedule exercise. Move your workout (or kids' playtimes) indoors or into the shade, preferably in the early morning or evening. • Stay out of hot cars. Never leave a person or pet in a parked car. Even if a window is open a crack, the interior can heat up dangerously fast.
<urn:uuid:2191e529-cc77-4a51-b994-21ea2313fbd1>
CC-MAIN-2013-20
http://www.firstcoastnews.com/news/health/article/262759/10/Safety-tips-for-hot-summer-days
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946435
500
3.28125
3
Sexually Transmitted Disease: What Is It? Alan McStravick for redOrbit.com – Your Universe Online A sexually transmitted disease (STD) is a disease that passes from an infected person to a non-infected person during anal, vaginal or oral sex. Depending on the disease transmitted, there can be a range of symptoms and health risks for the infected person. The Centers for Disease Control (CDC) has published, on its website, an action forum on the prevention and treatment of the most common STDs. On the site, the agency offers what it terms “effective strategies” for diminishing your overall risk. The first strategy is the most obvious for eliminating your risk of contracting an STD. It is also the least likely to be followed, as sexual intimacy is as basic a human need as food and water. The CDC states that abstinence, the avoidance of anal, vaginal or oral sex, is the most reliable way to avoid possible infection. Another strategy, meant to be used before possible contraction of an STD, is vaccination. According to the CDC, vaccines are safe and effective. The current vaccinations available protect the individual from both hepatitis B and the human papillomavirus (HPV). The agency states that HPV vaccines can protect both males and females from the most common strains of the virus, and recommends that an individual get all three doses, administered as shots, before becoming sexually active. Individuals can also reduce their risk of infection by engaging in a relationship that is mutually monogamous. This means that both you and your partner have an understanding that you will be sexually active only with each other. For this to be a reliable way to avoid infection, it is important to confirm with your partner that neither of you is infected with an STD. For those who are still sowing their oats, the CDC recommends limiting the overall number of sexual partners.
Even with this strategy, it is important that both you and your partners undergo STD testing and share those results with one another. If one chooses to have non-monogamous sex with multiple partners, the use of condoms is highly recommended. When used consistently and correctly, the male latex condom has been shown to be highly effective at limiting the spread of STDs. It is recommended that a condom be used every time you engage in anal, vaginal or oral sex. And lastly, it is important to know your own STD status. Regular testing keeps you informed about your own health. If you do have an infection, you can be active in protecting yourself and your partner from possible transmission. The CDC recommends you ask your health care provider to test you for STDs. You can’t be certain you have received the correct tests unless you specifically ask for them. Also, encourage your partner or partners to do the same. With the knowledge you gain through testing, you will find that many STDs are easily diagnosed and even treated. If you find that either you or your partner has an infection, it is important that you both receive treatment at the same time in order to prevent re-infection. STDs are not the end of the world or of your sexual life. Choosing the responsible route of diagnosis and treatment can still allow you to have a meaningful and fulfilling sexual relationship with your partner or partners.
<urn:uuid:8fc7f3c4-2ad5-4b77-8f6b-73972726e724>
CC-MAIN-2013-20
http://www.redorbit.com/news/health/1112766593/std-sexually-transmitted-diseases-what-are-they-011813/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956748
698
3.109375
3
From Cornell University News Service Living amid green space is highly beneficial to children ITHACA, N.Y. -- A house surrounded by nature seems to help boost a child's attention capabilities, a study by a Cornell University researcher suggests. "When children's cognitive functioning was compared before and after they moved from poor- to better-quality housing that had more green spaces around, profound differences emerged in their attention capacities even when the effects of the improved housing were taken into account," says Nancy Wells, assistant professor of design and environmental analysis in the New York State College of Human Ecology at Cornell. Wells also conducted a study that suggests the mental health of adults improves with a move from poor to quality housing. Although the green-space study sample was small -- only 17 children -- the statistical findings were highly significant, says Wells. Children in the study who had the greatest gains in terms of "greenness" between their old and new homes showed the greatest improvements in functioning. "The findings suggest that the power of nature is indeed profound," she says. To conduct the study, published in Environment and Behavior (2000, Vol. 32, pp. 775-795), the researcher assessed the extent of natural surroundings around the children's old and new homes by rating, for example, the amount of nature in the views from various rooms and the degree of the yard's natural setting. To assess their children's abilities to focus attention, parents answered a series of questions from the Attention Deficit Disorders Evaluation Scale, a nationally standardized measure of directed attention capacity. "The results suggest that the natural environment may play a far more significant role in the well-being of children within a housing environment than has previously been recognized," Wells says.
She notes that simple interventions, such as preserving existing trees, planting new trees or maintaining grassy areas, would likely have a significant impact on children's welfare. The study was funded in part by the University of Michigan and the U.S. Department of Agriculture (USDA) and its Forest Service. Wells' other study, which found a link between housing quality and mental health, appears in the Journal of Consulting and Clinical Psychology. Wells and her co-authors developed an observer-based rating of quality of homes occupied by 207 low- and middle-income women with at least one child. They also gauged the women's levels of psychological distress. In addition, these measurements were used in an urban sample of 31 low-income women before and after they moved into a home constructed in collaboration with Habitat for Humanity. "We consistently found that housing quality can affect mental health, in that better-quality housing was related to lower levels of psychological distress, while statistically taking into account the effects of income," says Wells. "The research suggests that significantly better housing quality is linked to improvements in psychological well-being. Such evidence is important and can be used to encourage legislators and policy-makers to promote housing improvements for low- and moderate-income families." The researchers concluded that improved housing quality can benefit mental health. In addition, follow-up interviews conducted two years later revealed that the women's levels of psychological distress remained low, suggesting that the improvements in mental health are unlikely to be a mere "honeymoon" effect. The study, co-authored by Cornell colleague Gary Evans and former Cornell undergraduates Hoi-Yan Erica Chan and Heidi Saltzman, was supported in part by the USDA, the John D. and Catherine T.
MacArthur Foundation Network on Socioeconomic Status and Health, the National Institute of Child Health and Human Development, and the University of Michigan.
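The phrase "statistically taking into account the effects of income" used above refers to covariate adjustment: regress the outcome on both the predictor of interest (housing quality) and the covariate (income), and inspect the predictor's coefficient. A minimal sketch of that idea follows; all numbers are synthetic and purely illustrative, and the hand-rolled `ols` helper is a stand-in for whatever statistical package the researchers actually used.

```python
# Covariate adjustment: does housing quality predict distress once
# income is "statistically taken into account"? Data are synthetic.

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. X is a list of rows."""
    n, k = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return beta

# Invented scores: each row of X is [intercept, housing_quality, income]
housing  = [2, 3, 5, 6, 8, 9, 4, 7]
income   = [20, 25, 40, 30, 55, 60, 35, 45]   # thousands, synthetic
distress = [9, 8, 5, 6, 3, 2, 7, 4]

X = [[1.0, h, inc] for h, inc in zip(housing, income)]
b0, b_housing, b_income = ols(X, distress)
print(f"housing-quality coefficient (income held constant): {b_housing:.2f}")
```

The housing coefficient stays negative (better housing, lower distress) even with income in the model, which is the pattern the study reports; with real survey data one would also examine standard errors, not just the point estimate.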
<urn:uuid:65f4ee5f-797b-46c0-a696-b4b488991750>
CC-MAIN-2013-20
http://www.scienceblog.com/community/older/2001/B/200111767.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.973065
731
3.015625
3
Disaster Information Kit for the Caribbean Media [Available in English only] First produced in 1995, with a sixth edition in 2004, the 100-page Disaster Information Kit for the Caribbean Media includes sections on Tropical Weather Systems, Earthquakes, Tsunamis, Volcanoes, Floods, Landslides, Technological and Man-Made Disasters, and Epidemics. Fact sheets for generic types of disasters include suggestions on messages that might be communicated by the media and glossaries of terms. The Disaster Information Kit is one of several sets of guidance and information materials produced through collaboration between the Caribbean Disaster Emergency Management Agency (CDEMA) and the UNESCO Office in Kingston.
<urn:uuid:4b7feed7-edda-4c03-9feb-72e84d625e29>
CC-MAIN-2013-20
http://www.unesco.org/new/fr/natural-sciences/priority-areas/sids/disaster-preparedness/unescos-past-activities-ie-pre-january-2005/disaster-information-kit-for-the-caribbean-media/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.787894
153
2.890625
3
Jerusalem’s Hadassah University Medical Center has announced a breakthrough in methods for cultivating embryonic stem cells that enables the next step in the development of stem cell therapy, and the world has taken notice. Announcing medical breakthroughs can be irresponsible. Such announcements can raise expectations and false hopes for cures that are plausible only decades in the future, or even impossible to attain. However, Hadassah’s advance, as the scientists report in the prestigious journal Nature Biotechnology, takes stem cell researchers closer to realizing their dream of manufacturing mass-market stem cell treatments for disorders such as Parkinson’s disease, diabetes and age-related macular degeneration. Lead researcher in the Hadassah study, Prof. Benjamin Reubinoff, director of the Hadassah Human Embryonic Stem Cells Research Center and an established and recognized researcher in the field, tells ISRAEL21c that stem cell therapy applications are not just science fiction. Within the next year or two, companies in the US and Hadassah’s technology company in Israel will start clinical trials on humans. His center’s advance – a novel technique that allows researchers to grow and cultivate embryonic cells in suspension – paves the way for making this therapy available to everyone, not just the rich. A decade in the making, so far “This advance is one important step forward,” says Reubinoff. “Human embryonic stem cells were derived more than 10 years ago. And during these years many scientists worldwide have been working on solving the problems and obstacles along the way to be able to exploit the potential for stem cell therapy. We’ve found a way to solve this obstacle, a concept known and existing for 20 years.”
Until now, for cultures to grow, the stem cells had to be grown on a substrate, which is extremely labor-intensive and limits the number of cells developed through cultivation. “There is an application to the FDA for a trial [in the US] to transplant stem cells into patients with spinal cord injury and they hope this clinical trial will start within the next year or two,” says Reubinoff, who adds that in Israel, “we are not very far away from the time we will start initial clinical trials. We still need to see that the cells [selected] will have a therapeutic effect and that the cells are constructed in a careful way – to avoid tumor formation.” The research team at Hadassah has started its own company called Cell Cure Neurosciences and is also hoping to begin clinical trials within the next two years, using stem cells to attempt to repair age-related macular degeneration in the eye, for which there is currently no cure. With the upcoming trials in humans in both the US and Israel, the promise that stem cell therapy may be able to “repair” degenerative or genetic diseases may be fulfilled sooner than was anticipated. Stem cell therapy accessible to millions But before any immediate application, whether for eyes, Alzheimer’s, diabetes or Parkinson’s, the researchers in Israel are happy just to contribute to this promising field. Their discovery, they say, opens up the possibility that stem cell therapy could be within reach of millions of people, not just the select few with the means to afford it. The aim in stem cell therapy is to grow millions of embryonic stem cells that can be matured into any kind of cell found in our body, potentially providing an endless supply of cells that could repair damage caused by specific diseases or replace missing cells. Until now, researchers have been extremely limited in the scope of their applications because cultivating stem cells is so labor-intensive.
With their new advance, the Hadassah researchers say they have created optimal conditions for the embryonic stem cells to grow while floating in a medium. Via this method, they say, the cells do not differentiate into specific cell types, which would be an undesirable and dangerous effect. “Until now human embryonic cells derived from embryos developed in colonies,” Reubinoff explains. “We showed you can actually take an embryonic cell and place it into a medium without it being attached to surface and feeder cells. In our research we took an IVF embryo, with permission, one that was five days old. The stem cells multiplied and gave rise to many cultures. “We show that under specific conditions we’ve developed you can divide and grow the cells in suspension, opening the window for the development of systems that will allow the large-scale development of bulk cultures of stem cells needed for patients,” concludes Reubinoff. This means that stem cells could be grown in large tanks, and cultivated in quantities big enough to meet the world’s needs.
<urn:uuid:16b10782-e109-4ef0-b5da-34026f894bc5>
CC-MAIN-2013-20
http://israel21c.org/health/moving-closer-to-stem-cell-therapy/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.9436
1,007
2.90625
3
Transforming Robotics With Biologically Inspired Learning Models Aisha Sohail describes the Neuromorphics Lab at CELEST and the work of building artificial brains to be used in robotics June 10, 2011 I walked into the building and there was a human-sized robot waiting to greet me. It shook my hand, took my coat and brought me to sit in the room where my interview was going to be held. It asked me whether I needed a drink, and then proceeded to clean the countertops and water the plants. When I asked whether there was a reason it was working so hard, it simply said: "I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do." If you have ever seen Stanley Kubrick's tribute to humanoid computers, "2001: A Space Odyssey," then you already know I was merely making an allusion ... What actually happened during my first visit to the Neuromorphics Lab at Boston University was a slightly different, though no less entertaining, scenario. I walked into an office and there was a Roomba-like robot approaching and avoiding multicolored objects. It made its decisions based on a reward history ("bad robot" vs. "good robot"). On a desk, I noticed a dismembered radio control (RC) helicopter with half of its parts missing. Peeking into an additional room, I couldn't help but notice a toy car with a camera installed at the helm, and EEG electrodes hanging off on all sides. All around me, researchers were creating and refining artificial brain systems in virtual environments before deploying them in robots. Even before sitting down to talk with anyone about job opportunities, I knew this was the place for me. The Neuromorphics Lab is researching innovative robot-learning algorithms. Imagine having a cleaning robot that did what no other cleaning robot is currently able to do: learn. It could learn the one place in your house where your dog always loves to wipe his grubby little paws when he comes inside.
It could learn that Tuesdays are softball practice, which means a certain trail of dirt leading up to your room. The keyword here, obviously, is learning. The problem with the conventional approach to robotics is that it requires explicit programming for robots to carry out specific tasks, leading to a lack of autonomous, general-purpose artificial intelligence, or AI. Working in collaboration with Hewlett-Packard (HP) laboratories, the Neuromorphics Lab, part of the National Science Foundation (NSF)-sponsored Center of Excellence for Learning in Education, Science and Technology (CELEST), has undertaken the ambitious project of creating a brain on a chip--a fundamental predecessor to the design of autonomous robotics and general intelligence. Researchers in the Neuromorphics Lab are closer than ever to being able to accomplish the goal of creating a general mammalian-type intelligence. Most people have never even heard of the term "neuromorphic"--technology whose form ("morphic") is based on brain ("neuro") architecture. The neural models being developed by the Neuromorphics Lab implement "whole brain systems," or large-scale brain models that allow virtual and robotic agents to learn on their own to interact with new environments. Like any intelligent biological system, artificial-autonomous and adaptive systems need three things: a mind, a brain and a body. The CELEST models run on a software platform called Cog, which serves as the operating system within which the artificial "brain" is developed. Along with the hardware--currently general-purpose processors to be augmented by innovative nanotechnologies under development at HP--Cog offers an ideal environment for the design and testing of whole-brain simulation. The work of the Neuromorphics Lab focuses primarily on engineering the mind of the adaptive system.
Once complete, a virtual animat, equipped with the artificial brain, will be able to learn how to navigate in its environment based on its inherent capabilities for responding to motivations, evaluating sensory data, and making intelligent decisions that are transformed into motor outputs. As a new employee of the Neuromorphics Lab, I recently participated in a demonstration of the adaptive robot. I watched as it was able to learn to distinguish and develop a preference for a set of multicolored blocks. Although this may seem like a trivial task, one that comes naturally to humans, the immensity of this task lies in the fact that the animat is not explicitly programmed to approach certain colored blocks, but rather to learn which objects to approach and avoid based on rewards and punishments associated with them. The process is similar to how animals learn by trial and error to interact with a world they were not "pre-programmed" to act upon. Whole-brain systems are difficult to engineer and test. The Neuromorphics Lab accelerates these processes by training the animat brain in virtual environments. Not being bound by a physical substrate such as a robot, researchers are able to test thousands of different brains in parallel on high-performance computing resources, such as NSF's TeraGrid, and use the best versions on the robot. The platform the developers selected is the iRobot Create, a robot that looks a lot like the Roomba vacuum-cleaning robot. Since the animat is not explicitly programmed to solve specific tasks, there is greater flexibility for the robot's prospective functions. Eventually, it will function on an autonomous level and be able to take on more complex adaptive tasks such as intelligently interacting with and caring for the elderly, autonomously exploring and collecting samples on an alien planet, and generally employing more humanoid behavior.
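The approach/avoid behavior described above — learning object preferences from "good robot"/"bad robot" feedback rather than explicit programming — is, at its core, reinforcement learning. The toy sketch below illustrates the idea with a value table and epsilon-greedy exploration; the block colors, reward values, and learning rates are all invented for illustration, and the lab's actual models are large-scale neural systems, not lookup tables.

```python
import random

# Toy reward-driven preference learning, loosely in the spirit of the
# approach/avoid demo described above. Each colored block has a hidden
# reward ("good robot" = +1, "bad robot" = -1); the agent estimates
# block values from noisy feedback and learns which ones to approach.
random.seed(0)

HIDDEN_REWARD = {"red": +1.0, "green": -1.0, "blue": +1.0}  # invented

values = {color: 0.0 for color in HIDDEN_REWARD}  # learned estimates
ALPHA, EPSILON = 0.2, 0.1                         # learning / exploration rates

def choose(colors):
    """Epsilon-greedy: usually approach the best-valued block,
    occasionally explore a random one."""
    if random.random() < EPSILON:
        return random.choice(colors)
    return max(colors, key=values.__getitem__)

for trial in range(500):
    color = choose(list(HIDDEN_REWARD))
    reward = HIDDEN_REWARD[color] + random.gauss(0, 0.3)  # noisy feedback
    values[color] += ALPHA * (reward - values[color])     # running estimate

preferred = max(values, key=values.__getitem__)
avoided = min(values, key=values.__getitem__)
print(f"learned to approach {preferred!r} and avoid {avoided!r}")
```

After a few hundred trials the estimates settle near the hidden rewards, so the agent approaches the positively rewarded blocks and avoids the punished one — the same trial-and-error pattern the article attributes to the animat, just in miniature.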
This is a challenge for any artificial intelligence program under development: it is simply impossible to program a lifetime's set of knowledge into a robot! That is why it is so important for the next generation of artificial intelligence to be able to learn throughout a lifetime without needing constant reprogramming. Science fiction is rife with examples of learning robots, and HAL 9000 from Kubrick's "2001: A Space Odyssey" will forever come to mind as the media's favorite malfunctioning robot. Although confident about the advent of general intelligence machines in the near future, researchers at the Neuromorphics Lab are optimistic that misbehaving robots like HAL will live only in science fiction movies. Future robots will not be programmed, but will be trained. The key is to educate them well! This work was partially funded by the Center of Excellence for Learning in Education, Science and Technology (CELEST), a National Science Foundation Science of Learning Center (NSF SMA-0835976) and by the DARPA SyNAPSE program, contract HR0011-09-3-0001. The views, opinions and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Department of Defense or the National Science Foundation. -- Aisha Sohail, Boston University, email@example.com This Behind the Scenes article was provided to LiveScience in partnership with the National Science Foundation. Trustees of Boston University Science of Learning Centers #0835976 CELEST: A Center of Excellence for Learning in Education, Science, and Technology LiveScience.com: Behind the Scenes: Transforming Robotics with Biologically Inspired Learning Models: http://www.livescience.com/14441-biologically-inspired-learning-robotics-bts.html Neuromorphics Laboratory: http://cns.bu.edu/nl/
<urn:uuid:a3902fa9-4439-412f-bc30-f55ccd3e4736>
CC-MAIN-2013-20
http://nsf.gov/discoveries/disc_summ.jsp?cntn_id=119750&org=SBE
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941605
1,574
2.84375
3
Dr. Paul Auerbach is the world's leading outdoor health expert. His blog offers tips on outdoor safety and advice on how to handle wilderness emergencies. Beat the Heat The most effective ways to avoid heat-related illnesses are to: - Stay well hydrated. Adequate water ingested during exercise is not harmful, does not cause cramps, and is your best protection. If you are sweating a great deal, you should consider replacing electrolytes by drinking an electrolyte- and sugar-enriched beverage such as Gatorade. - Be very watchful of the very young and very old, as they do not regulate body temperature well. - Stay in shape. - Don't drink alcohol or use recreational drugs. - Condition yourself for the environment. - Wear clothing that is appropriate for the environment, so that you can shed layers as necessary. - If you are sweating, towel yourself off frequently. - Stay out of the sun on a hot day. - Avoid taking medications that inhibit the sweating process. - Finally, use common sense - if you are in the heat and feeling poorly, seek shade or another cooler location as soon as possible.
Focus On: Plan to Save Mali Cultural Treasures
Wed 20 Feb, 05:11 am

At a meeting at UNESCO headquarters in Paris, an action plan was drawn up to rescue what remains of Mali's cultural heritage.

Radio France Internationale, 19 February 2013
Eleven of Timbuktu's 16 mausoleums have been demolished and up to 3,000 priceless manuscripts have been destroyed since Islamists took control of northern Mali in 2012.

UN News Service, 18 February 2013
International experts and decision-makers meeting at a United Nations forum in Paris today adopted an action plan to rehabilitate and safeguard Mali's cultural heritage, which has been the target of attacks by Islamic extremists in recent months.

Egypt State Information Service (Cairo), 31 January 2013
Grand Imam of al-Azhar Ahmed el-Tayyeb condemned the burning of the Ahmed Baba Institute of Higher Learning and Islamic Research in Mali.

Radio France Internationale, 28 January 2013
Islamists torched a building where priceless ancient manuscripts were stored as they fled Mali's famous desert city of Timbuktu, which French-led troops were surrounding on Monday.
Mary, Mary, quite contrary, how does your nanogarden grow?

Harvard engineer Wim Noorduin has a green thumb. Only his thumb is just a few microns wide. By carefully controlling gradients of chemicals, he guided the construction of flower-like crystal structures to match their larger biological forms. It's certainly art, but it also demonstrates a masterful manipulation of chemistry on the nano scale.

Just how small are they? As NPR reports, these flowers could fit in the lapel of the tiny Abraham Lincoln statue on the back of a penny (back when pennies had the Lincoln Memorial on them, anyway). These electron microscope images are false-colored to recreate fantastic flowers, and these manipulations will one day help control the construction of useful microstructures.

If you're seriously engineering-inclined, here's the original research as it appears in Science.
Chandra finds largest galaxy cluster in early universe
January 10th, 2012 in Space & Earth / Astronomy

Composite image of the El Gordo galaxy cluster. (X-ray: NASA/CXC/Rutgers/J. Hughes et al; Optical: ESO/VLT & SOAR/Rutgers/F. Menanteau; IR: NASA/JPL/Rutgers/F. Menanteau)

(PhysOrg.com) -- An exceptional galaxy cluster, the largest seen in the distant universe, has been found using NASA's Chandra X-ray Observatory and the National Science Foundation-funded Atacama Cosmology Telescope (ACT) in Chile.

Officially known as ACT-CL J0102-4915, the galaxy cluster has been nicknamed "El Gordo" ("the big one" or "the fat one" in Spanish) by the researchers who discovered it. The name, in a nod to the Chilean connection, describes just one of the remarkable qualities of the cluster, which is located more than seven billion light years from Earth. This large distance means that it is being observed at a young age.

"This cluster is the most massive, the hottest, and gives off the most X-rays of any known cluster at this distance or beyond," said Felipe Menanteau of Rutgers University in New Brunswick, N.J., who led the study.

Galaxy clusters, the largest objects in the universe that are held together by gravity, form through the merger of smaller groups or sub-clusters of galaxies. Because the formation process depends on the amount of dark matter and dark energy in the universe, clusters can be used to study these mysterious phenomena. Dark matter is material that can be inferred to exist through its gravitational effects, but does not emit or absorb detectable amounts of light. Dark energy is a hypothetical form of energy that permeates all space and exerts a negative pressure that causes the universe to expand at an ever-increasing rate.

"Gigantic galaxy clusters like this are just what we were aiming to find," said team member Jack Hughes, also of Rutgers.
"We want to see if we understand how these extreme objects form using the best models of cosmology that are currently available."

Although a cluster of El Gordo's size and distance is extremely rare, it is likely that its formation can be understood in terms of the standard Big Bang model of cosmology. In this model, the universe is composed predominantly of dark matter and dark energy, and began with a Big Bang about 13.7 billion years ago.

The team of scientists found El Gordo using ACT thanks to the Sunyaev-Zeldovich effect. In this phenomenon, photons in the cosmic microwave background interact with electrons in the hot gas that pervades these enormous galaxy clusters. The photons acquire energy from this interaction, which distorts the signal from the microwave background in the direction of the clusters. The magnitude of this distortion depends on the density and temperature of the hot electrons and the physical size of the cluster.

X-ray data from Chandra, together with optical data from the European Southern Observatory's Very Large Telescope, an 8-meter observatory in Chile, show that El Gordo is, in fact, the site of two galaxy clusters running into one another at several million miles per hour. This and other characteristics make El Gordo akin to the well-known object called the Bullet Cluster, which is located almost 4 billion light years closer to Earth.

As with the Bullet Cluster, there is evidence that normal matter, mainly composed of hot, X-ray-bright gas, has been wrenched apart from the dark matter in El Gordo. The hot gas in each cluster was slowed down by the collision, but the dark matter was not.

"This is the first time we've found a system like the Bullet Cluster at such a large distance," said Cristobal Sifon of Pontificia Universidad Católica de Chile (PUC) in Santiago. "It's like the expression says: if you want to understand where you're going, you have to know where you've been."
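The size of the Sunyaev-Zeldovich distortion is usually quantified by the Compton y-parameter. The scale involved can be sketched with a rough back-of-the-envelope calculation; the electron density, temperature, and path length below are generic, textbook-typical assumptions for a massive cluster, not measurements of El Gordo:

```python
# Rough order-of-magnitude sketch of the thermal Sunyaev-Zeldovich effect.
# The cluster values below are illustrative assumptions only.

SIGMA_T = 6.652e-25   # Thomson scattering cross-section, cm^2
M_E_C2_KEV = 511.0    # electron rest energy, keV
T_CMB = 2.725         # CMB temperature, K

def compton_y(n_e_cm3, kT_e_keV, path_length_cm):
    """Compton y-parameter for a uniform slab of hot electrons."""
    return SIGMA_T * n_e_cm3 * (kT_e_keV / M_E_C2_KEV) * path_length_cm

# Assumed values typical of a massive cluster:
n_e = 1e-3     # electron density, cm^-3
kT_e = 10.0    # electron temperature, keV
L = 3.086e24   # ~1 megaparsec of path through the gas, cm

y = compton_y(n_e, kT_e, L)

# In the low-frequency (Rayleigh-Jeans) part of the CMB spectrum,
# the apparent temperature change is Delta T / T ~ -2y.
delta_T = -2.0 * y * T_CMB

print(f"y ~ {y:.1e}, Delta T ~ {delta_T * 1e6:.0f} microkelvin")
```

Even for a very massive cluster, y comes out to only a few times 10^-5, i.e. a temperature decrement of a few hundred microkelvin against the CMB, which is why sensitive millimeter-wave telescopes like ACT are needed to detect clusters this way.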
These results on El Gordo are being announced on 10 January 2012 at the 219th meeting of the American Astronomical Society in Austin, Texas. A paper describing the results, "The Atacama Cosmology Telescope: ACT-CL J0102−4915 'El Gordo', A Massive Merging Cluster at Redshift 0.87" by Felipe Menanteau et al, has been accepted for publication in The Astrophysical Journal.

Provided by JPL/NASA

"Chandra finds largest galaxy cluster in early universe." January 10th, 2012. http://phys.org/news/2012-01-chandra-largest-galaxy-cluster-early.html
Allergy - Immunology- Allergic disorders including hay fever, food allergies, and drug allergies. Often specialize also in asthma treatment. Evaluate patients who may have immunological disorders (trouble fighting off infections, or frequent infections). Do testing, environmental control, and treatment, often with "allergy shots" or desensitization.

Ambulatory Care- Outpatient care. Usually in a "medicenter" or walk-in clinic. Great for minor ailments that come up suddenly when you can't get an appointment with your family doctor. Convenient, as you usually don't require an appointment.

Anesthesiology- Hospital- or surgicenter-based M.D. who provides anesthesia for major and minor surgery. Many specialize in pain treatment.

Cardiology- Medical doctors who diagnose and treat diseases of the heart, lungs, and circulatory system. Interventional cardiologists do procedures such as angioplasty, pacemakers, and cardioversion.

Dermatology- Medical doctors who specialize in diseases of the skin, hair, and nails. Although not formally trained in surgery, many include surgical treatment of skin cancer. Modern trained dermatologists offer non-surgical cosmetic procedures.

Emergency Medicine- Medical doctors who specialize in acute medical care. Most often work in hospital emergency rooms.

Endocrinology- Specialize in diseases of the glandular systems, including the thyroid, female hormonal problems, diabetes, and the pituitary and adrenal glands.

Family Practice- The primary physician for adults and children.

Gastroenterology- Diseases of the stomach, esophagus, liver, and intestines, including the colon.

General Surgery- The pre-operative, operative, and post-operative care of surgical patients across a broad span of surgical conditions affecting most areas of the body. Can also involve comprehensive management of the trauma victim or the critically ill.

Hematology/Oncology- Specialize in medical diagnosis and treatment of cancer. Specialists in choosing and administering chemotherapy.
Also deal with diseases of the blood.

Infectious Diseases- Specialize in diagnosis and treatment of diseases spread by viruses, bacteria, and other organisms.

Internal Medicine- Treat mainly adults and specialize in cardiovascular, respiratory, and GI diseases, diabetes, and gerontology. Some patients choose internists as their family physician.

Maxillofacial Surgery- Oral surgery.

Neonatology- Pediatricians who specialize in treatment of newborn infants.

Nephrology- Specialize in diseases of the kidneys and bladder. Treat hypertension and monitor renal transplant patients.

Neurology- Specialize in diagnosis and treatment of diseases of the brain and nervous system.

Neurosurgery- Surgeons who operate on and treat injuries to the brain, spinal cord, and other nervous structures (carpal tunnel, etc.).

Obstetrics/Gynecology- Obstetricians deliver babies and treat the mother during and after pregnancy. Gynecologists specialize in treatment of the female reproductive system. OB/GYNs can do both elements or confine their practice to one or the other.

Ophthalmology- Medical doctors specializing in surgery of the eye. (Optometrists fit glasses and treat minor eye diseases and are not medical doctors.)

Orthopedic Surgery- Surgeons specializing in the musculoskeletal system. Treat sprains, fractures, and other injuries. Perform joint repair and total joint replacement.

Otolaryngology- Surgeons who specialize in surgery and diseases of the ears, nose, throat, sinuses, head, and neck. Some ENTs also subspecialize in facial plastic surgery.

Pathology- A laboratory-based physician specializing in interpreting disease in tissue samples sent by other physicians. Usually works within a hospital.

Pediatrics- Medical doctors specializing in the treatment of infants, children, and adolescents.

Physical Medicine - Rehab- Medical doctors specializing in the treatment of musculoskeletal problems like back and neck pain, tendinitis, pinched nerves, and more.
Plastic Surgery- Surgery to correct functional and cosmetic deformities of the face, head, body, and extremities. Repairs scars or burned skin, and reconstructs structures destroyed by cancer or accidents. Includes cosmetic surgery such as breast augmentation or reduction and body contouring (liposuction).

Podiatry- The diagnosis and medical and/or surgical treatment of the lower extremity below the knee. Podiatrists generally treat conditions of the foot and ankle. Common conditions treated include sprains, fractures, skin disorders, infections, chronic wounds, heel pain, ingrown toenails, warts, hammertoes, and bunions. Preventive and palliative solutions include routine care, braces, splints, orthotics (arch supports), shoes, physical therapy, injections, tapings, casts, etc.

Psychiatry- Prevention, diagnosis, and treatment of mental and emotional disorders, including depression, anxiety disorders, substance abuse, developmental disabilities, and sexual dysfunction. Treatment includes psychotherapy, psychoanalysis, diagnostic tests, medications, and intervention with individuals and families who are coping with stress, crises, and other emotional problems. Sub-specialization exists in child, adolescent, and geriatric psychiatry.

Psychology- Treat mental, emotional, and behavioral disorders. These range from short-term crises, such as difficulties resulting from adolescent rebellion, to more severe, chronic conditions such as schizophrenia.

Pulmonary- Diagnosis and treatment of diseases of the lungs and airways; includes such conditions as pneumonia, cancer, pleurisy, asthma, bronchitis, emphysema, and other disorders of the lungs.

Radiation Oncology- Specialty of radiology. Treatment of cancers and other diseases using radiation.

Radiology- The use of various modalities such as X-ray, mammography, ultrasound, MRI, CAT, and nuclear medicine to diagnose and treat disease.

Rheumatology- Diagnosis and non-surgical treatment of diseases of the joints, muscles, bones, and tendons.
Included are such conditions as arthritis, back pain, muscle strains, and collagen diseases.

Thoracic Surgery- Heart surgery and lung surgery.

Urology- Treatment of diseases of the urinary tract, both male and female, and of the reproductive system of the male. Organs include the kidneys, bladder, prostate gland, adrenal gland, penis, and testes.

Vascular Surgery- The branch of surgery concerned with surgical interventions on arteries and veins, as well as conservative therapies for disease of the peripheral vascular system. Surgery of the heart is the specialty of the cardiothoracic surgeon.
Martian paydirt: Barely a month along, Curiosity has a major find

The Mars rover Curiosity has opened a new era of planetary exploration. One month after landing safely on the Martian surface, Curiosity has uncovered evidence of what may have been an ancient riverbed. This is a milestone for a mission that has barely begun the meticulous work of uncovering the Red Planet's secrets.

NASA scientists are ecstatic about the pictures Curiosity has beamed back to Earth of possible water flows. The photos show smooth rocks and water-transported gravel from other sites, material that was probably churned up when a waist-high river flow carried it downstream, long before humans existed on nearby Earth.

Much observation and many experiments have yet to be performed, but scientists are confident that the evidence supports the hypothesis that rivers once flowed on the Martian surface. Given that, they will move on to the next phase of the mission -- searching for carbon-based molecules; carbon is the element that all known life has in common. As it ambles to Mount Sharp, a 3-mile-high peak, Curiosity will look for carbon in areas that could have supported life.

At the rate it continues to thrill and surprise NASA scientists, we shouldn't be shocked if the robot rover comes across a Motel 6 quietly ensconced in the bleak Martian landscape.

First published October 1, 2012, 12:00 am
Chapter 13 Managing Groups and Teams

The coordination needed by a symphony to perform in unison is a prime example of teamwork. © 2010 Jupiterimages Corporation

What's in It for Me?

Reading this chapter will help you do the following:
- Recognize and understand group dynamics and development.
- Understand the difference between groups and teams.
- Understand how to organize effective teams.
- Recognize and address common barriers to team effectiveness.
- Build and maintain cohesive teams.

Figure 13.2 The P-O-L-C Framework

Groups and teams are ubiquitous on the organizational landscape, and managers will find that team management skills are required within each of the planning-organizing-leading-controlling (P-O-L-C) functions.
For instance, planning may often occur in teams, particularly in less centralized organizations or toward the higher levels of the firm. When making decisions about the structure of the firm and individual jobs, managers conducting their organizing function must determine how teams will be used within the organization. Teams and groups have implications for the controlling function because teams require different performance assessments and rewards. Finally, teams and groups are a facet of the leading function. Today's managers must be both good team members and good team leaders. Managing groups and teams is a key component of leadership. In your personal life, you probably already belong to various groups, such as the group of students in your management class; you may also belong to teams, such as an athletic team or a musical ensemble. In your career, you will undoubtedly be called on to be part of, and most likely to manage, groups and teams.
Thomism is the body of teaching that follows the theology and philosophy of Thomas Aquinas (d. 1274). Aquinas left a massive amount of writings, dealing with many of the same issues that people wrestle with in modern times. His most well-known work is the Summa Theologica, a huge work that deals not only with theology, but also lays a foundation for philosophy, psychology, and government. He deals with each topic by answering almost every conceivable question, dividing each problem into many subparts, each of which proposes all the objections imaginable and then answers each objection. Thomas Aquinas' work is therefore both a statement of theology and a defense against critics. Aquinas provides a well-reasoned foundation that answers many issues in Christian theology and philosophy.

Distinctives of Thomistic theology and philosophy include the following:
- The human mind is capable of reasoning through problems based on observing effects in the world. Thus man can reason to the existence of God by observing effects in the universe. Divine truths that are beyond what we observe must be revealed by God.
- When a man observes an effect, he can know that it has a cause. Learning about that cause is a natural desire of the human intellect.
- Therefore humans can observe nature and learn some things about God.
- The essence of a thing (i.e., what it is; its attributes) is distinct from the existence of the thing (i.e., that it is). Thus we can reason about existence separately from essence. Logically, existence precedes essence, so we must answer questions about existence before we can answer questions about essence.
- God is a necessary being and is simple (i.e., not compound). Just as a stone can be 100% gray and 100% hard, every attribute God has, He is that completely and necessarily.
- God is eternal, which is not "in time" but rather not bound by time.
- We know about God through analogy.
Thus we know God is good and powerful, but we know "good" or "power" in a way that is analogous to how God is good or powerful, not in exactly the same univocal sense. This is true because man is finite and God is infinite. Thus man can know God, but in an analogous sense.
- Modern problems in epistemology (how we know) are ultimately solved through metaphysics (how we exist) and by the analogy of existence. Thomism denies that the knowledge in our mind is a representation of reality, holding instead that it is another instance of reality.
- God causes things in a manner consistent with their essence; thus when God causes movement in the human will, He does so through human free will, and not contrary to human free will.

Authors who are influenced by Thomism include Norman Geisler, Thomas Howe, Joseph Owens, Jacques Maritain, George Klubertanz, Etienne Gilson, R. P. Phillips, and many others.

Sources on how Thomism can be applied to evangelical theology and philosophy include:
- Norman Geisler, who published a four-volume Systematic Theology and a book called Thomas Aquinas: An Evangelical Appraisal.
- Etienne Gilson, Being and Some Philosophers.
- Henry Babcock Veatch, Two Logics: The Conflict Between Classical and Neo-Analytic Philosophy.

For the beginner interested in Aquinas, and for those who have little exposure to classical philosophy, I'd suggest starting with Geisler's book Thomas Aquinas. For those who want to read Thomas, I suggest starting with Aquinas' work titled Summa Contra Gentiles, which is a bit more digestible, then trying On Truth (De Veritate), a three-volume set that is a little shorter. For those who have studied under modern analytic philosophers, the Veatch book Two Logics will show you the profound difference between classical philosophy and what you've learned.

Most of Thomas Aquinas' writings can be found online in English by doing an internet search. However, works such as the Summa Theologica assume that the reader is already well-versed in metaphysics.
Here I want to highlight this concept by asking a few foundational questions. Fundamentally, what kind of concept is it? How does it function in social interpretation, description, or explanation? And how does it function as a component of empirical investigation?

The concept of moral economy was extensively developed by E. P. Thompson in The Making of the English Working Class (1963) and an important essay, "The Moral Economy of the English Crowd in the Eighteenth Century," originally published in Past and Present in 1971 and included in Customs in Common: Studies in Traditional Popular Culture. The concept derives from Thompson's treatment of bread riots in eighteenth-century Britain. In MEWC Thompson writes:

In 18th-century Britain riotous actions assumed two different forms: that of more or less spontaneous popular direct action; and that of the deliberate use of the crowd as an instrument of pressure, by persons "above" or apart from the crowd. The first form has not received the attention which it merits. It rested upon more articulate popular sanctions and was validated by more sophisticated traditions than the word "riot" suggests. The most common example is the bread or food riot, repeated cases of which can be found in almost every town and county until the 1840s. This was rarely a mere uproar which culminated in the breaking open of barns or the looting of shops. It was legitimised by the assumptions of an older moral economy, which taught the immorality of any unfair method of forcing up the price of provisions by profiteering upon the necessities of the people. (MEWC, 62-63)

After describing a number of bread riots in some detail, Thompson writes, "Actions on such a scale ... indicate an extraordinarily deep-rooted pattern of behaviour and belief .... These popular actions were legitimised by the old paternalist moral economy" (66).
And he closes this interesting discussion with these words: "In considering only this one form of 'mob' action we have come upon unsuspected complexities, for behind every such form of popular direct action some legitimising notion of right is to be found" (68). Thompson often describes these values as "traditional" or "paternalist" -- working in opposition to the values and ideas of an unfettered market; he contrasts "moral economy" with the modern "political economy" associated with liberalism and the ideology of the free market.

In "The Moral Economy of the Crowd" Thompson puts his theory this way:

It is possible to detect in almost every eighteenth-century crowd action some legitimising notion. By the notion of legitimation I mean that the men and women in the crowd were informed by the belief that they were defending traditional rights or customs; and, in general, that they were supported by the wider consensus of the community. On occasion this popular consensus was endorsed by some measure of licence afforded by the authorities. More commonly, the consensus was so strong that it overrode motives of fear or deference. ("Moral Economy," CIC 188)

It is plain from these passages that Thompson believes that the "moral economy" is a real historical factor, consisting of the complex set of attitudes and norms of justice that are in play within this historically presented social group. As he puts the point late in the essay, "We have been examining a pattern of social protest which derives from a consensus as to the moral economy of the commonweal in times of dearth" (247). So the logic of Thompson's ideas here seems fairly clear: there were instances of public disorder ("riots") surrounding the availability and price of food, and there is a hypothesized "notion of right" or justice that influenced and motivated participants.
This conception of justice is a socially embodied historical factor, and it partially explains the behavior of the rural people who mobilized themselves to participate in the disturbances. He recapitulates his goal in the essay "Moral Economy Reviewed" (also included in Customs in Common) in these terms: "My object of analysis was the mentalité, or, as I would prefer, the political culture, the expectations, traditions, and indeed, superstitions of the working population most frequently involved in actions in the market" (260). These shared values and norms play a key role in Thompson's reading of the political behavior of the individuals in these groups.

So these hypotheses about the moral economy of the crowd serve both to help interpret the actions of a set of actors involved in food riots, and to explain the timing and nature of food riots. We might say, then, that the concept of "moral economy" contributes both to a hermeneutics of peasant behavior and a causal theory of peasant contention.

Now move forward two centuries. Another key use of the concept of moral economy occurs in treatments of modern peasant rebellions in Asia. Most influential is James Scott's important book, The Moral Economy of the Peasant: Rebellion and Subsistence in Southeast Asia. Scholars of the Chinese Revolution borrowed from Scott in offering a range of interpretations of peasant behavior in the context of CCP mobilization; for example, James Polachek, "The Moral Economy of the Kiangsi Soviet (1928-34)," Journal of Asian Studies 42, no. 4 (1983): 805-830. And most recently, Kevin O'Brien has made use of the idea of a moral economy in his treatment of "righteous protest" in contemporary China (Rightful Resistance in Rural China). So scholars interested in the politics of Asian rural societies have found the moral economy concept to be a useful one.

Scott puts his central perspective in these terms:

We can learn a great deal from rebels who were defeated nearly a half-century ago.
If we understand the indignation and rage which prompted them to risk everything, we can grasp what I have chosen to call their moral economy: their notion of economic justice and their working definition of exploitation--their view of which claims on their product were tolerable and which intolerable. Insofar as their moral economy is representative of peasants elsewhere, and I believe I can show that it is, we may move toward a fuller appreciation of the normative roots of peasant politics. If we understand, further, how the central economic and political transformations of the colonial era served to systematically violate the peasantry's vision of social equity, we may realize how a class "of low classness" came to provide, far more often than the proletariat, the shock troops of rebellion and revolution. (MEP, 3-4) Scott's book represents his effort to understand the dynamic material circumstances of peasant life in colonial Southeast Asia (Vietnam and Burma); to postulate some central normative assumptions of the "subsistence ethic" that he believes characterizes these peasant societies; and then to explain the variations in political behavior of peasants in these societies based on the moments of inconsistency between material conditions and aspects of the subsistence ethic. And he postulates that the political choices for action these peasant rebels make are powerfully influenced by the content of the subsistence ethic. Essentially, we are invited to conceive of the "agency" of the peasant as being a complicated affair, including prudential reasoning, moral assessment based on shared standards of justice, and perhaps other factors as well. So, most fundamentally, Scott's theory offers an account of the social psychology and agency of peasants. There are several distinctive features of Scott's programme. One is his critique of narrow agent-centered theories of political motivation, including particularly rational choice theory. 
(Samuel Popkin's The Rational Peasant: The Political Economy of Rural Society in Vietnam is the prime example.) Against the idea that peasants are economically rational agents who decide about political participation based on a narrowly defined cost-benefit analysis, Scott argues for a more complex political psychology incorporating socially shared norms and values. But a second important feature is Scott's goal of providing a somewhat general basis for explanation of peasant behavior. He wants to argue that the subsistence ethic is a widely shared set of moral values in traditional rural societies -- with the consequence that it provides a basis for explanation that goes beyond the particulars of Vietnam or Burma. And he has a putative explanation of this commonality as well -- the common existential circumstances of traditional family-based agriculture. One could pull several of these features apart in Scott's treatment. For example, we could accept the political psychology -- "People are motivated by a locally embodied sense of justice" -- but could reject the generalizability of the subsistence ethic -- "Burmese peasants had the XYZ set of local values, while Vietnamese peasants possessed the UVW set of local values." This programme suggests several problems for theory and for empirical research. Are there social-science research methods that would permit us to "observe" or empirically discern the particular contents of a normative worldview in a range of different societies, in order to assess whether the subsistence ethic that Scott describes is widespread? Are peasants in Burma and Vietnam as similar as Scott's theory postulates? How would we validate the implicit theory of political motivation that Scott advances (calculation within the context of normative judgment)? Are there other important motivational factors that are perhaps as salient to political behavior as the factors invoked by the subsistence ethic? 
Where does Scott's "thicker" description of peasant consciousness sit with respect to fully ethnographic investigation? So to answer my original question -- what kind of concept is the "moral economy"? -- we can say several things. It is a proto-theory of the theory of justice that certain groups possess (18th-century English farmers and townspeople, 20th-century Vietnamese peasants). It implicitly postulates a theory of political motivation and political agency. It asserts a degree of generality across peasant societies. It is offered as a basis for both interpreting and explaining events -- answering the question "What is going on here?" and "Why did this event take place?" In these respects the concept is both an empirical construct and a framework for thinking about agency; so it can be considered both in terms of its specific empirical adequacy and, more broadly, the degree of insight it offers for thinking about collective action.
Why Are Gas Prices So High? Recently a youngziner asked us, "Does the fuel we use at gas stations come from Japan? Is this the reason why gas prices are going up?" It is a very thoughtful question and one we believe deserves an in-depth reply. Gasoline demand in the US Gasoline prices at the pump have been getting very expensive recently. The last time we saw prices this high was in July 2008. Although the US is the 3rd largest crude oil producer in the world, it imports about 51% of its oil -- most of which comes from Canada, Venezuela, Saudi Arabia, Mexico and Nigeria. This crude oil goes through a refining process in which different products, such as gasoline and other petroleum products, are drawn off at different temperatures. The largest US refineries are located in the Gulf of Mexico region. Refined gasoline and other byproducts are transported upstream along the Mississippi River, and by truck from the river heads to gas stations all over the US. Law of Demand & Supply Prices of all products are affected by what economists call "Supply and Demand" -- let's try to understand that. Imagine there are 20 kids in a clubhouse and only 5 computers available to play your favorite video game. The demand for computers will be far greater than the supply. Now, imagine you could use candy to trade for computer time. Do you think you would have to give up more candy for the computer time than in another clubhouse where there are 20 kids and 20 computers? Think of situations where there are 20 kids and 40 computers, or a case where you have an unlimited supply of candy. You may have rightly guessed that the more computers there are, the less candy the time will cost you. Also, if candy has no value for you (unlimited supply), you may be willing to give whatever it takes to get the computer time! So you see how the price (amount of candy) is a function of supply (availability of computers) and demand (kids).
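The clubhouse story can be turned into a toy pricing model. The function and its numbers below are illustrative inventions, not from the article; they just capture the idea that price rises when demand (kids) outstrips supply (computers):

```python
def candy_price(kids, computers):
    """Toy model: candies charged per hour of computer time.

    Price rises with the demand/supply ratio and never
    drops below one candy per hour.
    """
    if computers <= 0:
        raise ValueError("no computers, no market")
    return max(1, kids // computers)

# 20 kids, 5 computers: scarce supply, high price (4 candies)
# 20 kids, 20 computers: balanced market, low price (1 candy)
for computers in (5, 20, 40):
    print(computers, "computers ->", candy_price(20, computers), "candies/hour")
```

With 20 kids, halving the number of computers roughly doubles the candy price, which is the see-saw behavior the article describes.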
When either demand increases or there is a supply problem, prices tend to go up. How do natural disasters affect prices? Weather events such as hurricanes or floods, or political problems in countries that export oil, can affect the movement of oil tankers bringing crude oil to the refineries from various parts of the world. Demand for gasoline tends to go up during the summer season when people drive more, or after disasters, when people need to rebuild their communities. As prices go up, demand starts to fall because fewer people can afford the fuel. So you see how demand and supply change continuously -- like a see-saw -- and cause gas prices to fluctuate. More recently, governments around the world have been printing money to ease the lives of people affected by the recent recession. When the supply of money around the world goes up (like the unlimited candy situation), the prices of goods go up. It is true that the Japanese reconstruction effort has had some effect on oil prices. But as you have seen, a combination of other factors -- the world money supply, natural disasters, and higher gasoline production costs among them -- is creating a hole in our pockets every time we visit a gas station.
As you will see in the chapter on scaling, it may become important to facilitate occasional face-to-face meetings among subgroups of users. Thus it will be helpful to record their country of residence and postal code (what Americans call "Zoning Improvement Plan code" or "ZIP code").

create table users (
	user_id			integer primary key,
	first_names		varchar(50),
	last_name		varchar(50) not null,
	email			varchar(100) not null unique,
	-- we encrypt passwords using operating system crypt function
	password		varchar(30) not null,
	registration_date	timestamp(0)
);

Notice that the comment about password encryption is placed above, rather than below, the column name and that the primary key constraint is clearly visible to other programmers. It is good to get into the habit of writing data model files in a text editor and including comments and examples of the queries that you expect to support. If you use a desktop application with a graphical user interface to create tables you're losing a lot of important design information. Remember that the data model is the most critical part of your application. You need to think about how you're going to communicate your design decisions to other programmers.

After a few weeks online, someone says, "wouldn't it be nice to see the user's picture and hyperlink through to his or her home page?" After a few more months ...

create table users (
	user_id			integer primary key,
	first_names		varchar(50),
	last_name		varchar(50) not null,
	email			varchar(100) not null unique,
	password		varchar(30) not null,
	-- user's personal homepage elsewhere on the Internet
	url			varchar(200),
	registration_date	timestamp(0),
	-- an optional photo; if Oracle Intermedia Image is installed
	-- use the image datatype instead of BLOB
	portrait		blob
);

The table just keeps getting fatter. As the table gets fatter, more and more columns are likely to be NULL for any given user. With Oracle 9i you're unlikely to run up against the hard database limit of 1000 columns per table.
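The DDL in this chapter is Oracle-flavored, but the same design can be exercised in any SQL database. Here is a rough sketch using SQLite from Python; note that SQLite treats the varchar lengths as advisory and has no timestamp(0) type, so the declarations are approximations, and the sample row is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
create table users (
  user_id           integer primary key,
  first_names       varchar(50),
  last_name         varchar(50) not null,
  email             varchar(100) not null unique,
  -- store a hash here, never the cleartext password
  password          varchar(30) not null,
  registration_date timestamp
)""")
conn.execute("insert into users values (1, 'Wile E.', 'Coyote', "
             "'wile@example.com', 'IFUx42bQzgMjE', '2003-01-01')")

# the UNIQUE constraint on email rejects a duplicate registration
duplicate_rejected = False
try:
    conn.execute("insert into users values (2, 'W.', 'Coyote', "
                 "'wile@example.com', 'x', '2003-01-02')")
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The point of the sketch is that the constraints in the data model, not application code, are what guarantee one account per email address.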
Nor is there a storage efficiency problem. Nearly every database management system is able to record a NULL value with a single bit, even if the column is defined char(500) or whatever. After a few more years of feature requests, the table might look like this:

create table users (
	user_id			integer primary key,
	first_names		varchar(50),
	last_name		varchar(50) not null,
	email			varchar(100) not null unique,
	password		varchar(30) not null,
	-- user's personal homepage elsewhere on the Internet
	url			varchar(200),
	registration_date	timestamp(0),
	-- an optional photo; if Oracle Intermedia Image is installed
	-- use the image datatype instead of BLOB
	portrait		blob,
	-- with a 4 GB maximum, we're all set for Life of Johnson
	biography		clob,
	birthdate		date,
	-- current politically correct column name would be "gender"
	-- but data models often outlive linguistic fashion so
	-- we stick with more established usage
	sex			char(1) check (sex in ('m','f')),
	country_code		char(2) references country_codes(iso),
	postal_code		varchar(80),
	home_phone		varchar(100),
	work_phone		varchar(100),
	mobile_phone		varchar(100),
	pager			varchar(100),
	fax			varchar(100),
	aim_screen_name		varchar(50),
	icq_number		varchar(50)
);

Still, something seems unclean about having to add more and more columns to deal with the possibility of a user having more and more phone numbers. Medical informaticians have dealt with this problem for many years. The example above is referred to as a "fat data model". In the hospital world you'll very likely find something like this for storing patient demographic and insurance coverage data. But for laboratory tests, the fat approach begins to get ugly. There are thousands of possible tests that a hospital could perform on a patient. New tests are done every day that a patient is in the hospital. Some hospitals have experimented with a "skinny" data model for lab tests. The table looks something like the following:

create table labs (
	lab_id		integer primary key,
	patient_id	integer not null references patients,
	test_date	timestamp(0),
	test_name	varchar(100) not null,
	test_units	varchar(100) not null,
	test_value	number not null,
	note		varchar(4000)
);

-- make it fast to query for "all labs for patient #4527"
-- or "all labs for patient #4527, ordered by recency"
create index labs_by_patient_and_date on labs(patient_id, test_date);

-- make it fast to query for "complete history for patient #4527 insulin levels"
create index labs_by_patient_and_test on labs(patient_id, test_name);

Note that this table doesn't have a lot of integrity constraints. If you were to specify patient_id as unique, that would limit each hospital patient to having only one test done. Nor does it work to specify the combination of patient_id and test_date as unique, because there are fancy machines that can do multiple tests at the same time on a single blood sample, for example.

We can apply this idea to user registration:

create table users (
	user_id			integer primary key,
	first_names		varchar(50),
	last_name		varchar(50) not null,
	email			varchar(100) not null unique,
	password		varchar(30) not null,
	registration_date	timestamp(0)
);

create table users_extra_info (
	user_info_id	integer primary key,
	user_id		not null references users,
	field_name	varchar(100) not null,
	field_type	varchar(100) not null,
	-- one of the three columns below will be non-NULL
	varchar_value	varchar(4000),
	blob_value	blob,
	date_value	timestamp(0),
	check ( not (varchar_value is null and
	             blob_value is null and
	             date_value is null))
	-- in a real system, you'd probably have additional columns
	-- to store when each row was inserted and by whom
);

-- make it fast to get all extra fields for a particular user
create index users_extra_info_by_user on users_extra_info(user_id);

Here is an example of how such a data model might be filled:

user_id  first_names  last_name  email              password
1        Wile E.      Coyote     email@example.com  IFUx42bQzgMjE

user_info_id  user_id  field_name       field_type  varchar_value  blob_value                 date_value
1             1        birthdate        date        --             --                         1949-09-17
2             1        biography        blob_text   --             Created by Chuck Jones...  --
3             1        aim_screen_name  string      iq207          --                         --
4             1        annual_income    number      35000          --                         --

If you're using a fancy commercial RDBMS and wish to make queries like the following really fast, check out bitmap indices, often documented under "Data Warehousing":

select avg(varchar_value)
from users_extra_info
where field_name = 'annual_income'

Bitmap indices are intended for columns of low cardinality, i.e., not too many distinct values compared to the number of rows in the table. You'd build a bitmap index on the field_name column.

One complication of this kind of data model is that it is tough to use simple built-in integrity constraints to enforce uniqueness if you're also going to use the users_extra_info table for many-to-one relations. For example, it doesn't make sense to have two rows in the info table, both for the same user ID and both with a field name of "birthdate". A user can only have one birthday. Maybe we should add a unique constraint:

create unique index users_extra_info_user_id_field_idx
on users_extra_info (user_id, field_name);

(Note that this will make it really fast to fetch a particular field for a particular user as well as enforcing the unique constraint.) But what about "home_phone"? Nothing should prevent a user from getting two home phone numbers and listing them both. If we try to insert two rows with the value "home_phone" in the field_name column and 451 in the user_id column, the RDBMS will abort the transaction due to violation of the unique constraint defined above. How to deal with this apparent problem? One way is to decide that the users_extra_info table will be used only for single-valued properties. Another approach would be to abandon the idea of using the RDBMS to enforce integrity constraints and put logic into the application code to make sure that a user can have only one birthdate.
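Both sides of this trade-off are easy to observe by running a trimmed-down version of the skinny model in SQLite (the column list is shortened and the sample values are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table users_extra_info (
  user_info_id  integer primary key,
  user_id       integer not null,
  field_name    varchar(100) not null,
  varchar_value varchar(4000),
  date_value    timestamp
);
create unique index users_extra_info_user_id_field_idx
  on users_extra_info (user_id, field_name);
""")
db.execute("insert into users_extra_info values (1, 1, 'birthdate', null, '1949-09-17')")

# the unique index correctly refuses a second birthdate for user #1 ...
second_birthdate_rejected = False
try:
    db.execute("insert into users_extra_info values (2, 1, 'birthdate', null, '1950-01-01')")
except sqlite3.IntegrityError:
    second_birthdate_rejected = True

# ... but it also refuses a perfectly legitimate second home phone
db.execute("insert into users_extra_info values (3, 1, 'home_phone', '555-0100', null)")
second_phone_rejected = False
try:
    db.execute("insert into users_extra_info values (4, 1, 'home_phone', '555-0101', null)")
except sqlite3.IntegrityError:
    second_phone_rejected = True
```

The single index cannot distinguish single-valued fields such as birthdate from multi-valued ones such as home_phone, which is exactly the dilemma described above.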
A complex but complete approach is to define RDBMS triggers that run a short procedural program inside the RDBMS (in Oracle this would be a program in the PL/SQL or Java programming languages). This program can check that uniqueness is preserved for fields that indeed must be unique.

One argument in favor of fat-style is maintainability and self-documentation. Fat is the convention in the database world. A SQL programmer who takes over your work will expect fat. He or she will sit down and start to understand your system by querying the data dictionary, the RDBMS's internal representation of what tables are defined. Here's how it looks with Oracle:

select table_name from user_tables;

describe users
*** SQL*Plus lists the column names ***
describe other_table_name
*** SQL*Plus lists the column names ***
describe other_table_name_2
*** SQL*Plus lists the column names ***
...

Suppose that you were storing all of your application data in a single table:

create table my_data (
	key_id		integer,
	field_name	varchar,
	field_type	varchar,
	field_value	varchar
);

This is an adequate data model in the same sense that raw instructions for a Turing machine are an adequate programming language. Querying the data dictionary would be of no help toward understanding the purpose of the application. One would have to sample the contents of the rows of my_data to see what was being stored.
Suppose, by contrast, you were poking around in an unfamiliar database and encountered this table definition:

create table address_book (
	address_book_id	integer primary key,
	user_id		not null references users,
	first_names	varchar(30),
	last_name	varchar(30),
	email		varchar(100),
	email2		varchar(100),
	line1		varchar(100),
	line2		varchar(100),
	city		varchar(100),
	state_province	varchar(20),
	postal_code	varchar(20),
	country_code	char(2) references country_codes(iso),
	phone_home	varchar(30),
	phone_work	varchar(30),
	phone_cell	varchar(30),
	phone_other	varchar(30),
	birthdate	date,
	days_in_advance_to_remind	integer,
	date_last_reminded	date,
	notes		varchar(4000)
);

Note the use of ISO country codes, constrained by reference to a table of valid codes, to represent country in the table above. You don't want records with "United States", "US", "us", "USA", "Umited Stares", etc. These codes are maintained by the ISO 3166 Maintenance Agency, from which you can download the most current data in text format. See http://www.iso.ch/iso/en/prods-services/iso3166ma/index.html.

Skinny is good when you are storing wildly disparate data on each user, such that you'd expect more than 75 percent of columns to be NULL in a fat data model. Skinny can result in strange-looking SQL queries and data dictionary opacity.

When building user groups you might want to think about on-the-fly groups. You definitely want to have a user group where each member is represented by a row in a table: "user #37 is part of user group #421". With this kind of data model people can explicitly join and separate from user groups. It is also useful, however, to have groups generated on-the-fly from queried properties. For example, it might be nice to be able to say "this discussion forum is limited to those users who live in France" without having to install database triggers to insert rows in a user group map table every time someone registers a French address.
Rather than denormalizing the data, it will be much cleaner to query for "users who live in France" every time group membership is needed.

A typical data model will include a USERS table and a USER_GROUPS table. This leads to a bit of ugliness in that many of the other tables in the system must include two columns, one for user_id and one for user_group_id. If the user_id column is not NULL, the row belongs to a user. If the user_group_id is not NULL, the row references a user group. Integrity constraints ensure that only one of the columns will be non-NULL.

How should membership itself be represented? One superficially appealing idea is to store group IDs right in the users table:

create table users (
	user_id		integer primary key,
	...
	-- a space-separated list of group IDs
	group_memberships	varchar(4000),
	...
);

In this case, we'd store the string "17 18" in the group_memberships column. This is known as a repeating group or a multivalued column, and it causes well-known problems: the RDBMS cannot enforce referential integrity on the embedded IDs, and a question such as "which users belong to Group 22?" requires string parsing rather than an indexed lookup. The standard solution is a mapping table:

create table user_group_map (
	user_id		not null references users,
	user_group_id	not null references user_groups,
	unique(user_id, user_group_id)
);

Note that in Oracle the unique constraint results in the creation of an index. Here it will be a concatenated index starting with the user_id column. This index will make it fast to ask the question "To which groups does User 37 belong?" but will be of no use in answering the question "Which users belong to Group 22?"

A good general rule is that representing a many-to-one relation requires two tables: Things A and Things B, where many Bs can be associated with one A. Another general rule is that representing a many-to-many relation requires three tables: Things A, Things B, and a mapping table to associate arbitrary numbers of As with arbitrary numbers of Bs.
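The asymmetry of the concatenated index can be sketched in SQLite. The extra group-leading index below is an addition of this sketch, not part of the text's DDL; it is what makes "Which users belong to Group 22?" fast as well (the table contents are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table user_group_map (
  user_id       integer not null,
  user_group_id integer not null,
  unique (user_id, user_group_id)
);
-- second, group-leading index so membership can be walked in both directions
create index ugm_by_group on user_group_map(user_group_id);

insert into user_group_map values (37, 421);
insert into user_group_map values (37, 22);
insert into user_group_map values (99, 22);
""")

# "To which groups does User 37 belong?" -- served by the unique index
groups_of_37 = [r[0] for r in db.execute(
    "select user_group_id from user_group_map where user_id = 37 order by 1")]

# "Which users belong to Group 22?" -- served by the second index
members_of_22 = [r[0] for r in db.execute(
    "select user_id from user_group_map where user_group_id = 22 order by 1")]
```

Both queries are answered from the one mapping table; the difference is purely which index can be used for each direction.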
With the three tables users, user_groups, and user_group_map, answering the question "Is Norman Horowitz part of the Tanganyikan Cichlid interest group and therefore entitled to their private page?" requires a query like the following:

select user_groups.group_name
from users, user_groups, user_group_map
where users.first_names = 'Norman'
and users.last_name = 'Horowitz'
and users.user_id = user_group_map.user_id
and user_groups.user_group_id = user_group_map.user_group_id;

or, equivalently:

select count(*)
from user_group_map
where user_id = (select user_id
                 from users
                 where first_names = 'Norman'
                 and last_name = 'Horowitz')
and user_group_id = (select user_group_id
                     from user_groups
                     where group_name = 'Tanganyikans')

Suppose instead that we denormalize by adding a tanganyikan_group_member_p column to the users table. This column will be set to "t" when a user is added to the Tanganyikans group and reset to "f" when a user unsubscribes from the group. This feels like progress. We can answer our questions by querying one table instead of three. Historically, however, RDBMS programmers have been bitten badly any time that they stored derivable data, i.e., information in one table that can be derived by querying other, more fundamental, tables. Inevitably a programmer comes along who is not aware of the unusual data model and writes application code that updates the information in one place but not another. What if you really need to simplify queries? Use a view:
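The three-way JOIN version of the membership check can be run against a toy SQLite database (the names and IDs are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table users (user_id integer primary key,
                    first_names varchar(50), last_name varchar(50));
create table user_groups (user_group_id integer primary key,
                          group_name varchar(100));
create table user_group_map (user_id integer, user_group_id integer);

insert into users values (1, 'Norman', 'Horowitz');
insert into user_groups values (10, 'Tanganyikans');
insert into user_group_map values (1, 10);
""")

# the three-way JOIN from the text: which of Norman's groups match?
rows = db.execute("""
select user_groups.group_name
from users, user_groups, user_group_map
where users.first_names = 'Norman'
and users.last_name = 'Horowitz'
and users.user_id = user_group_map.user_id
and user_groups.user_group_id = user_group_map.user_group_id
""").fetchall()
is_member = rows == [('Tanganyikans',)]
```

The query touches three tables but, with indexes on the join columns, each step is an indexed lookup rather than a scan.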
create view tanganyikan_group_members as
select *
from users
where exists (select 1
              from user_group_map, user_groups
              where user_group_map.user_id = users.user_id
              and user_group_map.user_group_id = user_groups.user_group_id
              and group_name = 'Tanganyikans');

For the case in which this information is needed almost every time the USERS table is queried, define a view that augments users:

create view users_augmented as
select users.*,
       (select count(*)
        from user_group_map ugm, user_groups ug
        where users.user_id = ugm.user_id
        and ugm.user_group_id = ug.user_group_id
        and ug.group_name = 'Tanganyikans') as tanganyikan_group_membership
from users;

This results in a virtual table containing all the columns of users plus an additional column called tanganyikan_group_membership that is 1 for users who are members of the group in question and 0 for users who aren't. In Oracle, if you want the column to bear the standard ANSI boolean data type values, you can wrap the DECODE function around the query in the select list:

decode((select count(*) ...), 1, 't', 0, 'f') as tanganyikan_group_membership_p

Notice that we've added a "_p" suffix to the column name, harking back to the Lisp programming language, in which functions that could return only boolean values conventionally had names ending in "p".

Keep in mind that data model complexity can always be tamed with views. Note, however, that views are purely syntactic. If a query runs slowly when fed directly to the RDBMS, it won't run any faster simply by having been renamed into a view. Were you to have 10,000 members of a group, each of whom was requesting one page per second from the group's private area on your Web site, doing three-way JOINs on every page load would become a substantial burden on your RDBMS server. Should you fix this by denormalizing, thus speeding up queries by perhaps 5X over a join of indexed tables? No.
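SQLite supports the same view mechanism, so the tanganyikan_group_members view can be tried directly (the sample data is invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table users (user_id integer primary key, last_name varchar(50));
create table user_groups (user_group_id integer primary key,
                          group_name varchar(100));
create table user_group_map (user_id integer, user_group_id integer);

insert into users values (1, 'Horowitz');
insert into users values (2, 'Smith');
insert into user_groups values (10, 'Tanganyikans');
insert into user_group_map values (1, 10);

-- the view from the text, verbatim apart from the trimmed users columns
create view tanganyikan_group_members as
select * from users
where exists (select 1
              from user_group_map, user_groups
              where user_group_map.user_id = users.user_id
              and user_group_map.user_group_id = user_groups.user_group_id
              and group_name = 'Tanganyikans');
""")
members = [r[1] for r in db.execute("select * from tanganyikan_group_members")]
```

Queries against the view are rewritten into the underlying three-table query at execution time, which is why a view simplifies the SQL without changing its cost.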
Speed it up by 1000X by caching the results of authorization queries in the virtual memory of the HTTP server process. Clean up ugly queries with views. Clean up ugly performance problems with indices. If you're facing Yahoo! or Amazon levels of usage, look into unloading the RDBMS altogether with application-level caching.

Or perhaps you're building a public online learning community. You want users to be identified and accountable, at the very least to their Internet Service Provider. So you'll want to limit access to only those registrants who've verified receipt of an email message at the address that they supplied upon registering. You may also want to reject registration from users whose only email address is at hotmail.com or a similar anonymous provider. A community may need to change its policies as the membership grows. One powerful way to manage user access is by modeling user registration as a finite-state machine, such as the one shown in figure 5.1.

[Figure 5.1: the registration states "Not a user", "Need Email Verification", "Need Admin Approval", "Authorized", "Rejected" (reachable from any pre-authorization state), "Banned", and "Deleted", with arrows marking the legal transitions among them.]

Rather than checking columns admin_approved_p, email_verified_p, banned_p, deleted_p in the users table on every page load, this approach allows application code to examine only a single state column. The authors built a number of online communities with this same finite-state machine and for each one made a decision with the publisher as to whether or not any of these state transitions could be made automatically. The Siemens Sharenet knowledge sharing system, despite being inaccessible from the public Internet, elected to require administrator approval for every new user. By contrast, on photo.net users would go immediately from "Not a user" to "Authorized".
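The machine of figure 5.1 can be sketched in a few lines. The transition table below is reconstructed from the figure's description, so treat the exact set of legal moves as an approximation:

```python
# Legal transitions reconstructed from figure 5.1; "rejected" is reachable
# from any pre-authorization state, and banned/deleted users can be restored.
LEGAL = {
    "not a user":              {"need email verification",
                                "need admin approval", "authorized"},
    "need email verification": {"need admin approval", "authorized", "rejected"},
    "need admin approval":     {"need email verification", "authorized", "rejected"},
    "authorized":              {"banned", "deleted"},
    "banned":                  {"authorized"},
    "deleted":                 {"authorized"},
    "rejected":                set(),
}

def transition(current, target):
    """Return the new state, refusing moves the figure does not allow."""
    if target not in LEGAL[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

A photo.net-style policy is transition("not a user", "authorized") in a single step, while a Sharenet-style policy would route every new user through "need admin approval" first; the table stays the same and only the path chosen by the application changes.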
Questions: Do you store users' passwords in the database encrypted or non-encrypted? What are the advantages and disadvantages of encryption? What columns in your tables will enable your system to handle the query "Find me users who live within 50 kilometers of User #37"? Make sure that your data model and answers are Web-accessible and easy to find from your main documentation directory, perhaps at the URL.

One of the things that users love about the Web is the way in which computation is discretized. A desktop application is generally a complex miasma in which the state of the project is only partially visible. Despite software vendors having added multiple-level Undo commands to many popular desktop programs, the state of those programs remains opaque to users. The first general principle of multi-page design is therefore: Don't break the browser's Back button. Users should be able to go forward and back at any time in their session with a site. For example, consider the following flow of pages on a shopping site:

A second general principle is: Have users pick the object first and then the verb. For example, consider the customer service area of an e-commerce site. Assume that Jane Consumer has already identified herself to the server. The merchant can show Jane a list of all the items that she has ever purchased. Jane clicks on an item (picking the object) and gets a page with a list of choices, e.g., "return for refund" or "exchange". Jane clicks on "exchange" (picking the verb) and gets a page with instructions on how to schedule a pickup of the unwanted item and pages offering replacement goods. How original is this principle? It is lifted straight from the Apple Macintosh circa 1984 and is explicated clearly in Macintosh Human Interface Guidelines (Apple Computer, Inc.; Addison-Wesley, 1993; full text available online at http://developer.apple.com/documentation/mac/HIGuidelines/HIGuidelines-2.html).
In a Macintosh word processor, for example, you select one word from the document with a double-click (object). Then from the pull-down menus you select an action to apply to this word, e.g., "put it into italics" (verb). Originality is valorized in contemporary creative culture, but it was not a value for medieval authors and it does not help users. The Macintosh was enormously popular to begin with, and its user interface was copied by the developers of Microsoft Windows, which spread the object-then-verb idea to tens of millions of people. Web publishers can be sure that the vast majority of their users will be intimately familiar with the "pick the object then the verb" style of interface. Sticking with a familiar user interface cuts down on user time and confusion at a site.

These principles are especially easy to apply to user administration pages, for example. The administrator looks at a list of users and clicks on one to select it. The server produces a new page with a list of possible actions to apply to that user. Ideally this drawing should be scanned and made available in your online documentation. Figure 5.2 is an example of the kind of drawing we're looking for.

Be careful with forms that use METHOD=POST: a heavy reliance on POST will result in a site that breaks the browser Back button. An attempt to go back to a page that was the result of a POST will generally bring up a "Page Expired" error message and possibly a dialog box asking whether the user wishes to resubmit information by using the "Refresh" button. Some of our students asked for further guidance on how to choose between GET and POST, and here's the response from Ben Adida, part of the course's teaching staff in fall 2003:

Questions: Can someone sniffing packets learn your user's password? Gain access to the site under your user's credentials? What happens to a user who forgets his or her password?

Questions: How can the administrator control who is permitted to register and use the site?
What email notification options does the site administrator have that relate to user registration? Many Web applications contain content that can be viewed only by members of a specific user group. With your data model, how many table rows will the RDBMS have to examine to answer the question "Is User #541 a member of Group #90"? If the answer is "every row in a big table", i.e., a sequential scan, what kind of index could you add to speed up the query?
Alterations in microRNA expression patterns in liver diseases. Summary of "Alterations in microRNA expression patterns in liver diseases." In the past few years there has been growing interest in a class of short RNAs called microRNAs, which are involved in the regulation of gene expression, mainly in a negative way. There are about 1,000 known microRNAs today. It has been demonstrated that microRNA expression levels may change between the normal and the diseased state; thus microRNAs could be employed as a reliable tool in the diagnosis of diseases. A liver-characteristic microRNA (miR-122), needed for functioning hepatocytes, has been identified; it usually shows a decreased expression level upon liver injury. miR-122 has been suggested as a biomarker, since it was downregulated in liver tissue upon acetaminophen-induced toxicity while an elevated miR-122 level was detected in the plasma. Moreover, the plasma miR-122 level was found to be more sensitive than conventional assays based on the release of liver enzymes. Also, miR-122 expression tends to decrease as carcinogenesis progresses. In addition, miR-122 enhances the replication of hepatitis C virus, and its level seems to influence the efficiency of interferon therapy. Nowadays, many microRNAs are known whose distinctive alterations in their specific expression patterns seem to characterize individual pathological processes. In this article, the major alterations in microRNA expression patterns in liver diseases such as drug- and alcohol-induced liver diseases, non-alcoholic fatty liver disease, fibrosis, viral infections (hepatitis), cirrhosis and hepatocellular carcinoma are summarized. Orv. Hetil, 45, 1843-1853. Semmelweis Egyetem, Általános Orvostudományi Kar II. Patológiai Intézet Budapest Üllői út 93. 1091. This article was published in the following journal.
Name: Orvosi hetilap. PubMed: http://www.ncbi.nlm.nih.gov/pubmed/20980222. DOI: http://dx.doi.org/10.1556/OH.2010.28985
<urn:uuid:97537601-944d-4a30-946b-8df3ca869a93>
CC-MAIN-2013-20
http://www.bioportfolio.com/resources/pmarticle/106753/Alterations-In-Microrna-Expression-Patterns-In-Liver-Diseases.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.921646
1,134
2.796875
3
Attrition mills can operate in either batch or continuous mode and suit harder-to-grind material such as metal powders, metal carbides and glass frits. Their shaft speed runs from 75–500 rpm and media generally range in size from 5–13 mm. Feed material can be as coarse as 1/2 in., while end product size can be as fine as 2–3 microns if the mill operates in a batch mode. Dry grind mills also are used to make dispersion strengthened metal (DSM). In this process (known as mechanical alloying or cold welding) the grinding media break the metals and additives into small particles first, and then beat them together to form agglomerates. Repeating the process evenly mixes and disperses the various metals to form the DSM. Pigment makers also use these mills to develop color in pigments. High-speed attrition mills rely on small (2–3-mm) media and operate at a much higher speed, generally from 400–2,000 rpm. Proprietary design features such as shaft/arm configuration and side discharge screens allow these mills to continuously produce fine powders, which are discharged by centrifugal force. However, the small media size used limits feed materials to 40 mesh and finer. The end products from these continuous mills generally are in the 2–5 micron range. Dry grind mills can be used in conjunction with air classifiers or screeners to form a closed grinding process loop (Figure 5). By continuously classifying out fines and returning oversize material to the mill, such systems can very efficiently provide sharp particle-size-distribution grinds. As a rule of thumb, dry grinding generally will achieve particle sizes of 3–5 microns. To mill to sizes below that range requires wet milling. Today, the trend clearly is to produce nanoparticles. Wet grind processing can be done in batch, continuous or circulation modes. 
In recent years, many paint and mill manufacturers have focused much of their attention on a "new" type of "high circulation rate grinding" to achieve superior dispersions. In actuality, this type of grinding has been used for many years. These units combine a grinding mill with a large holding tank equipped with both a high-speed disperser and a low-speed sweep blade. The entire contents of the holding tank pass through the milling chamber at least once every 7.5 minutes, or about 8 times per hour. This high circulation rate results in a uniform dispersion, narrow particle-size distribution and faster grinding. There are two types of high circulation mills: one uses 3–10-mm media to process material down to sizes of a few microns; the other uses 0.1–2-mm media to achieve sub-micron and nano-size products. Choice of grinding media depends upon several factors, some of which are interrelated.
• Specific gravity. In general, high-density media give better results. The media should be denser than the material to be ground. When grinding some slurries, higher-density media may be required to prevent floating.
• Initial feed size. Smaller media can't easily break up large particles.
• Final particle size. Smaller media are more efficient when ultrafine particles are desired.
• Hardness. The harder the media, the better the grinding efficiency and, consequently, the longer the life.
• pH. Some strongly acidic or basic materials may react with certain metallic media.
• Discoloration. Certain applications require, for instance, white material to remain white.
• Contamination. Material resulting from media wear shouldn't affect the product, or should be removable by a magnetic separator, chemically or in a sintering process.
• Cost. Media that cost two to three times more may wear better, sometimes lasting five to six times longer, and therefore may justify their extra cost in the long run.
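The circulation-rate arithmetic above (one full tank pass through the milling chamber every 7.5 minutes, about 8 passes per hour) can be sketched directly; the tank volume and pump flow figures below are hypothetical, chosen only to reproduce that rate:

```python
def turnover_minutes(tank_volume_l, pump_flow_l_per_min):
    """Minutes for one full tank volume to pass through the milling chamber."""
    return tank_volume_l / pump_flow_l_per_min

def turnovers_per_hour(minutes_per_pass):
    """Full tank passes through the mill per hour."""
    return 60.0 / minutes_per_pass

# The article's figure: one full pass every 7.5 minutes -> about 8 passes per hour.
print(turnovers_per_hour(7.5))     # 8.0
# A hypothetical system sized to hit that rate: 600 L tank, 80 L/min pump.
print(turnover_minutes(600, 80))   # 7.5
```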
<urn:uuid:01a7fd1e-5960-4c2d-ab78-367e6f9f92cf>
CC-MAIN-2013-20
http://www.chemicalprocessing.com/articles/2010/172/?start=2
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.916522
812
2.8125
3
CHP.EXE is a very simple program utilising the Win32 CreateProcess API to silently launch GUI and console apps in a hidden window. CHP is free open source software.
CHP yourapp arg1 arg2 arg3 ...
Prefix your original command line with CHP.
CHP notepad <-- runs notepad.exe in a hidden window
CHP notepad /p "New Text Document.txt" <-- silently prints a text file
CHP cmd.exe /c ""d:\my batch file.cmd" arg1 "arg two"" <-- runs a batch file in a hidden window
If CHP succeeds, its exit status is the process ID (PID) of the newly created process. If CHP fails to create the specified process, its exit status is the Win32 error code that caused the failure, multiplied by -1. Use the "NET HELPMSG" command to obtain the meaning of the error code. CHP also writes its exit status to stdout. However, because CHP is a windowless GUI application, this output will not be visible unless it is piped into a program that writes its own stdin to stdout (the MORE command is ideal). For example, in a cmd.exe shell:
CHP notepad | more
This package includes a pre-compiled binary, but if you want to compile CHP yourself, I recommend either of the following free IDEs:
Note: The source should be compiled as a GUI (not a console) application.
Note from "Johan" -- If using Visual Studio to compile for Win64, change "main" to "WinMain" in "main.c" and compile with the /link /SUBSYSTEM:WINDOWS option.
| chp-0.1.1.13.zip | chp-0.1.1.13.MD5SUM.txt | 0.1.1.13 | 2007-10-28 | Win32 | includes pre-compiled binary and source code |
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
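The documented exit-status convention (a positive value is the PID of the new process; a negative value is a Win32 error code multiplied by -1) can be decoded mechanically. A minimal sketch of that decoding (the helper name is mine, not part of CHP):

```python
def decode_chp_status(status):
    """Interpret CHP's documented exit-status convention:
    positive -> the PID of the newly created process;
    negative -> a Win32 error code, multiplied by -1 by CHP."""
    if status > 0:
        return ("ok", status)       # PID of the created process
    return ("error", -status)       # Win32 error code (look it up with NET HELPMSG)

print(decode_chp_status(4312))  # ('ok', 4312)
print(decode_chp_status(-2))    # ('error', 2) -- Win32 error 2 is ERROR_FILE_NOT_FOUND
```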
<urn:uuid:81a16def-3207-42e9-910c-07898c4460e3>
CC-MAIN-2013-20
http://www.commandline.co.uk/chp/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.830974
478
3
3
Name _____________________________ Date ___________________
Algebra Review (Answer ID # 0222711)
Complete.
1. Fifty-one more than 9 times a number is 114. What is the number?
2. Seven times a number is 31 8/9. What is the number?
3. 508 exceeds six times a number by 70. What is the number?
4. Sixty-three more than four-fifths of a number equals 111. What is the number?
Answer Key 0222711 Sample
This is only a sample pre-made worksheet. If you were a subscriber, the answers would appear on the second page that is printed out. Sign up now for the grade 9-12 materials!
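Each of the word problems above translates to a linear equation of the form a*x + b = c. A quick sketch using exact fractions (the answers here are computed independently; the worksheet's official answer key is subscriber-only):

```python
from fractions import Fraction as F

def solve_linear(a, b, c):
    """Solve a*x + b = c exactly for x."""
    return (F(c) - F(b)) / F(a)

print(solve_linear(9, 51, 114))        # 7     (problem 1: 9x + 51 = 114)
print(F(31 * 9 + 8, 9) / 7)            # 41/9  (problem 2: 7x = 31 8/9, so x = 4 5/9)
print(solve_linear(6, 70, 508))        # 73    (problem 3: 6x + 70 = 508)
print(solve_linear(F(4, 5), 63, 111))  # 60    (problem 4: (4/5)x + 63 = 111)
```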
<urn:uuid:34e37554-83f8-48cc-b36f-cd26e4acbecd>
CC-MAIN-2013-20
http://www.edhelper.com/AlgebraWorksheet0.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.911763
146
3.125
3
Constitutionally mandated process for electing the U.S. president and vice president. Each state appoints as many electors as it has senators and representatives in Congress (U.S. senators, representatives, and government officers are ineligible); the District of Columbia has three votes. A winner-take-all rule operates in every state except Maine and Nebraska. Three presidents have been elected by means of an electoral college victory while losing the national popular vote (Rutherford B. Hayes in 1876, Benjamin Harrison in 1888, and George W. Bush in 2000). Though pledged to vote for their state's winners, electors are not constitutionally obliged to do so. A candidate must win 270 of the 538 votes to win the election. This entry comes from Encyclopædia Britannica Concise. For the full entry on electoral college, visit Britannica.com.
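The winner-take-all rule and the 270-vote majority threshold described above can be sketched as a simple tally. The states, vote counts and candidates below are invented for illustration, and the Maine/Nebraska district-level split is not modeled:

```python
def tally(state_winners, electoral_votes):
    """Winner-take-all: each state's entire slate goes to its popular-vote winner."""
    totals = {}
    for state, winner in state_winners.items():
        totals[winner] = totals.get(winner, 0) + electoral_votes[state]
    return totals

MAJORITY = 538 // 2 + 1   # 270 of the 538 electoral votes

# Invented three-state example with hypothetical candidates X and Y:
electoral_votes = {"A": 29, "B": 38, "C": 55}
state_winners = {"A": "X", "B": "Y", "C": "X"}
totals = tally(state_winners, electoral_votes)
print(totals, MAJORITY)                  # {'X': 84, 'Y': 38} 270
print(max(totals.values()) >= MAJORITY)  # False -- no candidate reaches 270 here
```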
<urn:uuid:68d3101d-f4e2-4006-b4be-846c2d4c1717>
CC-MAIN-2013-20
http://www.merriam-webster.com/concise/electoral%20college
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945148
191
3.734375
4
The Raven paradox, also known as Hempel's paradox or Hempel's ravens, is a paradox proposed by the German logician Carl Gustav Hempel in the 1940s to illustrate a problem where inductive logic violates intuition. It reveals the problem of induction. Hempel describes the paradox in terms of the statement that all ravens are black. This statement is equivalent, in logical terms, to the statement that all non-black things are non-ravens. If one were to observe many ravens and find that they were all black, one's belief in the statement that all ravens are black would increase. But if one were to observe many red apples, and find that they were all non-ravens, one would still not be any more sure that all ravens are black.
The principle of induction
The principle of induction states that:
- If an instance X is observed that is consistent with theory T, then the probability that T is true increases
In the Raven paradox, the 'law' being tested is that all ravens are black. This problem has been summarized (derived from a poem by Gelett Burgess) as:
- I never saw a purple cow
- But if I were to see one
- Would the probability ravens are black
- Have a better chance to be one?
The origin of the paradox lies in the fact that the statements "all ravens are black" and "all non-black things are non-ravens" are indeed equivalent, while the act of finding a black raven is not at all equivalent to finding a non-black non-raven. Confusion is common when these two notions are thought to be identical. Philosophers have offered many solutions to this violation of intuition.
For instance, the American logician Nelson Goodman suggested adding restrictions to our reasoning, such as never considering an instance as support for "All P are Q" if it would also support "No P are Q". Other philosophers have questioned the "principle of equivalence" between the two statements. Perhaps the red apple should increase our belief in the theory that all non-black things are non-ravens, without increasing our belief that all ravens are black. But in classical logic one cannot have a different degree of belief in two equivalent statements, if one knows that they are either both true or both false. Goodman, and later another philosopher, Quine, used the term projectible predicate to describe those expressions, such as raven and black, which do allow inductive generalization; non-projectible predicates are by contrast those such as non-black and non-raven which apparently do not. Quine suggests that it is an empirical question which, if any, predicates are projectible; and notes that in an infinite domain of objects the complement of a projectible predicate ought always be non-projectible. This would have the consequence that, although "All ravens are black" and "All non-black things are non-ravens" must be equally supported, they both derive all their support from black ravens and not from non-black non-ravens.
Using Bayes' theorem
Let X represent an instance of theory T, and let I represent all of our background information. Let P(T|X,I) represent the probability of T being true, given that X and I are known to be true. Then, by Bayes' theorem,
P(T|X,I) = P(T|I) × P(X|T,I) / P(X|I)
where P(T|I) represents the probability of T being true given that I alone is known to be true; P(X|T,I) represents the probability of X being true given that T and I are both known to be true; and P(X|I) represents the probability of X being true given that I alone is known to be true. Using this principle, the paradox does not arise. If one selects an apple at random, then the probability of seeing a red apple is independent of the color of ravens.
The numerator will equal the denominator, the ratio will equal one, and the probability will remain unchanged. Seeing a red apple will not affect one's belief about whether all ravens are black. If one selects a non-black-thing at random, and it is a red apple, then the numerator will exceed the denominator by an extremely small amount. Therefore seeing the red apple will only slightly increase one's belief that all ravens are black. In this scenario, observing a red apple really does increase the probability that all ravens are black. If one could see all the non-black things in the universe and observed that there were no ravens, one could indeed conclude that all ravens are black. In fact, as one observed a higher and higher proportion of non-black things (finding none to be ravens), the probability that all ravens are black would increase towards unity. The example only seems paradoxical because the set of non-black-things is far, far larger than the set of ravens. Thus observing one more non-black-thing which is not a raven can only make a very small difference to our degree of belief in the theory compared to the difference made by observing one more raven which is black. - Franceschi, P. The Doomsday Argument and Hempel's Problem, English translation of a paper initially published in French in the Canadian Journal of Philosophy 29, 139-156, 1999, under the title Comment l'Urne de Carter et Leslie se Déverse dans celle de Hempel - Hempel, C. G. A Purely Syntactical Definition of Confirmation. J. Symb. Logic 8, 122-143, 1943. - Hempel, C. G. Studies in Logic and Confirmation. Mind 54, 1-26, 1945. - Hempel, C. G. Studies in Logic and Confirmation. II. Mind 54, 97-121, 1945. - Hempel, C. G. Studies in the Logic of Confirmation. In Marguerite H. Foster and Michael L. Martin, eds. Probability, Confirmation, and Simplicity. New York: Odyssey Press, 1966. 145-183. - Whiteley, C. H. Hempel's Paradoxes of Confirmation. Mind 55, 156-158, 1945. 
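The asymmetry described above (a random non-black non-raven barely moves the posterior, while a black raven moves it noticeably) can be made numerical with Bayes' theorem. The prior and the population figures below are invented purely for illustration:

```python
def posterior(prior, p_obs_given_T, p_obs_given_notT):
    """Bayes' theorem for a binary hypothesis T (here: all ravens are black)."""
    num = p_obs_given_T * prior
    return num / (num + p_obs_given_notT * (1 - prior))

prior = 0.5
# Draw a random non-black object and find it is a non-raven. Under T this is
# certain; under not-T (say one non-black raven among a million non-black
# things) it is nearly certain, so the evidence is almost uninformative:
p1 = posterior(prior, 1.0, 1 - 1 / 1_000_000)
# Draw a random raven and find it is black. Under not-T, suppose 1 raven in
# 100 is non-black, so this observation carries real weight:
p2 = posterior(prior, 1.0, 1 - 1 / 100)
print(p1)   # ~0.50000025 (barely moves)
print(p2)   # ~0.5025     (moves noticeably more)
```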
|This page uses Creative Commons Licensed content from Wikipedia (view authors).|
<urn:uuid:e54c8f7c-16d5-45f5-b230-69ae073c3d3c>
CC-MAIN-2013-20
http://psychology.wikia.com/wiki/Raven_paradox
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928449
1,371
3.828125
4
A mode of injection describes the timing and sequence of injecting fuel. Simultaneous injection means every injector opens at the same time. Fuel sprays into each intake port, where it stays until the inlet valve opens. During each engine cycle, the injectors open twice, and each time they deliver half the fuel needs of each cylinder. This happens regardless of the position of the intake valve. The injectors are triggered by the ignition system, so, for a 6-cylinder engine, the control unit triggers the injectors on every third ignition pulse. Sequential injection means injection for each cylinder occurs once per engine cycle. It is timed to each individual cylinder in the firing order. Fuel spray stays in the intake port until the inlet valve opens. Grouped injection divides the injectors into 2 groups. A 6-cylinder engine can have injectors 1, 2 and 3 in group 1, and injectors 4, 5 and 6 in group 2. The control unit operates the groups in turn, to spray fuel once per engine cycle. Group 1 injects; then, 360° (one crankshaft rotation) later, so does group 2. This happens regardless of the position of the intake valve. Just one injection provides the full quantity of fuel for each cylinder during that engine cycle. In some applications, different modes of injection are combined, so that the mode changes according to the operating conditions. Sequential mode may be used for low engine speeds, changing to simultaneous mode at high speeds. The same principle applies in changing from light loads to heavy loads. Similarly, the mode may change from group injection to simultaneous. Using different modes for different operating conditions makes the best use of the fuel, which improves power output, fuel economy and emission control.
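As a rough illustration of how the three modes differ in timing, here is a hypothetical scheduling sketch for a 6-cylinder engine. The group assignment (1-2-3 and 4-5-6) and the every-third-pulse simultaneous trigger follow the text; the firing order itself is an illustrative assumption:

```python
FIRING_ORDER = [1, 5, 3, 6, 2, 4]   # an illustrative 6-cylinder firing order

def injectors_fired(mode, pulse):
    """Which injectors open on a given ignition pulse (0-based) of a
    6-cylinder, 4-stroke engine (6 ignition pulses per engine cycle)."""
    if mode == "simultaneous":
        # All open together on every third ignition pulse: twice per engine
        # cycle, delivering half of each cylinder's fuel needs each time.
        return list(range(1, 7)) if pulse % 3 == 0 else []
    if mode == "sequential":
        # One injector per pulse, following the firing order; the full
        # fuel charge once per engine cycle.
        return [FIRING_ORDER[pulse % 6]]
    if mode == "grouped":
        # Two groups of three, 360 degrees (3 pulses) apart, each once per cycle.
        if pulse % 6 == 0:
            return [1, 2, 3]
        if pulse % 6 == 3:
            return [4, 5, 6]
        return []
    raise ValueError(mode)

print(injectors_fired("simultaneous", 0))  # [1, 2, 3, 4, 5, 6]
print(injectors_fired("sequential", 1))    # [5]
print(injectors_fired("grouped", 3))       # [4, 5, 6]
```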
<urn:uuid:edd62269-07f1-4450-85be-620825d672ca>
CC-MAIN-2013-20
http://www.cdxetextbook.com/fuelSys/efi/op/efimodes.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00022-ip-10-60-113-184.ec2.internal.warc.gz
en
0.916909
371
3.15625
3
Signs and Symbols
- Mezuzah, tzitzit and tefillin are reminders of the commandments
- The menorah (candelabrum) is the ancient universal symbol of Judaism
- The Jewish star (Magen David) is a modern universal Jewish symbol
- Jews wear a skullcap (yarmulke) as a pious custom
- Chai, found on jewelry, is the number 18, which is a favorable number
- The Hamesh Hand is common in Jewish jewelry, but its connection to Judaism is questionable
And you shall write [the words that I command you today] on the doorposts of your house and on your gates. -Deuteronomy 6:9, 11:19
On the doorposts of traditional Jewish homes (and many not-so-traditional homes!), you will find a small case like the one pictured at left. This case is commonly known as a mezuzah (Heb.: doorpost), because it is placed upon the doorposts of the house. The mezuzah is not, as some suppose, a good-luck charm, nor does it have any connection with the lamb's blood placed on the doorposts in Egypt. Rather, it is a constant reminder of G-d's presence and G-d's mitzvot. The mitzvah to place mezuzot on the doorposts of our houses is derived from Deut. 6:4-9, a passage commonly known as the Shema (Heb: Hear, from the first word of the passage). In that passage, G-d commands us to keep His words constantly in our minds and in our hearts by (among other things) writing them on the doorposts of our house. The words of the Shema are written on a tiny scroll of parchment, along with the words of a companion passage, Deut. 11:13-21. On the back of the scroll, a name of G-d is written. The scroll is then rolled up and placed in the case, so that the first letter of the Name (the letter Shin) is visible (or, more commonly, the letter Shin is written on the outside of the case). The scroll must be handwritten in a special style of writing and must be placed in the case to fulfill the mitzvah.
It is commonplace for gift shops to sell cases without scrolls, or with mechanically printed scrolls, because a proper scroll costs more than even an elaborately decorated case ($30-$50 for a valid scroll is quite reasonable). According to traditional authorities, mechanically printed scrolls do not fulfill the mitzvah of the mezuzah, nor does an empty case. The case and scroll are then nailed or affixed at an angle to the right side doorpost as you enter the building or room, with a small ceremony called Chanukkat Ha-Bayit (dedication of the house - yes, this is the same word as Chanukkah, the holiday celebrating the rededication of the Temple). A brief blessing is recited. See the text of the blessing at Affixing
Why is the mezuzah affixed at an angle? The rabbis could not decide whether it should be placed horizontally or vertically, so it is placed at a slant as a compromise.
Every time you pass through a door with a mezuzah on it, you touch the mezuzah and then kiss the fingers that touched it, expressing love and respect for G-d and his mitzvot and reminding yourself of the mitzvot contained within them. It is proper to remove a mezuzah when you move, and in fact, it is usually recommended. If you leave it in place, the subsequent owner may treat it with disrespect, and this is a grave sin. I have seen many mezuzot in apartment complexes that have been painted over because a subsequent owner failed to remove it while the building was painted, and it breaks my heart every time I see that sort of disrespect to an object of religious significance. For more information about mezuzot or to purchase valid scrolls for a mezuzah online, visit the S.T.A.M. website.
Tzitzit and Tallit
They shall make themselves tzitzit on the corners of their garments throughout their generations, and they shall place on the tzitzit of each corner a thread of techeilet.
And it shall be tzitzit for you, and you will see it, and you will remember all the mitzvot of the L-RD and do them and not follow your heart or your eyes and run after them. -Numbers 15:38-40
The Torah commands us to wear tzitzit (fringes) at the corners of our garments as a reminder of the mitzvot, kind of like the old technique of tying a string around your finger to remember something. The passage also instructs that the fringe should have a thread of "techeilet," believed to be a blue or turquoise dye, but the source of that dye is no longer known, so tzitzit today are all white. There is a complex procedure for tying the knots of the tzitzit, filled with religious and numerological significance. The mitzvah to wear tzitzit applies only to four-cornered garments, which were common in biblical times but are not common anymore. To fulfill this mitzvah, adult men wear a four-cornered shawl called a tallit (pictured above) during morning services, along with the tefillin. In some congregations, only married men wear a tallit; in others, both married and unmarried men wear one. In Reconstructionist synagogues, both men and women may wear a tallit, but men are somewhat more likely than women to do so. A blessing is recited when you put on the tallit. See the text of the blessing at Tallit and
Strictly observant Jewish men commonly wear a special four-cornered garment, similar to a poncho, called a tallit katan ("little tallit"), so that they will have the opportunity to fulfill this important mitzvah all day long. The tallit katan is worn under the shirt, with the tzitzit hanging out so they can be seen. If you've ever seen a Jewish man with strings hanging out of his clothing, this is probably what you were seeing. There is no particular religious significance to the tallit (shawl) itself, other than the fact that it holds the tzitzit (fringes) on its corners. There are also very few religious requirements with regard to the design of the tallit.
The tallit must be long enough to be worn over the shoulders (as a shawl), not just around the neck (as a scarf), to fulfill the requirement that the tzitzit be on a "garment." Likewise, it should be draped over the shoulders like a shawl, not worn around the neck like a scarf, though that is commonly done (see illustration at right). A longer tallit is commonly folded over the shoulders, to prevent the tzitzit from dragging on the ground. The tallit may be made of any material, but must not be made of a combination of wool and linen, because that combination is forbidden on any clothing. (Lev. 19:19; Deut. 22:11). Most tallitot are white with navy or black stripes along the shorter ends, possibly in memory of the thread of techeilet. They also commonly have an artistic motif of some kind along the top long end (the outside of the part that goes against your neck). This motif is referred to as an atarah (crown). There is no particular religious significance to the atarah; it simply tells you which end is up! It is quite common, however, to write the words of the blessing for putting on the tallit on the atarah, so you can read the blessing while you are putting the tallit on. If a blessing is written on your tallit, you should be careful not to bring the tallit into the bathroom with you! Sacred writings should not be brought into the bathroom. For this reason, many synagogues have a tallit rack outside of the bathroom. Conversely, if you see a room in a synagogue with a sign that tells you to remove your tallit before entering, you can safely assume that the room is a bathroom! Bind [the words that I command you today] as a sign on your arm, and they shall be ornaments between your eyes. -Deuteronomy 6:8 The Shema also commands us to bind the words to our hands and between our eyes. We do this by "laying tefillin," that is, by binding to our arms and foreheads leather pouches containing scrolls of Torah passages. 
The word "tefillin" is usually translated "phylacteries," although I don't much care for that term. "Phylacteries" isn't very enlightening if you don't already know what tefillin are, and the word "phylacteries" means "amulet," suggesting that tefillin are some kind of protective charm, which they are not. The word "tefillin," on the other hand, is etymologically related to the word "tefilah" (prayer) and the root Pe-Lamed-Lamed (judgment). Like the mezuzah, tefillin are meant to remind us of G-d's mitzvot. We bind them to our head and our arm, committing both our intellect and our physical strength to the fulfillment of the mitzvot. At weekday morning services, one case is tied to the arm, with the scrolls at the biceps and leather straps extending down the arm to the hand, then another case is tied to the head, with the case on the forehead and the straps hanging down over the shoulders. Appropriate blessings are recited during this process. The tefillin are removed at the conclusion of the morning services. See a general outline of this process and its blessings at Tallit and
Jewish acupuncturist Steven Schram examined the positioning of the tefillin and the procedure for laying them, and concluded that the laying of tefillin was "a unique way of stimulating a very precise set of acupuncture points that appears designed to clear the mind and harmonise the spirit." Click here to see his article from the Journal of Chinese Medicine. Like the scrolls in a mezuzah, the scrolls in tefillin must be hand-written in a special style of writing. A good, valid set of tefillin can cost a few hundred dollars, but if properly cared for they can last for a lifetime. For more information about tefillin or to purchase valid tefillin online, visit the S.T.A.M. website.
One of the oldest symbols of the Jewish faith is the menorah, a seven-branched candelabrum used in the Temple.
The kohanim lit the menorah in the Sanctuary every evening and cleaned it out every morning, replacing the wicks and putting fresh olive oil into the cups. The illustration at left is based on instructions for construction of the menorah found in Ex. 25:31-40. It has been said that the menorah is a symbol of the nation of Israel and our mission to be "a light unto the nations." (Isaiah 42:6). The sages emphasize that light is not a violent force; Israel is to accomplish its mission by setting an example, not by using force. This idea is highlighted in the vision in Zechariah 4:1-6. Zechariah sees a menorah, and G-d explains: "Not by might, nor by power, but by My spirit." The lamp stand in today's synagogues, called the ner tamid (lit. the continual light, usually translated as the eternal flame), symbolizes the menorah. Many synagogues also have an ornamental menorah, usually with some critical detail changed (for example, with only 6 candles) to avoid the sin of reproducing objects of the Temple. The nine-branched menorah used on Chanukkah is commonly patterned after this menorah, because Chanukkah commemorates the miracle that a day's worth of oil for this menorah lasted eight days.
Cover your head so that the fear of heaven may be upon you. -Talmud Shabbat 156b
R. Huna son of R. Joshua would not walk four cubits bareheaded, saying: The Shechinah [Divine Presence] is above my head. -Talmud
R. Huna son of R. Joshua said: May I be rewarded for never walking four cubits bareheaded. -Talmud Shabbat 118b
The most commonly known and recognized piece of Jewish garb is actually the one with the least religious significance. The word yarmulke (usually, but not really correctly, pronounced yammica) is Yiddish. According to Leo Rosten's The Joys of Yiddish, it comes from a Tartar word meaning skullcap. According to some Chasidic rabbis I know, it comes from the Aramaic words "yerai malka" (fear of or respect for The King).
The Hebrew word for this head covering is kippah (pronounced key-pah). It is an ancient practice for Jews to cover their heads during prayer. This probably derives from the fact that in Eastern cultures, it is a sign of respect to cover the head (the custom in Western cultures is the opposite: it is a sign of respect to remove one's hat). Thus, by covering the head during prayer, one showed respect for G-d. In addition, in ancient Rome, servants were required to cover their heads while free men did not; thus, Jews covered their heads to show that they were servants of G-d. In medieval times, Jews covered their heads as a reminder that G-d is always above them.
Whatever the reason given, however, covering the head has always been regarded as a custom rather than a commandment. Although it is a common pious practice to cover the head at all times, it is not religiously mandatory. For example, it is widely accepted that one may refrain from wearing a head covering at work if your employer requires it (for reasons of safety, uniformity, or to reduce distractions). You can take off your yarmulke for a job interview if you think it will hurt your chances of getting the job. There is an amusing article about this dilemma, The Kippah Debate, at
There is no special significance to the yarmulke as a specific type of head covering. Its light weight, compactness and discreteness make it a convenient choice of head gear. I am unaware of any connection between the yarmulke and the similar skullcap worn by the Pope.
The Magen David (Shield of David, or as it is more commonly known, the Star of David) is the symbol most commonly associated with Judaism today, but it is actually a relatively new Jewish symbol. It is supposed to represent the shape of King David's shield (or perhaps the emblem on it), but there is really no support for that claim in any early rabbinic literature.
The symbol is not mentioned in rabbinic literature until the Middle Ages, and is so rare in early Jewish literature and artwork that art dealers suspect forgery if they find the symbol in early works.
Scholars such as Franz Rosenzweig have attributed deep theological significance to the symbol. For example, some note that the top triangle strives upward, toward G-d, while the lower triangle strives downward, toward the real world. Some note that the intertwining makes the triangles inseparable, like the Jewish people. Some say that the three sides represent the three types of Jews: Kohanim, Levites and Israel. Some note that there are actually 12 sides (3 exterior and 3 interior on each triangle), representing the 12 tribes. While these theories are theologically interesting, they have little basis in historical fact. The symbol of intertwined equilateral triangles is a common one in the Middle East and North Africa, and is thought to bring good luck. It appears occasionally in early Jewish artwork, but never as an exclusively Jewish symbol. The nearest thing to an "official" Jewish symbol at the time was the menorah.
In the Middle Ages, Jews often were required to wear badges to identify themselves as Jews, much as they were in Nazi Germany, but these Jewish badges were not always the familiar Magen David. For example, a fifteenth century painting by Nuno Goncalves features a rabbi wearing a six-pointed badge that looks more or less like an asterisk. In the 17th century, it became a popular practice to put Magen Davids on the outside of synagogues, to identify them as Jewish houses of worship in much the same way that a cross identified a Christian house of worship; however, I have never seen any explanation of why this symbol was chosen, rather than some other symbol. The Magen David gained popularity as a symbol of Judaism when it was adopted as the emblem of the Zionist movement in 1897, but the symbol continued to be controversial for many years afterward.
When the modern state of Israel was founded, there was much debate over whether this symbol should be used on the flag. Today, the Magen David is the universally recognized symbol of Jewry. It appears on the flag of the state of Israel, and the Israeli equivalent of the Red Cross is known as the Red Magen David.

The Chai symbol, commonly seen on necklaces and other jewelry and ornaments, is simply the Hebrew word Chai (living), with the two Hebrew letters Cheit and Yod attached to each other. Some say it refers to the Living G-d; others say it simply reflects Judaism's focus on the importance of life. Whatever the reason, the concept of chai is important in Jewish culture. The typical Jewish toast is l'chayim (to life). Gifts to charity are routinely given in multiples of 18 (the numeric value of the word Chai).

The hamesh hand or hamsa hand is a popular motif in Jewish jewelry. Go into any Judaic gift shop and you will find necklaces and bracelets bearing this inverted hand with thumb and pinky pointing outward. The design commonly has an eye in the center of the hand or various Hebrew letters in the middle. There is nothing exclusively Jewish about the hamesh hand. Arab cultures often refer to it as the Hand of Fatima, which represents the Hand of G-d. Similar designs are common in many cultures. Why has it become such a popular symbol among Jews? I haven't been able to find an adequate explanation anywhere. My best guess: in many cultures, this hand pattern represents a protection against the evil eye (a malignant spiritual influence caused by the jealousy of others), and the evil eye has historically been a popular superstition among Jews.

© Copyright 5756-5771 (1995-2011), Tracey R Rich
Common Name: Orange
Plant Parts Used: Fruits
Description of Citrus aurantium: A medium-sized thorny tree or shrub with greenish-white, glabrous young shoots and grayish-brown bark; leaves foliolate, leaf-stalks broadly winged, the wing nearly as large as the blade; leaflet elliptic or ovate, acute or acuminate, obtuse; flowers white, large, very fragrant; fruit globose, bright yellow when ripe; rind of fruit very aromatic, pulp sour, bitter or austere; seeds many, yellow or cream-coloured, smooth, slimy.
Characteristics and Constituents: Pro-vitamin A and B1; the peel contains hesperidin, isohesperidin, aurantiamarin, a crystalline acid, and an amorphous resinous body; the bitter principle is mostly in the spongy portion.
Actions and Uses: The fruits are sour, bitter, astringent, thermogenic, laxative, appetizer, stomachic, digestive, anthelmintic and antiscorbutic, and are useful in vitiated conditions of pitta and kapha, cough, bronchitis, dyspepsia, nausea, flatulence, colic, helminthiasis, scabies and anaemia.
The Debt Assumption Issue

This question of whether or not the federal government should take over (assume) each individual state's war debts was complex but crucial. The very intricacy of debt funding and assumption became an issue in itself, with opponents claiming that Alexander Hamilton's policies were imitations of England's overcomplicated fiscal system. However, Hamilton's proposal regarding the funding of the national debt, approved by Congress in 1790, was less expensive as well as less complicated than James Madison's rejected scheme of discriminating between original and subsequent holders of the debt certificates in the case of domestic creditors. This was because Hamilton was content to have the debt funded at 4% rather than at the 6% rate originally contracted. He justified this lowering of the interest rate that the government would be paying to its creditors by arguing that market interest rates for any future government borrowing were bound to fall during the life of the current debt, and in fact very soon, because the government's credit rating would rise and the supply of loan capital available in the United States would increase. In other words, Hamilton boldly but wisely built into his project the assumption that it would succeed.

Hamilton's reason for wanting to lower the cost of funding the national debt in this way was to make it more feasible for the United States to assume responsibility also for the debts that the individual states had incurred during the Revolution. One of Madison's reasons for proposing his more expensive funding scheme was probably that it would have ruled out this federal assumption of the states' debts, which the majority of Virginians opposed. Hamilton was keen for the state debts to become federal debts because he did not want the state governments to compete with the federal government either for creditors' attention and dependence or for sources of revenue required to service government debts.
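The fiscal arithmetic behind the 4% versus 6% choice can be made concrete with a rough calculation. This sketch is not from the text: the $40,000,000 principal is a hypothetical round figure chosen only for illustration, since what matters here is the ratio between the two rates.

```python
# Annual cost of servicing a debt at the originally contracted 6%
# versus Hamilton's proposed 4% funding rate. The principal below
# is an illustrative assumption, not a figure from the text.
principal = 40_000_000

cost_at_6 = principal * 6 // 100   # annual interest at 6%
cost_at_4 = principal * 4 // 100   # annual interest at 4%
savings = cost_at_6 - cost_at_4

# The lower rate cuts a third off the annual service cost -- the
# kind of headroom that made assuming the states' debts feasible.
print(cost_at_6, cost_at_4, savings)
```

Integer arithmetic is used deliberately so the figures are exact rather than subject to floating-point rounding.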
Federalizing the states' debts was also a way of honoring the Revolutionary commitment to treat the war as a responsibility of the whole country, not of the individual states. However, the question was complicated by the different situations of various states: some had spent more than others and were now desperate for the federal government to take over, some had already repaid much, and some simply had less complete records than others. Virginia politicians in particular felt that a thorough and final reckoning of this complex of credits and debits should precede any assumption of the states' debts. Otherwise, states like Virginia, which had already repaid much of its war debt, risked being out of pocket: the federal government, drawing tax revenues from Virginians as well as from residents of other states, would pay out money to some states now that it might never recover if the final reckoning later showed that refunds were owed. When the issue was first discussed by the House of Representatives (in the spring of 1790), the opponents of assumption, among them the Virginian James Madison, narrowly won, removing the assumption proposal from the funding legislation by a vote of 31 to 29. However, that was not the end of the matter. Madison himself began to have doubts about the wisdom of letting this decision stand when he witnessed the extreme reactions of the losing side; his opponents began to predict that if the debts of the states were not assumed, the union would not hold together.
Madison soon began to fear that he had underestimated the desperation of the advocates of assumption, so he was ready for the "compromise of 1790." Although this "compromise" was later condemned by Jefferson and other Republicans as a bad bargain because of its political effect of entrenching Hamilton's "corrupting" fiscal system, in the financial terms that were used in the debate in 1790 it was actually less a compromise than a victory for Madison and the South. The agreements that constituted the "compromise" reversed the decision not to assume the states' debts, but in return not only gave Virginia and other southern states the promise that the capital city would be located on the Potomac, but also gave Virginia a guarantee that assumption would be managed in such a way that the state would not be out of pocket in the way Madison and other Virginians had feared. Without such management of the numbers, Madison had estimated that Virginia would probably lose to the tune of $2 million.
RACINE, ANTOINE, Roman Catholic priest, bishop, and educator; b. 26 Jan. 1822 in Saint-Ambroise (Loretteville), Lower Canada, son of Michel Racine, a blacksmith, and Louise Pepin; d. 17 July 1893 in Sherbrooke, Que. The son of a poor family, and raised by a mother who was widowed during his childhood, Antoine Racine experienced the privations of a modest life. During the winter of 1833 he began to study Latin at the presbytery of his great-uncle, Antoine Bédard, curé of Saint-Charles-Borromée parish in Charlesbourg. He was admitted to the Petit Séminaire de Québec in 1834 and then studied theology at the Grand Séminaire. Ordained to the priesthood on 12 Sept. 1844, he served as curate of Saint-Étienne in La Malbaie (1844–48), first curé of Saint-Eusèbe in Princeville (1848–51), curé at Saint-Joseph in the Beauce region (1851–53), and priest in charge of Saint-Jean-Baptiste church at Quebec (1853–74). On 1 Sept. 1874 Racine was elected bishop of the new diocese of Sherbrooke, created on 28 August. Consecrated on 18 October by Archbishop Elzéar-Alexandre Taschereau of Quebec, he took charge of his see two days later. The diocese had some 30,000 Catholics, with 29 priests ministering in 29 parishes and five missions. The town of Sherbrooke was becoming a small industrial centre, but much of the hinterland awaited colonization. While setting up diocesan administrative bodies, creating parishes, founding missions, providing for the material support of the parishes and the clergy, and establishing a uniform ecclesiastical discipline, Racine worked from the beginning of his episcopacy on various educational institutions necessary for the development of his young diocese. The first and most important of these, in the bishop’s eyes, was undoubtedly the Séminaire Saint-Charles-Borromée, which opened in September 1875. He himself served as its superior until 1878 and he undertook the task of teaching theology until 1885. 
Bishop Racine also had to attend to the education of the people. Since 1857 girls had had the advantage of being taught by the sisters of the Congregation of Notre-Dame, at Mont-Notre-Dame, but a Catholic and French education for boys posed a problem. Both Catholic and Protestant school trustees served on a single committee, responsible for all educational matters. In the spring of 1876 it was recommended to the bishop that they be put on separate committees, which was the practice at Quebec and Montreal. After the necessary authorization had been obtained from the government, the properties and money were divided in proportion to the number of pupils from each religious group. The eastern and northern wards now had their own Catholic schools. The more heavily populated southern one was not provided with a Catholic school until 1882 and it was at Racine’s prompting that the Brothers of the Sacred Heart agreed to take charge of it. In December 1881 Racine asked the Ursulines at Quebec to send some of their number to his diocese; they had already done so for the diocese of Chicoutimi, where his brother Dominique* was the bishop. He repeated his request in January 1882, asking them to come to Stanstead, near the American border. The Ursulines arrived there in the fall of 1884 and established a convent and a school. On 21 April 1875, at Racine’s request, four nuns from the Hôtel-Dieu in Saint-Hyacinthe moved into a house in Sherbrooke that he put at their disposal. They were to look after the poor, the sick, and the infirm at the Hôpital du Sacré-Cœur. Colonization was also one of Racine’s major concerns. At La Malbaie he had been the leading spirit in the Société des Défricheurs de la Rivière-au-Sable, formed to open up the land in the future township of Jonquière. In 1851, under his guidance, Le Canadien émigrant, a colonization manifesto, had been published and created a stir. 
For him, colonization included the progress of agriculture and the development of new areas as well as the repatriation of French Canadians who had gone to the United States. His circular letter to the clergy of his diocese dated 29 March 1875 described colonization as “a national task” and “a task worthy of [their] holy mission and of all God’s blessings.” In order to ensure its success, he urged his curés to establish a small colonization society in every parish, as envisaged in a provincial law that had just been passed. He ordered them to give him full information about available land and property up for sale. In 1880 he informed his clergy that such societies had been founded in Montreal and at Quebec, and that Sherbrooke had organized its own on 14 April to colonize Woburn Township, which would soon be linked by a good road with the new settlements of La Patrie, Notre-Dame-des-Bois, and Piopolis. Repeating what he had done before, he instituted an annual collection to raise money for colonization. The results, however, fell far short of his expectations. Other major contemporary issues of a much thornier kind held Racine’s attention. In 1874 the conflict between Archbishop Taschereau and the Université Laval on the one hand, and the supporters of an independent Catholic university at Montreal, led by Bishop Ignace Bourget*, on the other, had still not been settled. Racine first took a stand, albeit a moderate one, for Quebec. In 1881 he went to Rome as Taschereau’s representative to defend the interests of the Université Laval. His intention at the time was to obtain a firm decision settling the dispute once and for all, through an apostolic delegate if necessary. 
Circumstances changed with the creation of the ecclesiastical province of Montreal in 1886 and still more with the publication of the papal bull Jamdudum in February 1889, placing the Montreal branch campus under the jurisdiction of the bishops of that province [see Thomas-Edmond d’Odet d’Orsonnens]. In 1891 Racine, now suffragan bishop of Montreal, was in Rome again, this time to uphold the cause of Montreal. The new situation and the interests of his own diocese explain this apparent reversal. The bishops were also divided politically and the division long preceded Racine’s election to the see of Sherbrooke. Bourget and later Bishop Louis-François Laflèche of Trois-Rivières were the leaders of the ultramontane element and opposed the Liberals. Bishop Jean Langevin of the diocese of Rimouski favoured the Conservatives and Archbishop Taschereau leaned towards the Liberals. Racine, while supporting like most of his colleagues the exclusive jurisdiction of bishops over their priests, as early as 1875 ordered his clergy to remain strictly neutral in politics. Three years later he asserted, “The clergy must in their public and private lives, in questions that in no way concern religious principles, faithfully observe the prescriptions of our church councils regarding political elections.” He clearly forbad any intervention. In 1881 he declared, “Any priest who, without episcopal authorization, teaches from the pulpit or elsewhere that it is a sin to vote for a particular candidate or political party, or who announces that he will withhold the sacraments for this reason, will ipso facto be suspended.” On 2 Aug. 1886, just before the provincial election, he reminded his priests of the need to follow “the line of conduct set out by the Holy See” and added, “Never give your opinion from the pulpit” and “Do not attend any political meeting.” Worn out by his duties, Bishop Racine died on 17 July 1893 at his episcopal palace after an illness of only a few days. 
By this time the diocese was well established. It had some 60,000 Catholics, 45 parishes (almost all erected under both canon and civil law), and 17 missions served by 64 of the 80 priests in the diocese. Eight priests were attached to the seminary to teach the 225 pupils studying there. Eleven candidates were preparing for the priesthood, including two deacons and a subdeacon. Bishop Antoine Racine was profoundly influenced by the period in which he lived. He made its concerns his own and, in turn, had a noticeable influence on its events. His career provides a good illustration of late-19th-century clerical nationalism in Quebec. He was one of those men for whom love of country and love of the church are one and the same. Enterprising, determined, and courageous, he gave solid roots to a Catholic diocese and French culture in a region where the English Protestant presence was predominant, while still maintaining excellent relations with this religious and cultural group. His achievements, and, above all, the ideas he promoted in his speeches and writings had a lasting influence on colonization. A man of peace and conciliation, without compromising justice or efficiency, he always sought and often succeeded in obtaining, not without difficulty, harmonious and fruitful solutions.

ANQ-Q, CE1-28, 27 janv. 1822. Arch. de la chancellerie de l'archevêché de Sherbrooke (Sherbrooke, Qué.), Fonds Antoine Racine, VII, B, B1; Insinuations, 1; Reg. des lettres, 1. Arch. du séminaire de Sherbrooke, P47 (Antoine Racine); R1 (évêques de Sherbrooke). Mandements, lettres pastorales, circulaires et autres documents publiés dans le diocèse de Sherbrooke (24v., Sherbrooke, 1874–1967), 1–3. Principaux discours de Mgr Antoine Racine . . . , C.-J. Roy, édit. ([Lévis, Qué.], 1928). Séminaire Saint-Charles-Borromée, Annuaire (Sherbrooke), 1885–86; 1892–93. [É.-J.-A. Auclair], Consécration et intronisation de sa grandeur Mgr Ant. Racine, premier évêque de Sherbrooke . . .
(Sherbrooke, 1874). Jacques Desgrandchamps, Monseigneur Antoine Racine et les religieuses enseignantes, 1874–1893 (Sherbrooke, 1980). Germain Lavallée, "Monseigneur Antoine Racine et la question universitaire canadienne (1875–1892)" (thèse de MA, univ. de Sherbrooke, 1954). [J.-A. Lefebvre], Monseigneur Antoine Racine, premier évêque de Sherbrooke . . . (Sherbrooke, 1894). J.-G. Lavallée, "Monseigneur Antoine Racine, premier évêque de Sherbrooke (1874–1893)," CCHA Sessions d'études, 33 (1966): 31–39.

Citation: Jean-Guy Lavallée, "RACINE, ANTOINE," in Dictionary of Canadian Biography, vol. 12, University of Toronto/Université Laval, 2003–, accessed June 19, 2013, http://www.biographi.ca/en/bio/racine_antoine_12E.html.
Scientific advances have a key role, but the challenge is using them without harming sustainable development, says a report. See also: Post-2015 planning offers compelling messages for scientists.

Science journalists can stimulate public debate in areas where science and technology impact the social and natural worlds. But how can effective science journalism be encouraged in the developing world?

Communicators and journalists complement one another and should work together to promote public engagement with scientific knowledge.

Scientists in developing countries should increase the quality of their research by publishing more good papers, not fewer, says Rafael Loyola.

The Network of S&T Popularisation, known as 'RedPoP', which aims to boost collaboration and science-communicator training among countries, will be led by a Brazilian for the first time. (4 June 2013)

Budgetary constraints and political apathy have resulted in poor science communication in India, says Archita Bhatta. (21 May 2013)

Investigative journalism is rewarding but requires vigilance, determination and preparation. K. S. Jayaraman shares tips from his career. (10 May 2013)
The History of Web Hosting

The internet is one of the greatest forms of technology in existence, with more than fifty million websites interconnected in a web of information. To look back at the history of web hosting, we need to look at the history of the internet as well. In August 1962, J.C.R. Licklider wrote a series of memos describing a "Galactic Network": a set of interconnected computers through which anyone with access could log on and find information. Licklider was at MIT at the time, and he soon brought his ideas to ARPA, the Advanced Research Projects Agency. ARPA had been created around the same time as NASA; both organizations were formed to help the United States catch up with Soviet technology.

Early forms of the internet were based on a concept known as packet switching, in which network data is sent through electrical phone lines as small packages of information. This concept forms the basis for the way we use bandwidth: a connection is occupied only while packets are actually being sent through it, and when no packets are in flight the line has free capacity for other traffic. Paul Baran produced one of the first designs for such a network, work that forms part of the foundation of the modern internet.

In 1965, Lawrence G. Roberts and Thomas Merrill created the first wide-area computer connection, linking two computers over a dial-up telephone line. It worked, but it was slow, and it helped confirm that packet switching of the kind Baran proposed would be needed. In 1969, the first ARPANET node was installed at UCLA and computers were successfully linked over the network. This is where web hosting first began to make its big break.
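The packet-switching idea described above can be sketched in a few lines of Python. This is only an illustration of the concept; the function names are invented here, and real protocols add headers, routing, and retransmission on top of this:

```python
# Sketch of packet switching: a message is cut into small numbered
# packets that can travel independently and be reassembled at the
# destination, even if they arrive out of order.

def to_packets(message, size=8):
    """Split a message into (offset, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message from packets in any order."""
    return b"".join(payload for _, payload in sorted(packets))

packets = to_packets(b"packets share the line only while in flight")
packets.reverse()  # simulate out-of-order arrival
assert reassemble(packets) == b"packets share the line only while in flight"
```

Between packets, the line carries nothing, which is exactly the "free space" in the bandwidth that the article describes.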
After this, over the next two years, nineteen more hosts would be added to the network. In 1991, a very important year for the development of the internet, hosting moved onto the big stage. From this moment on, the internet was only going to get bigger. Since the networking of multiple computers, web hosting has increased its abilities along with the advancements of technology and has dropped in price. With quickly changing technology, smaller and more powerful servers have replaced old ones. These advances have enabled web hosting companies to house thousands of servers in a small amount of space, which makes upkeep and updates simpler and has once again lowered costs. You can now get inexpensive web hosting, and it is much less complicated to set up. With the numerous companies that offer web hosting nowadays, it is up to you to search and choose what will be best for your web site's needs. What's the future of web hosting? Highly managed solutions, cloud hosting, grid hosting, … We'll keep you posted of new stuff on this blog!
An expansion board that enables a computer to manipulate and output sounds. Sound cards are necessary for nearly all CD-ROMs and have become commonplace on modern personal computers. Sound cards enable the computer to output sound through speakers connected to the board, to record sound input from a microphone connected to the computer, and to manipulate sound stored on a disk. Nearly all sound cards support MIDI, a standard for representing music electronically. In addition, most sound cards are Sound Blaster-compatible, which means that they can process commands written for a Sound Blaster card, the de facto standard for PC sound. Sound cards use two basic methods to translate digital data into analog sounds: FM synthesis and wavetable synthesis.
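To make the "digital data" side of that translation concrete, here is a small sketch (not from the original entry; the names are illustrative) that generates the kind of signed 16-bit PCM samples a sound card's digital-to-analog converter turns into a voltage waveform:

```python
import math

def sine_samples(freq_hz=440.0, rate_hz=44100, duration_s=0.01, amplitude=0.5):
    """Generate signed 16-bit PCM samples for a pure tone at freq_hz."""
    n = int(rate_hz * duration_s)
    peak = int(amplitude * 32767)  # stay inside the 16-bit range
    return [int(peak * math.sin(2 * math.pi * freq_hz * t / rate_hz))
            for t in range(n)]

samples = sine_samples()
# Each entry is one sample; the DAC converts the stream into voltage.
assert all(-32768 <= s <= 32767 for s in samples)
```

Playing such samples on real hardware would go through an operating-system audio API; the sketch stops at the raw sample stream itself.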
An agaric is a type of fungal fruiting body characterized by the presence of a pileus (cap) that is clearly differentiated from the stipe (stalk), with lamellae (gills) on the underside of the pileus. "Agaric" can also refer to a basidiomycete species characterized by an agaric-type fruiting body. An archaic usage of the word agaric meant 'tree-fungus' (after Latin agaricum); however, that meaning was superseded by the Linnaean interpretation in 1753 when Linnaeus used the generic name Agaricus for gilled mushrooms. Most species of agarics are classified in the Agaricales, however, this type of fruiting body is thought to have evolved several times independently, hence the Russulales, Boletales, Hymenochaetales, and several other groups of basidiomycetes also contain agaric species. Older systems of classification place all agarics in the Agaricales, and some (mostly older) sources still use "agarics" as a common name for the Agaricales. Contemporary sources now tend to use the term euagarics when referring only to members of the Agaricales. "Agaric" is also sometimes used as a common name for members of the genus Agaricus, as well as for members of other genera, for example, Amanita muscaria is sometimes called "fly agaric".
As projects go, it is hard to dispute the failure of the Eurofighter/Typhoon project. The fixed costs of development were more than double initial estimates, and the cost per aircraft was 75% higher than forecast. If a camel is a horse designed by committee, then the Typhoon is an aircraft designed by committee. Germany, Italy, Spain and the UK design and build the aircraft under a shared framework, and the key point is that there is no single point of accountability. Decision making is supposed to take 40 days; in some cases it took 7 years. A similar issue with high-level decision making occurs in execution: a set of national contractors perform the work, and the emphasis is on 'fair' distribution of work rather than on efficiency and integration. With this lack of consensus and glacial decision making, it is perhaps unsurprising that the key factor in the fixed-cost increase has been the collaborative structure itself. As with many advanced projects, delays should be expected given the inherent complexity of creating something that has not been created before, and the ending of the Cold War and the fall of the Berlin Wall created legitimate reasons for the project's scope to change. However, the Typhoon project shows the coordination problems of having no single authority accountable for a project's success.
Chinese Cultural Studies:
- Waging War
- Offensive Strategy
- Posture of Army
- Void and Actuality

Chapter 1: Estimates

War is a matter of vital importance to the state; a matter of life or death, the road either to survival or to ruin. Hence, it is imperative that it be studied thoroughly. Therefore, appraise it in terms of the five fundamental factors and make comparisons of the various conditions of the antagonistic sides in order to ascertain the results of a war. The first of these factors is politics; the second, weather; the third, terrain; the fourth, the commander; and the fifth, doctrine.

Politics means the thing which causes the people to be in harmony with their ruler so that they will follow him in disregard of their lives and without fear of any danger. Weather signifies night and day, cold and heat, fine days and rain, and change of seasons. Terrain means distances, and refers to whether the ground is traversed with ease or difficulty and to whether it is open or constricted, and influences your chances of life or death. The commander stands for the general's qualities of wisdom, sincerity, benevolence, courage, and strictness. Doctrine is to be understood as the organization of the army, the gradations of rank among the officers, the regulations of supply routes, and the provision of military materials to the army. These five fundamental factors are familiar to every general. Those who master them win; those who do not are defeated. Therefore, in laying plans, compare the following seven elements, appraising them with the utmost care.

- Which ruler is wise and more able?
- Which commander is more talented?
- Which army obtains the advantages of nature and the terrain?
- In which army are regulations and instructions better carried out?
- Which troops are stronger?
- Which army has the better-trained officers and men?
- Which army administers rewards and punishments in a more enlightened and correct way?
By means of these seven elements, I shall be able to forecast which side will be victorious and which will be defeated. The general who heeds my counsel is sure to win. Such a general should be retained in command. One who ignores my counsel is certain to be defeated. Such a one should be dismissed. Having paid attention to my counsel and plans, the general must create a situation which will contribute to their accomplishment. By "situation" I mean he should take the field situation into consideration and act in accordance with what is advantageous. All warfare is based on deception. Therefore, when capable of attacking, feign incapacity; when active in moving troops, feign inactivity. When near the enemy, make it seem that you are far away; when far away, make it seem that you are near. Hold out baits to lure the enemy. Strike the enemy when he is in disorder. Prepare against the enemy when he is secure at all points. Avoid the enemy for the time being when he is stronger. If your opponent is of choleric temper, try to irritate him. If he is arrogant, try to encourage his egotism. If the enemy troops are well prepared after reorganization, try to wear them down. If they are united, try to sow dissension among them. Attack the enemy where he is unprepared, and appear where you are not expected. These are the keys to victory for a strategist. It is not possible to formulate them in detail beforehand. Now, if the estimates made before a battle indicate victory, it is because careful calculations show that your conditions are more favorable than those of your enemy; if they indicate defeat, it is because careful calculations show that favorable conditions for a battle are fewer. With more careful calculations, one can win; with less, one cannot. How much less chance of victory has one who makes no calculations at all! By this means, one can foresee the outcome of a battle. 
Chapter 2 : Waging War

In operations of war, when one thousand fast four-horse chariots, one thousand heavy chariots, and one thousand mail-clad soldiers are required; when provisions are transported for a thousand li; when there are expenditures at home and at the front, and stipends for entertainment of envoys and advisers, the cost of materials such as glue and lacquer, and of chariots and armor, will amount to one thousand pieces of gold a day. One hundred thousand troops may be dispatched only when this money is in hand. A speedy victory is the main object in war. If this is long in coming, weapons are blunted and morale depressed. If troops are attacking cities, their strength will be exhausted. When the army engages in protracted campaigns, the resources of the state will fall short. When your weapons are dulled and ardor dampened, your strength exhausted and treasure spent, the chieftains of the neighboring states will take advantage of your crisis to act. In that case, no man, however wise, will be able to avert the disastrous consequences that ensue. Thus, while we have heard of stupid haste in war, we have not yet seen a clever operation that was prolonged, for there has never been a protracted war which benefited a country. Therefore, those unable to understand the evils inherent in employing troops are equally unable to understand the advantageous ways of doing so. Those adept in waging war do not require a second levy of conscripts or more than two provisionings. They carry military equipment from the homeland, but rely on the enemy for provisions. Thus, the army is plentifully provided with food. When a country is impoverished by military operations, it is due to distant transportation; carrying supplies for great distances renders the people destitute. Where troops are gathered, prices go up. When prices rise, the wealth of the people is drained away. When wealth is drained away, the people will be afflicted with urgent and heavy exactions.
With this loss of wealth and exhaustion of strength, the households in the country will be extremely poor and seven-tenths of their wealth dissipated. As to government expenditures, those due to broken-down chariots, worn-out horses, armor and helmets, bows and arrows, spears and shields, protective mantlets, draft oxen, and wagons will amount to 60 percent of the total. Hence, a wise general sees to it that his troops feed on the enemy, for one zhong of the enemy's provisions is equivalent to twenty of one's own and one shi of the enemy's fodder to twenty shi of one's own. In order to make the soldiers courageous in overcoming the enemy, they must be roused to anger. In order to capture more booty from the enemy, soldiers must have their rewards. Therefore, in chariot fighting when more than ten chariots are captured, reward those who take the first. Replace the enemy's flags and banners with your own, mix the captured chariots with yours, and mount them. Treat the prisoners of war well, and care for them. This is called "winning a battle and becoming stronger." Hence, what is valued in war is victory, not prolonged operations. And the general who understands how to employ troops is the minister of the people's fate and arbiter of the nation's destiny.

Chapter 3 : Offensive Strategy

Generally, in war the best policy is to take a state intact; to ruin it is inferior to this. To capture the enemy's entire army is better than to destroy it; to take intact a regiment, a company, or a squad is better than to destroy them. For to win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the supreme excellence. Thus, what is of supreme importance in war is to attack the enemy's strategy. Next best is to disrupt his alliances by diplomacy. The next best is to attack his army.
And the worst policy is to attack cities. Attack cities only when there is no alternative, because to prepare big shields and wagons and make ready the necessary arms and equipment requires at least three months, and to pile up earthen ramps against the walls requires an additional three months. The general, unable to control his impatience, will order his troops to swarm up the wall like ants, with the result that one-third of them will be killed without taking the city. Such is the calamity of attacking cities. Thus, those skilled in war subdue the enemy's army without battle. They capture the enemy's cities without assaulting them and overthrow his state without protracted operations. Their aim is to take all under heaven intact by strategic considerations. Thus, their troops are not worn out and their gains will be complete. This is the art of offensive strategy. Consequently, the art of using troops is this: When ten to the enemy's one, surround him. When five times his strength, attack him. If double his strength, divide him. If equally matched, you may engage him with some good plan. If weaker numerically, be capable of withdrawing. And if in all respects unequal, be capable of eluding him, for a small force is but booty for one more powerful if it fights recklessly. Now, the general is the assistant to the sovereign of the state. If this assistance is all-embracing, the state will surely be strong; if defective, the state will certainly be weak. Now, there are three ways in which a sovereign can bring misfortune upon his army:
- When ignorant that the army should not advance, to order an advance; or when ignorant that it should not retire, to order a retirement. This is described as "hobbling the army."
- When ignorant of military affairs, to interfere in their administration. This causes the officers to be perplexed.
- When ignorant of command problems, to interfere with the direction of the fighting. This engenders doubts in the minds of the officers.
If the army is confused and suspicious, neighboring rulers will take advantage of this and cause trouble. This is what is meant by: "A confused army leads to another's victory." Thus, there are five points in which victory may be predicted:
- He who knows when he can fight and when he cannot will be victorious.
- He who understands how to fight in accordance with the strength of antagonistic forces will be victorious.
- He whose ranks are united in purpose will be victorious.
- He who is well prepared and lies in wait for an enemy who is not well prepared will be victorious.
- He whose generals are able and not interfered with by the sovereign will be victorious.

It is in these five matters that the way to victory is known. Therefore, I say: Know your enemy and know yourself; in a hundred battles, you will never be defeated. When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and of yourself, you are sure to be defeated in every battle.

Chapter 4 : Dispositions

The skillful warriors in ancient times first made themselves invincible and then awaited the enemy's moment of vulnerability. Invincibility depends on oneself, but the enemy's vulnerability on himself. It follows that those skilled in war can make themselves invincible but cannot cause an enemy to be certainly vulnerable. Therefore, it can be said that one may know how to win, but cannot necessarily do so. Defend yourself when you cannot defeat the enemy, and attack the enemy when you can. One defends when his strength is inadequate; he attacks when it is abundant. Those who are skilled in defense hide themselves as under the ninefold earth; those in attack flash forth as from above the ninefold heavens. Thus, they are capable both of protecting themselves and of gaining a complete victory. To foresee a victory which the ordinary man can foresee is not the acme of excellence.
Neither is it if you triumph in battle and are universally acclaimed "expert," for to lift an autumn down requires no great strength, to distinguish between the sun and moon is no test of vision, and to hear the thunderclap is no indication of acute hearing. In ancient times, those called skilled in war conquered an enemy easily conquered. And, therefore, the victories won by a master of war gain him neither reputation for wisdom nor merit for courage. For he wins his victories without erring. Without erring he establishes the certainty of his victory; he conquers an enemy already defeated. Therefore, the skillful commander takes up a position in which he cannot be defeated and misses no opportunity to overcome his enemy. Thus, a victorious army always seeks battle after its plans indicate that victory is possible under them, whereas an army destined to defeat fights in the hope of winning but without any planning. Those skilled in war cultivate their policies and strictly adhere to the laws and regulations. Thus, it is in their power to control success. Now, the elements of the art of war are first, the measurement of space; second, the estimation of quantities; third, calculations; fourth, comparisons; and fifth, chances of victory. Measurements of space are derived from the ground. Quantities derive from measurement, figures from quantities, comparisons from figures, and victory from comparisons. Thus, a victorious army is as one yi balanced against a grain, and a defeated army is as a grain balanced against one yi. It is because of disposition that a victorious general is able to make his soldiers fight with the effect of pent-up waters which, suddenly released, plunge into a bottomless abyss.

Chapter 5 : Posture of Army

Generally, management of a large force is the same as management of a few men. It is a matter of organization. And to direct a large force is the same as to direct a few men. This is a matter of formations and signals.
That the army is certain to sustain the enemy's attack without suffering defeat is due to operations of the extraordinary and the normal forces. Troops thrown against the enemy as a grindstone against eggs is an example of a solid acting upon a void. Generally, in battle, use the normal force to engage and use the extraordinary forces to win. Now, the resources of those skilled in the use of extraordinary forces are as infinite as the heavens and earth, as inexhaustible as the flow of the great rivers, for they end and recommence - cyclical, as are the movements of the sun and moon. They die away and are reborn - recurrent, as are the passing seasons. The musical notes are only five in number, but their combinations are so infinite that one cannot visualize them all. The flavors are only five in number, but their blends are so various that one cannot taste them all. In battle, there are only the normal and extraordinary forces, but their combinations are limitless; none can comprehend them all. For these two forces are mutually reproductive. It is like moving in an endless circle. Who can exhaust the possibility of their combination? When torrential water tosses boulders, it is because of its momentum; when the strike of a hawk breaks the body of its prey, it is because of timing. Thus, the momentum of one skilled in war is overwhelming, and his attack precisely timed. His potential is that of a fully drawn crossbow; his timing, that of the release of the trigger. In tumult and uproar, the battle seems chaotic, but there must be no disorder in one's own troops. The battlefield may seem in confusion and chaos, but one's array must be in good order. That will be proof against defeat. Apparent confusion is a product of good order; apparent cowardice, of courage; apparent weakness, of strength.
Order or disorder depends on organization and direction; courage or cowardice on circumstances; strength or weakness on tactical dispositions. Thus, one who is skilled at making the enemy move does so by creating a situation according to which the enemy will act. He entices the enemy with something he is certain to want. He keeps the enemy on the move by holding out bait and then attacks him with picked troops. Therefore, a skilled commander seeks victory from the situation and does not demand it of his subordinates. He selects suitable men and exploits the situation. He who utilizes the situation uses his men in fighting as one rolls logs or stones. Now, the nature of logs and stones is that on stable ground they are static; on a slope, they move. If square, they stop; if round, they roll. Thus, the energy of troops skillfully commanded in battle may be compared to the momentum of round boulders which roll down from a mountain thousands of feet in height.

Chapter 6 : Void and Actuality

Generally, he who occupies the field of battle first and awaits his enemy is at ease, and he who comes later to the scene and rushes into the fight is weary. And, therefore, those skilled in war bring the enemy to the field of battle and are not brought there by him. One able to make the enemy come of his own accord does so by offering him some advantage. And one able to stop him from coming does so by preventing him. Thus, when the enemy is at ease, be able to tire him; when well fed, to starve him; when at rest, to make him move. Appear at places which he is unable to rescue; move swiftly in a direction where you are least expected. That you may march a thousand li without tiring yourself is because you travel where there is no enemy. To be certain to take what you attack is to attack a place the enemy does not or cannot protect. To be certain to hold what you defend is to defend a place the enemy dares not or is not able to attack.
Therefore, against those skilled in attack, the enemy does not know where to defend, and against the experts in defense, the enemy does not know where to attack. How subtle and insubstantial, that the expert leaves no trace. How divinely mysterious, that he is inaudible. Thus, he is master of his enemy's fate. His offensive will be irresistible if he makes for his enemy's weak positions; he cannot be overtaken when he withdraws if he moves swiftly. When I wish to give battle, my enemy, even though protected by high walls and deep moats, cannot help but engage me, for I attack a position he must relieve. When I wish to avoid battle, I may defend myself simply by drawing a line on the ground; the enemy will be unable to attack me because I divert him from going where he wishes. If I am able to determine the enemy's dispositions while, at the same time, I conceal my own, then I can concentrate my forces and his must be divided. And if I concentrate while he divides, I can use my entire strength to attack a fraction of his. Therefore, I will be numerically superior. Then, if I am able to use many to strike few at the selected point, those I deal with will fall into hopeless straits. The enemy must not know where I intend to give battle. For if he does not know where I intend to give battle, he must prepare in a great many places. And when he prepares in a great many places, those I have to fight in any one place will be few. For if he prepares to the front, his rear will be weak, and if to the rear, his front will be fragile. If he strengthens his left, his right will be vulnerable, and if his right, there will be few troops on his left. And when he sends troops everywhere, he will be weak everywhere. Numerical weakness comes from having to guard against possible attacks; numerical strength from forcing the enemy to make these preparations against us. If one knows where and when a battle will be fought, his troops can march a thousand li and meet on the field.
But if one knows neither the battleground nor the day of battle, the left will be unable to aid the right and the right will be unable to aid the left, and the van will be unable to support the rear and the rear, the van. How much more is this so when separated by several tens of li or, indeed, by even a few! Although I estimate the troops of Yue as many, of what benefit is this superiority with respect to the outcome of war? Thus, I say that victory can be achieved. For even if the enemy is numerically stronger, I can prevent him from engaging. Therefore, analyze the enemy's plans so that you will know his shortcomings as well as his strong points. Agitate him in order to ascertain the pattern of his movement. Lure him out to reveal his dispositions and ascertain his position. Launch a probing attack in order to learn where his strength is abundant and where deficient. The ultimate in disposing one's troops is to conceal them without ascertainable shape. Then the most penetrating spies cannot pry, nor can the wise lay plans against you. It is according to the situations that plans are laid for victory, but the multitude does not comprehend this. Although everyone can see the outward aspects, none understands how the victory is achieved. Therefore, when a victory is won, one's tactics are not repeated. One should always respond to circumstances in an infinite variety of ways. Now, an army may be likened to water, for just as flowing water avoids the heights and hastens to the lowlands, so an army should avoid strength and strike weakness. And as water shapes its flow in accordance with the ground, so an army manages its victory in accordance with the situation of the enemy. And as water has no constant form, there are in warfare no constant conditions. Thus, one able to win the victory by modifying his tactics in accordance with the enemy situation may be said to be divine.
Of the five elements [water, fire, metal, wood, and earth], none is always predominant; of the four seasons, none lasts forever; of the days, some are long and some short; and the moon waxes and wanes. That is also the law of employing troops.

Chapter 7 : Maneuvering

Normally, in war, the general receives his commands from the sovereign. During the process from assembling his troops and mobilizing the people to blending the army into a harmonious entity and encamping it, nothing is more difficult than the art of maneuvering for advantageous positions. What is difficult about it is to make the devious route the most direct route and to divert the enemy by enticing him with a bait. So doing, you may set out after he does and arrive at the battlefield before him. One able to do this shows the knowledge of the artifice of diversion. Therefore, both advantage and danger are inherent in maneuvering for an advantageous position. One who sets the entire army in motion with impediments to pursue an advantageous position will not attain it. If he abandons the camp and all the impediments to contend for advantage, the stores will be lost. Thus, if one orders his men to make forced marches without armor, stopping neither day nor night, covering double the usual distance at a stretch, and doing a hundred li to wrest an advantage, it is probable that the commanders will be captured. The stronger men will arrive first and the feeble ones will struggle along behind; so, if this method is used, only one-tenth of the army will reach its destination. In a forced march of fifty li, the commander of the van will probably fall, but half the army will arrive. In a forced march of thirty li, just two-thirds will arrive. It follows that an army which lacks heavy equipment, fodder, food, and stores will be lost. One who is not acquainted with the designs of his neighbors should not enter into alliances with them.
Those who do not know the conditions of mountains and forests, hazardous defiles, marshes and swamps cannot conduct the march of an army. Those who do not use local guides are unable to obtain the advantages of the ground. Now, war is based on deception. Move when it is advantageous and create changes in the situation by dispersal and concentration of forces. When campaigning, be swift as the wind; in leisurely marching, majestic as the forest; in raiding and plundering, be fierce as fire; in standing, firm as the mountains. When hiding, be as unfathomable as things behind the clouds; when moving, fall like a thunderbolt. When you plunder the countryside, divide your forces. When you conquer territory, defend strategic points. Weigh the situation before you move. He who knows the artifice of diversion will be victorious. Such is the art of maneuvering.
During a recent meeting with engineering-school faculty and alumni, we talked about whether their college should educate generalists or specialists. One of the graduates explained how his broad education let him solve a problem with fundamental information that bridged several specialties. One of the engineers with a deep knowledge in a narrow area countered that today many companies need engineers with specialized knowledge so they can "jump into" a problem right away without a "warm-up" period. I can see both sides of the generalist vs. specialist debate. In electrical engineering, undergraduates often specialize a bit, perhaps taking more analog than digital electronics courses. But they receive a BS degree with a good understanding of many facets of electronics. In graduate school they can continue their education in narrower fields. Undergraduate engineering programs educate people about how to approach and solve problems, and how to think critically and examine problems from several perspectives. The general knowledge instilled during four years of college also helps graduates evaluate a field and determine whether they want to continue in it. I know science and engineering graduates who have become surgeons, physicians, teachers, entrepreneurs, patent attorneys, and so on. The generalist approach served them well. This approach also lets people who aim for more education benefit from a variety of experiences in their discipline. So I would not recommend trying to push undergrad engineering students to become specialists in four years. On the other hand, when companies and universities advertise job openings, they usually have a long list of specialized requirements. 
I found this example of job requirements on the Internet:
- Minimum five years of embedded FPGA/ASIC design and/or verification experience;
- Three-plus years of experience using SystemVerilog;
- Solid experience verifying complex FPGA/ASIC designs;
- Strong working knowledge of OOP verification and verification environments;
- Experience with OVM/UVM verification methodology;
- Good verbal and written communication skills;
- Self-starter who can work with minimal supervision in a team environment on site;
- Experience with scripting languages (e.g., Perl, Tcl).

Generalists need not apply. So here's my advice: Go ahead and specialize as you see fit, either through an advanced degree or on-the-job training. But keep an eye on general knowledge in your chosen and related fields. If you want to specialize in motor control, for example, you should know how to write code in C, simulate control algorithms in MATLAB and Simulink, use LabVIEW, and so on. It also helps to know how to go to the shop and quickly machine a motor coupling you need to test a motor. You might become a specialist with a generalist's knowledge of many things, or a generalist with pockets of deep knowledge in a few areas. We have room for both types of engineers. Readers, what do you think? Tell us in the comments section below.
Pesticide Action Network Updates Service (PANUPS)
A Weekly News Update on Pesticides, Health and Alternatives
See PANUPS archive for complete information.
- 'Superweeds' jam the pesticide treadmill
- Philippines Dept. of Health: No aerial spraying on banana plantations
- Endocrine disruptors disrupt common wisdom
- GM crops kill ladybugs; science suppressed

The introduction of genetically modified, herbicide-tolerant crops has created a dire situation in the U.S. South – as weeds become more herbicide-resistant, farmers trying to maintain their 10,000-acre-plus “megafarms” are forced to apply increasing amounts of weedkiller. According to Tom Philpott and others, this pesticide treadmill is beginning to break down. Nine strains of amaranth (a.k.a. pigweed) have been labeled as noxious weeds in the U.S. One variety in particular, Palmer amaranth, has become resistant to glyphosate, the active ingredient in Monsanto’s flagship herbicide Roundup. Amaranth and other so-called "superweeds" have thrown a wrench in the machine of industrial agriculture. Pigweed is sturdy enough to “stop a combine in its tracks” and reduce yields by up to 68%, which is forcing many farmers to abandon chemical weedkillers in favor of mechanical cultivators and hand weeding. The situation is so bad in Macon County, Georgia, that 10,000 acres of farmland were deserted. The qualities that make amaranth a particularly pesky weed are the reasons it has been cultivated as a food source by Indigenous peoples in the Americas since 3400 BC: it is prolific (producing up to 10,000 seeds at a time), drought-resistant, reaches maturity quickly, and has an extended period of germination. It is also exceptionally nutritious, containing 30% more protein than other cereal grains, and, like quinoa (a pseudocereal), it is a complete protein.
The Aztecs used it as a food staple, but when the Spanish priests discovered that they were also using it in religious ceremonies, they banned the sale, consumption, and cultivation of amaranth. The plant has outlasted the Spanish, bested Roundup, and is being reintroduced in many places throughout Mesoamerica as an inexpensive, healthy, localized solution to hunger problems. In response to its current superweed crisis, Monsanto blames farmers for the overuse of glyphosate, and recommends mixing glyphosate with older herbicides like 2,4-D -- one of the active ingredients in Agent Orange. They are right about the overuse part -- in the ten years after "Roundup Ready" crops were introduced, glyphosate use went from 7.9 million pounds per year to 119 million pounds per year. And as for mixing glyphosate and 2,4-D? Monsanto appears to have anticipated the superweed dilemma, as they patented that combination in 2001. On November 8, the Philippines Department of Health issued a statement urging a halt to aerial pesticide spraying on banana plantations, saying that the banana industry must prove aerial application safe before returning to the practice. According to the Philippine Daily Inquirer, the recommendation is "based on the precautionary principle espoused by the Rio Declaration, of which the Philippines is a signatory.” The statement is based on a 2006 department study that links aerial spraying with diseases of people living in and around the banana plantations.
According to the Inquirer, the department recognized that "the fungicides mancozeb and chlorothalonil which are sprayed aerially 'caused acute health effects and chronic effects to workers and communities living near plantations.'" The 2006 study recommendations include: (1) Establishing a health surveillance system to detect health effects of chronic pesticide exposure in communities adjacent to plantations; (2) Requiring industry, with governmental oversight, to monitor pesticide residues in the environment of adjacent communities, remediating where necessary; (3) Creating and strengthening guidelines for protecting communities from pesticide contamination; and (4) Considering a shift to organic farming techniques. "This is a significant victory," said Dr. Romeo Quijano of Pesticide Action Network Philippines. "But the campaign continues since the Supreme Court has not yet decided on the issue and the companies continue their aerial spraying." As PAN North America members know, Dr. Quijano and Ilana Ilang Quijano, his daughter, have been targeted by banana plantation owners with threats and libel suits for documenting and publicizing the continuing exposure of plantation residents to pesticide poisoning. According to Medha Chandra, PAN North America Campaigner, "It is critical that we develop and implement policies that prevent chemical trespass via pesticide drift. Sensitive sites -- such as schools, homes and playgrounds -- must be our first priority for protection. Long-term, a transition to agroecological pest management is the best solution to protect health, food and livelihoods of farm and rural communities around the world." Endocrine-disrupting chemicals (EDCs) are substances in the environment that interfere with hormone (endocrine) systems to cause developmental, reproductive, immunological and neurological disorders including cancer, obesity, diabetes and a host of other illnesses. U.S.
regulatory science and traditional toxicological and medical science have been slow to recognize the environmental health hazards posed by EDCs, in part because this class of chemicals operates at such low levels and through such complex causal mechanisms that reductive, mathematically linear risk models proceeding from the assumption that "the dose makes the poison" have been ill-suited to comprehend the messy realities of multiple chemical exposures and time-dependent dose response. Increasingly, toxicologists and now -- surprisingly -- the American Medical Association are poised to take up the public health paradigm challenge posed by EDCs. In the latest issue of Environmental Health Perspectives, Linda Birnbaum, Director of the National Institute of Environmental Health Sciences, presents a summary of recent research that together refutes the commonly held notion that the dose makes the poison. Birnbaum explains how a growing number of studies show that many environmental toxicants can have significant consequences, including dysfunction and disease, at very low-level exposures. Many of these low-dose studies (including with the pesticides hexachlorobenzene and atrazine) demonstrate that “the timing of exposure is critical to the outcome and that exposures during early life stages (fetal, infant, and pubertal) are particularly important. This recognition of critical windows of vulnerability not only demonstrates the developmental basis of disease but also that the timing, as well as the dose, makes the poison.” In addition, the effects of environmental toxins on the human hormone system, for example, are frequently non-linear, such that “high doses may not be appropriate to predict the safety of low doses when hormonally active or modulating compounds are studied.” Birnbaum describes this body of research as responsible for disruptive "paradigm shifts in our understanding of the relationship between environmental toxicants and disease."
A recent article in Nature Biotechnology (PDF) reveals data, formerly suppressed by the biotechnology industry, that demonstrate a transgenic variety of corn is fatal to ladybugs. In 2001, at the request of seed company Pioneer Hi-Bred International, university scientists conducted research on a new variety of transgenic corn containing the binary toxin Cry34Ab1/Cry35Ab1. The scientists found that nearly 100% of ladybugs fed on the corn could not survive past the eighth day of their life cycle. Pioneer prohibited the scientists from publicizing their data and, when applying for regulatory approval for a corn variety containing the same toxin, submitted different data that made no mention of potential harm to ladybugs. Scientists are often barred from publicizing data that is unwelcome to biotechnology companies, particularly when the corporations themselves commissioned the research. Based on claims of business confidentiality and strict contracts with researchers, companies are able to keep unwelcome data under wraps and scientists’ hands tied. Companies routinely deny scientists’ research requests and suppress research by threatening legal action, a practice one scientist describes as “chilling.” In February 2009, 26 corn-pest specialists anonymously submitted a statement to U.S. EPA decrying industry’s prohibitive restrictions on independent research. "The risks of genetically modified crops are coming to light in spite of industry’s attempts to strangle the science," observes Kathryn Gilje, executive director of Pesticide Action Network North America. Ireland recently banned GM crops in favor of developing agriculture that emphasizes proven agroecological solutions.
http://www.panna.org/print/585?quicktabs_1=2
As the hoopla over the first draft of the human genome fades, a new, more fundamental endeavor is quietly gearing up in the same Maryland laboratories where much of the mapping of Homo sapiens took place. At The Institute for Genomic Research, a sandy-haired biologist named Scott Peterson and his team are trying to create something nature has not: a single-celled creature with the smallest number of genes necessary to stay alive. "Some may disagree, but I don't think what I'm doing is creating life," says Peterson. "We're modeling life. We're examining what are the genetic requirements for living cells." The search for such secrets is what drives this nonprofit institute called TIGR (The Institute for Genomic Research). Founded in 1992 by Craig Venter, the pioneer in genetic sequencing who was a coleader of the Human Genome Project, TIGR is dedicated to studying genomes of all stripes--from human to microbe. Peterson's focus, however, is on a particular set of secrets. Although he admits, "I'm motivated by the challenge of doing something nobody's done before," he hopes to understand from this experiment exactly how a cell works. Predictably, he has chosen to study nature's simplest bacterium, Mycoplasma genitalium. Found in the comfy environs of the human urogenital tract, the needs of this mycoplasma are easily fulfilled, and so, over its long evolutionary history, it has shed thousands of unnecessary genes, becoming the very model of austerity. (The genome of food-poisoning culprit E. coli, considered a basic life-form, is nine times bigger.) By tinkering with mycoplasma's slender set of genes, Peterson is in search of answers to two fundamental questions: How many genes, exactly, does a cell need to live? And which genes are they? Success will mean more than making history or providing crucial insights into how cells function. Along the way, Peterson's team must grapple with some searching ethical dilemmas. 
The results could, for example, lead to customized microbes for chewing up toxic waste, but they could also show a clear path to creating bioweapons more deadly than anything nature has dished up. So Peterson is understandably cautious: "I'm not going to name it Dolly or anything like that. Things like that tend to put people off." The search for the smallest genome stretches back to 1955, when biophysicist Harold Morowitz began collecting a Noah's ark of microbes in his lab at Yale and inspecting each organism's simple circular chromosome. One day he found an impressively runty germ, a species of Mycoplasma, and decided to study it. NASA funded the research, figuring that alien life might resemble something as seemingly primitive and genetically streamlined as mycoplasma. Morowitz supposed that if you knew what each of mycoplasma's genes did, a computer could be used to simulate the system. He foresaw this as a way to study the whole cell, not just one gene here or metabolic pathway there. He imagined the science of understanding genes both in detail and in concert--what Peterson would now call genomics. Peterson's office has all the cultural trappings of the day: a Pokémon poster, a lava lamp, a cappuccino machine, a Mac G4. But the icon that testifies to his membership in biology's next generation is a diagram taped to his filing cabinet. With several colored bars spanning several rows, it resembles a small, delicately colored Persian rug. It represents the more than 1,700 genes of Haemophilus influenzae--the first complete sequence of a bacterial genome. Craig Venter helped inaugurate the genomic revolution with that sequencing project in 1995. But H. influenzae's genetic code was something of a disappointment. More than a third of its genes were completely unknown. Determined to crack a simpler, more manageable genome, Venter's team set their sights on M. genitalium. 
Three months later, Claire Fraser, now president of the institute, had nailed the 470-gene sequence. Still, mycoplasma's genetic instruction book was too complex for scientists to grasp how the genes work together. Was there a way to make it even simpler? Craig Venter began looking around for someone who could help. And he found Scott Peterson. Back in the late 1980s, years before the word genome filtered its way into everyday parlance, Peterson had been a budding bacteriologist in graduate school, putting in 14-hour days sequencing sections of mycoplasma DNA. Peterson says he "didn't see a bright future in microbiology." But in 1996 Venter recruited him to launch a long-term exploration of mycoplasma's genes. "Craig had a very alluring scientific pitch," says Peterson. "He outlined a 10-year commitment to learn just how this cell works." Gene-by-gene analysis, chromosome engineering, computer simulations, anything and everything was at his disposal. "Venter said, 'Wow, we could make a chromosome,'" recalls Peterson. Eventually, Venter left TIGR to head up Celera, a private genomics company, leaving the minimal-genome project in his protégé's hands. Peterson's first step was to disrupt mycoplasma's genes in various places to figure out which were crucial. To do this, he attacked the mycoplasma genome with bits of DNA called transposons that sneak their way into chromosomes. The invading transposons landed at random within the mycoplasma gene sequence, wreaking havoc. By looking at the cells that died from the attack, Peterson could see where the invading transposon had landed and thereby pinpoint genes essential for the bacterium's life. After this meticulous screening, he and Clyde Hutchison, a colleague from the University of North Carolina at Chapel Hill, identified a list of 300 or so essential genes. Without any one of these genes, mycoplasma would die. Yet that turned out not to be the sought-after minimal set. 
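The logic of the transposon screen can be caricatured in a few lines of code. The genome size and the "essential" gene set below are invented for illustration; the point is only the mechanism: insertions land at random, a cell whose essential gene is disrupted dies, and genes never seen disrupted among the survivors are inferred to be essential.

```python
import random

random.seed(0)

# Invented toy genome: gene ids 0..29, of which a hidden subset is essential.
GENOME = list(range(30))
ESSENTIAL = {2, 5, 11, 17, 23}   # unknown to the "experimenter"

def transposon_screen(trials=2000):
    """Simulate random single-gene transposon knockouts.

    A knockout cell survives only if the disrupted gene is non-essential;
    genes never recovered among the survivors are inferred to be essential.
    """
    survived_disruptions = set()
    for _ in range(trials):
        hit = random.choice(GENOME)        # transposon lands at random
        if hit not in ESSENTIAL:           # cell survives the disruption
            survived_disruptions.add(hit)
    return set(GENOME) - survived_disruptions  # inferred essential genes

print(sorted(transposon_screen()))  # recovers the hidden set: [2, 5, 11, 17, 23]
```

As the article goes on to note, a screen like this finds genes that are individually indispensable but misses "bit players" whose contribution only matters in combination, which is why the ~300 hits were not the minimal set.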
If the roughly 300 genes were strung together and slipped into a mycoplasma cell, the most likely result was one pathetically dependent bacterium, if it survived at all. Some genes, like basketball players on a team, tend to work together in cells. The transposon research showed who the team's best players were, but the analysis missed bit players whose teamwork was crucial. In order to approach a real minimal set of genes, Peterson says, you'd have to take genes out a few at a time, a technically challenging proposition. Therefore, "the way to prove that you've got a minimal cell is to make it," he says. But that approach means creating an organism that is utterly new to the face of the Earth. "If I limit my thinking to the science at hand, it simply represents a challenge," Peterson says. "Where have you gone too far? That's a difficult question. It's one that I haven't properly resolved fully." Although the biotech industry has been altering organisms--from plants to transgenic animals--for more than 20 years, Peterson could see that creating a bacterium with a custom-made, artificially assembled set of genes would be controversial. "In a nutshell, when you are faced with a power unlike anything you've really used before," says Peterson, "you have to stop and ask: Am I using this power appropriately or not?" In 1999, his team commissioned a panel of religious figures and ethicists to discuss the implications. After meeting several times over a year, the panel concluded that the project's basic goals were in keeping with the tradition of sound scientific inquiry. "We found no intrinsic reason in religious or secular ethics that you shouldn't [continue]," says bioethicist Art Caplan of the University of Pennsylvania, who headed the team. "There were some pretty serious religious types who were doubtful when we started and wound up saying it depends what you're going to do with it." 
The concern was noteworthy because the institute's team is altering an organism uniquely suited to colonize the human body. "It's not hard for me to imagine a sinister application, and that's frightening," says Peterson. Terrorists, he notes, could use these procedures to hide malevolent genes in other organisms. "If you really want to be sophisticated, you have to cloak what you're doing, like put the genes that make anthrax lethal into more innocent bacteria." But scientists who make their living in the biotech trade are persistently optimistic about their ability to control the genies they let out of bottles. "Certainly we're interested in the ethical issues around engineering organisms," adds Michael Brasch, Peterson's commercial collaborator. Brasch developed the technology that Peterson's team is using to assemble the artificial chromosome, and he, too, may face public opposition. In an exchange over lunch, Peterson kids his colleague: "You know, they're going to compare you to the gun companies." Brasch half laughs. "I usually hear us compared to Microsoft." That's because Brasch's firm, Life Technologies, a division of biotech giant Invitrogen Corp., is selling a new scheme as the next killer app of the genome age--the very technique Peterson is using. Dubbed Gateway Cloning Technology, it mimics the way some viruses slip their genes into a host cell's DNA by exploiting genetic tags called recombination sites--regions of DNA that allow bacteria to swap genes with one another. Brasch has developed his own recombination sites that permit him to cut and paste genes with ease, and he's hoping Peterson's work will be a public coming-out party for the system. Before learning about Gateway, Peterson says, his team had been unable to assemble a chromosome. "With available technology, copying the essential genes one by one is easy," Brasch explains. "But linking them together to rebuild the entire genome technically couldn't be done." 
The Gateway system should overcome this problem, says Peterson. By including the recombination sites on each gene as it's copied, Gateway will connect mycoplasma's DNA in proper order. Once the chromosome is complete, the recombination sites can be used to identify where genes begin and end. That will make it easy for Peterson to pare the chromosome by removing several genes at a time. But even if Gateway solves the copying problem, Peterson faces another hurdle: choosing which genes to string together. Almost certain to make the cut, he says, will be genes that instruct the cell to make proteins, genes that help build DNA, and genes that are crucial for the cell's replication. The team believes they have properly identified more than 200 crucial genes, including ones for eating, metabolism, and structure. But they have no clue what another 100 of mycoplasma's most essential genes do. "One bad choice could kill the whole thing," Peterson says. Attempts at computer-modeling life haven't shed much light on the problem. A Japanese group called E-Cell tried in 1997 to create a digital minimal cell. Their 127-gene, less-than-minimal model of a mycoplasma cell was able to simulate life, but not replicate it. The barrier was science's murky sense of how, among other things, mycoplasma divides. "In this particular area," says E-Cell leader Masaru Tomita, "we have to wait for the science to catch up." It's possible Peterson's reductionist strategy will find no definite answers to understanding a cell. Nevertheless, researchers will begin to understand the inner lives and histories of bacterial genomes. And that alone may be worth the effort. More than 30 genomic sequences now exist for different bacteria, each able to be picked apart. These data mines have spawned a new field in which biologists contrast the genomes of different organisms for clues. Equally powerful is the prospect of editing genomes to tackle a variety of questions. Take, for example, bacterial disease. 
Streptococcus pneumoniae, for example, is a microbe that kills more than a million people a year in underdeveloped countries. But why are some strains of the bacterium so deadly and some not even able to inhabit the lungs? Genomic engineering with a man-made chromosome in these bacteria could allow scientists to test a slew of ideas. "We can take a gene out, change it, mutate it, do whatever we want," Peterson says, "and put it back in, leaving everything the same, and ask: What effect did that change have?" Some changes will undoubtedly render the bacteria less dangerous, allowing scientists to identify new drugs that will fight these small terrors. Mix-and-match chromosome construction could also prove a powerful weapon for tackling questions of evolution. M. genitalium's closest relative is M. pneumoniae, which can cause a bad cough. By comparing the siblings' sequences, it appears that M. genitalium evolved directly from its older brother by discarding 210 genes. Imaginative chromosome reengineering could allow a researcher to replay the divergence of the two species in a frame-by-frame reverse slow motion. "One could start to add the 210 genes back sequentially to M. genitalium," says Peterson, "and ask questions about the evolution." Ironically, the one question genomic engineering may not be able to answer is which genes are absolutely essential for life. One issue is how to define life--the life-support-machine dilemma, on the most basic level. Normally, says Peterson, M. genitalium replicates itself in about 12 hours. A minimal creature, enfeebled by a bare-bones set of genes, could take much longer, perhaps a month. "And it's so sick that I have to feed it and nurture it. Is that life?" Peterson asks. Even genes designated dispensable may be long-term evolutionary investments. During the first round of experiments, researchers found that the bacteria could live without the gene they believe encodes RecA, a protein that can repair genetic errors. 
Does that make RecA dispensable? Without the recA gene, mycoplasma cells may survive in the lab, says Peterson, but "in a million years you might suspect they won't be around anymore." Experiments have also shown that the amount of sugar available determines which bacterial genes are crucial for metabolism. So which would be crucial for a minimal cell? And would the minimal set of mycoplasma's genes that just managed to make do in the cushy lab environment suffice when the bacteria live in mice, where an immune system might go after them? "There's a constant debate over nature or nurture--they're inseparable," says Craig Venter. "I naively thought that we could have a molecular definition for life, come up with a set of genes that would minimally define life. Nature just refuses," he says softly, "to be so easily quantified." Peterson stands in his lab, holding a flask of drifting mycoplasma cells up to the light. "They're not healthy," he says, "and they're starting to suffer." For eight weeks the bacteria have endured yet another trial by fire, this time with a chemical mutagen. Now the genes that allow them to cling to the flask appear to be damaged. Still, the cells are getting by. Losing genes is old hat for Mycoplasma, but the losses over the eons have made its members pathetically reliant. They cannot make raw materials for proteins, DNA, or their cell membranes. So in a lab they demand a diet of ground-up cow hearts, blood serum, and other delicacies. "They're high maintenance," says one assistant. Without question, Peterson's project--like so many in biotech research--will further our understanding of how genes work together. And it could someday lead to the creation of an entirely artificial single-celled organism, assembled from off-the-shelf components like DNA, proteins, lipids, and sugars. "I think someday science will be in that position," says Peterson, "where we will have to ask: Should we or shouldn't we?" 
Some restless innovator may build on Peterson's science and take that next uncertain step. It could even be genomics pioneer and entrepreneur Craig Venter, his old boss. Venter, who maintains informal ties with the institute, says he has no interest in that project. "Right now, the only way you can get life is from life itself," Venter says. "We're working in that direction, but we're a long way away from making the decision to go ahead and do that experiment." Peterson isn't so sure: "I wouldn't put it past him." The Institute for Genomic Research's Web site describes its varied projects: www.tigr.org.
http://discovermagazine.com/2001/apr/featsimple
Twenty-two million American workers produce, process, sell and trade the nation's food and fiber. But only 4.6 million of those people live on farms--slightly less than 2 percent of the total U.S. population. Consumers spend $547 billion for food originating on U.S. farms and ranches. Of each dollar spent on food, the farmer's share is approximately 23 cents. The rest goes to costs beyond the farm gate: wages and materials for production, processing, marketing, transportation and distribution. On average, every hour, 24 hours a day, 365 days a year, around $6 million in U.S. agricultural products--grains, oilseeds, cotton, meats, vegetables, snack foods and more--is consigned for export shipment to foreign markets. It all means more jobs and higher wages across the nation. U.S. agricultural exports generate more than $100 billion annually in business activity throughout the U.S. economy and provide jobs for nearly 1 million workers. Agricultural land provides habitat for 75 percent of the nation's wildlife. Deer, moose, waterfowl and other species have shown significant population increases during the past several years. Ethanol and new bio-diesel fuels made from corn and other grains are beneficial to the environment and promote energy security. Today's Farmer and Farm Family Nearly two million people farm or ranch in the United States. Almost 90 percent of U.S. farms are operated by individuals or family corporations. And American agriculture provides jobs--including production agriculture, farm inputs, processing and marketing, along with retail and wholesale sales--for 15 percent of the U.S. population. A recent survey of America's young farmers and ranchers revealed that 97.2 percent planned to farm and ranch for life. And 90 percent said they would like their children to follow in their footsteps. This provides strong incentive for today's farmers and ranchers to protect and preserve the natural resources on their property. 
Not only are the land and its resources a farmer's lifeblood today, they represent the future for the family and its business. America's farmers and ranchers are true professionals. Most farmers and ranchers are trained and certified in the use of agricultural chemicals, and farmers test and evaluate the soil before administering fertilizers. Farmers and ranchers don't spend hard-earned money on costly fertilizers and nutrients unless they are absolutely necessary; to do otherwise doesn't make good business sense. Nearly 30 percent of today's farmers and ranchers have attended college, with over half of this group obtaining a degree. A growing number of today's farmers and ranchers with four-year college degrees are pursuing post-graduate studies. Today's Modern Farm Thanks to modern farming techniques, America's farmers and ranchers are producing more food on fewer acres, leaving more open space for wildlife habitat. Modern farming practices free up millions of acres for wildlife to live and thrive. Precision farming practices boost crop yields and reduce waste by using satellite maps and computers to match seed, fertilizer and crop protection applications to local soil conditions. A recent survey of young farmers and ranchers reveals that computers are used on 83 percent of America's farms. Nearly 75 percent of today's young farmers have a cellular telephone, and nearly one-third have access to the Internet, up from 10.5 percent last year. For farmers, the challenge is to provide consumers with the highest quality food possible. Growing and raising wholesome, safe food is the top goal. Farmers have done a good job, and they will continue to look for every opportunity to improve quality and safety. Federal and state governments are responsible for safeguarding the food supply, but farmers are responsible for growing food safely. 
We make sure we use crop protectants effectively and safely, in amounts that are no more than what is necessary to combat pests and diseases. Farmers work hard to gain the knowledge, training and skill to use chemicals safely and responsibly. Many farmers learned from their parents and have a lot of experience. But like other professionals, they also go to college, attend seminars and work with consultants. They are professionals in what they do. Food-borne illnesses can occur anytime food is involved, so basic sound food practices should always be followed, whether the food is being prepared at a restaurant, at home or at a church picnic. Proper food storage, processing and handling eliminates most, if not all, food-borne risks. Thorough cooking has proved an adequate safeguard. Food should always be properly refrigerated. Raw meat products should be segregated from cooked products. Perhaps most important, when in doubt, throw it out. The basic products farmers produce are not usually the source of bacterial diseases. After the products leave the farm, however, meat, milk and other high-protein foods, on occasion, can be subject to contamination during processing, handling, storing and the actual preparation of the food. New food safety standards have been put in place by the federal government to further ensure the food we eat is safe. The agricultural community has long supported new techniques that improve production and help make food even more affordable for consumers. For example, animals and crops have been selected for breeding for centuries, resulting in improved disease resistance and bigger yields. Biotechnology simply gives farmers the tools to speed up this process. Consumers and the environment are the end beneficiaries of new advances in biotechnology. Biotech advances that come to use on the farm will further ensure that American food and fiber products can remain cost-competitive both here and abroad. 
Biotechnology research is closely monitored by federal and state agencies, including EPA, USDA and the FDA. While it can be an effective tool, biotechnology as used on farms--such as new corn and soybean varieties--will not "run rampant" and produce the mutants that populate nightmares and science fiction movies. During a biotech research project, perhaps one gene in 10,000 is manipulated to achieve a small but desired result. Integrated Pest Management American farmers fully support practices that enable them to reduce pesticide use. They've been using IPM tactics such as field scouting and even crop rotation for years. IPM is a management practice that uses cultural practices and natural pest enemies to reduce the use of crop protectants, and farmers will continue to expand IPM use whenever possible. As business people, farmers are interested in lowering costs associated with using crop protectants, and IPM can help them do that. IPM, however, does not mean totally eliminating the use of crop protectants; some are even used in conjunction with modern IPM techniques. Farmers will continue to work with universities like NC State and researchers like those in the College of Agriculture and Life Sciences to develop new techniques that lessen the use and expense of crop protectants.
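The dollar figures quoted earlier in this piece can be sanity-checked with simple arithmetic. The inputs below are the article's own numbers; the results are approximate back-of-the-envelope values, not official statistics.

```python
# Back-of-the-envelope check of the dollar figures quoted in the article.

consumer_food_spending = 547e9   # dollars spent on food from U.S. farms and ranches
farm_share_per_dollar = 0.23     # farmer's share of each food dollar

farm_receipts = consumer_food_spending * farm_share_per_dollar
print(f"Farmers' share: ${farm_receipts / 1e9:.0f} billion")  # $126 billion

hourly_exports = 6e6             # agricultural products consigned for export per hour
annual_exports = hourly_exports * 24 * 365
print(f"Annual export shipments: ${annual_exports / 1e9:.1f} billion")  # $52.6 billion
```

Note that the ~$52.6 billion in shipments is consistent with the article's separate claim that exports generate over $100 billion in total business activity, since the latter counts downstream economic effects, not just the value of the goods shipped.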
http://www.cals.ncsu.edu/CollegeRelations/AGRICU.htm
In British nationality law, a Commonwealth citizen is a person who is either a British Citizen, British Overseas Territories Citizen, British Overseas Citizen, British Subject, British National (Overseas) or a national of a country listed in Schedule 3 of the British Nationality Act 1981. Note that British Protected Persons are not Commonwealth citizens. The list of countries in Schedule 3 at any time may not accurately reflect the countries actually within the Commonwealth at that time. For example, when Fiji left the Commonwealth in 1987 and 1990, its name was not removed from Schedule 3. This may have happened because the British Government at the time wished to avoid the consequences of Fijian citizens in the United Kingdom suddenly losing the benefits of Commonwealth citizenship. Most other Commonwealth countries have provisions within their own law defining who is and who is not a Commonwealth citizen. Each country is free to determine what special rights, if any, are accorded to non-nationals who are Commonwealth citizens. In general, citizens of the Republic of Ireland and British protected persons, although not Commonwealth citizens, are accorded the same rights and privileges as Commonwealth citizens. Rights and disabilities in the United Kingdom In the United Kingdom, as in many other Commonwealth countries, Commonwealth citizens (together with Irish citizens and British protected persons) are in law considered not to be "foreign" or "aliens", although British protected persons do not have all the civic rights that are enjoyed by Commonwealth and Irish citizens. Commonwealth and Irish citizens enjoy the same civic rights as British citizens, namely: - the right, unless otherwise disqualified (e.g. 
imprisoned), to vote in all elections (i.e., parliamentary, local and European elections) as long as they have registered to vote (they must possess valid leave to enter/remain or not require such leave on the date of their electoral registration application) - the right, unless otherwise disqualified, to stand for election to the British House of Commons as long as they possess indefinite leave to remain or do not require leave under the Immigration Act 1971 (c. 77) to enter or remain in the UK - the right, if a qualifying peer or bishop, to sit in the House of Lords - eligibility to hold public office (e.g., as a judge, magistrate, minister, police constable, member of the armed forces, etc.) The disabilities of Commonwealth citizens who are not British citizens are few, but in the case of immigration control, very important. Commonwealth citizens (including British nationals who are not British citizens) who do not have the right of abode are subject to immigration control, including control on the right to work and carry out business. In addition, Commonwealth citizens who are not British citizens may not be engaged in certain sensitive occupations, e.g., in the Foreign and Commonwealth Office, in the intelligence services, and some positions within the armed forces. Nevertheless, under the United Kingdom's immigration arrangements Commonwealth citizens enjoy certain advantages: - Commonwealth citizens born before 1 January 1983 may by virtue of having a parent born in the United Kingdom and Islands have the right of abode therein – such persons are exempt from all immigration control; - Commonwealth citizens with a grandparent born in the United Kingdom and Islands may be admitted for up to five years on this basis, and thereafter be granted indefinite leave to remain; - Commonwealth citizens between the ages of 18 and 30 were eligible to be admitted for a "working holiday" for up to two years. 
This has since been replaced with the more restrictive Youth Mobility Scheme (now open only to youth of Australia, Canada, Japan, New Zealand, and Monaco); - Commonwealth citizens, unlike other non-European Economic Area nationals, may not be required to register with the police while living in the United Kingdom. The following are "countries whose citizens are Commonwealth citizens" under Schedule 3 of the British Nationality Act 1981, although the list as laid out in the Act may not reflect the actual current membership in the Commonwealth. Although Rwanda does not appear in Schedule 3 of the Act, for electoral purposes, its citizens are considered to be Commonwealth citizens. Also, for electoral purposes, the whole of Cyprus is considered to be a Commonwealth country; hence, anyone who holds a Cypriot passport and/or a Northern Cypriot passport is considered to be a Commonwealth citizen (but not a person who is solely a Turkish citizen without any form of Cypriot nationality). Rights and privileges throughout the Commonwealth Although the rights and privileges (if any) for non-national Commonwealth citizens differ from country to country, a number of Commonwealth countries grant them more privileges than 'aliens' (i.e. non-Commonwealth foreign nationals), but not the full privileges enjoyed by the country's own nationals. Right to vote The following Commonwealth countries allow citizens from other Commonwealth countries to vote: - Antigua and Barbuda. The Representation of the People (Amendment) Act (2002) permits Commonwealth citizens who have lived in Antigua and Barbuda for at least 3 years to register to vote in the constituency where they have resided for at least 1 month. - Australia. British subjects and Commonwealth citizens (including Irish citizens but not including South African citizens) can vote as long as they were on the federal electoral roll on 25 January 1984. 
If they leave Australia and their enrolment has lapsed, they are still eligible to re-enrol upon their return to Australia. - Barbados. By virtue of the Constitution of Barbados under Sec. 41A. of CAP. 5, and the Representation of the People Act, CAP. 12, Section 7, Commonwealth citizens are given the right to vote and stand for election to the Parliament of Barbados under prescribed stipulations. - Belize. Non-Belizean Commonwealth citizens who are either domiciled or who have lived for the past 12 months in Belize are eligible to register to vote. - Bermuda. Non-Bermudian Commonwealth citizens who were registered to vote on 1 May 1976 are eligible to vote. - Cayman Islands. Commonwealth citizens who are not British Overseas Territories Citizens by virtue of their connection to the Islands can still vote if on 31 January 1988 they were resident and on the electoral roll in the Islands, and either had a parent born on the Islands or were ordinarily resident in the Islands for 7 out of the 9 years preceding registration. - Dominica. Commonwealth citizens who have lived in Dominica for 12 months are eligible to vote. - Guyana. By virtue of Article 159 of the Constitution, Commonwealth citizens aged 18 or over who are "domiciled and resident" in Guyana are eligible to vote. - Jamaica. Non-Jamaican Commonwealth citizens can register to vote in all elections as long as they are ordinarily resident in Jamaica. - Malawi. By virtue of Section 77(2)(a) of the Constitution, all foreign nationals - including Commonwealth citizens - who have lived in Malawi for 7 years can register to vote. - Mauritius. By virtue of Section 42(1) of the Constitution, a Commonwealth citizen aged 18 or over on the 15th of August in the year of registration, and who has either lived in Mauritius for at least two years prior to 1 January in the year of registration or who is domiciled in Mauritius on 1 January in the year of registration, may vote. - New Zealand. 
All foreign nationals – including Commonwealth citizens – who are permanent residents (i.e. in possession of an indefinite visa) are obliged to register to vote at the address where they have lived for at least one month, and can continue to vote in New Zealand elections while abroad as long as they have been to New Zealand in the past year.
- Saint Kitts and Nevis.
- Saint Lucia. Commonwealth citizens aged 18 or over who have lived in Saint Lucia for 7 years and have lived in their constituency for at least 2 months are eligible to register to vote.
- Saint Vincent and the Grenadines.
- Trinidad and Tobago. Commonwealth citizens aged 18 or over who have lived for at least one year in Trinidad and Tobago are eligible to vote.
Many Commonwealth countries offer visa-free entry for short visits made by Commonwealth citizens from countries with a relatively high standard of living. Many Commonwealth countries continue to allow Commonwealth citizens from other countries to become nationals/local citizens by registration rather than naturalisation, on preferential terms, e.g. with a shorter required period of residency, although this practice has been discontinued in some countries such as New Zealand and Malta.
In March 2013, Nigeria's Foreign Affairs Minister, Ambassador Olugbenga Ashiru, announced that a visa-free regime was being contemplated by Commonwealth countries to strengthen trade and investment among member nations. As a prelude to accomplishing this, the council of ministers is to present a proposal for the exemption of holders of official and diplomatic passports from visa requirements at the next Commonwealth Heads of Government Meeting (CHOGM), scheduled to be held in Colombo, Sri Lanka. The announcement came in the wake of a meeting between Ashiru and the Secretary-General of the Commonwealth, Kamalesh Sharma, in Abuja.
At the meeting, Ashiru and Sharma discussed proposals to make the Commonwealth more relevant to its citizens across the globe, including ways the Commonwealth could ease free movement across member countries, strengthen institutions, enhance education, create job opportunities, facilitate development and raise the living standards of citizens across the countries of the Commonwealth. Ashiru said, “We also discussed the issue of free movement to promote people to people contact within the Commonwealth. In the past, it used to be that holders of Commonwealth passports could travel within the commonwealth countries easily without having to go and queue for visas. We are now thinking for ways to ensure that we bring back this old tradition of the Commonwealth. Already, the council of ministers have recommended for approval at the next CHOGOM meeting in Colombo the exemption of holders of official and diplomatic passports within the Commonwealth from the requirement of a visa if they are travelling within the commonwealth.”
Other rights and privileges granted by some Commonwealth countries include:
- The right to work in any position (including the civil service) in some instances, except for certain specific positions (e.g. defence, Governor-General or President, Prime Minister).
- Eligibility for the Commonwealth Scholarship.
- Eligibility to serve in most roles of the British Armed Forces, provided all other criteria have been met.
In foreign (i.e. non-Commonwealth) countries, the British embassy or consulate is traditionally responsible for Commonwealth citizens whose governments are not represented in the country concerned. A few Commonwealth governments have made alternative arrangements to share the burden, such as the Canada-Australia Consular Services Sharing Agreement; hence, for Canadian and Australian citizens, the British embassy or consulate only provides assistance if neither country is represented.
In return, there are a few Australian consulates that are responsible for British nationals where there is no British consulate. A few Commonwealth governments, namely Singapore and Tanzania, have opted not to receive consular assistance from the United Kingdom. In other Commonwealth countries, British High Commissions accept no responsibility for unrepresented Commonwealth citizens, who should look to the host Commonwealth government for quasi-consular assistance. Canadian and Australian citizens are still able to seek consular assistance from each other's high commissions.
Commonwealth citizen travel documents
Commonwealth citizens outside the UK are eligible to apply for a British emergency travel document if they need to travel urgently and their passport has been lost, stolen or has expired (as long as the FCO has cleared this with the government of the Commonwealth citizen's home country). When a British embassy or consulate in a foreign country is required to provide a replacement passport to a Commonwealth citizen whose government is unrepresented in that country, it will issue a British passport with the nationality of the holder marked as "Commonwealth citizen". Some Commonwealth governments issue travel documents to Commonwealth citizens resident in their countries who are unable to obtain national passports. For example, the Department of Foreign Affairs and Trade issues Documents of Identity (DOI) for compassionate reasons to Commonwealth citizens resident in Australia who are unable to obtain a valid travel document from the country or countries of which they hold nationality when they need to travel urgently.
Rentokil South Africa does not cover snakes as part of our service package; however, you may find the information and advice below regarding snakes useful.
Protecting you and your family from snakes and snake bites
Snakes – and the risk of snake bites – are a real threat to people living in South Africa, even though relatively few of the country's 150 species of snakes are venomous. Most snakes are extremely timid creatures and are only likely to attack if cornered or provoked, as attack is, in most instances, simply a form of defense. The required treatment for snake bites varies from species to species, and being able to identify the species of snake that has bitten someone is an important part of the procedure. You can learn more about some of South Africa's more common snakes in our Pest Guides section.
How to avoid snake bites: be aware of the dangers posed by snakes
Snake bites caused by accidentally stepping on a snake, especially if you are out walking in grassland or the bush, are nearly impossible to predict or prevent, but by taking a few simple, sensible precautions you can reduce the risk of snake bites:
- When walking around, make plenty of noise to advertise your presence; this will most likely keep snakes away.
- Wear strong boots or shoes and long trousers when walking in grassland.
- NEVER walk about barefoot, especially at night!
- Avoid long grass and stick to paths and tracks as much as possible.
- Use a long stick to 'probe' the ground ahead of you; be aware that snakes can 'play dead', so do not attempt to touch a snake that appears dead.
- Walk in single file through long grass or bushes.
- Climb on to large rocks or logs in the pathway and step off them on to clear ground; these are favourite haunts for snakes.
- Watch where you put your hands and NEVER put them down a hole, as this can lead to snake bites, perhaps even venomous snake bites.
- Do not attempt to catch, corner or kill a snake.
- Use a mosquito net at night and tuck it in tightly.
- Never sleep on the ground unless you have a tent with an attached, built-in groundsheet.
- Always use a torch at night to light the ground ahead of you, so you can spot snakes and avoid snake bites.
- If you see a snake, stand absolutely still and then slowly back away; remember that many snakes can strike up to half their length.
- When entering garages, sheds, storerooms and other outbuildings, open the door, light the internal area and visually check for snakes, remembering that you will, in all likelihood, be blocking their exit and that this is when they can attack in defense.
Talk to the experts
Whilst Rentokil does not offer a service to remove snakes, we can provide further advice on how to minimise the risk of garden snakes and snake bites, or put you in touch with someone who will be able to assist with snake removal. Call us free on 0800 117 852.