Getting Lost in the Forest
Wandering off the beaten path
We talk a lot about packing up a tent and a backpack and heading out to relatively uncharted territory, getting away from the crowds and finding some solitude to relax and be one with nature. But what if you veer from the beaten path and find yourself actually lost in the woods?
Walking in Circles
First, we would recommend never walking away from the path by yourself without the proper equipment to guide you. But sometimes you feel sure that you know exactly where you are, only to look back and realize that you have absolutely no idea where the path is that you left just a short while ago. You try to backtrack, but have a sneaking suspicion that all you are doing is walking around in circles. The good news is, you are right; you most likely are walking in circles. The bad news is, you have been walking in circles, and you are probably really disoriented and frustrated, two things that will work against you when you are lost.
Why do we walk in circles?
A couple of German psychologists decided to do an experiment to find out if it was really true that we walk in circles when we are lost. They got together a group of nine people and instructed them all to walk in a straight line. Some were in the forest, some in desert terrain. They walked at different times of day and in different weather conditions, and all had GPS trackers so that their movements could be properly documented and studied by the psychologists.
Amazingly, no matter how hard they tried to focus on walking in a straight line, the walkers were seen to walk in circles, never realizing when they were crossing their own path again.
The walkers who travelled in the most circular fashion were the ones who were walking either at night after the moon had set or during overcast weather conditions. That led to the following conclusion: walkers who did not have a visual marker like the moon or the sun would automatically start walking in a circle. Those who could see the sun or the moon would either consciously or unconsciously use those visual markers to keep walking in a fairly straight line.
Guided by the Sun and the Moon
Just to make sure, they did another experiment, this time with 15 blindfolded people trying to walk in a straight line. Just as the earlier conclusions would suggest, all 15 blindfolded walkers immediately started walking in circles, and in fairly small circles at that.
The brain is a mysterious organ, and one that we are still struggling to figure out. Why we automatically walk in circles when we don't have the moon or the sun to guide us is still a mystery, but knowing that your brain may make your body do the opposite of what you intend can help you the next time you are lost in the woods (or in the city, for that matter). If you can't see the sun (or the moon), the best thing to do is to simply stay put and let others find you. Your adventurous instinct will urge you to push forward and explore, but be smart about your off-road hiking adventures. Bring the proper equipment to guide you.
If you plan on getting off the road most travelled on your next camping trip, make sure you bring a map and a compass. For the perfect basecamp, Denver Tent Company has the best canvas wall tents, range tents or herder tents and tipis to make your next camping trip a great success. Come check out our inventory of sportsmen tents and accessories and get outfitted for your next adventure.
BACKGROUND: Sickle cell disease is an inherited disorder of hemoglobin, resulting in abnormal red blood cells. These are rigid and may block blood vessels leading to acute painful crises and other complications. Recent research has focused on therapies to rehydrate the sickled cells by reducing the loss of water and ions from them. Little is known about the effectiveness and safety of such drugs.
OBJECTIVES: To assess the relative risks and benefits of drugs to rehydrate sickled red blood cells.
SEARCH METHODS: We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group's Haemoglobinopathies Trials Register. Last search of the Group's Trials Register: 25 October 2011.
SELECTION CRITERIA: Randomized or quasi-randomized controlled trials of drugs to rehydrate sickled red blood cells compared to placebo or an alternative treatment.
DATA COLLECTION AND ANALYSIS: Both authors independently selected studies for inclusion, assessed study quality and extracted data.
MAIN RESULTS: Of the 51 studies identified, three met the inclusion criteria. The first study tested the effectiveness of zinc sulphate to prevent sickle cell-related crises in a total of 145 participants and showed a significant reduction in painful crises over one and a half years, mean difference -2.83 (95% confidence interval -3.51 to -2.15). However, analysis was restricted due to limited statistical data. Changes to red cell parameters and blood counts were inconsistent. No serious adverse events were noted in the study. The second study was a Phase II dose-finding study of senicapoc (a Gardos channel blocker) compared to placebo. Compared to the placebo group, the high-dose senicapoc group showed significant improvement in change in hemoglobin level, number and proportion of dense red blood cells, red blood cell count and indices, and hematocrit. The results with low-dose senicapoc were similar to those in the high-dose group but of lesser magnitude. There was no difference in the frequency of painful crises between the three groups. A subsequent Phase III study of senicapoc was terminated early since there was no difference observed between the treatment and control groups in the primary end point of painful crises.
AUTHORS' CONCLUSIONS: While the results of zinc for reducing sickle-related crises are encouraging, larger and longer-term multicenter studies are needed to evaluate the effectiveness of this therapy for people with sickle cell disease. While the Phase II and the prematurely terminated Phase III studies of senicapoc showed that the drug improved red cell survival (depending on dose), this did not lead to fewer painful crises.
Recommended Citation: Nagalla, Srikanth and Ballas, Samir K., "Drugs for preventing red blood cell dehydration in people with sickle cell disease." (2012). Department of Medicine Faculty Papers. Paper 106.
Issue No. 03 - July-September (1985 vol. 7)
A fully electronic general-purpose analog computer was designed by Helmut Hoelzer, a German electrical engineer and remote-controlled guidance specialist. He and an assistant built the device in 1941 in Peenemunde, Germany, where they were working as part of Wernher von Braun's long-range rocket development team. The computer was based on an electronic integrator and differentiator conceived by Hoelzer in 1935 and first applied to the guidance system of the A-4 rocket. This computer is significant in the history not only of analog computation but also of the formulation of simulation techniques. It contributed to a system for rocket development that resulted in vehicles capable of reaching the moon.
J. E. Tomayko, "Helmut Hoelzer's Fully Electronic Analog Computer," in IEEE Annals of the History of Computing, vol. 7, no. 3, pp. 227-240, 1985.
Unlike the United States during the 1960s, the Netherlands did not have an atmosphere of racial strife or international conflict. The relative peace of the Netherlands is one potential reason why student protests for university reform first manifested as student unionism in support of democratization. Movements calling for similar university reforms occurred between 1967 and 1968 in Germany and France. The Dutch students' protest influenced the restructuring of the Netherlands' university system. Through their campaign, university governance became more democratic; the new system included committees and councils with students and non-academic staff, whose voices had been limited under the previous system.
Traditionally, Dutch universities were public and managed by the state. A Board of Curators (College van Curatoren) was responsible for upholding laws and regulations implemented by the government over universities, maintaining the quality of the teaching programmes, academic buildings, and university possessions. The Curatoren also administered university finances. Gradually, the Curatoren's role shifted to become more organizational than policy-oriented.
A committee known as the Maris committee was established to create a new formal university structure. In 1968, the Maris committee submitted a proposal to eliminate the current reign of dual authority between Curatoren and Senate. Instead, they planned to introduce a hierarchical system by which professors would report to departmental deans, and the deans would report to a central governing board with broad power known as the Presidium.
Professors, who feared for their relative autonomy under the current structure, and students, who viewed the Maris formula as an effort to gear higher education towards production goals, protested the Maris Committee’s proposals, and they were not implemented. By the time the 1968-1969 academic year began, the students’ and professors’ revolt against the Maris scheme had increased tensions across campuses.
The first protests of the campaign, led by students, occurred at two Catholic universities in Tilburg and Nijmegen. Students viewed these two universities as representing the hierarchical and totalitarian structure of the Catholic Church. The first student union was established at the Catholic University in Tilburg.
In April 1969, students from the Catholic College of Economic and Social Sciences at Tilburg occupied the telephone exchange to express their dissatisfaction with the slow progress from the student and faculty committee assembled to formulate a new proposal for a university constitution. Tilburg Curatoren closed the college, and students occupied the entire campus in protest. The Tilburg protest sparked excitement at other campuses, notably Leyden where further protests occurred.
On 1 May 1969 the General Leyden Student Association held a meeting and demanded that the Senate and Curatoren take a clear position in respect to the conflict at Tilburg. By the following day, the students presented their demands as an ultimatum, which threatened coercion within four days if the authorities did not issue a public statement both disapproving of the lockout in Tilburg and allowing members of the Leyden scientific staff to go to Tilburg and teach.
On 5 May, Senate members, including professors, lecturers, and scientific staff conferred at a meeting where, for the first time in years, a wide disagreement emerged concerning what policy should be adopted in response to student protest. The Senate formed a compromise by which members stated they deplored the conflict, and they believed that further statements would not lead to a termination of the conflict.
This compromise did not appease the student movement. The students formulated a new ultimatum to expire on 8 May demanding co-decision and openness. Universities would be required to allow members of all categories of the community to vote on committee decisions. The universities would also obligate university committees to convene publicly and make any meetings or memoranda openly available, in compliance with student demands.
On 8 May the Senate agreed to co-decision among parties, but it was unwilling to comply with the demand for openness. Over one thousand students and other protesters attended this protest meeting. Towards the end of the meeting, a motion was adopted to permanently continue the meeting in the Main University Building to facilitate open and fair discussion, meeting the second demand. The authorities also agreed to form a new Committee on Structure to draft a new Constitution. In Tilburg and at other Dutch Universities, the Minister of Education also agreed to the conciliatory policies of co-decision and openness.
In Leyden, professors and lecturers, scientific staff, technical and clerical personnel, and students began to discuss the Committee on Structure. Ultimately it was decided that three general meetings would be held in October with a final vote in the beginning of December. However, the Committee on Structure did not begin meeting until June 1970 and finally submitted its proposal in April 1971.
In January 1969, the Minister presented a memorandum (Nota Bestuurshervorming van de Universiteiten en Hogescholen) outlining the principles for university government reform. At this stage of the protests, both conservative and progressive factions had formed. The conservative faction drafted a petition to Parliament with objections to the proposed WUB (University Government Reorganization Act), stating that it extended too large a share of policy-making to unqualified people. One hundred and forty-four of three hundred professors and lecturers signed. In contrast, the progressive faction prepared a memorandum known as "Academic Freedom and Societal Responsibility," pleading for a new vision of academic freedom, which was signed by 42 lecturers.
In February 1970, the Minister of Education and Science introduced the Wet Universitaire Bestuurshervorming (WUB, or University Government Reorganization Act). Parliament passed the bill in December, and the law became effective on 1 January 1972. The WUB democratized university governance. It replaced existing authority with elected councils and boards at all levels of the university: departments, sub-faculties, faculties, and the university as a whole. Students, academic staff, and senior administrators, among others, all sat as members on various committees. The law was experimental and originally intended to last only until 1976, but it was extended until 1982.
Lammers, Cornelis J. 1974. "Localism, Cosmopolitanism, and Faculty Response." Sociology of Education 47(1):129-158. Retrieved February 6, 2015 (http://www.jstor.org/stable/2112170).
Locke, Grahame. 1989. "The Collectivisation of the Dutch Universities." Minerva 27(2/3):157-176. Retrieved February 4, 2015 (http://www.jstor.org/stable/41820764).
Maassen, Peter and Egbert de Weert. 1999. "The Troublesome Dutch University and Its Route 66 Towards a New Governance Structure." Higher Education Policy 329-342. Retrieved February 6, 2015 (www.elsevier.com/locate/highedpol).
The first printed version from Mother Goose's Melody (London, c. 1765), has the following lyrics:
The version from Songs for the Nursery (London, 1805), contains the wording:
Alternate Lyrics as shown in The Real Mother Goose published in 1916:
The most common version used today is:
Rock-a-bye baby, on the treetop,
When the wind blows, the cradle will rock,
When the bough breaks, the cradle will fall,
And down will come baby, cradle and all.
Various theories exist to explain the origins of the rhyme.
One identifies it as the first poem written on American soil, suggesting it may date from the 17th century and have been written by an English immigrant who observed the way Native American women rocked their babies in birch-bark cradles, which were suspended from the branches of trees, allowing the wind to rock the baby to sleep. A difficulty with this theory is that the words appeared in print first in England in c. 1765.
In Derbyshire, England, local legend has it that the song relates to a local character in the late 18th century, Betty Kenny (Kate Kenyon), who lived with her charcoal-burner husband, Luke, and their eight children in a huge yew tree in Shining Cliff Woods in the Derwent Valley, where a hollowed-out bough served as a cradle. However, this "late 1700s" date is incompatible with the poem's appearance in print c. 1765.
Yet another theory has it that the lyrics, like the tune "Lilliburlero" it is sung to, refer to events immediately preceding the Glorious Revolution. The baby is supposed to be the son of James VII and II, who was widely believed to be someone else's child smuggled into the birthing room in order to provide a Roman Catholic heir for James. The "wind" may be that Protestant "wind" or force "blowing" or coming from the Netherlands bringing James' nephew and son-in-law William of Orange, who would eventually depose King James II in the revolution (the same "Protestant Wind" that had saved England from the Spanish Armada a century earlier). The "cradle" is the royal House of Stuart. The earliest recorded version of the words in print appeared with a footnote, "This may serve as a warning to the Proud and Ambitious, who climb so high that they generally fall at last", which may be read as supporting a satirical meaning. However, a specific political application for the words would be easier to substantiate if they and the "Lilliburlero" tune could be shown to have always been associated.
Another possibility is that the words began as a "dandling" rhyme - one used while a baby is being swung about and sometimes tossed and caught. An early dandling rhyme quoted in The Oxford Nursery Rhyme Book has some similarity:
The words first appeared in print in Mother Goose's Melody (London, c. 1765), possibly published by John Newbery (1713–1767) in the 18th century, which was reprinted in Boston in 1785. Rock-a-bye as a phrase was first recorded in 1805 in Benjamin Tabart's Songs for the Nursery, (London, 1805).
It is unclear, though, whether these early rhymes were sung to the now-familiar tune. At some time, however, the Lilliburlero-based tune and the 1796 lyric, with the word "Hush-a-bye" replaced by "Rock-a-bye", must have come together and achieved a new popularity. A possible reference to this re-emergence is an advertisement in The Times newspaper in 1887 for a performance in London by a minstrel group featuring a "new" American song called 'Rock-a-bye':
This minstrel song, whether substantially the same as the nursery rhymes quoted above or not, was clearly an instant hit: a later advertisement for the same company in the paper's October 13 edition promises that "The new and charming American ballad, called ROCK-A-BYE, which has achieved an extraordinary degree of popularity in all the cities of America will be SUNG at every performance."
If this is, in fact, the same song, then this implies that it was an American composition and already popular there. An article in the New York Times of August 1891 (p. 1) refers to the tune being played in a parade in Asbury Park, N.J., and clearly by this date the song was well established in America. Newspapers of the period, however, credit its composition to two separate persons, both resident in Boston: one is Effie Canning (later referred to as Mrs. Effie D. Canning Carlton) and the other the composer Charles Dupee Blake.
Acute back pain: A type of back pain that lasts less than three to six months. It can occur suddenly and may be the result of an injury or illness.
Analgesics: Medications that relieve pain. Includes over-the-counter and prescription pain medication.
Block: An injection of pain medication that prevents a nerve from continuing to send pain signals to part of the body. A block can be permanent or temporary.
Bone graft: A way to rebuild bony structures using bone from another portion of the body or another person, or from synthetic materials.
Cervical vertebrae: The seven vertebrae that support the neck, referred to individually by numbers: C1, C2, C3, C4, C5, C6, and C7.
Chronic back pain: A type of lingering back pain that continues for more than three to six months.
Coccyx: A series of small fused vertebrae that make up the tail end of the spine.
Decompression: The removal of pressure on a nerve or the spinal cord through back surgery.
Disk: The pad of tissue between two vertebrae that cushions the vertebra and allows the spine to be flexible.
Disk replacement: Surgical removal and replacement of a damaged disk.
Epidural space: The area between the bone and the membrane enclosing the brain and spinal cord. "Epidural" is often used to describe an injection of pain medication into the epidural space.
Fusion: A surgical procedure in which vertebrae are joined together for greater stability.
Implantable drug delivery systems: Pumps that can be surgically implanted to deliver medications into the spinal canal in order to control pain.
Kyphoplasty: A procedure normally used to fix a vertebral compression fracture. A small balloon is injected into the damaged bone; the balloon is blown up, forcing the damaged bone into its rightful shape. A synthetic bone filler (or cement) is then injected into that space for stability.
Ligaments: Tough bands of tissue that hold bones together in joints.
Lumbar vertebrae: The five vertebrae in the lower back.
Muscle relaxants: Medications that help control and release muscle spasms.
Non-steroidal anti-inflammatory drugs (NSAIDs): Over-the-counter and prescription medications used to treat pain and reduce inflammation.
Opioids or opiates: Prescription pain medications such as morphine that control the perception of pain by binding to certain receptors in the central nervous system.
Recurrent back pain: Back pain that lasts for a while, goes away, and then returns.
Relaxation therapy: Learned techniques to help relax the mind and body, which may be used to help ease back pain.
Spinal cord stimulators: Devices that use electrical signals to help manage back pain. Spinal cord stimulators may be outside the body or implanted in the body, much like pacemakers.
Spinal fluid: Also known as cerebrospinal fluid (CFS), this is the fluid that surrounds the spinal cord and brain.
Spinal manipulation: Spinal treatments, either using a device or the hands, that are intended to realign the spine and other elements that contribute to pain. Also called an adjustment.
Spine: The stabilizing structure of the body that runs up the back and is made up of bones called vertebrae, ligaments, disks, and nerves.
Tendons: Tough bands of tissue holding muscle to bone.
Thoracic vertebrae: The 12 vertebrae between the neck (cervical) vertebrae and the lower back (lumbar) vertebrae.
Traction: Use of a harness or table to stretch the back in order to relieve pain or tension.
Transcutaneous electrical nerve stimulation (TENS): Use of a small device that delivers small shocks in order to stimulate the body’s natural pain killers.
Vertebra: One of 33 bony structures in the spine that are lined up and stacked upon each other, with disks in between, to give the back flexibility.
Vertebral compression fracture: A fracture, or break, in a vertebra, which makes the vertebra collapse.
Vertebroplasty: A procedure in which synthetic bone filler is injected into a fractured vertebra to help stabilize it.
Zygapophyseal joints: Also known as "Z" joints or facet joints, these are joints between adjacent vertebrae.
Anti-racist mathematics education is primarily or Muhammed (Islamic), could be used in story problems: 'anti-racist' message in mass math class, Fox. It's finally happened, guys: math is now racist. National mathematics organizations have come out to complain that math education is "unjust and grounded in a..." Earlier today on Sirius XM Urban View, an African-American talk station, the guest was Daryl Scott, president of the Association for the Study of African American... Retrieved from https problems racist math larrycuban. Research funding is a two-month decisions about forms of complementary courses in companies abroad teaching. 'If Fred got two beatings per day', homework asks. Roach said the teachers were attempting to incorporate social studies into math problems. Math is "racist", Bradford Hanson article, while keeping them out of our whites-only lands should make their math and "whiteness" problems go away so.
A middle school teacher gave her eighth-grade students a racist math test about 'pimps and hos'; word problems included: tells Us Weekly her 14-year-old son. What would be an example of a racist math question on the SAT? Is it racist to question the birthplace? What are some of the most difficult SAT math problems? The latest tweets from Racist Math (@mathracist): "I make racist math problems. If Jamal ran 80 m until the police caught him, prepare to laugh everywhere."
The anti-bias curriculum is an activist approach to a solid understanding of social problems and issues while equipping them; math is seen as... "A community college suspends a professor for composing a 'racist' math exam question that involved an offensive math problem," stated FIRE.
The response to Ratener and the 'racist' math names in his math problems... to our society than does an insensitive math problem, FIRE has argued. A Washington, DC teacher sent home violent, morbid and traumatizing math problems to third graders from Center City.
Talk: anti-racist mathematics/archive 1... that whitey's math is racist and they are not capable of getting racist or culturally specific word problems. Wait: math is racist now? Math is racist: this will cause problems for your employer; they want to avoid those problems. Math is racist, apparently; the Berlin Papyrus contains two problems. Math can't be racist because racism is about prejudice and power, and while math can be...
Epivir interferes with the virus's ability to reproduce in the human body and thereby delays the collapse of the human immune system. It is also used for treatment of chronic hepatitis B.
Epivir must be taken exactly as prescribed by your doctor. You must continue to take it even if you have started feeling better. You may take it with or without food.
If you miss a dose, take it as soon as you remember. However, if it is almost time for the next dose, skip the missed dose and continue your regular dosing schedule. Do not take a double dose to make up for a missed one.
Store Epivir at room temperature and avoid direct exposure to sunlight or moisture. Brief storage at 59 to 86 °F (15 to 30 °C) is permitted. Keep it out of the reach of children.
Epivir does not eliminate HIV from the body. The infection can still be passed to others through sexual contact or blood contamination.
Lactic acid build-up in the blood (lactic acidosis) and severe liver disease, including death, have been reported with the use of lamivudine, either alone or in combination with other drugs used to treat human immunodeficiency virus (HIV) infection (e.g., zidovudine, ritonavir). HIV counseling and testing should be offered both before and during treatment to all patients using Epivir-HBV. Epivir-HBV contains a lower dose of lamivudine than the doses used to treat HIV infection. Use of Epivir-HBV in patients with unknown or untreated HIV infection could result in resistant strains of HIV. Consult your doctor or pharmacist for more information.
Tell your doctor your medical history, especially of: infection with HIV, kidney problems, liver problems, blood disorders, pancreas problems, alcohol use, and allergies (especially drug allergies). This drug may make you dizzy; use caution when engaging in activities requiring alertness, such as driving or using machinery. Avoid alcoholic beverages. Liquid preparations of lamivudine may contain sugar (sucrose); caution is advised in patients with diabetes. If you are diabetic, close monitoring of your blood sugar is recommended as you begin using this drug. Ask your doctor or pharmacist for more details. Caution is advised when using this drug in children (especially in children with pancreatitis) because they may be more sensitive to its effects.
This medication should be used only when clearly needed during pregnancy. Discuss the risks and benefits with your doctor. This medication has not been shown to affect the transmission of hepatitis B from mother to infant. Consult your doctor for more information. Epivir-HBV passes into breast milk and may have undesirable effects on a nursing infant. Breast-feeding is not recommended while using this drug. Consult your doctor before breast-feeding.
Study links spikes in oxygen levels with bursts of evolution
Life on Earth may have begun billions of years ago with a quiet, single-celled whimper, but it really arrived with a bang about 540 million years ago. Within a relatively short period of time, life burst forth into an incredible diversity of forms, in an event that has since come to be known as the Cambrian explosion. Now, an international team of scientists has found clues to what may have caused that – spikes in oxygen levels.
The Cambrian explosion has long been studied as probably the most important event in the history of evolution of life on Earth. It's fairly clear in the fossil record too – prior to that time, signs of life are mostly limited to trace fossils of single-celled organisms and some simple multi-celled creatures. But starting around 541 million years ago, life really took off, expanding into all the major groups we see today over just 13 to 25 million years.
But exactly what triggered the Cambrian explosion has remained a mystery. One long-standing theory suggests that surging oxygen levels led to these bursts of evolution in animals, and the current study has found new evidence to support this idea.
To do so, the researchers first collected marine carbonate samples from along the Aldan and Lena rivers in Siberia – areas that would have been shallow seas during the Cambrian period, bustling with life. The team then analyzed the carbon and sulfur isotopes in these samples, which allows for the calculation of the levels of oxygen floating around in the shallow ocean and atmosphere during those times.
Sure enough, high levels of oxygen were found around the times of the diversity explosion, with a particularly strong amount present between 524 and 514 million years ago. Interestingly, the reverse also seemed to prove true – low levels of oxygen accompanied a later extinction event, occurring between 514 and 512 million years ago.
"This is the first study to show clearly that our earliest animal ancestors experienced a series of evolutionary radiations and bottlenecks caused by extreme changes in atmospheric oxygen levels," says Graham Shields, corresponding author of the study.
Extra oxygen is definitely vital for larger lifeforms, but it might have just been one part of the story. It's thought that the end of the Snowball Earth period about 100 million years earlier may have washed more nutrients into the seas, leading to increased diversity of single-celled organisms. Together, these favorable conditions might have let life finally run wild with creativity.
The study was published in the journal Nature Geoscience.
Source: Oxford University
Research has shown that the keris originated from the Majapahit empire, which ruled in the 13th century. After the decline of the Majapahit, many of its blacksmiths migrated to other areas such as Jawa, the Sumatran islands, the Sulawesi islands and finally the Melayu Peninsula. The keris developed as a weapon in the Melayu Peninsula up until the point it was occupied by the British. British law banned the wearing of the keris on the body or its use as a weapon, allowing only its ceremonial role.
There are many different types of keris; however, the keris Melayu, as used by the Melayu, normally displays the features of the original Majapahit keris, which has the Jawa Demam hilt and a boat-shaped piece on the sheath.
Parts of the Keris Melayu
Names of different keris parts:
1. Hulu (hilt)
2. Sarung (sheath)
3. Pendokok/ bedokoh
5. Buntut (end)
6. Perut (stomach)
7. Pamor (pattern)
8. Lok (wave)
9. Bilah/ awak/ mata keris (blade)
10. Aring/ ganja (crosspiece)
11. Puting/ unting/ oting (tang)
12. Pucuk/ hujung mata (blade tip)
14. Belalai gajah (elephant trunk)/ kembang kacang
15. Lambai gajah/ bibir gajah (elephant lips)
16. Bunga kacang
18. Dagu keris (chin)
19. Kepala cicak (gecko head)
20. Leher cicak (gecko neck)
21. Gading gajah (elephant tusk)
22. Ekor cicak (gecko tail)
23. Janggut (beard)
24. Kepit/ sepit rotan (rattan pincers)
25. Lurah/ kambing kacang
26. Tulang/ tulangan (spine)
The 'gecko head' and 'gecko tail' get their names from their resemblance to a gecko when viewed from the tang end. The ganja comprises the gecko head, neck and tail. The ganja and the blade are two different pieces which are assembled later. However, there is a keris type where both of these are of a single piece, called the Keris Ganja Seiras or Single View Keris.
The Spiritual Value of the Keris
The keris is not only a weapon, but it also carries certain symbolic meanings. The tang of the keris represents masculinity, while the crosspiece with its hole in the middle represents femininity. The combination of both elements gives birth to a balance in life and power.
The blade of the keris represents the shape of a dragon, which is closely connected to water and rivers. Water is the source of life; thus the dragon is a mystical lifeform that represents power. A keris with no waves represents a resting or meditating dragon, while a wavy keris represents a moving dragon. The belalai gajah and lambai gajah represent an elephant, which is an allegory for power.
The keris was originally made from a composition of iron mined from the earth and meteoric iron ore. This produces a pamor which is believed to contain magical powers as a result of the blending of earthly and heavenly elements.
The keris is also believed to have an affinity or rapport with its owner. The owner will measure the length of the keris from its crosspiece down to its tuntung using his thumbs while reciting holy verses or a mantra. He will cease his recitation when one of his thumbs arrives at the tuntung, and the verse that marks the arrival determines the affinity between keris and owner. An owner who truly believes in this affinity will not buy or use a keris which does not have this quality, irrespective of the value, beauty or scarcity of the keris.
The Melayu societies of the past would practise keeping kerises on the crossbeam of their homes as protection against enemies, evil spirits and diseases. It is believed that a keris would rattle and make sounds in its sheath when danger approached. Some are even believed to unsheathe themselves and fly to the enemy on their own; or, when the owner pointed towards the enemy's location, the keris would fly out to the enemy and kill him.
Is it any wonder, then, why the keris is so highly revered by some owners, who faithfully bathe their kerises every Thursday night (deemed a special night) or once a year in the month of Muharram, to ensure that the power of the keris is not left untended, which could cause it to run amok or leave the keris totally? However, with the advent of Islam to the Melayu Peninsula, many of these beliefs have been discarded, and what remains is a cleaning or weapon care ritual.
Famous mystical Kerises
Among the most popular kerises is the Keris Tamingsari owned by Hang Tuah, which was believed to grant invincibility to its wielder. On the other hand, Hang Jebat's keris has a void in its blade which allows its owner to see the future, and it was in this way that Hang Jebat discovered his impending death at the hand of his own blood brother, Hang Tuah.
The Keris Kai Condong, which is inhabited by an evil spirit, flies and kills anyone it finds when night falls. This keris was eventually defeated by three other mystical kerises which combined and baited it into a magically prepared pounder, which instantly destroyed it. However, the destroyed keris finally flew away to rejoin a comet of the same meteoric ore from which it was made.
This article was translated by Mohd Nadzrin Wahab from the original article "Asal Usul Keris Melayu" at http://www.mishafbisnes.com.my/krafmelayu/keris.htm for publication in the World Silat Championship 2007 souvenir book. For a reference to the article, please email webmaster [at] silatmelayu.com
DISCUSSION QUESTION # 4 – ETHICS
Over the past few weeks you have been reading and learning about the computer and its many and varied uses. This tool has become for most of us a “must have” device and our lives seem to be closely connected to the power of this machine. In fact, most of us would be lost without this powerful “tool”.
Every day on TV, radio, and in newspapers we are told of the misuses of the computer in crimes and in the loss of personal privacy. In our own area, several universities and public institutions have had the privacy and security of their systems violated by hackers, and these institutions then have to spend hundreds of thousands of dollars dealing with these issues. In addition, the privacy of everyone in the system is compromised.
The problems with personal security and privacy are not going to go away. Many people do not trust the security of many websites and are reluctant to use their credit and debit cards to order or buy merchandise online. Many individuals feel the only solution is to pass more laws to govern these issues. Also of concern is the issue of spying by the NSA, and I wonder whether that too is ethical behavior.
Some people consider these problems as both legal and ethical issues. In the dictionary, ethics is defined as "moral and religious beliefs as well as what is considered normal community standards." There are many definitions and interpretations of ethics. Some even believe the "Golden Rule" could solve many of the issues concerning privacy; however, others feel the passing of new laws is the only way to deal with these issues. There is a debate about the need to "track" all computer users and the sites they visit.
So, what do you think? Are ethics an issue to be considered in terms of using computers and their various applications, and if so how? What are ethics to you and do you think people agree with your version of ethics or not? In fact, how important are ethics in today’s society and should we legislate or pass laws that specify the ethical implications of computer usage? Will laws solve all problems and do we need more laws to deal with these issues?
You are to write about 400 words in four or more paragraphs to discuss this question. There is not a right answer, and please state your opinions, not someone else's thoughts. Be sure you use spell check and grammar checking in Word before you attach your response to an email to me.
The fifth difficult issue is one of the most important explanations of exact acupuncture diagnosis according to WuXing, the five movements. This issue gives the practitioner a method for defining which of the five movements dominates in each meridian in normal and pathological conditions. This difficulty explains the comparison between the forces of the pulse during each movement of a meridian. Each pulse is compared to a weight, which is a force and not a level, as implied by some practitioners: W = mg. There is a different force of Qi corresponding to each of the five movements. This is a very important difference from herbal pulse diagnosis: acupuncture pulse diagnosis is not only a diagnosis of condition but also an exact prescription for needling. According to this difficult issue, a pulse may become more yin or more yang depending on its normal movement and the movement it dominates now, showing yin or yang excess and deficiency of each meridian. The fourth difficult issue explains that a pulse may contain parts yin or yang. For example, the pulse of the heart has one part of yin and three parts of yang, the pulse of the liver has one part of yang and three parts of yin, and so on. Only acupuncture pulse diagnosis can indicate exactly how many parts of yin or yang each pulse contains, as may be described in a graph known as a Pulse map.
New Dictionary of Biblical Theology
In recent years our knowledge of the individual parts of the Bible has increased greatly, but our understanding of how they fit together has not kept pace. In particular, the relationship between the Old and New Testaments has been a neglected field of study.
This Dictionary is an essential tool for students, preachers and ministers, as well as for scholars and others seeking a better grasp of the Bible's teaching.
The aim of this prestigious dictionary is to integrate the various biblical books and themes into the overarching story of the Scriptures. The volume embodies three perspectives on biblical theology, which are reflected in its structure.
"This well-produced reference book takes each Biblical book and explores the main themes within, whilst adding a bibliography of further reading material. A section between the Old and the New Testament takes a variety of seminal themes, such as faith, the family, freedom and God, and compares the way both the Old and New Testaments cover each one. This is a welcome addition to the selection of reference books available."
- Church of England Newspaper
Brain Chemical Could Be Link to Alcohol Consumption
By growing mice with altered brain chemicals, UW researchers may have found a key to alcohol consumption and its sedative effects.
Inside the brain is a naturally occurring brain chemical, or neurotransmitter, called neuropeptide Y (NPY). Using genetic engineering pioneered at the University, researchers bred mice without any NPY. They found that these mice drank significantly more alcohol and were less affected by its sedative, or sleep-inducing, effects than normal mice.
The group also grew mice with abnormally high levels of NPY. These mice drank less alcohol than normal mice and were highly prone to succumb to its sedative effects.
"This is the first direct demonstration that there are altered levels of alcohol consumption if you change the amount of NPY present in the brains of rodents," says Todd Thiele, a research scientist in psychology who headed the UW team along with Biochemistry Professor Richard Palmiter. "These data indicate that, in rodents, there is a relationship between NPY levels and the willingness to voluntarily consume alcohol."
The researchers cautioned that while their results with mice are convincing, further research is necessary to see if there is a relationship between NPY and alcohol consumption and abuse in humans.
Rescuers of Jews
Stanislava DAMANSKAITĖ, Petras DAMANSKAS, Pranė DAMANSKIENĖ, Stefanija KULEVIČIŪTĖ, Kipras ŽUKAUSKAS, Julija ŽUKAUSKIENĖ
The Shapira family (Yitzhak and Taibe) lived near the town of Kelmė. They had two children: a son, Yosef (born in 1930), and a daughter, Dora (born in 1933). On the 25th of June 1941 the Germans seized Kelmė, and 2 months later the father of the family, Yitzhak, was killed. The other members of the family managed to escape and, along with other relatives, had been hiding in Aukšmiškis forest for 5 weeks. Stanislava Damanskaitė, their former nanny, as well as Stefanija Kulevičiūtė (their former house help), found out that the Shapira family was hiding in the forest and decided to help them.
Stanislava first of all took Dora, who was the youngest, to her small room on the outskirts of Kelmė. Then she returned to Aukšmiškis forest and took Dora's brother Yosef. When she went there for the third time, nobody was alive. Dora and Yosef hid in Stanislava's small room for 3 weeks. It was very dangerous, because the neighbors knew the children. For this reason Stanislava found a new shelter for Yosef in a faraway village. He stayed and worked there for about two years. One day he returned to Stanislava and stayed with her for several weeks. She found another hiding place for him, and he stayed there until the end of the war.
Stanislava Damanskaitė led Dora to Stefanija Kulevičiūtė, who hid her for about one month. With the help of Stanislava and Stefanija, Dora moved to several other families, one after another. In 1942 Dora was hosted by the large Žukauskas family. The head of the family, Kipras Žukauskas, his wife Julija Žukauskienė, their 15-year-old eldest daughter Genė Žukauskaitė (now Furmonavičienė), as well as their son Aloyzas Žukauskas, who was 11 years old, hid and fed Dora and cared for her. All the Žukauskas family members knew that they could lose their lives saving Dora. When the incessant searches started, Stefanija Kulevičiūtė took Dora back to her place. During 1943 Dora changed several hiding places, all with the help of Stanislava and Stefanija.
In the spring of 1944 Stanislava Damanskaitė took Dora to her brother Petras Damanskas, who lived in Kirkliai village close to Lioliai. Petras Damanskas, his wife Pranė Damanskienė and their daughter Valerija Damanskaitė (now Tarasevičienė), together with Stanislava, hid Dora and cared for her until the liberation of Kelmė in the beginning of October 1944.
Chan Chan (chän chän), ruins of an ancient city near Trujillo, N Peru. An early example of city planning, with a rectangular grid structure, it was probably begun in the period from A.D. 950 to 1400, and it is estimated that it may have contained as many as 200,000 people. Chan Chan is generally accepted as the capital of the Chimu, a pre-Inca civilization. It is on a large plain of the coastal desert, which was made arable by extensive irrigation works. Covering c.11 sq mi (28 sq km), the city comprised at least 10 self-contained, walled-in units. The walls, built of adobe brick, are decorated with relief designs.
Hepatology
Vol 63 (13 Issues in 2016)
Edited by: Michael H. Nathanson
Print ISSN: 0270-9139 Online ISSN: 1527-3350
Impact Factor: 11.711
Teens Susceptible to Hepatitis B Infection Despite Vaccination as Infants
Mom is Link to Hepatitis B Infection
New research reveals that a significant number of adolescents lose their protection from hepatitis B virus (HBV) infection, despite having received a complete vaccination series as infants. Results in the January 2013 issue of Hepatology, a journal published by Wiley on behalf of the American Association for the Study of Liver Diseases, suggest teens with high-risk mothers (those positive for HBeAg) and teens whose immune system fails to remember a previous viral exposure (immunological memory) are behind HBV reinfection.
Infection with HBV is a major global health concern even with the success of universal vaccination against the virus in infants. The World Health Organization (WHO) estimates two billion individuals worldwide have HBV infection, with 360 million chronic carriers of the hepatitis B surface antigen (HBsAg). In the U.S., the Centers for Disease Control and Prevention (CDC) state that up to 1.4 million Americans are living with chronic HBV.
In Taiwan, where the present study was conducted, mother-to-child (vertical) transmission is responsible for much of the HBV cases in that country. In fact, Taiwan has long been an endemic area, with an HBV infection rate of 95% and an HBsAg carrier rate of up to 20% of the general population. To combat this major health burden, Taiwan launched the world's first universal vaccination program in 1984, vaccinating newborns of infectious mothers and then expanding to all newborns in 1986.
“Chronic HBV is a major health burden that leads to cirrhosis, liver cancer (hepatocellular carcinoma) and liver failure, shortening lives and placing a huge economic drain on society,” said lead author, Dr. Li-Yu Wang from Mackay Medical College in New Taipei City, Taiwan. “While infantile HBV vaccination is highly effective, it is not 100% and our study examines the long-term success of the HBV vaccine in a high-risk population.”
For the present study, 8733 high school students born between July 1987 and July 1991 provided vaccination records and were assessed for presence of HBsAg and antibodies to HBsAg (anti-HBs). The mean age of participants was 16 years and 53% of the group was male. All participants attended school in Hualien County located in east Taiwan.
Findings indicate that HBsAg and anti-HBs positive rates were 2% and 48%, respectively. For students who received the HBV immune globulin (HBIG) and vaccine as infants, 15% were positive for HBsAg—a rate that was significantly higher in students whose mothers were positive for HBeAg and who received HBIG off schedule. Researchers found a significantly negative association between HB vaccination dose and a positive rate of HBsAg among students who did not receive HBIG.
Reporting on previous research, the team notes that the vaccine program reduced HBV infection and carrier rates of children in Taiwan. Prior studies also reported a decline in severe hepatitis in infants and liver cancer in children as a result of the vaccine program. Dr. Wang concludes, "Certainly the HBV vaccine program was a great success in Taiwan. For adolescents who lose protection, a HBV vaccination booster at age 15 or older should be considered, particularly in those born to HBsAg positive mothers or who had a high-risk of HBV exposure. Those born to high-risk mothers should first be screened for HBsAg."
Researchers further suggest a routine anti-HBV treatment during pregnancy may help to further reduce infant exposure to the virus. However, they stress that the safety and efficacy of this therapy plan would need to be proven in large-scale studies before standard use to combat HBV.
How Loud Music Can Damage Your Hearing
Hearing is an extremely important sense, but can often be the most neglected.
We brush our teeth, take care of our eyes and do our best to keep our bodies in tip-top condition. However, people don’t do enough to protect their ears.
Our hearing can be damaged by activities we enjoy every day, such as listening to music, and we need to do more to prevent problems such as tinnitus.
A superb article on netdoctor, written by Jenny Cook, looks at how exposure to loud music can damage your hearing and what you can do to protect your ears.
In this article, Tony Kay, head of Audiology Services at Aintree University Hospital NHS Foundation Trust in Liverpool, says:
“Our ears are one of our wonderful senses and, when we have good hearing, allow us to hear a huge range of sounds from the extremely quiet to the very loud. One of the modifiable risk factors (meaning something that we are able to control) for problems such as tinnitus and hearing loss is exposure to excessive noise. The damage caused to our ears generally builds up over a long period of time, but some very loud sounds can cause irreversible damage immediately.”
At Mercury Hearing, we provide custom ear plugs and earphones that are carefully moulded to your own ear. This means an enhanced fit and optimum comfort whilst protecting your hearing from long-term damage.
Contact Us for more information and to find out how we can help you protect your hearing.
The largest cave in the world is soon to be opened to tourists in Vietnam, tourism officials have disclosed.
The Son Doong Cave, discovered just 20 years ago, will be opened up on a test basis to visitors.
The tourism authority of Quang Binh Province said the cave would be open between February and August of next year, with small expeditionary groups of up to eight persons permitted entry.
"After that we will decide whether to keep the cave open on a regular basis," said the tourism authority's deputy director Nguyen Van Ky. "We'll be studying whether the visits will have any adverse environmental effects."
Tours must be booked via the Vietnamese tour operator Oxalis. Already, the company says it has more requests than it has capacity for in the coming year. Oxalis says a total of just 220 visitors to the cave are planned for 2014.
An expedition in Son Doong, located in the Phong Nha Ke Bang National Park 500 kilometres south of the capital Hanoi, is not for beginners.
Tourism officials say the underground trek is 17 kilometres. The expeditions will start after an overnight stay at the site. The cave contains spectacular rock formations, pools of water, and unique flora and fauna, Nguyen says.
A tour costs 3,000 dollars per person. The national park was declared a World Natural Heritage site by UNESCO in 2003.
Local inhabitants of the area of central Vietnam bordering on Laos discovered the cave with its underground river in 1991.
In 2009, British scientists carried out the first expedition, with researcher Howard Limbert reporting a cavern length of 6,481 metres.
At some points it is 150 metres wide and 200 metres tall, surpassing what until then had been the largest known cave in the world, the Deer Cave on Borneo.
In 8 A.D., Augustus Caesar exiled the Roman poet Ovid (Publius Ovidius Naso, 43 B.C. - 17 A.D.) to the remote Black Sea town of Tomis (modern Constanta, Romania). Not only was Ovid isolated from the political, social, and intellectual center of his world, but he also had to endure a climate much harsher than that of Rome. To lament his exile, he wrote the Tristia, poems that detail his physical and emotional discomforts. In one part of the Tristia, Ovid writes:
"and the wines stand stiff, jugless but keeping the shape of their jugs, and the people don't drink draughts of wine--they eat pieces of it"
Our modern temperature scales can be traced back no earlier than the 17th century; therefore no temperature records from Ovid's time exist. However, chemists will immediately recognize that the phenomenon of freezing-point depression can be applied to estimate the temperature of Constanta, Romania, in the year 8 A.D., provided that the composition of the wine can be accurately estimated. Throughout the Roman Empire many kinds of wine were consumed, some of which were diluted with water. However, the exact Latin words used by Ovid to describe the wine he writes about were "vina" and "meri". These words were used to describe undiluted wine.
Do some research on the composition of the undiluted wines made and consumed today. (In other words, visit some stores and read the labels.) Several different kinds of wines exist; choose one "sweet" and one "dry" wine. From the information gathered, determine the temperature that existed when Ovid wrote the Tristia by calculating the expected freezing point of wine using the freezing-point depression constant of water (Kf = 1.86 °C/m), the density of water, the density of alcohol, and the alcohol content of the wine.
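A quick way to check your hand calculation is with a short computer program. The sketch below (Python) treats the wine as a simple ethanol-in-water solution; it ignores dissolved sugar, which depresses the freezing point further (especially in a sweet wine), so treat the output as an upper estimate of the actual freezing temperature. The ABV values in the demo loop are assumed label values; substitute the ones from your bottles.

```python
# Estimate the freezing point of an undiluted wine via freezing-point depression.
# Illustrative sketch: the wine is modeled as ethanol dissolved in water only.

KF_WATER = 1.86      # freezing-point depression constant of water, deg C per molal
D_ETHANOL = 0.789    # density of ethanol, g/mL
D_WATER = 1.00       # density of water, g/mL
M_ETHANOL = 46.07    # molar mass of ethanol, g/mol

def wine_freezing_point(abv_percent: float) -> float:
    """Expected freezing point (deg C) of wine with the given alcohol content."""
    # Work per liter (1000 mL) of wine.
    ml_ethanol = 10.0 * abv_percent        # e.g., 12% ABV -> 120 mL ethanol
    ml_water = 1000.0 - ml_ethanol         # remainder approximated as water
    mol_ethanol = ml_ethanol * D_ETHANOL / M_ETHANOL
    kg_water = ml_water * D_WATER / 1000.0
    molality = mol_ethanol / kg_water      # mol solute per kg solvent
    return -KF_WATER * molality            # depression below 0 deg C

if __name__ == "__main__":
    for abv in (10, 12, 14):               # assumed label values
        print(f"{abv}% ABV wine: expected freezing point "
              f"{wine_freezing_point(abv):.1f} deg C")
```

For a 12% ABV wine, this gives roughly -4.3 °C, so Ovid's report of wine frozen solid implies temperatures well below that.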
Your instructor will provide you with two different kinds of wine: one "sweet" wine and one "dry" wine. Determine the actual freezing point of each of these wines. A bath cold enough to freeze the wines can be made by mixing together ice and salt or ice and methanol (ditto fluid).
This is a nice lab to do. I do it with my advanced chemistry students on the last day of school before Christmas break. It gives students a chance to apply the chemistry they have learned to a commercial product. It also serves to unite science with literature and history.
This is a nice lab to do. When I first informed my principal that I wanted to bring in some wine for this experiment, I got a look that said "ARE YOU NUTS!!" However, when I explained what I wanted to accomplish, he gave his permission and asked only that I inform him of the exact day that wine was being brought into the school so that he could respond immediately to any questions that might arise from parents or "concerned citizens". I have always honored that request and have never had a problem with my principal, parents, students, or "concerned citizens". In fact, parents have always been very supportive whenever the topic of this laboratory exercise came up during parent-teacher conferences. In 1995, when I went to a local store to purchase the wine for this experiment, a voice said "Is that for the Chemistry II Christmas Experiment?" I turned to find a former Chemistry II student of mine working during his college's Christmas break. He remembered this experiment, and it obviously left an impression on him. The greatest gift a teacher can receive is for a former student to say "I appreciate what you did, thank you".
The term originated in the early 16th century, after Europeans made landfall in what would later be called the Americas during the Age of Discovery, expanding the geographical horizon of classical geographers, who had thought of the world as consisting of Africa, Europe, and Asia, collectively now referred to as the Old World (also known as Afro-Eurasia).
The terms "Old World" vs. "New World" are meaningful in historical context and for the purpose of distinguishing the world's major ecozones, and to classify plant and animal species that originated therein.
One can speak of the "New World" in a historical context, e.g., when discussing the voyages of Christopher Columbus, the Spanish conquest of Yucatán and other events of the colonial period. For lack of alternatives, the term is also still useful to those discussing issues which concern the Americas and the nearby oceanic islands, such as Bermuda and Clipperton Island, collectively. This usage is seen as problematic by many for its narrowness of perspective and implication that discovery by European explorers was the beginning of history for the Americas.
The term "New World" is used in a biological context, when one speaks of Old World (Palearctic, Afrotropic) and New World species (Nearctic, Neotropic). Biological taxonomists often attach the "New World" label to groups of species which are found exclusively in the Americas, to distinguish them from their counterparts in the "Old World" (Europe, Africa and Asia), e.g. New World monkeys, New World vultures, New World warblers.
The label is also often used in agriculture. Africa, Asia, and Europe share a common agricultural history stemming from the Neolithic Revolution, and the same domesticated plants and animals spread through these three continents thousands of years ago, making them largely indistinct and useful to classify together as "Old World". Common Old World crops (e.g., barley, lentils, oats, peas, rye, wheat), and domesticated animals (e.g., cattle, chickens, goats, horses, pigs, sheep) did not exist in the Americas until they were introduced by post-Columbian contact in the 1490s (see "Columbian Exchange"). Conversely, many common crops were originally domesticated in the Americas before they spread worldwide after Columbian contact, and are still often referred to as "New World crops"; common beans (phaseolus), maize, and squash - the "three sisters" - as well as the avocado, tomato, and wide varieties of capsicum (bell pepper, chili pepper, etc.), and the turkey were originally domesticated by pre-Columbian peoples in Mesoamerica, while agriculturalists in the Andean region of South America brought forth the cassava, peanut, potato, quinoa and domesticated animals like the alpaca, guinea pig and llama. Other famous New World crops include the cashew, cocoa, rubber, sunflower, tobacco, and vanilla, and fruits like the guava, papaya and pineapple. There are rare instances of overlap, e.g., the calabash (bottle-gourd), cotton, and yam, and the dog, are believed to have been domesticated separately in both the Old and New World, their early forms possibly brought along by Paleo-Indians from Asia during the last ice age.
In wine terminology, "New World" has a different definition. "New World wines" include not only North American and South American wines, but also those from South Africa, Australia, New Zealand, and all other locations outside the traditional wine-growing regions of Europe, North Africa and the Near East.
Origin of term
The term "New World" ("Mundus Novus") was first coined by the Florentine explorer Amerigo Vespucci, in a letter written to his friend and former patron Lorenzo di Pier Francesco de' Medici in the Spring of 1503, and published (in Latin) in 1503-04 under the title Mundus Novus. Vespucci's letter contains arguably the first explicit articulation in print of the hypothesis that the lands discovered by European navigators to the west were not the edges of Asia, as asserted by Christopher Columbus, but rather an entirely different continent, a "New World".
Vespucci first approached this realization in June of 1502, during a famous chance meeting between two different expeditions at the watering stop of "Bezeguiche" (the Bay of Dakar, Senegal) - his own outgoing expedition, on its way to chart the coast of newly discovered Brazil, and the vanguard ships of the Second Portuguese India armada of Pedro Álvares Cabral, returning home from India. Having already visited the Americas in prior years, Vespucci probably found it difficult to reconcile what he had already seen in the West Indies, with what the returning sailors told him of the East Indies. Vespucci wrote a preliminary letter to Lorenzo, while anchored at Bezeguiche, which he sent back with the Portuguese fleet - at this point only expressing a certain puzzlement about his conversations. Vespucci was finally convinced when he proceeded on his mapping expedition through 1501-02, covering the huge stretch of coast of eastern Brazil. After returning from Brazil, in the Spring of 1503, Amerigo Vespucci composed the Mundus Novus letter in Lisbon to Lorenzo in Florence, with its famous opening paragraph:
In passed days I wrote very fully to you of my return from new countries, which have been found and explored with the ships, at the cost and by the command of this Most Serene King of Portugal; and it is lawful to call it a new world, because none of these countries were known to our ancestors and all who hear about them they will be entirely new. For the opinion of the ancients was, that the greater part of the world beyond the equinoctial line to the south was not land, but only sea, which they have called the Atlantic; and even if they have affirmed that any continent is there, they have given many reasons for denying it is inhabited. But this opinion is false, and entirely opposed to the truth. My last voyage has proved it, for I have found a continent in that southern part; full of animals and more populous than our Europe, or Asia, or Africa, and even more temperate and pleasant than any other region known to us.
Vespucci's letter was a publishing sensation in Europe, immediately (and repeatedly) reprinted in several other countries.
While Amerigo Vespucci is usually credited for coming up with the term "New World" (Mundus Novus) for the Americas in his 1503 letter, certainly giving it its popular cachet, similar terms had nonetheless been used and applied before him.
The Venetian explorer Alvise Cadamosto had used the term "un altro mundo" ("another world") to refer to sub-Saharan Africa, which he explored in 1455 and 1456 on behalf of the Portuguese. However, this was merely a literary flourish, not a suggestion of a new "fourth" part of the world. Cadamosto was quite aware sub-Saharan Africa was firmly part of the African continent.
The Italian-born Spanish chronicler Peter Martyr d'Anghiera often shares credit with Vespucci for designating the Americas as a new world. Peter Martyr used the term Orbe Novo (literally, "New Globe", but often translated as "New World") in the title of his history of the discovery of the Americas as a whole, which began to appear in 1511 (cosmologically, "orbis" as used here refers to the whole hemisphere, while "mundus" refers to the land within it). Peter Martyr had been writing and circulating private letters commenting on Columbus's discoveries since 1493 and, from the start, doubted Columbus's claims to have reached East Asia ("the Indies"), and consequently came up with alternative names to refer to them. Only a few weeks after Columbus's return from his first voyage, Peter Martyr wrote letters referring to Columbus's discovered lands as the "western antipodes" ("antipodibus occiduis", letter of May 14, 1493), the "new hemisphere of the earth" ("novo terrarum hemisphaerio", September 13, 1493), and in a letter dated November 1, 1493, refers to Columbus as the "discoverer of the new globe" ("Colonus ille novi orbis repertor"). A year later (October 20, 1494), Peter Martyr again refers to the marvels of the New Globe ("Novo Orbe") and the "Western hemisphere" ("ab occidente hemisphero").
Christopher Columbus touched the continent of South America in his 1498 third voyage. In his own 1499 letter to the Catholic Monarchs of Spain, reporting the results of his third voyage, Columbus relates how the massive waters of the Orinoco delta rushing into the Gulf of Paria implied that a previously unknown continent must lie behind it. However, bowing to the classical tripartite division of the world, Columbus discards that hypothesis and proposes instead that the South American landmass is not a "fourth" continent, but rather the terrestrial paradise of Biblical tradition, not a previously unknown "new" part of the world, but a land already "known" (but location undiscovered) by Christendom. In another letter (to the nurse of Prince John, written 1500), Columbus refers to having reached a "new heavens and world" ("nuevo cielo e mundo") and that he had placed "another world" ("otro mundo") under the dominion of the Kings of Spain.
The Vespucci passage above applied the "New World" label to merely the continental landmass of South America. At the time, most of the continent of North America was not yet discovered, and Vespucci's comments did not eliminate the possibility that the islands of the Antilles discovered earlier by Christopher Columbus might still be the eastern edges of Asia, as Columbus continued to insist down to his dying day. A critical step in the transition was the conference of navigators (Junta de Navegantes) assembled by the Spanish monarchs at Toro in 1505, and continued at Burgos in 1508, to digest all existing information about the Indies, come to an agreement on what had really been discovered, and set out the future goals of Spanish exploration. Amerigo Vespucci attended both conferences, and seems to have had an outsized influence on them - Vespucci ended up being appointed the first piloto mayor, the chief of navigation of Spain, at Burgos. Although the proceedings of the Toro-Burgos conferences are missing, it is almost certain that Vespucci articulated his recent "New World" thesis to his fellow navigators there. It was during these conferences that Spanish officials seem to have finally accepted that the Antilles and the known stretch of Central America were definitely not the Indies they had originally sought (as Columbus had insisted they were), and set out the new goal for Spanish explorers: to find a sea passage or strait through the Americas which would permit them to sail to Asia proper.
While it became generally accepted after Vespucci that Columbus's discoveries were not Asia but a "New World", the geographic relationship between the two continents was still unclear. That there must be a large ocean between Asia and the Americas was implied by the known existence of vast continuous sea along the coasts of East Asia. Even prior to Vespucci, several maps, e.g. the Cantino planisphere of 1502 and the Canerio map of 1504, placed a large open ocean between China on the east side of the map, and the inchoate, largely water-surrounded North American and South American discoveries on the western side of the map. However, out of uncertainty, they depicted a finger of the Asian land mass stretching across the top to the eastern edge of the map, suggesting it carried over into the western hemisphere (e.g. the Cantino planisphere denotes Greenland as "Punta d'Asia" - "edge of Asia"). Some maps, e.g. the 1506 Contarini–Rosselli map and the 1508 Johannes Ruysch map, bowing to Ptolemaic authority and Columbus's assertions, have the northern Asian landmass stretching well into the western hemisphere and merging with known North America (Labrador, Newfoundland, etc.). These maps place the island of Japan near Cuba and leave the South American continent - Vespucci's "New World" proper - detached and floating below by itself. The Waldseemüller map of 1507, which accompanied the famous Cosmographiae Introductio volume (which includes reprints of Vespucci's letters), comes closest to modernity by placing a completely open sea (with no stretching land fingers) between Asia on the eastern side and the New World (represented twice in the same map, in different ways: with and without a sea passage in the middle of what is now named Central America) on the western side, which (on what is now named South America) the same map famously labels simply "America". However, Martin Waldseemüller's map of 1516 retreats considerably from his earlier map and back to classical authority, with the Asian land mass merging into North America (which he now calls Terra de Cuba Asie partis), and quietly drops the "America" label from South America, calling it merely Terra Incognita.
The western coast of the New World - the Pacific Ocean - was only discovered in 1513 by Vasco Núñez de Balboa. But it would take a few more years - Ferdinand Magellan's voyage of 1519-22 - to determine that the Pacific definitely formed a single large body of water separating Asia from the Americas. It would be several more years before the Pacific Coast of North America was mapped, dispelling lingering doubts. Of course, until the discovery of the Bering Strait in the 17th century, there was no absolute confirmation that Asia and North America were not connected, and some European maps of the 16th century still continued to hopefully depict North America connected by a land bridge to Asia (e.g. the 1533 Johannes Schöner globe).
- M.H.Davidson (1997) Columbus Then and Now, a life re-examined. Norman: University of Oklahoma Press, p.417)
- This preliminary letter from Bezeguiche was not published, but remained in manuscript form. It is reproduced in F.A. de Varnhagen (1865: p.78-82).
- English translation of Mundus Novus as found in Markham (1894: p.42-52)
- Varnhagen, Amerígo Vespucci (1865: p.13-26) provides side-by-side reproductions of both the 1503 Latin version Mundus Novus, and the 1507 Italian re-translation "El Nuovo Mondo de Lengue Spagnole interpretato in Idioma Ro. Libro Quinto" (from Paesi Nuovamente retrovati). The Latin version of Mundus Novus was reprinted many times (see Varnhagen, 1865: p.9 for a list of early reprints).
- Cadamosto Navigationi, c. 1470, as reprinted in Giovanni Ramusio (1554: p.106). See also M. Zamora Reading Columbus, (1993: p.121)
- de Madariaga, Salvador (1952). Vida del muy magnífico señor Don Cristóbal Colón (in Spanish) (5th ed.). Mexico: Editorial Hermes. p. 363.
"nuevo mundo", [...] designación que Pedro Mártyr será el primero en usar
- J.Z. Smith, Relating Religion, Chicago (2004: p.268)
- E.G. Bourne, Spain in America, 1450-1580. New York: Harper (1904: p.30)
- Peter Martyr, Opus Epistolarum (Letter 130 p.72)
- Peter Martyr, Opus Epistolarum, Letter 133, p.73
- Peter Martyr, Opus Epistolarum (Letter 138, p.76)
- Peter Martyr Opus Epistolarum, Letter 156 p.88
- "if the river mentioned does not proceed from the terrestrial paradise, it comes from an immense tract of land situated in the south, of which no knowledge has been hitherto obtained" (Columbus 1499 letter on the third voyage, as reproduced in R.H. Major, Select Letters of Christopher Columbus, 1870: p.147)
- J.Z. Smith, Relating Religion, Chicago (2004: p.266-67)
- Columbus 1500 letter to the nurse (in Major, 1870: p.154)
- Columbus's 1500 letter to the nurse(Major, 1870: p.170)
- F.A. Ober Amerigo Vespucci New York: Harper (1907: p.239; 244)
- S.E. Morison The European Discovery of America, v.2: The southern voyages, 1492-1616.(1974: p.265-66).
- For an account of Vespucci at Toro and Burgos, see Navarette Colección de los viages y descubrimientos que hicieron por mar los españoles desde fines del siglo XV(1829: v.iii, p.320-23)
- C.O. Sauer The Early Spanish Main. Cambridge (1966: P.166-67)
- J.H. Parry, The Discovery of the Sea (1974: p.227)
- Verrazzano, Giovanni da (1524). "The Written Record of the Voyage of 1524 of Giovanni da Verrazzano as recorded in a letter to Francis I, King of France, July 8th, 1524". Citing: Wroth, Lawrence C., ed. (1970). The Voyages of Giovanni da Verrazzano, 1524-1528. Yale, pp. 133-143. Citing: a translation by Susan Tarrow of the Cellere Codex.
During austral spring and summer 1988, the upper 500 m of the water column in the Scotia-Weddell Confluence was sampled for the elemental composition of total suspended matter. Surface-water concentrations of particulate organic carbon ranged between 2.5 and 15 μmol/L, with an estimated 19 to 47% of this pool being detrital carbon. In late November, the highest surface-water particulate organic carbon concentrations (15 μmol/L) occurred in the Confluence area, where they coincided with a maximum in particulate Si (1.7 μmol/L). Later in the season, particulate Si in the Confluence area decreased to ≤0.3 μmol/L. In the Scotia Sea, on the contrary, surface-water particulate Si increased with time and reached 3 μmol/L in late December. For particulate Ca and Sr in surface water, strong gradients are observed across the Scotia Front (e.g. Ca: from 230 to 10 nmol/L; Sr: from 1.0 to 0.1 nmol/L), with the highest concentrations in the Scotia Sea. In general, these distributions are confirmed by the observations on plankton species composition made by other participants. In the Scotia Sea, heavily calcified coccolithophorids and diatoms occurred throughout the season, while in the Confluence area heavily calcified coccolithophorids were absent and a switch-over from diatom to naked-flagellate dominance was observed following a krill event. In the surface waters, the lithogenic Si fraction represents on average only 4% of the total particulate Si content. However, this fraction reaches 60% below 100 m depth in the Confluence area, due mainly to the presence of a sub-surface maximum in the aluminosilicate load (particulate Al content up to 30 nmol/L), probably reflecting advection of resuspended shelf sediments. Subsurface Ba/barite concentrations are highest in the Scotia Sea (280 pmol/L) and decrease through the Scotia Front to reach values of 100 pmol/L and less in the Confluence area, the marginal ice zone, and the closed pack-ice zone.
Homothallic vs. Monoecious
Difference Between Homothallic and Monoecious
Homothallic (adjective): (of some algae and fungi) producing male and female reproductive structures in the same plant.

Monoecious (adjective): having both the male and female reproductive organs in the same individual, either in the same flower or in different flowers; hermaphrodite.

Homothallic (noun): a homothallic fungus or alga.

Monoecious (adjective): having male and female reproductive organs in the same plant or animal.
January 2, 2017
Don't Wait to Treat Prediabetes
Prediabetes is defined as impaired glucose tolerance or impaired fasting glucose. It is associated with an increased risk of cardiovascular disease and all-cause mortality. Even so, many doctors are reluctant to recognize or discuss it, hesitant to worry their patients about prediabetes.
The risk increased in people with a fasting glucose concentration as low as 100 mg/dl (5.55 mmol/L). A1C of 5.7%-6.5% (39-47 mmol/mol) or A1C of 6%-6.5% (42-47 mmol/mol) was associated with an increased risk of composite cardiovascular disease and coronary heart disease. Lifestyle modification is now the main management for people with prediabetes.
After reviewing these studies, the question arises of whether we need to lower the cut-off point for defining prediabetes, and whether we might want to change the definition of prediabetes to a single number rather than a range. Most doctors won't even consider this in a discussion of prediabetes.
The health risks and mortality associated with prediabetes seem to increase at the lower cut-off point for blood sugar levels recommended by some guidelines, finds a large study published in The BMJ. Prediabetes is a “pre-diagnosis” of diabetes — when a person’s blood glucose level is higher than normal, but not high enough to be considered diabetes. If left untreated, prediabetes can develop into type 2 diabetes. An estimated 79 million people in the U.S. are thought to be affected.
Doctors define prediabetes as impaired fasting glucose (higher than normal blood sugar levels after a period of fasting), impaired glucose tolerance (higher than normal blood sugar levels after eating), or raised hemoglobin levels. But the cut-off points vary across different guidelines and remain controversial.
For example, the World Health Organization (WHO) defines prediabetes as a fasting plasma glucose of 110-125 mg/dl (6.1-6.9 mmol/L), while the American Diabetes Association (ADA) guideline recommends a cut-off point of 100-125 mg/dl (5.6-6.9 mmol/L).
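To make the competing cut-offs concrete, here is a small illustrative sketch in Python (a teaching example only, not clinical software). The thresholds mirror the WHO and ADA fasting-glucose ranges quoted above, and the unit conversion uses the common approximation of 18 mg/dl per 1 mmol/L.

```python
# Classify a fasting plasma glucose reading under the WHO and ADA cut-offs.
# Illustrative only; function and constant names are invented for this sketch.

MG_PER_MMOL = 18.0  # approximate conversion factor for glucose

def mmol_to_mgdl(mmol: float) -> float:
    """Convert a glucose value from mmol/L to mg/dl."""
    return mmol * MG_PER_MMOL

def classify_fasting_glucose(mg_dl: float, guideline: str = "ADA") -> str:
    """Return normal / prediabetes / diabetes-range for a fasting reading."""
    lower = 100 if guideline.upper() == "ADA" else 110  # WHO uses 110 mg/dl
    if mg_dl < lower:
        return "normal"
    if mg_dl <= 125:
        return "prediabetes (impaired fasting glucose)"
    return "diabetes range (confirm with repeat testing)"

if __name__ == "__main__":
    for reading in (95, 105, 115, 130):  # assumed example readings, mg/dl
        print(f"{reading} mg/dl -> ADA: {classify_fasting_glucose(reading, 'ADA')}"
              f" | WHO: {classify_fasting_glucose(reading, 'WHO')}")
```

Note how a reading of 105 mg/dl is "prediabetes" under the ADA cut-off but "normal" under the WHO cut-off; that divergence is exactly what the debate below is about.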
Results of studies on the association between prediabetes and the risk of cardiovascular disease and all-cause mortality are also inconsistent. Furthermore, whether raised hemoglobin A1C levels for defining prediabetes is useful for predicting future cardiovascular disease is unclear.
So, a team of researchers from the Affiliated Hospital at Shunde, Southern Medical University in China, analyzed the results of 53 studies involving over 1.6 million individuals to shed more light on associations between different definitions of prediabetes and the risk of cardiovascular disease, coronary heart disease, stroke, and all-cause mortality. They found that prediabetes, defined as impaired fasting glucose or impaired glucose tolerance, was associated with an increased risk of cardiovascular disease and all-cause mortality. The risk increased in people with a fasting glucose concentration as low as 100 mg/dl (5.6 mmol/L), the lower cut-off point according to ADA criteria.
Raised hemoglobin A1C levels were also associated with an increased risk of cardiovascular disease and coronary heart disease, but not with an increased risk of stroke and all-cause mortality.
The authors point to some study limitations that could have influenced their results, and say pulling observational evidence together in a systematic review and meta-analysis is a good way to consider all the evidence at once, “but we cannot make statements about cause and effect. We would need to look at experimental evidence for that.” However, they say their findings “strongly support” the lower cut-off point for impaired fasting glucose and raised hemoglobin A1C levels proposed by the ADA guideline.
They conclude that lifestyle change — eating a healthy balanced diet, keeping weight under control, and doing regular physical activity — is the most effective treatment at this time.
In conclusion, researchers found that prediabetes defined as impaired fasting glucose or impaired glucose tolerance is associated with an increased risk of composite cardiovascular events, coronary heart disease, stroke, and all-cause mortality. There was an increased risk in people with fasting plasma glucose as low as 100 mg/dl (5.6 mmol/L). Additionally, the risk of composite cardiovascular events and coronary heart disease increased in people with raised A1C, over 5.6%. These results support the lower cut-off point for impaired fasting glucose according to ADA criteria, as well as the incorporation of A1C in defining prediabetes. At present, lifestyle modification is the mainstay of management for people with prediabetes. High-risk subpopulations with prediabetes, especially those with other cardiovascular risk factors, should be selected for controlled trials of pharmacological treatment, because at this time we have no FDA-approved medications for prediabetes.
Chief investigator Yunzhao Hu, MD, PhD, professor in the department of cardiology at First People’s Hospital of Shunde in Foshan, China, added: “The risk increased in people with fasting glucose levels as low as 100 mg/dl and with HbA1c of 5.7%…. So, we believe people with prediabetes should be followed up clinically and keep a healthy lifestyle. Plus, we need to develop models for risk stratification in people with prediabetes, and we need to find a drug treatment that can prevent CVDs in them.”
After you have part of your arm or leg amputated, there’s a chance you could feel pain in the limb that’s no longer there. This is known as phantom limb pain. It’s most common in arms and legs, but some people will feel it when they have other body parts removed, such as a breast.
For some people, the pain will go away on its own. For others, it can be long-lasting and severe. But you can limit it if you tell your doctor about it early on so you can get treatment ASAP.
Don’t worry that your doctor will think you’re imagining the pain. It’s common among people who’ve lost a limb. Most people who have an amputation will have some feelings connected to their missing limb within 6 months of the surgery.
Researchers don’t know exactly what causes phantom limb pain. One possible explanation: Nerves in parts of your spinal cord and brain “rewire” when they lose signals from the missing arm or leg. As a result, they send pain signals, a typical response when your body senses something is wrong.
Another example of this rewiring: When you touch one body part -- say, your hip or your forearm -- your brain might sense it on your missing limb.
Other possible causes of phantom limb pain include damaged nerve endings and scar tissue from the amputation surgery.
What Phantom Limb Pain Feels Like
Not all pain feels the same. The throbbing of a headache, for example, is very different from the sharp ache of a stomach cramp. So it’s no surprise that phantom limb pain is not the same for everyone. Your pain may feel:
- Like “pins and needles”
- Like an electric shock
Aside from pain, you may also sense other feelings from a body part that’s no longer there.
Medicine Can Help
Anticonvulsants. These drugs treat seizures, but some can also help with nerve pain. Examples include carbamazepine (Carbatrol, Epitol, Tegretol), gabapentin (Gralise, Neurontin), and pregabalin (Lyrica).
Other painkillers. A few other types may also help with phantom limb pain.
Medicine alone may not provide enough relief, so your doctor may recommend other treatments as well, such as:
Nerve stimulation. You may already know about TENS (transcutaneous electrical nerve stimulation) devices, sold at drugstores for muscle pain relief. They send a weak electrical current via sticky patches you put on your skin. The idea is that it can interrupt pain signals before they get to your brain.
Mirror box therapy. Picture a box with no lid. It has two holes -- one for your remaining limb and one for the stump -- and a mirror in the center. When you put your limb and stump inside, you see the reflection of the intact arm or leg in the mirror. It tricks your brain into thinking you have both limbs as you do therapy exercises. Research shows this can help relieve pain in a missing limb.
Acupuncture. A skilled practitioner will insert very thin needles into your skin at specific places. This can prompt your body to release pain-relieving chemicals.
Your habits. Don’t overlook the power of lifestyle choices to bring some relief. Some things to try:
- Find distractions to take your mind off of the pain
- Get (or stay) physically active
- Practice relaxation techniques, including meditation and visualization
Other Ways to Ease Phantom Limb Pain
If your pain is a problem even when you use medicine and non-drug therapies, your doctor may suggest other medical procedures.
Spinal cord stimulation: Your doctor will put tiny electrodes inside your body along your spinal cord and send a small electrical current through them. In some cases, this can help relieve pain.
Brain stimulation: It’s similar to spinal cord stimulation, except the electrodes send the current to the brain instead. A surgeon will place the electrodes in the right spot in your brain. Scientists are still studying how well it works, but for some people, the research is promising.
Revision surgery: If nerve pain is the root of the problem, surgery on your stump may help correct it.
India is an incredible land - a land of festivities and celebrations. Being a country of diversity, India witnesses countless festivals each month. What makes India incredible is the spirit of its people. From the youngest child to the oldest member of a family, everyone celebrates the festivals, regardless of religious or ideological differences. A celebration of the uniqueness of the country glues people together. People from different ethnicities, cultures, and traditions take part in every festival as a community. Festivals that celebrate unity, religion, and even trade tell the true story of India. With many of them being holidays, the festivals raise the spirits of the people.
Recognizing the day India became a Republic, 26th January is a day of saluting the national martyrs. It is a day to celebrate the achievements of the nation. All the citizens of the country celebrate the day through dancing and singing in the cultural programs. Folk dances, patriotic songs, and flag hoisting fill the streets of India with a sense of unity and nationalism. A huge parade is organized at Rajpath that displays the different hues of Indian culture and democracy. The fighter planes of the Air Force soar in the sky, making every Indian proud. A ceremony called “Beating Retreat” is held at the Vijay Chowk in New Delhi.
Marking the end of British rule in India, the 15th of August is celebrated as Independence day. It is a day of celebration and remembrance of the brave soldiers who sacrificed their lives for the freedom of the country. Every year, the Hon’ble Prime Minister of India hoists a flag on the Red Fort and addresses the nation. People show their patriotism by singing the national anthem.
A celebration of the end of winters and welcoming of the spring, Holi is one of the most vibrant festivals of India. It is a two-day festival celebrated majorly in the North of India. Holi is a festival of colors where people smear different color powders on each other’s faces. The sweet fragrance of homemade dessert and cuisines in every household add to the merry of the day. Tourists from all around the world visit Mathura and Vrindavan to immerse themselves in the colors of India.
Ganesh Chaturthi is celebrated to mark the day Lord Ganesha was born. The elephant-headed god, Ganesha is a symbol of new beginnings and auspiciousness. Ganesh Chaturthi is a 10 days long event. The devotees bring idols of the deity to their home as a guest. The hosts pray and serve “bhog” to the deity. Friends and family gather to ask for blessings from Lord Ganesha. On the last day of the festival, the idols are immersed in water with the hope that all bad luck goes away with Him.
Durga Puja is an important festival for the people of West Bengal. Celebrated across India, it is a day of worshipping the Goddess Durga. Singing, dancing, and feasting are an integral part of this festival. On the day of the puja, people dress up in traditional outfits and visit Durga temples to pray for their well being.
It is a grand festival in India, as it is the day Lord Rama defeated Ravana. Dussehra is celebrated to signify the victory of good over evil. On this day, huge effigies of Ravana, Kumbhkaran, and Meghanath are burnt. Actors dress up as the characters of the Ramayana and enact the whole story. The celebrations usually begin a week before Dussehra. Fairs are also organised where people eat street food and enjoy the epic. A big Ferris wheel can generally be found at the mela.
After spending 14 years in exile, Lord Rama, Lakshman, and Sita came back to Ayodhya on this day. To welcome the trio, the people of Ayodhya lit up the whole city with lamps. The tradition continues to this day, and clay lamps are lit in every household. The houses are cleaned and decorated. Rangolis are made in the verandah to make the occasion colorful. People also dance, sing, and exchange gifts with families and friends. Every house and street in India is illuminated on this day.
India celebrates the festival of joy and love with great fervour. It is a day when people are filled with hope and affection. Carols are sung, and people get together to dive into the spirit of the festival. Christmas trees and decorations put people in the mood for parties. The chilly winters add to the charm of the festival.
Pushkar Mela is one of the largest camel fairs in the world. Pushkar becomes a cattle trading ground for 12 days. The true essence of the Rajasthani culture can be observed in Pushkar. Folk dancing, music, hot air balloon rides, safaris, and many more activities take place at the fair. Devotees take a dip in the lake of Pushkar on Kartik Purnima and pray at the Brahma Temple.
Festivals in India bring immense joy to people’s lives. The days are filled with zeal and merriment. Vibrant colors, music, festivities, and playful rituals make the subcontinent a country of culture. The Deccan Odyssey passes through the land of these diverse festivals of India, offering its passengers a part of the celebration.
To define the term Data Warehouse (DW), especially to software developers who are new to the industry, I have tried asking them a few simple questions before getting to the classic definition in the words of Bill Inmon. Some of the questions that lead to defining a Data Warehouse are:
Q: What is Data?
A: ‘Data’ is a collection of facts which are captured as it happens.
E.g., the content present in a Survey Sheet is ‘Data’
Q: What is information?
A: The details that are derived by processing the ‘Data’ are called Information.
E.g., the details derived from the survey data, like totals and averages, are called Information
Q: What is a system that collects ‘Data‘ called?
A: A computer system that collects ‘Data’ is usually called an OLTP (Online Transaction Processing) system. This system is designed to capture data rapidly.
E.g., the survey data could be captured into a laptop using a software application, an ATM machine, or a core banking system for deposit/debit transactions.
Q: How is ‘Information’ derived from ‘Data’?
A: The ‘Data’ is pulled out from the OLTP system, moved to a separate data store/system, and then processed to derive Information. A computer system that acts as a platform for processing the ‘Data’ to derive ‘Information’ is called a Data Warehouse.
The ‘Information’ gathered from a DW system helps an Organization gain more Knowledge about its business. This gained Knowledge helps the Organization in Decision making; hence the DW system, which supports decision making, is part of the “Decision Support System”.
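A minimal sketch of this Data-to-Information flow is shown below, using Python’s built-in sqlite3 module. The table and column names are invented for illustration: the OLTP side captures raw survey facts one row at a time, and a DW-style aggregate query then turns them into Information (totals and averages).

```python
# Data vs. Information, in miniature: raw rows in, aggregates out.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# OLTP side: capture raw 'Data' as it happens, one transaction per row.
cur.execute("CREATE TABLE survey_responses (region TEXT, score INTEGER)")
cur.executemany(
    "INSERT INTO survey_responses VALUES (?, ?)",
    [("North", 7), ("North", 9), ("South", 6), ("South", 8), ("South", 5)],
)

# DW-style side: pull the data out and process it into 'Information'.
cur.execute(
    "SELECT region, COUNT(*), SUM(score), AVG(score) "
    "FROM survey_responses GROUP BY region"
)
for region, n, total, avg in cur.fetchall():
    print(f"{region}: {n} responses, total={total}, average={avg:.1f}")

conn.close()
```

In a real setup the two sides live in separate systems, which is exactly the segregation discussed in the next questions.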
Q: What are the key characteristics of a Data Warehouse?
A: A DW is designed to
1. store large quantities of data spanning many years
2. push ‘Data’ out of its storage quickly to the Information processing engine
Q: Why is a Data Warehouse required?
A: The OLTP system is usually used by many people to collect (push) data from the outside world into its storage, whereas the DW system is usually used by few people to pull the data out from its storage. The volume of data lying inside a DW system is very much higher than that in an OLTP system. Since the purpose of each system is different, designing separate OLTP and DW systems to cater to their unique requirements became imperative.
But this segregation between OLTP and DW happened gradually. During the initial years, DW-related activities were done mostly on OLTP systems, and this still happens until an organization or department feels the need for a dedicated DW system.
The need for a DW system is felt due to issues related to:
3. Data Integration
Cervical cancer screening is used to find abnormal changes in the cells of the cervix that could lead to cancer. The main cause of cervical cancer is infection with HPV (human papillomavirus). Most of these infections will be suppressed by the immune system within 1 to 2 years without causing cancer. It can take 10 to 20 years or more for a persistent infection with a high-risk HPV type to develop into cancer.
Because cervical cancer has been proven to grow at such a slow rate, the Centers for Disease Control and Prevention changed the interval for cervical cancer screening in 2010. Women should begin Pap testing at age 21, regardless of previous sexual history. Adolescents have a very low risk of cervical cancer and a high likelihood that cervical cell abnormalities will go away on their own. Women ages 21 through 29 should be screened with a Pap test every 3 years. Women ages 30 through 65 can be screened every 5 years with Pap and HPV co-testing or every 3 years with a Pap test alone. Further testing is implemented if the Pap test results come back abnormal. Women who have had a hysterectomy do not need cervical screening, unless the hysterectomy was done to treat a precancerous cervical lesion or cervical cancer. Women who have been vaccinated against HPV should still be screened for cervical cancer because the vaccine does not protect against all types of HPV.
The changes to the Pap guidelines have decreased the harm and the number of invasive procedures that come from treating abnormalities that would never progress to cancer. The changes have also limited false-negative results that would delay diagnosis and treatment of a precancerous or cancerous condition.
The figures below don’t do justice to the harm an earthquake would do. There is $1.9 trillion of property at risk from earthquakes in the San Francisco Bay Area, where a catastrophic earthquake on the Hayward Fault would almost certainly have ripple effects throughout California, the U.S., and the world, since this area has one of the highest concentrations of people, wealth, and innovation in the U.S. (Grossi).
These are just a few of the earthquake faults and their estimated costs in California:
Earthquake (Cost / Where):
- $69 billion / Southern California Puente Hills fault
- $54 billion / Northern California San Andreas Fault
- $213 billion / Southern California San Andreas Fault (Lin 2016, USGS 2008)
- $49 billion / Southern California Newport-Inglewood fault
- $190-235 billion / Northern California Hayward Fault (Lesle 2014, Grossi 2013)
- $30 billion / Southern California Palos Verdes fault
- $29 billion / Southern California Whittier fault
- $24 billion / Southern California Verdugo fault
Possible cascading effects of a large earthquake would be:
- Destruction of the delta levee system, resulting in $40 billion losses and no drinking water for 23 million people
- Crashing the U.S. financial system, perhaps also the global financial system
- Los Angeles is the #1 port in the USA and Oakland #7 in the value of imported and exported goods
- Food security: California supplies a third of the food in the United States, and exports a great deal of food as well
- Bankruptcy of most insurance and re-insurance companies, delaying and preventing recovery
- Earthquakes sometimes result in compound disasters, in which the major event triggers a secondary event, natural or from the failure of a man-made system. In urban areas, fires may originate in gas lines and spread to storage facilities for petroleum products, gases, and chemicals. These fires often are a much more destructive agent than the tremors themselves because water mains and fire-fighting equipment are rendered useless. More than 80 percent of the total damage in the 1906 San Francisco quake was due to fire (OTA).
California Bay Area Hayward or San Andreas earthquake
- According to reports by the Association of Bay Area Governments, more than 100,000 dwellings would be uninhabitable and as many as 400,000 could sustain some damage. In a region where rents and home prices are at a premium and vacancies are extremely low, damage to one third of the housing stock in the counties closest to the fault rupture (combined with the business disruption and the inability to travel around the region) would create a social and financial disaster.
- The potential for massive disruption is a function of the physical conditions in the region. The building stock and the infrastructure are old. The geography of the region has concentrated urban development between the hills and the bay, forcing limited transit corridors with little redundancy and creating significant distances between the urban core immediately surrounding the bay and outlying communities.
On July 17, 2014, the United States Geological Survey (USGS) announced updated U.S. National Seismic Hazard Maps, with the latest scientific views on where, how often, and how hard future earthquakes will strike. Some of the details have changed since the maps were last released in 2008 (National Seismic Hazard Project).
Lack of Insurance in the San Francisco Bay Area
Over half of the loss after Hurricane Katrina was covered by insurance. But only 5% to 10% of the total residential losses and 15% to 20% of the commercial losses of a major Hayward Fault earthquake are expected to be reimbursed by insurance. Overall, insurance payments will cover between 10% and 15% of the total loss, somewhere between $11 and $26 billion (Grossi).
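As a quick arithmetic check on those figures, the short Python sketch below inverts the quoted percentages: if insurance pays out $11-26 billion and that represents 10% to 15% of the total loss, what total loss and uninsured gap does that imply? The inputs are taken only from the numbers quoted above; nothing else is assumed.

```python
# Back-of-the-envelope check on the Hayward Fault insurance-gap figures.
# Pairs the low payment with the low coverage share and the high with the
# high, matching how the ranges are quoted in the text.

INSURED_SHARE = (0.10, 0.15)      # fraction of total loss covered (Grossi)
INSURED_PAYMENTS = (11.0, 26.0)   # insured payments, billions of dollars

def implied_total_loss(payment_billions: float, share: float) -> float:
    """Total loss implied by an insured payment at a given coverage share."""
    return payment_billions / share

total_low = implied_total_loss(INSURED_PAYMENTS[0], INSURED_SHARE[0])   # 110
total_high = implied_total_loss(INSURED_PAYMENTS[1], INSURED_SHARE[1])  # ~173

print(f"Implied total loss: ${total_low:.0f}-{total_high:.0f} billion")
print(f"Uninsured gap: ${total_low - INSURED_PAYMENTS[0]:.0f}-"
      f"{total_high - INSURED_PAYMENTS[1]:.0f} billion")
```

The uninsured gap works out to roughly $99-147 billion, which is the shortfall that would have to be absorbed by households, businesses, and government.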
[ Appendix A has a house hearing on earthquakes in the U.S. below, after the references ]
Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer
Mary C. Comerio. 2000. Paying for the Next Big One. Our system for financing recovery from natural disasters is in shambles. Issues in Science & Technology. National Academy of Sciences.
B. Rowshandel, et al. 2003. Estimation of Future Earthquake Losses in California. California Geological Survey.
Earthquake Engineering Research Institute (EERI), Scenario for a Magnitude 7.0 Earthquake on the Hayward Fault (Oakland, Calif.: EERI, 1996).
Grossi, P., et al. 2013. 1868 Hayward Earthquake: 145-year retrospective. Risk Management Solutions.
Lin, Rong-Gong II. May 5, 2016. San Andreas Fault ‘locked, loaded and ready to roll’ with big quake, expert says. Los Angeles Times.
Lesle, T. 2014. Doomsday 4: A Massive Quake Could Be Only the Beginning of the Bay Area’s Woes. Cal Alumni Association, UC Berkeley.
OTA (Office of Technology Assessment). 1990. Physical Vulnerability of Electric System to Natural Disasters and Sabotage. OTA-E-453. Washington, D.C.: U.S. Government Printing Office.
Peter May and Walter Williams, Disaster Policy Implementation: Managing Programs Under Shared Governance (New York: Plenum Press, 1986).
Risa Palm and Michael Hodgson, After a California Earthquake: Attitude and Behavior Change (Chicago, Ill.: University of Chicago Press, 1992).
Jeanie Perkins et al., Preventing the Nightmare (Oakland, Calif.: Association of Bay Area Governments, 1999).
Jeanie Perkins et al., Shaken Awake (Oakland, Calif.: Association of Bay Area Governments, 1996).
Rutherford H. Platt, Disasters and Democracy: The Politics of Extreme Natural Events (Washington, D.C: Island Press, 1999).
USGS. 2008. The ShakeOut Scenario. United States Geological Survey. Report 2008-1150
House 112-13. April 7, 2011. Are we prepared? Assessing earthquake risk reduction in the U.S. House hearing. 82 pages.
The hearing will examine various elements of the Nation’s level of earthquake preparedness and resiliency including the U.S. capability to detect earthquakes and issue notifications and warnings, coordination between federal, state and local stakeholders for earthquake emergency preparation, and research and development measures supported by the federal government designed to improve the scientific understanding of earthquakes. Portions of all 50 states are vulnerable to earthquake hazards, although risks vary across the country and within individual states. Twenty-six urban areas in 14 U.S. states face significant seismic risk. Earthquake hazards are greatest in the western United States, particularly in California, Oregon, Washington, Alaska, and Hawaii. Though infrequent, earthquakes are unique among natural hazards in that they strike without warning. Earthquakes proceed as cascades, in which the primary effects of faulting and ground shaking induce secondary effects such as landslides, liquefaction, and tsunami, which in turn set off destructive processes within the built environment; structures collapse, people are injured or killed, infrastructure is disrupted, and business interruption begins. The socioeconomic effects of large earthquakes can reverberate for decades. The recent earthquake that struck off the coast of northern Japan on March 11, 2011, illustrates that the effects of an earthquake can be catastrophic. The earthquake, recorded as a 9.0 on the Richter scale, is the most powerful quake to hit the country, and it triggered a devastating tsunami that swept over cities and farmland in the northern part of the country. As Japan struggles with rescue efforts, it also faces a nuclear emergency due to damage to the nuclear reactors at the Fukushima Daiichi Nuclear Power Station. As of March 31, the official death toll from the earthquake and resulting tsunami includes more than 11,600, and more than 16,000 people were listed as missing. The final toll is expected to reach nearly 20,000. More than 190,000 people remained housed in temporary shelters; tens of thousands of others evacuated their homes due to the nuclear crisis and related fear.
In Japan, the after-effects of the quakes have reduced supplies of water and electricity, hampering the country’s ability to export many manufactured products and forcing some businesses to slow or stop operation altogether. Supply chains for important technology products here in the States have also been interrupted, directly impacting our productivity.
Clearly the consequences of a major earthquake are felt on a global scale. These hazards represent a serious threat to both national security and global commerce. Given our current economic situation, it would be even more painful for the United States to endure a disastrous earthquake, the socioeconomic effects of which would reverberate for decades.
CHRIS POLAND, CHAIRMAN AND CHIEF EXECUTIVE OFFICER, DEGENKOLB ENGINEERS AND CHAIRMAN, NEHRP ADVISORY COMMITTEE
I am testifying on behalf of the 140,000 members of the American Society of Civil Engineers (ASCE). At ASCE, I am Chairman of the Infrastructure and Research Policy Committee. Additionally, I serve as Chairman of Degenkolb Engineers and as Chairman of the National Earthquake Hazards Reduction Program (NEHRP) Advisory Committee. I am a registered civil and structural engineer and have worked for more than 35 years as an advisor on government programs for earthquake hazard mitigation and in related professional activities.
It also must be recognized that resilience is not just about the built environment. It starts with individuals, families, communities, and includes their organizations, businesses, and local governments. In addition to an appropriately constructed built environment, resilience includes plans for post event governance, reconstruction standards that assure better performance in the next event, and a financial roadmap for funding the recovery.
While the nation can promote resilience through improved design codes and mitigation strategies, implementation and response occur at the local level. Making such a shift to updated codes and generating community support for new policies are not possible without solid, unified support from all levels of government.
The federal government needs to set performance standards that can be embedded in the national design codes, be adamant that states adopt contemporary building codes including provisions for rigorous enforcement, provide financial incentives to stimulate mitigation that benefits the nation, and continue to support research that delivers new technologies that minimize the cost of mitigation, response, and recovery. Regions need to identify the vulnerability of their lifeline systems and set programs for their mitigation to the minimum level of need. Localities need to develop mandatory programs that mitigate their built environment as needed to assure recovery.
[In response to a question about how prepared we are on a scale of 1 to 100 for resiliency, preparation, and recovery]: Are we prepared? No. I would say maybe 10. In areas of very high seismicity in California, Oregon and Washington, there have been building codes in place for 20 years that are going to help people be safe. In other parts of the country that we talk about, those things are not in place. On a scale of safety, I believe that California would be maybe 50 or 60. On a scale of resilience, to be able to recover quickly and not have a significant impact on the national economy, we are still down in the 10-20 range.
The vast majority of our building stock and utility systems in place today were not designed for earthquake effects let alone given the ability to recover quickly from strong shaking and land movement. Earthquake Engineering is a new and emerging field and only since the mid-1980s has sufficient information been available to assure safe designs. Design procedures that will assure resilience are just now being developed. Strong, community destroying earthquakes are expected to occur throughout the United States. In most regions outside of California, little is being done about it. While modern building codes and design standards are available, they are not routinely implemented on new construction or during major rehabilitation efforts because of the complexity and cost. Many communities do not believe they are vulnerable and if they do accept the vulnerability, find the demands of seismic mitigation unreachable.
The problem of implementation and acceptance does not just lie with the public, but also with the earthquake professionals. Because this is an emerging area of understanding, conservatism is added whenever there is significant uncertainty. Earth Science research has made great strides in identifying areas that will be affected by strong shaking. Unfortunately, each earthquake brings different styles of shaking and building performance. This leaves many structural engineers generally uncertain about what causes buildings to collapse, and unwilling to predict the extent of damage that will occur, let alone whether a building will be usable during repairs or if lifeline systems can be restored quickly enough. Resilience demands transparent performance and significant earthquake science and earthquake engineering research and guideline development is needed to bring that ability to communities.
Comprehensive worldwide monitoring and data gathering related to earthquake intensity and impact. Extensive instrumentation is needed to adequately record the size and characteristics of the energy released and the variation in intensity of strong shaking that affect the built environment. We are lucky if we obtain a handful of records for entire cities but in reality thousands are needed to record the dramatic differences that occur and to understand the damage that results. In addition, the geologic changes that occur due to faulting, landslides, and liquefaction need to be surveyed, recorded, and used to understand the future vulnerability of the built environment to land movement. A network of observation centers is needed to record, catalogue and maintain information related to the impacts on society, and the factors influencing communities’ disaster risk and resilience. At present, earthquake engineering is based more on anecdotal observations of damage that are translated into conservative design procedures without the benefit of accurate data about what actually happened. In my mind, expanded monitoring is the single most important area that will reduce the cost of seismic design and mitigation that will allow us to achieve greater resilience.
An Overarching Framework that defines resilience in terms of Performance Goals Resiliency is all about how a community of individuals and their built environment weather the damage, respond and recover. It is more about improvisation and redundancy than about how any single element or system performs. Buildings and systems are designed one structure at a time for the worst conditions they are expected to experience. This approach worked well when life safety was the goal, and there was no need to consider the overall performance of the built environment. Resiliency, however, demands that performance goals and their interdependencies are set at the community level for the classes of structures and systems communities depend during the recovery process. Facilities providing essential services during post-earthquake response and recovery must function without interruption. Electric power is needed before any other system can be fully restored. Emergency generators can only last a few days without additional deliveries of fuel. Power restoration, however, depends on access for emergency repair crews and their supplies. Community level recovery depends on neighborhoods being restored within a few weeks so the needed workforce is available to restart the local economy. People must be able to shelter in place in their homes, even without utilities, but cannot be expected to stay and work after a few days without basic utility services. To ensure that past and future advances in building, lifelines, urban design, technology, and socioeconomic research result in improved community resilience, a framework for measuring, monitoring and evaluating community resilience is needed. This framework must consider performance at various scales-e.g., building, lifeline, and community-and build on the experience and lessons of past events. Only the Federal government can break the stalemate related to setting performance goals that if left alone will eventually cripple the nation.
Senator David Wu, Oregon. As an Oregonian, I am particularly concerned with the prospect of a similar disaster occurring in the Pacific Northwest. Off the coast of Oregon, Washington and northern California, we have the Cascadia subduction zone, and this fault is currently locked in place, but research over the last 30 years indicates that the same stress now accumulating has been released as a large earthquake once about every 300 years dating back to the last ice age about 12,000 years ago. The last Cascadia earthquake occurred 309 or 310 years ago. It was a magnitude 9.0 earthquake, the same destructive magnitude as the one that stuck Japan. All indications show that we Oregonians can expect another quake any time. It is a matter of when, not a matter of if.
When the next earthquake occurs on our fault, there will be prolonged shaking, perhaps for as long as five minutes, with the potential to collapse buildings, create landslides, and destroy water, power, and other crucial infrastructure and lifelines. Such an earthquake will also likely trigger a devastating tsunami that could overwhelm the Oregon coast in less than 15 minutes, resulting in potentially thousands of fatalities and billions of dollars in damage. Unfortunately, this type of disaster scenario is not limited to the Western United States. In fact, more than 75 million Americans across 39 states face significant risk from earthquakes.
JACK HAYES, DIRECTOR, NATIONAL EARTHQUAKE HAZARDS REDUCTION PROGRAM, NIST. Since the beginning of 2010, we have witnessed horrific losses of life in Haiti (over 230,000) and Japan (toll still unknown but numbering in the tens of thousands) due to the combined earthquake and tsunami impacts, and lesser, but nevertheless significant, losses of life in Chile and New Zealand. The toll in terms of human life is overwhelming, and we all offer our heartfelt sympathy to those nations and their citizens.
Haiti and Chile earthquakes provided a stark contrast in the effectiveness of modern building codes and sound construction practices. In Haiti, where such standards were minimal or non-existent, many thousands were killed in the collapses of homes and other buildings. In Chile, with much more modern building codes and engineering practices, the loss of life, while still tragic, was far smaller, about 500, despite the fact that the Chile earthquake had a significantly higher magnitude of 8.8 (M8.8) than the Haiti earthquake (M7.0). The fault rupture that caused the Chile earthquake released approximately 500 times the energy released in the Haiti earthquake. The Chilean building code provisions had been based in large part on U.S. model building codes that have been developed by researchers and practitioners who have been associated with and supported by NEHRP. Scientists and engineers have not yet had enough time since the 2011 earthquakes in New Zealand (M6.3) and Japan (M9.0) to draw detailed conclusions. We do know that Japan and New Zealand are international leaders in seismology and earthquake engineering—we in the U.S. partner with our counterparts in both countries, because we have much to learn from one another. Despite their technical prowess, leaders in both countries have been taken aback by the amount of damage that has occurred. One lesson we take from this before we even begin detailed studies is that we still have much to learn about the earthquake hazards we face and the engineering measures needed to minimize the risks from those hazards. Assuming that we already know everything we need to know is the surest strategy for catastrophe. The other broad lesson that has already become clear from both of these events is that local, and indeed national, resilience —to recover in a timely manner from the occurrence of an earthquake or other hazard event—is vital, going far beyond the essential, but narrowly focused, issue of ensuring life safety in buildings and other locations when an earthquake occurs. In Christchurch, NZ, the central business district has been largely closed since the February 21 earthquake, severely impacting the local economy. Some reports indicate as many as 50,000 people are out of work as a result of this closure. In Japan, the impact of the March 11 earthquake and resulting tsunami have been far worse on the national economy, with energy, agriculture, and commercial disruptions of monumental proportions. Some estimates already put the economic losses over $300 billion, and economic disruption is certain to continue for years and extend far beyond Japan’s shores.
The 2010 and 2011 events followed decades or even centuries of quiescence on the faults where they struck and are sobering reminders of the unexpected tragedies that can occur. The USGS has recently issued updated assessments of earthquake hazards in the U.S. that provide appropriate perspectives for us. For example, in 2008, the USGS, the Southern California Earthquake Center (SCEC), and the California Geological Survey (CGS), with support from the California Earthquake Authority (CEA), jointly forecast a greater than 99% certainty of California’s experiencing a M6.7 or greater earthquake within the next 30 years.
The recent New Zealand earthquake, at M6.3, is slightly less severe than that which is postulated for California. The recent Chile and Japan earthquakes, at M8.8–M9.0, occurred in tectonic plate collision zones where one plate overrides another; that characteristic is closely comparable to those which generated 1964 Alaska earthquake and more ancient earthquakes off the coasts of Oregon and Washington, in the Cascadia Subduction Zone. Seismologists thus believe that what we have recently observed in Chile and Japan should serve as clear indication to us for what may likely occur again someday off the Alaska, Oregon, and Washington coasts.
While concern for future earthquake activity is always great along our West Coast, the National Research Council has noted in its publications that 39 states in the U.S. have some degree of earthquake risk, with 18 of those having high or very high seismicity. In 2011 and 2012, earthquake practitioners and state and local leaders in Memphis, St. Louis, and other Midwestern locales will participate in events that will commemorate the bicentennial anniversary of the New Madrid sequence of earthquakes, which included at least four earthquakes with magnitudes estimated at 7.0 or greater.
If a southern California earthquake severely damaged the ports of Los Angeles and Long Beach, as happened to the port of Kobe, Japan, in 1995, there would be national economic implications. Similarly, if a major earthquake occurred in the Central U.S., one or more Mississippi River transcontinental rail or highway crossings in the Saint Louis to Memphis region, as well as oil and natural gas transmission lines could be severely disrupted.
In 2008, the USGS, California Geological Survey, and Southern California Earthquake Center produced a plausible scenario of a rupture of the southern end of the San Andreas fault that could result in about 1,800 deaths, 50,000 injuries, and economic losses exceeding $200 billion in the greater Los Angeles area. This scenario formed the basis for the 2008 Great Southern California Shakeout earthquake preparedness and response exercise.
JIM MULLEN, DIRECTOR, WASHINGTON STATE EMERGENCY MANAGEMENT DIVISION AND PRESIDENT, NATIONAL EMERGENCY MANAGEMENT ASSOCIATION
Response & Recovery. A major event involving multiple disciplines is complex and difficult to manage. While firefighters, law enforcement officials, and emergency medical personnel often constitute the traditional first responders, emergency managers provide the all important coordination function. This coordination far exceeds the initial response as emergency managers also maintain responsibility for the transition from the lights and sirens of response into the complex and often long-term efforts of recovery. Once an event occurs, the response is a three-tiered process of escalation where the level of support is directly related to the need of the impacted jurisdiction. The initial response is at the local level where first responders and local emergency managers provide assistance. Should the incident exceed the capacity of those local responders, the state may offer assistance in myriad ways including personnel, response resources, financial support, and mutual aid. On rare occasions, an event will even overwhelm the state’s ability to mount an effective response. This is usually the only time in which the Federal Emergency Management Agency (FEMA) is called upon to offer assistance. FEMA assistance is triggered by a direct request from the Governor to the President. Should the President deem the event worthy of federal assets, a Presidential Disaster Declaration is declared and FEMA can provide assistance such as assets from the Department of Defense, financial aid, and expertise. Disaster assistance from FEMA traditionally comes in one of three forms. The first is the Public Assistance (PA) Program which provides supplemental financial assistance to state and local governments as well as certain private non-profit organizations for response and recovery activities required as a result of a disaster. The PA Program provides assistance for debris removal, emergency protective measures, and permanent restoration of infrastructure. Federal share of these expenses are typically not less than 75 percent of eligible costs. The PA Program encourages protection from future damages by providing assistance for Hazard Mitigation
VICKI MCCONNELL, DIRECTOR, OREGON DEPARTMENT OF GEOLOGY AND MINERAL INDUSTRIES
Oregon’s Department of Transportation published in 2009 the Seismic Vulnerability of Oregon State Highway Bridges: Mitigation Strategies to Reduce Major Mobility Risks. This study incorporates FEMA HAZUS risk assessment modeling funded by NEHRP as well as NEHRP soil conditions data to determine peak ground acceleration (PGA). Their findings indicate that 38% of state-owned bridges in western Oregon would fail or be too heavily damaged to be serviceable after a magnitude 9.0 earthquake and that repair or replacement would take 3–5 years essentially cutting the Oregon coastal communities off from the rest of the state.
Chairman QUAYLE. Mr. Poland, in your testimony you compared the different results of the earthquakes that occurred in Haiti and Japan, and even what happened in the Northridge quake, and the quake that occurred in San Francisco. You mentioned that it would be cost-prohibitive to retrofit buildings across the United States. What is your suggestion to minimize the repercussions of an earthquake? Do you mostly look at where different communities lie along faults? For example, a city is close to the San Andreas fault, you obviously take different things into account than cities in middle America located away from the New Madrid fault line.
Mr. POLAND. The biggest problem we have is that the built environment that we have right now in the country has not been designed for earthquake effects, both in terms of public safety and in terms of being able to recover and resiliency. And so the biggest problem we have is, what do we do with 85 or 90 percent of our buildings and systems that are not adequate for the kind of performance that we want. When I spoke about it being cost-prohibitive, I was speaking about retrofitting those buildings and those systems so that they can perform properly, and that is what costs so much money.
Mr. WU. My second question is that we do have a number of nuclear reactors that are sitting on active seismic zones, and I believe one of them is on the West Coast. Can you all comment on what can be done to build resiliency and recovery into these nuclear facilities? You know, what we found in Japan is that it wasn’t the earthquake, it was the tsunami and the loss of electricity and it affected both the reactor itself and the fuel that was stored in pools on top of the reactor facility. Can you all comment on how we can do a better job with our own nuclear facilities?
Dr. HAYES. NEHRP itself does not address the nuclear facilities in the United States. That is the responsibility of the Nuclear Regulatory Commission and the Department of Energy.
Mr. POLAND. I would just like to add that the design process that has been done for nuclear power plants since their inception has been extraordinarily rigorous and much more detailed and much more carefully done than for any other kind of construction by many orders of magnitude. Our facilities, our nuclear facilities from a standpoint of strong shaking are the safest buildings that we have in the Nation. The problem in Japan, as you mentioned, had to do with the tsunami, and it wasn’t that they didn’t think they were going to have a tsunami. They had a wall. The wall wasn’t tall enough. The backup systems didn’t work as well as they thought that they would.
Mr. SARBANES. Okay. Humans are notoriously shortsighted about everything, and even with the earthquake activity of recent days, we will get back to being shortsighted even on this question, and I wonder if you could speak to—I mean, I would imagine if you went to any budget hearing at a local level, at a city, municipality level or at the state level if earthquake preparation and resiliency was even on the budget document, it would be on the last page on the last line because there are so many other things obviously that are pulling on our resources and our attention. So it makes me wonder how much—and I think you have spoken to this a little bit, but the opportunity to piggyback the kinds of things you want to see done onto other kinds of initiatives that are out there that have greater priority in the minds of planners and budgeters and all the rest of it so that you can kind of come along with a little bit, of leverage and not so much add a cost, say, well, as long as you are doing X, Y and Z, why not add this into the mix, and that can go to codes and building standards and so forth. But it also could go particularly well with community resiliency planning, and I wonder if you could speak to that and maybe throw in whether sort of green building codes and sustainable building codes are ones where there can be some added elements with respect to resiliency and so
Mr. MULLEN. I will tell you that on the West Coast, there are significant discussions taking place in local communities about earthquakes and tsunami threats and measures that should be taken. One of the things we haven’t really talked about is the importance of the general public understanding not only the risk they face but the measures they can take to protect themselves. I am very enthusiastic about getting a warning about something that might be coming like the tsunami warning we got a few weeks ago really helped us but the type of events, the no- notice events that we would deal with in the central Puget Sound or in Oregon or on the coast, they are not going to get a lot of warning for an earthquake. One of the things that we need to do is make sure people are prepared to take the protective steps that they need immediately. They need to be able to drop cover and hold. They need to know that they have got—that they need to have some resources for themselves. And on the coast, we have been working hard with the communities about their evacuation programs, knowing what it means to move quickly. The ground motion in an earthquake that is right off our coast is your signal. We also have an elaborate system of warning systems that we can activate to tell people to move to high ground. The difficulty we have, the challenge that communities have as they prepare with us and they have worked with us is there is not a vertical evacuation site that is necessarily readily available to every community, and so we have been trying to plan for the type of vertical evacuation structure that would be necessary on the coast in the Port of Los Angeles or Long Beach or Ilwaco where those folks can get to a place of safety which may not be the warmest, driest place but it will at least be above any kind of potential wave. That is an important step. There is no such structure right now but the communities are planning with it. I think the key to this whole thing that you are getting at in terms of where people are, and I would not hazard a guess about the scale because I would just be making something up. I will tell you if you educate people about the risks that they face and you level with people about what they can do to protect themselves and their families, whether it is the average citizen, someone running a business or the emergency management community or the local elected officials, you begin to generate the kind of interest that will get people looking at this as another issue that they have to deal with and move it up on that committee agenda. The national-level exercise I spoke of in my testimony is an attempt in the Midwest, in eight Midwestern states to begin to educate people at the same time that we are determining whether our doctrines and plans are going to work for us or not. That will be an extremely challenging exercise. We expect failure to occur because we want to find out what our condition is. So we are very eager to find out where we are weak, where we have got strengths and make sure we capitalize on the strengths and shore up the weaknesses. | <urn:uuid:3954be63-c4c0-4316-91d0-aefaa5243621> | CC-MAIN-2018-05 | http://energyskeptic.com/2016/earthquakes-in-california/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886830.8/warc/CC-MAIN-20180117063030-20180117083030-00696.warc.gz | en | 0.955658 | 6,681 | 3.03125 | 3 |
Earlier this month, the UN Special Rapporteur on violence against women, Rashida Manjoo issued a statement during her visit to the U.S. scrutinizing the U.S. for its continued failure to prosecute perpetrators of sexual violence crimes against Native American and Alaska Native women and girls.
Consistent with Amnesty International’s findings in 2007’s “Maze of Injustice” report documenting the epidemic of sexual violence in Indian Country, Manjoo met with tribal leaders and advocates, who confirmed Amnesty’s own findings – including Department of Justice statistics citing that 86% of perpetrators of sexual violence against Native women and girls are in fact, non-Native men.
This horrific statistic is an all too familiar, frightening daily reality for Native women – particularly as tribal courts still have no jurisdiction to prosecute non-Native offenders, often leaving survivors of sexual violence without access to justice or redress for crimes committed against them.
As we celebrate International Women’s Day all this week, it is all too clear that the U.S. still has a long way to go in addressing this epidemic of sexual violence against Indigenous women here in the U.S.
But it is equally important to note and applaud the significant, albeit long-awaited, successes of the past year – including President Obama’s historic signing of the Tribal Law and Order Act last July, and the President’s endorsement of the UN Declaration on the Rights of Indigenous Peoples (UNDRIP) in December 2010. Both Congress and the Administration have demonstrated their commitment to improving public safety and justice services in Indian Country – and we must now ensure that the policies and programs provided in critical legislation such as the Tribal Law and Order Act are not only fully funded, but are also consistent with the provisions of the UNDRIP.
Many important strides have been achieved since the Maze of Injustice report launched Amnesty’s effort to join the countless other tribal leaders, Indigenous rights, and women’s advocates who have worked hard to bring to light the shocking crimes of sexual violence against Native women that have been left in the shadows for far too long.
This International Women’s Day and week, we honor those advocates and those survivors whose incredible strength and efforts continue to drive this work, and these successes. All women have the right to feel and be safe and secure in their own communities.
We have a long way to go – but with your continued advocacy and efforts, we can get there. | <urn:uuid:9204a98c-ee93-4279-878f-e318720c0e8f> | CC-MAIN-2015-11 | http://blog.amnestyusa.org/americas/the-fight-to-end-sexual-violence-against-indigenous-women-and-girls-in-the-u-s/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461359.90/warc/CC-MAIN-20150226074101-00103-ip-10-28-5-156.ec2.internal.warc.gz | en | 0.952102 | 508 | 2.546875 | 3 |
On Jul 3, 2014, at 9:30 PM, Anna Roys <email@example.com> wrote:
> What I find as an interesting phenomenon in working with students and rhythm is that even many pre-schoolers and early elementary students can hear and duplicate and play along with complicated rhythms if they have enough hands on exploration time with percussion instruments, such as in drum circles. Why is this possible without any "training?" Could it be because over time their audio processing abilities are refined?
In AI one of the philosopher?s stones has to do with the ability of the brain to recognize any pattern, even in noise, immediately. This is especially obvious with our sense of hearing. That is why sonar is still ultimately a human skill. To do this computationally (with a computer) takes enormous time because you have to compare every sample point to every other sample point and even then, there is no effective way to do this unless you know before hand what you are looking for. Yet the brain accomplishes this instantly without having to know what it is looking for. Neurons, neural nets and repeated firing are obviously in play, but the impressive factor is the lack of needing to know what to look for. Play two beats almost identical, but slightly different, and the difference is immediately detectable, especially if the beats are themselves in a pattern. This suggests that the neural net in the brain is so dense that every possible pattern (within some limit) is already ac! counted for, and recognition is just a matter of presenting the pattern to an already wired brain. This pre-wired ability goes deep and is behind our ability to acquire language so quickly and is so sensitive that when driving down a noisy road, we are fooled by any regularities in the noise and they sound like words.
I would say that the youngsters? ability to recognize rhythm is already there, and while some ear training is in play, the majority of the training taking place is muscle coordination and timing, In other words the training required for them to mechanically create the rhythms with the instruments.
As an analogy, our immune system is already programmed to recognize every possible contagion (within our existence on earth) but even though it can recognize the contagion, it might not react quickly the first time it is exposed, but through training, it can react more quickly the second time and subsequent times. The training isn?t so much for the recognition as it is for the reaction. | <urn:uuid:f22b81bd-dbf7-44e7-a7f2-270c78b3cea9> | CC-MAIN-2016-30 | http://mathforum.org/kb/message.jspa?messageID=9510645 | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823996.40/warc/CC-MAIN-20160723071023-00092-ip-10-185-27-174.ec2.internal.warc.gz | en | 0.974779 | 502 | 3.25 | 3 |
The ancient Greeks and Romans have left footprints all around the world, and today we are still intrigued. How did they live? How were they entertained? How was their society structured? Why is this important to us?
Using gallery artefacts, take a stroll in the sandals of a citizen to understand the classical way of life and its influence on us today.
To book or to learn more, contact the Education team.
Social Sciences: Identity, Culture and Organisation; Continuity and Change | <urn:uuid:3b11a3db-ebe0-4539-b36b-e8c831cd77b1> | CC-MAIN-2018-17 | http://otagomuseum.nz/learn/programmes/programme/classical-life | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945668.34/warc/CC-MAIN-20180422232447-20180423012447-00402.warc.gz | en | 0.959436 | 100 | 2.59375 | 3 |
This Summer’s featured ingredient is the Red Raspberry.
The red raspberry (Rubus idaeus) is an edible fruit of the Rose Family. A raspberry is actually not a single berry, but a cluster of 50 to 150 tiny stone fruits, or drupelets, each containing a seed. 1
Red raspberries are indigenous to both Asia and North America. They are believed to have been first gathered wild on the Asian continent by the Trojans in the foothills of Mt Ida in ancient Turkey. Roman agriculturalist Palladius wrote about their domestication in the 4th century. Romans are thought to have initiated the spread of raspberry cultivation across Europe. 2
In early Medieval Europe, raspberries were mainly eaten by royalty. Their juices were used in paintings and manuscripts. All parts of the plant were used in medicines and topical treatments to treat such ailments as diarrhea and menstrual cramps. King Edward (1272-1307) is recognized as the first person to call for their general cultivation, and by the 1700s raspberries could be found growing in gardens throughout Europe.
American Red Raspberries were also used by the Native Americans in traditional medicines and eaten fresh or dried for ease of transportation. European settlers in America brought their cultivated raspberries to the new colonies. George Washington is said to have cultivated raspberries at his Mount Vernon estate. After the Civil War, major production of raspberries emerged across the country. Today, the leading producing regions for red raspberries are Washington, Oregon and California. 3
Raspberries are typically in season from June to October, depending on the region. Often local farms will offer a seasonal You-Pick berry patch one can visit with information online. In rural areas, there can be wild berry patches where one can forage. When first visiting a wild berry patch, go with an experienced berry picker who can positively identify the raspberries. While there has never been a clustered drupe berry (i.e. raspberry, blackberry, boysenberry) identified as poisonous, there are many berries that are poisonous! Never eat a berry that has not been identified as edible.
When picking red raspberries, select plump, firm, fully red berries. Unripe berries will not ripen once picked. Gently grasp the berry with your fingers and thumb, and tug gently. If it is ripe, the berry will easily come off in your hand, leaving the center part attached to the stem. Avoid touching the stems of the berry plants, as they can have tiny stickers. Fill containers no more than 3 inches deep to protect the berries. Refrigerate berries as soon as possible after picking for up to three days. Wash berries only when ready to use them, as they will quickly mold after exposure to water. Surplus berries can be frozen in airtight containers or freezer bags for up to 3 months. 4
Red raspberries are a rich source of manganese, vitamin C, fiber and numerous bioactive compounds that research indicates may have antioxidant, cardio-protective, neuro-protective, and anti-cancer effects. Raspberries also contain B vitamins, vitamin K, potassium, calcium, iron and magnesium.
This Summer’s Recipe is for Raspberry Chiffon Cake. With the richness of pound cake and the lightness of an angel food cake, chiffon cake is the best of both worlds.
Celebrate National Raspberry Cake Day this July 31st with our light raspberry chiffon cake, covered with a mountain of fresh raspberries, topped with raspberry preserve glaze, toasted almonds, and a generous dollop of whipped coconut cream. It’s the perfect fresh summer dessert.
Raspberry Chiffon Cake
Preparation Time: 30 minutes
Baking Time: 60-80 minutes
Cooling Time: 2 hours
Yield: 8-12 servings
1 1/2 cups aquafaba
2 cups cake flour
1 1/2 cups fine granulated sugar
3 teaspoons baking powder
3/4 teaspoon salt
3/4 cups fresh raspberry juice
optional: red food coloring
1/2 cup oil
1 teaspoon cream of tartar
seedless raspberry preserves thinned with a little fresh lemon juice into a glaze
whipped coconut cream with maple syrup & vanilla to taste
toasted sliced almonds
2 bowls, large
2 bowls, medium
strainer, fine mesh
tube pan, preferably with removable bottom & feet
1. Position oven rack in lower middle of oven. Preheat oven to 325F.
2. Strain aquafaba from chickpeas. Store chickpeas for future use.
3. Sift together flour, sugar, baking powder and salt through fine mesh strainer into the other large bowl.
4. Place strainer into the second medium bowl, and juice raspberries by squeezing raspberries in hands to extract juice.
5. Add raspberry juice to optional red food coloring, if a pink-red cake is desired.
6. Add cream of tartar to aquafaba. With the second very clean whisk, whip to soft peaks.
7. Slowly add 2 Tbsp sugar to aquafaba, and whip just to stiff peaks.
8. Whisk 1/3 of whipped aquafaba into batter.
9. Gently fold remaining whipped aquafaba into batter with rubber spatula until well combined.
10. Gently fill 10″ ungreased tube pan with batter.
11. Bake 60-80 minutes until springs back to light touch, top cracks and appears dry. Unlike most cakes, it is better to over bake slightly than under bake slightly to ensure the cake is adhered to the sides of the cake mold, and will survive inversion.
12. Immediately invert pan on clean counter, and cool completely, ~2 hours.
13. To unmold cake, turn pan right side up, and run knife around outer and inner edges.
14. Grasp inner tube and pull cake out of pan onto counter. Cut bottom free.
15. Invert cake onto serving plate. Gently twist tube to remove.
16. Cut slices of cake and place on individual plates or bowls. Top with whipped coconut cream, raspberries, raspberry glaze, and toasted sliced almonds. | <urn:uuid:9c9fd3eb-10f5-43a7-b051-04fa8fe6dd67> | CC-MAIN-2020-16 | https://funfoodfeed.com/2014/07/01/raspberry-chiffon-cake/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371606067.71/warc/CC-MAIN-20200405150416-20200405180916-00535.warc.gz | en | 0.92384 | 1,316 | 3.171875 | 3 |
More Info on Amoxicillin's Indications
Amoxicillin belongs to a group of medications known as aminopenicillins, which is part of a larger group of medications known as beta-lactam antibiotics (named after the ring-like "lactam" structure of these antibiotics). It works by stopping bacteria from making cell walls, which eventually causes the bacteria to die.
Amoxicillin is approved for use in children, including very young infants. Be sure to talk to your healthcare provider about using amoxicillin in children. Amoxicillin is available in liquid form and as chewable tablets for use in children.
On occasion, your healthcare provider may recommend amoxicillin for something other than the conditions discussed in this article. Amoxicillin is frequently used to treat many other types of infections, particularly if they are caused by bacteria that are susceptible to amoxicillin. Also, using the drug to prevent (instead of treat) any type of infection is considered to be an off-label amoxicillin use. | <urn:uuid:89ac374f-fb6a-477a-9ca7-d401604668b4> | CC-MAIN-2014-41 | http://antibiotics.emedtv.com/amoxicillin/amoxicillin-uses-p2.html | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657114926.36/warc/CC-MAIN-20140914011154-00162-ip-10-196-40-205.us-west-1.compute.internal.warc.gz | en | 0.936523 | 211 | 3.203125 | 3 |
Extreme Ice Survey’s Videos on Vimeo. Subglacial bedforms. Chris Clark is a professor of Palaeoglaciology at the University of Sheffield.
His research interest is glacial geomorphology (the landforms that glaciers and ice sheets produce), which he discusses in this series of videos. You can read more about the presenter here. Subglacial bedforms. A Time Lapse Reveals The Retreating South Cascade Glacier in Washington. Glaciers melting in time lapse photography. "CHASING ICE" captures largest glacier calving ever filmed - OFFICIAL VIDEO. Animated Kids (Children) Education Video. What is an Ice Age? What is a Glacier? Formation of an iceberg - Frozen Planet - BBC One. Glacier Calving, Huge Wave. How do glaciers shape the landscape? Animation from geog.1 Kerboodle. Arctic Glacier collapses . Too close for comfort.
Inside a glacier - Earth - The Power of the Planet - BBC. Glaciation in action - Frozen Planet - BBC One. Best Documentary 2016 The Secrets Of Antarctica Earth Underwater. Ice Glaciers Documentary The Discovery Channel. | <urn:uuid:1eb84c08-fa37-44fd-908a-233aab307cbd> | CC-MAIN-2018-17 | http://www.pearltrees.com/robgeog/glacier-video-clips/id16171493 | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948285.62/warc/CC-MAIN-20180426144615-20180426164615-00184.warc.gz | en | 0.817939 | 230 | 3.734375 | 4 |
How to Draw an Action Button Hyperlink on Your PowerPoint 2007 Slide
On the Home or Insert tab, open the Shapes gallery.
The Action Buttons category appears at the bottom of the gallery.
Click an action button.
The action button is selected.
Draw the button on the slide.
To do so, drag the pointer in a diagonal fashion. The Action Settings dialog box when you finish drawing your button.
(Optional) Select the Mouse Over tab.
Choose this if you want users to activate the button by moving the mouse pointer over it, not by clicking it.
Select the Hyperlink To option button.
Here you can select what to link to.
On the Hyperlink To drop-down list, choose the action you want for the button.
You can go to the next slide, the previous slide, the first or last slide in a presentation, the last slide you viewed, or a specific slide. To make clicking the action button take users to a specific slide, choose Slide on the list. You see the Hyperlink to Slide dialog box, which lists each slide in your presentation. Select a slide and click OK.
To play a sound when your action button is activated, click the Play Sound check box and select a sound on the drop-down list.
Mouse-over hyperlinks need sound accompaniment so that users understand when they have activated an action button.
Click OK in the Actions Settings dialog box.
To test your button, switch to Slide Show view and click it. | <urn:uuid:51003009-9785-499c-88c0-91e90831813e> | CC-MAIN-2019-18 | https://www.dummies.com/software/microsoft-office/powerpoint/how-to-draw-an-action-button-hyperlink-on-your-powerpoint-2007-slide/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721468.57/warc/CC-MAIN-20190425134058-20190425160058-00300.warc.gz | en | 0.749459 | 313 | 3.0625 | 3 |
Most people will worry if an unpleasant event has just happened and it involved something or someone very important to them. Suddenly losing your money, having a hurtful argument with someone close to you, having an automobile accident, or making a mistake, will naturally result in your mind trying to cope with the feelings that those events aroused.
Similarly, you will probably worry if a highly probable unwanted event is coming your way. Your mind may try to work out how to avoid a bad outcome if:
You have to drive in very bad weather
A sudden large expense occurs
There is real evidence that your spouse is no longer as loving as he/she used to be
You are facing an important challenge at work or in your social life where poor performance on your part is a real possibility
If worrying is a natural response of the mind to disagreeable events that have happened or have some likelihood of happening, when is worry undesirable? How much worry is too much?
There is no absolute answer to this question, but there are some good general guidelines. While thinking about a past bad event might be natural shortly after it occurs, constantly thinking about it long afterwards is not adaptive for you. If there is nothing that can be done about the past, it is time to let go of it and get on with your life.
When faced with upcoming problems, anticipating the future and planning ways to avoid bad events and create good events are adaptive behavior (everyday living skills that are learned), but constant thinking about possibilities is not useful.
Worry is a problem if:
Your thinking is causing intense emotional distress and has been interfering with your daily functioning for some time
In general, it is not quickly or clearly providing solutions.
In the case of "What if...?" worries, there is another useful guideline: Worry is natural only to the extent that the feared future event is really likely to happen.
If a spot occurs on your skin, it is wise to have a physician take a look at it. "What if it is cancer?" may be an adaptive thought that leads to the adaptive response of seeing a physician.
To worry about it very much in the meanwhile, after making an appointment with the physician, would be nonproductive, because the likelihood of actual cancer is low.
To worry about it after the physician says that it is not cancer is even less adaptive, because the likelihood of cancer is then extremely low.
So in general, worry is maladaptive if the things you worry about are not very likely to happen.
Even for future bad events that are quite likely to happen, worry may not be useful and will simply cause additional disturbance. This is the case when you have done all the problem solving you can do before the event and there is nothing more to do about it.
Of course it is natural for the mind to periodically be reminded about the upcoming event until it is over. But if you’ve done all you can reasonably do in preparation for it, to continue to allow yourself to constantly think about it merely causes more distress and interference with the rest of your life. So, although the worrying here may be natural, it is not helpful, and applying methods to reduce it would be useful.
Need To Know:
Worrying Is A Habit
It is important to remember that worrying is a habit. A habit is something that is repeated involuntarily.
Habits are developed because you have practiced doing them so often that you just start doing them without being aware of it. Worrying can become a mental habit. If worrying is a common problem for someone, it is partly because that person has done it a lot in the past. This fact will have some implications for how to reduce the habit of worrying. | <urn:uuid:5bcc6d90-f754-4390-aaff-3937373f3a4e> | CC-MAIN-2015-18 | http://ehealthmd.com/content/when-worry-too-much | s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246646036.55/warc/CC-MAIN-20150417045726-00301-ip-10-235-10-82.ec2.internal.warc.gz | en | 0.975308 | 762 | 2.59375 | 3 |
Summary: A number of patients in intensive care units for non-brain-related illnesses may suffer from cognitive dysfunction. Inflammation that occurs as a result of infection and problems with oxygen flow to the brain which occur when breathing is affected, could contribute to cognitive impairments.
Source: University of Western Ontatio
A new study led by Western University and Lawson Health Research Institute has found that most patients entering hospital intensive care units (ICU) for non-brain-related injuries or ailments also suffer from some level of related cognitive dysfunction that currently goes undetected in most cases.
The findings were published today in the influential scientific journal, PLOS ONE.
Many patients spend time in the ICU for reasons that have nothing to do with a known brain injury, and most health care providers and caregivers don’t have any evidence to believe there is an issue with the brain. For example, a patient may have had a traumatic injury that does not involve the brain, yet still requires breathing support to enable surgeons to fix damaged organs, they may have issues with their heart or lungs, they may contract a serious infection, or they may simply be recovering from a surgical procedure like an organ transplant that has nothing directly to do with their brain.
For the study, Western investigators from the Schulich School of Medicine & Dentistry and the Brain and Mind Institute and researchers from Lawson assessed 20 such patients as they left the ICU and every single patient had detectable cognitive deficits in two or more cognitive areas of investigation, including memory, attention, decision-making and reasoning. Again, this is in spite of the fact that, on the face of it, they had no clear brain injury.
The discovery was made using online tests, developed by renowned Western neuroscientist Adrian Owen and his teams at the Brain and Mind Institute and BrainsCAN, which were originally designed to examine cognitive ability in patients following brain injuries but for this purpose, are being used to detect cognitive deficits in people who have spent time in an intensive care unit without a diagnosed brain injury.
“Many people spend time in an intensive care unit following a brain injury and, of course, they often experience deficits in memory, attention, decision-making and other cognitive functions as a result,” explains Owen, a professor at Schulich Medicine & Dentistry. “In this study, we were interested to see how patients without a specific brain injury fair after leaving the ICU. The results were astonishing.”
Why cognitive ability declines even in non-brain related visits to the ICU likely varies from patient to patient, but Dr. Kimia Honarmand from Schulich Medicine & Dentistry says the lesson to be learned is that many conditions affect brain function, even though they might not directly involve the brain.
“If you are having trouble breathing, your brain may be starved of oxygen. If you have a serious infection, the inflammation that occurs as a result of infection may affect brain function. If you are undergoing major surgery, you might be given drugs and have procedures that may affect your breathing, which in turn may affect the flow of oxygen to the brain,” explains Dr. Honarmand. “What we have shown here is that all or any of these events can lead to deficits in brain function that manifest as impairments in cognition. And healthy cognition is a vital determinant of functional recovery.”
Dr. Marat Slessarev, Lawson Scientist, says these findings can shift how the medical community treats incoming patients and more importantly, outpatients following ICU visits.
“Historically, the clinical focus has been on just survival. But now we can begin to focus on good survival,” says Dr. Slessarev, also an associate member at the Brain and Mind Institute and an assistant professor at Schulich Medicine & Dentistry. “These sensitive tests will enable doctors to both detect cognitive impairment and track cognitive performance over time, which is the first step in developing processes for optimizing brain recovery.”
University of Western Ontario
Jeff Renaud – University of Western Ontario
The image is in the public domain.
Original Research: Open access
“Feasibility of a web-based neurocognitive battery for assessing cognitive function in critical illness survivors” Kimia Honarmand, Sabhyata Malik, Conor Wild, Laura E. Gonzalez-Lara, Christopher W. McIntyre, Adrian M. Owen, Marat Slessarev, published 12 Apr 2019 PLOS ONE doi:10.1371/journal.pone.0215203
Feasibility of a web-based neurocognitive battery for assessing cognitive function in critical illness survivors
To assess the feasibility of using a widely validated, web-based neurocognitive test battery (Cambridge Brain Sciences, CBS) in a cohort of critical illness survivors.
We conducted a prospective observational study in two intensive care units (ICUs) at two tertiary care hospitals. Twenty non-delirious ICU patients who were mechanically ventilated for a minimum of 24 hours underwent cognitive testing using the CBS battery. The CBS consists of 12 cognitive tests that assess a broad range of cognitive abilities that can be categorized into three cognitive domains: reasoning skills, short-term memory, and verbal processing. Patients underwent cognitive assessment while still in the ICU (n = 13) or shortly after discharge to ward (n = 7). Cognitive impairment on each test was defined as a raw score that was 1.5 or more standard deviations below age- and sex-matched norms from healthy controls.
We found that all patients were impaired on at least two tests and 18 patients were impaired on at least three tests. ICU patients had poorer performance on all three cognitive domains relative to healthy controls. We identified testing related fatigue due to battery length as a feasibility issue of the CBS test battery.
Use of a web-based patient-administered cognitive test battery is feasible and can be used in large-scale studies to identify domain-specific cognitive impairment in critical illness survivors and the temporal course of recovery over time. | <urn:uuid:0344ea01-44db-444d-b9ed-f1e80c01254f> | CC-MAIN-2019-26 | https://neurosciencenews.com/cognition-brain-injury-11079/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998607.18/warc/CC-MAIN-20190618043259-20190618065259-00537.warc.gz | en | 0.946186 | 1,245 | 2.515625 | 3 |
Mew Gull: Medium-sized gull with gray back and upperwings, and white head, neck, breast, and belly. Bill is bright yellow. Wings have white-spotted black tips; tail is white. Feet and legs are dull yellow. Graceful, bouyant flight. Undulating, with several rapid wingbeats and a pause.
Range and Habitat
Mew Gull: Breeds from Alaska east to central Mackenzie and south to northern Saskatchewan and along the coast to southern British Columbia. Spends winters on the Pacific coast and along the boreal forest belt of Eurasia. Found in and along coastal ranges, tidal estuaries, interior lakes, and marshy grasslands.
The Mew Gull has an extensive breeding range, with three distinct forms that are sometimes considered different species.
Although it is a common bird along the Pacific Coast, it is a rarity in the East. Birds that appear along the Atlantic Coast are likely from Europe.
It is the only white-headed gull that regularly uses trees for nesting.
A group of gulls has many collective nouns, including a "flotilla", "gullery", "screech", "scavenging", and "squabble" of gulls. | <urn:uuid:d461068f-3c41-481a-ba39-a7c5781e5713> | CC-MAIN-2019-22 | http://bib.ge/chiti/open.php?id=1204&chiti=chiti3 | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259316.74/warc/CC-MAIN-20190526145334-20190526171334-00405.warc.gz | en | 0.949054 | 260 | 3.359375 | 3 |
Art & Science/ Mountaineering & Tourism/ Environmental Preservation
Beginning in the eighteenth and nineteenth centuries, enthusiasm for mountain landscapes expanded across Europe and into North America. Artists, scientists, and writers introduced the sublime splendor and natural history of alpine terrain, which was once believed to harbor demons and dragons.
Artists’ views of alpine landscapes helped popularize the revolutionary concept of Ice Ages, which advanced and receded over vast stretches of time through the movement of glaciers and ice sheets. Artworks contributed to knowledge about Earth’s expanding age and geological formations.
Artists’ images appeared in scientific publications, travelogues, popular magazines, and exhibitions. A passion for mountain climbing and tourism to alpine regions soon emerged. Collaborations between the arts and sciences stimulated a closer connection between people and nature. This influenced the emergence of groups like the Sierra Club (1892) and campaigns for environmental preservation.
Artists were commissioned to create mural-sized landscape paintings for natural history museums and schools of higher learning. These works helped students and the public visualize the movement of glaciers, which was key to understanding the process of ice age formation and retreat.
Mary Shelley’s famous novel Frankenstein: Or, the Modern Prometheus (1818), which was written after a trip to to Mont Blanc’s glaciers, takes place in the Alps and the Arctic.
Top banner image: Joseph Mallard William Turner, Mer de Glace in the Valley of Chamouni, Switzerland, 1803, watercolor and graphite with gum on wove paper, Yale Center for British Art, Paul Mellon Collection. | <urn:uuid:4a5ad805-50ba-42aa-8eea-25ce30ed1bea> | CC-MAIN-2023-14 | http://vanishingice.org/alpine-glaciers/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00509.warc.gz | en | 0.924661 | 333 | 3.84375 | 4 |
Glaucoma condition is part of a group of eye diseases which result in damage to the optic nerve resulting in loss of vision. It is the second most common cause of sight loss around the world, the number of people diagnosed with the condition is set to spike due to an aging population.
There are many types of Glaucoma such as Primary Open Angle, Acute Angle Closure, Secondary Glaucoma, Normal Tension and Congenital Glaucoma. Similar to AMD, Glaucoma is painless, so it also goes unnoticed for a while until drastic changes are noticed. It can take many years for the condition to develop, affecting your peripheral vision first. Open-Angle Glaucoma is the most common type and accounts for 90 percent of all cases of the condition.
Glaucoma affects the vision through a damaged nerve as the optic nerve carries sight to the brain so once this is damaged vision is lost. Glaucoma is not curable but early detection through eye tests, prevention and treatments could control it and prevent permanent sight loss.
Living with Glaucoma affects people in different ways, early diagnosis can make a huge difference, at Beacon we support people and help them realise their true potential and not let the condition limit their life. If you know somebody with sight loss living in the Black Country or Staffordshire and would like to know more about how we can help them, contact us on – 01902 880 111. | <urn:uuid:52067a43-aff0-4b37-8a63-e0a96d4e7411> | CC-MAIN-2022-21 | https://www.beaconvision.org/about-sight-loss/about-glaucoma/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662572800.59/warc/CC-MAIN-20220524110236-20220524140236-00416.warc.gz | en | 0.944344 | 311 | 2.96875 | 3 |
Get your kids motivated to do their schoolwork with these ideas!
No Cut and Paste Activities Here!
This is not a busy book.?
This is not just another activity book of throw away projects and old school games. Instead,?100 Ways to Motivate Your Kid helps children develop STEAM skills, compassion, creativity, and critical thinking through hands-on projects that link school subjects to the real world. Julie Polanco equips parents to be able to say, “Right now!” when their children ask the age old question, “When am I ever going to need this?”
Inside, parents gain fresh, developmentally appropriate ideas for:
- Developing strong observation skills
- Encouraging creative and flexible thinking
- Promoting strong initiative
- Inspiring critical thinking and problem-solving abilities
- Enhancing social skills while instilling empathy
Using low- and no-cost methods,?100 Ways gets kids talking, listening, exploring, and interacting with the world in new, enriching ways that add value. It’s the only handbook of ideas parents need to inspire their children to imagine, to create, and to change their communities. | <urn:uuid:dc3bf9db-6b3f-4a8a-969e-fae38499c0f7> | CC-MAIN-2019-26 | https://julienaturally.com/books/100-ways-for-how-to-motivate-kids/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997533.62/warc/CC-MAIN-20190616022644-20190616044644-00116.warc.gz | en | 0.900935 | 247 | 3.21875 | 3 |
The terse answer will be the class name mentioned as parameter will be loaded. Especially if we see in the JDBC, the driver is loaded and initialized.
If its asked to me, I will say : The method attempts to locate,load and link the class.
The JLS specification states that
“Loading refers to the process of finding the binary form of a class or interface type with a particular name, perhaps by computing it on the fly, but more typically by retrieving a binary representation previously computed from source code by a compiler, and constructing, from that binary form, a Class object to represent the class or interface.”
“According to the JLS, it must be transformed from it’s binary representation to something the Java virtual machine can use, this process is called linking. Finally, the class is initialized, which is the process that executes the static initializer and the initializers for static fields declared in the class. ”
Well its lots of stuff done before the class is loaded.
Now in terms of JDBC, Class.forName( new Driver(“sqldriver”));
The call to Driver class is made, which contains the static method:
} // with try catch block.
The above line will register the driver and add it in the list of Vector. The elements are referred when DriverManager.getConnection() is done | <urn:uuid:c8ea5ecf-bcbf-407e-9eb5-700cb9327fd2> | CC-MAIN-2022-49 | https://tech-read.com/2008/12/01/what-happens-with-classfornamestr/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00566.warc.gz | en | 0.915422 | 304 | 2.796875 | 3 |
Oct. 16, 2015
(Dallas)—Two first-of-their-kind studies on Dallas-Fort Worth smog, released in tandem today, challenge the State’s assertions that dirty air has little public health impact on local residents, and new controls on major polluters won’t make a difference.
Using the school’s banks of supercomputers, engineers at the University of North Texas replicated the massive computer model being used by the Texas Commission on Environmental Quality to write the region’s new clean air plan. They then used the model to test the effects on local ozone when pollution levels were reduced from aging coal plants in East and Southeast Texas, the three cement plants in Midlothian and oil and gas facilities in the Barnett Shale and elsewhere – tests the state agency had not done on its own model in almost a decade.
They found such reductions, mimicking the effects of modern pollution controls, would propel North Texas into compliance with the current smog standard and decrease smog significantly around the region. It’s the first time anyone outside of Austin or the TCEQ has had access or control over a region’s “non-attainment “ modeling.
Read about the Downwinder UNT Study
in Dallas Morning News.
Look at the effects of reductions in pollution from the East Texas Coal Plants, Midlothian cement plants and North Texas Oil and Gas facilities from the study itself:
Then take 30 seconds and please
sign the petition for a real air plan from EPA,
and then another thirty seconds to
send e-mails to the Director and regional chief of EPA asking the same thing.
Your air needs your help. Please. Thanks.
Accompanying the release of the UNT work was another unprecedented effort by Dr. Robert Haley of the Texas Medical Society using EPA “benefit mapping” software to estimate public health benefits of cutting smog levels by at least five parts per billion. Complying with the current standard in DFW would require a decrease of more than 8 parts per billion from 2015 levels.
Haley found a five part per billion decrease would have large medical and economic consequences, including preventing over 75 deaths, 350 Emergency Room Visits, 160 hospital admissions, and 120,000 lost school days in the 10-county DFW nonattainment area annually, totaling over half a billion dollars in medical care costs and lost productivity.
“Together, these studies provide a powerful rebuke to the state’s inaction on dirty air in DFW,” said Jim Schermbeck, Director of Downwinders at Risk, the 21-year old clean air group that financed the UNT effort. “Using the data from these two studies, it’s clear reducing pollution from these major sources, especially the coal plants, would have a large and beneficial impact on North Texas public health, and save the local economy millions of dollars.“
Frustrated by over two decades of official state modeling that has predicted success but delivered five failed air plans in a row, Schermbeck said his group wanted a second opinion going into a new round of planning, as well as an opportunity to look at different control strategies TCEQ refused to examine.
In the past, such an undertaking would have cost millions. But the price of computing has come down to such an extent that even huge, complicated tasks like modeling the DFW airshed can be performed for a fraction of what they once cost.
Downwinders wrote grants for the project and assembled $120,000 in financing from local DFW sources including the Harold Simmons Foundation, Trammell S. Crow, Garrett Boone and the Dallas Foundation.
Dr. Kuruvilla John, Associate Dean of Research and Graduate Studies, Professor of Mechanical and Energy Engineering, College of Engineering at UNT, was chosen because of his accessibility, lower transportation costs and past collaborations with the TCEQ. Dr. John reported to a committee of current and former officials, chaired by former Dallas County Judge Margaret Keliher, and including Dallas County Commissioner Theresa Daniel and Dallas City Councilwoman Sandy Greyson.
Schermbeck noted Dr. John built a duplicate of the Texas Commission on Environmental Quality’s computer model used to design a new DFW clean air plan. All the variables used in the model are the state's, including the meteorology and the projected emissions of pollution from all categories of sources. None of the information used in the model originated with Downwinders at Risk or the UNT engineers. In fact Dr. John benefited from TCEQ’s technical assistance in completing his duplicate and previewed the results to TCEQ officials in September.
In all, UNT ran at least 15 different scenarios through the TCEQ model that examined the impacts of reductions of smog-forming pollution from individual sources, as well as combinations of reductions. Among the most significant findings:
- Without a doubt, the single largest industrial source of DFW smog is the pollution from antiquated coal-fired power plants in East and Southeast Texas. If you want to have a quick and dramatic reduction in DFW ozone levels, installing modern controls on these coal plants would be the first step.
- Conversion of the Midlothian cement plants from dirtier wet kilns to cleaner dry kilns over the last decade has improved air quality, but they remain sizable polluters. Adding modern controls could significantly decrease downwind smog, particularly in Tarrant and Johnson Counties.
- Decreases in regional smog from reductions in pollution from oil and gas sources skew lower since those sources are primarily located in the five western-most, downwind counties of the DFW non-attainment area. Even so, reductions in pollution from oil and gas have disproportionally higher impacts because they affect many of the historically worst performing air quality monitors.
- The most effective combination of control measures studied were: 1) reducing smog-forming Nitrogen Oxide (NOx) pollution by 90% or more at the coal plants, 2) reducing NOx by 90% at the cement plants, and, 3) electrification of all large gas compressors, or a 100% reduction in NOx from those sources. This combination brings down smog levels at all 20 DFW monitors an average of 5 parts per billion (ppb) and lowers the regional average to below 75 parts per billion – the current smog standard DFW has yet to meet.
- UNT’s results have already answered many of the questions the EPA posed to the state in its official comments concerning the proposed Dallas-Fort Worth air plan earlier this year, including,
“How would a reduction in NOx emissions from utility electric generators in just the counties closest to the eastern and southern boundaries of the DFW area impact the DFW area?”
As has been noted, UNT found reductions in smog-forming pollution from these coal plants have a profound impact on DFW smog levels. UNT’s study now becomes a source of information about the state’s plan for the EPA when the state itself can’t, or won’t, provide it
6. The results of this study directly contradict statements made in the Texas Commission on Environmental Quality’s most recent DFW air plan. For example,
"…the impact of the suggested NOX controls on East Texas EGUs is not expected to have a substantive impact on Denton Airport South monitor in the DFW area.”
In fact, UNT’s results show removing 90 percent of the coal plants’ NOx would reduce ozone by as much as 4. 5 parts per billion at the Denton monitor and bring the monitor’s annual average down below the current required standard of 75 ppb.
DFW has been in continual violation of the Clean Air Act for almost 25 years because of its chronic smog problem. Despite state and industry claims that air quality is getting substantially better, progress has stagnated over the past five years. In 2010, the regional smog average was 86 ppb. Today, it’s 83.
DFW is one of only four non-California metropolitan areas the EPA estimates will still not be in compliance with the current 75 ppb standard by a deadline of 2018. It’s also one of only about 10 metropolitan areas not expected by EPA to meet the brand new proposed standard of 70 ppb by 2025.
Critics of the state, such as Schermbeck, note this lack of progress corresponds to anemic air plans proposed by Austin, which never include new controls on any major polluters. Schermbeck says the UNT study provides a technical justification for now including those controls, if not to the state, which he discounts as an “unserious participant,” at least to the EPA and local officials. It’s this unprecedented breaking-up of the state’s monopoly on the technical expertise upon which the entire local air quality planning process relies that Schermbeck thinks is as important as the results themselves.
“Up to now, if the state didn’t want to look at a control measure, it didn’t get looked at,” said Schermbeck. “If the state said a new technology wouldn’t do any good, you just had to take their word for it. But if a local grassroots group can scrape-up enough funding to provide a viable alternative, there’s no excuse for DFW officials to be completely dependent on the state anymore.”
Downwinders and other clean air advocates are already using both studies to argue on behalf of a Dallas County Commissioners Court resolution coming up for a vote this Tuesday, Oct. 20. Sponsored by Commissioner Daniel, it urges the judge hearing the EFH bankruptcy case to require modern controls on Luminant’s three East Texas coal plants as a condition of their sale.
Schermbeck also predicted the UNT study would have a very large impact on how the current DFW air plan would be drafted, and even its chances of being approved by EPA. “UNT’s use of TCEQ’s own model to quantify the effects of off-the-shelf control technologies provides answers to questions EPA already had about the plan, but which TCEQ seems reluctant to respond. Because there’s another set of hands on the model that can produce those answers, TCEQ has significantly less wiggle room to rationalize why they aren’t requiring the controls their own data shows are effective at reducing smog.”
It’s also likely the UNT results will be used in an effort to tie the East Texas coal plants to the fate of the DFW nonattainment area under the new 70 ppb ozone standard. “It’s unfair and bad public policy for those plants to have such a huge impact on our air quality but be untouched by our area’s anti-pollution measures,” said Schermbeck, whose group successfully petitioned the EPA to bring Ellis County and the Midlothian cement plants into the DFW nonattainment area over a decade ago.
Schermbeck looks at both studies as evidence of a trend toward greater democratization in policy-making because of grassroots access to heretofore prohibitive resources. “Both of these studies show the ability of new technology to empower groups that were previously at the mercy of Big Government or Big Industry. When citizens have new tools, things change.”
More Information on the UNT study can be found at: www.dfwozone study.org. | <urn:uuid:52ecc5e0-850b-44ab-9ef3-02e5e272719c> | CC-MAIN-2023-40 | https://greensourcedfw.org/articles/downwinders-risk-studies-challenge-states-indifference-smog | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00499.warc.gz | en | 0.945257 | 2,411 | 2.546875 | 3 |
|History · Timeline · Resources|
|Anti-globalization related · Arab
Christian · Islamic · Nation of Islam
New · Racial · Religious
Secondary · Academic · Worldwide
|Deicide · Blood libel · Ritual murder
Well poisoning · Host desecration
Jewish lobby · Jewish Bolshevism
Kosher tax · Dreyfus affair
Zionist Occupation Government
|On the Jews and Their Lies
Protocols of the Elders of Zion
The International Jew
The Culture of Critique series
|Expulsions · Ghettos · Pogroms
Jewish hat · Judensau
Yellow badge · Spanish Inquisition
Segregation · The Holocaust
Nazism · Neo-Nazism
Community Security Trust
EUMC · Stephen Roth Institute
Wiener Library · SPLC · SWC
UCSJ · SCAA · Yad Vashem
|Antisemitism · Jewish history|
Jewish Bolshevism, Judeo-Bolshevism, Judeo-Communism, and known as Żydokomuna in Poland, is a pejorative stereotype based on the claim that Jews are the driving force behind the modern Communist movement, specifically the Russian Bolsheviks.
The expression was the title of a pamphlet, The Jewish Bolshevism, and became current after the October Revolution (1917) in Russia, featuring prominently in the propaganda of the anti-communist "White" forces during the Russian Civil War. It spread worldwide in the 1920s with the publication and circulation of The Protocols of the Elders of Zion. It made an issue out of the Jewishness of some leading Bolsheviks (most notably Leon Trotsky) during and after the October Revolution. Daniel Pipes says that "primarily through the Protocols of the Elders of Zion, the Whites spread these charges to an international audience." James Webb writes: It is rare to find an anti-Semitic source after 1917 which does not stand in debt to the White Russian analysis of the Revolution."
The label "Judeo-Bolshevism" was used in Nazi Germany to equate Jews with communists, implying that the communist movement served Jewish interests and/or that all Jews were communists. In Poland before World War II, Żydokomuna was used in the same way to allege that the Jews were conspiring with the USSR to capture Poland. The allegation still sees use in antisemitic publications and websites today.
Jews had been a persecuted minority in the Russian Empire. They had endured a form of racial segregation in the Pale of Settlement, as well as sporadic pogroms. In the period from 1881 to 1920, more than two million Jews left Russia.
According to Berel Wein:
Expulsions, deportations, arrests, and beatings became the daily lot of the Jews, not only of their lower class, but even of the middle class and the Jewish intelligentsia. The government of Alexander III waged a campaign of war against its Jewish [citizens]... The Jews were driven and hounded, and emigration appeared to be the only escape from the terrible tyranny of the Romanovs."
Jews in relatively large numbers joined various ideological currents favoring gradual or revolutionary changes within the Russian Empire. Those movements ranged from the far left (anarchists, Bundists, Bolsheviks, Mensheviks) to moderate left (Trudoviks) and constitutionalist (Constitutional Democrats) parties. Such monarchist parties as Union of the Russian People expressed clearly antisemitic attudes, and included antisemitic paragraphs in their political program.
A high percentage of ethnic Jews in comparison to the percentage of the total population took an active part in Bolshevik movement and revolutionary leadership before the revolution and for years after - see details below. Most of these Jews were hostile to traditional Jewish culture and Jewish political parties, and were eager to prove their loyalty to the Communist Party's atheism and proletarian internationalism, and committed to stamp out any sign of "Jewish cultural particularism".
Of the 21 members of the Central Committee (CC) of the Bolshevik party in April 1917, three were ethnic Jews: Lev Kamenev, Grigory Zinoviev, and Yakov Sverdlov. Of the thirteen committee members who, during the historic meeting on October 10, 1917, agreed for the necessity of armed revolution (leading to the October Revolution), six were Jewish: Zinoviev, Kamenev, Leon Trotsky, Moisei Uritsky, Sverdlov, and Grigory Sokolnikov – although Kamenev and Zinoviev opposed the revolution, and Trotsky abstained). The ethnic lineage of Vladimir Lenin, the head of the committee and the leader of the Bolshevik Revolution, was diversely composed of Russian, German, Swedish, Jewish, and Kalmyk blood (see Blank family).
Of the 25 Bolsheviks who worked alongside Lenin as members and candidate members of the Politburo of the Central Committee from August 1917 to 5 March 1918 (between the 6th and 7th congresses) there were six ethnic Jews: Adolph Joffe, Kamenev, Sokolnikov, Trotsky, Uritsky, and Zinoviev. Concurrently, there were eleven Russians (Bubnov, Bukharin, Kiselyov, Krestinsky, Milyutin, Oppokov, Preobrazhensky, Sergeyev, Stasova, and Yakovleva), two Latvians (Berzin and Smilga), two Ukrainians (Muranov and Skrypnyk), two Georgians (Dzhaparidze and Stalin), one Pole (Dzerzhinsky), the Finnish-and-Russo-Ukrainian Alexandra Kollontai, and one Armenian (Shahumyan).
Of the 22 Politburo Bolsheviks working alongside Lenin from 8 March 1918 to 17 March 1919 (between the 7th and 8th congresses) as members or candidate members there were seven ethnic Jews: Joffe, Mikhail Lashevich, Sokolnikov, Sverdlov, Trotsky, Uritsky, and Zinoviev. Concurrently, there were nine Russians (Bukharin, Kiselyov, Krestinsky, Oppokov, Sergeyev, Alexander Shlyapnikov, Vasili Shmidt, Stasova, and Mikhail Vladimirsky), three Latvians (Berzin, Smilga, and Stuchka), one Ukrainian (Petrovsky), one Pole (Dzerzhinsky), and one Georgian (Stalin).
The Second All-Russian Congress of the Workers', Soldiers', and People's Deputies' "Decree Instituting the Council of People's Commissars" of 17 October 1917 established the Narkomats,or People's Commissariats. These were to be coordinated by a central body, the Council of People's Commissars, or, effectively, the cabinet of the Bolshevik government. Besides Lenin as chairman of the council and Gorbunov as secretary, it was to be composed of fourteen ministerial positions. These were occupied by fifteen officials called the People's Commissars (or Narkoms) – of whom only Trotsky was ethnically Jewish. (The position of People's Commissar for Military Affairs was concurrently filled by both Vladimir Antonov-Ovseyenko and Nikolai Krylenko, while no People's Commissar for Railways was temporarily appointed.)
After Lenin's death, the title of the chairman of the Narkom passed to Alexei Rykov, an ethnic Russian. Among the 23 Narkoms between 1923 and 1930, there were thirteen Russians (including Rykov), five Jews, two Georgians (Stalin and Ordzhonikidze), one Pole (Dzerzhinsky), one Moldovan (Frunze), and one Latvian (Rudzutak). In the 1930s, there was one person of Jewish descent in the Politburo: Lazar Kaganovich.
According to the 1922 party census, there were 19,564 Jewish members of the Bolsheviks, comprising 5.21% of the total. The same year's figures for the 44,148 members of the Bolshevik party that had joined before October 1917 – the Old Guard, as Lenin referred to them, which included those who had joined the Bolshevik Party during its massive growth phase between February and October 1917 – indicated that 7.1% were ethnic Jews. 65% were ethnic Russians.
Among members of the Central Executive Committee of the Soviet Union (parallel to the Central Committee of the Communist Party) in 1929, there were 402 Russians, 95 Ukrainians, 55 Jews, 26 Latvians, 13 Poles, and 12 Germans – Jewish representation had actually declined from 60 members in 1927.
Of the 417 Communists who constituted the ruling circles of the Soviet Union in the mid-1920s – as members of the Central Executive Committee, the party Central Committee, the Presidium of the Executive of the Soviets of the USSR and the Russian Republic, the People's Commissars, and the chairman of the Executive Committee – a mere 27, or just 6%, were ethnic Jews.
The numbers of Jews in important positions continued to shrink in the 1930s when Stalin had his old comrades Kamenev and Zinoviev executed while in prison, after a rigged trial in 1936. Zinoviev and Kamenev had previously been expelled, in October 1927 and December 1927 respectively, from the top positions they shared with Stalin in the Soviet ruling elite. Leon Trotsky had concurrently been expelled from the Soviet Union in 1927 and was then assassinated in Mexico City in 1940, by a Soviet agent, the Catalan Spaniard Ramón Mercader.
Between 1936 and 1940, during the Great Purge, Yezhovshchina and in particular after the rapprochement with Nazi Germany, Stalin had largely eliminated Jews from top level party, government, diplomatic, security and military positions in the Soviet Union. After dismissing Maxim Litvinov as Foreign Minister in 1939, Stalin immediately directed incoming Foreign Minister Vyacheslav Molotov to "purge the ministry of Jews". Although some scholar believe that the latter decision was affected mostly by domestic reasons, others argue it possibly was a signal to Nazi Germany that the USSR was ready for non-aggression talks. Remaining Jews were eliminated (with a few notable exceptions) after the war, during the antisemitic campaigns in 1947-1953.
According to historian Iakov Etinger, many Soviet state purges of the 1930s were antisemitic in nature, and a more intense antisemitic policy developed toward the end of World War II, Stalin in 1946 allegedly said privately that "every Jew is a potential spy."
Walter Laqueur states in his book The Changing Face of Antisemitism: From Ancient Times to the Present Day:
To what extent did the presence of many Jews among the Communist leadership contribute to antisemitism? It certainly played an important role in antisemitic propaganda, and it is certainly true that during the 1920s Jews were heavily overrepresented in the ranks of party and state officials. With the rise of Stalin, Jews were removed from key positions and very often "liquidated." The fact that other minorities were also disproportionately highly represented did not greatly matter - there was no tradition of anti-Latvianism in Russia, nor were Latvians found in the very top positions. Nor did it matter that Jews were equally strongly represented among other anti-Communist parties of the left such as the Mensheviks and the Social Revolutionaries, or that the anti-Stalinist opposition was to a considerable extent of Jewish extraction.
"Antisemites... refused to acknowledge the important and indisputable fact that the Jews who participated in the Socialist and Anarchist movements around the world, including the Russian Jews in particular, were renegades of the Jewish nation who had no connection with Jewish history nor with Jewish religion nor with Jewish masses, but were rather exclusively internationalists, promoting the ideas shared by Socialists of other ethnicities, and were hostile to the Jewish nation in general."
According to figures provided by the Federal Security Service of the Russian Federation, there was a total of 49,991 Cheka operatives as of 1 October 1921: 38,648 Russians, 4,564 Jews, 1,770 Latvians, 1,559 Ukrainians, 886 Poles, 315 Germans, 186 Lithuanians, 152 Estonians, 102 Armenians, and 1,808 from other ethnic groups. The Cheka's Board of thirteen functionaries was composed of three Russians (Kedrov, Ksenofontov, and Mantsev), three Jews (Messing, Unszlicht, and Yagoda), two Latvians (Latsis and Peters), two Poles (Dzerzhinsky and Menzhinsky), one Ukrainian (Bokiy), one Belarusian (Medved), and one Armenian (Avanesov).
The ethnic breakdown for mid-level and upper-level officials of the OGPU leadership (the Cheka's successor agency in the 1920s) for 15 November 1923 consists of 54 Russians, 15 Jews, 12 Latvians, 10 Poles, and 4 others.
Of the 2,402 functionaries in the central apparatus of the OGPU as of 1 May 1924, there were 204 Jews, 1,670 Russians, 208 Latvians, 90 Poles, 80 Belarusians, and 80 Ukrainians, with functionaries from other ethnic groups the remaining 3.5%.
Yagoda's secret police oversaw the execution of both Zinoviev and Kamenev, but fell victim to Stalin's next round of purges. In September 1936, Yagoda was replaced by Nikolai Yezhov, not of Jewish descent , until Yezhov was also arrested and executed in March 1937, becoming replaced by Lavrentiy Beria, an ethnic Georgian like Joseph Stalin. No other Jew besides Yagoda held the highest position within the bureaucracy of Soviet state security organizations. Under Yezhov, the number of Jews fell precipitously (to just 6 people) while the number of ethnic Russians among the leadership of the NKVD secret police rose to 102 people (67%) – and the purges, at Stalin's instigation, then entered their bloodiest period (1937–1938) (see Great Purge).
Vadim Abramov's monograph "Jews in the KGB" demonstrated that although Jews were trusted by the early communist authorities because as formerly disenfranchised they were not expected to harbor any loyalties to Tsarist regime, their number in the security services at no point in history exceeded 9%, and from 1927 never exceeded 4%.
Rosenberg's obiter dicta about Russia and Communism are found in the Mythos and in countless brochures and booklets: Bolshevism is the revolt of the Jewish, Slavic and Mongolian races against the Germans (Aryan) element in Russia; it is the revolt of the steppe, the hatred of the nomads of everything great, heroic, racially healthy; all big things in Russian history had been achieved by Germans or those of German blood, but the revolution of 1917 had exterminated the Aryan element. . . ., nor did the Jewish-Soviet Government represent the Russian people. To the Nazi ideologists, all leading Soviet statesmen were Jews: Lenin and Trotsky, Lunacharsky and Rakovsky, Kuibyshev and Krasin, Kaganovitch and Manuilsky among them. Whoever was not a Jew was a Chinese. Rosenberg developed an elaborate theory about the leading role of Chinese silk merchants in the Russian revolution. While other observers of the Soviet scene engaged in political speculation and social analysis, the Nazis' Russian experts were preoccupied with another kind of scientific investigation which hardly left them time for anything else. They tracked down the 'real' (Jewish) names of all Soviet leaders; Lunacharsky, for instance, became Mondschein - for who did not know that 'luna' was 'moon' in Latin? This, by and large, was the level of Nazi Sovietology.—Laqueur, Ibid., pp. 21-22
In Nazi Germany, this term expressed the common perception that Communism was a Jewish-inspired and Jewish-led movement seeking world domination from its very origin. The term was popularized in print by German journalist Dietrich Eckhart, who authored the pamphlet "Der Bolschewismus von Moses bis Lenin" ("Bolshevism from Moses to Lenin") in the early 1920s, thereby tying Moses and Lenin as both Communists and Jews. Alfred Rosenberg's 1923 edition of the Protocols "gave a forgery a huge boost". This was followed by Hitler's highly inflammatory statement in Mein Kampf (1924): "In Russian Bolshevism we must see Jewry's twentieth century effort to take world dominion unto itself."
According to Michael Kellogg, the author of The Russian Roots of Nazism. White Émigrés and the Making of National Socialism, 1917–1945:
In his groundbreaking 1939 book, L’Apocalypse de notre temps: Les dessous de la propagande allemande d’après des documents inédits (The Apocalypse of Our Times: The Hidden Side of German Propaganda According to Unpublished Documents), Henri Rollin stressed that "Hitlerism" represented a form of "anti-Soviet counter-revolution" which employed the "myth of a mysterious Jewish-Masonic-Bolshevik plot." Rollin investigated the National Socialist belief, which was taken primarily from White émigré views, that a vast Jewish-Masonic conspiracy had provoked World War Ⅰ, toppled the Russian, German, and Austro-Hungarian Empires, and unleashed Bolshevism after undermining the existing order through the insidious spread of liberal ideas. German forces promptly destroyed Rollin’s work in 1940 after they occupied France, and the book has remained in obscurity ever since.
A major source for propaganda about Jewish Bolshevism in the 1930s and early 1940s was the pro-Nazi and virulently antisemitic international Welt-Dienst / World-Service / Service Mondial news agency founded in 1933 by Ulrich Fleischhauer.
The American ambassador to Russia, David R. Francis, wrote in January 1918 that most of the Bolshevik leaders were Jewish. A report by British Intelligence, "A Monthly Review of the Progress of Revolutionary Movements Abroad", states in the first paragraph that international Communism is controlled by Jews. Capt. Montgomery Schuyler, a military intelligence officer in Russia, reported regularly to the chief of staff of U.S. Army Intelligence, who relayed the reports to the US president. In one of these reports, declassified in 1958, Schuyler states: "It is probably unwise to say this loudly in the United States, but the Bolshevik movement is and has been since its beginning, guided and controlled by Russian Jews of the greasiest type..." In another report on June 9, 1919, Schuyler wrote the following, which the historical record shows to be inaccurate:
A table made up in 1918, by Robert Wilton, correspondent of the London Times in Russia, shows at that time there were 384 commissars including 2 Negroes, 13 Russians, 15 Chinamen, 22 Armenians and more than 300 Jews. Of the latter number, 264 had come from the United States since the downfall of the Imperial Government.
Lucien Wolf, one of the voices of the period who took issue with the propagation of the Jewish Bolshevism conspiracy and the Protocols of the Learned Elders of Zion hoax concurrently being spread in the West, writes in The Myth of the Jewish Menace in World Affairs (1921):
"...I find a notorious German anti-Semitic book quoting... Wilton, of the Times, as its authority for the statement that 'of 384 People's Commissars who constitute the Government only 13 are Russians, while 300 are Jews.' What are the facts? The only officials in Soviet Russia who are authorised to hold the rank of People's Commissars are the members of the Cabinet. These number 17, and of them 16 are indisputably Gentiles, while only one – Trotsky – is Jewish by birth... The other so-called Jewish Commissars are all men of the second and lower ranks of officials belonging exclusively either to the Civil Service or the Soviet analogue of our municipal life. They are probably fairly numerous, but in what may be called the second rank they do not number more than ten at the outside. The others may or may not be convinced Bolsheviks. They are servants of the State who may have many other motives for serving the Soviets than an enthusiasm for Lenin's politics...Trotsky has in his War Office and Corps of Officers probably as many ex-Tsarist officers – including sixteen Generals – as there are 'Jewish Commissars' in the whole Soviet Administration. And yet nobody dreams of describing the Red Legions as a Tsarist Army. These officers are probably not even Bolsheviks. If we could know their motives we should probably find that they were not very widely different from those which actuate the 'Jewish Commissars.'
"All this is not to say that there are no professing Jews in the Bolshevist ranks, or that the number of indifferent and apostate Jews who have thrown in their lot with the Soviets is quite negligible. What is contended is that normally the Jew is intensely antipathetic to Bolshevism, and that at the beginning of the Revolution relatively very few Jews – even of those who were Jews by race only – rallied to the call of Lenin. That this situation has changed during the last year is not improbable. But with whom does the blame rest? If Jews have reluctantly turned toward Bolshevism, it is because they have been forced into it by the anti-Bolsheviks. They cannot but be alarmed by the persistancy and passion with which the charge of Bolshevism is levelled against them, and the threats which come from all sides to avenge in their persons the sins of Lenin and Trotsky."
In an article in the Illustrated Sunday Herald on February 8, 1920, Winston Churchill asserted::
There is no need to exaggerate the part played in the creation of Bolshevism and in the actual bringing about of the Russian Revolution by these international and for the most part atheistic Jews. It is certainly a very great one; it probably outweighs all others. With the notable exception of Lenin, the majority of the leading figures are Jews.
Churchill declared that Bolshevism must be "strangled in its cradle." However, according to Churchill biographer Sir Martin Gilbert, Churchill had been sent a copy of The Protocols a few weeks prior to the publishing of this article.
Such attitudes were not uncommon in the UK at the time of the allied intervention in the Russian Civil War. The British court of inquiry, appointed to investigate the Arab 1920 Palestine riots, associated Zionism with Bolshevism and identified the Jewish nationalist leader Ze'ev Jabotinsky with a Labor Zionist party, Poale Zion, which the court called "a definite Bolshevist institution." In reality, Jabotinsky was a staunch anti-socialist who had fought with the Jewish Legion of the British Army in World War I and was already emerging as a leader of the right-wing Revisionist Zionist opposition to the Labour Zionist movement.
The allegation was revived in a December 28, 2006 interview by Iranian Presidential Advisor Mohammad Ali Ramin who was appointed secretary-general of the new "World Foundation for Holocaust Studies" established at the International Conference to Review the Global Vision of the Holocaust:
"The Bolshevik Soviet government in Lenin's time, and later, in Stalin's - both of whom were Jewish, though they presented themselves as Marxists and atheists... - was one of the forces that, until the Second World War, cooperated with Hitler in promoting the idea of establishing the State of Israel." | <urn:uuid:e2521fdb-08de-4b5d-91a8-72841f7bb840> | CC-MAIN-2014-52 | http://www.thefullwiki.org/Jewish_Bolshevism | s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768197.70/warc/CC-MAIN-20141217075248-00090-ip-10-231-17-201.ec2.internal.warc.gz | en | 0.956632 | 4,951 | 2.921875 | 3 |
Trace the growth of freedom for African-Americans from the time of slavery through today.
One possibility would be to write about the treatment of slaves and the treatment of and rights granted to free blacks before the Civil War. You could mention the recognition of slavery in the Constitution.
You might write about the Emancipation Proclamation, the three (3) Civil War Amendments and the development of Jim Crow laws and the rise of the KKK.
Maybe mentioning Plessy vs. Ferguson.
Talk about civil rights in the 1920’s and 30’s and the fight to desegregate the military during and after World War II
Brown vs Board of Education
The Montgomery Bus Boycott, Sit-ins, Selma, Watts and other riots, the Black Power Movement) (obviously, you will not have time to do more than one or two of these in such a short essay)
The growth of the Civil Rights movement under Dr. King
And maybe the most obvious way to end would be 2008 (Obama’s election…)?
Talk about rights women had during colonial days
Maybe Abigail Adams’ plea to John to not forget the ladies in the Constitution
Seneca Falls Convention
Struggle for women’s rights before the Civil War
Struggle for those rights leading up to women getting the right to vite
Efforts to pass the E.R.A.
Struggle for equal pay for equal work | <urn:uuid:87f85f19-0a20-427b-a075-383b60df669f> | CC-MAIN-2019-35 | https://acedessays.com/development-or-manifestations-of-a-major-theme-concept-or-issue-through-the-cultural-products-social-configurations/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313617.6/warc/CC-MAIN-20190818042813-20190818064813-00544.warc.gz | en | 0.938552 | 305 | 3.65625 | 4 |
More than a history lesson: Presentation includes personal reflections on Pearl Harbor
To Bill Williamson Jr., the Japanese attack on Pearl Harbor is far more than a few Wikipedia paragraphs.
The retired physicist and university professor was an eyewitness to the 15 unforgettable minutes that changed the course of world history.
Williamson, president of the Prescott-based national chapter of the Sons and Daughters of Pearl Harbor Survivors, presented a Pearl Harbor history lesson, combined with personal reflections, to a group of about 25 people at the Las Fuentes Village Resort on Friday, Dec. 6.
Some attenders are veterans who lived from a distance through the infamous day. Others were not alive, but believe such significant world events must be revered and commemorated.
DEC. 7, 1941
On that morning 78 years ago today, 2,403 service members and civilians were killed; their sacrifice on the day the late President Franklin D. Roosevelt declared the “date which will live in infamy” must never be forgotten, Williamson and his audience affirmed.
“I’m here because it was our life, and it meant something,” declared Ed Lingelbach who was a 10-year-old boy living in Los Angeles when the attack occurred. Three relatives, one serving on the USS Utah, were among the survivors.
A United States Navy veteran who toured Hiroshima in 1952, Lingelbach said he remembers the shock and fear of those around him. But he said he was “too young to be scared.” His family ended up on a train headed east to the Luke Air Force Base in Phoenix.
“I remember it,” Lingelbach said of that historic time he and his wife, Lois, will commemorate by attending church on what is now referred to as National Pearl Harbor Remembrance Day. “It changed the world; it changed my world.”
The son of a Navy engineering chief petty officer assigned to the USS Pennsylvania battleship, Williamson Jr. was eight years old the morning he tried to dial a friend from his home seven miles away from Pearl Harbor. He wanted to go to a matinee movie. On his first try just before 8 a.m., the phone had no dial tone. On his second try, an operator ordered him to hang up – Pearl Harbor was under attack.
In a blur, his father dressed and departed for his ship in dry dock; the radio was calling all military personnel to duty. The announcer confirmed this was no drill, he noted.
From their house, Williamson said they could see plumes of smoke, and explosive noise; all residents were ordered inside and for days the city was under martial law. Despite the presumed dangers, Williamson said he was a curious boy. He couldn’t resist peeking outside shaded windows, and scouring the area for souvenirs. To this day, he has a couple pieces of shrapnel from his father’s car and large-caliber machine gun bullets still in his possession.
His clearest memory is of Christmas Day, 18 days after the attack. His mother was ordered at 10 a.m. to pack up him and his siblings – one suitcase each. Four hours later, they were evacuated to what then was an unknown location. Their father’s whereabouts remained a secret.
Their ship landed in San Francisco; Williamson remembers thinking he was in Alaska because it was so cold. As they departed, however, the foursome forgot all about the weather. Williamson’s father was alive; his embrace an indescribable comfort, he said.
Williamson grew up to be a true World War II buff. In his considerable research, he has uncovered now unclassified documents that speak to that long ago tragedy.
One he recalled is an FBI intercepted phone call from a Japanese dentist on Oahu to a caller in Tokyo. Though there was talk of the Pacific fleet in Pearl Harbor, most of the conversation revolved around island horticulture, Williamson said. The last line of the conversation was, “The hibiscus and poinsettias are now in bloom,” he repeated.
Unknown then, Williamson said that was a coded message about the timing for a surprise attack.
Much speculation has been made since that perilous day about its ripple effects across time; the nuclear bombing of two Japanese cities still a matter of debate in certain circles.
To Williamson, and those of his era, it is impossible, even an insult, to second guess the agonizing decisions forced upon American leaders wishing to spare millions of military lives, yet unable to convince Japan to surrender.
From his perspective, Williamson said “it was the right thing to do.”
Audience member Wayne Blanton, 67, a friend of Williamson, concurs.
“You can’t judge what happened back then by today’s standards,” Blanton said.
Though Pearl Harbor was a history lesson for him, Blanton said it is one that needs to be remembered because “if we forget it, we’re lost.” | <urn:uuid:5150f74d-d309-47ab-9cb8-1de3fdb6fcf5> | CC-MAIN-2020-05 | https://www.dcourier.com/news/2019/dec/06/more-history-lesson-presentation-includes-personal/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251776516.99/warc/CC-MAIN-20200128060946-20200128090946-00057.warc.gz | en | 0.9765 | 1,046 | 2.6875 | 3 |
About two-thirds of Americans think they’re tech-savvy. Unfortunately, 55% of survey respondents wouldn’t know what to do if they were hacked. Another 36% don’t think twice about sharing personal information online.
Another 36% have already fallen victim to a hacker.
While you might think you’re technologically savvy, it always helps to learn a little more. Here are nine effective tips that can help when learning about technology. With these tips, you can remain savvy and safeguard yourself in the future.
Start learning more about tech with these nine simple tips today.
Video content could account for over 80% of all online web traffic by this year.
One of the best ways to start learning about computers is on a computer. Pull up YouTube and start learning about tech. You can find pre-made playlists and tutorials to get started.
Regardless of how much you think you know, consider starting from the basics. You might learn new, valuable information. What you learn could even point you in a new direction.
For example, you might not realize you have an interest in cybersecurity until you learn more about how it works.
Start small. Then, develop your knowledge base over time. Once you feel more comfortable, start learning more about complicated subject matters.
You can do a deep dive into YouTube if you want to learn more about a specific subject, too.
As you start watching videos and learning about technology, take the time to ask questions. Many YouTube creators want to help. They won’t hesitate to answer any questions you have, big or small.
Spark conversations with other tech-savvy people online. Learn from their experience and expertise. They could point you in the direction of new learning materials, too.
Watching YouTube videos could help you retain a lot of the information you learn as well.
Also read: Top Tips for Finding the Best Video SDKs!
If you’re more interested in taking a course to start learning about technology, take an online class. For example, you can start taking classes through Lynda. Lynda is affiliated with LinkedIn.
You can sign up for courses specific to your interests.
Developing well-rounded expertise could broaden your skillset. It could make you a more valuable asset if you’re trying to build your career.
Consider taking classes in video editing and Photoshop if you’re interested in marketing. Otherwise, try a coding class.
Make a note of which classes you feel the most passionate about. Then, follow that passion by finding more learning materials online!
Take the time to research your favorite technology brand or company. Some companies offer helpful workshops and classes. You can learn how to use the technology you already own and love.
For example, you can start taking classes at your local Apple store.
Most classes are free, too! A short course won’t take up your entire day. Meanwhile, you can learn more from people who are passionate about the same tech you love.
Make sure to have questions prepared before the course. As you keep asking questions, you can continue broadening your knowledge base.
You don’t have to become tech-savvy on your own. Instead, consider finding Meetup groups in your area.
Meetup can provide you access to groups for everything from books to hiking. There are plenty of technology groups, too. You can learn more from like-minded people who want to expand their own knowledge.
Let them know what you’ve learned so far. Exchange information and resources. They could point you in the direction of a new course or tool.
Try teaching a little yourself, too. If you’re able to teach it to others, you can feel confident.
If you struggle to explain a concept, however, you should keep learning about that subject.
If you can’t meet in person, that’s okay. Consider using the Google Meet app. You can learn more here: https://setapp.com/how-to/use-google-meet-app-for-mac.
If a Meetup group or Apple class doesn’t suit you, consider returning to school. You can earn a degree or certification for one of the subjects you’re interested in.
Many people benefit more from learning in a formal setting. You can pick the mind of an instructor or professor to gain deeper insights.
Earning a degree or certification is also beneficial if you want to advance your career.
As you search for online resources, consider checking social media. Social media will allow you to access some of the greatest minds in the world.
Start by checking LinkedIn. Discover videos, articles, and updates you can subscribe to. Consider joining a community, too.
Otherwise, check Medium. The medium can give you access to many online tech publications. Keep reading articles daily to continue learning.
Using social media can keep you up-to-date with new technologies and advancements. Technology changes every day. If you’re not up-to-date, your knowledge might become irrelevant.
In addition to using Medium, you can also subscribe to RSS feeds to remain up-to-date. You’ll get instant updates when blogs post fresh content. You can download an RSS feed reader app to your phone to get started.
For example, you might want to start using Feedly. Feedly will allow you to organize your feed using different categories.
Try to find blogs that feature thought leaders in the technology industry. They can keep you informed of the latest developments. You can continue learning about technology from people who are changing the industry.
If you can’t retain information by watching videos, consider reading instead. Head to your library or bookstore. Otherwise, check Amazon for technical books.
You can also gain professional technology skills by volunteering for a campaign. Meet up with tech-savvy professionals who can show you the ropes. You can find tech-oriented organizations online to get started.
Discovering how to become tech-savvy doesn’t have to feel daunting. Instead, keep these nine tips in mind. Remember to keep learning about technology as new tools and software becomes available.
Remaining up-to-date could turn you into a real asset to any team!
Searching for more helpful tips and tricks? You’re in the right place.
Explore our latest guides today for more. | <urn:uuid:7f9e7f56-7cde-46d8-9809-c8d47d950df6> | CC-MAIN-2024-10 | https://assistsuite.com/tips-to-become-more-tech-savvy/ | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474649.44/warc/CC-MAIN-20240225234904-20240226024904-00711.warc.gz | en | 0.926873 | 1,333 | 2.828125 | 3 |
Claire Holmes and Keith Lilley
The place name ‘Swansea’ is first recorded on coins dated to the 1140s, indicating the presence of a mint in the town. Despite this, the often-cited view is that the name is of earlier, Viking origin (‘Sweyns-ey’, Sweyn’s island). Whether or not Swansea has Viking origins as a settlement remains a vexed issue. Although alternative explanations for the name have been put forward, it is worth reviewing the circumstantial evidence that might support the town’s Viking ancestry. The evaluation that follows draws upon existing archaeological, linguistic and topographical information, using a Geographical Information System (GIS) to map these historical sources, in order to consider how Swansea may have fitted into the wider pattern of Irish-Norse trading activity, and how the layout of part of the medieval town resembles that of some Viking towns in Ireland.
There has been relatively little research into Wales’ Viking past compared with other parts of the British Isles. Welsh historians and archaeologists have focused almost exclusively upon the Celtic past, and there has been little interest in the Vikings. Part of the reason for this is probably the lack of written sources for Wales. Unlike Ireland and England, where the Irish annals and Anglo-Saxon chronicles tell us much about the activities of the Vikings, Wales lacks contemporary sources. This has led many to believe that the Vikings therefore had little or no impact on Wales; however, a lack of written records does not necessarily mean a lack of Viking activity.
During the Viking period Wales was divided into several petty kingdoms whose rulers were almost continually in conflict with each other. We do have some evidence for Viking raiding in Wales, starting in 852, when Cyngen of Powys was slain by the ‘gentiles’, a common name for the Vikings in the annals. Raiding reached a climax in 914, after which there was a time of relative peace until the mid-tenth century.
During the Viking Age the Irish Sea would have acted as a highway for Viking traders. Trade between the Vikings who settled in Ireland, the Hiberno-Norse, and the Anglo-Saxons is well documented and was probably quite extensive. In the initial phases of Viking activity their main trade route ran from Dublin to Chester.
This invariably brought the Vikings into close contact with the north coast of Wales. During the latter part of the Viking Age, however, Bristol replaced Chester as the principal focus for Hiberno-Norse trade with Anglo-Saxon England. This new route would have brought the Vikings along the south coast of Wales, and Swansea, with its safe natural harbour, would have been a logical stopping point. There was also the prospect of rich inland trade: the native Welsh princes would no doubt have found a trading town very convenient.
Sources for Viking activity in south Wales
The most direct evidence for Viking activity in Wales comes from the Viking sagas. The Jómsvíkinga saga tells the story of a Viking marrying a Welsh princess and gaining half a Welsh kingdom, and other sagas, such as Njála and Orkneyinga saga, show the familiarity the Norsemen had with Wales and the Welsh coast. Strikingly, the Annals of Loch Cé explicitly state that merchants from Wales came to Dublin Bay to fight in the Battle of Clontarf in 1014, and it is almost certain that these were Vikings who had settled in Wales rather than native Welsh, since the Welsh are not known to have been merchants at this time.
In the Irish records Welsh horses are mentioned on various occasions; they appear to have been especially highly prized during the Viking Age, and a trade in them seems to have been firmly established by this time. Clearer evidence for Viking contact with Wales comes from the Historic and Municipal Documents of Ireland, which record the names of the citizens of Dublin at the end of the twelfth century and include a large number of people from Bristol, Cardiff, Swansea, Haverfordwest and other towns on the Bristol Channel, many of whom had Norse names. This shows that Scandinavian traders had settled in these towns at least by this late date.
There is no direct documentary evidence to prove that Norse settlements were established in Wales, but we should not attach too much weight to this. Several previously unsuspected Viking settlements have been discovered in recent years, for example at Woodstown, Co. Waterford, and at Llanbedrgoch, Anglesey; both were Viking trading and manufacturing settlements. What the documentary sources do record, however, are some instances of Viking raids along the Welsh coast, and these have been added to the GIS and mapped here.
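As an aside on method, the sketch below illustrates one minimal way that categorised evidence points of this kind might be plotted digitally. The place names, coordinates and categories in it are invented for illustration only; they are not the project’s actual GIS data.

```python
# Minimal, illustrative sketch of mapping categorised evidence points.
# All records and coordinates below are invented examples, not the
# project's actual dataset.
import matplotlib.pyplot as plt

# (name, longitude, latitude, evidence type) -- hypothetical values
evidence = [
    ("Place-name A", -4.95, 51.70, "place-name"),
    ("Hoard B",      -3.95, 51.62, "hoard"),
    ("Sculpture C",  -4.80, 52.02, "sculpture"),
    ("Raid D",       -4.40, 53.30, "documented raid"),
]

markers = {"place-name": "o", "hoard": "s",
           "sculpture": "^", "documented raid": "x"}

fig, ax = plt.subplots()
for name, lon, lat, kind in evidence:
    ax.scatter(lon, lat, marker=markers[kind], label=kind)
    ax.annotate(name, (lon, lat), textcoords="offset points", xytext=(4, 4))

ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Categorised evidence points (illustrative)")

# Deduplicate legend entries in case an evidence type appears twice
handles, labels = ax.get_legend_handles_labels()
by_label = dict(zip(labels, handles))
ax.legend(by_label.values(), by_label.keys())
plt.show()
```

Layering point data by category in this way, over a base map, is what allows the spatial clustering discussed below to be seen at a glance.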
One aspect of their culture which the Vikings brought with them to every coast they visited was their language, which they imposed with varying degrees of success upon all areas in which they stayed for any length of time. In the British Isles the Vikings’ linguistic legacy consists mainly of loanwords. Scandinavian loanwords in English are both numerous and well documented. In Wales, however, vocabulary is scarcely affected, and there are no syntactic or morphological changes that can be attributed to the Norsemen. Some writers have claimed that the colloquialisms of south Pembrokeshire indicate Norse influence, but this evidence is not strong and is certainly not proof of extensive Norse colonisation.
Perhaps the greatest evidence we have for Viking settlement in Wales comes not from language but from place-names. Viking place-names come in two types: topographical names, and generic names denoting a settlement. The instances in which these two types occur have been mapped as shown here.
There is a scattering of Viking place-names along the north and south coasts of Wales, and it is notable that they occur much more frequently along the south coast. Some scholars have argued that this relative abundance of place-names does not denote actual settlements but merely reflects the Vikings’ use of these places as navigational markers. Others, however, point out the difficulty of explaining the survival of these names without the presence of at least some Scandinavian speakers on mainland Wales. The establishment of small Viking markets and settlements along the south Welsh coast seems the most obvious explanation for the adoption of these names into common usage.
Archaeological evidence for Viking activity is also present in south Wales, although unfortunately none of it is for physical settlement remains. Various Viking burials, single coins and hoards have been discovered in Wales, mainly along the coast. Another interesting source of evidence for a Viking presence comes from sculpture. Stone slabs and crosses were common during this period, and several examples found in Wales show heavy Viking influence, such as the splendid pillar-cross at Nevern, Pembrokeshire. The occurrences of these sculptures have also been mapped and shown here. The erection of these crosses is clearly more likely if contact between the Vikings and the Welsh was direct and continuous, for example through a settlement. A further piece of archaeological evidence is the discovery in 1878 of a portion of the side of a possible Viking trading ship, thought to date from around 900, at Alexandra Dock, Newport.
There are some other forms of evidence for Viking activity in Wales which, while they cannot easily be located geographically, are of special interest. For example, in Carmarthenshire there are some odd survivals of possible Scandinavian names in family genealogies. Recent research has also shown the presence of local populations with distinctive blood-group frequencies: the frequency of A genes among the indigenous population of Pembrokeshire, for example, reaches levels of up to 33.6 per cent, far higher than anywhere else in Wales and matched only in parts of Scandinavia.
Placing the Vikings in Wales in context
When we look at the GIS-based map with all these various forms of evidence plotted together, trends of Viking activity in Wales begin to emerge.
Activity is generally more prevalent along the coasts, in particular the north and south coasts of Wales. Also of note is the cluster of activity between Anglesey and north Wales; significantly, the recently discovered Viking site of Llanbedrgoch lies right at the centre of this cluster. The high incidence of evidence along the south coast indicates how important and heavily trafficked the Vikings’ trade route between Dublin and Bristol was.
Although on the whole, archaeological data for Wales is limited, some scholars suggest there were various areas of Viking settlement along the south Wales coast. Further evidence to support this view comes from looking at Viking activity not just in Wales but also on the Irish side of the Irish Sea. In fact a series of settlements along the coast may have been necessary to the Viking traders moving around the Irish Sea.
The idea of a series of settlements along the coast of south and east Ireland began with the reinterpretation of an existing archaeological site on Beginish Island, Co. Kerry. Sheehan et al proposed there was a long-lived settlement there that functioned as a Viking Age maritime waystation. The house types excavated at the site are similar to those of Viking Dublin and the finds are as sophisticated as those found in the tenth-century towns. Considering the Beginish site in the context of its location situated on the route between Viking Cork and Limerick, it seems hardly possible that the strategic importance of the island, as a natural heaven for supplies, shelter and repairs, could have been overlooked by the Vikings.
Way-stations such as Beginish would have been essential during the Viking Age. From written sources we know that as far as possible Vikings would travel by day and hug the coast so they at least needed somewhere to overnight. It has been calculated that a typical Viking boat could only be rowed 36 nautical miles per day in bad weather. Therefore a whole series of these waystations would have been necessary to act as havens in times of bad weather and also as places to rest or carry out repairs to ships.
Sheehan et al suggest the existence of Scandinavian settlements all along Ireland’s western coast, linking Limerick all the way to Dublin and using historical and toponomastic evidence have identified a dozen or so possible way-stations. So if we accept this premise of the necessity of a series of way-stations along the coast between the major trading towns and apply it to Wales, using the data already collected, a series of probable areas of Viking settlement can be identified. With its wide natural harbour, Swansea would be an obvious one of these waystations in south Wales.
If Swansea began life as a small way-station such as that on Beginish Island, unlike Beginish, Swansea had a rich hinterland and through trading with the local Welsh a settlement could have quickly grown into an important trading town. As this was such an obvious advantage to the Welsh princes it would be unlikely to involve hostility and warfare of the type to be recorded in the written record.
Comparisons with Viking towns in Ireland
Another very important strand of evidence to look at in determining Swansea’s origin is the town itself. It was through plan analysis as part of the City Witnes project that the idea of a Viking origin for the town resurfaced. Viking towns often have a distinctive plan, so does Swansea follow this? To investigate this, the layout of Viking towns in Ireland, namely Dublin, Cork, Limerick, Waterford and Wexford, are examined and compared with each other here, and with Swansea, to see how far Swansea fits the model. There are three main areas for comparison; location, layout and defences.
The Irish Viking – or ‘Hiberno-Norse’ – towns named above were chosen for comparison for two reasons. Firstly due to the lack of urban development in Ireland before the Vikings’ arrival these towns are entirely ‘Viking’ in character and can be thought of as a blueprint of what Scandinavians would have thought to be an ‘ideal town’. Secondly and perhaps more importantly, if Swansea was founded by Vikings then it was most likely by the Hiberno-Norse from the major trading towns in Ireland. Examining the Viking elements of these towns is made difficult by the differing amounts of excavation undertaken in the various towns, the variety of the nature and location of the sites which has led to different degrees of preservation and there has also been a general lack of syntheses from the individual towns.
All of the Irish Viking towns started out as ‘longphort’ or ship-bases and then began to take on the functions of trading and manufacturing enclaves, quickly developing as active centres in a network of overseas trading. They are all very extremely similar in terms of their locations. The Viking towns in Ireland are usually located on relatively high ground overlooking the confluences of tidal river estuaries and their tributaries. Indeed, this is typical of Viking towns in general not just in Ireland. The choice of river also seems to have been important to the Vikings, generally they are sited on rivers which gave access to rich interiors or hinterlands. For Dublin this river is the Liffey and its tributary the Poddle, for Wexford it’s the confluence of the Slaney and it’s tributary the Bishop’s Water River, Limerick is sited just to the north of the confluence of the Shannon and it’s tributary the Abbey River, Waterford is situated on a triangular promontory bounded on the north by the river Suir and the south-east by marshy ground on either side of the St John’s river. Cork is slightly different in that it lies on an island in the River Lee.The reason for these locations seems to be twofold, sheltered mooring and defensive. Swansea’s location fits in with this pattern very well; it is situated on high ground not far along the River Tawe, a fairly large and navigable river, within easy reach of the open sea and has a large and sheltered harbour.
The possible layout of each of the Viking towns in Ireland has been modelled in the GIS using ArcMap and simplified plans produced, reproduced here. These plans are based on excavation reports, historic maps and previous scholarly inference. In no case has a complete town plan from the Viking age been recovered and mapped. The areas which have been excavated are small and they do not necessarily give us much information about the specifics of the towns during the Viking age. Therefore, it is important to remember that these plans are just models and so conjectural.
From examining the layouts of Viking towns in Ireland they seem to have consisted of one or two main streets, parallel to the shore or a riverbank, with lanes running back from it at right angles. The streets tend to follow the natural contours of the locations chosen. This is also the case in Swansea, where the suggested Viking area of the town consists of one main street running parallel to the river with lanes running back at right angles. Its layout does not look out of place among the Hiberno-Norse towns; in fact it appears very similar to the plan of Viking Limerick, as can be seen here.
Another important aspect of the layout of the towns is their size. Limerick and Cork are similar in size to each other as too are Waterford and Wexford. Unsurprisingly Dublin is by far the largest, for it was the most important trading town for the Vikings of the Irish Sea. Swansea is slightly smaller in area than the Irish towns; however, this is probably to be expected as it would never have been as important a site as the Irish examples. Only one Viking street in Ireland has been excavated: Peter Street in Waterford. Here, sixteen metres of the original surface was uncovered and had a maximum width of 3.6m. Estimates for Swansea street-widths calculated from the GIS work indicates a similar range.
At both Dublin and Waterford excavations have shown the presence of defensive stone and earthen walls encircling the towns during the Hiberno-Norse period. In the written records Giraldus Cambrensis uses the term murum to describe Wexford’s and Dublin’s defences and he also uses the same term for the town walls of Waterford and Limerick which implies that they were also defended by stone walls before the coming of the Anglo-Normans in the twelfth century. However, it seems clear that Viking Cork had no town walls of stone. This is possibly because its position on an island may have been thought of as enough of a defence. At Swansea, too, there are no tangible indications that there were pre-Norman defences, unless those of the castle and around Wind Street were set out on earlier alignments.
So was Swansea founded by the Vikings? The evidence for Viking activity in Wales certainly puts forward a highly convincing argument for a lot of Viking influence in and around Swansea along the adjacent coasts. The theory that Vikings used a series of waystations along their trading routes adds much weight to the idea of permanent settlements in Wales. Looking specifically at Swansea itself, it does not look out of place when compared to other Viking towns around the Irish Sea coasts.
Taken on their own none of evidence laid out here would constitute a credible argument, but taken together they appear consistent and mutually reinforce each other. The picture that builds up from looking at the evidence is that the founding of Swansea under the Vikings is plausible. After all, something drew the Normans to the site. The name Swansea was sufficiently well entrenched and regionally important to be taken over by the Anglo-Normans in around 1100. | <urn:uuid:0f7d342f-9ccf-4296-979e-a5512531d843> | CC-MAIN-2021-49 | https://medievalswansea.ac.uk/en/context/viking-swansea/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358153.33/warc/CC-MAIN-20211127073536-20211127103536-00027.warc.gz | en | 0.973814 | 3,661 | 3.34375 | 3 |
Introduction to Data Analysis with R
18 October, 2023
The real goal of this manual is not to teach the R language per se, but to allow the students to manage the basic concepts in order to be able to explore and analyze data using R. After this course the student will be capable of clearly understand the R code that someone else wrote and customize her own code according to the need. This means having the possibility to explore different and more complex solutions for the student’s problems by exploring new packages and paving the way to become a statistician!
I also created also an ad-hoc R playground accessible here. The R playground allows the student to exercise in order to reinforce her knowledge of R and data analysis. Exercises are fundamental in order to fix the knowledge acquired in class. The platform is structured in the same way as this manual in order to have a linear learning process. I finally want to remark that the code I provide is “my best and easiest version of the solution”, I hope you will appreciate it. In fact, with R, it is possible to do the same thing in 1000 different ways, and by looking at the internet you can have a confirmation of it.
This website is free to use, and is licensed under the GNU General Public License 2.0. If you’d like a pdf copy of the book, you can download it here. | <urn:uuid:9b7bb8c0-b8d3-4a3e-b592-8c6427474c84> | CC-MAIN-2023-50 | https://federicoroscioli.github.io/book/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100705.19/warc/CC-MAIN-20231207221604-20231208011604-00773.warc.gz | en | 0.957306 | 287 | 2.578125 | 3 |
Aids may be spreading more quickly in areas of sub-Saharan Africa due to malaria, a new study has found.
The region has a significant overlap of the diseases and HIV may be advancing malaria infection rates, according to a new study from Fred Hutchinson Cancer Research Centre and the University of Washington.
Malaria increases the viral load of someone infected with HIV, making it easier to pass on the virus to a sex partner.
And HIV-infected people are more prone to malaria infections because their immune systems are weakened, the study discovered.
Laith Abu-Raddad, an HIV research scientist in the Hutchinson Centre's Statistical Centre for HIV Research and Prevention, said: "While HIV/AIDS is predominantly spreading through sexual intercourse, this biological co-factor induced by malaria has contributed considerably to the spread of HIV by increasing HIV transmission probability per sexual act."
Mr Abu-Raddad designed a mathematical model based on HIV and malaria co-infection data from Malawi which made it possible for researchers to gauge the impact of malaria on HIV.
Approximately 24.5 million adults and children are infected with HIV in sub-Saharan Africa, according to Aids charity Avert.
© Adfero Ltd | <urn:uuid:26df92b3-bb42-47db-9191-1176968de8dc> | CC-MAIN-2017-26 | http://www.netdoctor.co.uk/healthy-living/news/a14597/malaria-may-increase-spread-of-hiv/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320489.26/warc/CC-MAIN-20170625101427-20170625121427-00260.warc.gz | en | 0.938642 | 248 | 3.265625 | 3 |
What is a thesis statement? A thesis statement is usually a sentence that states your argument to the reader. It strong titles for an essay appears in the first paragraph of an essay.
Why do I need to write a thesis statement for a paper? Your thesis statement states what you will discuss in your essay. Not only does it define the scope and focus of your essay, it also tells your reader what to expect from the essay. A thesis statement can be very helpful in constructing the outline of your essay. Also, your instructor may require a thesis statement for your paper. How do I create a thesis statement?
A thesis statement is not a statement of fact. It is an assertive statement that states your claims and that you can prove with evidence. It should be the product of research and your own critical thinking. There are different ways and different approaches to write a thesis statement. Start out with the main topic and focus of your essay. Make a claim or argument in one sentence. | <urn:uuid:9ec105de-c954-4eee-b9e0-d27bda5cce39> | CC-MAIN-2018-17 | http://tedxcollegeofeurope.eu/strong-titles-for-an-essay/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948285.62/warc/CC-MAIN-20180426144615-20180426164615-00231.warc.gz | en | 0.938248 | 199 | 3.234375 | 3 |
Today, trust is the premise of large numbers of our consistently cooperations and trades. We put cash into a bank believing that it is more secure there. We offer data to each other on the premise that they won’t impart it to another person without earlier authorization. We likewise put a ton of trust into bits of paper – cash, land records, exchange data, and so forth In any case, these bits of paper can be handily taken, fashioned, or changed. These days while we are moving towards the computerization of data, information can in any case be hacked and spilled without any problem.
Blockchain is a progression of records or information appropriated through an organization of PCs with the goal that no focal PC or data set holds the data, all things considered, each PC contains the information making it a completely straightforward framework. Why blockchain is so amazing is because of its unhackability. Each trade, exchange, or record went into an information base is time-stepped and checked by an enormous gathering of believed PCs before it is put as a square into a chain of different trades, exchanges, or records. After it is entered, the data “block” can’t be adjusted or erased on the grounds that that implies modifying or erasing the chain on all the PCs at the same time which is practically unthinkable.
The social effect that blockchain innovation can have is enormous and can be actualized toward tackling numerous issues the world faces today in an assortment of regions. In most non-industrial nations horticulture adds to a significant piece of their GDP; yet numerous ranchers endure because of absence of cash, absence of land, and absence of different assets vital for cultivating. Regardless of whether a rancher possesses a huge plot of land, it is frequently inaccurately recorded. Property titles likewise will in general be helpless to misrepresentation, just as exorbitant and work escalated to manage. Blockchain can be executed to digitize land and ranchers will at this point don’t need to fear somebody hacking the data set and submitting misrepresentation over land proprietorship as a wide range of record-keeping will turn out to be more productive.
The innovation won’t just reveal to you who presently possesses the land, however it can likewise disclose to you who recently claimed the land making it incredibly easy to follow the chain of title. Blockchain can accurately refresh the records of what bit of the land has a place with which individual and what amount was created from that land, permitting the ranchers to get the right measure of subsidizing fundamental.
Among numerous different regions, blockchain innovation can add to the medical services area. Support of Public medical care records is a consistent issue in numerous nations with its detachment to specialists and patients. By making a decentralized ‘record’ of clinical information, we can eliminate the paper trail in medical care and make patients’ clinical records accessible to the patients and specialists effectively and productively. It likewise dispensed with the dread of the clinical records getting lost. Such a change isn’t just advantageous yet fundamental where specialist patient privacy is getting progressively significant.
Presently, blockchain is basically utilized in account. Blockchain can precisely record the exchange among individuals, and in light of the fact that each move is with insignificant to no charge, it can possibly disturb the present monetary associations that bring in cash by charging an expense for every exchange or move made. This makes what is known as a distributed organization, where an outsider isn’t needed for an exchange to happen.
In the monetary world this means if an individual needs to buy something, normally the bank and the spot/site from which you’re purchasing, will take a small portion of what you’re paying. Also, in light of the fact that there is no expense for an exchange in blockchain or the exchange charge is infinitesimal contrasted with the exchange esteem, most if not all the cash goes straightforwardly to the maker or wholesaler of the item.
A similar rationale can be applied to the music business too. Truth be told, it is now being executed today. As opposed to an individual buying a melody through a real time feature like Apple Music or Spotify, they will pay straightforwardly to the craftsman and get the rights to tune in and utilize the music. This dispenses with the requirement for a ‘center man’ and makes each exchange just between 2 elements.
Always prepare before you make a choice. There is so much info about lighthousetreatment at https://lighthousetreatment.com
check out this page for a dependable seller that will give you the domestic violence attorney you’re looking for quickly and easily. | <urn:uuid:69bed777-87bd-4983-86d3-856736af7cbd> | CC-MAIN-2021-39 | http://nevadalawenforcement.info/author/admin/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058263.20/warc/CC-MAIN-20210927030035-20210927060035-00531.warc.gz | en | 0.937998 | 953 | 2.546875 | 3 |
I seem to be obsessed with submarines, which isn’t something I realized until now. There’s nothing worse than a stealth obsession.
Anyway: sound has a speed. That speed depends on the properties of the medium. The speed of sound in air is about 330 meters per second. The speed of sound in olive oil is 1430 meters per second (Yes, somebody measured that, and here’s the proof, along with some other handy tables of speeds of sound in other materials). The speed of sound in aluminum is 6320 m/s. The speed of sound in beryllium is an amazing 12,900 m/s, which is not only faster than the International Space Station’s orbital velocity, it’s actually faster than Earth escape velocity.
The speed of sound in seawater is a much tamer 1500 m/s (the exact speed depends on depth (meaning pressure), temperature, and salinity). That got me thinking that, since I’ve abandoned the submarine car in favor of an actual submarine, why not make it a supersonic submarine?
There’s nothing in the laws of physics to stop me. There’s no physical reason that makes it impossible to move through water faster than the speed of sound in water. There are plenty of engineering reasons, but we’ll get to those in a second.
The interesting thing about moving supersonically in water is that water isn’t a gas. Air isn’t very dense, it’s compressible, and it doesn’t have many phase transitions readily available. It can liquefy if you compress it while keeping it cool, and it can turn to plasma if you compress it and let it heat up. But when you’re talking about supersonic vehicles, the air heats up rather than cooling down. It heats up a lot. The air around re-entering spacecraft turns into plasma.
Water, on the other hand, is much denser (pure water is about 1,000 kilograms per cubic meter), and compared to air, is almost incompressible. Water is about five orders of magnitude less compressible than air. This means that a whole slew of new phenomena happen in supersonic submarines that don’t happen in supersonic aircraft. The coolest one is cavitation.
Cavitation is what happens when, for one reason or another, the pressure on a volume of water drops below that water’s vapor pressure, or when something moves through the water so fast that the cavity in the water doesn’t have time to close around the object. There are all sorts of cool videos of cavitation on the Internet, but I think this is my favorite:
Ain’t that beautiful? Many thanks to The Slow Mo Guys and Smarter Every Day for filming that, and for doing exactly what I would have done if I had access to one of those slow-motion cameras.
Notice the large cavity that opens behind the bullet as it travels. The spherical cavity around the gun’s muzzle is from the blast of hot, escaping gas, but the sort of sausage-shaped bubble attached to the bullet is pure cavitation. The bullet slams the water aside so hard that, even though water is usually very good at closing voids within itself, it has no choice but to stand aside for a fraction of a second. For the brief period that it exists, that cavity is full of a little water vapor that evaporated from the surface and not much else, and as soon as the moving water has deposited its inertia in the stationary water around it, pressure wins out and makes the bubble collapse again.
But a cavitation bubble isn’t the same thing as a sonic boom. The bullet in that video was fired from a revolver. Since I don’t know the make of the revolver or what kind of ammunition it was using, I don’t know the muzzle velocity, but if we assume it was in the same class as a Ruger firing .357 Magnums, then the muzzle velocity would have been around 450 meters per second. Not faster than the speed of sound in water. Barely faster than the speed of sound in air.
Either way, we know that our supersonic submarine would cut quite a large hole in the water as it flew. (Flew? That doesn’t sound right. What is the right verb for a submarine’s movement? Somebody let me know. That’s gonna bother me now). It would also, true to acoustics, generate a sonic boom. I would guess that this sonic boom would be more than enough to rupture the eardrums of unlucky divers who happened to get in its way, and that the drop in pressure after the shock would probably create a whole swarm of smaller cavitation bubbles in its wake. And because the water that evaporated from the surface of the cavity would be moving roughly in the same direction as the cavity (relative to the submarine), the submarine would likely create a second, much slower-moving sonic boom in the water vapor. After the submarine passed, the cavity would expand to a maximum size, then slam closed, possibly heating the gases inside enough to glow. This is called sonoluminescence, and is very impressive:
After the collapse, you’d have a soup of very hot bubbles and very hot water vibrating and rising to the surface. The water would be hot from the collapse of the cavity. Here’s about what our supersonic sub would look like:
And, from a practical perspective, it would be hot for another reason. To break the speed of sound in water, you’d need the engine power of 4 Saturn V moon-rockets.
Yes, really. This comes from the basic drag formula I’ve been using all along:
drag force = (1/2) * (density of medium) * (velocity of object)^2 * (drag coefficient (depends on shape and texture of object)) * (projected or cross-sectional area of the object)
I have no idea where we’re going to get a rocket four times as powerful as a Saturn V. I guess we could just make the end of the submarine a parabolic reflector and drop antimatter out the back and ride the blast of steam, but I hear people get pretty upset if you go dumping antimatter in the ocean. Especially if they happen to be swimming behind you.
But that’s the least of our worries. At 1500 meters per second, the front of the submarine would be experiencing pressures ten times greater than at the bottom of the Mariana Trench. Not unsurvivable, but between the pressure of the water against the front of the hull and the cavitation going on around the back of the hull, the whole thing’s going to need to be a pressure vessel. That’s going to be one heavy submarine. While we’re pretending that a submarine-sized craft could produce 141 million Newtons of thrust for an extended period, why not just turn the bastard into a rocket? Besides, I’m afraid that if I tooled around underwater making watery sonic booms, I might upset an octopus, and I have a deep and inexplicable affection for octopuses.
But before we stop doing weird things underwater, there’s a question that demands to be answered: if our supersonic submarine would need four times the thrust of a Saturn V to travel through the water, how fast would the Saturn V itself be able to go underwater. Well, input some reasonable values into the drag equation, set the drag equation equal to 35 million newtons (the Saturn V’s first-stage thrust), and we have:
41.2 meters per second
92 miles per hour
148 kilometers per hour
The Saturn V is one of the most powerful rockets ever built. And, under ideal conditions, it could manage 92 miles an hour underwater. I have driven my car faster than that. A good baseball pitcher or cricket bowler can throw faster than that. I guess the people at NASA weren’t planning for the possibility that the space between the Earth and Moon might inexplicably be filled with seawater. The fools.
But, although 92 miles an hour is not a very impressive speed, especially by rocket standards, you have to admit, it’d be one hell of a sight to behold:
Now that‘s a fucking torpedo! | <urn:uuid:04100fdd-0a35-4d55-8319-1942a0044615> | CC-MAIN-2019-18 | https://sublimecuriosity.com/tag/saturn/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578596541.52/warc/CC-MAIN-20190423074936-20190423100936-00367.warc.gz | en | 0.946562 | 1,769 | 2.96875 | 3 |
There are hundreds of handwritten manuscripts of the New Testament. There are many small differences between these hand-written copies. Most of these variants fall into the category of “typos” which do not affect the meaning of the text, but occasionally some manuscripts have words or even verses that are missing from other copies of the New Testament.
Recent English translations fall into two general camps in their approach to the text of the New Testament. Some translations closely follow the so-called Textus Receptus (TR, Received Text) which was the basis of the King James Version. The so-called Majority Text (MT) is not identical to the Textus Receptus, but both reconstructions of the text rely heavily on late medieval manuscripts and are sometimes also called the Byzantine text type. Closely following this tradition results in a longer text of the New Testament.
The second major approach follows a critically reconstructed text which relies much more heavily on older Greek manuscripts with an emphasis on texts from Egypt, where there are more old texts that have survived because of the dry climate. This text type is sometimes called the Alexandrian text. This tradition is summarized in the critical editions of the New Testament known as the UBS/Nestle editions. Overall, it is this tradition that results in a shorter text of the New Testament.
In this brief FAQ we cannot go into the intricacies of the ongoing battles between these two schools other than to note that proponents of the TR/MT end of the spectrum argue that the Byzantine text type is the most carefully preserved text in the main line of transmission of the text throughout the church, and that the Egyptian type texts have significant corruptions and omissions. Proponents of the UBS/Nestle tradition argue that the Byzantine type texts have been amplified by a lot of scribal additions over the centuries.
The NIV, ESV, and HCSB are all translations in the UBS/Nestle tradition. These translations may occasionally follow a Greek text different from the text given preference in the UBS/Nestle text.
The New King James and some of its cousins are examples of translations in the Textus Receptus tradition.
Our approach to the text of the New Testament is to avoid a bias toward any one textual tradition or group of manuscripts. An objective approach considers all the witnesses to the text (Greek manuscripts, lectionaries, translations, and quotations in the church fathers) without showing favoritism for one or the other, since each of these has its own strengths and weaknesses as a witness to the text. In the New Testament, a fuller text than that of the UBS/Nestle should be weighed on a case by case basis because UBS/Nestle tends to lean too heavily toward the theory that the shorter text is the better reading. In general, as we examine significant variants, the reading in a set of variants that has the earliest and widest support in the witnesses is the one included in the text. The other readings in a set of variants are dealt with in one of three ways:
- A reading that has very little early or widespread support in the witnesses is not footnoted in order to avoid an overabundance of textual notes.
- A reading with significant early and/or widespread support but not as much early or widespread evidence as the other reading is reflected in a footnote that says, “Some witnesses to the text read/add/omit: . . . .”
- A familiar or notable reading from the King James tradition (e.g. the addition or omission of a whole verse) whose support is not nearly as early or widespread as the other reading can be reflected in a footnote that says, “A few witnesses to the text read/add/omit: . . . .”
In short, readings and verses that are omitted from UBS/Nestle-based versions of the New Testament, which have textual support that is ancient and widespread are included in our translation. If there are readings where the evidence is not clear-cut, our “bias,” if it can be called that, is to include the reading with a note that not all manuscripts have it. The result is that our New Testament is slightly longer than many recent translations of the New Testament. | <urn:uuid:e8dac697-cbe6-4fe8-a4b5-eca626772de6> | CC-MAIN-2022-49 | https://wartburgproject.org/sp_faq/10/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00585.warc.gz | en | 0.954993 | 881 | 3.1875 | 3 |
IMPACT OF THE ARABIC LANGUAGE AND CULTURE ON ENGLISH
THE OTHER EUROPEAN LANGUAGES
From the desert they
came - men filled with religious zeal and riding under banners
inscribed with the motto: “There is no god but God and Muhammad is
His messenger.” Leaving
the conquered Middle East and North Africa behind they landed on the
Iberian Peninsula where they planted their religion and language.
men of the Arabian desert were not the usual conquerors. The cultures of the countries they occupied were not
destroyed, as had been the fate of civilizations overwhelmed by
other victorious armies, but preserved.
Later these cultures were absorbed and enriched to form the
Arab-Islamic civilization which was to be mankind’s pathfinder for
the language of these men from the desert, was one of the most
important vehicles which carried this culture of the East to the
Europe of the Dark Ages. In
the deserts of Arabia, before the Islamic conquest, this Semitic
tongue had become a beautiful language of poetry.
In that barren and inhospitable land it had developed an
enormous vocabulary. For
any object to be found in their desert, the Arabs had many words.
poet had no trouble in rhyming his verses, for he had a large
storehouse of synonyms from which to draw.
Hence, Arabic became unmatched as a language of prose and
Arabs were proud of their language and believed it had no equal
among the tongues of mankind. As
befitting a proud people, they spent much effort trying to keep
their basic language pure. Even
after the Islamic conquests, when foreign influences began to
stealthily move in, scholars tried to stem this tide.
Omar S. Pound in his book Arabic and Persian Poems in
Arab prides himself on using the ‘mot juste’ and in
ancient times many an Arab scholar is reported to have travelled
great distances to find out the exact meaning of rare word used by
an obscure Bedouin tribe. Often
we read of guests from far-off landsbeing closely cross-examined on
the use and meaning of a particular word found only in the guest’s
Islam was established and moved out of its Arabian homeland, Arabic
was the language which carried its message.
Every converted Muslim wanted to learn the tongue of these
desert men for it was believed that Arabic was the mother of all
tongues first taught to Adam in Paradise.
Chejne in his work, The Arabic Language: Its Role in History,
writes that an Arab author, Ibn Manzur (14th century), in the
introduction of his book Lisan, states that God made
the Arabic tongue superior to all other languages, and enhanced it
further by revealing the Qur’an through it therefore making it the
language of Paradise. Ibn Manzur further relates a tradition of the
Prophet Muhammad who said: “They (people) loved the Arabs for
three reasons: “I am an Arab; the Qur’n is Arabic; and the
language of Paradise is Arabic.”
this pride of language did not stop the Arabs from enhancing their
tongue after the conquests. From
the newly-conquered peoples Arabic borrowed a whole range of
scientific and technical terms.
These words enriched the desert tongue with its many synonyms
to produce a world language ‘par excellence’.
after the Islamic conquests, Arabic emerged as a full-fledged
language of empire and an instrument of thought which was to last
well into medieval times. Perhaps there is no language in the world
today that has survived some 1,400 years in its original form as has
Arabic, molded in that century of Arab greatness.
our times, it is the only tongue in world where an ordinary person,
even Arabs who are semi- educated, can pick up a
of poetry written in the 6th century and understand its contents.
All of the anients languages have either died out or have
vastly changed, and all other languages came into existence long
the 8th to the 12th centuries, Arabic became the scientific language
of mankind. During this period anyone who desired to advance in the
world and become a skilled and learned man had to study Arabic, just
as in cur day English opens the door to technical and scientific
advancement for ambitious men and women.
these centuries more works were produced in Arabic at that time than
in all the languages of the world.
One of the many libraries in Cordova alone had some 400,000
volumes of handwritten manuscripts; this at a time when Europe was
in the middle of the Dark Ages, and washing the body was considered
a dangerous custom.
the Muslim regions of Spain the use of Arabic quickly spread.
By the 10th century elementary education was general
throughout Arab Spain. With
the exception of the very poor, all boys and girls attended school.
Unlike the Christian parts of Spain and the countries of
northern Europe, the vast majority of people were literate.
Arabic, the language of this literate population, reached
less than a century even the Christians living under Muslim rule
became so proficient in Arabic that they neglected their own
tongues. R. Dozy in Spanish
Islam indicates that the Christians were captivated by the
glamour of Arabic literature and that men of taste despised Latin
authors, and wrote only in the language of their conquerors.
He cites Alvaro, a contemporary writer, who deplores this
fact with these words:
fellow-Christians, he says, delight in the poems and romances of the
Arabs; they study the works of Mohammedan theologians and
philosophers, not in order to refute them, but to acquire a correct
and elegant Arabic style. Where
to-day can a layman be found who reads the Latin Commentaries on
Holy Scriptures? Who is
there that studies the Gospels, the Prophets, the Apostles?
Alas! the young Christians who are most conspicuous for their
talents have no knowledge of any literature or language save the
Arabic; they read and study with avidity Arabian books; they amass
whole libraries of them at a vast cost, and they everywhere sing the
praises of Arabian lore. On
the other hand, at the mention of Christian books they disdainfully
protest that such works are unworthy of their notice.
The pity of it! Christians
have forgotten their own tongue, and scarce one in a thousand can be
found able to compose in fair Latin a letter to a friend!
But when it comes to writing Arabic, how many there are who
can express themselves in that language with the greatest elegance,
and even compose verses which surpass in formal correctness those of
the Arabs themselves!”
The fact that the Arabic language was being preferred over
their own language by the non-Muslim inhabitants made it inevitable
that the impact of Arabic on the Spanish Romance languages would be
words began to move into the Spanish dialects, especially in the
scientific and technical fields.
This borrowing did not enter the Spanish and later the other
European languages only by chance or due to an enchantment with the
Arabic tongue, but as a result of European Christians trying to
emulate Arabic culture - the uppermost in the world of that era.
Year after year the borrowing of these words gathered
momentum until the time when Arab culture in Spain began to decay.
The sacred language of
Islam was very well suited to imparting its words to other
languages. Titus Burckhardt in his book The Moorish Culture in
tend to become poorer, not richer, with time, and the original
character of the Arabic language, unworn by time, reveals itself in
its very wealth of words and immense range of expressions.
It can describe one object with different words and from
different aspects, and possesses words in which different, allied
concepts are condensed, without ever being illogical.
This equivocal aspect of Arabic in the most positive sense of
the word, is without doubt what makes it so appropriate as a holy
tongue. ...According to Ibn Khaldun, Arabic is a perfect language
because it can not only be declined and conjugated, but because the
“what” and the “how” can be derived from an action - in
other words, nouns and adjectives can be derived from the verbs.
However, this is possible only because in Arabic, the
“doing” verbs are far more comprehensive than, say, in English.
Much of what we tend to express by using an adjective in
conjunction with the verb “to be”, such as “to be
beautiful”, “to be inside”, “to be outside”, is expressed
in a single verb in Arabic.”
the tenth century onwards Arabic words and terms entered the Spanish
dialects on a massive scale. This
rich vocabulary of Arabic words was a great stimulant in the
evolution of European thought.
When, in Toledo, after its re-conquest by the Christians,
Arabic works were translated into the European languages Christian
thinking was revolutionized and Europe was put on the path to
There is no doubt that many Arabic words entered numerous
European languages after these translations.
Although, through the centuries, western historians have been
reluctant to admit this great role the Arabs had in the evolution of
Christian Europe, Arabic words in European languages indicate that
this contribution was considerable.
in spite of the fact that after the re-conquest the Spaniards tried
to cleanse the Arabic words from their language, over 8,000 words
and over 2,300 place-names remain. However, Spanish and the other
European languages were not the only tongues enriched by Arabic.
Many other languages, specially in Muslim lands, are
saturated with Arabic words. 57% of Pushto, 42% of Urdu and 30% of
Persian can be traced back to the language of the Qur >an.
Spain was the principal point of the Arab impact, Arab influences
also spread to Europe from Sicily after its conquest and Arabization.
In addition, the Crusaders returning from the civilized Arab
East brought back to the Europe of the Dark Ages many new products
and ideas. After these soldiers of the cross returned, English and
other European languages were
with numerous words in the fields of architecture, agriculture,
food, manufacturing, the sciences and trade.
There is no doubt that many of the Arabic loan-words in the
languages of Europe had their origin in the vocabulary of these
it was only natural that the borrowing of words would travel from
east to west since in that epoch the Muslim lands the most advanced
in the world. In the
same fashion today, English being the language of industry and
science, its words creep into foreign tongues, so it was with Arabic
in the era of the Crusades.
of the northern Europeans took part in these religious conflicts. In
the main, the crusaders made their wars in the Middle East but
sometimes they unsheathed their swords in Sicily and Spain.
In any case, wherever these soldiers of the cross had contact
with the Muslims, they always became familiar with new products
produced in the richer Arab lands.
As the taste for these products grew, merchants travelled to
the Arab lands for trade. Hence,
both merchants and warriors were instrumental in the transmission of
Arabic words into the European idioms.
was one of the European languages which received an inflow of words
from this early contact with Spain, Sicily and the Arab East. >From these lands it was a continuing process, the flowing
in of new words.
among others, French and Portuguese were instrumental, as a medium,
in some of the transmissions. From
the 18th to the 20th century, when Great Britain expanded its empire
to the four corners of the world, numerous other words entered
English by way of Africa, the Middle East and India.
Even after colonialism was no more, the inflow of words did
not come to a halt, but has continued until the present day.
process of borrowing Arabic words which began in the early Middle
Ages has done much to enrich the language of Shakespeare.
If, today, we leaf through the English dictionaries, we will
find that words of Arabic origin are to be found, here and there,
under every letter of the alphabet.
It will surprise many to know that some scholars have made a
study of the Skeats
Dictionary and found that Arabic is the seventh on the list of
languages that has contributed to the enrichment of the English
Only Greek, Latin, French, German, Scandinavian and the
Celtic group of languages have contributed more than Arabic to the
are over 3,000 basic words, along with perhaps some 4,000
derivatives, of Arabic origin or transmitted through Arabic in the
English language. Although
many of these words are rarely used, they nevertheless are to be
found in the English dictionaries.
There is no doubt that they have become English words and are
employed in some aspect of the language.
However, the Arabic derived words in the working tongue are
not insignificant. There
are some 500 words which impregnate our everyday speech.
Arabic-loan words employed in the everyday vocabulary indicate that
in almost all areas the Arabs contributed to the English way of
life. Some examples of
these common words with their Arabic origin will give an insight
into this contribution.
find Arabic words or Arabic transmitted words in all facets of
European life. In
architecture we have: alcove (al-qubbah), ogive (al-jubb) and
the abode of animals and birds: albatross (al-qadus), camel (jamal), gazelle
(ghazal), giraffe (zarafah), jerboa (yarbu), monkey (maymum), nacre
(naqqarah), popinjay (babbaghgha’), and tuna (tun);
the clothing and fabric trade: caftan (quftan), camlet (khamlah), chiffon (shaff),
cotton (qutn), fustian (Fustat), gauze (Ghazzah), jupe (jubbah),
macrame (miqramah), mohair (mukhayyar), muslin (musil), sandal
(sandal), sash (shash), satin (zaytuni), tabby (>attaabi=)
and taffeta (tafata;
the field of chemicals, colors and minerals:
alkali (al-gili), amalgam (al-jama), antimony (al-uthmud), arsenic
(al-zirnikh), azure (lazaward), bismuth (uthmud), borax (bawraq),
camphor (kafur), cinnabar (zinjafr), carmine (girmizi), crimson (qirmiz),
elixir (al-iksir), gypsum (jibs), kale (qili), lacquer (lakk), musk
(misk), myrrh (murr), natron (natrun) realgar (rahj al-ghar),
scarlet (siqillat), soda (suda), talc (talq) and zircon (zarqun);
the area of food and drink: alcohol (al-kuhl); apricot (l-barquq),
artichoke, (al-khurshuf), arrack (caraq),
banana (banan), candy (qand), cane (qand),
caramel (qanah), caraway (karawya), carob (kharrub),
coffee and cafe (qahwah), cumin (kammun), jasmine (yasmin),
julep ( julab), kabab or kabob (kabab), lemon,
lemonade and lime (laymun), mocha (mukha), orange (naranj),
saffron (za faran), salep (tha lab),
sesame (simsim), sherbet (sharbah), sherry (Sherish
- the Arab name of the city of Jerez de la Frontera in Andalusia),
spinach(isbanakh), sugar (sukkar - borrowed by nearly
every language of Europe from Arabic), sumach (summaq),
syrup, (sharab), tangerine (tanjah) and tarragon (tarkhun);
the sphere of geography and navigation: admiral (Amir al-bahr), alhambra (al-hamra),
canal (qanah), Gibraltar (Jabal Tariq), monsoon (mawsim), safari (safarah),
sahara (Sahara), saracen (sharqiyin ), Trafalgar (Taraf al-ghar),
typhoon (tufan), xebec shabbak) and Zanzibar (Zanjibar);
the home and daily life: adobe (al-tab), cable (habl), calabash (khirbiz),
carafe (gharafa), carboy (qirbah), divan (diwan), genius (jinn),
jar (jarrah), kismet (gismah), massage (massa=,
mattress (matrah), mulatto (muwallad), nabob and Nob Hill (na=ib),
ottoman (uthman) and sofa (suffah);
the land of music and song: fret (fard), guitar (qitar), hocket (iqaat),
lute (ud ), tabor and tambour (tanbur), timbal (tabl) and troubadour
the theatre of the macabre (magbarah): assassin (hashashin), ghoul (ghul), mafia (mu
afi), mumy (mumiya=) and massacre (maslakh);
the realm of personal adornment: amber (anbar), attar (atr), cameo (chumaban),
civit (zabad), henna (hinna=),
lapis lazuli (lazaward), mask and mascara (maskharah), sequin (sikkah)
and talisman (tilasm);
the world of plants: alfalfa (al-fisfisah), anil (al-nil), apricot (al-barquq),
carob (kharrub), crocus (kurkurn), hashish (hashish), lemon and lime
(laymun), jasmine (yasmin), lilac (laylak), orange (naranj),
safflower (asfar) and tamarind (tamr hindi);
the technical confines of science and mathematics:
almanac (al-manakh), alchemy (al-kimiya=), alembic (al-inbig), algebra (al-jabr),
algorism (al-khuwarizmi), average (awar), calibre (qalib), carat (qirat),
chemistry (al-kimiya=) and both cipher and zero (sifr);
the domain of the heavens: auge (awj), azimuth (al-samt) ,nadir (nazir),
zenith (samt al-ra=s) and the stars: Aldebaran (al-dabaran),
Achernar (akhir al-nabr), Algol (al-ghul), Alphard (al-fard), Altair
Betelgeuse (bayt al-jawza=), Deneb (dhanab), Fomalbaut (fam al-hat),
Menkar (minkhar), Merak (marikh al-dubb), Mizar (mi=zar),
Rigel (rijl) and Vega (al-nisr al-waqi=);
the arena of sports: racket (rah) and tennis (tinnis); and
trade and commerce: arsenal (dar al-sinaah), bazaar (bazar), cafe (qahwah),
cheque (sakk), dragoman (turjuman), magazine (makhzan), ream (rizmah),
tare (tarhah), traffic (tafriq) and tariff (tarifah).
Arabic-loan words themselves are only one aspect of the Arabic
impact on English. In addition, there are numerous English words and
terms which are a literal translation of the Arabic.
Amygdala is a direct rendering of the Arabic al-lawzatan;
dura mater and pia mater are versions of al->umm al-sulbah and >umm
raqiqah respectively; primum mobile is literally al-muharrak al->awwal;
sine is the English version of jayb; and surd is a rendering of
the Arabic contributions as reflected in the Arabic-loan words had
an impact on western society, but the introduction of the Arabic
numerals with the decimal system revolutionized life itself.
There is no question that before their use became prevalent
in Europe, the clumsy Roman numerals had retarded the evolution of
the 13th and 17th century, Latin Europe became gradually acquainted
with Arabic numerals. This
was mostly accomplished through the trade between the Christian and
took five long centuries before Christian Europe would fully accept
these numerals, introduced by the Arabs - the custodians of the
knowledge of antiquity. However, when they were accepted, Europe
left the dark ages behind.
translation of the works of Al-Khuwarizmi - the greatest of Arab
mathematician who invented algebra; Jabir ibn Aflah of Seville;
Masluma al-Majriti, whose name is taken from the Arabic name for
Madrid (Majrit); and others in the 12th and 13th centuries, by
Adelard of Bath, Robert of Chester, Gerhard of Cremona and Johannes
Campanus, was instrumental in putting Europe on the road to
field of Arabic contributions which has been barely explored are the
English words, not generally considered of Arabic origin but which
could be derived from, or transmitted through, Arabic.
They are numbered in the hundreds.
examples with their possible Arabic origin will tantalize a
researcher seeking the true roots of words: baboon (maymum),
balsam (balasan), buckram (abu qiram), caravan (the
Persian qanirawan, through Arabic) and risk (rizq).
These are only a few the list is endless.
in the context of contribution of other tongues to the English
language, Arabic, in the past, has had impressive record.
However, even in our times this contribution has not stopped.
The flow of Arabic words into English continues .
In the last few years some of the Arabic words that have
entered the language of Shakespeare are: Ayatollah, from the Arabic ayat-Allah,
burghul or burghal - burghul; couscous - kuskus;
falafel - filafil, fatwa - fatwa, halvah - halawa,
Hezbollah or Hizballah - hizb Allah,
hummus - hummus, intifada - intifada,
al-Jazeera - al-jazira, kibbe or kibbeh - kubbah,
leban - laban, shish kebab B shish kabab,
al-Qaida or al-Qaeda - al-Qiyada, taboula - tabbulah,
and Taliban or Tallaban - Taliban, have become part of
the English vocabulary.
light of the sample of words, which have been considered, it becomes
clear that Arabic, in the past and to a much lesser degree at
present, has contributed and is continuing to contribute, although
on a smaller scale, to the advancement of mankind. This makes it
quite evident that a language which the Arabs and, in fact, all
Muslims, consider to be ‘the language of paradise’ will continue
its worldly role. | <urn:uuid:50f604a0-6a3b-4ff2-a08b-924f850db592> | CC-MAIN-2013-20 | http://www.syriatoday.ca/salloum-arab-lan.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00073-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.913911 | 5,304 | 3.703125 | 4 |
Soaps are sodium or potassium salts of long chain carboxylic acids.
The Soap molecule has two ends with different properties.
Hydrophillic end :
Hydrophillic end dissolves in water
Hydrophobic which dissolves in hydrocarbons.
Cleaning action of soap:
The cleaning action of soap is due to micelle formation and emulsion formation. Inside water a unique orientation forms clusters of molecules in which the hydrophobic tails are in the interior of the cluster and the ionic ends on the surface of cluster. This results in the formation of micelle.
Soap in the form of micelle cleans the dirt as the dirt will be collected at the centre of micelle.
This property of soap makes it an emulsifier. The dirt suspended in micelles is easily rinsed away. This is known as cleaning action of soap.
In hard water soap don't give lather .Hard water contains calcium and magnesium salts, which combine with soap molecules to form insoluble precipitates known as scum.
Detergents have almost the same properties as soaps but they are more effective in hard water. Detergents are generally ammonium or sulphonate salts of long chain carboxylic acids. The charged ends of these compounds do not form insoluble precipitates with the calcium and magnesium ions in water. | <urn:uuid:04826e72-ccb5-48db-b2eb-a8fe0c2f90fb> | CC-MAIN-2014-41 | http://www.learnnext.com/nextgurukul/wiki/concept/CBSE/X/Science/Soaps-and-Detergents.htm | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132007.18/warc/CC-MAIN-20140914011212-00332-ip-10-196-40-205.us-west-1.compute.internal.warc.gz | en | 0.93063 | 284 | 3.75 | 4 |
On April 5, 2021, the College of the Environment welcomed Moriba Jah, associate professor at The University of Texas at Austin, to present his lecture Near-Earth Space: The Lost Ecological Pleiad. The Earth has a number of ecosystems we can call an ecological Pleiades. To date, these ecological Pleiades have been constrained to the land, oceans, and air. However, there is an additional ecosystem, near-Earth space, which has yet to be globally acknowledged. To this end, Jah’s lecture focused on near-Earth space as a “lost” ecological Pleiad, comprised of “some abiotic objects such as micrometeoroids, a few humans in the Space Station, and a large number of anthropogenic space objects as a consequence of our technological developments.” In his lecture, Jah explored the known evolution of this Lost Pleiad, and underscored the need for its environmental protection.
Jah opened the lecture by giving context to the sheer number of human-made objects in Earth’s orbit. The assumed population of space objects is roughly half a million, ranging in size from a speck of paint to the International Space Station. Jah stressed that of that assumed half million, we can only measure about 30,000, and the total space population is unmeasurable with the precision of current instruments. He also noted that out of those potential 500,000 objects, only 3,500 are functioning, saying, “Much less than 1 percent of everything up there that we’re responsible for actually serves a purpose.” | <urn:uuid:f9c6f814-0f42-4a1e-a722-76e752ec71a4> | CC-MAIN-2021-43 | http://coexist.blogs.wesleyan.edu/category/events/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00195.warc.gz | en | 0.954815 | 330 | 3.375 | 3 |
January 8, 2013
LONG BEACH, Calif.—Look up on a starry night. Almost every one of those tiny pricks of light is home to an unseen world. Our Milky Way galaxy is full of planets—100 billion or more—and many of those planets are Earth-like rocks (although our solar system still appears to be an oddball). Such are the major findings that astronomers are announcing here at the semi-annual meeting of the American Astronomical Society, where the halls are crackling with excitement as we all bear witness to a hidden, rocky universe beginning to coalesce out of the darkness.
The great explosion of planetary information is coming courtesy of the Kepler telescope, which has been peering at one small slice of the night sky to search for momentary dips in brightness that happen when a planet passes in front of its host star. Kepler scientists announced that they have found an additional 461 planet candidates, bringing the total number of such Kepler-found candidates to 2,740. (These objects all look like planets, but could turn out to be something else, such as a double-star system, upon further examination. "It's likely that 90 percent or more of these candidates are going to be bona fide planets," according to astrophysicist Natalie M. Batalha of NASA Ames Research Center.)
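To get a feel for why this is hard, here is a rough sketch of the signal Kepler hunts for. For a planet crossing its star, the fractional dip in brightness is about (R_planet / R_star)^2, a standard geometric relation; the sun-sized star and the planet radii below are illustrative assumptions, not figures from the Kepler team.

```python
# Sketch: fractional transit depth ~ (R_planet / R_star)**2, assuming the
# planet's disk passes fully in front of a sun-sized star.
R_SUN_KM = 6.957e5      # assumed stellar radius (sun-sized star)
R_EARTH_KM = 6.371e3
R_JUPITER_KM = 6.991e4

planets = [
    ("Earth-size", R_EARTH_KM),
    ("super-Earth (2x Earth)", 2 * R_EARTH_KM),
    ("Jupiter-size", R_JUPITER_KM),
]

for name, radius_km in planets:
    depth = (radius_km / R_SUN_KM) ** 2
    print(f"{name:24s} blocks {depth * 100:.4f}% of the star's light")
```

An Earth-size planet dims a sun-like star by less than a hundredth of a percent, which gives a sense of the photometric precision Kepler needs.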
Most of Kepler’s new planet candidates aren’t the big Jupiter-like planets that early planet scans were sensitive to—they’re Earth-like planets or so-called “super-Earths,” planets about twice the diameter of Earth.
Of course, Kepler can only find planets that are aligned just so—the planet must pass directly between its host star and us. There’s no reason to think that most planets are lined up this way. “For every transiting planet that we identify there are 10 to 100 more that aren’t transiting,” said Batalha. The question becomes: how many planets are out there that we don’t see? The answer: lots.
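The "10 to 100" correction follows from simple geometry. For a circular orbit viewed from a random direction, the chance that the planet transits is roughly R_star / a, where a is the orbital distance; the sketch below applies that standard relation with an assumed sun-sized star, so the specific numbers are illustrative.

```python
# Sketch: geometric transit probability ~ R_star / a for a circular orbit
# seen from a random direction (sun-sized star assumed).
R_SUN_KM = 6.957e5
AU_KM = 1.496e8

def transit_probability(a_au, r_star_km=R_SUN_KM):
    """Approximate chance that a randomly inclined orbit shows transits."""
    return r_star_km / (a_au * AU_KM)

orbits = [(0.05, "hot close-in planet"),
          (0.4, "Mercury-like orbit"),
          (1.0, "Earth-like orbit")]

for a_au, label in orbits:
    p = transit_probability(a_au)
    print(f"{label:20s} a = {a_au:4.2f} AU: p = {p:.4f} (1 in {1 / p:.0f})")
```

Inverting those probabilities gives correction factors of roughly 10 for the close-in planets Kepler sees most easily, up to a couple of hundred for Earth-like orbits, which brackets the range Batalha quotes.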
“Almost all sun-like stars have a planetary system,” said Francois Fressin, an astronomer at the Harvard-Smithsonian Center for Astrophysics who has been exploring statistical models of Kepler data. “If you travel to a sun-like star it will have a planet. We can’t say if it will be welcoming, but it will have a planet.” What would an unwelcoming planet be? Something very close to its star, and therefore very hot. Those close-up planets whip around their stars in a matter of days or weeks, which means that Kepler has seen them cross in front of their stars many times by now. Fressin’s recent work has shown that about one in six stars is home to a rocky, Earth-like planet that orbits its star within 85 days or less. For longer-period planets, we just have to wait for more observations.
What about Earth-like planets with Earth-like orbits? Of the 461 new planet candidates, 51 of them are in the so-called “habitable zone,” the Goldilocks region around the star that’s at just the right temperature for liquid water to exist. And one of these new planet candidates has all three of the qualities we’re looking for in a twin Earth: it’s in the habitable zone, it’s only 1.5 times the size of Earth, and it’s orbiting a sun-like main sequence star.
This last attribute is important, because most stars are not, in fact, like our sun. Most stars in the galaxy are so-called red dwarfs–small, dim, cool stars that are our galaxy’s “silent majority,” according to John Johnson of the California Institute of Technology. Red dwarfs make up 70 percent of all stars in the galaxy, and these are absolutely full of planets, says Johnson–on average, about one per star. Summing up all the red dwarfs in the galaxy and all the planets that they host, we can estimate that the Milky Way is home to at least 100 billion planets. “Our solar system is rare among the galaxy’s population of planetary systems,” says Johnson, “because our star is not a red dwarf.” But with 100 billion possibilities to choose from, who would bet that there’s not one like us peering back through that darkness?
With a little help from a scientist looking for a way to clean car engines, a physician believes he can explain the confounding paradox behind why homeopathic medicine gets more potent as it's diluted. Homeopathic medicine, discovered by a German physician more than 200 years ago, espouses many concepts seen in other forms of alternative medicine – namely, that the body can and knows how to heal itself.
"Everybody's fine and hunky dory with [homeopathic concepts] until they come to the part where the more you dilute and shake the substance, the more powerful it gets and the deeper it reaches," said Dr. Bill Gray, author of Homeopathy: Science or Myth.
"That doesn't make sense [for most practitioners], because we're used to thinking in a chemical sense."
Just how the body reacts to varying dosages of medicine is still being debated. Pharmaceutical and herbal medicines both operate under the notion that more is more; whether it's aspirin, Prozac, or Echinacea, the more milligrams per dose, the quicker the cure.
Not so in homeopathy. The "law of infinitesimals" states that the more you dilute a drug, the more potent it gets. Arnica, for example, can address a sprain or bruise in low potencies. In high potency, it can adversely affect a person's mental state.
Remedies are made with one part of the material, which can be a chemical, element, plant, or even poison, added to nine or 99 parts water. The water is vigorously shaken after the material is added. Then one drop of that water is added to another nine or 99 drops of water, a process called "succussing."
The mixture is again shaken and the process repeated. After repeating this hundreds or even thousands of times, the water is poured onto sugar pellets, which is how the medicine is administered.
This intense watering down conflicts with accepted laws of chemistry, namely Avogadro's Number, which implies that a substance becomes untraceable once it is diluted past the point where a single molecule of the chemical remains.
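The arithmetic is stark. Consider, as an illustration, a remedy diluted 1-in-100 thirty times over (a "30C" potency in homeopathic notation): the original material is diluted by a factor of 100^30, or 10^60. Since a mole of any substance contains only about 6 x 10^23 molecules, the odds of even a single molecule surviving run out long before the thirtieth dilution step.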
Critics point out that homeopathic medicines are diluted far beyond Avogadro's Number. The thesis of Gray's book is that water gains structure through the whole succussing process.
"The point is, now that modern research shows that water that's prepared homeopathically is altered in its structure, this water does actually alter tissue cultures, organ function, and entire animals," said Gray, who has been practicing homeopathy in the San Francisco Bay Area for 29 years.
Validation of the dilution process came in a roundabout way, thanks to research by Shui Yin Lo, a former visiting associate professor in the chemistry department at California Institute of Technology. Lo was performing experiments on how to improve car engine efficiency when he made the discovery.
Lo, who now is the director of research and development at American Technologies Group, found that water molecules, which are random in their normal state, begin to form a cluster when a substance is added to water and the water is vigorously shaken – the exact process homeopaths use to create their medicine.
Lo said every substance exerts its own unique influence on the water, so each cluster shape and configuration is unique to the substance added. With each dilution and shaking, the clusters grow bigger and stronger. This water, which homeopaths call "potentized," is considered "structured water," because the water molecules have taken on a shape influenced by the original substance.
The clusters start to assume a form that mimics the structure of the original substance itself. So even though the chemical can no longer be detected, its "image" is there, taken on by the water molecules.
"If these clusters were unique to the original solute, and the observations are true that they can perpetuate themselves the more they are diluted or shaken, then the original material becomes irrelevant," Gray said.
The American Medical Association, which stated in its charter it was formed "to stamp out the scourge of homeopathy," declined to comment on Gray's book, homeopathy, or alternative medicine.
"We just believe [alternative medicine] needs to be studied more and patients should keep their physician in the loop. But we don't talk about one alternative therapy over another," said an AMA spokeswoman.
Dr. Richard Sarnat, a medical doctor and president of Alternative Medicine Inc. in Highland Park, Illinois, said the theory of clustered water has been around for some time, but up until now it hasn't been proven. The book could help further the acceptance of homeopathy by explaining how it works.
"I think year by year, these types of ideas are more readily accepted into the medical community as a whole," Sarnat said. "Acupuncture in the 1960s was considered voodoo. Given the full range of things we've researched in alternative medicine, [electromagnetics] is no bigger a stretch than any other phenomenon under investigation." | <urn:uuid:25991850-bd97-43ca-aea9-b3248f5fa262> | CC-MAIN-2019-35 | https://www.wired.com/2000/03/homeopathy-dilute-and-heal/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318986.84/warc/CC-MAIN-20190823192831-20190823214831-00477.warc.gz | en | 0.961396 | 1,026 | 2.890625 | 3 |
Synesthesia, a condition characterized by one sensory experience generating another - so that shapes have tastes, for instance - is estimated to affect between 1 in 200 and 1 in 2,000 people. The most common form involves seeing specific letters or numbers (graphemes) in specific colors. For these individuals, known as grapheme-color synesthetes, an ordinary "5," in black ink on a white background, always appears red or a "k," greenish-blue.
According to research published in the March 24 issue of Neuron, not only do these grapheme-color synesthetes really see the colors they report, as measured in behavioral tests, but functional magnetic resonance imaging (fMRI) of their brains also shows activation in the color-selective regions of the cortex when they view black-and-white letters or numbers.
The results, say researchers from the University of California, San Diego and the Salk Institute for Biological Studies, lend support to the hypothesis that cross-activation of adjacent brain regions is the mechanism underlying synesthesia.
"We specifically designed our experiment to test the cross-activation hypothesis we initially advanced in 2001," said V.S. Ramachandran, a coauthor of the study and director of the Center for Brain and Cognition at UC San Diego. "The fMRI findings quite clearly demonstrate cross-activation - in this case between the number/letter region and color region of the fusiform gyrus in grapheme-color synesthetes."
When control subjects viewed numbers or letters, fMRI scans showed increased activity (increased blood-flow) only in the grapheme-selective regions of their brains, said Edward Hubbard, former UC San Diego graduate student and first author of the paper. Meanwhile, the hV4 area, a part of the brain network sensitive to and specialized for color perception, did not. In synesthetes, however, both regions "lit up."
In other words, in the synesthetic brain, the experience of a letter or number was activating both the standard, predictable area and "cross-activating" the color-selective area.
At the beginning of the project, the team first set out to determine whether synesthetes really see their reported colors. They started with behavioral measures. One test, for example, presented the subjects with a pattern of graphemes embedded in a matrix of other, distracting graphemes; 2's that formed a triangle, say, surrounded by 5's. If a synesthete saw 2's as a particular color, the triangle shape would pop out to them from an otherwise black-and-white field. Thanks to their synesthesia, went the thinking behind the task, synesthetes would be able to identify the embedded shapes more quickly than normal controls.
Most of the study's synesthetes (five of six) did indeed outperform control subjects in this task. But synesthetic colors were not as "strong" and not as effective an aid as real colors. Moreover, not all the synesthetes performed equally well.
Even more differences emerged among synesthetes when trying to identify letters or numbers in a crowded display in their peripheral vision.
These differences had been observed by scientists before, but it was difficult to gauge whether these were due to variance in the synesthetes or were primarily artifacts of differing research methods, Hubbard said.
The current study, the first to use both behavioral measures and neuroimaging in the same individuals, has allowed researchers to discern actual differences among synesthetes and to discover important correlations: The fMRI scans reveal that the stronger the activation of color-selective hV4 in a synesthete, the stronger the color perception and, consequently, the better the behavioral performance.
"Synesthetes are likely to be far more variable that previous research has suspected," Hubbard said. "Further work in the field will need to address specific types of synesthetic experience."
Two such types, the researchers said, might be "higher" synesthetes, whose colors are driven by the concept of a grapheme, and "lower," whose colors are driven by the appearance, or percept, of a grapheme. Ramachandran - who is beginning to image synesthetic brains with the Diffusion Tensor Imaging method (which captures the pathways of axons, the brain's connecting cells or "wires") - plans to work with higher synesthetes to see if they have not only cross-activation in the angular gyrus but also more wiring.
But why trouble with the strange, mixed-sense reality of synesthetes?
"By gaining an understanding of how the synesthetic brain functions we may gain an understanding of important aspects of human perception, cognition and development," said Hubbard. "For example, as the infant brain grows into the adult brain, regions that were connected to each other at birth are slowly separated or pruned. In synesthetes, however, it seems that this pruning process does not occur to the same degree. Understanding synesthesia may help us to better understand how a baby brain becomes sculpted into the adult form that we all have."
Synesthesia may give us clues about how nurture and nature interact to lay down neural pathways, adds Ramachandran. And it provides a unique window into the mind.
"Synesthesia might tell us how the brain makes metaphors, which often take the form of cross-sensory associations - think "loud tie" or "sharp cheddar," Ramachandran said. "Processes similar to synesthesia may underlie our general capacity for metaphor and be critical to creativity.
"It is not an accident that the condition is eight times more common among artists than the general population," he said. "A quirky color/number synesthesia is not on the evolutionary agenda - but the ability for metaphor, a flair for connection, is. In fact, it's one of the hallmarks that makes us human."
The experiments were supported by grants from the National Institutes of Health.
Geoffrey M. Boynton and A. Cyrus Arman, both of the Salk Institute for Biological Studies, collaborated on the project and are coauthors of the paper.
Linux nproc Command Tutorial for Beginners (with Examples)
Every process that's executed on a computer system requires CPU time to do what it is expected to do. There may be times when your system's CPU is overloaded (due to the number or kind of processes running on the system), and for whatever reason, you want to know the number of available processing units for new processes. Well, there's a tool dubbed nproc that you can use to confirm this information.
In this tutorial, we will discuss the basics of nproc using some easy to understand examples. But before we do that, it's worth mentioning that all examples included in this article have been tested on Ubuntu 16.04 LTS.
Linux nproc command
The nproc command displays the number of available processing units. Following is the tool's syntax:
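nproc [OPTION]...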
And here's how the utility's man page defines it:
Print the number of processing units available to the current process, which may be less than the number of online processors
Following are some Q&A-styled examples that will give you a good idea on how the nproc command works.
Q1. How to use nproc?
This is very easy - all you have to do is to just run the 'nproc' command.
On my system, the tool produces the following output:
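$ nproc
4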
So the output produced is '4'.
It's worth mentioning that this number does not represent the number of physical CPUs. The output of nproc corresponds to the CPU(s) field in the output of the lscpu command.
And the CPU(s) value is itself nothing but:
Threads per core X cores per socket X sockets
So in our case that comes out to be 2x2x1, which is equal to 4.
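For instance, on the machine used in this tutorial, the relevant lines of lscpu output would look something like the following (the exact labels and spacing can vary slightly between lscpu versions):

$ lscpu | grep -E 'Thread|Core|Socket|^CPU\(s\)'
CPU(s):                4
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1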
Q2. How to make nproc print total installed processing units?
Instead of the number of available processing units, if you want nproc to display the total installed processing units, you can use the --all option.
For example, here's the option in action:
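$ nproc --all
4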
So on my system, the total number of installed processing units is 4.
Q3. How to make nproc exclude some processing units?
There exists a command line option --ignore which you can use to tell nproc to exclude, if possible, a set number of processing units.
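For example, on the 4-CPU system used here, asking the tool to set two processing units aside would look like this (note that --ignore expects a count):

$ nproc --ignore=2
2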
PS: In case you want to know more about the nproc command, you can use the --help and --version options.
Clearly, nproc is not the kind of tool an average Linux command line user would require on a day to day basis, but it's always good to know about such commands. However, if you are a system admin or someone whose work involves debugging Linux system related issues, the nproc command could be of great help. You can learn more about it by heading to its man page.
Students will learn about geometric shapes and color as they create a construction paper picture in the style of Piet Mondrian.
Objectives:
To define and create an abstract art work.
To introduce the artist Piet Mondrian.
Review geometric shapes (rectangles & squares), straight, angular lines, color
About Piet Mondrian:
Written by Andrea Mulder-Slater, KinderArt®
Piet Mondrian was a Dutch painter who was born in 1872 (that's over 100 years ago!). At one time, Mondrian painted realistic landscapes, but as he painted more and more, his style began to change. He started to create abstract images ... much like the Mondrian-style paintings you see here on this page. How did he come to paint this way? Well, the more Mondrian looked at trees, buildings and vases, the more he saw their basic shapes and colors. You can try this too ... just squint your eyes while you are looking at something and all the details will start to disappear. You will see only shapes and color ... no real objects. This is what Mondrian did.
Recommended Books/Products:
Teaching Art to Young Children ages 4-9
by Robert Barnes
When it comes to wintery lawn pests, henbit is one of the most commonly faced culprits. Henbit (Lamium amplexicaule) is an annual forb in Nebraska. It is a member of the mint family and is often confused with ground ivy. It is generally a problem in newly seeded turfs established in the fall. Henbit has a four-sided, square stem. The leaves are hairy, rounded, coarsely lobed, deeply veined, and are opposite. Toward the base of the stem, the leaves are long, petioled, and toward the top the leaves clasp the stem. The flowers are tubular or trumpet-shaped, pink or purple in color, and rise from the leaf axils. Henbit flowers in early spring.
Henbit thrives in places with shade and can grow up to 16 inches tall. Its root system is fibrous and it loves places where turf is thin. Henbit won’t harm your lawn and can actually help prevent erosion, but it can harbor spider mites. If you want grass to be the only thing in your lawn, there are options to get rid of it.
Organic Prevention and Treatment
Herbicides can be an option, but we always recommend adjusting your lawn practices before adding chemicals to it. Henbit loves sparse turf, so heavier turf presence (more grass there) can choke the weed out. Mulch is also a great way to keep it from taking root by preventing surface seeds from germinating while still conserving moisture. Spraying in the spring might make you feel better, but it can cause the plant to produce and drop more seeds. If the area isn’t too large, these weeds can be hand-pulled. Increasing the density and health of the lawn in the thin areas can help too. Improve the lawn either by overseeding or by changing cultural practices to promote grass growth.
Horticultural vinegar (which is different than what you use in your kitchen) can be used in a spray bottle for spot treatment in gardens or landscape beds. Hand weeding can also be effective, but can be tedious and not get all the plant roots.
If choosing a herbicide, the best option is to apply a pre-emergent before henbit has a chance to pop up. However, be careful doing this if you are planning on spreading new seeds. This should be done in the fall, before temperatures drop and seeds can germinate. If your henbit problem is new and you’ve got a lot of young, thriving weeds, consider a post-emergent broadleaf herbicide. The longer the plant is there, the less effective chemical treatment will be. The most common and effective mixture of herbicides is a three-way concoction of 2,4-D, dicamba, and mecoprop (MCPP).
If you don’t want to go the chemical route or perhaps have become at peace with the flowery henbit popping up, consider using it in meals. Henbit is a non-minty member of the mint family, and like dandelions and other weeds can be very useful in salads. It can also be mixed in recipes with curry and cinnamon for a spicy change or put on a cucumber sandwich. We don’t know much about the nutritional benefits of henbit but it has been used as a fever reducer, laxative, and stimulant in herbal remedies. Henbit is often confused for creeping Charlie or dead nettle but all three are edible, so have no fear. As always, if you are going to eat wild plants, be sure that they are free from pesticides.
Henbit: hate it, love it, leave it, eat it. The choice is yours.
Performance - Arousal
I created the below infographic for my session with U16 academy players. The inverted-U theory, otherwise known as the Yerkes-Dodson law, states that we can enter three states: under-arousal, an optimum level of arousal, and over-arousal. I am aware that this theory has received many criticisms for being too simplistic. However, it is a brilliant way for young players to start to develop their self-awareness and to start paying attention to how their body feels prior to a game or the thoughts they are having.
During the session, individual differences were highlighted as one of the main takeaway messages.
It was further explained with pictures that the curve can differ according to the 4 factors.
We practiced strategies such as imagery, and reframed negative self-talk into positive self-talk.
Yerkes, R.M., & Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit formation. Journal of Comparative Neurology and Psychology, 18, 459-482.
The Alabama Women Who Made Their Quilts a Part of Modern Art
As the Alabama River wends its way south and west, it meanders in a series of bends before emptying its muddy waters into Mobile Bay. Along the way, about 30 miles from Selma, one of those bends cuts deep into the land to form an isolated peninsula, which is filled by the hamlet of Gee's Bend.
Gee's Bend (now also known as Boykin) is home to generations of African-American families whose ancestors were brought to the area as slaves, back when the South was covered in plantations. The story of the people of Gee's Bend is, therefore, similar to many stories in the South: one marked by inequality, institutionalized racism, and poverty. But the history of Gee's Bend is also a story of community and creativity, the results of which stand as high-water marks in American art.
Quilts are the artistic treasures of Gee's Bend. Benders, as locals are called, have been stitching these exquisite textiles since the early 1900s, or perhaps even earlier (some date the tradition back to Joseph Gee's cotton plantation in the early 19th century). Eventually, as interest in the artworks spread, the quilts left the bend and traveled the country, becoming recognized as striking works of modern art featured in museums and galleries from Houston and New York to the Smithsonian in Washington, D.C.
Initially, of course, the quilters of Gee's Bend—primarily African-American women—were not aiming for museum walls or international acclaim. Quilts were essential to daily life. In winter months, they were used to fight off the bitter cold in bed or to cover wood-slatted walls, thus keeping blustery drafts at bay. They were likewise spread out on the floor, where updrafts seeped in through creaky floorboards.
But the quilts of Gee's Bend aren't like typical quilts. Their distinct designs have a lot to do with the past and present of the place in which they were made, as if history seeps into the fabric.
The settlement inauspiciously came into existence in 1816, when Joseph Gee made the trek from North Carolina to take over the land, slaves in tow. His white nephews inherited it, increased the slave holdings, then sold the people and land to another relative, Mark Pettway, who brought more slaves and built a grand plantation house on the property.
After the Civil War and emancipation, the freed slaves of Gee's Bend became sharecroppers (many kept the Pettway name). But with the economy in disarray and no local infrastructure, the area fell into poverty. Photographs from 1937—many taken by Arthur Rothstein, who was dispatched to the area by the New Deal federal government—show fallow fields and a smattering of ramshackle cabins.
Yet the photographs also show a tight-knit community, which is reflected in its quilting tradition. The list of surnames of Gee's Bend quilters reads like a family reunion—Pettway, Bendolph, Kennedy, Bennett—sometimes four generations deep. Mothers taught their daughters and granddaughters to gather fabric scraps sourced from sackcloth, old shirts, or pant legs, and patchwork pieces together. But there was always an emphasis on individuality—a sense of improvisation and originality known as "my way" quilting. Nevertheless, after decades of quilting, it's clear the thousands of wildly different quilts still fit into a family.
As with some other arts, however, quilt-making has traditionally struggled for recognition as a fine art. Like weaving and embroidery, quilting is often seen as merely a craft, or "women's work," as opposed to painting and sculpting, which were traditionally considered more manly, high-art forms. In terms of art world acceptance, the women of Gee's Bend had an added disadvantage: their blackness.
Calling these quilts "outsider art"—with their imperfections, improvisations, and untrained creators—only serves to shine a light on the walls the art world builds around itself and, by extension, the people and traditions that are being excluded.
Yet on visuals alone, the quilts of Gee's Bend feel right at home next to great works of modern art. Their colors, designs, and entrancing emotive qualities reflect ideals of modern abstract art movements. There are undeniable flashes of Frank Stella, Paul Klee, and Piet Mondrian, and hard-edge painters such as Ad Reinhardt. In fact, many Gee's Bend quilts predate like-minded works by their more famous abstract art cousins.
The quilter Gloria Hoppins, for instance, seems to have similar geometric fascinations as Josef Albers, while the mystique and calm depths of Pearlie Pettway Hall call to mind the meditative mind of Agnes Martin. These comparisons, however, tend to overshadow the quilts' less-heralded links to West African art and weaving traditions.
Recently, art forms traditionally associated with craft—textile arts in particular—have enjoyed greater exposure and popularity in the art world, a development for which the quilts of Gee's Bend also played a large role. In 2002, the Museum of Fine Arts, Houston, in collaboration with the nonprofit Tinwood Alliance, presented a seminal exhibition of 70 Gee's Bend quilts. The show became a national hit, traveling to 11 other cities and launching an explosion of interest in the quilts, the artists, and their community.
The exhibition was the brainchild of Bill Arnett, a white art dealer from Georgia, who, through his Tinwood Alliance, took a liking to what he called "black vernacular art." Soon, Gee's Bend quilts that had sold for a few bucks (if they sold at all) were going for tens of thousands of dollars. Jane Fonda gave Tinwood a million dollars in support, and Kathy Ireland licensed the quilt designs for a home goods line of furniture, lamps, and bric-a-brac.
Gee's Bend hasn't changed that much, though. Money—some of which got entangled in legal disputes with the Arnett family and other outsiders—found its way to many of the quilters, but over 50 percent of the community's population still lives below the poverty line. Women in Gee's Bend still quilt, though, and their works continue to tell an American story that's still being sewn.
Photo CVD (light-assisted chemical vapor deposition), like PECVD, can deposit thin films at low temperature and produce semiconductor elements such as a-Si. Two film-quality considerations motivate it:
(1) thermal CVD (HCVD) requires high temperatures, and PECVD introduces various defects into the deposited components (such as the damage caused by the impact of charged particles in PECVD);
(2) it is difficult to produce components whose impurity profiles must not be disturbed by later high-temperature processing steps, so a way to coat thin films at low temperature is desirable. Photo CVD is one way to solve this problem. When a film is grown by thermal decomposition, heating sets the molecules moving as a whole and excites all of their internal degrees of freedom, including those that are not needed for decomposition. Photo CVD, by contrast, directly activates only the internal degrees of freedom involved in decomposition and provides an activator that promotes the decomposition reaction. It is therefore expected that low-damage films can be made at low temperature, and that fine lines can be written or etched directly by focusing and scanning the light.
Short Books on Great Men
- Jesus by Humphrey Carpenter
Oxford, 102 pp, June 1980, ISBN 0 19 283016 3
- Aquinas by Anthony Kenny
Oxford, 86 pp, June 1980, ISBN 0 19 287500 0
- Pascal by Alban Krailsheimer
Oxford, 84 pp, June 1980, ISBN 0 19 287512 4
- Hume by A.J. Ayer
Oxford, 102 pp, June 1980, ISBN 0 19 287528 0
- Marx by Peter Singer
Oxford, 82 pp, June 1980, ISBN 0 19 287510 8
To be truly a Master is to have authority. To claim to be a Master is to claim to possess authority. We can be confident that more persons claim to have authority than do truly have it. What is less easy to determine is who in fact does possess it. The place of authority in human life is both centrally important and irretrievably contentious. The personnel of the ‘Modern Masters’ series may simply map the credal disorder of our days, the fitful intellectual allegiances of a society of masterless persons. Past Masters, however, are, or at any rate ought to be, figures of historically proven authority. It is easiest to see historically proven authority as essentially the authority of continuing traditions. One question, therefore, which Keith Thomas’s series must confront at the start is simply whether for us as moderns any continuing traditions do (or even could) retain their authority. (An entire school of sociologists, for example, seeks to define modernity as a categorical denial of authority to tradition in its entirety.) What, then, is authority? And more particularly, how far is it genuinely open to us to think of authority as something which can be incarnated, realised in the historical persons of individual human beings?
A major difficulty in seeing how to answer this question is an ambiguity, within the concept of authority itself, between the idea of social efficacy and the idea of moral or cognitive validity. By vulgarly quantitative criteria of social efficacy, two of the five figures here in question are decidedly more magisterial than the others. However many of their followers’ performances Christ or Marx would have regarded with enthusiasm, they have clearly mustered an amazing retinue of followers. Social efficacy, of course, is not necessarily a sound criterion of ethical or cognitive merit: but at least the procedures for identifying it are appreciably less controversial. Sociologically considered, Christ, Mahomet and Marx are perhaps still the three leading past masters of our day. The inclusion of two of them in Dr Thomas’s first batch suggests a very natural expectation that, in this dimension at least, mastery can be firmly linked to effective demand in the market.
Social efficacy is simply a fact, a datum of history. But the very idea of historically proven authority perhaps implies an unacceptable conflation of credence with validity. Since the 17th century, the view that history can prove validity has become extremely hard to defend. And if what history proves is not validity but endurance, it is not clear that mastery is a very decorous term to employ for its identification. The classically anarchic slogan, ‘Ni Dieu, Ni Maître’, would not have pleased Immanuel Kant himself, but it does state a natural extension of his moral ideals. If moral and intellectual autonomy ought properly to be the standard for human existence as a whole, the view that authority for human beings can be fitly incarnated in a master seems unenticing. To an anarchic disposition, then, such a series must necessarily be a mild offence, though the offence is liable to be sharpest in the case of more modern masters. From one point of view, the idea of very short books on very great men is a publisher’s dream. But there are numerous other points of view from which it could readily prove an intellectual nightmare. Writing a very short book about any very great man is unlikely to be easy. But the difficulty is certain to be augmented where the brief for the book is not merely to tell the story of a human life but also to interpret the authority which that life discloses, and to make clear how far this authority was simply a matter of social efficacy and how far it was truly one of epistemic or ethical validity.
A suitable criterion for inclusion in such a series might perhaps be that it should matter deeply to us that these persons should have thought or acted (lived their lives) as they did, and an appropriate criterion for success in its individual texts might then be that they should tell us why it does still matter deeply. This sounds less pretentious than an attempt to fathom the nature of authority: but it may well simply be less clear. In any case, it is plain that to get crisply together a presentation of the authority which the master’s life discloses, and an account of the human life in which this authority was incarnated, will be a hard task. It could scarcely even be attempted by anyone who had not already come to a firm decision whether the authority in question did in fact reside simply in the social effects of the actions of his or her subject, or whether it rested, rather, in the cognitive or moral standing of their thoughts. It will also scarcely be attempted with much success in miniature unless there is a relatively vivid and transparent relation between the life in question and the nature of the authority to which it gave birth.
Continuing analysis of an HIV vaccine trial undertaken in Thailand is yielding additional information about how immune responses were triggered and why the vaccine did not protect more people.
In a study appearing May 6, 2013, in the journal Proceedings of the National Academy of Sciences, an international team of researchers led by the Duke Human Vaccine Institute describe a previously unknown interaction between antibodies that worked to block the vaccine's protective powers.
The vaccine trial, known as RV144, used two investigational vaccines in combination, resulting in an unprecedented 31 percent protection rate among participants. While encouraging, that rate fell short of the minimum needed for public health use. However, additional analyses of the trial's data are yielding a trove of information about the virus and its potential vulnerabilities.
Last year, Duke researchers published a study in the New England Journal of Medicine that detailed clues to why the vaccine tested in the RV144 trial protected some volunteers.
In the current analysis, study authors, led by Georgia D. Tomaras, PhD, director of the Laboratory of Immune Responses and Virology at DHVI, explored the inverse relationship that helps explain why the vaccine may have failed to protect more of the participants.
"We learned that a specific vaccine-induced immunoglobulin A can weaken the protective effect of immunoglobulin G. IgA competes with IgG to bind to the same site on the virus's outer envelope that is exposed on infected cells," Tomaras said. "In work with my colleague here at Duke, Dr. Guido Ferrari, we found that the IgA antibodies can block the activity of natural killer cells activated by IgG, further interfering with the vaccine-induced immune response."
Tomaras added that decreased vaccine effect was higher among participants who had more specific immunoglobulin A evident in blood samples compared to immunoglobulin G, suggesting that the ratio of virus-specific IgA to IgG in blood may be an important marker for vaccine effectiveness.
"Understanding that certain vaccine-induced immunoglobulin A antibodies in the blood may interfere with an antiviral function of another antibody is a new finding that can lead to further vaccine development on how to induce effective antibody responses," Tomaras said.
More information: HIV-1 vaccine-induced envelope gp120 C1 region IgA blocks binding and effector function of gp120 IgG, www.pnas.org/cgi/doi/10.1073/pnas.1301456110
Volume: 32 Issue: 11 (Nov. 2009)
By: Karen Moltenbrey
Video Game Violence: How Much is Too Much?
Violent content in video games: It is a hot-button topic where fact and opinion are at odds. Many gaming groups, including the Entertainment Software Association (ESA), contend that “facts, common sense, and numerous studies all debunk the myth that there is a link between computer and video games and violence.” Tell that to the lobbyist groups that maintain such a correlation exists.
I have been covering computer gaming for more than a decade. And while my interest in the industry is pretty much limited to the computer graphics in the titles, not the gameplay, it is nearly impossible to avoid getting pulled into the violence debate. In our June issue, we ran a story about the creation of MadWorld, a title that has a unique graphic-novel look (black and white “ink” drawings with splashes of red). While beautiful, the game focused on murder and mayhem, and plenty of it. The game sparked debate, particularly in Germany and the UK. It also hit closer to home: A reader was none too pleased, particularly with the guest editorial by the story’s writer, who, as a CG artist himself, has strong views against content censorship and the ill effects such a move could have on the industry.
This issue has reached a tipping point in Venezuela recently, where a law is pending that would prohibit violent video games and toys (see the blog “It’s the Law” on www.cgw.com). The legislation aims to curb the out-of-control street violence in a country with an escalating murder rate. Supporters say that in Caracas, youngsters—who play violent video games at Internet cafes—are easily transitioning from virtual violence to real violence. My question is this: What would they do instead? Somehow, I do not think they will be playing board games or will be content playing E-rated (Everyone) computer games. Such legislation, in my opinion, is likely to have the opposite effect of the intended outcome. My opinion, not fact.
My view on violence in games is this: If you don’t want to look at it, don’t. Games, like movies, are rated. So, simply do not buy it. But, don’t stop others from doing so, unless it is your own family. The problem is that many parents do not check the game rating, or simply ignore it. I recall an incident at a toy store during the holidays, whereby a mother requested a certain game that was notorious for its controversial violence. The clerk, in his early 20s, asked the parent who the game was for, to which she responded, “My 14-year-old son.” The clerk explained that the game was for adults and actually tried to dissuade her, but in the end she said, “Well, it is on his list, so I’ll take it.” I was aghast. She never even checked out the box. I realize that some parents are careful and vigilant. Those are the same people who check movie ratings and make informative decisions about what their kids watch. Others don’t bother.
Will I purchase certain games for my almost-teen son? No. Will he play some of them at a friend’s house, where the rules are more relaxed? Probably. But it is up to me to monitor that situation. Come to think of it, isn’t this the same issue our parents had years ago with certain movies and television programs?
ESA states on its Web site, “Blaming video games for violence in the real world is no more productive than blaming the news media for bringing crimes of violence into our homes night after night.” Good point. Those on the opposite side have said that people become desensitized to violence after repeated play. Good point, too. No doubt, this debate will continue on for years, as there is no simple solution.
What’s your opinion on violence in games? Share it in a blog on www.cgw.com.
For an internal combustion engine, a large quantity of fuel energy (accounting for approximately 30% of the total combustion energy) is expelled through the exhaust without being converted into useful work. Various technologies including turbo-compounding and the pressurized Brayton bottoming cycle have been developed to recover the exhaust heat and thus reduce the fuel consumption and CO2 emission. However, the application of these approaches in small automotive power plants has been relatively less explored because of the inherent difficulties, such as the detrimental backpressure and higher complexity imposed by the additional devices. Therefore, research has been conducted, in which modifications were made to the traditional arrangement aiming to minimize the weaknesses. The turbocharger of the baseline series turbo-compounding was eliminated from the system so that the power turbine became the only heat recovery device on the exhaust side of the engine, and operated at a higher expansion ratio. The compressor was separated from the turbine shaft and mechanically connected to the engine via CVT. According to the results, the backpressure of the novel system is significantly reduced compared with the series turbo-compounding model. The power output at lower engine speed was also improved. For the pressurized Brayton bottoming cycle, rather than transferring the thermal energy from the exhaust to the working fluid, the exhaust gas was directly utilized as the working medium and was simply cooled by ambient coolant before the compressor. This arrangement, which is known as the inverted Brayton cycle, was simpler to implement. Besides, it allowed the exhaust gases to be expanded below the ambient pressure. Thereby, the primary cycle was less compromised by the bottoming cycle. The potential of recovering energy from the exhaust was increased as well. This paper analysed and optimized the parameters (including CVT ratio, turbine and compressor speed, and the inlet pressure to the bottoming cycle) that are sensitive to the performance of the small vehicle engine equipped with the inverted Brayton cycle and the novel turbo-compounding system respectively. The performance evaluation was given in terms of brake power output and specific fuel consumption. Two working conditions, full and partial load (10 and 2 bar BMEP), were investigated. Evaluation of the transient performance was also carried out. Simulated results of these two designs were compared with each other as well as with the performance of the corresponding baseline models. The system models in this paper were built in GT-Power, which is a one-dimensional (1-D) engine simulation code. All the waste heat recovery systems were combined with a 2.0 litre gasoline engine.
"There is nothing in a caterpillar that tells you it's going to be a butterfly." ~ R. Buckminster Fuller
Habitat-Landscapes will work with your school to create habitat gardens that can be used as outdoor learning classrooms and beautiful gathering places for your school community. Research shows that children are more engaged in learning and truly benefit from hands-on outdoor instruction. The children tend to be happier and healthier and establish good lifelong habits. The gardens reduce stress levels, promote team building and problem solving, and can increase children's self-esteem. Children become interested in improving the environment around them.
School gardens have also proven to increase the effectiveness of teachers. It is as beneficial for the teachers as it is for the children to be able to experience nature and share the wonder with their students.
Presenters often go to presenters to learn how to improve the presentations that they present (so much presenting!). Presenters don’t often go to an Instructional Designer (who deals with more eLearning than presenting) to learn how to take their presentations to the next level. Instructional Designers have differing perspectives than presenters on what is needed to give a good presentation, thus creating a disconnect in the interactions between Presenters and Instructional Designers. But what you didn’t probably know is that Instructional Designers have a lot to bring to the table when it comes to your presentations. In order to understand how they can help you, you need to have a basic idea of what they do.
Instructional Designers (IDs), in a nutshell, take content and create a “pedagogically sound format” ready for classroom and online situations. In other words, they focus on the audience’s involvement/attention to the presentation and the retention of information via graphics, layout, animations, colors, etc. As a presenter (whether you are primarily a web presenter or not), those two points should always be close to the top on your priority list. Here are seven tips straight from an amazing Instructional Designer on how to improve your next presentation; he goes by the name of Adam Cannon.
1. Think SportsCenter:
When you are giving a presentation, you shouldn’t give the full picture. And much like SportsCenter, you need to give only the high-level, high-profile items that will capture the attention of your audience. “It is like you are in front of the classroom writing on a whiteboard or a blackboard. You are not going to write out everything you are going to say before you say it.” Make sure you keep it simple, fresh, and easy to digest.
2. Mind The Flow:
Make sure the content makes sense in the way that it is presented. Don’t use an acronym before you have introduced the separate parts of it. “Know your audience.” Every audience will learn differently. If you are a programmer, the flow of how you learn content will be completely different to how a graphic designer will learn content.
3. Purpose, Purpose, Purpose:
When you build a presentation, you have the freedom to do whatever you want. In a rebuttal to that idea, Adam quoted Jurassic Park, saying, “Just because you can, doesn’t mean you should.” Every text, picture, graphic, and even animation needs to have a purpose. If it doesn’t serve a specific purpose, it is considered fluff and needs to be removed. “Have a reason behind it. It doesn’t have to be any more than a ‘Hey!’ to grab their attention or wake them up. There still should be a reason.”
4. White is Alright:
“White space or blank space is okay in a presentation. Actually, it is preferred.” You always want to have the learner’s view in mind. You don’t want to overwhelm them with content, even if it is relevant to the presentation and the growth of the audience. “You don’t have to left justify everything. Be creative! Use the space on the top, the right, and the bottom to change things up.” It is possible to be simple and creative. It is all in how you use what space is given to you.
5. Fonts and Colors:
“What looks good up close to you on a computer monitor might not look good from the back of a room.” The type of font that you choose in addition to the background color or format can create a real problem in what is comprehended. If it is a struggle to understand what is being said on screen, it doesn’t matter how important it is, you will lose the interest of your audience. “Never go below a 30-point font if you are presenting in front of a group of people.” When you are presenting in front of an audience you are presenting as much to the person up front and as to the person at the very back. Treat everybody equally and with the same respect; so use the right font and font size.
6. Bullet Rule of 6:
“No more than 6 bullets and no more than 6 words per bullet.” Again going back to the first point that this is a highlight rule, you don’t want to watch 10 minutes of a game in order to see the best play. The same is true with the content you display in conjunction with the content that is given by the presenter; keep it short and sweet.
7. Slide Count Doesn’t Count:
“A lot of presenters get caught up and think that if they have more than 20 slides the audience is going to shut off.” This simply isn’t true, unless you have lots of content to run through on each and every slide. The first remedy to slide count is to not show it. The audience never has to see how many slides you have. Number two is that you can use multiple slides for a single topic or bullet point list. Meaning each slide can have one point of the 6 points in your list. But because the slides look the exact same it will just look like an object animating in instead of a slide transition. This allows you to create complex moving objects with pictures and text fading in and out without cramming 100 animations onto one slide, thus creating presentations that are easier to edit. And lastly, don’t shove all your content onto one slide. This will give you a longer slide count but you will get through those slides more quickly than normal.
Instructional Designers create awesome content for a living, and presenters develop awesome content for a living. The two can and do go hand in hand. There are so many things that cross over from the eLearning world to the Presenter world, and these seven instructional design tips prove it. Let me know in the comment section below how they have worked in your latest presentation!
Grace Hopper Explains Nanoseconds to Letterman
Rear Admiral Grace Hopper is famous both as a computer pioneer and for, at the time of her retirement (at age 79), being the nation's oldest active military officer. Hopper worked on early computers, and is widely credited with popularizing the term "computer bug" after she found a moth stuck inside a relay in Harvard's Mark II computer in 1947. (Thus "debugging" became the term for fixing computer problems....) You can see the first computer bug (they kept the moth!) at the Smithsonian, in the American History museum.
In this 1986 interview with David Letterman, Grace Hopper displays her grace and wit, and explains the concept of a nanosecond, using Bell System telephone wire cut into 30cm lengths -- 30cm is the maximum distance light can travel in a billionth of a second. Here's a representative quote:
"When an admiral asks you why it takes so damn long to send a message via satellite, you point out to him that between here and the satellite there are a very large number of nanoseconds. [Waves the wire at the sky.]"
She also gets into picoseconds! Enjoy:
You can read more about Hopper from Wikipedia.
Sneem Black Pudding facts for kids
Produced by local butchers Peter O'Sullivan and Kieran Burns, it is described as "traditional blood pudding, uncased and tray-baked. It has a deep red-brown colour and is free from artificial colours, flavours, bulking agents and preservatives." It is sold in squares rather than rings, and the ingredients are beef suet, onions, oat flakes, spices and blood (from pigs, cattle and lambs of South Kerry).
It is claimed that home blood pudding production in the region dates back to the early 19th century, traditionally produced by women; the current recipe dates to the 1950s. In 2019, Sneem Black Pudding received Protected Geographical Indication (PGI) status.
Sneem Black Pudding Facts for Kids. Kiddle Encyclopedia.
Principal Investigators: Stephanie J. Nawyn & Stephen Gasteyer (Michigan State University)
To provide services to refugees safely during the COVID-19 pandemic, NGOs have instituted safety protocols to mitigate the risk of spreading infection in crisis settings. This study aimed to better understand how these protocols were being followed on the ground and examine barriers to adherence.
Humanitarian NGOs face significant challenges to limiting infection spread while assisting refugees. Using data from on-the-ground service provision to refugees in thirteen locations in Lebanon, Jordan, and Turkey, the overall goal of the study was to determine what interventions could be implemented that would mitigate barriers to practices that aimed to slow the spread of COVID-19 among refugee populations.
In accordance with the study’s original aims, the study focused on social distancing, mask wearing, and hand hygiene, measuring how well those protocols were followed during different types of services and with different refugee populations. Barriers such as lack of physical space, lack of knowledge about COVID-19, limitations of the services, and attitudes about COVID-19 were measured.
Often the key to understanding how refugee assistance can be improved is to understand what barriers service providers face in everyday work, and what innovations they develop in response. By comparing a range of contexts, we hope to produce recommendations for sustainable best practices that come from the ground up.
After conducting 1,454 interviews with staff and 215 unique observations of service provision at four partner NGOs assisting refugees in Lebanon, Turkey, and Jordan, the study reached several key findings.
One of the more concerning findings of this study was the prevalence of COVID skepticism among refugees. Humanitarian service providers will need to consider how COVID skepticism might affect not just their refugee clients’ adherence to safety protocols, but also their willingness to be vaccinated in the future.
The findings also suggest that local cultures emerge around COVID-skepticism and adherence to different safety protocols. Humanitarian NGOs need to consider the specifics of each site as a local culture that might be quite different from others. They should not assume that protocols are followed in the same way across all of their service centers, or that because one protocol is followed well that others are too.
The research team are preparing to distribute the recommendations from their study through a written final report and a series of five webinars offered in English and Arabic.
Please check this study webpage for the latest updates, outputs from the study, and contact information.
A report, summarising the key findings of the research, was published.
In this webinar, the study team present findings from humanitarian NGOs assisting refugees in Lebanon, Jordan, and Turkey to identify where there are gaps in practices intended to reduce the spread of SARS-CoV-2, and what barriers exist to more safely administering humanitarian aid.
The study team created a website for those wishing to find out more about the project, the team, and keep up to date with the latest progress.
The Sacrifice Theory and the Oedipus Complex as the origin for routine circumcision
Both sacrifice and the castration complex (or the Oedipus complex) are still widely believed to be the origins of routine circumcision, and they are used as arguments for discontinuing this practice in the modern world. While there are many sensible reasons against routine circumcision, the theories on sacrifice and castration lead to even more irrational thinking on the subject and thus need to be clearly negated.
TO TEACH OEDIPUS A LESSON
The Oedipus complex describes how the boy child covets his mother; however, he is in conflict because he fears his father's retribution: cutting off his penis. The theory is that when the child grows to a man he re-senses these feelings, and resolves the eternal conflict symbolically by circumcision of his male offspring.
Is it psychologists or our barbaric forefathers who are incapable of even the simplest steps of awareness and intelligence as regards male anatomy? The operation exposes the glans, forming the flaccid penis in a way which emulates the erect state and thus indicates a readiness for sexual intercourse (Hastings, Bryk, Ploss). It is inconceivable that such a measure would be used to diminish or punish any element of sexual competition.
Bryk discusses the Oedipus theory in depth.
THE SACRIFICE THEORY
Though routine circumcision may be seen as a great ignorance from the viewpoint of modern medical attitudes, the idea that it was introduced for sacrificial purposes by mutilating young boys is a ridiculous one.
Astoundingly, as recently as the Sept/Oct 1994 edition of the British Journal of Sexual Medicine, J. P. Warren, FRCP, physician, and J. Bigelow, PhD, psychologist, argue for:
"The sacrificial origin of circumcision"
"The origins of circumcision are lost in antiquity... No doubt
human sacrifice was widespread, and it seems likely that substitutes
for this practice included the sacrifice of domestic animals and mutilations
of the human body, of which circumcision is just one example...
... An important aspect of sacrifice is the shedding of blood, and
circumcision is a notoriously bloody operation,...
Another aspect of sacrifice is that the object which is forfeited
should be valuable. The greater the value of the object sacrificed,
the more worthy the sacrifice. This should make us wonder what are
the value and function of the prepuce. If it were just a useless
flap of skin, it would not be much of a sacrifice,... This makes
it an ideal sacrificial object, as the circumcised male is able to
function normally in society and to procreate, but suffers permanent
impairment of sexual enjoyment and bears a visible, life-long reminder
of his sacrifice." (27)
THE SACRIFICE THEORY NEGATED
Circumcision was not a substitute for human sacrifice, because routine circumcision developed prior to ritual sacrifice.
The Encyclopedia Britannica says: "Blood sacrifice is linked... with the cultures... of the cultivators" (37).
Practices involving blood sacrifice developed among the cultivating peoples because they had an understanding of fertility. They believed that by sacrificing they were renewing life.
As any origins connected with fertility have been conclusively rebutted by anthropological sources since the 1930s, and as the thought associations from fertility to sacrifice are one step more abstract, sacrifice could not have been among the original motives for the introduction of the routine practice.
So much for the facts, as I understand them - now I wish to speculate.
I believe the subject deserves greater clarity, because so often mutilation and sacrifice are brought into the modern argument against routine infant circumcision, and while I agree with the intention, this line of reasoning confuses common sense and is thus counterproductive.
(It is also an interesting subject; if anyone wishes to discuss the following, please write.)
Firstly, for the peoples who performed it, sacrifice was treated with great respect and as the highest expression of their cultural thoughts and perspectives. On the other hand, most mutilations (as we would define them) were considered by the folks who performed them as enhancing the beauty and/or social acceptability of the individual.
When those motivated by the modern medical disgrace of routine infant circumcision compare RIC with sacrifice as a mutilation, I believe they are associating it with the prisoners who were sacrificed by conquering tribes, and in this sense we could talk of sacrificing someone by mutilating him - but then one could imagine removing the penis; to remove the foreskin and encourage a state which many men feel is preferable would be fully illogical.
Sacrifice could have involved his fingers, his ears, his penis - but why his foreskin? Which form of self-denial could motivate an operation which forms the penis to emulate the erect member, a state which many men choose and are happy with? The element of having lost something appears to be evident only subjectively among a few people - the distinguishing lines are so unclear that I doubt any God would fulfill any wish in exchange for such a token. It appears inconceivable that natural peoples would introduce such an indistinct measure to demonstrate or induce anything at all.
Let us reconsider: sacrifice as a form of self-denial - as penance (out of guilt), or used to be pleasing to gods and ancestors, to impress lovers, etc. - must exist as a very basic motivation. We can date a concept of the supernatural prior to fertility: that aborigines understand procreation in terms of a spirit child is evidence of this. Could one of the natural peoples have circumcised himself as a form of self-sacrifice? I see no reason why not! I see removing painful foreskins as a far more obvious and dynamic urge - but yes, I would agree that circumcision could have been first performed for sacrificial reasons, though this refers only to that first operation. As above, with the results being more pleasing than showing a mark of any loss, the sacrificial thought would lose its substance and would motivate no similar operation on kith and kin.
Purely out of interest: are there any other sacrificial practices which were practiced routinely on every member of a tribe? (Apart from taxation :-)
Are cultural sacrificial practices the projection of self-denial?
Bryk discusses Sacrifice in far greater depth.
Questions and answers on thermodynamics, numbers 11 to 20.
11. What is the name given to the ratio of actual cycle efficiency to ideal cycle efficiency?
It is called the relative efficiency (also known as the efficiency ratio).
12. For the same compression ratio, is the Otto cycle or the Diesel cycle more efficient?
The Otto cycle is more efficient than the Diesel cycle.
13. If the cut-off ratio of the Diesel cycle is increased, what happens to its efficiency?
Its efficiency decreases.
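A brief worked note on answers 12 and 13 (the formulas below are the standard air-standard results, added here for illustration rather than taken from the original list; r is the compression ratio, ρ the cut-off ratio and γ = Cp/Cv):
\eta_{\text{Otto}} = 1 - \frac{1}{r^{\gamma-1}}, \qquad \eta_{\text{Diesel}} = 1 - \frac{1}{r^{\gamma-1}} \cdot \frac{\rho^{\gamma}-1}{\gamma(\rho-1)}
Because the factor (ρ^γ - 1)/(γ(ρ - 1)) exceeds 1 whenever ρ > 1 and grows with ρ, the Diesel cycle is less efficient than the Otto cycle at the same compression ratio, and its efficiency falls further as the cut-off ratio increases - exactly answers 12 and 13.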
14. How do the net work to drive a compressor and its volumetric efficiency behave with an increase in clearance volume?
The net work remains unaltered and the volumetric efficiency decreases.
15. Why is the axial flow compressor preferred in aircraft gas turbines?
It has a low frontal area.
16. How does the ratio Cp/Cv behave with an increase in temperature?
It decreases, since both specific heats rise with temperature while their difference stays essentially constant.
17. Which machine would produce continuous work without receiving any energy from another system or the surroundings?
A perpetual motion machine of the first kind (PMM1); such a machine is impossible because it violates the first law of thermodynamics.
18. What are the similarities between heat and work?
Both heat and work are transient phenomena, i.e. they cross the boundaries of the system whenever the system undergoes a change of state.
Both heat and work are observed at the boundaries of the system.
Both are path functions and are inexact differentials.
19. Under what conditions does ∫ p dV represent the work?
∫ p dV represents work when the system is closed and the process is a non-flow process; the process is quasi-static; and the boundary of the system moves so that work may be transferred.
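A quick illustration (our own example, not part of the original answer): for the quasi-static isothermal expansion of an ideal gas in a closed system,
W_{1-2} = \int_{V_1}^{V_2} p \, dV = \int_{V_1}^{V_2} \frac{nRT}{V} \, dV = nRT \ln\frac{V_2}{V_1}
so one mole at 300 K that doubles its volume does W = 1 × 8.314 × 300 × ln 2 ≈ 1.7 kJ of work.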
20. What is the definition of 1 Kelvin as per the internationally accepted temperature scale?
1 Kelvin is 1/273.16 of the thermodynamic temperature of the triple point of water.
World Literature Today's Notable Translation 2013
This collection introduces the work of Japan’s foremost Marxist writer, Kobayashi Takiji (1903–1933), to an English-speaking audience, providing access to a vibrant, dramatic, politically engaged side of Japanese literature that is seldom seen outside Japan. The volume presents a new translation of Takiji’s fiercely anticapitalist Kani kōsen—a classic that became a runaway bestseller in Japan in 2008, nearly eight decades after its 1929 publication. It also offers the first-ever translations of Yasuko and Life of a Party Member, two outstanding works that unforgettably explore both the costs and fulfillments of revolutionary activism for men and women. The book features a comprehensive introduction by Komori Yōichi, a prominent Takiji scholar and professor of Japanese literature at Tokyo University.
"From time to time, as Japan has faced various postwar crises, popular interest in proletarian literature has revived. Now English readers have a chance to experience, in a lively translation, three representative works by Kobayashi, two of them translated for the first time. . . . Kobayashi was a very talented writer, not just an ideologue, and showed a flair for striking imagery . . . . It is the vividness of these images that makes the read a pleasure, despite the painfulness of many of the things described." —Japan Times
"The title piece of this volume is a timely and long awaited new translation of Kobayashi Takiji's 1929 novel [The Cannery Boat], the most influential literary work to emerge out of the so-called Proletarian Cultural Movement that was in operation for about a decade around 1930 ... Cipris has made a magnificent fist of bringing Kobayashi's words to life on the page for a modern readership." —Japan Studies (34:1, 2014)
“A miracle happened in the world of Japanese letters in 2008: an eighty-year-old masterwork of Japanese proletarian literature appeared on best-seller lists. Embraced and reviled in its own day, dismissed and forgotten once revolution was declared both impossible and unnecessary, Kobayashi Takiji’s The Crab Cannery Ship, reborn here in Željko Cipriš’s fresh translation, stirred in Japanese a forgotten hunger for a literature that answers to bleak times with an incandescent anger and life-giving solidarity. This volume, which includes two novels never before translated, Yasuko and the Life of a Party Member, gives us a trio of works that speak to readers with prescient urgency.” —Norma Field, Robert S. Ingersoll Distinguished Service Professor Emerita of Japanese Studies, University of Chicago
“The long recession that hit Japan after the ‘bubble economy’ burst in the early 1990s brought falling incomes and widespread underemployment. It also sparked a major revival of interest among the nation's readers in this landmark work of Japan’s proletarian literature, selling briskly in a new edition. While other translators concentrate on recent pop-star writers, Željko Cipriš joins a growing number of scholars bringing to an English-language readership pioneering works of Japanese literature written from the perspectives of women, blue collar workers, and ethnic minorities. The publication in English of Kobayashi Takiji’s fiction, skillfully translated here, is particularly timely as people in many countries with troubled economies struggle with lost jobs and burgeoning debts.” —Steve Rabson, Professor Emeritus of East Asian Studies, Brown University
“Kobayashi Takiji is hands down the most important proletarian author to have emerged from Japan, and this volume of translations provides an excellent introduction to a vibrant, dramatic, politically engaged side of Japanese literature that is seldom seen. The stories spring to life in fresh, idiomatic translations that bring Takiji’s heart-pumping originals to life.” —Jeffrey Angles, Western Michigan University
Author: Kobayashi Takiji; Translator: Željko Cipriš
Željko Cipriš is associate professor of Asian studies and Japanese at the University of the Pacific, Stockton, California.
Economic Integration, Poverty and Regional Inequality in Brazil
Gains and losses from trade liberalization are often unevenly distributed inside a country. For example, if budget shares vary according to household income, changes in commodity prices will redistribute an overall welfare change between household types. Household incomes will also be differentially affected. Sectoral differences in factor-intensity mean that changes in industrial structure cause redistribution of income between primary factors. Particular primary factors (such as capital, or less skilled labour) may contribute disproportionately to the incomes of certain household types. The fortunes of such households indirectly depend on the prospects of particular sectors. We emphasize these distributive issues, especially those arising from the income side. At the same time we distinguish households by regions (within the country). The regional distinction sharpens the contrast between groups of households. Particular regions have their own patterns of economic activity and so are differently affected by changes in the industrial protection structure. Since regional household incomes depend closely on value-added from local industries, economic change will tend to redistribute income between regional households. If the regional concentration of poverty is more than we could predict by regional primary factor endowments and industry structure, the addition of a regional dimension will add power to our analysis of income distribution beyond the mere addition of interesting regional detail.
The paper deals with these issues more fully. We extend previous regional modeling of Brazil to include the intra-household dimension, addressing poverty and income distribution issues that may be caused by trade integration. An applied general equilibrium (AGE) inter-regional model of Brazil underlies our analysis, with a detailed specification of households. The model is static and solved with GEMPACK. The Representative Household (RH) hypothesis is abandoned; instead a micro-simulation (MS) model is used to track changes in household income and expenditure patterns. This micro-simulation model is built upon two Brazilian household studies: (1) the Household Budget Survey (POF, IBGE, 1999) covers detailed expenditure patterns for 16,013 households and 11 regions in Brazil in 1996; (2) the National Household Sample Survey (PNAD, IBGE, 1997) is a yearly survey that includes detailed information about household employment and income sources, with 331,263 observations. We integrate the two data sources to produce a detailed mapping of expenditure and income sources for 112,055 Brazilian households and 263,938 adults, distinguishing 42 activities, 52 commodities, and 27 regions.
We link the AGE and MS models together, solving them iteratively to get consistency between results. After a shock, the AGE model communicates changes in wages and employment by industry and labour type to the MS model, which individually simulates the changes in employment, income and expenditure patterns for each household. The new expenditure pattern is then communicated to the AGE model, and the process is repeated until the two models converge. The final results from the MS model enable us to estimate changes in poverty and income distribution measures, both nationally and for regions within Brazil. We use the model to analyze poverty and income distribution impacts of the Free Trade Area of the Americas formation upon the Brazilian economy.
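A minimal, purely illustrative sketch of the iterative AGE-microsimulation linkage described above (our own illustration: the functions, the toy response rules and the convergence tolerance are invented placeholders, not the GEMPACK model or the actual microsimulation):

def solve_age(shock, expenditure):
    # Toy AGE side: wages and employment respond to the trade shock and,
    # weakly, to the current household expenditure pattern.
    food = 0.3 if expenditure is None else expenditure["food"]
    wages = {"skilled": 1.0 + 0.10 * shock,
             "unskilled": 1.0 + 0.20 * shock + 0.05 * food}
    employment = {"skilled": 0.95, "unskilled": 0.90 + 0.02 * shock}
    return wages, employment

def simulate_households(wages, employment):
    # Toy MS side: household spending shifts as unskilled wages change.
    food_share = 0.3 + 0.1 * (wages["unskilled"] - 1.0)
    return {"food": food_share, "other": 1.0 - food_share}

def link_models(shock, tol=1e-8, max_iter=50):
    expenditure, previous = None, None
    for _ in range(max_iter):
        wages, employment = solve_age(shock, expenditure)      # AGE step
        expenditure = simulate_households(wages, employment)   # MS step
        if previous is not None and \
           max(abs(expenditure[k] - previous[k]) for k in expenditure) < tol:
            break                                              # converged
        previous = expenditure
    return wages, employment, expenditure

print(link_models(shock=0.5))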
In the particular simulation we examine, freer trade leads to increased employment, especially for lower-paid workers. Poor households, which contain more unemployed adults, benefit most. This leads to a reduction in poverty in all 27 Brazilian states.
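For the poverty measures themselves, the Foster-Greer-Thorbecke (FGT) family cited in the reference list below is the standard decomposable choice; here is a minimal Python sketch with invented numbers (the paper's actual poverty line and data are not reproduced here):

def fgt_index(incomes, poverty_line, alpha=0):
    # FGT(alpha): alpha = 0 gives the headcount ratio, alpha = 1 the
    # normalized poverty gap, alpha = 2 the squared-gap (severity) index.
    gaps = [((poverty_line - y) / poverty_line) ** alpha
            for y in incomes if y < poverty_line]
    return sum(gaps) / len(incomes)

incomes = [120.0, 80.0, 45.0, 200.0, 60.0]   # hypothetical household incomes
z = 100.0                                    # hypothetical poverty line
print(fgt_index(incomes, z, alpha=0))        # 0.6  - three of five are poor
print(fgt_index(incomes, z, alpha=1))        # 0.23 - average normalized gap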
Date of creation: Jul 2004
Publication status: Published in Revista Brasileira de Economia, FGV/EPGE Escola Brasileira de Economia e Financas, Getulio Vargas Foundation (Brazil), vol. 60(4), pages 363-388, February 2006.
Contact details of provider: Postal: PO Box 14428, Melbourne, Victoria, 8001
Phone: 03 9919 1877
Web page: http://www.copsmodels.com/about.htm
More information through EDIRC
References listed on IDEAS
- Foster, James & Greer, Joel & Thorbecke, Erik, 1984. "A Class of Decomposable Poverty Measures," Econometrica, Econometric Society, vol. 52(3), pages 761-766, May.
- Green, Francis & Dickerson, Andy & Saba Arbache, Jorge, 2001. "A Picture of Wage Inequality and the Allocation of Labor Through a Period of Trade Liberalization: The Case of Brazil," World Development, Elsevier, vol. 29(11), pages 1923-1939, November.
- Horridge, Mark, 2000. "ORANI-G: A General Equilibrium Model of the Australian Economy," Centre of Policy Studies/IMPACT Centre Working Papers op-93, Victoria University.
The need to feed hungry families cultivates new interest in gleaning
Corinne Almquist wants to restore the biblical tradition of harvesting what farmers leave behind.
Clusters of plump, wine-red Empire apples hang from sagging boughs, yearning to be picked. A small group of volunteers is obliging, quickly filling a truck bed with wooden boxes of fruit.
They're led by a smiling, energetic young woman, her red hair pulled back and practical rubber boots on her feet, ready for tromping in an orchard on a day that threatens rain.
Later that afternoon Corinne Almquist will deliver some 20 bushels of apples, about 1,000 lbs., to a food shelf for free distribution to hungry local residents. Sunrise Orchards in Cornwall, Vt., where Ms. Almquist and her helpers have been gleaning, can't sell the apples: Most have cosmetic blemishes caused by being pelted in a late summer hailstorm. Though grocery chains won't buy them, they're still tasty and nutritious.
Gleaning – harvesting leftover crops for the poor – is an idea as old as the Bible. In the story of Ruth she gleans in the fields of Boaz and the two fall in love. Leviticus urges farmers to leave the corners of their fields unharvested, providing food for the poor and strangers. The practice was common in 19th-century France, too, celebrated in Jean-François Millet's 1857 painting "The Gleaners," which shows women picking through a harvested wheat field.
But gleaning is also finding modern advocates in the United States as the recession eats a hole in many family budgets.
"This idea of rescuing food that's going to go to waste makes an awful lot of sense to people," says Teresa Snow, program director of agricultural resources at the Vermont Foodbank in South Barre. Gleaning, she says, is growing in popularity "not only across Vermont but across the nation" as hard times are "forcing people to be creative."
"There's so much food available in fields. It's astounding how much is wasted," says Almquist, who graduated from nearby Middlebury College in June with a degree in environmental studies. A 2004 report from the University of Arizona in Tucson estimates that 40 to 50 percent of all the food that could be harvested from fields will never be eaten.
Working with local farmers, and with help from loyal volunteers, in recent weeks she's delivered about 6,000 lbs. of squash, carrots, potatoes, and apples to food shelves or senior centers. Earlier in the summer she gathered cabbage, kale, radishes, and herbs.
"Gleaning isn't really happening on a widespread basis across the United States, and most people haven't even heard of it," Almquist says. "I've found some really wonderful volunteers who are excited about doing this every week and working it into their routine."
Almquist's quest to introduce gleaning is "quite inspiring," says Ms. Snow, who is acting as her mentor for the fellowship. "She has tremendous energy and drive and sees her potential to make an impact."
Gleaning often means harvesting what a farmer can't sell, such as produce that has a bruise or mark on it. Sometimes the vegetable may be the wrong size (too large or small) or the wrong shape. Other times, farmers simply overplant and don't have time to harvest it all.
"It's been incredible how open farmers are to the idea of gleaning and how generous they've been," she says. "Farmers don't like to see food go to waste when they've worked so hard to grow it."
Another form of gleaning involves hauling away leftover produce from a farmers' market at the end of the day. "It's such a simple concept – the food is already harvested," she says. Farmers don't want to have to pack up produce and haul it back to their farms to compost it.
What kinds of crops can be gleaned?
"Probably any crop you can think of I've gleaned," Almquist says. Her hardest task has been picking beans, she says with a laugh. "It's so labor-intensive to pick green beans. You can spend three hours and get 10 lbs. of green beans. It doesn't always feel worthwhile."
There can be other challenges too. On some farms, she says, "The weeds were so out of control that it was more like a scavenger hunt to try to find vegetables in the ground."
Her favorite glean has been raspberries.
"People are so excited to get fresh fruit, especially berries," she says. "Those are so rare for people who eat from the food shelf." They can eat the fruit immediately, while "they may not know what to do with a bunch of kale."
After she stops gleaning for the winter, Almquist plans to teach cooking and nutrition classes, helping food-shelf visitors learn how to blend fresh foods into their diets. One idea: Place easy recipe cards in front of the produce. "This is what you can do with turnips," for example.
Ideally, people who take free food from a food shelf would help with gleaning. But "that's really a time issue," she says. "Many of the people using the food shelf are working multiple jobs and are already struggling to find time to cook and feed their families. They don't necessarily have time to come out and help pick."
Food shelves also need to shift their thinking to accommodate gleaned food. Many aren't prepared to store perishable commodities.
"Some food shelves are still reluctant to take [fresh food]. They're not sure what to do with it or whether their clients will want it," she says. "Part of the challenge with gleaning is finding food shelves with cold-storage facilities and space to refrigerate things."
Another challenge is to time gleaning to the schedule of the food pantries, which may be open only once a week or less.
Besides introducing fresh food into the diets of those who rely on food shelves, gleaning also leaves a smaller carbon footprint. The produce travels perhaps 10 or 15 miles "instead of thousands of miles – and arriving really tired and stale," she says.
Almquist grew up in New Jersey ("the Garden State," she points out) an hour outside New York City. She worked on an organic farm in Vermont for a semester as a high school junior, which instilled a love of growing things.
But she had made a special connection with one plant long before then, as a preschooler.
"I befriended this giant bush in my backyard," she says with a smile. "I would talk with it for hours at a time, telling it my 4-year-old woes. And the bush would talk back." | <urn:uuid:d50df536-db7a-4dc5-918b-db95da68f18a> | CC-MAIN-2016-36 | http://m.csmonitor.com/The-Culture/2009/1102/p07s01-lign.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982948216.97/warc/CC-MAIN-20160823200908-00211-ip-10-153-172-175.ec2.internal.warc.gz | en | 0.979515 | 1,426 | 2.609375 | 3 |
As we know, many English-speaking people use the labiodental /f/ and /v/ in place of intervocalic /θ/ and /ð/. This was at one time claimed to be a feature of Cockney, but it is far more widespread than that.
An interesting hypercorrection is to use /ð/ where standard English would have /v/. When this happens among people who would normally be considered well, or very well educated, does it warrant an entry in dictionaries as a variant, particularly if the hypercorrection appears in print?
I have recently read Sir Leonard Woolley’s book “Ur of the Chaldees”, first published in 1929. My copy is a Pelican Book, printed in 1938, and on page 140, we can read the sentence “Crushed together under a fallen brick we found at least a hundred slithers of ivory, many of them minute in size and as thin as tissue-paper.” As it happens, I also have a copy of the revised edition published in 1982 with “minimal revisions” by P R S Mooney, described on the fly leaf as “Senior Assistant Keeper in the Department of Antiquities, the Ashmolean Museum, Oxford”. The identical sentence, with no changes, appears on page 253. Previously, I have only heard this combination of pronunciation and meaning from people who, from their accent and other oral behaviour, could be assumed to be hypercorrecting.
The nearest OED definition of slither, as a noun, is “Something smooth and slippery; a smoothly sliding mass, the same as sliver n.1 1.”, where we have “A piece cut or split off; a long thin piece or slip; a splinter, shiver, slice”. I’m not sure whether this means that Oxford is, or is not, accepting “slither” as an alternative spelling to “sliver” in this sense. If they are, perhaps they should give at least one example sentence, and Woolley seems to provide the perfect one. | <urn:uuid:363e6a7b-b77d-40bb-989e-539ba1420086> | CC-MAIN-2018-34 | http://www.linguism.co.uk/language/slithery-slivers | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210559.6/warc/CC-MAIN-20180816074040-20180816094040-00569.warc.gz | en | 0.967598 | 444 | 2.90625 | 3 |
Creating a productive work environment is essential for any business to succeed. A productive workplace is one that is efficient, has a positive atmosphere, and engages employees. It is also important to manage stress levels and ensure that employees are able to work in a healthy and safe environment. In this blog post, we will discuss how to create a productive work environment and how cognitive behavioural hypnotherapy can help.
What is a Productive Work Environment?
A productive work environment is one that is conducive to productivity. It is a workplace that is efficient, has a positive atmosphere, and engages employees. It is also important to manage stress levels and ensure that employees are able to work in a healthy and safe environment.
How to Create a Productive Work Environment
Creating a productive work environment is essential for any business to succeed. Here are some tips for creating a productive work environment:
1. Establish clear goals and objectives: Establishing clear goals and objectives is essential for any business to succeed. It is important to set realistic goals and objectives that are achievable and measurable.
2. Encourage collaboration: Encouraging collaboration between employees is essential for creating a productive work environment. Collaboration helps to foster creativity and innovation, which can lead to increased productivity.
3. Foster a positive atmosphere: Creating a positive atmosphere in the workplace is essential for creating a productive work environment. This can be done by encouraging open communication, providing recognition and rewards for employees, and creating a culture of respect and appreciation.
4. Manage stress levels: Stress can have a negative impact on productivity. It is important to manage stress levels in the workplace by providing employees with the necessary resources and support to help them cope with stress.
5. Promote employee engagement: Employee engagement is essential for creating a productive work environment. It is important to create an environment where employees feel valued and appreciated. This can be done by providing employees with opportunities to participate in decision-making, providing recognition and rewards for their efforts, and creating a culture of respect and appreciation.
The Psychological Barriers to Productivity
Before delving into how CBH can transform a work environment, it’s essential to understand the psychological barriers that impede productivity:
- Cognitive Distortions: These are skewed perceptions of reality that can lead to negative thought patterns. Common distortions include catastrophizing (expecting the worst-case scenario) and black-and-white thinking (viewing situations in absolutes).
- Low Self-efficacy: This is the belief that one lacks the capabilities to execute specific tasks, which can lead to avoidance behaviours.
- Resistance to Change: Humans are creatures of habit, and any shift in routine or expectations can evoke anxiety.
Integrating CBH Techniques in the Workplace
CBH provides tools to challenge and modify these barriers:
- Cognitive Restructuring: By identifying and challenging cognitive distortions, employees can develop a more realistic and positive outlook, enabling them to approach tasks with a problem-solving attitude.
- Behavioural Experiments: Encourage employees to test their beliefs by setting up experiments. For instance, if an employee believes they can’t complete a task within a timeframe, have them attempt it while noting down the results and feelings. This practice can debunk negative beliefs over time.
- Relaxation Techniques: Incorporate guided relaxation or brief hypnosis sessions during breaks to help employees manage stress and rejuvenate their minds.
CBH for Team Dynamics
CBH isn’t just for individuals; it can also benefit teams:
- Enhanced Communication: By understanding cognitive distortions, teams can communicate more effectively, avoiding misunderstandings rooted in skewed perceptions.
- Group Cohesion: CBH exercises can foster mutual understanding and empathy, strengthening team bonds.
- Conflict Resolution: CBH techniques can be used to mediate team conflicts by addressing the cognitive and emotional factors at play.
Continuous Training and Development
For CBH to have a lasting impact on a work environment, continuous training and development are crucial. Consider bringing in a licensed cognitive behavioural hypnotherapist for regular workshops or training sessions. They can offer tailored strategies to address the unique challenges your organization faces.
How Cognitive Behavioural Hypnotherapy Can Help
Cognitive behavioural hypnotherapy is a form of psychotherapy that can help to create a productive work environment. It is a form of therapy that focuses on changing negative thought patterns and behaviours. It can help to reduce stress levels, improve communication, and foster a positive atmosphere in the workplace.
Cognitive behavioural hypnotherapy can also help to improve employee engagement. It can help to create an environment where employees feel valued and appreciated. It can also help to foster collaboration and creativity, which can lead to increased productivity.
Creating a productive work environment is essential for any business to succeed. It is important to establish clear goals and objectives, encourage collaboration, foster a positive atmosphere, manage stress levels, and promote employee engagement. Cognitive behavioural hypnotherapy can also help to create a productive work environment.
Cultural Teachings: First Nations Protocols and Methodologies
First Nations' people begin ceremonies, feasts, songs, gatherings, healings and other occasions with traditional protocols and methodologies which have been passed on from generation to generation since time immemorial. This book provides introductory teachings so that readers will have an understanding of expected etiquette when attending various cultural activities.
On Friday, August 11, neo-Nazi, racist and nationalist demonstrators gathered in the university town of Charlottesville, Virginia to protest against blacks, Jews, gays and immigrants. Groups marched through the streets insulting blacks, Jews and immigrants, but the big demonstration took to the streets on Saturday, and there were clashes and deaths. The neo-Nazi slogans were: "You will not replace us," referring to immigrants; "White Lives Matter," as opposed to the black movement "Black Lives Matter"; and "Death to the Antifas," "antifa" being short for "anti-fascists," the opponents of the neo-Nazis.
The immediate motive behind the neo-Nazi protest was the removal of a statue of General Robert E. Lee (1807-1870) from a local park. Lee was a commander in the Civil War (1861-1865), when the states of the South, defending slavery, wanted to separate from the North.
The groups that promoted the demonstration against blacks, gays and immigrants are voters of President Donald Trump. One of the participants, David Duke, a former leader of the Ku Klux Klan - a faction responsible for murders of blacks - told the press: "Let us fulfill Donald Trump's promises to take back our country." Duke told Trump: "Look in the mirror and remember that it was the whites who put you in power." On Twitter, the confrontation was called an "imminent civil war." The Governor of Virginia, Terry McAuliffe, declared a state of emergency after the confrontation between neo-Nazis and anti-fascists. Charlottesville Mayor Mike Signer said the demonstration was "a cowardly parade of hatred, prejudice, racism and intolerance."
Introduction to Sociology/Social change
Social change is happening all around us. Even in the most rigid of societies it is taking place, though perhaps at a varied pace. Social change essentially means the process by which changes occur in the superstructure and/or the infrastructure of a society.
Social change can occur slowly and steadily, maintaining the stability of a social system. Or it can occur spontaneously and violently, damaging the original social system in the process. | <urn:uuid:13bf8511-e633-44a9-becb-3701d720c5be> | CC-MAIN-2014-10 | https://en.wikibooks.org/wiki/Introduction_to_Sociology/Social_change | s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011038777/warc/CC-MAIN-20140305091718-00035-ip-10-183-142-35.ec2.internal.warc.gz | en | 0.954264 | 94 | 3.078125 | 3 |
January 29, 2019
How well do brief cognitive assessments do at detecting dementia?
Populations around the globe are aging at an astounding rate. Dementia, which primarily affects the elderly, and progressively impairs individuals’ abilities to think and perform everyday activities, is therefore also on the rise. According to the WHO, “[t]he number of people living with dementia worldwide is currently estimated at 47 million and is projected to increase to 75 million by 2030.” Dementia imposes a high cost on both the individuals and families affected as well as on society in general.
In recent years, increased research interest has focused on dementia due to its rising toll. Researchers are interested in establishing quicker and more accurate diagnostic instruments for the primary care setting, and brief cognitive assessments also have high value for large studies of older adults. However, the brief cognitive assessments used for dementia classification are not perfect, yielding both false positives and false negatives which may impact care in clinical settings and findings in research settings.
In their recent article in Neurology: Clinical Practice, SRC researcher Kenneth Langa and co-authors analyze data from the population-based US Aging, Demographics and Memory Study (ADAMS) to determine predictors of dementia misclassification across three brief cognitive assessments. They examine clinical diagnoses from ADAMS as well as the implied diagnoses using the brief assessments, the Mini-Mental State Examination (MMSE), Memory Impairment Screen (MIS) and animal naming (AN).
All three brief assessments correctly diagnosed most individuals as having normal cognitive function or having dementia. However, false-positive and false-negative rates varied across the tests, and misclassifications were correlated with other factors, such as whether a relative or friend said the individual had a memory problem, age, education, illiteracy, race, and nursing home residency. Whether and to what extent each factor affected misclassification differed, however, among the three tests.
The researchers suggest that it could improve diagnostic accuracy to use different cutoffs on the brief cognitive assessments for individuals with different characteristics. Additionally, they point to an interesting finding that animal naming test achievement is negatively related to nursing home residency, possibly due to a tendency for lower verbal communication in residential facilities, or simply due to nursing home residents being quite ill; they recommend re-assessment after recovery.
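The covariate-specific cutoff idea can be made concrete with a small sketch. Everything here is illustrative: the base MMSE cutoff of 24 is a common convention rather than a value from the study, and the education and residency adjustments are invented to show the mechanics:

def classify_mmse(score, years_education, nursing_home_resident):
    cutoff = 24                  # commonly used MMSE threshold (assumption)
    if years_education < 8:
        cutoff -= 2              # tolerate lower scores after little schooling
    if nursing_home_resident:
        cutoff -= 1              # residents may score lower for other reasons
    return "flag for full assessment" if score < cutoff else "within normal range"

# A score of 22 is flagged under the unadjusted cutoff, but not for a
# person with six years of schooling once the cutoff is adjusted.
print(classify_mmse(22, years_education=6, nursing_home_resident=False))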
World Health Organization, 10 Facts on Dementia. Accessed on 1/28/2019.
Ranson, Janice M., Elżbieta Kuźma, William Hamilton, Graciela Muniz-Terrera, Kenneth M. Langa, David J. Llewellyn. 2018. Predictors of dementia misclassification when using brief cognitive assessments. Neurology: Clinical Practice. | <urn:uuid:9cdb2cb3-9fb2-45cf-b6f1-3e51177a73c5> | CC-MAIN-2021-39 | https://www.src.isr.umich.edu/how-well-do-brief-cognitive-assessments-do-at-detecting-dementia/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056711.62/warc/CC-MAIN-20210919035453-20210919065453-00472.warc.gz | en | 0.935386 | 563 | 3.15625 | 3 |
Goosebumps. They coat your flesh when you get out of a cold pool, run down your spine when you hear your favorite song, and make the hairs on the back of your neck stand at attention when you're scared. If you've spent a lot of time ruminating on the reason behind this whole goosebumps thing but come up empty, the scientific reason you get goosebumps when you're scared might surprise you. According to Scientific American, goosebumps are something we've inherited from our animal ancestors, and similar to throwaway body parts like wisdom teeth and the appendix, they actually don't seem to serve any useful purpose for humans in modern society.
"These bumps are caused by a contraction of miniature muscles that are attached to each hair. Each contracting muscle creates a shallow depression on the skin surface, which causes the surrounding area to protrude," George A. Bubenik, a physiologist and professor of zoology at the University of Guelph in Ontario, Canada, explained for Scientific American. "The contraction also causes the hair to stand up whenever the body feels cold. In animals with a thick hair coat this rising of hair expands the layer of air that serves as insulation. The thicker the hair layer, the more heat is retained. In people this reaction is useless because we do not have a hair coat, but goosebumps persist nevertheless."
According to the University of Melbourne in Australia, which is studying goosebumps and their potential therapeutic benefits, the reason you get goosebumps when you're scared has to do with adrenaline. In our furry friends, when goosebumps cause their hair to stand up in response to a threat, they appear bigger. This can make it more likely for potential enemies to back off.
So, when you're afraid of something, your goosebumps are actually trying to protect you. And while your arm hair standing on end probably isn't going to help much, it can serve as an alert system. "Strong emotions can also cause adrenaline to be released, which is why we get goosebumps in response to music we love, or a strong memory," Jane Gardner wrote for the university's news section. "Some people experience goosebumps more than others based on how much hair they have or their tendency to panic."
What's more, a study from Harvard University found that goosebumps might actually be a sign of good health. Those who experience goosebumps while listening to live music were found to be more positive, generous, creative, and more in touch with their emotions. This makes sense in relation to old-timey days when being healthy, strong, and full of goosebumps increased your chances of fending off would-be attackers.
Another study from Northeastern University in Boston found that some people actually have the ability to induce goosebumps, which would have been a valuable tool for staying safe in primitive times. The study found that people who were able to induce goosebumps, known scientifically as voluntarily generated piloerection, were more emotionally open.
"Individuals who display VGP may play an important role within the future study of emotion and emotional regulation, as the role of the ANS integrated within the physiology and experience of visceral emotions (shock, awe, being moved, fear, panic, disgust, etc.) is potentially illuminated by individuals with rare or unusual physiology," the study explained. Another recent goosebump development from the University of Melbourne centers around their potential to treat disease. Researchers found that goosebump muscles actually contain stem cells, which means they might not be so useless after all.
Overall, getting goosebumps is kind of like an internal safety mechanism, which is good. However, since it reacts based on outdated information, it's not that reliable. Because, as you know, your hair standing on end doesn't necessarily mean you're in danger. The bottom line: If you get goosebumps on the reg, you probably would have fared well hundreds of years ago, and your body is full of healthy stem cells — something that actually is useful today. #TheMoreYouKnow
What Bible should you use?
KJV. NIV. NASB. NRSV. ESV. TNIV. The Message. NLT. It's never been easier to find a Bible in English.
Still, it's never been harder to decide what Bible to use. Formal or conversational? Traditional or inclusive language? Word-for-word, meaning-for-meaning or paraphrase?
A User's Guide to Bible Translations escorts you through the history of Bible versions in English from Wycliffe and Tyndale to the English Standard Version and Today's New International Version, with explanatory glances at the original Hebrew and Greek manuscripts and brief introductions to translation theories along the way. In straightforward language, David Dewey explains how we ended up with so many versions of the Bible, shedding light on the difference between word-for-word and meaning-for-meaning translations, the controversy over gender accuracy, and issues of theological bias.
Dewey also reminds us that it's not enough to ask, Which Bible is best? We need to ask, Best for what? For personal study? For reading aloud? For leading a Bible study for inquirers? For lending to an international student struggling with English? Filled with charts comparing versions and diagrams showing translation difficulties, A User's Guide to Bible Translations is just that--an easy-to-use handbook for digging through the mountain of translation options until you find the right Bible for the right purpose.
"What makes a Bible translation good? For anyone wondering which Bible version to use, this is the book for you. David Dewey provides a clear, accurate, fair and balanced discussion of English Bible versions available today and the translation theories which lie behind them. This book should be essential reading for anyone who reads and studies the Bible--whether pastor, scholar, student or layperson."
"A User's Guide to Bible Translations provides information on the development of our English Bible, the various methods used for producing a translation, and factors for consideration in arriving at a proper choice of which translation to buy and use. It reads easily and answers many questions people might have on the subject."
List of Abbreviations
Part One: The Task of Translation
1. The Translator's Art
2. Word-for-Word or Meaning-for-Meaning?
3. A Question of Style
4. His and Hers: Gender Accuracy
5. Yet More Choices
Part Two: Translations in English
6. From Unauthorized to Authorized
7. Crossing the Centuries
8. A New Era Begins
9. Formative Years: The 1970s and 1980s
10. Old Faces in New Guises
11. Into a New Millennium
12. Reflections and Conclusions
Appendix 1: Being Original
Appendix 2: Other Twentieth- and Twenty-First Century Translations
Appendix 3: Internet Resources | <urn:uuid:2cf5fcdd-c539-4e6c-88eb-1f315ac1736f> | CC-MAIN-2024-10 | https://ivpress.com/a-user-s-guide-to-bible-translations | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474650.85/warc/CC-MAIN-20240226030734-20240226060734-00690.warc.gz | en | 0.893018 | 601 | 2.5625 | 3 |
The oldest human traces in Norway indicate that the first people came here when the glacial ice melted around 10,000 years ago. But today’s Scandinavians are not direct descendants of these hunter-gatherers.
We are a mixed bag after thousands of years of our domestic development and international influences. Our genes, language, religion and culture have been tumbled and shaped like stones on a shore by wave upon wave of immigrants, visitors and returning emigrants.
“Of course there are things that could be deemed typical Norwegian. But you won’t find them by scavenging among what’s original and authentic in Norwegian ancestry,” says Christopher Prescott, of the University of Oslo.
It becomes more and more evident that dramatic external changes formed the basis of present-day Norwegian culture. One of the most important of these occurred around 4,400 years ago.
Not many years ago the consensus was that most of our ancestors lived their lives in fixed locations. Developments were snail-paced and each generation was nearly identical to the previous one.
Research results are now drawing a different picture.
Our forefathers could and did travel great distances. They received visitors and traded everything from ideas and languages to combs and spouses. Radical social changes could occur within just a few years.
Researchers have now begun to see the contours of one of those major upheavals which probably changed us really fast and formed the foundation of what we now perceive as Norwegian culture.
“We have to go back to ca. 2400 BC,” says Prescott.
Agriculture had already spread its way through Europe and taken root in Norway.
But the hunter-gatherers up here in the north didn't immediately join the crowd. They maintained much of their own culture and lived side by side with early farmers as well as with certain groups belonging to the so-called battle axe culture for nearly a thousand years.
Then something happened.
In just a single generation societies all over the country were radically altered. People went in for farming, built characteristic longhouses and started using new technology and probably metals too.
They switched from their original language to an Indo-European tongue, organised society in completely new ways and discarded whatever religious beliefs they had to adopt an Indo-European mythology, which evolved into the Norse mythology with the likes of Odin and Thor.
Norwegians also became part of the greater European network. Societies were in contact with each other and gained similarities.
“The dramatic change didn’t just occur in Scandinavia. The same thing was happening from the Himalayas to the Atlantic, from North Africa to the Polar Circle and maybe even further,” says Prescott.
What in the world was going on?
“Yeah, that’s one of the big questions. One thing is certain: it would be impossible for this to happen without extensive migration," he says.
“Nothing that had happened previously indicated that people would suddenly engage themselves in agriculture and metallurgy. Or that they would start moving around, exchanging spouses and probably changing their languages and forms of expression.”
He thinks the transformations came with people from the Iberian Peninsula who migrated up through Western France and the Netherlands to Scandinavia. The culture, and probably some of these people, had roots in areas of the Middle East.
Archaeological traces aren’t all that testifies to a major upheaval over 4,000 years ago.
In 2009 a team of Swedish and Danish scientists published the results of DNA analyses of Neolithic human remains. These indicated that modern Scandinavians are different from the original hunter-gatherers in Northern Europe.
Analyses made in 2012 point in the same direction. Contemporary Scandinavians have a lot of genetic material from immigrants who came from around the Mediterranean.
But DNA experts and archaeologists don’t know yet how many immigrants were behind this enormous upheaval.
“Two models can be considered. Either waves of people came from the southwest or a new elite came north and changed the societies they met.”
No matter what the catalyst, the press for change must have been enormous since it came so rapidly. A new lifestyle surely would have been the attraction. But hunter-gatherer groups must have also been pressed to change.
Probably physical as well as social threats accompanied the immigrants from the southwest. Other influential factors were adaptation, marriages and, last but not least, trade.
“I’ve found traces in the Norwegian mountains indicating that animal husbandry suddenly started there at this time. People had started almost overnight to keep sheep and goats on a big scale. It’s possible they were exporting wool to Southern Scandinavia and further into Europe, he says.”
“Huge developments in maritime traffic started at this time and they had boats that could cross the Skagerrak strait.”
Many finds of ancient artefacts give clues to how possessions were spread across wide areas: Ceramics of German clay and Danish flint knives by the hundreds have been found in Norway.
Furs, wool and leather were probably exported from our country, just as bones from Nordic moose were used in European hilts and combs during the Iron Age. Nordic women’s jewellery has turned up in Bronze Age graves in Poland and Germany. What were they doing there?
Perhaps these were export products. Prescott thinks the significance of marriages shouldn’t be underestimated either.
“Most researchers think marriage and kinship comprised some of the most important ties among various peoples. These gravesite relics could be traces of Nordic women who had been wedded to chieftains in Central Europe,” says Prescott.
“A continual interchange has been going on among diverse societies.”
Prescott says that there are certainly things that are typically Norwegian. But we can’t find them by looking for any primal Norwegian.
“Norwegians aren’t a group that has emerged in isolation since the Ice Age. There’s been immigration for thousands of years and it’s brought lots with it, both positive and negative,” says Prescott. | <urn:uuid:d4b4f0b8-1853-42d3-8ed3-7df59247b7b3> | CC-MAIN-2017-17 | http://sciencenordic.com/immigration-stone-age | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121165.73/warc/CC-MAIN-20170423031201-00055-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.976398 | 1,303 | 3.65625 | 4 |
When it comes to high-speed coding solutions, it’s tough to beat the sheer power provided by a laser marking system. Across the manufacturing and packaging spectrum, laser marking systems have become increasingly popular in recent decades due to their:
However, laser marking systems weren’t always so popular. Just a few decades ago, laser systems carried price tags that most operations couldn’t afford. Moreover, their operational capabilities were considerably lower than those offered by other marking systems of the time.
So, how did laser marking systems go from an out-of-reach technology to one of today’s most popular coding solutions? To provide a succinct answer, we spend this article looking at the history of laser technology, focusing on two of today’s most important laser systems: fiber laser systems and CO2 laser systems.
Our goal in writing this historical overview isn’t to provide a comprehensive look at laser technology development—instead, we will review the origins and development timeline of today’s top two laser marking technologies:
It took decades to develop both of these systems into what they are today, and each one has unique application specialties.
For basic background on these laser options, read on for an overview of their specialties and development timelines.
Fiber laser systems are a solid-state laser technology, meaning they use solid materials as a laser source. In every fiber laser system is a component called a diode, which produces light and pumps it into a fiber-optic cable. The light travels through the fiber optic cable until it reaches an optical cavity. There, the light is exposed to a rare-earth dopant that increases the intensity of the light and converts it into a concentrated beam that can mark, engrave, and cut solid materials.
Fiber laser systems are known for being highly powerful and adept at working with metals such as aluminum, steel, copper, brass, and nickel, as well as rigid plastics.
The first fiber laser was built and operated by Elias Snitzer in 1961. Working at American Optical in Southbridge, Massachusetts, Snitzer and his colleagues spent the next few years refining fiber optics until they were able to produce the first fiber laser system in 1963.
Although Snitzer is remembered as one of the most important figures in the history of fiber laser development, there were several other important individuals in the history of fiber laser technology. For example:
CO2 laser systems are a gas-state laser technology, meaning they use gaseous materials as a laser source. Each CO2 laser system is built with a glass tube containing a mix of carbon dioxide, nitrogen, helium, and hydrogen. By exposing the tube’s gaseous mix to high-voltage electricity, the system excites the gas particles and causes them to release light.
To turn the released light into a laser beam, the CO2 laser tube is bookended by two mirrors: a fully reflective mirror and a partially reflective mirror. The released light particles bounce between the mirrors, building in intensity and forming a beam. Once the light reaches sufficient brightness, the beam can pass by the partially reflective mirror and be discharged toward the substrate.
Due to these operating mechanics, CO2 laser beams have longer wavelengths than those made with fiber systems. As a result, CO2 laser systems are not well-suited for most metal marking applications. However, they do fare well with marking organic materials, such as wood and rubber, that fiber lasers are incompatible with.
Here’s a condensed timeline of some major CO2 laser milestones:
After these major developments in the 1960s, engineers continued to refine CO2 laser technology throughout the 1970s and 1980s, expanding application possibilities.
For information on how today’s leading coding/marking companies have leveraged fiber and CO2 laser technology in their hardware, read our thoughts in one of our latest articles.
Want to learn more about the history of laser technology? Stay connected to C&M Digest by subscribing to our newsletter. With information on hardware, formulas, and other important marking topics, our newsletter will keep you updated on the latest industry developments. To get in touch with us about possible collaborations or ideas for coverage, contact us today. | <urn:uuid:694b579e-e9c6-437b-bfe9-08f5cf266eae> | CC-MAIN-2023-40 | https://www.codingmarkingdigest.org/industries/history-of-laser-technology-fiber-and-co2-systems/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00069.warc.gz | en | 0.948062 | 865 | 3.234375 | 3 |
Energy analysts and investors have been closely monitoring the political climate in Colorado as Election Day approaches on November 6th. Proposition 112 (or Initiative 97) will be on the ballot and could have implications for the future of energy production in the state if it passes. Portions of the Niobrara-DJ Basin and Piceance Basin are located within Colorado, and the state is a significant oil and gas producer. Last year, Colorado was the sixth-largest oil-producing state and the eighth-largest for natural gas production, and the production value generated by oil and gas development in Colorado was estimated at $10.9 billion.
Any potential impact to oil and natural gas production has repercussions for midstream companies providing pipeline takeaway capacity from Colorado as well as gathering and processing and water services in the state. Today, we’ll discuss what Proposition 112 is, what companies have said about it, and which midstream companies have exposure to Colorado.
What is Proposition 112?
Colorado’s Proposition 112 “proposes amending the Colorado statutes to require that new oil and natural gas development be located at least 2,500 feet from occupied structures, water sources, and areas designated as vulnerable” (source). This distance is often referred to as a setback. Today, wells must be 500 feet from a home or occupied building or 1,000 feet from high-occupancy buildings like schools or hospitals. Proposition 112 does not apply to federal land, which accounts for 36% of the land in Colorado. Of the non-federal land in the state, 85% would become inaccessible if the 2,500-foot setback was implemented, according to an impact assessment from the Colorado Oil and Gas Conservation Commission (COGCC), which is a division of Colorado’s Department of Natural Resources. In Colorado’s top five oil and gas producing counties combined, 61% of the surface acreage (94% of non-federal land) would become unavailable.
Initiatives like Proposition 112 are not new in Colorado. In 2016, a 2,500-foot setback rule (Initiative 78) did not receive enough signatures to make it on to the ballot. Proposition 112 has made it to the ballot, but will it pass? A simple majority is required for passing, and the industry has spent prolifically to prevent that majority from materializing. Based on numbers from Ballotpedia, campaign contributions opposing Prop 112 amounted to over $21 million as of earlier this month compared to just over $1 million raised in support of the measure. Both gubernatorial candidates oppose the measure, but the governor does not have veto power when it comes to initiatives voted on in a referendum.
If passed, Proposition 112 would apply to oil and gas activity permitted on or after the effective date. Likely as a precaution, permitting activity in Colorado has noticeably increased in recent months, as shown below.
On a related note, Amendment 74 (Initiative #108) is also on the ballot and “requires the state or a local government to compensate a property owner if a law or regulation reduces the fair market value of his or her property” (source). While not wanting to open another can of worms on Amendment 74, if both Proposition 112 and Amendment 74 pass, it would be easy to see where things could get messy. Keep in mind that Colorado’s governor must provide a balanced budget to the state legislature, which is then required to adopt a balanced budget. Oil and gas producers pay significant taxes, including severance tax to the state that averaged $144 million net per year for fiscal years 2013-2017. Local property taxes for oil and gas producers in aggregate were estimated at nearly $500 million last year.
What have companies said?
For the most up-to-date commentary, the best source will likely be 3Q earnings calls from E&Ps and midstream companies, as mentioned in our earnings preview from two weeks ago. These calls will largely be held later this week and next week. When asked about Proposition 112 on its 2Q call, the CEO of DCP Midstream (DCP) discussed how Colorado voters had opposed these types of measures in the past. Noble Energy (NBL) included the slide below in its presentation for the Barclays conference in September, which highlights estimated impacts if Proposition 112 is implemented. NBL also notes that the Colorado legislature can amend or eliminate the proposition even if it is passed. This Colorado Sun article also notes that the legislature could make revisions if passed, but changes to ballot measures have been rare historically.
Which midstream companies have exposure to Colorado?
In the midstream space, examples of companies with exposure to Colorado include DCP, Noble Midstream Partners (NBLX), SemGroup (SEMG), Tallgrass Energy (TGE), and Western Gas Partners (WES). This list is not exhaustive. For example, Williams (WMB) has assets in Colorado and Plains All American (PAA) does as well, but the size and diversity of their businesses helps insulate them from Proposition 112 headline risk. The graph below shows indexed price performance of some midstream names with Colorado exposure compared to the Alerian MLP Infrastructure Index (AMZI). TGE is the only name of those included to have outperformed the AMZI Index for the period shown.
Colorado’s Proposition 112 represents yet another manifestation of headline risk that has weighed on the MLP and midstream space this year. Even if passed, lengthy litigation would likely ensue. If passed and ultimately implemented, it would take time for production to be negatively impacted as the backlog of previously permitted wells are drilled. If it does not pass and another legislative compromise is not introduced, a similar measure may be on the ballot in 2020, again creating uncertainty for investors. | <urn:uuid:64af5040-fceb-44a0-9429-a81c4d00c6f1> | CC-MAIN-2020-24 | https://www.alerian.com/colorados-proposition-112-what-is-it-and-why-do-midstream-investors-care/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394756.31/warc/CC-MAIN-20200527141855-20200527171855-00358.warc.gz | en | 0.958238 | 1,177 | 2.765625 | 3 |
BTN.com LiveBIG Staff, April 20, 2015
Inspired by their experiences in college and elsewhere, these Pathfinders are passing by the typical, well-trod career paths and blazing their own trails. We?ll explore the unconventional approaches these Big Ten alums and faculty are taking to work.
It?s nature?s greatest fireworks display: the supernova death of a bright, rare supergiant Wolf-Rayet star - already several hundred times larger in diameter than our own sun - which sends white-hot astroparticles flying millions of miles in multiple directions at close to the speed of light.
For Mauricio Bustamante, a postdoctoral fellow at Ohio State?s Center for Cosmology and Astroparticle Physics (CCAPP), that sheer magnitude is fascinating. He spends much of his time studying gamma-ray bursts (GRBs), the largest explosions in the universe, via simulations of those levels of energy release.
“Gamma rays are a very energetic frequency of light, and gamma-ray bursts are the most luminous transient objects in the known universe,” he said. “We now know that at least some of the GRBs are associated with the deaths of massive stars. Upon reaching the end of their lives, they will explode as particularly luminous supernovae. Matter will quickly accrete onto the newly-formed black hole at the center of the former star, and the infalling matter will be shot outwards at close to the speed of light, in the form of two jets, one at each magnetic pole. When one of these jets is aligned towards us, we see the gamma rays created within it. This emission is what we have come to know as a GRB.”
To find out more about his work, BTN LiveBIG interviewed Bustamante. That conversation is below:
BTN LiveBIG: How large are the actual explosions in nature that you're simulating in your models?
Bustamante: The size of the explosion depends on how we look at it. On one hand, the central object that drives the explosion, the black hole, is very small - only about 1,000 kilometers (km) in diameter. Compare it to the diameter of the Earth, which is roughly 12,000 km. By human standards, even an object 1,000 km in size is very large, but by cosmic standards, it is quite tiny. The jets themselves are much larger, easily reaching tens of billions of kilometers in extent.
But perhaps the best measure of the magnitude of a GRB is not its physical dimension, but the energy that it outputs as gamma rays: Even though GRBs are brief, during the seconds that they last, they will emit roughly as much energy as the Sun has emitted during its whole life so far. That should give you a good idea of why GRBs are the most violent, most energetic explosions of the universe.
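As a rough back-of-the-envelope check (my numbers, not Bustamante's): the Sun's luminosity is about 3.8 × 10^26 W, and it has been shining for roughly 4.6 billion years (≈ 1.4 × 10^17 s), for a total output on the order of 5 × 10^43 J. That is indeed comparable to the isotropic-equivalent energies of about 10^44–10^47 J inferred for typical GRBs — except that a GRB releases it in seconds rather than billions of years.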
So far, we have seen no GRB in the Milky Way. The probability of one occurring in any given galaxy is quite low. This is because GRBs are rare: In the local universe - that is, in the region of space that is closest to us - we think only about one GRB occurs every millennium. Of course, we see many more of them in our dedicated GRB surveys, but that is because most of them occur in far more distant regions of the universe.
The bottom line is that we should not worry about a GRB going off close to us anytime soon. But they might have happened in the distant past.
What are some things that the public should know about your work? For example, do astroparticles that fall to Earth as a result of GRBs impact the environment? How so?
Particles from outside the Earth - astroparticles - have been continuously bombarding the planet since back when it was still being formed. These include light of many different frequencies: radio, visible light, energetic X rays and gamma rays, electrically charged particles (protons and atomic nuclei) known as cosmic rays, and ghostly particles called neutrinos. They constantly reach the Earth and are as much a part of our planet's physical processes as, say, the weather and the natural ambient radioactivity.
The difference is that they come from outside: from the Sun, from other stars in the Milky Way and even from other galaxies, located several thousand billion billion kilometers [Editor?s note: That?s not a typo. He said billion twice.] away from us. We have always lived in their presence, and they pose no risk to us. In fact, cosmic rays and gamma rays contribute to the rate of mutation of all living things on Earth, and so they have played a part in the evolution of life in our planet.
Of course, it is only fairly recently that we became aware of the constant influx of cosmic rays and gamma rays into our atmosphere. Cosmic rays were discovered at the beginning of the 20th century, only a few years after radioactivity had been discovered at the end of the 19th century. The most energetic neutrinos, which we believe might be coming from outside our galaxy, were discovered just two years ago - in 2013, in an amazing experiment at the South Pole called IceCube. We might just have discovered astroparticles, but they have always been keeping us company.
In the end, a richer picture than we thought emerges: Each different kind of cosmic messenger carries a different piece of the puzzle, and only by studying all of them - light, cosmic rays, neutrinos - will we get the whole picture.
How has your experience as an academic fellow at OSU been?
CCAPP is a rare find: One of the reasons that I feel fortunate to be part of it is that the different interests and expertise of its members are highly complementary: They all bring something unique to the table, from her or his own set of skills and research history. This sets the stage for academic cross-fertilization to occur. Clearly, CCAPP faculty puts a lot of work into achieving the balance that makes it a melting pot of experience.
Regarding campus life, OSU?s is certainly very active, and I am always on the lookout for events like talks, music recitals or the odd retro-videogame-arcade day. The (admittedly self-imposed) work hours of a postdoc are somewhat constraining, though, so I am rarely able to actually attend the events and probably could do better on that front. However, just taking a leisurely walk through The Oval - the main green area at OSU - on a sunny day is a way to clear the mind and relax.
By Brian Summerfield | <urn:uuid:3e68e8fd-622b-4792-9c5a-2e1e60cda244> | CC-MAIN-2022-21 | https://btn.com/2015/04/20/btn-livebig-ohio-state-scientist-studies-the-largest-explosions-in-the-universe/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00624.warc.gz | en | 0.968192 | 1,385 | 2.578125 | 3 |
Jazz Piano Improv Tricks
What you'll learn
- Timeless jazz improvisation approaches
- Harmonic jazz techniques
- Melodic jazz techniques
- Rhythmic jazz techniques
- Some prior experience playing jazz will be quite helpful, but the 'basics' class in this course should help bring novice students up to speed
- It's recommended that you know the basics chords and scales on the piano
Have you ever wondered how jazz musicians are able to articulate their improvised thoughts so quickly and with such conviction?
Whether you're hoping to solve this mystery, or simply add some more chops to your arsenal, these lessons offer an assortment of timeless jazz improvisation techniques that will surely keep your audience intrigued.
This course breaks down 13 timeless jazz improv techniques that will give you flexibility at your instrument. These lessons break down each concept individually so that you can combine them in unique ways and pinpoint your playing style.
If you stick to these concepts and really study them thoroughly, your playing will improve drastically over the next year. Sure, some of these lessons can be absorbed quickly, but improvisation is a game of internalization, not memorization. So take advantage of the included lifetime access to the course and get the most out of each lesson.
I've taught these concepts to my students for over 10 years. So what you're getting is a decade of refinement, laid out in 13 courses for the price of 1. This course would be my gift to myself 10 years ago...if it was available. Alas, it wasn't, so instead I hope it's of great use to you and your future as an improvisor!
Who this course is for:
- Primary: Intermediate level piano players
- Secondary: Intermediate level jazz players on other instruments (ex. trumpet or saxophone)
- Instrumentalists looking to improve their improvisation chops
- Theorists hoping to better understand the vocabulary of jazz improvisation
Josh Cook is a musician and composer who is proud to call Toronto his home. Here in the city his musical talents have flourished to career highs. While educating himself at York University Josh earned his master’s degree in music composition and performed in a local band called Cool Man Cool. The band's success earned them an opportunity to share the stage with the legendary Pharrell Williams (as N.E.R.D). Josh’s love for the studio blossomed while co-writing and producing Cool Man Cool's first full length studio album.
After Josh's minor success as a performing musician he began to focus on his true passion of digital composition and studio production. Josh has an extensive library of his own music, primarily within the electronic music genre, and he has worked with a number of other musicians including the local indie rock band Simcoe.
Josh’s never-ending creativity is not something he keeps to himself. For the past decade he has been teaching music and passing on his knowledge and skills to budding musicians throughout Toronto and the GTA.
Currently Josh is focusing his talents on a career in composing for film/video. Despite Josh’s love for audio he has always been inspired by the visual arts which provides him with the unique ability to bring images to life with his sonic creations.
Josh is now focusing his attention towards online courses, developing his social media presence, and composing music for media. His philosophy is "The more I learn, the more I teach. The more I teach, the more I learn". | <urn:uuid:0455f790-b355-4eec-a0e5-97cb947bec97> | CC-MAIN-2024-10 | https://www.udemy.com/course/jazz-piano-improv-tricks/?referralCode=AE301F1FE232D2F4AC13 | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475311.93/warc/CC-MAIN-20240301125520-20240301155520-00061.warc.gz | en | 0.964307 | 705 | 2.6875 | 3 |
- আকাশের আলো
Noun(1) a window in a roof to admit daylight
(1) The overhead skylight had contributed its share to the chaos.
(2) An overhead skylight provides natural light for studying wine colour.
(3) There are no windows in the walls, only a skylight window in the ceiling.
(4) Scaffolding, useful if you're installing a large skylight, can be rented from tool supply companies.
(5) Even the skylight in the ceiling did little to dispel the creepiness the room seemed to ooze.
(6) Four small windows and a skylight brighten the room, which is 24 feet across.
(7) By this time, the rain was pummelling the overhead skylight, but we just laughed and raised our voices.
(8) A skylight in the domed roof spilled silver light over a sword suspended in mid air.
(9) A series of catwalks criss-crossed above him, illuminated by a large skylight.
(10) My uncle had installed a skylight above my bed for my birthday last year.
(11) There was a large round skylight in the ceiling, and a loft.
(12) The rowdy crowd jumped up and down on the roof to smash the skylight window.
(13) For maximum light, install tubular skylights on a south face of your roof.
(14) For homes or businesses that require constant illumination, tubular skylights present a cost-saving option.
(15) Bridged in aluminum grating, they slice through each level and align with skylights in the roof.
(16) Flexible spaces for work, rest, and play are illuminated by skylights and deftly placed windows.
(1) skylight :: আকাশের আলো
1. fanlight
English to Bengali Dictionary: skylight
Meaning and definitions of skylight, translation in Bengali language for skylight with similar and opposite words. Also find spoken pronunciation of skylight in Bengali and in English language.
Tags for the entry "skylight"
What skylight means in Bengali, skylight meaning in Bengali, skylight definition, examples and pronunciation of skylight in Bengali language. | <urn:uuid:47a8b318-cb2f-4f32-a0e6-db7e1dae607b> | CC-MAIN-2019-47 | https://www.bdword.com/?q=skylight | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00151.warc.gz | en | 0.886385 | 514 | 2.671875 | 3 |
[Update: this post has been updated with significant new information. Look to the end.]
Activity Monitor is a tool in Mac OS X that shows a variety of real-time process measurements. It is well-known and its “Energy Impact” measure (which was added in Mac OS X 10.9) is often consulted by users to compare the power consumption of different programs. Apple support documentation specifically recommends it for troubleshooting battery life problems, as do countless articles on the web.
However, despite its prominence, the exact meaning of the “Energy Impact” measure is unclear. In this blog post I use a combination of code inspection, measurements, and educated guesses to hypothesize how it is computed in Mac OS X 10.9 and 10.10.
What is known about “Energy Impact”?
The following screenshot shows the Activity Monitor’s “Energy” tab.
There are no units given for “Energy Impact” or “Avg Energy Impact”.
The Activity Monitor documentation says the following.
Energy Impact: A relative measure of the current energy consumption of the app. Lower numbers are better.
Avg Energy Impact: The average energy impact for the past 8 hours or since the Mac started up, whichever is shorter.
That is vague. Other Apple documentation says the following.
The Energy tab of Activity Monitor displays the Energy Impact of each open app based on a number of factors including CPU usage, network traffic, disk activity and more. The higher the number, the more impact an app has on battery power.
If my recollection of the developer presentation slide on App Nap is correct, they are an abstract unit Apple created to represent several factors related to energy usage meant to compare programs relatively.
I don’t believe you can directly relate them to one simple unit, because they are from an arbitrary formula of multiple factors.
[…] To get the units they look at CPU usage, interrupts, and wakeups… track those using counters and apply that to the energy column as a relative measure of an app.
This sounds plausible, and we will soon see that it appears to be close to the truth.
First, a necessary detour
top is a program that is similar to Activity Monitor, but it runs from the command-line. Like Activity Monitor,
top performs periodic measurements of many different things, including several that are relevant to power consumption: CPU usage, wakeups, and a “power” measure. To see all these together, invoke it as follows.
top -stats pid,command,cpu,idlew,power -o power -d
(A non-default invocation is necessary because the wakeups and power columns aren’t shown by default unless you have an extremely wide screen.)
It will show real-time data, updated once per second, like the following.
PID    COMMAND           %CPU  IDLEW  POWER
50300  firefox           12.9    278   26.6
76256  plugin-container   3.4    159   11.3
  151  coreaudiod         0.9     68    4.3
76505  top                1.5      1    1.6
76354  Activity Monitor   1.0      0    1.0
The PID, COMMAND and %CPU columns are self-explanatory.
The IDLEW column is the number of package idle exit wakeups. These occur when the processor package (containing the cores, GPU, caches, etc.) transitions from a low-power idle state to the active state. This happens when the OS schedules a process to run due to some kind of event. Common causes of wakeups include scheduled timers going off and blocked I/O system calls receiving data.
What about the POWER column?
top is open source, so its meaning can be determined conclusively by reading the
powerscore_insert_cell function in the source code. (The POWER measure was added to
top in OS X 10.9.0 and the code has remain unchanged all the way through to OS X 10.10.2, which is the most recent version for which the code is available.)
The following is a summary of what the code does, and it’s easier to understand if the %CPU and POWER computations are shown side-by-side.
|elapsed_us| is the length of the sample period
|used_us| is the time this process was running during the sample period

%CPU = (used_us * 100.0) / elapsed_us

POWER = if is_a_kernel_process()
          0
        else
          ((used_us + IDLEW * 500) * 100.0) / elapsed_us
The %CPU computation is as expected.
The POWER computation is a function of CPU and IDLEW. It’s basically the same as %CPU but with a “tax” of 500 microseconds for each wakeup and an exception for kernel processes. The value of this function can easily exceed 100 — e.g. a program with zero CPU usage and 3,000 wakeups per second will have a POWER score of 150 — so it is not a percentage. In fact, POWER is a unitless measure because it is a semi-arbitrary combination of two measures with incompatible units.
Back to Activity Monitor and “Energy Impact”
MacBook Pro running Mac OS X 10.9.5
First, I did some measurements with a MacBook Pro with an i7-4960HQ processor running Mac OS X 10.9.5.
I did extensive testing with a range of programs: ones that trigger 100% CPU usage; ones that trigger controllable numbers of idle wakeups; ones that stress the memory system heavily; ones that perform frequent disk operations; and ones that perform frequent network operations.
In every case, Activity Monitor’s “Energy Impact” was the same as
top‘s POWER measure. Every indication is that the two are computed identically on this machine.
For example, consider the data in the following table. The data was gathered with a small test program that fires a timer N times per second; except in extreme cases (see below), each timer firing causes an idle platform wakeup.
-----------------------------------------------------------------------------
    Hz   CPU ms/s      Intr   Pkg Idle  Pkg Power  Act.Mon.   top
-----------------------------------------------------------------------------
     2       0.14      2.00       1.80      2.30W       0.1    0.1
   100       4.52    100.13      95.14      3.29W       5      5
   500       9.26    499.66     483.87      3.50W      25     25
  1000      19.89   1000.15     978.77      5.23W      50     50
  5000      17.87   4993.10    4907.54     14.50W     240    240
 10000      32.63   9976.38    9194.70     17.61W     485    480
 20000      66.66  19970.95   17849.55     21.81W     910    910
 30000      99.62  28332.79   25899.13     23.89W    1300   1300
 40000     132.08  37255.47   33070.19     24.43W    1610   1650
 50000     160.79  46170.83   42665.61     27.31W    2100   2100
 60000     281.19  58871.47   32062.39     29.92W    1600   1650
 70000     276.43  67023.00   14782.03     31.86W     780    750
 80000     304.16  81624.60     258.22     35.72W      43     45
 90000     333.20  90100.26     153.13     37.93W      40     42
100000     363.94  98789.49      44.18     39.31W      38     38
The table shows a variety of measurements for this program for different values of N. Columns 2–5 are from powermetrics, and show CPU usage, interrupt frequency, package idle wakeup frequency, and package power, respectively. Column 6 is Activity Monitor’s “Energy Impact”, and column 7 is
top‘s POWER measurement. Column 6 and 7 (which are approximate measurements) are identical, modulo small variations due to the noisiness of these measurements.
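A plausible Python stand-in for such a test program might look like the following; it is not the original program, just a sketch of the idea. Each return from sleep() typically causes a package idle exit on an otherwise-idle machine.

import sys
import time

hz = int(sys.argv[1])              # target wakeups per second
interval = 1.0 / hz
next_fire = time.monotonic()
while True:
    next_fire += interval
    delay = next_fire - time.monotonic()
    if delay > 0:
        time.sleep(delay)          # each wakeup is (usually) an idle exit
    # No real work is done per firing; at very high frequencies the
    # scheduler can't keep up, matching the saturation in the table.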
MacBook Air running Mac OS X 10.10.4
I also tested a MacBook Air with an i5-4250U processor running Mac OS X 10.10.4. The results were substantially different.
-----------------------------------------------------------------------------
    Hz   CPU ms/s      Intr   Pkg Idle  Pkg Power  Act.Mon.   top
-----------------------------------------------------------------------------
     2       0.21      2.00       2.00      0.63W       0.0    0.1
   100       6.75     99.29      96.69      0.81W       2.4    5.2
   500      22.52    499.40     475.04      1.15W      10     25
  1000      44.07    998.93     960.59      1.67W      21     48
  3000     109.71   3001.05    2917.54      3.80W      60    145
  5000      65.02   4996.13    4781.43      3.79W      90    230
  7500     107.53   7483.57    7083.90      4.31W     140    350
 10000     144.00   9981.25    9381.06      4.37W     190    460
The results from
top are very similar to those from the other machine. But Activity Monitor’s “Energy Impact” no longer matches
top‘s POWER measure. As a result it is much harder to say with confidence what “Energy Impact” represents on this machine. I tried tweaking the previous formula so that the idle wakeup “tax” drops from 500 microseconds to 180 or 200 microseconds and that gives results that appear to be in the ballpark but don’t match exactly. I’m a bit skeptical whether Activity Monitor is doing all its measurements at the same time or not. But it’s also quite possible that other inputs have been added to the function that computes “Energy Impact”.
What about “Avg Energy Impact”?
What about the “Avg Energy Impact”? It seems reasonable to assume it is computed in the same way as “Energy Impact”, but averaged over a longer period. In fact, we already know that period from the Apple documentation that says it is the “average energy impact for the past 8 hours or since the Mac started up, whichever is shorter.”
Indeed, when the Energy tab of Activity Monitor is first opened, the “Avg Energy Impact” column is empty and the title bar says “Activity Monitor (Processing…)”. After a few seconds the “Avg Energy Impact” column is populated with values and the title bar changes to “Activity Monitor (Applications in last 8 hours)”. If you have
top open during those 5–10 seconds can you see that
systemstats is running and using a lot of CPU, and so presumably the measurements are obtained from it.
systemstats is a program that runs all the time and periodically measures, among other things, CPU usage and idle wakeups for each running process (visible in the “Processes” section of its output.) I’ve done further tests that indicate that the “Avg Energy Impact” is almost certainly computed using the same formula as “Energy Impact”. The difference is that the the measurements are from the past 8 hours of wake time — i.e. if a laptop is closed for several hours and then reopened, those hours are not included in the calculation — as opposed to the 1, 2 or 5 seconds of wake time used for “Energy Impact”.
battery status menu
Even more prominent than Activity Monitor is OS X’s battery status menu. When you click on the battery icon in the OS X menu bar you get a drop-down menu which includes a list of “Apps Using Significant Energy”.
How is this determined? When you open this menu for the first time in a while it says “Collecting Power Usage Information” for a few seconds, and if you have
top open during that time you see that, once again,
systemstats is running and using a lot of CPU. Furthermore, if you click on an application name in the menu Activity Monitor will be opened and that application’s entry will be highlighted. Based on these facts it seems reasonable to assume that “Energy Impact” is again being used to determine which applications show up in the battery status menu.
I did some more tests (on my MacBook Pro running 10.9.5) and it appears that once an energy-intensive application is started it takes about 20 or 30 seconds for it to show up in the battery status menu. And once the application stops using high amounts of energy I’ve seen it take between 4 and 10 minutes to disappear. The exception is if the application is closed, in which case it disappears immediately.
Finally, I tried to determine the significance threshold. It appears that a program with an “Energy Impact” of roughly 20 or more will eventually show up as significant, and programs that have much higher “Energy Impact” values tend to show up more quickly.
All of these battery status menu observations are difficult to make reliably and so should be treated with caution. They may also be different in OS X 10.10. It is clear, however, that the window used by the battery status menu is measured in seconds or minutes, which is much less than the 8 hour window used for “Avg Energy Impact”.
systemstats is always running on OS X. The particular invocation used for the long-running instance — the one used by both Activity Monitor and the battery status menu — takes the undocumented
--xpc flag. When I tried running it with that flag I got an error message saying “This mode should only be invoked by launchd”. So it’s hard to know how often it’s making measurements. The output from vanilla command-line invocations indicate it’s about every 10 minutes.
But it’s worth noting that
systemstats has a
-J option which causes the CPU usage and wakeups for child processes to be attributed to their parents. It seems likely that the
--xpc option triggers the same behaviour because the Activity Monitor does not show “Avg Energy Impact” for child processes (as can be seen in the screenshot above for the
vim processes that are children of the Terminal process). This hypothesis also matches up with the battery status menu, which never shows child processes. One consequence of this is that if you ssh into a Mac and run a power-intensive program from the command line it will not show up in Activity Monitor’s energy tab or the battery status menu, because it’s not attributable to a top-level process such as Terminal! Such processes will show up in
top and in Activity Monitor’s CPU tab, however.
How good a measure is “Energy Impact”?
We’ve now seen that “Energy Impact” is used widely throughout OS X. How good a measure is it?
The best way to measure power consumption is to actually measure power consumption. One way to do this is to use an ammeter, but this is difficult. Another way is to measure how long it takes for the battery to drain, which is easier but slow and requires steady workloads. Alternatively, recent Intel hardware provides high-quality estimates of processor and memory power consumption that are relatively easy to obtain.
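(On Macs these estimates are exposed by the bundled powermetrics utility. Exact flags vary between OS X releases, but an invocation along these lines prints package power once per second; this is where the “Pkg Power” column in the tables above comes from.

sudo powermetrics --samplers cpu_power -i 1000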
These approaches all have the virtue of measuring or estimating actual power consumption (i.e. Watts). But the big problem is that they are machine-wide measures that cannot be used on a per-process basis. This is why Activity Monitor uses several proxy measures — ones that correlate with power consumption — which can be measured on a per-process basis. “Energy Impact” is a hybrid of at least two different proxy measures: CPU usage and wakeup frequency.
The main problem with this is that “Energy Impact” is an exaggerated measure. Look at the first table above, with data from the 10.9.5 machine. The variation in the “Pkg Power” column — which shows the package power from the above-mentioned Intel hardware estimates — is vastly smaller than the variation in the “Energy Impact” measurements. For example, going from 1,000 to 10,000 wakeups per second increases the package power by 3.4x, but the “Energy Impact” increases by 9.7x, and the skew gets even worse at higher wakeup frequencies. “Energy Impact” clearly weights wakeups too heavily. (In the second table, with data from the 10.10.4 machine, the weight given to wakeups is less, but still too high.)
Also, in the first table “Energy Impact” actually decreases when the timer frequency gets high enough. Presumably this is because the timer interval is so short that the OS has trouble putting the package into an idle power state. This leads to the absurd result that firing a timer at 1,000 Hz has about the same “Energy Impact” value as firing one at 100,000 Hz, when the package power of the latter is about 7.5x higher.
Having said all that, it’s understandable why Apple uses formulations of this kind for “Energy Impact”.
- CPU usage and wakeup frequency are probably the two most important factors affecting a process’s power consumption, and they are factors that can be measured on a per-process basis.
- Having a single measure makes things easy for users; evaluating the relative important of multiple measures is more difficult.
- The exception for kernel processes (which always have an “Energy Impact” of 0) avoids OS X itself being blamed for high power consumption. This makes a certain amount of sense — it’s not like users can close the kernel — while also being somewhat misleading.
If I were in charge of Apple’s Activity Monitor product, I’d do two things.
- I would compute a new formula for “Energy Impact”. I would measure the CPU usage, wakeup frequency (and any other inputs) and actual power consumption for a range of real-world programs, on a range of different Apple machines. From this data, hopefully a reasonably accurate model could be constructed. It wouldn’t be perfect, and it wouldn’t need to be perfect, but it should be possible to come up with something that reflects actual power consumption better than the existing formulations. Once formulated, I would then test the new version against synthetic microbenchmarks, like the ones I used above, to see how it holds up. Given the choice between accurately modelling real-world applications and accurately modelling synthetic microbenchmarks, I would definitely favour the former.
- I would publicly document the formula that is used so that developers can actually tell how their applications are being evaluated, and can optimize for that measure. You may think “but then developers will be optimizing for a synthetic measure rather than a real one” and you’d be right. That’s an inevitable consequence of giving a synthetic measure such prominence, and all the more reason for improving it.
“Energy Impact” is a flawed measure of an application’s power consumption. Nonetheless, it’s what many people use at this moment to evaluate the power consumption of OS X applications, so it’s worth understanding. And if you are an OS X application developer who wants to reduce the “Energy Impact” of your application, it’s clear that it’s best to focus first on reducing wakeup frequency, and then on reducing CPU usage.
Because Activity Monitor is closed source code I don’t know if I’ve characterized “Energy Impact” exactly correctly. The evidence given above indicates that I am close on 10.9.5, but not as close on 10.10.4. I’d love to hear if anybody has evidence that either corroborates or contradicts the conclusions I’ve made here. Thank you.
A commenter named comex has done some great detective work and found on 10.10 and 10.11 Activity Monitor consults a Mac model-specific file in the
/usr/share/pmenergy/ directory. (Thank you, comex.)
For example, my MacBook Air has a model number 7DF21CB3ED6977E5 and the file
Mac-7DF21CB3ED6977E5.plist has the following list of key/value pairs under the heading “energy_constants”.
kcpu_time       1.0
kcpu_wakeups    2.0e-4
This matches the previously seen formula, but with the wakeups “tax” being 200 microseconds, which matches what I hypothesized above.
kqos_default            1.0e+00
kqos_background         5.2e-01
kqos_utility            1.0e+00
kqos_legacy             1.0e+00
kqos_user_initiated     1.0e+00
kqos_user_interactive   1.0e+00
“QoS” refers to quality of service classes, which allow an application to mark some of its own work as lower priority. I’m not sure exactly how this is factored in, but from the numbers above it appears that operations done in the lowest-priority “background” class are considered to have about half the energy impact of those done in all the other classes.
kdiskio_bytesread       0.0
kdiskio_byteswritten    5.3e-10
These ones are straightforward. Note that the “tax” for disk reads is zero, and for disk writes it’s a very small number. I wrote a small program that wrote endlessly to disk and saw that the “Energy Impact” was slightly higher than the CPU percentage alone, which matches expectations.
kgpu_time               3.0e+00

It makes sense that GPU usage is included in the formula. It’s not clear if this refers to the integrated GPU or the separate (higher performance, higher power) GPU. It’s also interesting that the weighting is 3x.
knetwork_recv_bytes     0.0
knetwork_recv_packets   4.0e-6
knetwork_sent_bytes     0.0
knetwork_sent_packets   4.0e-6
These are also straightforward. In this case, the numbers of bytes sent and received are ignored; only the numbers of packets matter, and the costs of reading and writing packets are considered equal.
So, in conclusion, on 10.10 and 10.11, the formula used to compute “Energy Impact” is machine model-specific, and includes the following factors: CPU usage, wakeup frequency, quality of service class usage, and disk, GPU, and network activity.
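To make the shape of the computation concrete, here is a hedged Python sketch. The plist layout (an “energy_constants” dictionary) matches what comex found, but the per-process statistics dictionary is a hypothetical stand-in — the real inputs come from private system interfaces, and exactly how each raw measurement is scaled before weighting is not publicly documented.

import glob
import plistlib

def load_energy_constants():
    # Assumes exactly one model-specific file is relevant on a given Mac.
    path = glob.glob("/usr/share/pmenergy/Mac-*.plist")[0]
    with open(path, "rb") as f:
        return plistlib.load(f)["energy_constants"]

def energy_impact(stats, constants):
    # 'stats' is a hypothetical dict mapping constant names (kcpu_time,
    # kcpu_wakeups, ...) to the corresponding per-process measurements
    # over the sample period; "Energy Impact" is then a weighted sum.
    return sum(weight * stats.get(name, 0.0)
               for name, weight in constants.items())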
This is definitely an improvement over the formula used in 10.9, which is great to see. The parameters are also visible, if you know where to look! It would be wonderful if all these inputs, along with their relative weightings, could be seen at once in Activity Monitor. That way developers would have a much better sense of exactly how their application’s “Energy Impact” is determined. | <urn:uuid:cc15513c-9b80-493a-bf42-b2caa09f8a78> | CC-MAIN-2019-26 | https://blog.mozilla.org/nnethercote/2015/08/26/what-does-the-os-x-activity-monitors-energy-impact-actually-measure/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999210.22/warc/CC-MAIN-20190620105329-20190620131329-00308.warc.gz | en | 0.907894 | 4,884 | 2.515625 | 3 |
Congruent Triangles and Geometry Multiple Choice Questions and Answers 1 PDF Book Download
Congruent triangles and geometry multiple choice questions (MCQs), congruent triangles and geometry quiz answers, test prep 1 to learn online secondary school courses for math degree. Mathematical definitions MCQs, congruent triangles and geometry quiz questions and answers for online secondary education degree. Learn mathematical definitions, congruent triangles test prep for secondary school teaching certification.
Learn high school math MCQs: mathematical definitions, congruent triangles, with choices ↓, ↔, →, and ↕ for online secondary education degree. Free math study guide for online learning mathematical definitions quiz questions to attempt multiple choice questions based test.
MCQs on Congruent Triangles and Geometry Worksheets 1 PDF Book Download
MCQ: The point where the 3 medians of a triangle meet is called the
- incentre of the triangle
- circumcenter of the triangle
- centroid of the triangle
MCQ: The symbol used for 1 — 1 correspondence is
MCQ: If 3 or more lines pass through the same point, they are called
MCQ: An equilateral triangle is also an
- isosceles triangle
- reflective triangle
- scalene triangle
- equiangular triangle
MCQ: Congruency of triangles is symbolically written as
COOL HOT ROD demonstrates that most materials expand when heated and contract when cooled. A long aluminum pipe can be heated or cooled by the visitor and its movement observed on an indicating dial, proving this principle. At higher temperatures, atoms vibrate more and take up slightly more space. A familiar application of this principle is the inclusion of expansion joints in bridge roadways. These gaps allow the bridge to expand and contract without cracking.
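As a rough illustration (textbook values, not measurements from the exhibit): aluminum's coefficient of linear expansion α is about 23 × 10^-6 per °C, so heating a 1-meter pipe by 50 °C lengthens it by ΔL = α × L × ΔT = (23 × 10^-6 /°C) × (1 m) × (50 °C) ≈ 1.2 mm — small, but easily visible on an indicating dial.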
This video from the BBC has a good overview of emerging robotic technology for hospitals, including pharmacy robots, delivery robots, and patient simulators. The latter are perhaps the most interesting, as they allow doctors, nurses, and surgical teams to practise complex procedures and teamwork in a safe environment with no risk to a real patient. Unlike traditional first aid mannequins, the high-tech versions have functioning ‘organs’ that produce realistic heart beats, blood pressure readings, and breath sounds. All work without special tools, allowing doctors to train with exactly the same equipment they will use on human patients.
The patient simulator comes in a variety of configurations, including male and female adults, children, and a pregnant woman. The video below explains how they are being used to improve medical training. | <urn:uuid:c5506395-2472-45fd-ace4-4bffef32dc7f> | CC-MAIN-2018-51 | http://www.itgsnews.com/robotic-technology-for-hospitals/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824338.6/warc/CC-MAIN-20181213010653-20181213032153-00200.warc.gz | en | 0.947993 | 161 | 3 | 3 |
SIRAJGANJ, Bangladesh (Thomson Reuters Foundation) – Worsening erosion along the banks of the Jamuna River has dramatically increased the number of families losing their homes and land – but dredging could help ease the problem, experts say.
Erosion is a long-standing problem in Bangladesh, with much of the country made up of river deltas deposited by the region’s many rivers. But more extreme weather and heavy runoff have led to growing deposits of soil in the Jamuna River, which is in turn driving worsening riverside erosion, residents and experts say.
This rainy season alone, hundreds of families in Sirajganj district have lost their homes or their farmland, they said.
Amir Hosen, 70, of East Bahuka village, said he had gradually lost all of his two acres of land to the river, and now has had to rent about a tenth of an acre of farmland to house and support his family, at a cost of $70 a year.
“I had to move three times with my belongings as the Jamuna River continued eroding. I was a land owner. Now I have become a refugee,” said Hosen, the father of three daughters and two sons who have had to leave the area to find jobs.
He said erosion of river-side land now happens throughout the year. “Earlier, we saw erosion in April- May season, but now it is eroding throughout the year,” he said.
Atiq Rahman, executive director of Bangladesh Center for Advanced Studies (BCAS), told the Thomson Reuters Foundation in a telephone interview that due to formation of char – land that emerges from riverbeds as a result of accumulating deposits of sediment – rivers like the Jamuna now store lower volumes of water than in the past.
That leads to displacement of river water, with more of it pushed against the riverbank, leading to worsening erosion, he said.
“Getting no other option, water starts hitting the river banks as the flow increases during the rainy season, causing erosion and making people landless,” he said.
DREDGING AN ANSWER?
He believes that large-scale dredging could restore the depth of the riverbed and increase its ability to hold water, cutting the rate of erosion.
Dredging on the Indian side of cross-border rivers like the Jamuna, the Padma and the Brahmaputra means losses of land to erosion are much smaller there, he said.
“The rivers there (in India) are stable while here these are very much unstable,” he said.
But the soil makeup is also playing a role in Bangladesh’s more severe erosion, he said. Riverbank soils in India contain more rock, he said, and have more resistance to the erosive forces of water. Bangladesh’s riverbanks, however, have few rocks.
Some embankments in Bangladesh are strengthened with stones or concrete slabs, but not all have been properly maintained, he said. For such protections to be effective, “the maintenance costs have to be an integrated part of an embankment construction budget so that steps can be taken immediately when signs of possible erosion emerge.”
Jail Hossain, a member of Shuvogacha Union Parishad, a local government body, said the Jamuna’s erosion had eaten up three villages in 2007, forcing 2,000 inhabitants to move to Bahuka village.
In 2009 and 2010 they were again displaced by erosion and forced to move towards East Bahuka village. In 2011, the main Bahuka village was totally lost to the river and now East Bahuka village is also being eroded away.
Abdus Salam, headmaster of Chandnagar primary school, said the whole of Chandnagar village was eroded by the Jamuna River in just one year and the school had been forced to move a kilometer away to East Bahuka village, now itself under threat.
“This year the intensity of erosion is very high and I am in doubt whether any portion of this village will be left intact,” he said.
Aynal Mia, a farmer of the village, said the Bangladesh Water Development Board (BWDB) is focused on building new embankments but has not done enough to stop the continuing erosion.
“You see work on a new embankment going on, leaving a big part of the village for the river to eat up, instead of (workers) taking measures to protect the existing embankment,” he said.
Anisur Rahman, a sub-divisional engineer of the water development board, told the Thomson Reuters Foundation that erosion has washed away three entire embankments in the sub-district since 1971, when Bangladesh gained its independence.
He said due to a lack of maintenance funds the board could not protect existing embankments with stones, sand bags, and concrete slabs. He agreed that river dredging was needed.
“Necessary dredging can help the river store more water and protect the embankment from erosion,” he said. He noted that “erosion nowadays is much faster” than in the past.
Rahman, who was born and brought up in this area, said the changing river depth was evident from the types of ships that could navigate it.
“During our childhood we saw big ships were plying through this river. The depth of the river was nearly 100 feet then. Now it is reduced to 25 to 30 feet,” he said.
Fazlul Huq, a sub-assistant engineer of the water development board, said his agency needs Tk 1.5 billion ($1.5 million) to carry out a proper maintenance work to protect the local river embankment.
“But we don’t have such a budgetary allocation. So, we are now building an alternative mud wall so that water can’t enter the remaining part of the village this season,” he said, admitting such work was a short-term measure.
BCAS’s Rahman said the worsening erosion was in part of a result of climate shifts which have led to more rapid melting of ice in the Himalayas. The increased runoff carries additional sediment into the beds of rivers such as the Jamuna, leading to increased riverbank erosion.
Syful Islam is a journalist with the Financial Express newspaper in Bangladesh. He can be reached at: firstname.lastname@example.org | <urn:uuid:a12561ba-1ca7-47f7-a3a9-9b23841ac284> | CC-MAIN-2015-06 | http://www.trust.org/item/20130523094322-57c38/?source%20=%20hpbreaking | s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118108509.48/warc/CC-MAIN-20150124164828-00252-ip-10-180-212-252.ec2.internal.warc.gz | en | 0.973873 | 1,356 | 2.828125 | 3 |