Peru Land Reform
Sources: The Library of Congress Country Studies; CIA World Factbook

The most striking and thorough reform imposed by the Velasco government was the elimination of all large private landholdings, converting most of them into cooperatives owned by the estates' former workers. The reform was intended to destroy the power base of Peru's traditional elite and to foster a more cooperative society as an alternative to capitalism. Such social and political purposes apparently dominated questions of agricultural production or any planned changes in patterns of land use. It was as if the questions of ownership were what mattered, not the consequences for output or rural incomes. In fact, the government soon created a system of price controls and monopoly food buying by state firms designed to hold down prices to urban consumers, no matter what the cost to rural producers.

As mentioned earlier, the cooperatives had very mixed success, and the majority were converted into individual private holdings during the 1980s. The conversions were authorized in 1980 by changes in the basic land reform legislation and were put into effect after majority votes of the cooperative members in each case. The preferences of the people involved at that point clearly ran contrary to the intent of the original reform. But the whole set of changes was not a reversion to the pre-reform agrarian structure. In fact, the conversions left Peru with a far less unequal pattern of landownership than it had prior to the reform, and with a much greater role for family farming than ever before in its history.

Data as of September 1992
Source: http://www.photius.com/countries/peru/economy/peru_economy_land_reform.html

The Assembly of Experts
by FARIDEH FARHI
30 Jun 2011 22:21

The 86 members are popularly elected every eight years. But candidates, all Islamic scholars and jurists, have been vetted to exclude reformers or critics since 1991. For all its powers, the assembly has served as a rubber-stamp organization that has never seriously questioned the actions of either of the two supreme leaders who have led Iran since the 1979 revolution. The absence of a real check has allowed the office of leadership, even under Ayatollah Khamenei, who began as a relatively weak political and religious figure, to become increasingly powerful.

The idea for an assembly of experts dates back to the 1979 Iranian Revolution, when a constituent assembly was needed to draft a new constitution. Debates over the nature of that body ultimately led to the formation of a small, expert-based group rather than a larger assembly of representatives from all over the country. The first assembly was dissolved after the constitution was ratified in December 1979.

The Assembly of Experts in its current form was established in 1982 under Article 108 of the constitution. It officially began work in 1983. Subsequent elections were held in 1991, 1999, and 2007. The assembly is designed to play a key role during periods of transition, as it did after the sudden death of Ayatollah Ruhollah Khomeini in 1989, when it elected Hojatoleslam Ali Khamenei as the new supreme leader.

The assembly was not designed to reflect political views, although members have different political tendencies. As of mid-2011, the assembly is divided largely between traditional and hard-line conservatives, with traditional conservatives still in the majority. Because of the vetting process and written examinations, most high-ranking clerics from the reformist camp have either been disqualified or have refused to take part in the vetting process.

In the 1979 constitution, the supreme leader could be selected through direct election by the people or through the selection of elected experts. But the constitution was revised in 1989 to make the assembly the sole body responsible for the supreme leader's election.

Membership and tasks

The Assembly of Experts is currently a body of more than eighty scholars of Islamic law; the number has fluctuated due to deaths since the last election in 2006. Members are elected by direct public vote for eight-year terms from 30 electoral districts (provinces). Candidates do not need to be residents of, or even to have been born in, the province from which they are elected.

The Iranian constitution defines the assembly's tasks to be:
- Select the supreme leader (Articles 107 and 111).
- Dismiss him if he is unable to perform his constitutional duties or it becomes known that he did not possess some of the initial qualifications, such as "social and political wisdom, prudence, courage, administrative facilities and adequate capability for leadership" (Article 111).
- Supervise the supreme leader's capabilities to determine whether he is able to perform his duties. The assembly also has a committee to oversee "the continuation of qualifications for the leader specified in the constitution."

The last task is the most ambiguous. A committee is tasked to monitor the supreme leader's activities, but all the deliberations and proceedings of the assembly are kept confidential. The assembly's bylaws also state that it does not see supervision to be "in contradiction to absolute guardianship."

So legally, the mandate is not clear about how much the assembly can challenge the supreme leader over his conduct if he does not show signs of incapacity or lacks qualifications. In practice, the assembly has never challenged or criticized the supreme leader, although individual members have expressed their concerns about the country's direction. In 1991, changes in the assembly's procedural laws reduced the chances of dissent or of a more dynamic oversight role, since they accorded the 12-man Guardian Council a supervisory role for assembly elections. The changes assured that candidates would not get a seat on the assembly unless the leader and the conservative-controlled council attested to their religious qualifications.

The assembly has a leadership council and six committees. The leadership is elected by secret ballot for two years and consists of the assembly's chair, two vice-chairs, two secretaries, and two assistants. The assembly has had three chairs to date. Former president Akbar Hashemi Rafsanjani was elected chairman in 2007 after the death of Ayatollah Ali Akbar Meshkini, who had led the assembly since its inception in 1983. Rafsanjani's election was the result of the first real contest for the post. He beat hard-line Ayatollah Ahmad Jannati, secretary of the Guardian Council. The vote was close in 2007, but hard-line clerics were unable to block Rafsanjani's chairmanship, and in 2009 he was reelected decisively. Rafsanjani was pressured not to run for the chair of the assembly in 2011 after Ayatollah Mohammad-Reza Mahdavi Kani, a traditional conservative, agreed to run.

Controversial assembly elections

First Assembly of Experts (1983-1991)
This body chose Grand Ayatollah Hossein Ali Montazeri as the designated successor to Khomeini in 1985. But Montazeri was stripped of his title after a falling-out with the supreme leader. There was no immediate replacement. The assembly chose Khamenei as the leader after Khomeini's death in 1989. But the Iranian constitution at the time required the leader to be a marja' (source of emulation), which Khamenei was not. So the assembly reconfirmed Khamenei as the leader after the elimination of the constitutional criterion of marja' was approved by voters in 1989.

Second Assembly of Experts (1991-1999)
For the 1991 election, many candidates, mainly reformists, either withdrew in objection to the imposed written test or were disqualified due to the Guardian Council's new vetting powers. Voter turnout also dropped from 77 percent in 1983 to 37 percent in 1991, suggesting voter disinterest, disillusionment, or unease about the vetting process.

Third Assembly of Experts (1999-2007)
The presidency of reformist Mohammad Khatami in 1997 emboldened the reformist camp to challenge the Guardian Council's vetting process during the 1999 assembly elections. They ultimately failed to reduce the council's vetting powers. But for the first time the election saw the candidacy of non-clerics, though they were all disqualified. Also for the first time, all members of the Guardian Council, the body in charge of the assembly's vetting process, announced their candidacies. The body rejected the charge of conflict of interest and went on to disqualify many other candidates. Ironically, only one candidate from the city of Qom, Iran's center for religious teaching, was approved.

Fourth Assembly of Experts (2007-present)
The latest elections took place during the presidency of Mahmoud Ahmadinejad. The long-standing chair of the assembly, Ayatollah Ali Meshkini, died soon after his election, prompting the first-ever contest for the chairmanship. Hard-line cleric and Guardian Council head Ayatollah Jannati ran against Rafsanjani, who in the assembly election had won decisively in Tehran, with more votes than any other candidate, including the late Meshkini. Rafsanjani won in a close vote but was able to retain his chairmanship two years later in a more convincing manner. In one shift, the Guardian Council did allow women and non-clerics to register for the first time as candidates. But it then disqualified them all for not having sufficient Islamic credentials.

Rivalry over chairmanship

On March 8, 2011, the Assembly of Experts effectively pushed Rafsanjani out of the leadership and replaced him with the ailing conservative Ayatollah Mohammad-Reza Mahdavi Kani. The defeat was a significant setback for a man long considered to be one of Iran's most resilient politicians. Mahdavi Kani, 79, was reportedly brought into the chamber in a wheelchair. Mahdavi Kani had not sought the chairmanship. He agreed to run only under tremendous pressure, which led Rafsanjani to withdraw his name as a candidate. So Mahdavi Kani ran unopposed. Leader Khamenei almost certainly played a background role in ousting Rafsanjani.

Farideh Farhi is an independent scholar and affiliate graduate faculty at the University of Hawaii at Mānoa. This article is presented by Tehran Bureau, the U.S. Institute of Peace, and the Woodrow Wilson International Center for Scholars as part of the Iran project at iranprimer.usip.org.
Source: http://www.pbs.org/wgbh/pages/frontline/tehranbureau/2011/06/the-assembly-of-experts-1.html

Synopsis: El Niño is expected to strengthen and last through the Northern Hemisphere Winter 2009-2010.

A weak El Niño was present during July 2009, as monthly sea surface temperature (SST) departures ranged from +0.5°C to +1.5°C across the equatorial Pacific Ocean, with the largest anomalies in the eastern half of the basin (Fig. 1). Consistent with this warmth, all of the Niño-region SST indices were between +0.6°C and +1.0°C throughout the month (Fig. 2). Subsurface oceanic heat content (average temperatures in the upper 300 m of the ocean, Fig. 3) anomalies continued to reflect a deep layer of anomalous warmth between the ocean surface and the thermocline (Fig. 4). Also, convection was suppressed over Indonesia and enhanced across the western Pacific and near the International Date Line. In addition, developing El Niño episodes often feature westerly wind bursts over the western equatorial Pacific, such as the one which occurred at the end of July (Fig. 5). These oceanic and atmospheric anomalies reflect El Niño.

A majority of the model forecasts for the Niño-3.4 SST index (Fig. 6) suggest El Niño will continue to strengthen. While there is disagreement on the eventual strength of El Niño, nearly all of the dynamical models predict a moderate-to-strong El Niño during the Northern Hemisphere Winter 2009-10. A strengthening El Niño during the next few months is also suggested by the recent westerly wind event in the western equatorial Pacific, which can lead to additional anomalous warmth across the central and east-central equatorial Pacific during the next two months. Therefore, current conditions and model forecasts favor the continued development of a weak-to-moderate strength El Niño into the Northern Hemisphere Fall 2009, with the likelihood of at least a moderate strength El Niño (3-month Niño-3.4 SST index of +1.0°C or greater) during the Northern Hemisphere Winter 2009-10.

Expected El Niño impacts during August-October 2009 include enhanced precipitation over the central and west-central Pacific Ocean and the continuation of drier-than-average conditions over Indonesia. Temperature and precipitation impacts over the United States are typically weak during the Northern Hemisphere Summer and early Fall, and generally strengthen during the late Fall and Winter. El Niño can help to suppress Atlantic hurricane activity by increasing the vertical wind shear over the Caribbean Sea and tropical Atlantic Ocean (see the Aug. 6th update of the NOAA Atlantic Seasonal Hurricane Outlook).

This discussion is a consolidated effort of the National Oceanic and Atmospheric Administration (NOAA), NOAA's National Weather Service, and their funded institutions. Oceanic and atmospheric conditions are updated weekly on the Climate Prediction Center web site (El Niño/La Niña Current Conditions and Expert Discussions). Forecasts for the evolution of El Niño/La Niña are updated monthly in the Forecast Forum section of CPC's Climate Diagnostics Bulletin. The next ENSO Diagnostics Discussion is scheduled for 10 September 2009. To receive an e-mail notification when the monthly ENSO Diagnostic Discussions are released, please send an e-mail message
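For illustration only (this is not part of the official discussion), the strength categories referenced above can be expressed as a small Python function. The +0.5°C onset and +1.0°C "at least moderate" thresholds come from the text; the +1.5°C "strong" cutoff and the symmetric La Niña threshold are assumptions here.

    # Bucket a 3-month Nino-3.4 SST anomaly (degrees C) into an ENSO category.
    # The +0.5 and +1.0 thresholds are cited in the discussion above; the +1.5
    # "strong" cutoff and the -0.5 La Nina threshold are assumptions.
    def enso_category(nino34_anomaly_c):
        if nino34_anomaly_c >= 1.5:
            return "strong El Nino (assumed cutoff)"
        if nino34_anomaly_c >= 1.0:
            return "moderate El Nino"
        if nino34_anomaly_c >= 0.5:
            return "weak El Nino"
        if nino34_anomaly_c <= -0.5:
            return "La Nina"
        return "ENSO-neutral"

    # July 2009 Nino-region indices of +0.6 to +1.0 fall in the weak range:
    print(enso_category(0.8))  # -> "weak El Nino"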
Source: http://www.cpc.noaa.gov/products/analysis_monitoring/enso_disc_aug2009/ensodisc.html

This chapter of the state building code governs the construction of prefabricated buildings. These are buildings intended for use as one- or two-family dwellings or accessory buildings of closed construction, meaning they are constructed so that concealed parts or processes of manufacture cannot be inspected at the site without disassembly. This regulation is similar to that in Chapter 1361 but is intended for small manufacturers, particularly lumber yards or vocational schools, that construct no more than three buildings for permanent installation in a calendar year. Compliance with the state building code is evidenced by a permanent seal and data plate that are affixed to each building or building module.

Prefabricated buildings documents and forms:
- Window and door schedule (Microsoft Excel)
Source: http://www.dli.mn.gov/CCLD/ManufacturedPrefab.asp

Colon Polyps Learning Center

A colorectal polyp is a growth that sticks out of the lining of the colon or rectum.

Gardner's syndrome is a rare genetic disorder. It usually causes benign, or non-cancerous, growths.

Colonic polyps are growths that appear on the surface of the colon. Learn about colonic polyp symptoms, causes, treatment, and prevention.
Source: https://www.aarpmedicareplans.com/channel/colon-polyps.html

Shackleton's Antarctic Adventure: The greatest survival story of all time.

Shackleton's Antarctic Adventure is a giant-screen film that tells the extraordinary true story of polar explorer Sir Ernest Shackleton's now-legendary 1914-1916 British Imperial Trans-Antarctic Expedition. Though it never accomplished its goal of making the first crossing of the Antarctic continent, the expedition became a larger-than-life testament to heroism and human endurance, with all 28 men surviving nearly two years in the barren, frigid Antarctic after their ship, Endurance, was caught in pack ice and eventually crushed. Shackleton's Antarctic Adventure recounts a true story of epic proportions, one ideally captured in the giant-screen format, where viewers will feel as if they have been transported back in time to experience what is considered to be the greatest survival story of all time.
Source: http://www.cradleofaviation.org/plan_your_visit/shackleton.html

William Bradford, the governor of the Plymouth Colony, gave the following account:

"Those that scraped the fire were slaine with the sword; some hewed to peeces, others rune throw with their rapiers, so as they were quickly dispatchte, and very few escapted...It was a fearful sight to see them thus frying in the fyer, and the streams of blood quenching the same, and horrible was the stincke and sente there of... [The pilgrims] gave the prayers thereof to God, who had wrought so wonderfully for them, thus to inclose their enemise in their hands, and give them so speedy a victory over so proud and insulting an enimie."

The TRUE origin

However, it was over 150 years later that the familiar story of the 1621 Mayflower Thanksgiving was actually established, in large part due to Sarah Josepha Hale (1788-1879). Her enchantment with the Pilgrim narrative compelled her to campaign aggressively for the adoption of the national holiday. Her bucolic editorials and petitions shaped the modern conception of Thanksgiving, which became a national holiday in 1863.

This year on Thanksgiving, take time to learn the stories that aren't being told in school. Become familiar with the National Day of Mourning and the Indigenous Peoples Alcatraz Sunrise Gathering, which commemorate the true history of Thanksgiving and honor the many voices that have been silenced.

[Photo: Wamsutta (Frank B.) James]

The fact that such a sordid history is associated with the day we set aside to 'thank God' for his providence should give us pause. In reality, the United States celebrates Thanksgiving because the majority of its population benefits from the fruits of genocide and slavery. Let us indeed set aside time to count our blessings, but let us also be honest with ourselves about the legacy from which those blessings are derived.

See Also: Adam Ericksen's great article discussing similar issues on Sojourners
Source: http://bytheirstrangefruit.blogspot.com/2012/11/creation-myths-thanksgiving.html

Class Meetings and Individual Interventions DVD Set
2 DVDs (24 & 28 min., cc) and Facilitator Guide

The Class Meetings and Individual Intervention Set will give elementary and middle school teachers the knowledge, skills, and confidence to implement class meetings and to intervene when bullying occurs in their classrooms. View two clips from the DVDs:

Class Meetings and Individual Interventions for High School
A Video Training Program for High School Staff
Olweus Bullying Prevention Program, DVD and CD-ROM

This set will train high school teachers on how to implement class meetings in their high school classes and how to intervene when bullying occurs, and it offers additional suggestions on how to communicate these topics to parents and others within the high school community. Click here to watch a preview of this training video.
Source: http://www.violencepreventionworks.org/public/olweus_program_materials.page

This article must not be viewed as a legal opinion or any form of legal representation made by the author in connection with the subject matter discussed herein. Before entering into any type of commercial or private agreement, we advise that you consult with a competent legal advisor in the applicable jurisdiction to receive an accurate representation and opinion on the legal consequences, taking into consideration the pertinent facts surrounding your request.

Many of us wake up in the morning, possibly use public transportation to get to work for a full day of intensive labour, buy a cup of coffee on the way, and stop by the grocery store to purchase some groceries before heading back home. The fact of the matter is that in this example, we have entered into at least four different contracts, which may be classified as transportation, labour, and purchase contracts. We enter into numerous contracts on a regular basis, sometimes without even being aware of their formation, execution, and extinction. The simplicity of entering into contractual relationships is a central element in the efficient functioning of our society and of the relationships among people, providing the flexibility needed to customize every contractual provision along with legal support for its enforcement.

It is in the nature of every contract to entitle the parties thereto to specific rights and to impose upon the same specific obligations. A contract is defined as a bilateral juridical act consisting of a consensual agreement among two or several persons obligating the parties thereto to execute a certain prestation. Two conditions are necessary to the formation of a contract: the creation of legal obligations enforceable by law, and an economic interest at the core of such obligations. In this essay, we shall consider and define the notion of "obligations" to better understand the mechanics and characteristics underlying any contractual relationship.

The term obligation is most commonly used and defined to be a duty or constraint; however, in legal terms, obligation refers to the legal relationship among two or several persons requiring the accomplishment of a certain prestation as its objective. In essence, there are three components to any obligation: (1) a juridical relationship (2) among legal persons (3) that is patrimonial in nature.

First, by juridical relationship, it is understood that there are opposing parties to any obligation, whereby such obligation can be legally enforced and sanctioned by the judicial system in the event of one party's non-execution. To be legally enforceable, an obligation or civil obligation must be based on either the text of law or the terms of a juridical act. Upon determination of the existence thereof, the judicial system or courts will ensure that the debtor fully, completely, and in a timely manner satisfies and executes his or her obligation towards the creditor of such obligation.

Furthermore, an obligation must be among legal persons as prescribed by the Civil Code of Quebec, who may be physical persons or moral persons, including the State and all public bodies; each party may be a creditor and a debtor simultaneously, or just one or the other. The objective of any legal obligation is the prestation, entitling the creditor to demand its execution and obligating the debtor to execute such prestation. Ultimately, when such obligations are based on a juridical act, we refer to this act as a "contract".

Last, obligations must be patrimonial in nature, the patrimony being defined as all assets, real and personal rights, and liabilities or debts of a legal person. Moreover, a legal person cannot sell, assign, or transfer its patrimony as such, but may sell, assign, or transfer the assets and liabilities constituting that patrimony. It is only upon a physical person's death or a moral person's dissolution that the patrimony shall be assigned to the heirs or successors, extinguishing the patrimony completely. Extra-patrimonial rights are one type of asset that may be found in a patrimony despite having no quantifiable economic value, such as the right to one's image or dignity. We may also encounter personal rights, which refer to certain rights that can only be exercised by a specific person, such as the entitlement to alimony by a spouse upon divorce.

Contracts are essential to the development and evolution of the collectivity and must provide as much freedom as possible to market players, within certain legal parameters, to negotiate and define each other's rights and to impose obligations on one another. The judicial system must help protect the institution of contracts by enforcing as strictly as possible the contractual terms as defined by the parties, subject to abuse and actions contravening the public order. With hundreds of thousands of contracts entered into on a daily basis, maintaining market participants' confidence in this institution is paramount and must be preserved at all times.
Source: http://amirkafshdaran.blogspot.com/2007/12/legal-definition-of-obligations.html?showComment=1242160020000

High Fever in Dogs
Home Care and When to Call the Vet

If your dog has a temperature greater than 103°F, you should call your veterinarian. Fevers above 106°F are emergencies that must be treated promptly. If your dog has a temperature above 105°F, you can help bring his body temperature down by applying cool water to his fur, especially around the ears and feet. Using a fan on the damp fur will help lower the temperature. Be sure to monitor your dog's rectal temperature as you do this, and stop the cooling procedure once it reaches 103°F. You don't want to bring down the temperature too fast.

If your dog has a fever, try to see that he drinks small amounts of water on a regular basis to stay hydrated, but don't force it. And never give your dog any human medicines intended to lower fever, such as acetaminophen or ibuprofen, as they can be poisonous to dogs and cause severe injury or death.
Source: http://pets.webmd.com/dogs/high-fever-in-dogs?page=2

2013-04-30 16:08:00 GMT
By ACDSee pro photographer & guest blogger Alexandra Pottier

We already talked about aperture and speed; it is now time to talk about sensitivity, and to close the chapter on the basics of a good exposure.

Sensitivity is the responsiveness to light of the recording surface: a sensor in digital photography, a film in silver-based photography. Sensitivity is expressed in ISO, and most of the time varies between 100 and 3200. Usual numbers are 50, 100, 200, 400, 800, 1600, and 3200. A big number (ex: 3200) represents a high sensitivity, where a smaller amount of light will be necessary to expose the picture correctly.

We can compare it to human skin. A high ISO (big number) can be compared to a light skin type (as of a blonde or redhead), which is very sensitive to the light and will burn if it is exposed to too much light. On the contrary, a low ISO (small number) corresponds to a dull or dark skin, which will take longer to burn, or, in photography, to expose correctly. We can also compare the aperture to the clouds (the more clouds there are, the less light comes through) and the speed to the amount of time the skin is exposed to the sun.

As for the rest in photography, when you double the ISO number you will need half the amount of light to expose correctly. Therefore, you can double the speed or you can use a smaller aperture (everything is connected together!). In traditional photography, a roll of film is set for one sensitivity only. On a digital camera you can change the ISO for each picture.

The sensitivity has a big impact on the quality of the picture. When the ISO is set high, the grain of the picture increases. The grain in traditional photography comes from the bigger silver salts that appear on the prints. It is called noise in digital photography, where individual pixels become visible. When you have a very small amount of light, the first thing to do is to open up the aperture as much as possible or use the slowest workable speed before you increase the sensitivity. If you need speed, you'll have to use a bigger aperture to compensate for the loss of light. Sometimes noise has nice effects. You can also add noise with your favorite software!

You'll have to juggle the three parameters (speed, aperture, and sensitivity) to expose your images correctly; compromise is what makes photography interesting.
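To make the reciprocity described above concrete, here is a minimal Python sketch; the scene and settings are invented for illustration, and f/11 is the usual nominal one-stop step from f/8.

    import math

    # Exposure value normalized to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100).
    # Combinations with (nearly) equal EV give the same overall exposure.
    def exposure_value(aperture_f, shutter_s, iso):
        return math.log2(aperture_f ** 2 / shutter_s) - math.log2(iso / 100)

    # The same scene exposed three equivalent ways:
    print(exposure_value(8.0, 1 / 125, 100))   # f/8,  1/125 s, ISO 100
    print(exposure_value(8.0, 1 / 250, 200))   # double the ISO -> double the speed
    print(exposure_value(11.0, 1 / 125, 200))  # or close down one stop (f/8 -> f/11)

All three calls print an exposure value of about 13, which is the sense in which the three parameters trade off against one another.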
Source: http://www.acdsee.com/en/community/blog/post/49263808030/aperture-speed-sensitivity-part-3

Ordinary Time: January 14th
Monday of the First Week of Ordinary Time
Old Calendar: St. Hilary, bishop and doctor; St. Felix of Nola, priest and martyr

According to the 1962 Missal of Bl. John XXIII (the Extraordinary Form of the Roman Rite), today is the feast of St. Felix, who lived in the third century. He was a priest and suffered greatly in the Decian persecution. The tomb of St. Felix at Nola, a small town in the south of Italy, was a much frequented place of pilgrimage in Christian antiquity, and in the Middle Ages veneration of him spread throughout the West. Along with St. Hilary, his feast is celebrated today on the Tridentine Calendar. According to the Ordinary Rite, St. Hilary's feast is now celebrated on January 13.

In one of the early persecutions the priest Felix was first tortured on the rack, then thrown into a dungeon. While he lay chained on broken glass, an angel appeared, loosed his bonds, and led him out to freedom. Later, when the persecution had subsided, he converted many to the Christian faith by his preaching and holy example. However, when he resumed his denunciation of pagan gods and false worship, he was again singled out for arrest and torture; this time he escaped by hiding in a secret recess between two adjacent walls. No sooner had he disappeared into the nook than a thick veil of cobwebs formed over the entrance, so that no one suspected he was there. Three months later he died in peace (260), and he is therefore a martyr only in the wider sense of the word.

- Let us be convinced that if we strive and struggle in God's behalf, we may also rely on His special protection. God shields you from your enemies, even, if need be, by a spider's web. Spend some time recalling occasions when you were protected in an unusual way from harm.
Source: http://www.catholicculture.org/culture/liturgicalyear/calendar/day.cfm?date=2013-01-14

Timeline: History of the Synthesizer

It may seem a strange thing to start the history of the synthesizer before the use of electricity. However, throughout the history of mechanical instrumental music (music made by machines), inventors have sought to discover new and creative ways to make musical sounds and play music. These seeds of invention reach back to the beginnings of instrument design and eventually blossomed into the synthesizer/keyboard revolution that music has experienced and is currently experiencing. You may notice that some very significant instrumental inventions (such as the pianoforte) are not mentioned here. This is mainly because of the vast amount of information already out there about these instruments. We have chosen instead to highlight the kinds of technical innovations which led to the development of modern synthesizers that may not be apparent to a synthesizer enthusiast. Do you have any additions to this timeline? Please let us know.

3rd century BC: Hydraulos - Invented by Ktesibios (or Ctesibius), a Greek engineer, to solve the age-old question, "How can a person play more than one instrument at a time?" He created an air chamber in a tub of water that was filled by a hand pump. The pressure was regulated by the weight of water. Mechanical levers or switches would send the air to different pipes. Of course, the modern pipe organ was basically the same concept. By the 15th century CE organs had grown to be the first additive synthesizers, using multiple pipes for each note, adding harmonic complexity as well as volume.

The "hurdy-gurdy" - This is similar to an organ grinder in concept. By rotating the handle, wheels are set in motion and rub against different strings to create a melody. Other strings resonate to create a drone. Could this instrument be considered the first strap-on synthesizer with built-in sequencer?

Pascaline - At the age of 21, Blaise Pascal developed a calculating machine very similar to designs found (in 1997) in old manuscripts by Leonardo da Vinci. Although these adding machines made no music, they are precursors to the modern computer and, thus, to digital synthesizers.

The Nouvelle Invention de Lever - This was a hydraulic engine which produced musical sounds.

Clavecin Electrique - The "electric harpsichord" invented by Jean-Baptiste de Laborde and built (in 1761) by Abbe Delaborde in Paris, France. Via a short harpsichord-like keyboard, clappers, charged with static electricity, were activated to ring bells.

Panharmonicon - This was a mechanical keyboard instrument that automated the playing of flutes, clarinets, trumpets, violins, cellos, drums, cymbals, triangle, and other instruments (guns?). It was invented by Johann Maelzel who, at some point, convinced Ludwig van Beethoven to compose for it. Beethoven wrote "Wellington's Victory" for it and started to write the battle symphony "The Battle of Victoria" for the Panharmonicon, but quarrels between him and Maelzel later changed his mind. Although the Panharmonicon was mechanical and not electrical, the spirit of the invention lives on in today's sampling instruments.

The Music Box - Watchmaker A. Favre invented this early instrument in Geneva.

Telegraph - Samuel Morse invented the telegraph, which allowed rhythmic pulses to be broadcast great distances. Unfortunately, to our knowledge, these rhythmic pulses were not musical in nature, but restricted to the Morse Code, also developed by Samuel Morse.

The Difference Engine - Invented by Charles Babbage, a British scientist, this machine furthered the eventual development of the modern computer.

Typewriting Telegraph - David E. Hughes invented a typewriting telegraph utilizing a piano-like keyboard to activate the mechanism.

Electromechanical Piano - Developed by Hipps (first name unknown), who was a director of the telegraph factory in Neuchatel, Switzerland. The keyboard activated electromagnets that activated dynamos (small electric generators) which produced sound. Dynamos were later to be used in Thaddeus Cahill's Dynamophone (also known as the Telharmonium).

Telephone - Alexander Graham Bell invented a way to transmit the voice over a telegraph wire.

Electroharmonic or Electromusical Telegraph - Elisha Gray (also an inventor of a telephone, but beaten to the patent office by Bell) invented this simple keyboard with oscillators for each key. That's right, oscillators. Mr. Gray found that he could create a self-vibrating electromagnetic circuit, basically a single-frequency oscillator. He could transmit music over a telephone line. Later he built a simple speaker to make the sounds audible without a telephone line. This instrument transmitted musical tones over wires. Not to be outdone by Gray, Alexander Bell developed a similar instrument he called the "Electric Harp."

Phonograph - Invented by Thomas Edison, this early device used a diaphragm with an attached needle to record sound on a wax cylinder. The cylinders didn't last very long, but Edison thought this device could be used for businesses. A device based on the same concept, using either a cylindrical system or a disc system, was simultaneously developed and patented by Emile Berliner.

Loudspeaker - The idea was first expressed independently in the patents of Ernst Werner von Siemens of Germany, who filed his patent on Dec. 14, 1877, and Sir Oliver Lodge, who secured a patent on April 27, 1898 in the UK. However, music had yet to be converted into electrical signals in order to be played on these speakers.

Bell Laboratories - This laboratory (at the American Bell Telephone Company, first called the Electrical and Patent Department and later, in 1884, referred to as the Mechanical Department) was established by Alexander Graham Bell, who financed it with his own money. This lab was later to contribute significantly to recording and transmitting sound. Research here also led to the development of the GDS/Synergy instruments.

Player Piano - Invented in the US, this instrument could record a performance on a paper roll. This roll could be copied, manufactured, and distributed to people with player pianos to reproduce the performance, much like MIDI files are traded today.

Choralcello ("Heavenly Voices") - This instrument was developed by Melvin Severy and his brother-in-law, George B. Sinclair, in Arlington Heights, Massachusetts, USA from 1888 to 1909, when it was debuted in Boston, Massachusetts. The Choralcello was manufactured by the Choralcello Manufacturing Co. It was sold as an expensive home organ for social music recitals. The company was taken over by Farrington C. Donahue and A. Hoffman. As far as we know, at least 6 were sold. Much like the Telharmonium, the Choralcello used an electromagnetic tone wheel to generate organ sounds. However, the Choralcello went a step further and also had piano-like strings that were either struck with piano-type hammers or vibrated electromagnetically. The Choralcello had two keyboards: an upper 64-note keyboard played the strings like a piano, and the lower 88-note keyboard played the tone wheels and activated electromagnets to vibrate the strings, having organ-style stops to control the timbre by passing the sound through cardboard, hardwood, softwood, glass, steel, or "bass-buggy" spring resonators. The Choralcello eventually incorporated a player-piano-styled paper roll device for recording and playing back performances, as well as a 32-note pedal board system. The entire machine was very large, taking up "two basements," with only the keyboards and speakers publicly visible.

Telharmonium or Dynamophone - Thaddeus Cahill applied for and was granted patent number 580,035, entitled Art of and Apparatus for Generating and Distributing Music Electronically. His idea was to create an electric machine on which music could be played and distributed through the phone lines to businesses, hotels, and private homes. He would do this by using dynamos which produce an alternating current, a sine wave (in this case the dynamo is also called an "alternator"). He did this with electromagnets and very large tone wheels. In 1898 Cahill began working on his machine in Washington D.C. (see 1901).

Telegraphone - This, the first magnetic recording machine, was patented by Valdemar Poulson. The theory behind this machine was worked out theoretically by Oberlin Smith in 1888. Poulson's machine recorded by passing a thin wire across an electromagnet. Each minute section of the wire would retain its electromagnetic charge, thus recording the sound. Sound could be both recorded and played back. Unfortunately, because the machine's output wasn't very loud and there was no way to amplify the signal, the Telegraphone was not much of a success.

Stereo Phonograph - Developed by Edison.

Singing Arc - This was arguably the first fully electronic instrument. It was developed by William Duddell from the technology used in the carbon arc lamp, an electric precursor to the light bulb used in England and throughout Europe. The problem with the carbon arc lamp was that it made a lot of noise, from a low hum to an annoying high-pitched whistle. Mr. Duddell, an English physicist, was commissioned to investigate the sound that these lamps made. He found that the more electricity was applied to the lamp, the higher the resulting pitch. To demonstrate this phenomenon, he hooked up a keyboard to the lamp and called it the Singing Arc. The Singing Arc could be heard without the benefit of an amplifier or speaker. At a lecture to the London Institute of Electrical Engineers, the keyboard was hooked up to the building's arc lamps, and it was found that not only did they all sing, but those in the other buildings on the same circuit sang also. Although this demonstrated a method of transmitting music over a distance, it was never developed further. Duddell never even applied for a patent for his machine. However, he did tour the country and show off his Singing Arc, which never became more than a novelty. An interesting side story to all this: a few years earlier, in 1887, a Dutch inventor discovered that sound waves could be used to modulate the intensity of a flame produced by gas under pressure (a manometric flame).

Sources:
"A Short History of the Pursuit and Capture of Musical Sound," http://knowmadz.org/library/ref/soundcap.htm
"Early Electronic Instruments," http://mi.cz/obl/obl_data/instrument/instrument2.html
"Electronic Music Origins," http://www.electronicmusic.com/datafiles/people/origins.html
"History of Electronic and Computer Music Including Automatic Instruments and Composition Machines," Women On the Web/ElectronMedia, http://music.dartmouth.edu/~wowem/electronmedia/music/eamhistory.html
"New World Destruction: The Electronic Music Channel," http://www.megs.com/neworld/tmeline1.htm
Chadabe, Joel, Electric Sound: The Past and Promise of Electronic Music, New Jersey: Prentice Hall, 1997
Cook, James H., "Organ History Tutorial," http://panther.bsc.edu/~jcook/OrgHist/welcome.htm
Fridh, Patrick, "Synthesizer Timeline," 1999, http://hem.passagen.se/sequence/timeline/
Miller, Scott L., "Electronic Music Timeline," from resource material for Electronic Music Courses at St. Cloud University, http://condor.stcloudstate.edu/~slmiller/433timeline.htm
Paradiso, Joe, "Electronic Music Interfaces," http://www.media.mit.edu/~joep/SpectrumWeb/SpectrumX.html
Source: http://www.synthmuseum.com/magazine/time0010.html

In the last 10 years, microRNAs (miRNAs) have emerged as critical regulators of numerous physiological and pathological mechanisms [1-2], including cardiac and vascular smooth muscle cell (VSMC) plasticity [3-5]. These small molecules (approx. 20 to 25 nucleotides) comprise a novel and abundant class of endogenous interfering RNAs. More than 1,500 miRNAs are now listed by dedicated internet databases such as miRBase, Tarbase, MicroRNA.org or miRdb (see sub-chapter II for URL addresses). They are transcribed and matured in a process known as miRNA biogenesis, which starts with the transcription of a larger RNA product, called the pri-miRNA, by RNA polymerase II in the vast majority of cases. The pri-miRNA, which is a few hundred to a few thousand nucleotides long, is then submitted to cleavage in the nucleus by a specific RNase III (Drosha) and its protein partner, DiGeorge syndrome critical region 8 (DGCR8), near the base of the miRNA hairpin stem. This process releases a pre-miRNA hairpin (of approx. 60 to 70 nt). The pre-miRNA is then exported to the cytoplasm, where it is recognized and cleaved within its stem by the Dicer RNase III and its protein partners. This results in a double-stranded RNA known as the miRNA/miRNA* duplex (approx. 22 bp). This duplex is unwound to single strands. One strand (the guiding strand, or mature miRNA) is incorporated in the RNA-induced silencing complex (RISC), which contains Argonaute 2 (Ago2), another endonuclease; the other strand is usually rapidly degraded. Finally, the RISC complex carries the mature miRNA to its target messenger RNAs (mRNAs), which results in gene silencing in a post-transcriptional manner. Figure 1 shows a representative example of the biogenesis of miR-143 and miR-145, which are the main miRNAs expressed in smooth muscle cells.

Determining how the RISC complex carries a specific miRNA to its target mRNAs, and thus regulates gene expression, remains an intense field of research. The most important feature in the miRNA sequence is a short but critical region called the seed sequence, which is only 7 nt long and, most of the time, located at positions 2 to 8 of the miRNA. The base-pairing uses canonical Watson-Crick complementarity. Conveniently, this small stretch of nucleotides is a useful tool to classify miRNAs into families based on shared seed sequences. Since the miRNA seed is so short, each miRNA can potentially bind hundreds of target mRNAs when one considers the large number of possible binding sites in mRNA regulatory sequences. This is one of the reasons why a single miRNA can regulate the expression of multiple target genes, by binding as many as several hundred mRNA targets. This clearly shows the role of these small RNAs in the intricate tapestry of gene regulation. Another, foremost reason for this complexity is that miRNAs, outside of the seed region, bind their mRNA targets mostly as imperfect complements. Similarly, one mRNA can be regulated by several miRNAs. All this further explains how the 1,500 miRNAs known to date are able to regulate the expression of approximately one third of human genes. Thus, microRNAs are likely to impact multiple mechanisms of gene regulation and developmental pathways through extensive gene expression regulatory networks [1-2]. The current paradigm states that miRNAs act mostly by inhibiting the translation of their target mRNAs, rather than by inducing their degradation.
Bartel's team has, however, recently challenged that view by showing that, in a vast majority of cases, mammalian microRNAs act by destabilizing their target mRNAs and by decreasing their levels. In either case, a wide consensus agrees that miRNAs are post-transcriptional regulators which bind to their target mRNAs, mostly in their 3' untranslated region (UTR). Note, however, that recent unbiased studies have shown that, in some particular cases, miRNAs bind the coding region or 5'UTR of their respective target mRNAs [9-11].

The miRNA nomenclature is remarkably standardized and straightforward: the prefix "mir" is followed by a dash and an assigned number, reflecting the order of discovery, for experimentally confirmed miRNAs. The gene is referred to in italics, e.g. mir-143. The pre-miRNA is likewise designated mir-, e.g. mir-143. Finally, the mature miRNA is abbreviated miR-, e.g. miR-143. MiRNAs with similar sequences or structures, differing by only 1-2 nt, are assigned a supplementary letter, e.g. miR-29a, miR-29b, miR-29c, and often originate from the same gene. Different loci can produce different pre-miRNAs that yield mature miRNAs with the precise same sequence; in this case, the nomenclature miR-1-1, miR-1-2, etc. is used. In some cases, one pre-miRNA will result in two different mature miRNAs, one from the 5-prime stem and one originating from the 3-prime stem; they are designated by the suffixes -5p and -3p, e.g. miR-142-5p and miR-142-3p. Species can also be taken into consideration: hsa-miR refers to Homo sapiens miRNAs, mmu-miR to Mus musculus, etc.

Mature miRNA sequences and pathways are remarkably preserved throughout phylogeny. Also, the evolutionary complexity of multicellular organisms positively correlates with the number of miRNA genes, their expression, and the diversity of their targets. This remarkable conservation is used by most miRNA target prediction algorithms. These dedicated software tools use a standardized method to evaluate interactions between miRNAs and their specific target mRNAs based on (1) complementarity between the mRNA 3'UTR and the miRNA seed sequence and (2) the degree of conservation of the miRNA across species (see sub-chapter II for URL addresses). The end result is straightforward: the higher the conservation, the higher the score given.
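To make the seed rule concrete, here is a toy Python sketch, not a production prediction tool like those cited above: it takes nucleotides 2 to 8 of a mature miRNA, reverse-complements them, and scans a 3'UTR for exact 7-mer matches. The miR-145 sequence is the mature hsa-miR-145-5p listed in miRBase; the UTR fragment is invented for the example, and conservation scoring is deliberately omitted.

    # Reverse-complement an RNA sequence (canonical Watson-Crick pairing).
    def revcomp(seq):
        return seq.translate(str.maketrans("AUCG", "UAGC"))[::-1]

    # Start positions of exact seed matches in a 3'UTR: the site on the
    # mRNA is the reverse complement of miRNA positions 2 to 8 (the seed).
    def seed_sites(mirna, utr):
        site = revcomp(mirna[1:8])
        return [i for i in range(len(utr) - 6) if utr[i:i + 7] == site]

    mir145 = "GUCCAGUUUUCCCAGGAAUCCCU"  # mature hsa-miR-145-5p (miRBase)
    utr = "AAGUCAACUGGAAUUUCAACUGGAA"   # hypothetical 3'UTR fragment
    print(seed_sites(mir145, utr))      # -> [5, 17]

Real prediction tools layer conservation filters and free-energy estimates on top of this simple complementarity test, which is one reason their outputs differ so widely.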
2. Current methods for studying miRNAs: Functional analysis

Various techniques have been developed over the last decade to quantify the expression of miRNAs and study their function. Figure 2 exposes the most widely used methods. Northern blotting was first used to quantify miRNAs, but this tedious technique, relying on radioactive labeling, was rapidly replaced by the much more convenient qRT-PCR. Two interchangeable chemistries can be used for qPCR: (1) SYBR® Green, which is often associated with a modified oligo(dT) technique and the addition of a universal primer for reverse transcription, enabling reverse transcription of all transcripts within an RNA sample; the target miRNA and the normalizing mRNA can therefore be analyzed from the same RT reaction; or (2) Taqman chemistry, in association with the use of stem-loop miRNA-specific RT primers to produce cDNA, with the advantage of additional specificity. Both give accurate results, and each method has its own assets and setbacks. Technically speaking, the modified oligo(dT) method requires only a single RT reaction to reverse transcribe both the miRNA and its target mRNAs and is less time consuming. Both techniques have been optimized and developed to screen most known miRNAs in one experiment. Indeed, analysis of high-throughput miRNA expression remains a challenge, since the number of miRNAs continues to increase with in silico prediction and experimental verification. Oligonucleotide microchips (microarrays) were first widely used for high-throughput miRNA screening, but a novel miRNA expression profiling approach, the quantitative RT-PCR array (qPCR-array), is now rapidly gaining ground. Comparison between microarray and qPCR-array indicated a superior sensitivity and specificity of the qPCR-array, and qPCR arrays are now rapidly becoming the method of choice.

Over-expressing or inhibiting miRNA activity, using RNA constructs, and examining the resulting phenotypic effects is crucial for understanding microRNA involvement, both in vitro and in vivo. For that, solutions are readily available for in vitro use. Knock-in is most of the time induced by the addition of pre-miR precursors of a particular miRNA, often nicknamed "mimics". Alternatively, viral vectors have been developed by various manufacturers, most of them relying on modified lentivirus or adeno-associated virus (AAV). On the other hand, knock-down is classically induced by transfection of so-called "antagomiRs", which have been developed to interfere with the expression of a specific miRNA. These synthetic RNA inhibitors incorporate the reverse complement of the mature miRNA (which represents here the target site). They are chemically modified to enhance binding affinity and to decrease nucleolytic cleavage by the RISC complex and degradation by other RNases. Recent years have also seen the development of miRNA sponges, which are equivalent to the endogenous sponges described below (see sub-chapter V). These long transcripts contain repeated regions complementary to specific miRNAs, which will bind them instead of their dedicated mRNA targets, thus resulting in miRNA silencing. All these techniques can also be implemented in vivo, in various animal models.

Finally, classic knock-out genetic strategies can be applied to miRNA genes. MiRNA-processing proteins such as Dicer, Dgcr8, Drosha or Ago2 are essential for viability in mice. Knock-out mice individually lacking these key miRNA-processing genes die during early gestation with severe developmental defects, including in vessels and heart. However, conditional murine knockouts of these genes have been developed recently and offer valuable tools to study miRNA importance, including in the cardiac organ. Also, one can obtain animal models totally devoid of the miRNA of interest. We describe below the interesting example of mice knocked out for the most widely expressed vascular miRNAs, miR-143 and miR-145 (see subchapter III, [18-20]).

Figure 2 legend. A- Schematic outline of microRNA RT-qPCR systems. (1) A poly-A tail is added to the mature microRNA template. cDNA is synthesized using a poly-T primer with a 3' degenerate anchor and a 5' universal tag. The cDNA template is then amplified using microRNA-specific and LNA™-enhanced forward and reverse primers. SYBR® Green is used for detection. (2) Taqman chemistry, with the use of specific stem-loop RT primers and of an internal Taqman probe for qPCR quantitation. B- qRT-PCR arrays. The microRNA PCR array protocol is a two-part protocol consisting of (1) first-strand cDNA synthesis and (2) real-time PCR amplification. Two 384-well plates enable the quantitation of more than 700 miRNAs in one experiment. C- High-throughput sequencing.
Nucleotides flow sequentially over an ion semiconductor chip; natural DNA extension is detected directly, with a few seconds per incorporation. D- miRNA sponges interfere with miRNA function. Sponges are ectopically expressed or artificial RNAs that contain multiple miRNA target sites. These target sites compete miRNAs away from their natural mRNA targets. miRNA sponges are suitable for use in a variety of experimental systems, including cultured cells and transgenic animals.

Several websites are dedicated to the identification of microRNA gene targets and to the experimental validation of these in silico data: miRBase (http://www.mirbase.org/), Tarbase (diana.cslab.ece.ntua.gr/DianaToolsNew/index.php?r=tarbase/index), MicroRNA.org (http://www.microrna.org/microrna/home.do) or miRdb (http://mirdb.org/miRDB). They were described above. Unfortunately, as of today, the results given by the various websites are often very different from each other, and the information must therefore be subjected to careful scrutiny. An interesting complement is provided by Patrocles (http://www.patrocles.org/). This software attends to the referencing of polymorphisms (single nucleotide polymorphisms, SNPs, mostly) and of the interactions between target genes and relevant miRNAs in seven vertebrate species. Significant progress will certainly come from this sort of research effort in the next few years, since it is now increasingly clear that non-coding SNPs provide a potential mechanism for transmission of phenotypes and diseases. Finally, biochemical approaches, using affinity purification, are also being developed for direct empirical detection of miRNAs associating with the 3'UTR of their mRNA targets.

3. MicroRNAs are implicated in the pathophysiology of vascular smooth muscle cells

VSMCs are not terminally differentiated cells like skeletal and cardiac muscle cells. They have a remarkable plasticity which allows them to undergo phenotypic modulation, inducing a switch between a "synthetic" and a "contractile" phenotype in response to physiological and pathological environmental cues. On the one hand, vascular injury or growth factors like PDGF provoke VSMC dedifferentiation, which, as a consequence, shifts the cells into a highly migratory and proliferative ("synthetic") phenotype necessary for vascular repair or angiogenesis. On the other hand, Transforming Growth Factor-β (TGF-β) and its related family member Bone Morphogenetic Protein 4 (BMP4) promote differentiation into a less migratory and less proliferative phenotype known as the "contractile" phenotype. This VSMC phenotypic modulation, called transdifferentiation, is characterized by significant changes in the cellular gene expression pattern. In particular, high expression of VSMC-specific genes such as smooth muscle α-actin (SMαA), calponin 1 (CNN), and SM22α (SM22) is associated with the contractile phenotype. Transcription of contractile genes is regulated by SRF through a DNA sequence motif known as the CArG box (CC(A/T)6GG), which is present in the promoter of VSMC-specific genes (a minimal script for locating this motif is sketched at the end of this passage). A coactivator (and binding partner) of SRF, myocardin, activates VSMC expression of key contractile genes [22-23]. The recently emerged role of miRNAs in gene expression regulation via gene silencing (through mRNA degradation or translation inhibition) suggests a role for these small nucleic acids in VSMC phenotypic regulation. Indeed, numerous publications have documented their importance through in vitro and in vivo studies in the cardiac and vascular biology fields and their related diseases.
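As a minimal illustration of the CArG consensus just mentioned (the promoter fragment below is invented; a real scan would also consider the reverse strand and known degenerate variants of the motif), the CC(A/T)6GG pattern can be located with a short regular-expression script:

    import re

    # CArG consensus: CC, six A/T nucleotides, then GG.
    CARG_BOX = re.compile(r"CC[AT]{6}GG")

    promoter = "GGTACCCCAAATATGGTTAGCCATTTATGGCA"  # hypothetical fragment
    for match in CARG_BOX.finditer(promoter):
        print(match.start(), match.group())  # -> 6 CCAAATATGG, 20 CCATTTATGG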
This important role of miRNAs in VSMC development, differentiation, and related pathologies has been emphasized by two teams [25-26] that independently showed that knocking out the miRNA-processing enzyme Dicer in murine VSMCs provokes severe vascular abnormalities, resulting in embryonic lethality. Among vascular miRNAs, miR-143 and miR-145 are the best documented to date, and will be explored in greatest detail. They are the most highly expressed miRNAs in smooth muscle cells, and their down-regulation is directly associated with a phenotypic switch from contractile (i.e., fully differentiated) to synthetic (i.e., proliferative) VSMCs. Other miRNAs such as miR-21, miR-221 and miR-222 also have demonstrated roles in VSMC differentiation. Their functions in smooth muscle will also be described. See Figure 3A for a pictorial representation of their roles in VSMCs.

3.1. The miR-143/145 cluster

The bicistronic unit which encodes miR-143 and miR-145 is critical for maintaining the VSMC contractile phenotype. For example, miR-143 and miR-145 are down-regulated in synthetic VSMCs [19-20] when VSMC dedifferentiation is induced by PDGF, and during neointimal formation. Conversely, TGF-β1 (Transforming Growth Factor β1), a strong activator of VSMC differentiation, stimulates the expression of both miRNAs in a dose- and time-dependent manner. The transcription of miR-143/145 is under the control of two independent signaling pathways: SRF/myocardin/Nkx2.5 and Jag-1/Notch signaling. The expression of miR-143/145 is drastically reduced in several models of vascular disease: carotid artery ligation injury in the mouse, carotid balloon injury in the rat, and ApoE knock-out mice. In miR-143 or miR-145 KO mice, abnormal vascular tone and reduced contractile activity have been detected, but VSMCs remain functional. Moreover, miR-143/145 levels are decreased in aortas from patients with aortic aneurysm, and lower circulating levels are detected in the serum of patients with coronary artery disease [20-22]. Overexpression of miR-145 reduced neointima formation in balloon-injured arteries. Dimmeler and colleagues recently described an example of vesicle-mediated miRNA transfer between human vascular endothelial cells and human aortic SMCs. Blood vessels exposed to laminar blood flow undergo high shear stress. It is known that under shear stress conditions vascular endothelial cells overexpress the transcription factor Krüppel-like factor 2 (KLF2), which in turn induces up-regulation of miR-143 and miR-145. MiR-143/145 are transported in extracellular vesicles such as exosomes, and they reduce the expression of miR-143 and miR-145-specific targets in co-cultured VSMCs. Additionally, the authors showed that extracellular vesicles derived from KLF2-expressing endothelial cells decrease atherosclerotic lesion formation in ApoE KO mice kept on a high-fat diet.

3.2. MiR-21, miR-221, and miR-222

In contrast to miR-143/145, miR-21, miR-221, and miR-222 are up-regulated in neointimal lesions. TGF-β and its related family member BMP4 promote contractile gene expression and VSMC differentiation. Interestingly, they induce the transcription of the miR-143/145 cluster and also increase the expression of miR-21 post-transcriptionally. The critical target of miR-21 that is down-regulated in this process is programmed cell death 4 (PDCD4). As a consequence, miR-21 induces VSMC transdifferentiation to the contractile phenotype in response to BMP4 and TGF-β.
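A methodological aside before continuing with miR-21: the knock-down experiments cited throughout this chapter rely on antagomiRs, which, as described in subchapter 2, incorporate the reverse complement of the mature miRNA. A minimal sketch of deriving such a base sequence follows. The input sequence is an illustrative stand-in (verify any mature sequence against miRBase before real use), and the chemistry commonly reported for antagomiRs (2'-O-methyl ribose, terminal phosphorothioates, cholesterol conjugation) cannot be captured at the sequence level.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antagomir_sequence(mature_mirna):
    # The antagomiR base sequence is the reverse complement of the mature
    # miRNA, written 5'->3'; chemical modifications are added during
    # synthesis and are not modeled here.
    return "".join(COMPLEMENT[nt] for nt in reversed(mature_mirna))

mature = "UAGCUUAUCAGACUGAUGUUGA"   # placeholder mature sequence
print(antagomir_sequence(mature))   # -> UCAACAUCAGUCUGAUAAGCUA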
Returning to miR-21: it also promotes VSMC proliferation and reduces apoptosis. These miR-21 actions were confirmed in balloon-injured rat carotid arteries. Moreover, knock-down of miR-21 using antisense oligonucleotides (antagomiRs) in the rat decreases vascular remodeling following balloon injury of the carotid arteries. Although an increase in differentiation is usually coupled to a decrease in proliferation, this is not necessarily the case in VSMCs: miR-21 indeed targets a diverse set of genes and mediates differential biological outcomes depending on the cellular context.

The miR-221 and miR-222 genes are clustered on the X chromosome and share a common seed sequence. Some reports indicate that they are transcribed from a common promoter. MiR-221 and miR-222 contribute to VSMC dedifferentiation from the differentiated/contractile to the undifferentiated/synthetic phenotype, and thus to increased cellular proliferation. Indeed, miR-221 and miR-222 are strongly elevated in vivo in VSMCs following balloon injury of the vessel. Knock-down of miR-221 and miR-222 in the vessel reduced VSMC proliferation and neointimal lesion formation after angioplasty. MiR-221 and miR-222 are important for PDGF-mediated cell proliferation, acting by repressing the tyrosine kinase c-Kit, p57Kip2 and the cyclin-dependent kinase inhibitor p27Kip1. Interestingly, inhibition of c-Kit reduced the expression of myocardin [32,35]. Overexpression of miR-221 induces an important decrease of myocardin expression, even though this miRNA does not target myocardin directly; instead, it is the down-regulation of c-Kit that is responsible for the down-regulation of myocardin. MiR-221 overexpression also increases VSMC migration, but the targets involved are still unknown. All these findings provide an example of the potential of one miRNA to mediate various cellular outcomes by regulating multiple targets [22,32].

4. MicroRNAs are implicated in the pathophysiology of cardiac muscle cells

Cardiovascular pathologies represent the most prevalent causes of human morbidity and mortality in the Western hemisphere. As a consequence, a vast number of research groups consider that studying the molecular and cellular characteristics of the heart is a major step toward developing novel diagnostic and therapeutic strategies to counteract cardiovascular diseases. It is now clear that miRNAs are an important part of the complex transcriptional and post-transcriptional regulatory circuit essential for the homeostasis of cardiac tissue. They are powerful modulators of virtually all aspects of cardiac biology, from cardiac development to cardiomyocyte survival and hypertrophy, which we will now describe in more detail in this subchapter. In the recent literature, more than a hundred microRNAs have been described as stably expressed in cardiac tissue [36-38]. However, the vast majority (90%) of this cardiac miRNA pool is accounted for by no more than 18 miRNAs in the mature murine organ. Even more remarkable is the fact that all 18 of these miRNAs show altered expression in pathological conditions, including coronary artery diseases and cardiomyopathies. Interestingly, it has been shown that a strong characteristic of these various models of cardiovascular disorders is the re-expression of a fetal cardiac miRNA program. This miRNA expression ultimately triggers the over-expression of several fetal genes, such as the atrial and brain natriuretic factor genes and the fetal isoform of the β-Myosin Heavy Chain gene (βMHC).
Exploring further how miRNAs regulate gene expression in the heart will thus provide unique mechanistic insights into cardiac diseases. We will now describe in further detail the miRNAs most implicated in these processes. See Figure 3B for a pictorial representation of their roles in cardiac muscle.

4.1. Anti-hypertrophic miRNAs

Muscle miRNAs, such as miR-1 and miR-133, are integrated into myogenic regulatory networks: their expression is under the transcriptional and post-transcriptional control of myogenic factors, and they in turn exert widespread control over the muscle gene expression program. Recent studies demonstrated that both miR-1 and miR-133 are significantly down-regulated in hypertrophic and failing hearts. They play major roles in the development of cardiac hypertrophy, and have thus been nicknamed anti-hypertrophic miRNAs. In addition, miR-1 and the related miRNA miR-133 arise from a common precursor RNA which is regulated by the transcription factors Serum Response Factor (SRF) and Myocyte Enhancer Factor 2 (MEF2), which clearly suggests their importance in an intricate cardiac regulatory network.

The mature miR-1 transcript is the product of two genes, miR-1-1 and miR-1-2, and it is now established that its elevation induces arrhythmia in cardiac disease states. Its expression is specific to both cardiac and skeletal muscle. Overexpression of mature miR-1 in the rat exacerbates cardiac arrhythmia, whereas its knock-down by an antagomiR in the infarcted heart of the same animal relieves arrhythmogenesis. MiR-1 is also overexpressed in individuals with coronary artery disease. Part of the action of miR-1 is mediated by down-regulation of connexin 43 and the inward rectifier K+ channel (Kir2.1). Another important role of miR-1 is to modulate cardiac excitation-contraction coupling by selectively increasing phosphorylation of the L-type and RyR2 channels, via disruption of the localization of PP2A activity to these channels. Plasma levels of miR-1, determined using qPCR techniques (see subchapters II and VI), can serve as a sensitive biomarker for myocardial infarction, and its expression is strongly down-regulated in hearts from patients afflicted with myocardial infarction compared to healthy adult hearts.

There are three known miR-133 genes: miR-133a-1, miR-133a-2 and miR-133b, found on chromosomes 18, 20 and 6, respectively. In the human genome, all three genes encode miRNAs with identical mature sequences. Indeed, miR-133a-1 and miR-133a-2 are each expressed bicistronically with miR-1-1 and miR-1-2. Knock-down experiments of miR-133 at first gave puzzling results: in vitro overexpression of miR-133 or miR-1 inhibited cardiac hypertrophy, while infusing mice with antagomiRs against the mature miR-133 sequence induced cardiac hypertrophy. On the other hand, genetic models with knockout of either miR-133a gene did not display significant cardiac pathologies, or indeed any phenotype. However, deleting both miR-133a genes resulted in a drastic phenotype: ectopic expression of smooth muscle-specific marker genes in the heart, embryonic lethality, and aberrant proliferation of cardiac muscle cells. The phenotypic difference between mice treated with antagomiRs at the adult stage and genetic models deprived of the miRNA from conception clearly shows the limits of both models. The sum of these studies nevertheless clearly emphasizes the role of the mature miR-133 sequence in cardiac muscle biology. On the mechanistic side, Horie et al.
have shown a direct role of miR-133 in cardiomyocyte glucose transport: overexpression of the miRNA decreased levels of the glucose transporter GLUT4 and reduced insulin-induced glucose uptake. Additionally, this increase of miR-133 reduced the expression of Krüppel-like factor 15 (KLF15), a transcription factor that induces GLUT4 expression.

4.2. Pro-hypertrophic miRNAs

The role of miR-21 in cardiac remodeling and pathophysiology is clearly controversial. MiR-21 inhibition by antagomiR strategies was first reported to alleviate murine cardiac hypertrophy [38,45]. A divergence arose when the first team attributed this pro-hypertrophic effect of miR-21 to an action on cardiomyocytes, whereas the second team claimed the primary site of miR-21 action was actually cardiac fibroblasts. In contrast to both teams' results, Cheng et al. reported that miR-21 was indeed increased fourfold in hypertrophic mouse hearts, but that modulating miR-21 via antisense depletion had a significant negative effect on cardiomyocyte hypertrophy. Finally, Patrick et al. found that genetic deletion of the miR-21 gene results in mice with a normal phenotype that did not respond differently from normal littermates when exposed to cardiac stress conditions. Also, in the same study, LNA-modified antagomiRs specific for miR-21 did not block the remodeling response of the murine heart to stress conditions. The authors concluded that, although miR-21 is highly up-regulated during cardiac remodeling, it is not essential for cardiac hypertrophy, a disease state associated with fibrosis in response to heart injury. Nonetheless, miR-21 remains a miRNA of interest in the cardiac field, at least as an innovative biomarker, since it is almost undetectable in the healthy heart but strongly over-expressed in cardiac pathologies.

The human miR-29 family of microRNAs is encoded by two gene clusters. As a consequence, three mature members exist: miR-29a, miR-29b, and miR-29c. The miR-29 family has been shown to be expressed in both cardiac fibroblasts and cardiomyocytes. In these cell types, sixteen of its targets are extracellular matrix genes. This is a striking example of a single mature microRNA sequence capable of targeting a large group of functionally related genes. As a consequence, miR-29 expression induces strong antifibrotic effects in the heart and other tissues. MiR-29 family members have also been shown to be pro-apoptotic and involved in cell differentiation. Acute myocardial infarction due to coronary artery occlusion also results in decreased expression of the miR-29 family in the region of the fibrotic scar. Using up- and down-regulation of miR-29, the same authors showed that this miRNA regulates the expression of collagens and, as a result, the fibrotic response. Finally, the miR-29 family has also been shown to down-regulate elastin and other extracellular matrix (ECM) genes implicated in elastogenesis. Jones et al. have examined miRNA expression using qPCR in aortic tissue collected from patients with ascending thoracic aortic aneurysm, and have shown that miR-29a expression is correlated with aortic tissue proteolytic degradation and aortic size. These last results illustrate the value of determining specific miRNA levels in human diagnostics.

Another miRNA of interest in the cardiac field is miR-208a, which is expressed strictly in the heart. MiR-208a overexpression in transgenic mice induces hypertrophic growth of the cardiac muscle and arrhythmias.
This hypertrophic growth is concomitant with fibrosis and a decrease in contractility, which results from down-regulation of the faster isoform, α-myosin heavy chain (α-MHC), and up-regulation of the fetal-specific, slower isoform, β-MHC [51-52]. Thus, cardiac-specific overexpression of miR-208a induces cardiac remodeling and regulates the expression of hypertrophy-associated proteins, including β-MHC. Conversely, the same authors showed that genetic deletion of miR-208a in the mouse induces a decrease of β-MHC. Additionally, miR-208 targets other proteins, such as thyroid hormone-associated protein 1 and myostatin, two inhibitors of muscle growth and hypertrophy, with hypertrophic cardiac growth as the final consequence. MiR-208a is also strongly up-regulated in the diseased human heart, as detected in biopsies from patients afflicted with myocardial infarction. Also, miR-208a is not detected in plasma from healthy patients, but rises to a detectable level as soon as 1 h after coronary artery occlusion. This result is important, since it clearly shows that, at least for this miRNA, plasma levels reflect tissue amounts, and thus that miRNAs are strong candidates as non-invasive biomarkers (see subchapter VI for more information on this topic).

5. The miRNA regulators: Who watches the watchmen?

Expression patterns of miRNAs vary among organs and developmental stages. Indeed, several transcriptional and post-transcriptional processes control the levels of mature miRNAs. The study of these regulatory mechanisms is still in its very early stages. We will review here what is known, starting with other non-coding RNAs, which are as long as miRNAs are short; we will then proceed to transcription factors, and finally explore how cells are able to communicate with each other using microRNAs.

In the last few years, it has been shown that long non-coding RNAs, acting as miRNA sponges and/or competing endogenous RNAs, are able to regulate miRNA function by binding them complementarily, thus competing with the miRNAs' dedicated mRNA targets and thereby imposing an additional level of post-transcriptional regulation. Little is known about the mechanisms of action of these exciting new regulatory RNAs in muscle cells. A groundbreaking result came from Cesana et al.: they identified a long non-coding RNA, called linc-MD1, in skeletal muscle. Linc-MD1 is stably expressed in mouse and human myoblasts, and controls the myogenesis program by binding, and thus "sponging", two instrumental miRNAs, miR-133 and miR-135, which in turn regulate transcription factors that activate muscle-specific gene expression. By "inhibiting the inhibitors", linc-MD1 accelerates myogenesis. Interestingly, the expression of this RNA is strongly reduced in Duchenne muscular dystrophy, a genetic disorder characterized by a drastic reduction of myoblasts. No equivalent of linc-MD1 has yet been described in smooth muscle biogenesis or in cardiogenesis, but one can reasonably expect that such RNAs will be identified in the foreseeable future.

Several transcription factors have been characterized as miRNA regulators in smooth, cardiac and skeletal muscle cells, thus revealing novel mechanisms underlying VSMC differentiation. Myocardin is the best characterized in VSMCs and cardiac cells. Myocardin, a co-activator and binding partner of Serum Response Factor (SRF), is a cardiac- and smooth muscle-specific trans-acting factor and a master regulator of the smooth muscle phenotype.
It has been shown that this transcription factor regulates several miRNAs in VSMCs. It induces miR-1 expression, which in turn inhibits VSMC proliferation and increases their differentiation by targeting Pim-1, a serine/threonine kinase. Similarly, by inducing the expression of miR-143, a miRNA instrumental in VSMC differentiation, myocardin represses versican, a chondroitin sulfate proteoglycan of the extracellular matrix that is produced by synthetic VSMCs and promotes VSMC migration and proliferation. SRF has been shown to regulate the expression of several miRNAs, including miR-1, miR-133a and miR-21. Interestingly, SRF, the binding partner of myocardin, regulates microRNA biogenesis, specifically the transcription of pri-microRNAs, by binding to the proximal promoter regions of miRNA genes, thereby affecting mature microRNA levels. Transforming Growth Factor-β1 (TGF-β1), another known stimulus capable of inducing VSMC differentiation, has also been shown to induce both miR-143 and miR-145 in human coronary artery SMCs. We have already discussed the importance of the transcription factors SRF and MEF2 in the regulation of miR-1 and miR-133 (see subchapter 4). Finally, although skeletal muscle is outside the topic of this chapter, it is interesting to note that another important muscle-specific transcription factor, MyoD, impacts miR-1 and miR-206 expression, with strong consequences on myoblast apoptosis levels. Although it has not yet been shown, one can speculate that similar systems exist in smooth muscle and cardiac muscle cells.

The decay mechanisms that regulate miRNA levels are not well understood. However, new notions have recently come to the forefront: it seems that changes in cellular density and cell adhesion mechanisms rapidly affect miRNA expression. When cells are grown at low density or after cell splitting, some miRNAs are rapidly degraded while others remain unaffected. This rapid, and as yet unexplained, degradation of otherwise persistent regulatory molecules such as miRNAs may facilitate cellular plasticity and remodeling in response to various stresses. Wang et al. have recently shown that several human cell lines of various origins (glioblastoma, hepatocytes, lung bronchial epithelium, pulmonary fibroblasts, alveolar basal epithelial cells) actively release miRNAs within a short period of time (approx. 1 h) after serum deprivation. Thus, one can hypothesize that at least some exported miRNAs are used for cell-to-cell communication. More studies will of course be needed to determine exactly how miRNAs are specifically targeted to relevant recipient cells, and what information is transduced. It will also be important to determine why evolution has selected several different means of transportation for miRNAs: protein complexes, exosomes, microvesicles, high-density lipoprotein (HDL) or apoptotic bodies (Figure 4). For example, miRNAs complexed with proteins could be targeted to specific cell-surface receptors, and miRNAs inside vesicles to other targets. All these recent results raise important questions about cell-to-cell communication mediated by miRNAs, and suggest the possibility that a yet undiscovered biological information transduction system exists, one that could be important in explaining many biological processes including development, differentiation, and stress response.
In cardiovascular diseases, for example, the general decrease in circulating miRNAs detected in patients with CAD might be caused by a dysregulation of this miRNA trafficking system in atherosclerotic lesions or in the infarcted myocardium.

6. MiRNAs: New biomarkers in vascular and cardiac diseases

Being instrumental players in the fine-tuning of gene regulation networks, microRNAs have significant diagnostic and prognostic value as biomarkers of disease etiology and progression. Until recently, however, miRNA quantitation and usefulness as a biomarker depended on the availability of the pathological tissue. This was not a major setback for the diagnosis of cancer, where biopsies are readily available in most cases, but it proved to be a serious concern when dealing with heart or vascular pathologies. Very recently, these concerns were alleviated when several teams showed that miRNAs can be detected and precisely measured in human blood (e.g. [63,64]). These papers, showing a stable presence of miRNAs in human plasma, came as a surprise: any researcher with experience in RNA work considers ribonucleic acids fragile and unable to survive in a liquid like serum, which contains a wealth of specific and non-specific degrading enzymes. Stefanie Dimmeler's pioneering studies showed that miRNAs can be detected in the serum of patients with coronary artery disease (CAD) and that their levels are altered compared with healthy counterparts [65-66]. MiRNAs are thus prime candidates as novel non-invasive biomarkers in cardiovascular diseases, measurable in routine clinical diagnosis. The main question here is which endogenous reference genes to use for normalization, a choice that is instrumental in qPCR. In cardiovascular disease studies, various endogenous circulating RNAs (e.g. miR-17-5p, miR-454, U6 or RNU6b) have been used for normalization of circulating miRNAs, but the use of spiked-in miRNAs, i.e. adding a known amount of an exogenous non-human miRNA (e.g. synthetic Caenorhabditis elegans miR-39), is now increasingly common, as the experimenter knows exactly the amount of the reference miRNA and no further experimental bias is added.

An important question is the exact localization of miRNAs in the bloodstream. They exist in a highly stable, extracellular form and are remarkably persistent in the RNase-rich environment of blood. The first model postulated that circulating miRNAs are protected by encapsulation in membrane-bound vesicles such as exosomes, phagosomes, or apoptotic bodies [65-66]. Arroyo et al. have however recently challenged this view: using a combination of differential centrifugation and size-exclusion chromatography, they showed that circulating miRNAs in human plasma and serum cofractionate mostly with protein complexes rather than with vesicles. Even more surprising was the fact that the main miRNA-binding partner was Ago2, the key effector protein of miRNA-mediated silencing, which is considered a cytosolic protein. Ago2 seems to be one of the factors protecting circulating miRNAs from plasma RNases, since purified miRNAs, devoid of protein partners, were sensitive to RNase treatment. Figure 4 summarizes the different hypotheses that have been put forward to explain how miRNAs can be secreted and exported into human blood. Goren et al. have very recently reported that four miRNAs, miR-22, miR-92b, miR-320a and miR-423-5p, were significantly increased in the serum of patients with heart failure.
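An aside on quantitation: serum measurements such as these are commonly normalized against the spiked-in cel-miR-39 described above and then expressed as fold changes via the widely used 2^-ΔΔCt method. The sketch below illustrates that arithmetic only; all Ct values are invented for illustration and are not data from the studies cited here.

def fold_change(ct_target, ct_spike, ct_target_ctrl, ct_spike_ctrl):
    # Delta-Ct normalizes each sample to its cel-miR-39 spike-in
    # (added in equal amounts to the patient and control samples).
    d_patient = ct_target - ct_spike
    d_control = ct_target_ctrl - ct_spike_ctrl
    # Delta-delta-Ct, then 2^-ddCt; a higher Ct means less template.
    return 2 ** (-(d_patient - d_control))

# Hypothetical Ct values for a serum miRNA in heart failure vs control:
print(round(fold_change(28.1, 21.0, 30.4, 21.2), 1))   # -> 4.3 (increased)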
Returning to the Goren et al. study: by relying on a signature derived from the expression of these four miRNAs, the authors were able to discriminate between systolic heart failure patients and healthy controls with a sensitivity and specificity of 90%. Moreover, there was a significant correlation with important clinical prognostic parameters such as elevated serum natriuretic peptide levels and a wide QRS complex. Other recent papers have highlighted the value of determining miRNA levels in serum and other body fluids, including urine, feces and saliva [69-70]. This bodes well for the increasing usefulness of specific miRNA expression measurements as non-invasive biomarkers.

7. Potential of miRNAs as innovative drug targets

In the last decade, the increasing interest in small RNAs has brought innovative drug targets to the pharmaceutical market. Among these small RNAs, miRNAs provide perhaps the most promising new opportunities for developing new compounds, especially with the recent advances in anti-miRNA chemistry. On the one hand, therapeutic nucleic acids can be administered using lentivirus-mediated antagomiR expression, which induces a stable knock-down phenotype for a specific miRNA [71-72]. On the other hand, the vast majority of anti-miRs used in trials are in fact modified locked nucleic acids (LNA), also known as inaccessible RNA (reviewed in ). In these molecules, the canonical ribose sugar backbone is modified with an extra bridge connecting the 2' oxygen and the 4' carbon. This conformation enhances base stacking and backbone pre-organization, which significantly enhances the hybridization properties of these compounds. These poly-anionic molecules tend to distribute broadly, but also to accumulate in the liver, kidneys and phagocytes. They are highly hydrophilic, with a molecular weight ranging from 2 to 6 kD. For the moment, the routes of administration used are essentially intravenous and subcutaneous injection [75-76]. One must however keep in mind that developing innovative drugs is risky, represents a tremendous cost in resources and time, and should not be undertaken lightly. A first clinical trial of this post-genome era is under way in human patients affected by viral hepatitis C. This promising trial, which has reported Phase IIa results, focuses on the liver-specific miR-122 and on the related drug Miravirsen, developed by Santaris Pharma A/S as an LNA antagomiR designed to antagonize miR-122, which is instrumental for hepatitis C virus (HCV) infection. Lanford et al. had already shown in primates that an LNA anti-miR specific for miR-122 was able to suppress HCV viremia, with no evidence of viral resistance or side effects in the treated animals. These promising results have been confirmed in humans and show that an antagomiR approach decreases the patient's viral load, and that this revolutionary treatment is less toxic and more effective than current medicine (http://www.santaris.com/news/2011/11/05/santaris-pharma-phase-2a-data-miravirsen-shows-dose-dependent-prolonged-viral-reduct), perhaps due to the specificity brought by RNA strand complementarity. Concerning cardiovascular diseases, we will now focus on the expanding role of miRNAs in cardiovascular molecular medicine, and on the various studies that have been undertaken, in animal models for the time being. The important role of the miR-29 family of microRNAs has already been evoked in this chapter (see subchapter 4.2).
A promising study examined their effects in two murine models of abdominal aortic aneurysm (AAA): the porcine pancreatic elastase (PPE) infusion model in C57BL/6 mice and the AngII infusion model in ApoE-/- mice. An antagomiR against miR-29b was administered in vivo in the form of an LNA. This resulted in an increase of collagen expression, producing an early fibrotic response in the aortic wall and an actual reduction of AAA progression in both models. Conversely, overexpression of miR-29b using lentiviral vectors resulted in an aggravation of AAA and premature rupture of the aortic wall. This miRNA is thus a promising target for an innovative AAA treatment. Matkovich et al. have shown that over-expression of miR-133a in the heart of transgenic mice prevented TAC-associated miR-133a down-regulation, reduced myocardial fibrosis and improved diastolic function. In another, more exotic, model, Yin et al. have shown that miR-133 restricts injury-induced cardiomyocyte proliferation.

Very recently, miR-33, although not specific to cardiac or vascular tissues, has gained a lot of attention for atherosclerosis treatment. Both miR-33a and miR-33b target the adenosine triphosphate-binding cassette transporter A1 (ABCA1), an important regulator of high-density lipoprotein (HDL) synthesis and reverse cholesterol transport, in a murine model. Inhibiting miR-33 using two different methods (overexpression from dedicated lentiviral particles or injection of LNA antagomiRs) led to an up-regulation of ABCA1 and, importantly, to an increase of cholesterol efflux and a concomitant increase in HDL levels, and thus in atheroprotective effects. These authors thus clearly show that raising HDL levels in the mouse via miR-33-specific antagomiRs promotes reverse cholesterol transport, and suggest that this may be a promising strategy to induce atherosclerosis regression. Several months later, the same team published similar results in primates, more precisely African green monkeys. In addition to the beneficial effects already detected in the mouse, the authors showed a strong decrease in plasma levels of very-low-density lipoprotein (VLDL)-associated triglycerides. This difference can tentatively be attributed to the presence of miR-33b in the SREBF1 gene of medium and large mammals and its absence in rodents. Pharmacological use of antagomiRs specific for miR-33a and miR-33b is thus able to markedly raise plasma HDL and lower VLDL triglyceride levels, and is therefore a promising therapeutic strategy to treat dyslipidaemias and their cardiac consequences in human patients.

8. In conclusion: MicroRNAs, a bright future?

In the last decade, many advances have been made in deciphering miRNA roles in cardiovascular development and pathogenesis. New methods have been developed to use them as innovative biomarkers in diagnostics and as groundbreaking drugs in pharmacological treatments. A first, promising clinical trial in humans is in progress right now. However, many questions still remain to be answered. Each miRNA has up to one hundred mRNA targets, which poses significant challenges to the identification, and specific targeting, of the mRNAs that are relevant to a particular pathological process. On the other hand, this problem could also become a solution, since it is now clear that particular families of miRNAs are associated with the same disease types. It could thus prove more efficient to target a predefined network of related miRNAs rather than a single one.
With the current pace of progress in understanding the basic modes of miRNA action in cardiovascular development and disease, one can safely trust that these small molecules will amaze us with more revelations in the near and not-so-near future.

Acknowledgement

This work was funded by grants from the Picardie Regional Council (MARNO-MPCC and Modulation des calcifications cardiovasculaires), including a PhD fellowship for FT and a post-doctoral fellowship for EMM.
<urn:uuid:44be7c14-c635-415a-aa00-67c7b81c4f7a>
CC-MAIN-2016-26
http://www.intechopen.com/books/current-basic-and-pathological-approaches-to-the-function-of-muscle-cells-and-tissues-from-molecules-to-humans/implication-of-micrornas-in-the-pathophysiology-of-cardiac-and-vascular-smooth-muscle-cells
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00159-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931833
10,295
2.9375
3
Empowers youth to conceive, plan, shoot, edit, and screen short animated films that speak about their lives and their community. Films are listed alphabetically by location.

Alert Bay is 350 km and a forty-minute ferry ride north of Nanaimo. The town's main street has some of the best whale watching in the world. Its long ocean-side walkway passes old pastel houses, totem poles set in the grassy Namgis burial grounds, and the renowned U'Mista Cultural Centre with its display of Kwakwaka'wakw masks. The T'lisalagi'lakw School is a short walk uphill from the U'Mista Cultural Centre. "The Unexpected Hero" was animated by Mrs. Dawson's Grade 4 class. "Bajoolahoop" was animated by Mr. Kalnay's Grade 7 class.

Duncan is a city on southern Vancouver Island. Students at a summer camp created "Cowichan Pride," which provides a bird's-eye view of the community and its activities, including a memorial service, a pow wow, and a soccer game.

Gold River is located on central Vancouver Island, 90 kilometers west of Campbell River. This area is the traditional territory of the Mowachaht and Muchalaht people of the Nuu-chah-nulth First Nation. Students from the Ray Watkins Elementary School and Gold River Secondary School created these videos about Luna, inspired by the film "Luna: Spirit of the Whale."

Masset is a village in the Haida Gwaii (Queen Charlotte Islands), located on the northern coast of Graham Island. Students in Grades 5, 6 and 7 at Tahayghen Elementary School and the George M. Dawson Secondary School created these videos with the help of their Haida Language teacher and elders.

Penticton is a city in the Okanagan Valley of the Southern Interior of British Columbia. Artists in residence at the En'owkin Centre created these videos.

Seabird Island is 120 kilometers east of Vancouver, near Agassiz, in British Columbia's Fraser Valley. Sq'éweqel is a village at the northern tip of Seabird Island. The name is Halq'emeylem for "turn of the river."

Sechelt is located on the Sunshine Coast of British Columbia. The town takes its name from the Coast Salish people, who first settled the area thousands of years ago, and means "land between two waters." The SPIDER Homeschool group is for Students Participating In Distance Education Resources. Children in Kindergarten through Grade 4 created this video.

Skidegate is located on Skidegate Inlet on the southern point of Graham Island. Students in Grades 4, 5, 6 and 7 at Sk'aadgaa Naay Elementary School created these videos.

Squamish, which means "Birthplace of the Winds" in the Coast Salish language, is located at the head of Howe Sound, 60 km north of Vancouver. Grade 3 and Grade 7 students at Squamish Elementary School created these videos.

Maple Grove Elementary: Over two days, students conceived, planned, shot, and edited short animated films that focused on the language, activities, and enjoyment of winter in Vancouver.

Zeballos is a village on the west coast of Vancouver Island. Grade 7 students at Zeballos Elementary School created "The Nootka and Captain Cook" about the first cultural exchange between Europeans and the Mowachaht/Muchalaht people. Grade 8-12 students created "Anytown, BC."
<urn:uuid:c80eec67-5539-4a8c-afb5-c9683c541797>
CC-MAIN-2016-26
http://www.r2rfestival.org/video-gallery/bc-stories/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00088-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947886
795
2.578125
3
One wrench that used to be tossed into Darwinism's mechanistic view of the universe was the question of what supposedly happened to all of those transitional forms. Even Darwin himself is alleged to have conceded that his theory would ultimately be proven or discarded on the basis of such geological evidence. For well over a century now, those wanting to extol what passes for education over and above common sense have attempted to elaborate any number of conceptual bypasses around the 800-pound subhuman hominid in the room. An article in the May 2011 edition of Discover Magazine makes such an attempt by positing that we ourselves are the transitional forms, or at least what's left over of them in terms of primate evolution. No longer are we to think of ourselves as exclusively modern Homo sapiens. Rather, we are to view ourselves as the genetic composites of previous ancestors such as Neanderthals and those other creatures reminiscent of Chaka from Land of the Lost. This theory is put forward as an attempt to silence the critics of naturalistic evolution. Yet the hypothesis ends up raising a number of questions that reveal just what one has to ignore and overlook in order to accept this particular narrative's account of the origins of man. Foremost, if other higher-order hominids were eventually wiped out or disappeared because they interbred increasingly with what we would recognize as human beings, why wouldn't these alleged ancestors we are more reluctant to embrace as part of our own kind, if they are able to produce fertile offspring through mating, be considered fellow human beings? For is not the history of anthropology literally littered with the corpses of people thought to have the status of less than fully human? I recall Ken Ham one time claiming that at one point in the 1800s Australian Aborigines were harvested as research specimens. Even when these remains are uncovered as part of legitimate research and excavation, it must be asked whether a number of the conclusions arrived at are really inherent to the evidence, or whether active imaginations are reading back into the data what these researchers intensely want to see. For if Neanderthals could interbreed with run-of-the-mill human beings to the point where certain evolutionary theorists are insisting that we ourselves are partially Neanderthal, aren't Neanderthals just another racial or ethnic group? Researcher Jack Cuozzo hypothesized in "Buried Alive: The Startling Truth About Neanderthal Man" that Neanderthals may have been the extremely aged, or the diseased suffering from degenerative bone conditions similar to arthritis. For daring to proffer such a conjecture, foremost proponents of inquiry and knowledge resorted to intimidation and threats of violence against him for presenting such an unconventional perspective. By downplaying distinctions between human beings and what were at one time categorized as species preceding us along the chain of primatology (obviously nothing more than glorified apes), radical evolutionists hope to further erode the preconceived boundaries between the species for the purposes of biological manipulative amalgamation. Several years ago, I posted a column about Darwinistic propaganda speculating that in prehistoric times the genetic boundaries might not have been as set in stone, with jungle fever taking on a connotation that might shock those of us entrapped by a morality that frowns upon trans-species romance.
Sophisticates of the scientific establishment easily dismiss bloggers for being out of touch and not playing with a full deck. However, seldom will they speak out against media mouthpieces allied in the cause of foisting a revolutionary secularism upon the nation, such as The New Republic. On the cover of the April 23, 2008 issue was a photo that bordered on the creepy. Depicted was a chimpanzee gazing dreamily off into the sky. However, that was not the truly disturbing aspect. For as the chimp looked to the sky, tucked beneath his arm was a human female. However, this was not the embrace of a zookeeper showing a little affection to one of her charges, or like one would share with a pet. Rather, from the depiction, one gets more of the impression that these two are somehow lovers. The look on the woman's face, with head tilted back, eyes shut, and her hand intertwined with the paw of the chimp, causes one to wonder if the duo might go swinging in the trees together a bit later, if one gets the drift. Some might dismiss such shock as the rantings of a prude with too much time on their hands. However, numerous credentialed scientists have come out speculating as to the possibility of a human/chimp hybrid as mankind's technical expertise continues to advance while moral expertise among the overly educated continues to atrophy. According to an article in Wired Magazine titled "Science Without Limits", such a primate hybridization program was suggested by renowned evolutionary theorist Stephen Jay Gould. Categorizing the experiment as "the most potentially interesting and ethically unacceptable experiment I could imagine", Gould speculated such a hybrid would theoretically shed light on how the retention of juvenile characteristics in chimpanzees led to the rise of human beings. That is, if one believes in that sort of hooey. The Wired article insists such an endeavor would not be as outlandish as it sounds. Research conducted with baboons and rhesus monkeys suggests that, given the genetic similarities, such an undertaking might be biologically feasible. Such a creature could be brought into existence through the techniques of in vitro fertilization and placed within a human surrogate. Proverbs 8:36 teaches that those that hate God love death. That not only applies to the individual existential death that comes to mind when contemplating that term horrid to all people of goodwill. It also applies to the broader obliteration of our species that will result from the failure to properly recognize those distinctions that set mankind above his fellow creatures in the natural order below.

By Frederick Meekins
<urn:uuid:67fa1823-68c9-4d9f-962e-a8243d994256>
CC-MAIN-2016-26
http://www.redstate.com/diary/FMeekins/2013/08/12/evolutionists-more-insistent-than-ever-about-being-a-monkeys-uncle/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00070-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962956
1,203
3
3
A team of Russian geographers took winter swimming to an extreme last Friday. In a record-breaking dive, the head of the Russian Geographical Society sank to the bottom of Lake Labynkyr in Siberia, one of the coldest lakes in the world, where air temperatures regularly hit minus 50 degrees Celsius, RIA Novosti reports. The team hopes to get its name in the Guinness Book of World Records for the stunt. In addition to breaking records for cold dives, the geographers sought to follow up on mysterious discoveries of past years. Though no one is known to have ever entered the lake before, Labynkyr has been remotely explored with echo-sounders and probes. Sonars revealed unusually large objects in the lake, but scientists could not figure out what they were based on echolocation alone. Locals in the nearby village of Oymyakon—which has a population of around 500 and is the coldest permanently inhabited settlement in the world—have their own ideas of what those objects could be. An old legend claims that Labynkyr is home to a Loch Ness-like water monster called "the devil" by nearby villagers. According to the Voice of Russia, the team reported finding jaws and skeletal remains of a large animal with their underwater scanner, though these claims are not yet confirmed.
<urn:uuid:db3b5848-d555-4236-aaf8-223ddd4719c6>
CC-MAIN-2016-26
http://www.smithsonianmag.com/smart-news/searching-for-the-russian-loch-ness-monster-in-a-frozen-siberian-lake-10695883/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00175-ip-10-164-35-72.ec2.internal.warc.gz
en
0.964657
281
2.75
3
John Carlisle was a U.S. representative, speaker of the House, senator, and secretary of the treasury. He was born in Campbell (now Kenton) County, Kentucky, to Mary Reynold Carlisle and Lilbon Hardin Carlisle, farmers. He attended local academies, then worked as a schoolteacher before studying law under John White Stevenson, a prominent attorney. In 1857, Carlisle married Mary Jane Goodson; they had two sons who survived to adulthood. In 1858, he passed the state bar and joined the law firm of Judge William Kinkead in Covington, Kentucky. Over the next two years, he won consecutive terms in the lower house of the state legislature (1859-1861). Carlisle supported sectional compromise to keep the slave states from seceding in the winter of 1860-1861. After the Civil War began, he voted for the Kentucky legislature's proclamation of neutrality, and did not join either side's military. Since the majority of his constituents favored the Union cause, he lost a reelection bid in September 1861. He aligned himself with the Peace Democrats, who sought a negotiated settlement and restoration of the Union status quo antebellum (i.e., with slavery intact). In 1866 and again in 1869, Carlisle was elected to the state senate, where he spoke out against Radical Reconstruction. In 1871, he was elected Kentucky's lieutenant governor, allowing him to gain parliamentary experience while presiding over the state senate. In 1876, Carlisle won election to the first of seven consecutive terms in the U.S. House of Representatives (1876-1890). He lobbied unsuccessfully for the repeal of the Specie Resumption Act of 1875, which was scheduled to return the U.S. to the gold standard in January 1879. He took a moderate bimetallist position, endorsing the use of silver as well as gold, but opposing the inflationist policy of the unlimited coinage of silver (free silver). When the Democrats won control of the House in 1878, Carlisle's outspoken support of the Democratic attempt to roll back the civil rights legislation of Reconstruction earned him respect among his partisan colleagues, although Republican president Rutherford B. Hayes vetoed the bills. As a member of the powerful House Ways and Means Committee, Carlisle pushed for tariff reduction, arguing that high tariffs helped only special business interests to the detriment of farmers, workers, and consumers. During the acrimonious debate in Congress which resulted in passage of the Mongrel Tariff of 1883 (so named because it was a compromise which did not satisfy either side), Carlisle emerged as a leader of the low-tariff/free trade Democrats. In December 1883, he won the House speakership over trade-protectionist and former speaker Samuel J. Randall of Pennsylvania, and was reelected in 1885 and 1887. In 1884, Carlisle was Kentucky's favorite-son candidate for president, but lost the nomination to Grover Cleveland of New York. After President Cleveland appealed for lower tariffs in his annual message of December 1887, Speaker Carlisle redoubled his efforts to pass the reform. The result was the Mills Bill of 1888, which failed in the Republican-controlled Senate. When the Republicans won the White House and both houses of Congress in the 1888 elections, tariff reformers were in the minority on Capitol Hill. As minority leader, Carlisle vigorously opposed the rules imposed by the new Republican speaker of the house, Thomas B. Reed of Maine, which enhanced the speaker's authority to halt dilatory practices. The timing of his election to the U.S.
Senate in May 1890, to fill a seat vacated by the death of Senator James Beck, allowed Carlisle to vote against the protectionist McKinley Tariff in both houses (the bill passed both). He also voted against the Lodge Federal Elections Bill and the expansion of veterans' pensions. When Cleveland was reelected in 1892, he appointed Carlisle as treasury secretary. The slowing economy of 1892, under the watch of President Benjamin Harrison, grew into a full-fledged depression shortly after the Cleveland administration took office. Carlisle and Cleveland lobbied for Congress to repeal the Sherman Silver Purchase Act of 1890 during a special session in 1893. The repeal passed, but the debate further divided the hard-money (gold standard) and soft-money (free silver) wings of the Democratic party. The Treasury Department's sale of bonds to J. P. Morgan's banking syndicate provoked heated criticism. When the Democrats nominated free-silver champion William Jennings Bryan of Nebraska for president in 1896, Carlisle backed the ticket of the remnant Gold Democrats. At the close of the second Cleveland administration in March 1897, Carlisle largely retired from public life and practiced law in New York City, where he died in 1910.
<urn:uuid:a62216d4-d7c1-43c1-9820-e4b68f234bf6>
CC-MAIN-2016-26
http://elections.harpweek.com/1884/bio-1884-Full.asp?UniqueID=4&Year=1884
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00187-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958368
1,064
2.90625
3
Common elderberry (Sambucus canadensis, also called S. nigra ssp. canadensis) is pruned in late winter or early spring while it is still dormant. General pruning recommendations differ: commercial growers prune for the greatest return on their investment, while home growers prune more selectively to grow more and larger berries. The common elderberry can be grown in U.S. Department of Agriculture plant hardiness zones 3 through 11.

Cane Growth and Production

Elderberries grow new canes each year. Some flowers grow on the cane tips, but more grow on lateral branches. The canes reach their full height the first year; in the second year, they grow lateral branches that produce the most berries. Canes lose their vigor and weaken after the third year, yielding small clusters of berries that ripen early. New and 1-year-old canes grow larger flowers that yield larger berries. Researchers at the University of Missouri found that commercial growers lose up to 20 percent of their potential harvest by cutting elderberries to the ground each year, but they make up for it in savings in production costs and labor. Home gardeners growing elderberries on a few plants harvest more berries through annual pruning of the interior of the bush and removal of old, nonbearing limbs.

Pruning to the Ground

Pruning the canes to the ground will send up numerous, vigorous shoots. The result will be fewer berries the first year but more berries the second year. Alternatively, the canes can be pruned to the ground every other year. The home gardener seeking to grow more and larger berries on a few plants typically prunes them annually, removing broken, weak or dead canes plus all nonproductive canes older than 3 years. An equal number of 1-, 2- and 3-year-old canes should be left on a plant. If elderberries are planted too close together, they can become stressed before or shortly after they flower, causing a loss of berries. Pruning nonfruiting canes will reduce the loss of berries in crowded plants. Not all elderberry cultivars have the same growing habit. Pruning back lateral and terminal branches will typically help make the plant more rigid. When bushes begin to slow after the fourth year, you can rejuvenate them by pruning them to the ground.
<urn:uuid:6fbde1d7-e7e1-45a3-a96c-9291aaef0256>
CC-MAIN-2016-26
http://homeguides.sfgate.com/pruning-sambucus-nigra-46905.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396100.16/warc/CC-MAIN-20160624154956-00172-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937834
487
3.40625
3
Vision, Mission, and Principles

Promote and protect the integrity of domestic fair trade principles and practices through education, marketing, advocacy and endorsement.

The agriculture and economic system is a healthy community where all look after and support each other, everyone feels safe, and all contribute to and benefit from a clean and harmonious environment. Family-scale and community-scale farms and businesses thrive. All people recognize the realities, challenges, and effects of production, distribution, and labor, and choose to participate in fair trade. Our vision includes a world where:
- Contributions of all workers and farmers are valued
- Human rights and human dignity are affirmed and promoted
- Fair Trade is synonymous with fair wages, fair prices, and fair practices
- Risks and rewards are equitable and shared, and this information is open and available to all stakeholders
- Information is readily available on the origin, processing, and distribution of every product
- All practices are environmentally, economically, and socially just, sustainable, and humane
- Direct trade and long-term relationships dominate the economy
- Strong local communities are the foundation of society
- Power is shared; development is community-driven and cooperative
- Cultural and indigenous rights and diversity are recognized, honored, and protected.

What follows is our attempt to translate the traditional principles of international fair trade, as expressed by organizations such as the World Fair Trade Organization (WFTO) and the Fair Trade Federation (FTF), into the domestic, regional and local economic spheres. Our primary goals are to support family-scale farming, to reinforce farmer-led initiatives such as farmer co-operatives, to ensure just conditions for agricultural workers, and to bring these groups together with mission-based traders, retailers and concerned consumers to contribute to the movement for sustainable agriculture in North America. It is our hope that by maintaining a consistent approach, which shares basic values with international fair trade, we may help create a more holistic model which can be applied wherever trade takes place.

The work of the DFTA is guided by the Principles for Domestic Fair Trade as defined by its members. These principles represent the values which underlie and guide our work together as organizations and individuals united for the promotion of Health, Justice and Sustainability.

Family Scale Farming. Fair Trade focuses on reinforcing the position of small and family-scale producers that have been or are being marginalized by the mainstream marketplace, as a means of preserving the culture of farming and rural communities, promoting economic democracy, environmental and humane stewardship and biodiversity, and ensuring a healthier and more sustainable planet.

Capacity Building for Producers and Workers. Fair Trade is a means of developing producers' and workers' independence, strengthening their ability to engage directly with the marketplace, and to gain more control over their futures. The resources from trading relationships are directed toward this purpose in a participatory manner by those who will benefit from them.

Democratic & Participatory Ownership & Control. Fair Trade emphasizes co-operative organization as a means of empowering producers, workers, and consumers to gain more control over their economic and social lives.
In situations where such organization is absent, mechanisms will be created to ensure the democratic participation of producers and workers, and the equitable distribution of the fruits of trade.

Rights of Labor. Fair Trade means a safe and healthy working environment for producers and workers and conforms to all International Labour Organization conventions and the Universal Declaration of Human Rights. The participation of children (if any) does not adversely affect their well-being, security, educational requirements and need for play, and conforms to the United Nations Convention on the Rights of the Child as well as pertinent local/regional laws. Fair Trade ensures that there are mechanisms in place through which hired labor has an independent voice and is included in the benefits of trade through mechanisms such as living wages, profit sharing, and cooperative workplace structures. Apprenticeships are promoted to develop the skills of the next generation of farmers, artisans, and workers.

Equality & Opportunity. Fair Trade emphasizes the empowerment of women, minorities, indigenous peoples and other marginalized members of society to represent their own interests, to participate directly in trade, and to share in its economic benefits.

Direct Trade. Where possible, Fair Trade attempts to reduce the intermediaries between the primary producer and the consumer. This delivers more of the benefits of such trade to the producer and connects consumers more directly with the source of their food and other products, and with the people who produced them.

Fair & Stable Pricing. A fair price is one which has been agreed upon through dialogue and participation. It covers not only the costs of production but enables production which is socially just and environmentally sound. It provides fair pay to the producers, fair wages to workers, and takes into account the principle of equal pay for equal work by women and men. Fair Traders ensure prompt payment and stable pricing which enables producers to plan for the future.

Shared Risk & Affordable Credit. Farmers often bear the greatest risks of agriculture and an unstable marketplace. Fair Traders work to share these risks among producers, processors, marketers and consumers through more equitable trade partnerships, fair and prompt payment, transparent relationships and affordable credit. In situations where access to credit is difficult, or the terms of credit are not beneficial to producers, Fair Traders provide or facilitate access to such credit, or assist producers in creating their own mechanisms for providing credit.

Long-Term Trade Relationships. Fair Trade fosters long-term trade partnerships at all levels within the production, processing and marketing chain that provide producers with stability and opportunities to develop marketing, production and quality skills, as well as access to new markets for their products.

Sustainable Agriculture. Fair Trade emphasizes a holistic approach to agriculture, as defined by Via Campesina to include fishing, hunting and gathering and other means of sourcing food. Fair Trade supports sustainable agriculture practices such as organic, biodynamic, non-toxic bio-intensive integrated pest management, farm diversification, and small-scale farming which protect the environment, sustain farming communities, and provide consumers with quality, healthful food. Fair Trade emphasizes the biodiversity of traditional agriculture, supports the rights of farmers to their own seed, and preserves cultural diversity.
Fair Trade also emphasizes sustainable business practices through the entire supply chain, which can include green office operations, use of alternative energies, or other sustainable practices.
Appropriate Technology. Fair Trade supports the use of traditional technologies, which are openly and freely shared in the public domain, and excludes plants, animals, and biological processes which have been genetically engineered or modified. Further, Fair Trade discourages the use of machinery that threatens the health, safety, and employment opportunities of farmworkers and farm families.
Indigenous Peoples' Rights. Fair Trade supports indigenous peoples' rights to access land for cultivation, fishing, hunting and gathering in customary and traditional ways, to freely exchange seeds and to retain rights to their germplasm. We fully support the right of indigenous and all peoples to food sovereignty.
Transparency & Accountability. The Fair Trade system depends on transparency of costs, pricing and structures at all levels of the trading system. Fair Traders are accountable to each other and the wider community by openly sharing such information.
Education & Advocacy. Fair Trade emphasizes education at all levels of the agricultural chain, engaging farmers, workers, traders and consumers in advocating for a more equitable, democratic and sustainable economy. Fair Traders in particular educate consumers about the inequities of the trading system and the need for alternatives, while sharing information with producers about the marketplace. Education strengthens the Fair Trade movement and empowers its stakeholders in creating a better world for everyone.
Responsible Certification and Marketing: Domestic Fair Trade (DFT) should represent substantive and qualitative differences from the conventional food and agriculture system. DFT programs should be inclusive of and accountable to all stakeholders, focusing on benefiting those most marginalized in our current food and agriculture system (such as workers and small-scale producers). Certification programs should follow good practices of third-party systems and/or participatory guarantee systems, including complaints processes, transparency about the decision-making process, and adequate accreditation and oversight. All market claims and labels of international or domestic fair trade, social justice, or related claims, whether part of a certification process or not, should be accurate, clear, and verifiable.
Animal Welfare: Fair Trade ensures every animal raised for or used in production of meat, dairy, egg, honey, and other products has access to clean water, fresh air, appropriate feed, an appropriate physical environment, and adequate health care. Animals on Fair Trade farms are provided with the environment, housing, and diet they need to engage in natural behaviors, thereby promoting physiological and psychological health and well-being.
<urn:uuid:c66f62e9-2735-42dd-acd3-c46095793ce5>
CC-MAIN-2016-26
http://www.thedfta.org/about/vision-mission-and-principles/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00143-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947998
1,769
2.53125
3
An 8-b, 1.8 V, 20 MS/s analog-to-digital converter
Ever-increasing market-driven demand for System-on-a-Chip (SOC) integration means that analog and RF circuits should operate alongside their digital counterparts at low voltage levels with low power dissipation, high manufacturing yield, high noise immunity, and few, if any, off-chip components. Specific design practices and circuit techniques are used to achieve these goals. Utilizing simple low-gain amplifiers and calibration techniques, maximum converter bandwidth and minimum power consumption are achieved. Errors introduced by circuit techniques and non-ideal effects are compensated for through smart design and digital error correction. The resulting ADC is conducive to system integration without trading off circuit performance and also alleviates the need for special circuit considerations such as separate power supplies or external circuit components.
- Electrical engineering
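As general background for the figures quoted in the title (and not a result from the thesis itself), the ideal quantization-limited signal-to-noise ratio of an N-bit converter follows the standard textbook rule SNR = 6.02N + 1.76 dB. A quick check in R:

# Ideal quantization-limited SNR for an N-bit ADC (standard textbook rule,
# not a figure taken from this thesis).
ideal_snr_db <- function(n_bits) 6.02 * n_bits + 1.76
ideal_snr_db(8)   # ~49.9 dB for an ideal 8-bit converter
20e6 / 2          # a 20 MS/s sampling rate implies a 10 MHz Nyquist bandwidth

Real converters fall below this ideal figure once amplifier gain errors and other non-idealities, of the kind the abstract says are corrected digitally, are taken into account.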
<urn:uuid:6c81dde5-4fa5-4a4d-b477-b3ff66c24b05>
CC-MAIN-2016-26
https://digital.lib.washington.edu/researchworks/handle/1773/6050
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00025-ip-10-164-35-72.ec2.internal.warc.gz
en
0.898676
174
3
3
Tigers (Panthera tigris) are mammals of the Felidae family, one of four "big cats" that belong to the Panthera genus, and the largest of all cats, living or extinct. Tigers are predatory carnivores. Most tigers live in forests and grasslands (for which their camouflage is ideally suited). Of all the big cats, only the tiger and jaguar are strong swimmers, and tigers may often be found bathing in ponds, lakes and rivers. Tigers hunt alone, and their diet consists primarily of medium-sized herbivores such as deer, wild pigs, and buffalo, but they will also take larger or smaller prey if the circumstances demand it. Humans are probably the tiger's only predator, often illegally killing tigers for their fur or their penises, believed to be aphrodisiacs. Because of habitat destruction and poaching for its fur, tiger numbers have declined, and the species has been placed on the endangered species list. The tiger is one of many animals at the top of the food chain. Different subspecies of tiger have somewhat different characteristics. In general, male tigers may weigh between 150 and 310 kilograms (330 lb and 680 lb) and females between 100 and 160 kg (220 lb and 350 lb). The males are between 2.6 and 3.3 metres (8'6" and 10'9") in length, and the females are between 2.3 and 2.75 metres (7'6" and 9') in length. Of the more common subspecies, Corbett's Tigers are the smallest and Amur Tigers the largest. The ground colour of the coat may be any colour from yellow to orange-red, with white areas on the chest, neck, and the inside of the legs. A common recessive variant is the white tiger, which may occur with the correct combination of parents; they are not albinos. Black or melanistic tigers have been reported, but no live specimen has ever been recorded. Also in existence are golden tabby tigers (also called "golden tigers" or "tabby tigers") which have a golden hue, much lighter than the colouration of normal tigers, and stripes that are brown. This variation in colour is very rare, and only a handful of golden tabby tigers exist, all in captivity. There are also old texts referring to 'blue' or 'Maltese' tigers, actually a silvery-grey tone, though no reliable evidence has been found. The stripes of most tigers vary from brown/grey to pure black, although white tigers have far fewer apparent stripes. The form and density of stripes differ between subspecies, but most tigers have in excess of 100 stripes. The now extinct Javan Tiger may have had far more than this. The pattern of stripes is unique to each animal, and thus could potentially be used to identify individuals, much in the same way as fingerprints are used to identify people. This is not, however, a preferred method of identification, due to the difficulty of recording the stripe pattern of a wild tiger. It seems likely that the purpose of stripes is camouflage, serving to hide these animals from their prey (few large animals have colour vision as capable as that of humans, so the colour is not so great a problem as one might suppose). Tigers overpower their prey from any angle, usually from ambush, and bite the neck, often breaking the prey's spinal column or windpipe, or severing the jugular vein or carotid artery. Powerful swimmers, tigers are known to kill prey while swimming. Some tigers have even ambushed boats for the fishermen on board or their catch of fish. The tiger has certainly managed to appeal to man's imagination.
Both Rudyard Kipling in The Jungle Books and William Blake in his Songs of Experience depict the tiger as a ferocious, fearsome animal. In The Jungle Books, the tiger Shere Khan is the biggest and most dangerous enemy of Mowgli, the uncrowned king of the jungle. Even in the Bill Watterson comic strip, Calvin and Hobbes, Hobbes the tiger sometimes escapes his role as a cuddly animal. At the other end of the scale there is Tigger, the tiger from A. A. Milne's Winnie the Pooh stories, who is always happy and never induces fear. In the award-winning A Tiger for Malgudi, a Yogi befriends a tiger. A stylized tiger was the mascot of the 1988 Summer Olympic Games in Seoul. Tigers have been used in advertising such commodities as gasoline and breakfast cereal in long-standing advertising campaigns. Most recently, Yann Martel won the Man Booker Prize in 2002 with his novel Life of Pi, about an Indian boy cast away on the Pacific Ocean with a Royal Bengal Tiger. This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Tiger".
<urn:uuid:c997e034-06be-4407-969a-3abebde8bd6b>
CC-MAIN-2016-26
http://sheppardsoftware.com/Asiaweb/factfile/Unique-facts-Asia11.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00102-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958852
1,004
3.78125
4
A rootkit is a type of software designed to hide the fact that an operating system has been compromised, sometimes by replacing vital executables. Rootkits allow viruses and malware to "hide in plain sight" by disguising themselves as necessary files that your antivirus software will overlook. Rootkits themselves are not harmful; they are simply used to hide malware, bots and worms. Rootkits get their name from the Unix term for the primary administrator account, called "root," and "kits," which refer to the software pieces that implement the tool. To install a rootkit, an attacker must first gain access to the root account by using an exploit, or by obtaining the password through cracking or social engineering. Rootkits were originally used in the early 1990s and targeted UNIX operating systems. Today, rootkits are available for many other operating systems, including Windows. Because rootkits are activated before your operating system even boots up, they are very difficult to detect and therefore provide a powerful way for attackers to access and use the targeted computer without the owner's notice. Due to the way rootkits are used and installed, they are notoriously difficult to remove. Rootkits today usually are not used to gain elevated access, but instead are used to mask malware payloads more effectively.
<urn:uuid:12f0e254-6413-4d75-8850-a87ffc385fd3>
CC-MAIN-2016-26
http://www.pctools.com/security-news/what-is-a-rootkit-virus/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00010-ip-10-164-35-72.ec2.internal.warc.gz
en
0.897364
437
3.21875
3
If you’ve clicked on this link, you’re obviously interested in a slightly more in-depth explanation of prismatic structures and how they work. But first, let’s start at the beginning… the goal of any optical assembly is three-fold: provide high-quality lighting of the space, minimize glare, and maximize the utilization of light from the source. So, the objective of fixture design simplifies to the strategic placement of optical components to control the direction of light from the bare source. This mission can obviously be accomplished in an infinite number of ways, as demonstrated by the diversity of fixture construction, both today and in the past.

So why prismatic structures? The answer lies in the ability of these structures to provide a highly efficient and effective luminaire. They are efficient because they are made of transparent media, with minimal absorption of light energy. A typical glass refractor assembly can deliver as much as 95% light throughput! They are effective because they provide the ultimate flexibility to aim the light in virtually any direction. Look around this web site and you will see the myriad of different light distributions that are possible.

How do prisms "refract" light?
Prisms work on the principle of refraction, which is Latin for "to turn aside, or bend". As light enters a transparent medium of greater density than air, the laws of physics dictate that the beam undergoes a slight bending. Without getting too technical, this happens because light travels at different speeds in different media, so the beam changes direction at the boundary. The degree to which the light bends depends on a number of factors… the properties of the materials at the interface (e.g., air/glass), the angle of approach to the surface, and the shape of the prismatic structure.

How do prisms "reflect" light?
As counterintuitive as it may seem, prisms can actually be used to reflect light! Light traveling in a medium denser than air (e.g., glass) can literally be trapped within the medium! This phenomenon is called Total Internal Reflection (TIR for short). By shaping and orienting a prism in a particular way, we can take advantage of TIR to reflect light. This same phenomenon of TIR is used in fiber optics to allow transmission of signals over hundreds of miles in telecommunication applications.

Why "glass" refractors?
With all of the materials available today (plastics, acrylics, polycarbonate), Holophane has chosen to focus its intellectual energy on borosilicate glass for one simple reason… YOU, THE CUSTOMER! Glass is actually a very difficult material to work with in manufacturing, but we have chosen to invest heavily in this technology since it has such great economic advantages in application. Here are just a few of the advantages…
- Longevity – SiO2 (sand) just doesn’t degrade over time!
- UV impervious – sunlight and lamp energy don’t affect it.
- Temperature resistance – typical fixture temps are way below melting point.
- Thermal shock – borosilicate glass is almost impervious to temperature swings.
- Chemical resistance – remember the containers in chemistry class?

ISD SuperGlass™, available exclusively from Holophane, is the next generation of, and first technological breakthrough in, prismatic lighting technology. Holophane has redesigned the prism, and redefined the glass manufacturing process, in such a way that our reflectors can now supply more light, more efficiently than ever before.
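For readers who want to put numbers on the refraction and total internal reflection described above, here is a minimal sketch of Snell's law in R; the refractive index of 1.47 is an approximate handbook value for borosilicate glass, not a Holophane specification.

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# Returns the refracted angle in degrees, or NA when the ray is
# totally internally reflected (sin(theta2) would have to exceed 1).
refract <- function(theta1_deg, n1, n2) {
  s <- n1 * sin(theta1_deg * pi / 180) / n2
  if (abs(s) > 1) return(NA)        # total internal reflection
  asin(s) * 180 / pi
}

n_glass <- 1.47                     # approximate borosilicate index (assumption)

refract(30, 1.00, n_glass)          # air to glass: bends toward the normal (~19.9 deg)
refract(30, n_glass, 1.00)          # glass to air: bends away from it (~47.3 deg)
refract(50, n_glass, 1.00)          # glass to air at 50 deg: NA, the ray is trapped

asin(1.00 / n_glass) * 180 / pi     # critical angle for TIR, ~42.9 deg

The last two lines show why a properly shaped prism can act as a mirror: any ray striking the glass-air boundary beyond the critical angle is reflected back into the glass, which is exactly the TIR effect exploited in prismatic reflectors and fiber optics.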
Already leading the pack in lighting performance, Holophane’s glass luminaires are now taking a quantum leap forward to never before seen levels of lighting proficiency. With ISD SuperGlass from Holophane you can expect:
- Up to 28% more light to the task than the next best alternative
- Up to 59% energy savings
- More light with far fewer fixtures installed
- Lower installation costs
- Lower annual energy costs
- Reduced maintenance and relamping costs
- Better lighting uniformity
- Lower long-term cost of ownership
For more detailed information on ISD SuperGlass, click the links below to download our new SuperGlass PDF Brochure and our ISD SuperGlass video, “Fundamental Shift”… complete with a virtual plant tour of the glass manufacturing process!
ISD SuperGlass brochure
<urn:uuid:56438d5c-efb3-4db2-aea8-16235a0650c8>
CC-MAIN-2016-26
http://www.holophane.com/education/tech_docs/prism/poweroftheprism.asp
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00182-ip-10-164-35-72.ec2.internal.warc.gz
en
0.902583
887
4.25
4
June 17, 2007
Using a Theoretical Ecospace to Quantify the Ecological Diversity of Paleozoic and Modern Marine Biotas
By Novack-Gottshall, Philip M

Abstract.-The process of evolution hinders our ability to make large-scale ecological comparisons-such as those encompassing marine biotas spanning the Phanerozoic-because the compared entities are taxonomically and morphologically dissimilar. One solution is to focus instead on life habits, which are repeatedly discovered by taxa because of convergence. Such an approach is applied to a comparison of the ecological diversity of Paleozoic (Cambrian-Devonian) and modern marine biotas from deep-subtidal, soft-substrate habitats. Ecological diversity (richness and disparity) is operationalized by using a standardized ecospace framework that can be applied equally to extant and extinct organisms and is logically independent of taxonomy. Because individual states in the framework are chosen a priori and not customized for particular taxa, the framework fulfills the requirements of a universal theoretical ecospace. Unique ecological life habits can be recognized as each discrete, n-dimensional combination of character states in the framework. Although the basic unit of analysis remains the organism, the framework can be applied to other entities-species, clades, or multispecies assemblages-for the study of comparative paleoecology and ecology. Because the framework is quantifiable, it is amenable to analytical techniques used for morphological disparity. Using these methods, I demonstrate that the composite Paleozoic biota is approximately as rich in life habits as the sampled modern biota, but that the life habits in the modern biota are significantly more disparate than those in the Paleozoic; these results are robust to taphonomic standardization. Despite broadly similar distributions of life habits revealed by multivariate ordination, the modern biota is composed of life habits that are significantly enriched, among others, in mobility, infaunality, carnivory, and exploitation of other organisms (or structures) for occupation of microhabitats.

Ecological communities, however, do exist, but what are linked in them by biotic factors are not the faunistic units, the species, but the ecological units, the life forms. -G. Thorson (1957: p. 470)

Though the technical difficulties are very great, they could probably be solved by anyone who really wanted to compare the furry growth of diatoms on a stone in a stream with the larger-scale patches of woodland that have about the same sort of uniformity when viewed from an airplane. -G. E. Hutchinson (1965: p. 77)

Is the modern marine biota composed of the same life habits as ancient ones? Which biotas are ecologically more diverse, in terms of both the number of life habits and the disparity (similarity) of these life habits? These are basic questions that ought to be answerable quantitatively by comparative paleoecologists. I will argue below that the answers to these and similar questions are impeded by a methodological limitation in our ability to compare communities (or other ecological entities) when they are separated by vast expanses of time and space and when they share few or no evolutionary homologies. Their solution hinges on the ability to compare quantitatively all kinds of entities directly on the basis of their ecological capabilities.
Taxonomy has remained a typical yardstick for such comparisons.
It has formed the dominant basis for comparing the structure of Paleozoic and Recent communities (Bretsky 1968; Ziegler et al. 1968; Walker and Laporte 1970; Levinton and Bambach 1975; West 1976; Miller 1988; Radenbaugh and McKinney 1998). Although all of these studies considered various ecological characters (e.g., trophic guilds, abundance), their primary impetus was the presence of taxonomically similar entities. The underlying assumption when using taxonomy in this way is that the ecological characters of taxonomic groups are conserved during evolution, such that taxonomy acts as shorthand for ecology. Although this may be generally true at low taxonomic levels, and occasionally high ones (Webb et al. 2002), there are many exceptions. For example, Fauchald and Jumars (1979) noted stark population-level differences within individual species of polychaetes, and Stanley (1968,1972) and Miller (1990) noted widespread life habit convergence among bivalve orders. As a general rule, Peterson et al. (1999) demonstrated that conservatism is less likely above the familial level. Thus, although taxonomic comparisons may be suitable for documenting the ecological organization of taxonomically similar communities, such a basis is not useful when comparing taxonomically disparate communities. In short, taxonomy is an indirect, and potentially misleading, proxy for getting at ecological questions. Morphology has been another vehicle for ecological comparisons (Van Valkenburgh 1985, 1988, 1991, 1994; Foote 1996b; Wainwright and Reilly 1994; Van Valkenburgh and Molnar 2002; Lockwood 2004). The general premise of ecomorphology is that morphology can be used as a proxy for the ecological characters of organisms. Such correspondence has been well supported (e.g., Winemiller 1991; Wainwright 1994). However, there seems little potential in using these methods for large-scale comparisons spanning phyla and long time scales because of the lack of appropriate homologous characters. The most ambitious comparisons include Paleozoic and Recent arthropods (Briggs et al. 1992; Wills et al. 1994; Stockmeyer Lofgren et al. 2003) and animal skeletons (Thomas and Reif 1993; Thomas et al. 2000). There are few homologous (and even functionally comparable) morphological characters shared throughout benthic communities composed of green algae, foraminifera, corals, trilobites, bryozoans, brachiopods, and bivalves. It is essential to focus such comparisons on ecological characters directly, instead of on their underlying morphology or their consequences for taxonomy. It is important here to understand what I mean by the term ecological character. We can start with the understanding that each organism exhibits unique phenotypic features (sensu Bock and von Wahlert 1965) that affect environmental interactions. Collectively, these phenotypic features endow each organism with ecological capabilities or characters (faculties sensu Bock and von Wahlert 1965). For now, I will focus on those autecological characters related to feeding, use of space, mobility, dispersal, reproduction, and body size; taken together, these describe an organism's basic life habit. Ecological diversity, regarded as the overall variety of life habits within some group, can be most easily assessed by richness, the number of unique life habits in this group. It can also be assessed by ecological disparity, a measure of how different each life habit is from others in this group (modified from Foote 1993a). 
I propose below a common framework for such characters and formal definitions for ecological richness and disparity. Focusing on such ecological characters directly has two benefits. First, it avoids the problems of homology associated with morphological comparisons. Because distinct phenotypes can perform identical functions in numerous ways (Bock and von Wahlert 1965; Alfaro et al. 2004, 2005; Wainwright et al. 2005; Marks and Lechowicz 2006), there exists in nature an innate tendency for ecological convergence when emergent capabilities are beneficial. In a sense, such higher-order capabilities are "screened off" (sensu Brandon 1984) from their underlying morphological and functional causes. These characters are accordingly more suitable for large-scale ecological comparisons. This may diminish, although not eliminate, the role of phylogenetic effects (Felsenstein 1985b; Harvey and Pagel 1991). Second, compared with analyses using the proxies of taxonomy or morphology alone, such a focus better aligns results with the theoretical understanding of ecological diversifications (Grant 1999; Schluter 2000; Coyne and Orr 2004). Such benefits motivated the development of the guild concept. Originally focused on comparisons among taxa sharing diet and foraging habits (Root 1967), it was later modified to include other categories-microhabitat, locomotion, ecomorphology, timing of reproduction and daily activities, among others (Schoener 1974; Bambach 1983, 1985; Simberloff and Dayan 1991). Many studies have compared individual ecological characters over long time scales, including tiering (the stratification of infauna and epifauna; Thayer 1979, 1983; Ausich and Bottjer 1982; Bottjer and Ausich 1986; Droser and Bottjer 1989, 1993), insect feeding habits (Labandeira and Sepkoski 1993), energetic consumption (Vermeij 1999; Bambach 1993, 1999; Bambach et al. 2002), and body size (Smith et al. 2004), among others. Various individual characters related to escalation-chiefly carnivory, infaunality, and mobility-have been a recurrent focus (Vermeij 1977, 1987; Signor and Brett 1984; Kowalewski et al. 1998, 2006; Kosnik 2005; Madin et al. 2006; Aberhan et al. 2006). The most ambitious multivariate guild attempt was conducted by Bambach (1983, 1985) in a series of studies comparing the ecology of Sepkoski's (1981) three evolutionary faunas. Using a three-dimensional framework defined by foraging habit, microhabitat, and mobility, he concluded that the timing of marine ecological diversification throughout the Phanerozoic was irregular and coincided with the diversification of successive evolutionary faunas, primarily resulting in increased utilization of previously vacant ecospace. These conclusions have withstood more recent analyses using broader ecological characters (Bambach et al. 2007; Bush et al. 2007). Bambach's framework has been influential (Aberhan 1994; Bottjer et al. 1996; Droser et al. 1997; Radenbaugh and McKinney 1998), but different qualitative frameworks also exist. For example, the Evolution of Terrestrial Ecosystems consortium (Behrensmeyer et al. 1992, 2003; especially Wing 1988; Wing and DiMichele 1992; Damuth et al. 1992) used a comprehensive framework for comparing terrestrial communities. Retallack (2004) also presented a framework focused on general ecological strategies.
Although such approaches are well suited to identifying synoptic ecological trends, they primarily are limited to making descriptive, qualitative comparisons or statistical comparisons of isolated ecological characters. A synthetic quantitative framework, while also allowing such analyses, is preferable for several reasons. First, it facilitates more robust documentation of overall changes in ecospace utilization (Bambach 1983, 1985, 1993). This can benefit our understanding of the previously mentioned univariate trends because their causes are likely intricately related to other ecological characters that this method captures simultaneously. Second, quantification makes it possible to determine the structural components of individuals occupying ecospace (Van Valen 1974). That is, it allows measurement of the central location, dispersion (disparity), and distribution of all individuals' life habits in the multidimensional space defined by the ecospace framework. Of equal importance, it allows recognition of those ecological regions that are not occupied by individuals-either currently or in the past. Finally, quantification fosters the development of mechanistic null models that can test both the robustness of observed trends and distinguish among their possible causes (McShea 1994, 1998; Foote 1996a; Ciampaglio et al. 2001; Pie and Weitz 2005). The proposed framework marks the first framework suitable for such large-scale, quantitative comparisons. Such motivations drove the quantification of morphological disparity (Gould 1989, 1991; Briggs et al. 1992; Foote and Gould 1992; McShea 1993; Wills et al. 1994). Given the success of these approaches (Saunders and Swan 1984; Foote 1991a,b, 1992, 1993a,b, 1994, 1995, 1996a,b, 1999; Thomas and Reif 1993; Wagner 1995, 1997; Wills 1998, 2002; Lupia 1999; Smith and Lieberman 1999; Eble 2000; Thomas et al. 2000; Ciampaglio et al. 2001; Ciampaglio 2002; Harmon et al. 2003; Stockmeyer Lofgren et al. 2003; McClain et al. 2004; Villier and Korn 2004; Collar et al. 2005), quantification of ecological diversity seems to offer profound benefits. In this study, I propose a general method for quantifying ecological diversity that unites an extended framework of Bambach (1983, 1985) and the methodological advances of morphological disparity (see Foote 1991a; Wills 2002). The modified framework consists of 60 ecological character states that are universally applicable to extant or extinct organisms and that are logically independent of taxonomy; in this sense, the framework constitutes a theoretical ecospace. It allows quantification of ecological richness and disparity directly for any entity-individuals, lineages, or entire communities. The framework and the methods used in analyzing it are suitable for answering many questions in comparative paleoecology. Here it is used to compare the ecological diversity of Paleozoic (Cambrian through Devonian) and modern biotas from deep-subtidal, soft-substrate habitats in terms of ecological (life habit) richness, disparity, and overall distributions of life habit gradients in ordination-space.

Paleozoic and Modern Data Sets
The biotas used here represent assemblages from deep-subtidal, soft-substrate habitats. The Paleozoic biota comprises 449 samples compiled from 167 references, including nearly 80,000 individual fossils (an underestimate considering only one-quarter of samples have abundance data) and more than 3500 species ranging in age from Cambrian through Devonian (Novack-Gottshall 2004).
The modern biota comprises 50 samples compiled from three references in the literature. Ten samples were selected at random from comparable habitats along the western North Atlantic-five samples from the Mid-Atlantic Bight (Lynch et al. 1979) and five from the Beaufort Shelf (Day et al. 1971)-totaling more than 8000 individual organisms and 450 species from the Boreal Province on an outer continental shelf margin. Although these samples are from the same habitat as the Paleozoic samples, the temperate, oceanic shelf does not represent the same latitude as most Paleozoic samples. To account for this difference, 40 samples were also selected at random from appropriate habitats in the tropical, epeiric Gulf of Carpentaria (Australia) (Long et al. 1995), totaling more than 91,000 individuals and 400 species.

The Ecospace Framework
The life habits of the taxa in the biotas were operationalized by using the following standardized ecospace framework criteria. It is important to note that although the framework is well suited for comparing such marine biotas, it is equally well suited for characterizing the life habits of other ecological groups; the explanations that follow draw on examples from the full spectrum of life, both extinct and extant and representing most habitats. Characters in the Framework.-The framework (Table 1; see also Appendix A online at http://dx.doi.org/10.1666/pbio06054.s1) includes 60 character states in 27 characters that describe the basic autecological capabilities of organisms. Characters include (1) resources, such as diet and microhabitat; (2) structures, behaviors, or other features related to the acquisition, maintenance, or defense of these resources, such as foraging, mobility, and substrate attachment; and (3) other important autecological characters, including body size, physiology, and reproduction. Depending on the scope of analysis, some researchers (especially macroecologists and paleobiologists) may be inclined to add geographic range, abundance, or other emergent (statistical, sensu Maurer 1999) group characters to this list (Peters 1983; Brown 1995; Gaston and Blackburn 1996; Maurer 1999). Adopting cladistic terminology, the term character refers to individual classes of ecological capabilities (faculties sensu Bock and von Wahlert 1965), whereas character state denotes the possible types of these capabilities (Swofford et al. 1996). The characters were chosen according to four criteria. First, the characters must be ecologically important for living organisms. Habitat and dietary characters are given greater emphasis-that is, there are more characters-because of their recognized importance (Schoener 1974). Second, the characters must be logically independent of one another; that is, they refer to different components of life habits. This is a requirement of all theoretical multidimensional morphospaces (McGhee 1999) and even cladistics (Swofford et al. 1996). In reality, correlations may exist and can be investigated a posteriori, but the assumption here is that all character combinations are possible-even if never realized because of constraints (Seilacher 1970). Third, the characters must be assignable to ancient taxa, including long-extinct species with no living relatives or morphological analogues. Reliance on taxonomic information has been minimized by focusing on general-and consequently often convergent-ecological capabilities of organisms instead of on particular, often taxon-specific adaptations.
Fourth, the individual states for each character must be fully subdivided. For example, a fluidic substrate (Table 1) is a valid substrate state because it represents a logical absence of a substrate, used by organisms that do not inhabit lithic or biotic substrates; see Appendix A for further examples.

TABLE 1. Twenty-seven characters (bold) and 60 states (numbered) in ecospace framework. Characters listed in parentheses are not easily determined for many fossil groups.

The ecospace framework does not include synecological characters, except when an organism's autecological characters necessarily imply some form of interaction. For example, carnivores are categorized only as meat eaters, and not with regard to their particular prey. In other words, character states referring to particular organisms-trilobite eaters, nectar eaters, and the like-were avoided because they limit comparisons to particular times when that dietary item was extant. This may limit the framework's utility for some comparisons, but it is a prerequisite if comparisons are to be made across wide taxonomical, morphological, and ecological ranges. Modifications of this framework are possible depending on the objectives of the study. A comparison spanning the history of life on Earth might find the character states carnivore, herbivore, and fungivore too restrictive; a replacement with chemoheterotroph might prove more useful. Unlike the Skeleton Space (Thomas and Reif 1993; Thomas et al. 2000), this list is provisional and not intended to be fully inclusive. Although it is applied below to marine biotas, it is intended to characterize universally the significant autecological capabilities of all organisms in any habitat. Additional characters can be devised when such information is available or when a study requires them. Reproductive strategies, seasonality, daily cyclicity, food size, and numerous biogeochemical and physiological characters are important ecologically (Schoener 1974; Pianka 2000), but this information is not available for most fossil species, and so it is not included in the present treatment. Some possible candidates are listed here (Table 1 and Appendix A, in parentheses) with the hope they will be included in future comparisons. Similarly, it may sometimes be necessary to limit the number of characters and states if relevant information is not available. In the examples that follow, for instance, only 44 character states are used because of the current limitations of using fossilized species. Coding of Character States.-Unless noted, the term individuals in the following refers to individual ecological entities-individuals or species-whereas the term groups refers to more inclusive groups-communities and lineages. Most character states are binary, coded 0 for absent and 1 for present. Several characters-body size, microhabitat stratification and others-are coded as continuous, ordered, multistate characters by using integers (or fractions if deweighting is preferred; Sneath and Sokal 1973; Van Valen 1974). Such multistate characters are used only when there is a clear ordination among their states. When a state cannot be confidently assigned or is unknown currently, it can be coded as unknown. Such codings, however, reflect only a lack of knowledge rather than nonrelevance; in principle, all states can be coded. This method of coding, in which multistate characters are the exception, might seem to warrant further explanation.
In most cladistics or morphological disparity studies, the characters are typically homologous, with each individual displaying a single phenotype. In contrast, individuals are more variable ecologically, marked by behavioral flexibility, generalism, and convergence with unrelated individuals (Peterson et al. 1999; Losos et al. 2003). This can be notably true for sexually dimorphic species (e.g., Pietsch 1976, 2005). The coding scheme used here accommodates such variability by allowing single individuals to be coded with multiple states in the same character. For example, semi-infaunal individuals, such as trees and some mussels, can be coded as living simultaneously above and within the sediment. (Other common ecological and behavioral capabilities best described by multiple character states for the same individual-hermaphroditism, parthenogenesis, substrate and microhabitat generalism, omnivory, among others-are discussed further in Appendix A.) In cases where individuals typically utilize a primary resource, even when capable of using others, only the primary resource is coded; this is the same method used to classify guilds (Root 1967). A limitation of such flexibility in coding is that every individual must exhibit at least one state for every character. In other words, no individual-while alive-lacks some diet, some microhabitat, or some body size. This has analytical consequences, namely that not every combination in the framework is possible. Although individuals can undergo changes in their ecological characters throughout their life cycle-most notably due to metamorphosis or allometry-they are coded from the perspective of adult, sexually mature organisms, where ecological characters are usually most stable. Organisms with indeterminate growth are coded at the attainment of sexual maturity. Entire colonies are treated as individuals. Depending on the goals of a study, one could focus on each colony member individually, include individual genders or age or life stages separately, or code individuals within a population separately. Some characters, such as absolute body size and microhabitat stratification, are scale-independent and coded according to absolute criteria. However, because the ecospace framework has an autecological focus, most characters are coded from the perspective of the individual organism (or colony). An example is an organism's immediate substrate, which may be rather different from the primary substrate of the focal habitat. Organisms that live cryptically within the cavities of coral reefs or endoparasitically within another organism may be both above substrates in a primary sense (i.e., situated above the sediment-water interface), but within substrates in an immediate sense (i.e., inhabiting a crevice or tissue). Because of this versatile perspective and the broad nature of the characters, the ecospace framework transcends the limitations of scaling: any ecological entity can be compared.

TABLE 2. Utility of ecospace for describing benthic microhabitats. If using just three characters (primary microhabitat, immediate microhabitat, and substrate) with several character states (in italics), it is possible to describe twelve unique microhabitat combinations. Although existing terminology exists for most combinations, the three characters are more succinct and more broadly applicable for describing them. Additional combinations are possible; for example, it is possible to occupy multiple states simultaneously.
The same classification can be used for other focal habitats; in this example, the habitat is the benthic one with the sediment-water interface as the primary substrate. See text for further discussion.

This ecospace framework allows the discovery of combinations that are unoccupied in nature, such as the microhabitat that is within the benthic sediment but above the water; although this may seem an unlikely life habit, it might be possible to imagine an organism that floats or flies in gas-filled chambers within a burrow network. The Framework as a Theoretical Ecospace.-Ecological terms used in common classifications (e.g., Hunt 1925; Yonge 1928; Elton and Miller 1954; Turpaeva 1957; Jennings 1965; Walker 1972; Walker and Bambach 1974; West 1977; Bambach 1983; Merritt and Cummins 1996; Taylor and Wilson 2002) were used in the framework only when they were defined by a single character. For example, the common term deposit feeder was avoided because it conflates diet with feeding microhabitat, while implying aspects of mobility, resting microhabitat, foraging strategy, and sometimes even body size (Plante et al. 1990). The framework is versatile, however, in recognizing such common life habits through combinations of relevant states. In this way, the framework reduces the number of ecological terms needed to describe different life habits. Consider, for example, the number of words describing microhabitats (Elton and Miller 1954; West 1977; Taylor and Wilson 2002). If the substrate of the focal habitat is the sediment-water interface, three independent characters alone-primary microhabitat, immediate microhabitat, and immediate substrate-describe a dozen microhabitats (Table 2). Epibenthic organisms live on lithic sediment above the sediment-water interface in both a primary and an immediate (i.e., at the scale of the organism) sense. Cryptobionts and some miners live above the primary substrate, but within a lithic immediate substrate. The various parasites, epibionts, borers, and nestlers have similar relationships to a biotic substrate. Additional character states can further partition each broad microhabitat, with other combinations also possible; for example, semi-infaunal bivalves live above and within the primary sediment simultaneously (Stanley 1970). The same framework accommodates terrestrial and oceanic microhabitats by changing the primary substrate of the focal habitat from sediment-water interface to ground or water's surface, respectively (see discussion of microhabitat characters in Appendix A).

TABLE 3. Example of ecospace framework coding. Taxa 1-5 are modern species and taxa 6-10 are Paleozoic species. Only 44 characters and states for which reliable information is available are used. The numbers and order of the character states follow that in Table 1 and Appendix A. For binary characters, a value of 1 designates the presence of an ecological character state.
See Tables A1-6 for designation of multistate character states in characters 51-55; when measuring disparity in the text, such states have been deweighted to range from 0 to 1.

This example demonstrates an additional feature of the framework: both realized and unrealized ecological combinations are noted a priori. For example, a microhabitat exists in Table 2 within the benthic sediment in a primary sense, but above water in an immediate sense. Although this may seem a logically impossible life habit, it is not. Imagine some organism that floats on the surface of-or perhaps flies in-gas-filled chambers in a submerged burrow network. Similarly unusual ecological habits and microhabitats are common in nature (Darwin 1875; Norell et al. 2001; Rubinoff and Haines 2005; Seilacher 2005). The framework thus constitutes a theoretical ecospace, in the sense of a theoretical state-space defined by its character-dimensions. This is analogous to the term morphospace used by theoretical morphologists (Raup and Michelson 1965; Hickman 1993; Thomas and Reif 1993; Chapman et al. 1996; McGhee 1999; Thomas et al. 2000), with which they share similar methodological approaches and goals. The framework delineates, a priori, the domain of logically possible life habits that could be occupied by all organisms, and that is independent of the actual life habits occupied by organisms. When used in a comparative context, existing life habit complexes-such as deposit feeding-can emerge as outcomes of analyses comparing ecological entities (see below). With the exception noted above, the framework can be fully occupied, at least in theory. By being unconstrained by the life habits occupied by actual organisms, it also points toward life habits that have yet to evolve, that are biomechanically nonfunctional or evolutionarily unfit, or that are developmentally impossible (Raup and Michelson 1965; Seilacher 1970; McGhee 1999). Coding of Individual Organisms and Species.-The framework is equally suitable for coding extant and extinct individuals. For living species (such as those in the modern biota), inferences of basic autecological characters are straightforward, but not without some obstacles (Ricklefs and Miles 1994). Performance studies (e.g., Arnold 1983; Garland and Losos 1994; Wainwright 1994; Irschick 2002, 2003) have been used to great effect in determining how individual morphologies perform functionally. One important consequence of such studies is that the same function can be performed by multiple morphological designs (Alfaro et al. 2004, 2005; Wainwright et al. 2005); such convergence has been identified by using more general ecological characters as well (Marks and Lechowicz 2006). However, such formalized analyses are not usually necessary here because the framework characters are straightforward and often readily inferable (in much the same way as done by Marks and Lechowicz 2006). For fossils (such as those in the Paleozoic biota), direct observation of ecological characters is rare but not impossible (Boucot 1990; see obrution deposits in Brett 1984, 1990; Brett and Baird 1986). Barring direct evidence, ecological characters in fossils are inferred by biomechanical studies, analysis of environmental distribution, and comparison with relatives or living morphological analogues (Rudwick 1964; Stanley 1970; Alexander 1983, 1990; Hickman 1988; Plotnick and Baumiller 2000; Vogel 2003). Despite these varied sources, such inferences remain less powerful than those made with living individuals.
Although specific procedures are not developed here, it is possible to test the sensitivity of results to such coding decisions (see Felsenstein 1985a; Foote 1993a). In cases where characters are unknown currently, states can be coded as unknown. The life habits of all taxa in the databases were coded to the lowest taxonomic level-usually family or genus-for which reliable information was available. Coding decisions have been informed currently by 197 published references. Detailed examples of how two species-one extant and one extinct-were coded with the ecospace framework are found in Appendix B in the supplementary material (http://dx.doi.org/10.1666/pbio06054.s1). Representative codings for ten arbitrarily selected extant and extinct species from the Paleozoic and modern biotas are reported in Table 3: the five extant species are the lingulate brachiopod Glottidia pyramidata, bryozoan Bugula neritina, crab Cancer irroratus, isopod Cirolana polita, and snail Mitrella marquesa; the five extinct, Paleozoic species are the trilobite Isotelus maximus, putative trilobite Naraoia compacta, crinoid Calceocrinus chrysalis, mussel Modiolopsis versaillesensis, and rhynchonellate brachiopod Zygospira modesta.

The Quantification of Ecological Diversity
Ecological Diversity of Organisms and Species.-Because the character states in the ecospace framework are all theoretically independent, the entire ecospace contains more than 10 quintillion unique combinations (1.069 x 10^19) that are theoretically possible (given the previous exception that all individuals occupy some state in each character). Using just the 44 character states that are currently practical with fossils still yields nearly 300 trillion possible combinations (2.993 x 10^14). Once coded, these unique combinations serve as a basic unit of ecological (life habit) diversity that is theoretically independent of taxonomy and morphology. Because they are coded quantitatively, they furthermore offer a rich arsenal for comparative paleoecology. As one example, it is possible to compare the life habits of species from Table 3 as a dendrogram (Fig. 1). Prior to calculating distances, multistate characters were deweighted so each maximum state was equal to 1 and the distances were divided by the square root of 44 character states so that pairwise distances could range from a value of 1 (when two species share no character states in common) to 0 (when the species occupy the same life habit). Although separated by at least 500 Myr, the modern predatory gastropod Mitrella marquesa and the Cambrian putative trilobite Naraoia compacta can be seen to share the same life habit, defined by the ecospace framework (Table 3, Fig. 1). Such correspondence across 44 states is an important feature of this framework, allowing recognition of meaningful ecological similarities, even when the taxa share no such overarching similarities in body plan, morphology, skeleton, taxonomy, or temporal occurrence. This similarity, however, does not preclude other important distinctions in their specific niches. Speed of locomotion, size of prey, specific foraging strategies, and the like could all be different, but it is not possible to determine such distinctions for most fossil taxa. A virtue of this comparative approach is that it may point toward unanticipated ecological similarities or interactions among very distantly related taxa (see Brown and Davidson 1977; Janzen 1977; Reichman 1979) that can be tested with additional research.
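The distance, disparity, and clustering calculations just described are compact enough to sketch in R (the environment cited in the figure captions); the matrix below is a hypothetical toy example, not the actual Table 3 codings.

# Toy life-habit matrix: one row per taxon, one column per binary
# character state (hypothetical data, standing in for Table 3).
habits <- rbind(
  taxonA = c(1, 0, 1, 0, 1, 0),
  taxonB = c(1, 0, 1, 0, 1, 0),    # identical life habit to taxonA
  taxonC = c(0, 1, 0, 1, 0, 1),
  taxonD = c(0, 1, 1, 0, 0, 1)
)

# Standardize so pairwise distances run from 0 (same life habit) to 1
# (no shared states): divide by the square root of the number of states.
d <- dist(habits, method = "euclidean") / sqrt(ncol(habits))

mean(d)                             # disparity as mean pairwise distance

# UPGMA dendrogram as in Figure 1 ("average" linkage is UPGMA in hclust())
plot(hclust(d, method = "average"))

Run on the real 44-state codings (after deweighting the multistate characters), the same few lines yield the standardized distances behind Figure 1 and the mean-distance disparity values quoted below.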
FIGURE 1. Dendrogram of ecological distances between marine taxa in Table 3. Cluster analysis used function hclust() in R 2.3.1 (R Development Core Team 2006) with the UPGMA method and Euclidean distance. Distances were standardized to range from 0 to 1 by deweighting of multistate characters and by dividing distance by square root of number of character states.

Figure 1 reveals several other potentially unanticipated results. There are four life habits-represented by the Mitrella/Naraoia pair, Cancer, Cirolana, and Isotelus-that are similar to each other in being habitually mobile, epifaunal predators. Glottidia and Modiolopsis represent two similar life habits, sharing facultatively mobile, infaunal, filter-feeding abilities; M. versaillesensis is known also to nestle in arborescent bryozoans (Pojeta 1971). The three remaining habits-represented by Zygospira, Bugula, and Calceocrinus-are also similar, sharing sedentary, epifaunal, filter-feeding abilities. Each of these three basic life habit groupings includes modern and Paleozoic taxa, a similarity that would be unlikely if comparisons were based on evolutionary distances, or on their proxies in taxonomy and morphology. The ecospace framework allows analyses to focus solely on ecological characters, and it allows recurrently evolved and complex suites of life habits-raptorial predators, sedentary filter feeders, and the like-to emerge as results of the analysis, rather than assuming such life habit complexes exist a priori. Similar methods can be extended to compare entire biotas, such as the Paleozoic and modern. Ecological Diversity of Multispecies Assemblages and Clades.-The simplest measure of ecological diversity for comparing groups of taxa is ecological richness, defined here as the number of occupied combinations (life habits) in the framework. This is more direct than species richness (Magurran 1988) because it measures actual ecological variation instead of using the proxy of taxonomy (Tilman et al. 1997; Díaz and Cabido 2001; Reich et al. 2004). Another important component of ecological diversity is disparity, a measure of how different the life habits within a group are from one another. The ecospace framework can be used to measure the disparity of individuals within clades or multispecies assemblages (communities). Distance metrics-mean Euclidean distance, range, total variance, and the like (Sneath and Sokal 1973; Van Valen 1974; Foote 1991a; Ciampaglio et al. 2001; Wills 2002)-are most commonly used in the study of morphological disparity and can also be used here. For example, using mean Euclidean distance, the disparity of the modern "assemblage" (0.443, Table 3) is approximately the same as that of the Paleozoic "assemblage" (0.492). This method is used below for comparing entire biotas in the Paleozoic and modern. At even larger scales, it is possible to use the ecospace framework to understand the macroevolutionary history and evolutionary paleoecology of entire lineages. An example is not provided here, but such an approach might allow novel ways to measure ecological diversity, and especially disparity, during evolutionary radiations (Stanley 1968; Valentine 1969, 1995), mass extinctions (Jablonski 1986a; Valentine and Jablonski 1986; Jablonski and Raup 1995), and post-extinction recoveries (Hansen 1988; Jablonski 1998), as well as address the impact of ecological diversity on genus-level longevity (Kammer et al. 1998; Miller and Foote 2003; Liow 2004), onshore-offshore diversification (Jablonski et al.
1983; Sepkoski and Miller 1985; Westrop and Adrain 1998) and other macroevolutionary phenomena (Stanley 1979).

Comparative Paleoecology of the Marine Biosphere: Do Paleozoic and Modern Biotas Exhibit Different Levels of Ecological Diversity?
It is a long-held impression that modern communities are more diverse ecologically than those of the distant past (Hutchinson 1959: pp. 155-156; Valentine 1969, 1973; Vermeij 1977, 1987; Bambach 1983, 1985). The goal here is to use the quantitative ecospace framework proposed above to assess the overall similarity in ecospace occupation in two large biotic groups from a single, deep-subtidal, soft-substrate habitat. This was done by pooling individual genera (using a randomly selected species for each genus) in the Paleozoic (Cambrian-Devonian) database and comparing it to the pooled genera in the modern database. Important differences exist between the fossil and modern samples. For example, the modern ones are not fossilized and they were collected with benthic trawls and dredges. There are also many more Paleozoic samples, covering a much wider temporal duration. Although such differences limit absolute comparisons in ecological diversity (Foote 1992), standardizations used below provide a means to estimate the relative magnitude of ecological differences between Paleozoic and modern biotas. The impact of non-fossilizable organisms was evaluated by comparing the Paleozoic biota to untreated and taphonomically treated modern databases. All-Modern is the entire modern data set-including soft-bodied, fragile, and rarely fossilized taxa. The Taph-Modern treatment includes those genera-primarily mollusks, crustaceans, tubicolous and jawed polychaetes, echinoderms, and bryozoans-nearly always or only occasionally expected to yield fossil representatives (Schopf 1978; Sepkoski 1982, 2002; Kidwell 2001, 2002). First, the ecological diversity of groups is compared by using their ecological (life habit) richness/genus richness relationship based on 2000 bootstrapped iterations for each aggregate group sample. This rarefaction method (Sanders 1968; Hurlbert 1971; Bambach 1983; Foote 1992; Miller and Foote 1996; Gotelli and Colwell 2001) standardizes for differences in sample size both within samples-because all samples in each biota are combined-and between biotas, at least when observing differences between the shape of each resulting ecological richness/genus richness relationship. Error bars were calculated as the standard deviation of the distribution of means (Foote 1993b; Efron and Tibshirani 1993). Ecological disparity within each group was calculated as mean Euclidean distance after deweighting multistate characters and standardizing for number of character states. Significance of differences in richness and disparity was tested with 2000 bootstrap replicates (Efron and Tibshirani 1993). All tests are one-sided unless noted. All statistics and quantitative analyses used R 2.3.1 for Windows (R Development Core Team 2006). Although the All-Modern biota has not been taphonomically treated, its ecological richness/genus richness relationship is only moderately above that for the Paleozoic biota (Fig. 2). This difference is not statistically significant at a standard richness of 400 genera (Fig. 3A; diff_obs = 13.40, diff_crit = 18.00, p = 1.005).
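The rarefaction and bootstrap procedure just described can be outlined in R as follows; this is a simplified stand-in for the published analysis, and the matrix names are hypothetical.

# Bootstrap rarefaction of ecological (life habit) richness: draw n_genera
# rows (genera) with replacement and count the distinct life habits.
rarefy_habits <- function(habits, n_genera, n_boot = 2000) {
  richness <- replicate(n_boot, {
    draw <- habits[sample(nrow(habits), n_genera, replace = TRUE), , drop = FALSE]
    nrow(unique(draw))            # distinct character-state combinations
  })
  c(mean = mean(richness), sd = sd(richness))   # sd gives the error bars
}

# e.g., compare the biotas at a standardized richness of 400 genera:
# rarefy_habits(paleozoic_matrix, 400)
# rarefy_habits(allmodern_matrix, 400)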
This overall similarity is surprising because the fossil record filters out some life habits, differentially preserving ecologically similar taxa (Schopf 1978); this might greatly underestimate the richness of the original Paleozoic biota. The magnitude of such differences might be approximated by the reduced trend using the Taph-Modern treatment (Fig. 2), although the difference here is only marginally significant at a standard richness of 240 genera (diff_obs = 13.47, diff_crit = 14.00, p = 0.054). Measurement of ecological disparity offers a different result. Both when untreated and when taphonomically treated, the modern biota is significantly more disparate than the Paleozoic biota after standardizing for differences in genus richness (Fig. 3B; All-Modern at 400 genera: diff_obs = 0.065, diff_crit = 0.014).

FIGURE 2. Ecological richness/genus richness relationship for modern and Paleozoic (Cambrian-Devonian) deep-subtidal, soft-substrate biotas. Ecological richness is defined as number of life habits. Each curve is a rarefaction (2000 bootstrap iterations) for all samples in the database, pooled by time and taphonomic treatment. Paleozoic includes all Cambrian-Devonian fossil taxa; All-Modern includes all modern taxa, including unfossilizable ones; Taph-Modern includes modern taxa expected to leave a fossil record; see text for further explanation. Error bars are one standard deviation from the distribution of 2000 bootstrapped means. Inset graph highlights the relationship where error bars overlap; error bars are removed to clarify relationships. See text for statistical tests.

FIGURE 3. Ecological richness and disparity of modern and Paleozoic (Cambrian-Devonian) deep-subtidal, soft-substrate biotas. Modern communities include all taxa, including soft-bodied ones unlikely to be fossilized. Distributions produced from 2000 bootstrap iterations at constant genus richness of 400 genera. A, Ecological richness (life habit richness). Although modern communities contain slightly more life habits, the difference is not significant (diff_obs = 13.40, diff_crit = 18.00, p = 1.005), based on 2000 bootstrap iterations. B, Ecological disparity (mean Euclidean distance). Modern communities are significantly more disparate than Paleozoic ones (diff_obs = 0.065, diff_crit = 0.014).

It is increasingly well established (Vermeij 1977, 1987; Bambach 1983, 1985; Bambach et al. 2002; Aberhan et al. 2006; Kowalewski et al. 2006; Madin et al. 2006; Wagner et al. 2006) that the ecospace of modern biotas is rather different from that of Paleozoic biotas. This can also be examined here by using ordination to compare visually the distribution of genus life habits in these biotas. As a nonparametric ordination method, nonmetric multidimensional scaling (NMDS) is appropriate because the ecospace character states are categorical. Furthermore, NMDS is a robust and well-substantiated ordination method (Kenkel and Orloci 1986; Faith et al. 1987; Minchin 1987), especially when resulting gradients are short, as is the case here because all ecospace states have a maximum distance of one unit after deweighting. Metric ordination techniques resulted in nearly identical patterns despite vast algorithmic differences in methodology; the Procrustes sum of squares difference using principal components analysis was just 0.0000015.
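The NMDS step can be sketched with isoMDS() from the MASS package, reusing the standardized distance matrix d from the earlier sketch. The zero-distance adjustment mirrors the one described in the Figure 4 caption below; the two-dimensional solution (k = 2) is an assumption based on the two axes shown in the figures.

# Replace zero distances between identically coded taxa with half the
# smallest nonzero distance, then ordinate in two dimensions.
library(MASS)
dm <- as.matrix(d)
half.min <- min(dm[dm > 0]) / 2
dm[dm == 0] <- half.min
diag(dm) <- 0                          # keep self-distances at zero
ord <- isoMDS(as.dist(dm), k = 2)
plot(ord$points, xlab = "Axis 1", ylab = "Axis 2")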
FIGURE 4. Graphical ordination ecospace for genus life habits in modern and Paleozoic (Cambrian-Devonian) deep-subtidal, soft-substrate biotas. Figure shows two axes from ordination of life habits coded with the ecospace framework. Nonmetric multidimensional scaling was conducted using function isoMDS() in R 2.3.1 (R Development Core Team 2006) with Euclidean distance. To avoid computational errors associated with species with identical life habits, the distance between such species pairs was made equal to one-half of the minimum observed distance between any other species pairs (see function metaMDS() in the vegan library; Oksanen 2006). There are 1376 Paleozoic taxa and 423 modern ones. Many points overlap-that is, taxa share identical life habits-but this overlapping does not obscure the graphical comparison because of the few life habits shared between the Paleozoic and modern biotas.

The overall distribution of Paleozoic and All-Modern life habits is broadly similar in multivariate space (Fig. 4). Many points overlap in this ordination-that is, the taxa share identical life habits-but this overlapping does not obscure the graphical comparison because, as noted below, there are few life habits shared in common between the Paleozoic and modern biotas. To aid interpretation of axes using widely known taxa, Figure 5 demonstrates just the molluscan fraction of these biotas at the class level.

In this two-dimensional, graphical ecospace, both axes represent gradients in suites of life habit combinations broadly interpretable as foraging strategies. High values along axis 1 are associated with taxa with sedentary, particle-feeding, filter-feeding strategies living attached to hard substrates, whereas those with low values are associated with free-living, habitually mobile, carnivorous, bulk-feeding raptors (see Appendix A for definition and discussion of ecospace states). High values along axis 2 are associated with intermittently mobile, microbivorous, particle-feeding mass feeders whose food source is located within primary and immediate substrates; low values are associated with carnivorous, bulk-feeding raptors with epibenthic food sources.

FIGURE 5. Graphical ordination ecospace for modern and Paleozoic (Cambrian-Devonian) mollusks only. Ordination is the same as for Figure 4. Labels identify the major bivalve, cephalopod, and gastropod classes by first initial. Other classes are represented by symbols as in Figure 4. There are 172 Paleozoic taxa and 114 modern ones, and the same circumstances for overlapping points apply as in Figure 4.

Taken together, these gradients delineate a rich variety of unique life habits. The broadly triangular distributions in Figures 4 and 5 provide end-members for each of the archetypal life habit complexes: sedentary, epifaunal filter feeders are found in the lower-right corner; intermittently mobile, infaunal, deposit feeders in the central apex; and mobile, epifaunal and swimming predators in the lower left. However, the gradients also accommodate those life habits that are intermediate between these extremes. For example, corals-those chimerically flower-like microcarnivores that have confounded categorization since Aristotle (Holland 2004)-cluster at the bottom center of the distribution (Fig. 4) because of their bulk-feeding carnivory, their filter-feeding foraging habit, and their sedentary, attached, epifaunal microhabitat.
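Verbal axis interpretations like those above can be checked numerically by fitting the coded characters onto the ordination, for example with envfit() from the vegan package (the library the paper already cites for metaMDS()). This is an illustrative suggestion, not the paper's own procedure; habits.dw and ord are the hypothetical objects from the earlier sketches.

# Fit each ecospace character onto the two NMDS axes; characters with
# high squared correlations are the ones driving the axis gradients.
library(vegan)
fit <- envfit(ord$points, habits.dw, permutations = 999)
fit
plot(ord$points, xlab = "Axis 1", ylab = "Axis 2")
plot(fit, p.max = 0.05)   # overlay only the significant character vectors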
A more complex gradient, apparent along the upper right side of the distributions seen in both Figures 4 and 5, describes the vast spectrum of particle-feeding microbivores. Using examples of modern molluscan genera (Fig. 5), this gradient delineates infaunal deposit feeders (such as the nuculoid Ennucula and the gastropod Turritella) at the apex through a region of siphonate deposit feeders (such as Tellina), mobile infaunal filter feeders (Cyclocardia and Cultellus), attached infaunal filter feeders (Cucullaea), and ends on the lower-right side with attached epifaunal filter feeders (such as the semi-infaunal pterioid Pinna, the adhesive gastropod Crucibulum, and the attached pterioid Anomia). Similarly, the gradient along the left side of this triangle delineates a mobility and predation spectrum dominated by gastropods, with the intermittently mobile microbivores Strombus and Xenophora intersecting the particle-feeding gradient, continuing through facultatively mobile, attachment-feeding (and sometimes solution-feeding) carnivores and ectoparasites (Volva, Eulimella, and Epitonium), and ending at the lower-left corner with archetypal habitually mobile predators (such as Murex and the cuttlefish Sepia). The predatory bivalve Cuspidaria also plots in this region. Similar gradient interpretations and the recognition of taxonomic overlap in life habits can also be observed in Figure 4. The same interpretations result when restricting analysis to the Taph-Modern treatment. Such categorically subtle but biologically real distinctions among life habits are not captured in most traditional ecospace frameworks (cf. Bambach 1983, 1985; Bambach et al. 2007; Bush et al. 2007). An important property of this ecospace framework is that it can make such rich and sometimes subtle life habit distinctions while still permitting the recognition of life habit convergence in unrelated taxa.

Despite the broad overlap in the distributions of Paleozoic and modern life habits, important differences remain. For example, although the entire modern biota is composed of 230 distinct life habits and the Paleozoic of 287, only 17 are shared, and seven result from taxa whose life habits could not be coded completely, such as "sponge indet." Such differences are expected given that modern biotas are enriched in predatory, mobile, and infaunal life habits compared to Paleozoic biotas (Vermeij 1977, 1987; Thayer 1979, 1983; Bambach 1983, 1985; Bambach et al. 2002; Novack-Gottshall and McShea 2003; Aberhan et al. 2006; Kowalewski et al. 2006; Madin et al. 2006; Wagner et al. 2006; Bambach et al. 2007; Bush et al. 2007). This can be substantiated by comparing the distributions of occupied states. The two biotas are significantly different in the occupation of half of the 44 character states (Table 4; Mann-Whitney two-sided tests, total alpha = 0.05 after Bonferroni correction; but see warnings of reduced power [Underwood 1997]); this reduces to 15 significant differences when the Taph-Modern biota is used.
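The character-state comparison reads naturally as a loop over the 44 states; a minimal sketch follows. The matrices paleo.states and modern.states (genera by 44 states, coded as occupied or unoccupied) are hypothetical, as is the exact coding; the test and correction follow the text.

# Two-sided Mann-Whitney (Wilcoxon rank-sum) test per character state,
# with a Bonferroni correction to hold the familywide alpha at 0.05.
p.raw <- sapply(seq_len(44), function(i)
  wilcox.test(paleo.states[, i], modern.states[, i])$p.value)
p.adj <- p.adjust(p.raw, method = "bonferroni")
which(p.adj < 0.05)   # states occupied significantly differently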
Compared with the Paleozoic biota, Taph-Modern is enriched in taxa whose life habits are mobile (although there is no difference among habitually mobile habits), are infaunal (in terms of both primary and immediate microhabitat), exploit other organisms (or structures) to occupy their specific microhabitat, live and feed on food that is further away from the sediment-water interface (either infaunally or epifaunally), are carnivorous, are feeding on dissolved food (frequently as parasites) or intact food, and forage by attaching to or taking in large quantities of food sources. Of these significant differences, only solution- and attachment-feeding should be viewed with caution, because parasitism is sometimes difficult to identify in the fossil record. Because the comparisons are made among inhabitants of the same deep-subtidal, soft-substrate habitat, perhaps it is not surprising that characters related to substrate relationships (states 16-21) and the sources of food (states 30-33) are generally similar. Particulate, microbial diets (states 35 and 41) are the most common manner in which food is eaten in both biotas, but the manner in which this food is acquired is distinct, with Paleozoic taxa using filters and modern ones feeding en masse. Most of these differences are maintained with the All-Modern biota (Table 4), although several additional ones emerge. Because these differences relate to many of the same general foraging characters defining the gradients in Figure 4, it should be expected that the distributions, despite much overlap, are distinct. Indeed, the Paleozoic biota has significantly greater values along the first axis than both All-Modern and Taph-Modern, marking a general shift from sedentary, epifaunal filter feeders to mobile predators (Mann-Whitney one-sided tests).

TABLE 4. Comparison of ecospace character-state occupation among Paleozoic and modern biotas. Mann-Whitney U-test used for comparing state distributions. Statistically significant differences after Bonferroni correction are indicated.

Such differences between the Paleozoic and modern biotas can be due to several causes that the current analysis does not yet resolve. For example, it might be that such differences are the result of the combined accumulation of Paleozoic and modern samples spanning large geographic and temporal ranges. Finer geographic and temporal comparisons might reveal greater similarities (or differences) between modern and Paleozoic assemblages when restricted to certain regions or time intervals. Such temporal variation has been reported (Novack-Gottshall 2004; Madin et al. 2006; Bambach et al. 2007) during the Paleozoic interval considered here, especially from the Cambrian through Ordovician when many carnivorous habits were replaced by filter-feeding ones. The current method allows such finer-scale comparisons to be made quantitatively, even when there are no genera and but one family-the ubiquitous inarticulated brachiopod Lingulidae-shared in common among the ecological entities being compared.

Conclusions and Prospects

The composition of life has changed dramatically during its history (Valentine 1969, 1973; Bambach 1983, 1985; Vermeij 1987), and documenting this change and its ecological and evolutionary consequences remains an important goal. However, traditional methods to investigate these changes have been hindered by their focus on taxonomical or morphological comparisons alone.
By focusing on ecological characters directly, the theoretical ecospace framework presented above serves as an important complement to these approaches. When applied to deep-subtidal, soft-substrate Paleozoic (Cambrian-Devonian) and modern biotas, the framework describes a wide spectrum of important life habits observed in modern and ancient marine biotas. It does so in a standardized and taxon-free (sensu Wing 1988; Damuth et al. 1992) manner that is amenable to comparative analyses of ecological diversity using techniques previously used for morphological disparity. Although the comparison is a broad one, it suggests that the life habits in modern biotas are more ecologically disparate from one another, on average, than were those in the Paleozoic, although both biotas shared generally similar numbers of life habits per genus. The distribution of these life habits overlaps broadly in ordination space, although the modern biota is enriched in carnivorous, actively mobile, and infaunal life habits, among others.

Because the ecospace framework ultimately is coded from the perspective of the individual organism, the framework is suitable for comparing ecological entities existing at extraordinarily different scales or living in different focal habitats. For example, it would be a simple task to compare the biota of the Southern Appalachian ecosystem (Hackney et al. 1992; Martin et al. 1993a,b) to that of a single lake or stream (Hutchinson 1965; Merritt and Cummins 1996; Benz and Collins 1998), an interstitial benthic community (i.e., those living between grains of sand; Fenchel 1978), or even the gut fauna of a single individual (Hungate 1975; Plante et al. 1990). Despite major differences in spatial resolution and habitat diversity between these scales, there might be important similarities in terms of their structural organization. But one might predict major differences between constituent organisms as well, primarily because of the influence of size on an individual's ecological capabilities (Peters 1983; Schmidt-Nielsen 1984; Bonner 2006). For example, to a first approximation, size determines whether the basic functions of life are governed by viscous or inertial forces (Vogel 1994, 2003). For many small organisms-such as agnostid trilobites (Müller and Walossek 1987) and copepods-their spinose or filamentous appendages function more like paddles than rakes (Koehl 1981; Koehl and Strickler 1981), making them bona fide raptors (sensu Appendix A) for their sizes (Vogel 1994). When found in much larger organisms, the same structures function very differently. Size should provide a dominant influence on the ecological constraints of organisms and the manner in which different organisms occupy ecospace.
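The viscous-versus-inertial point is conventionally framed with the Reynolds number, Re = rho * v * L / mu: values well below 1 mean viscosity dominates (appendages work as paddles), values well above 1 mean inertia dominates (they work as rakes or foils). The numbers below are illustrative orders of magnitude only, not measurements from the taxa discussed.

# Reynolds number in seawater (density ~1027 kg/m^3, viscosity ~1.07e-3 Pa s).
reynolds <- function(v, L, rho = 1027, mu = 1.07e-3) rho * v * L / mu
reynolds(v = 1e-3, L = 1e-4)   # ~0.1: copepod-scale appendage, viscous regime
reynolds(v = 0.5,  L = 0.1)    # ~5e4: fish-scale swimmer, inertial regime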
At the largest scale, the ecospace framework offers a means to study the extent to which life-in its enormity-has occupied ecospace (cf. Thomas and Reif 1993). Little attention has been paid to assessing the prodigious ecological varieties exhibited by organisms in this general, theoretical sense (McGhee 1999). Elementary-and essentially unanswered-questions abound. How extensively occupied is ecospace currently, and what degree of lability (sensu Losos et al. 2003) has it exhibited through time (Bambach et al. 2007)? To what degree is this occupation governed by convergent adaptation (Van Valen 1978; Moore and Willmer 1997; Losos et al. 1998; Vermeij and Lindberg 2000; Stayton 2006) and constraints of various kinds (Seilacher 1970; McPeek 2000)? How quickly and to what extent was ecospace filled during the Cambrian radiation and following the Late Permian mass extinction (Valentine 1969, 1995; Erwin et al. 1987; Droser et al. 1997)? Do mechanical constraints of anatomical design result in reduced levels of ecospace filling within terrestrial communities compared to marine ones (Thomas and Reif 1993)? Do equivalent taxonomic ranks-kingdoms, phyla, and classes-occupy similar levels of ecological diversity (Valentine 1969, 1980; Van Valen 1973; Valentine et al. 1991)? Such ideas deserve greater attention because they can point toward important, unrecognized explanations of evolutionary history.

Acknowledgments

I thank D. W. McShea, A. I. Miller, S. E. Novack-Gottshall, R. E. Chapman, C. N. Ciampaglio, D. H. Erwin, M. A. Kosnik, J. D. Marcot, D. L. Meyer, V. L. Roth, W. Wilson, G. A. Wray, and S. Vogel for valuable discussion, support, and inspiration at various stages of development, some long ago. I. R. Poiner and M. Haywood of the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) Marine and Atmospheric Research graciously offered unpublished data on the benthos of the Gulf of Carpentaria, Australia. This paper was strengthened by reviews from M. E. Alfaro, T. K. Baumiller, A. M. Bush, D. H. Erwin, M. Foote, J. B. Losos, and four anonymous reviewers. This study is based, in part, on a portion of my Ph.D. dissertation at Duke University.

Literature Cited

Aberhan, M. 1994. Guild-structure and evolution of Mesozoic benthic shelf communities. Palaios 9:516-545.
Aberhan, M., W. Kiessling, and F. T. Fürsich. 2006. Testing the role of biological interactions in the evolution of mid-Mesozoic marine benthic ecosystems. Paleobiology 32:259-277.
Alexander, R. M. 1983. Animal mechanics. Blackwell Scientific, Boston.
_____. 1990. Dynamics of dinosaurs. Columbia University Press, New York.
Alfaro, M. E., D. I. Bolnick, and P. C. Wainwright. 2004. Evolutionary dynamics of complex biomechanical systems: an example using the four-bar mechanism. Evolution 58:495-503.
_____. 2005. Evolutionary consequences of many-to-one mapping of jaw morphology to mechanics in labrid fishes. American Naturalist 165:E140-E154.
Arnold, S. J. 1983. Morphology, performance and fitness. American Zoologist 23:347-361.
Ausich, W. I., and D. J. Bottjer. 1982. Tiering in suspension feeding communities on soft substrata throughout the Phanerozoic. Science 216:173-174.
Bambach, R. K. 1983. Ecospace utilization and guilds in marine communities through the Phanerozoic. Pp. 719-746 in M. J. S. Tevesz and P. L. McCall, eds. Biotic interactions in recent and fossil benthic communities. Plenum, New York.
_____. 1985. Classes and adaptive variety: the ecology of diversification in marine faunas through the Phanerozoic. Pp. 191-253 in J. W. Valentine, ed. Phanerozoic diversity patterns: profiles in macroevolution. Princeton University Press, Princeton, N.J.
_____. 1993. Seafood through time: changes in biomass, energetics, and productivity in the marine ecosystem. Paleobiology 19:372-397.
_____. 1999. Energetics in the global marine fauna: a connection between terrestrial diversification and change in the marine biosphere. Geobios 32:131-144.
Bambach, R. K., A. H. Knoll, and J. J. Sepkoski Jr. 2002. Anatomical and ecological constraints on Phanerozoic animal diversity in the marine realm. Proceedings of the National Academy of Sciences USA 99:6854-6859.
Bambach, R. K., A. M. Bush, and D. H. Erwin. 2007. Autecology and the filling of ecospace: key metazoan radiations. Palaeontology 50:1-22.
Behrensmeyer, A. K., J. D. Damuth, W. A. DiMichele, R. Potts, H.-D. Sues, and S. L. Wing, eds. 1992. Terrestrial ecosystems through time: evolutionary paleoecology of terrestrial plants and animals. University of Chicago Press, Chicago.
Behrensmeyer, A. K., C. T. Stayton, and R. E. Chapman. 2003. Taphonomy and ecology of modern avifaunal remains from Amboseli Park, Kenya. Paleobiology 29:52-70.
Benz, G. W., and D. E. Collins, eds. 1998. Aquatic fauna in peril: the southeastern perspective. Southeast Aquatic Research Institute Special Publication 1. Lenz Design and Communications, Decatur, Ga.
Bock, W. J., and G. von Wahlert. 1965. Adaptation and the form-function complex. Evolution 19:269-299.
Bonner, J. T. 2006. Why size matters: from bacteria to blue whales. Princeton University Press, Princeton, N.J.
Bottjer, D. J., and W. I. Ausich. 1986. Phanerozoic development of tiering in soft substrata suspension-feeding communities. Paleobiology 12:400-420.
Bottjer, D. J., J. K. Schubert, and M. L. Droser. 1996. Comparative evolutionary ecology: assessing the changing ecology of the past. In M. B. Hart, ed. Biotic recovery from mass extinction events. Geological Society of London Special Publication 102:1-13.
Boucot, A. J. 1990. Evolutionary paleobiology of behavior and coevolution. Elsevier, New York.
Brandon, R. 1984. The levels of selection. Pp. 133-141 in R. Brandon and R. Burian, eds. Genes, organisms, populations: controversies over the units of selection. MIT Press, Cambridge.
Bretsky, P. W. 1968. Evolution of Paleozoic marine invertebrate communities. Science 159:1231-1233.
Brett, C. E. 1984. Autecology of Silurian pelmatozoan echinoderms. In M. G. Bassett and J. D. Lawson, eds. Autecology of Silurian organisms. Special Papers in Palaeontology 32:87-120.
_____. 1990. Obrution deposits. Pp. 239-243 in D. E. G. Briggs and P. R. Crowther, eds. Palaeobiology: a synthesis. Blackwell Scientific, London.
Brett, C. E., and G. C. Baird. 1986. Comparative taphonomy: a key to paleoenvironmental interpretation based on fossil preservation. Palaios 1:207-227.
Briggs, D. E. G., R. A. Fortey, and M. A. Wills. 1992. Morphological disparity in the Cambrian. Science 256:1670-1673.
Brown, J. H. 1995. Macroecology. University of Chicago Press, Chicago.
Brown, J. H., and D. W. Davidson. 1977. Competition between seed-eating rodents and ants in desert ecosystems. Science 196:880-882.
Bush, A. M., R. K. Bambach, and G. M. Daley. 2007. Changes in theoretical ecospace utilization in marine fossil assemblages between the mid-Paleozoic and late Cenozoic. Paleobiology 33:76-97.
Originally appeared in: EOS, Trans AGU, 77, 73 and 79, 1996

The mortal beauty, Psyche, of Greek mythology, was driven by curiosity to gaze upon Eros, son of Aphrodite, defying his counsel not to look upon him by the light of day. In February, the Near-Earth Asteroid Rendezvous (NEAR) spacecraft will be on its way to reveal the geophysical and geochemical wonders of the asteroid, Eros (Figure 1), bearing the name of the god who fell in love with and secretly married Psyche against his mother's wishes.

The spacecraft (Figure 2), built and managed by the Johns Hopkins Applied Physics Lab, Laurel, MD, was launched aboard a Delta II-7925-8 rocket from Cape Canaveral just after Valentine's Day. In order to further our understanding of the nature of asteroids and their role in the formation of the Solar System, the three-axis stabilized spacecraft, which is passively cooled and powered with fixed solar panels, will orbit the asteroid for a year after its three-year trajectory through the inner Solar System (Figure 3). Arriving in February, 1999, the spacecraft will make measurements with five scientific instruments and transmit data to Earth through a 1.5-m, fixed, high-gain antenna at rates up to 27 kbits/s. Upon analysis of the data, this planet-crossing asteroid will be brought into the realm of geophysical and geological study, extending our knowledge of the small bodies near Earth not only to understand their role in the formation of our Solar System, but to understand the physics and chemistry of the impactors that have significantly affected the surfaces and atmospheres of all planets.

433 Eros, discovered in 1898 by G. Witt, was the first asteroid discovered to cross within the orbit of Mars and to approach that of Earth. Its approximate diameters are 40 x 14 x 14 km, making it the second largest planet-crosser and larger than the Martian moons Phobos and Deimos. Our selection of Eros as a mission target was driven partially by curiosity based on its proximity to Earth and its size. It was also selected because we anticipate geological diversity in terms of both surface features and composition. With the set of instruments on board there is a possibility of establishing the relationship between ordinary chondrite meteorites, those most abundant on Earth, and S-type asteroids found in the inner portion of the Main Asteroid Belt. These asteroids have moderate albedo and spectral reflectance absorption bands indicative of mafic silicate mineralogy. With ground-based measurements from asteroids alone, whether these asteroids are chemically and mineralogically similar to the ordinary chondrites, and thus are their parent bodies, is a subject of debate among scientists.

NEAR will expand both scientific and technical horizons. The asteroid will be by far the smallest body in the solar system to have its mass measured and its gravity field mapped from orbit. It will be the first small body to have its elemental composition probed with X-ray and gamma-ray spectrometers, the first to have its shape measured with a laser rangefinder, and the first small body to be magnetically surveyed. NEAR's Multispectral Imager (MSI) will return images with the highest spatial resolution ever. The mission will provide the most comprehensive analysis to date of the surface and interior of any solar system body beyond the Moon. After analysis of the data, we will have knowledge of Solar System material extending into a new size regime.
This information will undoubtedly force us to modify some details in our models of Solar System formation. The mission achieves a number of technical and financial firsts as well. Perhaps its most daring technological achievement will be its navigation in orbit about a highly irregularly shaped body. A financial first is coming in under the cost cap of $150 million for the development of the spacecraft and its first 30 days of operation, on time and under budget. We are clearly engaged in an experiment with a strong economic component to it as well. Unlike Psyche and Eros in their mythological world, we are not living in a jeweled palace.

Just after launch, the engineers perform their spacecraft check-out and open the cover on the imager (other covers are deployed after the last major trajectory correction maneuver in July, 1997). The MSI team is eager to point their camera at the Moon for a calibration measurement before it gets too far away. Back on Earth, while the spacecraft is en route to Eros, scientists and mission planners will be putting their observation sequences into their final form. Science team members will be examining their preflight instrument calibrations and designing rapid and efficient data reduction and analysis procedures for the year-long data collection effort. In this trim budget environment, small teams will be working longer to prepare for the data stream from the spacecraft.

If NEAR is launched early in its launch window, it will be able to fly by the 60-km diameter, low-albedo, main belt asteroid named 253 Mathilde in June, 1997. This target of opportunity is anticipated with excitement, as the planetary community may get its first look at the surface of an asteroid which has the photometric properties occurring most frequently among objects in the Main Asteroid Belt. Mathilde is a C-type asteroid, a type designated by a low albedo of 0.03-0.06 (carbon black has an albedo of 0.005-0.01) and neutral colors in the visible and near-infrared. Spectra with these characteristics have few absorption bands from which to extract mineralogical surface information. Recent ground-based brightness measurements indicate that Mathilde has the third longest rotation period of any known asteroid, 417 hours. It is difficult to understand how such a long rotation period comes about. Some process is braking this asteroid's rotation rate. We eagerly await images of Mathilde that might bring additional insights to this observation.

What scientific return do we expect from the primary mission at Eros? The objectives include an inventory of basic physical properties: shape, volume, rotational state and rate, mass, and a search for satellites. The MSI will be used for optical navigation as the spacecraft approaches Eros and for shape determination. Our enthusiasm for a satellite search, to be conducted with MSI during the approach phase, is piqued by the unexpected discovery of Dactyl at the asteroid Ida as the Galileo spacecraft flew by it in August, 1993. After the spacecraft goes into orbit about Eros, the spacecraft's position will be monitored, and Eros' total mass and its distribution will be derived from the radio science experiment using the spacecraft's main antenna. A model of the asteroid's interior structure will be derived from the varying acceleration of the spacecraft as it moves around the asteroid. The NEAR Laser Rangefinder (NLR) and MSI will build shape models from which the volume and then bulk density of Eros will be derived.
These data will address one of the fundamental debates in the planetology community: whether the asteroid is a solid fragment or a loosely bound conglomerate of fractured debris. Knowledge of its mass distribution will shed light on this question, at least for this particular asteroid.

While in orbit, the rest of the spacecraft's instrument complement - a magnetometer, a Near-Infrared Spectrometer (NIS), and an X-ray and Gamma-ray Spectrometer (XGRS) - will proceed with its mapping tasks. The orientation of the magnetic field as a function of location at the surface will reveal whether or not the asteroid has an intrinsic or a remanent magnetic field. The nature of this magnetic field will in turn constrain the state of thermal evolution of the parent body (or bodies) of Eros when the material in the present asteroid cooled. The NLR and MSI will continue mapping topography and morphology at spatial resolutions surpassing any on previous spacecraft missions. These data will provide a basis for discovering previously unknown processes active on surfaces of small bodies. The NIS will measure reflected sunlight in the spectral region of 0.8-2.7 microns, a region sensitive to electronic transitions in major rock-forming minerals. As the spacecraft is lowered into closer and closer orbits, the X-ray spectrometer will map the resonance fluorescence spectra from elemental Mg, Al, Si, S, Ca, Ti, and Fe. Two solar monitors will continuously measure the X-ray output from the Sun to enable quantitative elemental abundance measurements. Elemental abundances are measured independently by the gamma-ray spectrometer, which will count emissions from elements that are stimulated by cosmic rays and energetic solar particles. Naturally occurring gamma radiation from K, U, and Th will also be measured.

After gazing upon the surface of Eros for a year, bringing mortal scientists happiness and fulfillment from a deeper understanding of the early Solar System, what will be the fate of the spacecraft and asteroid? Mythology predicts they will live happily ever after and remain tightly bound forever. Celestial mechanics provides an alternative ending in which the wrath of Aphrodite reigns and the spacecraft is ejected from the gravitational sphere of Eros from a short-lived, chaotic orbit.

The members of the NEAR science team are:

Joseph Veverka, Cornell University, Ithaca, NY (Team Leader)
James F. Bell III, Cornell University, Ithaca, NY
Clark R. Chapman, Southwest Research Institute, Boulder, CO
Michael C. Malin, Malin Space Science Systems, Inc., San Diego, CA
Lucy-Ann A. McFadden, University of Maryland, College Park, MD
Mark S. Robinson, U.S. Geological Survey, Flagstaff, AZ
Peter C. Thomas, Cornell University, Ithaca, NY
Jacob I. Trombka, NASA Goddard Space Flight Center (Team Leader)
William V. Boynton, University of Arizona, Tucson, AZ
Johannes Brückner, Max Planck Institut für Chemie, Mainz, Germany
Steven W. Squyres, Cornell University, Ithaca, NY
Mario H. Acuña, Goddard Space Flight Center, Greenbelt, MD (Team Leader)
Christopher T. Russell, University of California, Los Angeles
Maria T. Zuber, Massachusetts Institute of Technology, Cambridge, MA, and Goddard Space Flight Center, Greenbelt, MD (Team Leader)
Donald K. Yeomans, NASA Jet Propulsion Laboratory (Team Leader)
Jean-Pierre Barriot, Centre National d'Etudes Spatiales, Toulouse, France
Alexander S. Konopliv, Jet Propulsion Laboratory, Pasadena, CA
Andrew F. Cheng, Applied Physics Lab, Laurel, MD
Tuesday, November 05, 2013

A Gluten Free Diet Helps Type 1 Diabetes

It has long been understood that two autoimmune diseases, celiac disease and type 1 diabetes, are related. They share common genes, and the incidence of celiac disease is higher among type 1 diabetics. There have been some anecdotal reports regarding children diagnosed with type 1 diabetes who were put on a gluten-free diet soon after their diagnosis and for a period of two years or more didn't require any insulin. The thought was that the gluten-free diet effectively halted the progression of the diabetes, at least for the duration of the study.

Studies of mice have shown that even in a genetic strain strongly inbred to increase the risk of type 1 diabetes, two-thirds of the mice did not develop the disease when a drug was administered to prevent leaky gut. This study was performed by Dr Alessio Fasano and his team. Dr Fasano, Director of the Center for Celiac Research and Treatment at MassGeneral Hospital for Children, is one of the world's acclaimed researchers in the area of celiac disease and gluten sensitivity. Leaky gut is associated with the initiation and continuation of autoimmune disease, and Dr Fasano's work with these genetically predisposed mice shed a great deal of light on the power of an undamaged gut lining to effectively forestall development of a genetic condition, in this case type 1 diabetes.

A study out of Immunology, dated August 22, 2012, is titled "Dietary gluten alters the balance of proinflammatory and anti-inflammatory cytokines in T cells of BALB/c mice". The title is a mouthful, but here is what the researchers out of Denmark found: their initial premise was based on the idea, as I mentioned above, that dietary modifications, specifically a gluten-free diet, could reduce the risk of developing type 1 diabetes. The question they posed was, "How did this occur?" They discovered that wheat gluten induced the production of pro-inflammatory chemicals called cytokines that would damage the intestinal lining and immune tissues of the small intestine. More importantly, a gluten-free diet didn't just neutralize the negative effects just mentioned; it actually caused the production of anti-inflammatory chemicals that provide protection for the immune system and gut.

So, while gluten is a known bad guy, a gluten-free diet doesn't just take the negative away, it actually induces a positive, healing response. Clinically, we frequently see this with patients here at HealthNOW Medical. As soon as we meet a patient with any history of autoimmune disease, we quickly test them for celiac disease and gluten sensitivity via lab tests and a 30 day elimination diet. If we discover any negative immune reaction to gluten, we begin a strict gluten-free diet. Happily, we often see stabilization, if not reversal, of their autoimmune disease. We support the gluten-free diet with our other protocols for normalizing gut permeability (healing a leaky gut) and strengthening the immune system. Taken together, this program yields excellent results.

I hope you found this information helpful. If you know anyone suffering from an autoimmune disease, please show them this post. Gluten could be a component in worsening their disease, while a gluten-free diet could be a positive influence in their journey to improved health.

If your health is not at the level you would like, please consider contacting us for a free health analysis. Call 408-733-0400.
Our destination clinic treats patients from across the country and internationally, so you do not need to live locally to receive assistance. We would be delighted to help you!

To your good health,

Dr Vikki Petersen, DC, CCN
Founder of HealthNOW Medical Center
Co-author of "The Gluten Effect"
Awarded Gluten Free Doctor of the Year 2013
Update: The statement issued late Tuesday by the FDA, quoted in part below, indicates the agency is willing to work with cheese producers on a solution to the problem.

How safe is cheese aged on wood? It may be a debatable question, but now it's also a battle that has erupted in the wake of a recent U.S. Food & Drug Administration ruling that says the use of wooden boards as a surface for aging cheese is unsanitary. In particular, the agency is worried that wood might spread the pathogen listeria. But artisan cheesemakers -- in the United States and abroad -- have been using wood for centuries and, in fact, consider it to be vital to the flavor of certain varieties. Aging cheese depends on "micro-flora" in its immediate environment.

"It's a traditional surface," said Rob Ralyea, a senior extension associate at Cornell University who specializes in dairy. "The question is, has there been any real risk assessment? You could find 50 studies that say it is (a risk) and 50 that say it's not."

But the FDA says it is, and that's because of an investigation that started in 2012 at Finger Lakes Farmstead Cheese, an artisan cheese-maker in Trumansburg, just outside of Ithaca. The FDA said it found listeria contamination, though it reported no one had become sick due to Finger Lakes Farmstead's cheese. The FDA got a court order to shut down Finger Lakes Farmstead until the cheesemaker could prove it had eliminated the listeria and its causes. It was only after the court order that Nancy Taber Richards, owner of Finger Lakes Farmstead Cheese, learned that the agency wanted to prohibit the use of wood.

Richards, who admits listeria was present in her shelves due to quality control issues, believes the court order and the ensuing ban on wood "was heavy handed." She had cleaned her shelves and believed she was complying with the FDA's concerns. "It's frustrating because it seemed to be a moving target," said Richards, who has still not reopened and would have to spend considerable time and money to replace the wood. "That's difficult for a small producer to handle."

The wood shelves, she said, are better for her cheese -- an aged gouda -- because they allow more moisture evaporation and air flow than plastic or steel. She is also not convinced the wood is inherently more dangerous. "I think there is science that supports the benefit of wood, and there has been science that shows even small scratches in plastic can harbor pathogens," she said.

After the issue was raised with Richards' dairy, Cornell and the New York State Department of Agriculture and Markets asked for clarification from the FDA, Ralyea said. That's when the FDA said it would be enforcing existing policies on sanitation -- and that came to be seen as a ban on wood. On Tuesday, the agency clarified its stance.

"In the interest of public health, the FDA's current regulations state that utensils and other surfaces that contact food must be 'adequately cleanable' and properly maintained," according to a statement released Tuesday by FDA press officer Lauren Sucher. "Historically, the FDA has expressed concern about whether wood meets this requirement and has noted these concerns in inspectional findings. FDA is always open to evidence that shows that wood can be safely used for specific purposes, such as aging cheese."

The regulations cited by the FDA do not specifically mention wood. The agency, Ralyea said, has interpreted the rules to support its contention that wood is a surface that cannot be made sanitary. "They made it clear ...
that wood is one of those things," said Ralyea, who stresses that he believes eliminating listeria is a worthy goal. "I would say it's not impossible to clean or sanitize wood."

Enforcement of the rule could have a devastating effect on artisan cheese-makers who have recently adopted age-old techniques to separate their products from the mainstream. "There are dozens of artisan cheesemakers who are affected," Ralyea said, not to mention many more in other big dairy states like Wisconsin and Vermont. Moreover, Ralyea said, the rule could be used to prevent the import of some artisanal European cheeses that are aged on wood, like Comte and Beaufort from France or even Parmigiano-Reggiano from Italy.

It could also affect the new cave-aged cheese facility that Wegmans opened earlier this year outside of Rochester. "We do not use wood boards to age cheese in our Cheese Caves, but we want to do so in the future ... when we're ready," Wegmans spokeswoman Jo Natale said in a statement. " .... Many of the very finest imported and domestic cheeses are aged on wood boards, and we want to continue to offer them to our customers."

A cheese blogger in the dairy capital of Wisconsin calls the FDA rule a "game changer" for artisan producers. "A sense of disbelief and distress is quickly rippling through the U.S. artisan cheese community," wrote Jeanne Carpenter in her blog, Cheese Underground.

Inspecting and regulating cheese producers has traditionally been done by the states, not the federal government, Ralyea said, though that might be changing due to the newly enacted Food Safety Modernization Act. He notes that the U.S. Department of Agriculture does not prohibit the use of wooden boards.

For now, the cheese producers and some government officials are hoping to get more answers from the FDA. "We are working with Cornell University to better understand the FDA's decision and to assess the science behind it," said New York Ag & Markets spokesman Dave Bullard in a statement. U.S. Sen. Charles Schumer, D-N.Y., is also going to ask the FDA to reconsider its policy, noting that competitors in Canada, Europe and other countries are allowed to use wood.

Richards said despite everything, she has "empathy" with the FDA inspectors. "I do wish I had not become the poster child for this issue," she said. "We had a problem and I think we addressed it. No one wants to see any health incident associated with their business. That certainly wouldn't be good for me or the industry. But it's the heavy-handed nature of this that's frustrating."
The Andamanese Language Family (I)
by George Weber
A contribution to the centenary of M.V. Portman's work

If an island of ice-age hunter-gatherers were discovered in the Atlantic today, scientists and the media would trample each other to death in their eagerness to get there. The Andamanese present a no less unusual case: an extremely primitive and very ancient pygmy people, the last remnant of the oldest human population of Asia and likely to be among the earliest ancestors of many Asian and Australian people. Yet no stampede threatens.

Fig.1. The Andaman islands lie in the Bay of Bengal between India and Thailand. Geographically closer to Burma and Thailand than to India, they are politically a part of India today. The latter has a moral and historical claim on the islands since they were the location of a British penal colony from 1858 until the 1930s that was easily as brutal and deadly as the better-known French penal colony on Devil's Island.

There is a sad multitude of little-known primitive tribes on the verge of extinction in today's world. In India alone (to which the Andaman islands belong politically) they number in the dozens, and world-wide there are hundreds more. There are also still a few truly isolated and hostile tribes left around the world. So why pick out the Andamanese Negrito pygmies? Here is why: their languages, their genetic make-up, their customs, their prehistory, their attitude towards the outside world are all so out of the ordinary that not the least of the mysteries surrounding them is why they are not better known. Nevertheless, some major scientific figures have noted their importance (1):

Prof. L. Cavalli-Sforza, geneticist, 1994: "The most interesting aspect of the Andamanese is that they probably had the least admixture compared with other Negritos, and perhaps represent relics of the human bridge that may have existed 65,000 to 70,000 years ago between Africa and Australia. The few genetic data available show remarkable genetic homogeneity for 11 red-cell and enzyme proteins... A complete genetic investigation of these groups with modern techniques is very important... The tendency to homogeneity is obviously a consequence of strong drift, but if very many genes are tested... the information collected may... determine whether these populations represent a missing link between Africa and Australia."

Prof. Göran Burenhult, archaeologist, 1994: "I completely agree that the Andamans are a blind spot in the eyes of most prehistorians... I am well aware that the archaeology of the Andaman islands is of the utmost importance to our understanding of how and when SEAsia was first settled by modern humans..."

Dr. Peter Bellwood, prehistorian, 1992: "The Negritos... are therefore the only SEAsian survivors of the original Austro-Melanesian continuum outside the eastern Indonesian clinal zone... The Negritos are thus of great significance in SEAsia."

What exactly is it that makes the Andamanese so special? There is their race: the Andamanese are Negritos and as such among the smallest (in stature as well as numbers) of all human races. They were estimated to have numbered around 4800 in 1858; today there are less than 400. Besides the Andamanese, there are several thousand Negrito and Negrito-like (Negritoid) people in Asia, especially on the Malay peninsula (Semang) and in the Philippines (Aeta or Agta). There are other groups in mainland and insular Southeast Asia and Australia who may have some Negrito affinities.
Most consist of a few people in remote areas and most are threatened in their existence. Many are known to have vanished over the past century.

Let us return to the Andamanese. With an average height of 137 cm (54 inches) for women and 148.5 cm (58.5 inches) for men, the Andamanese Negrito are tiny. The women have an average weight of 43.4 kg (95.5 lbs.), the men 39.5 kg (87 lbs.). From afar, the Andamanese Negrito may indeed be mistaken for "African", but apart from their common humanity they are not related to the African pygmies or to other Africans. There is, however, the possibility of a remote relationship to the Khoi (Hottentot) of southern Africa, but such a connection has not been seriously investigated and will, in any case, be very difficult to prove. The Negrito people are very dark-skinned and their hair is of the peppercorn variety, not to be confused with the curly hair common in people of African ancestry. Some Andamanese women (but not all and only very rarely men) show the trait known as steatopygia or 'fat bottom'.

Fig.2. A modern "Venus of Willendorf" – an Onge woman with steatopygia ("fat bottom"). Steatopygia has been widespread, perhaps even universal, in human prehistory and is reflected in the famous ice-age "Venus figurines." Only two living populations still have this ancient human trait today: the Khoi ("Hottentot") of South Africa and the Andamanese.

There are only two living populations today that still have a genetic predisposition towards steatopygia: the Andamanese and the Khoi. The adaptation provides reserves of fat and increases the chance of survival in unpredictable environments. Steatopygia is thought to have been widespread in human prehistory and to be reflected in the famous ice-age figurines. The famous 'Venus of Willendorf', found in Austria and estimated to be 30,000 years old, is only one of many known.

When the linguist Joseph H. Greenberg in 1971 published the results of his research into links between Papuan, Tasmanian and Andamanese languages (2), only a few linguistic specialists took note. Small wonder: linguistics had not contributed to the knowledge of the remoter human past before. Moreover, the languages investigated are among the least-known in the world. Greenberg called his new hypothetical linguistic grouping of around 750 languages the Indo-Pacific phylum and arranged it as follows (3):

(1) Andamanese family (14 languages of which 3 are still living, 4 if the virtually extinct Great Andamanese remnant is included)
(2) Tasmanian family (classification controversial, up to 9 languages, all extinct)
(3-13) 11 Papuan language families (727 languages)

A language family (and still more a super-family or phylum) is a very high-level classification and one that is not usually obvious. Greenberg used methods he had employed in his earlier work on African (1950s) and Amerindian (1960s) languages, compiling long lists of basic vocabularies along with whatever grammatical information he could lay hands on. Sifting through this material, he believed he had found 35 cognates connecting the Andamanese to the Tasmanian languages. The discovery remains controversial, not least because our knowledge of the extinct Tasmanian is even more incomplete than that of the Andamanese languages. It must also be admitted that 35 cognates are not much to go on, and Niclas Burenhult has recently questioned the lack of systematic correspondences among Greenberg's cognates.
However, his analysis of the Andamanese languages has led him to suspect a far deeper relationship which, incidentally, re-admits the Andamanese-Tasmanian connection by another, much wider and more theoretical door (4). Burenhult himself says: "The most important point to be made concerning Andamanese... is that it is very different – not only genetically but also typologically – from neighbouring language families and may represent a trace of what Southeast Asia was like linguistically a few thousand years back."

The Andamanese is indeed an enigmatic family. Apart from Greenberg's controversial 35 cognates, it shows no relationship whatever with any other language grouping.

The Asian Negrito are oddly shadowed by a "twin" people, the Vedda of Sri Lanka and similar groups throughout Southeast Asia, collectively called Veddoids. They form tiny groups who lead a primitive existence very much like that of the Negritos in remote jungle areas. Although Veddoid life and material culture is very similar, they are a little taller than the Negrito and are of distinctly different appearance, most immediately obvious being their wavy hair. What the relationship between the two groups might be has never been investigated. Indeed, the question has barely even been raised. There is quite a lot of information on the now nearly extinct Vedda of Sri Lanka, but most of it goes back to colonial days. After gaining independence in 1948, Sri Lanka took little interest in its Veddas. In this, the country followed a pattern seen in many newly independent nations (India with its famous Anthropological Survey making an honourable exception): the dominant population did not see the need to take an interest in or allocate resources to embarrassingly primitive and numerically insignificant minorities.

Negrito and Veddoid territories do not overlap and there is only one place, on the Malay peninsula, where the two live as neighbours. As far as the classification between Veddoids and Negrito can be said to be clear, there is a clear if somewhat wavy line going roughly from Bangladesh through Indonesia to Tasmania separating the two remnant populations: west of this line there are Veddoids, east of it Negrito. The line clearly has something to tell us – if only we could understand it.

It has long been thought that the Negrito, the Vedda and Veddoid people were among the ancestors of many modern Australasian people, from the southern Indians (Dravidians), the Papuans and the Australian-Tasmanian aborigines to many Indonesian, Filipino and mainland-Asian groups. It is thought that there was once a "bridge" of anatomically modern human beings migrating, over tens of thousands of years, from Africa through Southeast Asia to New Guinea, Australia, Tasmania and Oceania. The bridge lasted until the end of the last cold period around 10,000 years ago. The Negrito and Veddoids probably represent populations that were left behind after the melting of the ice shields had caused the sea to rise. The Andamanese were cut off on their shrinking islands, others in their jungle valleys. Many prehistoric stone tools have been found throughout Southeast Asia that could have been made by Negritos or related populations – the level of skill involved is comparable with recent Andamanese and Veddoid stone tool technology. Unfortunately, very few relevant archaeological finds are accompanied by sufficient human bones to allow identification of the type of people associated with them.
While a Negrito presence is plausible on a number of grounds, proving it will be difficult and must await future finds of quite exceptional quality.

Fig. 3. The distribution of Negrito populations and their separation from the Veddoids. Most of the populations on this map are tiny groups surrounded by much more powerful neighbours and most are on the verge of extinction or already extinct. The groups named include:
Kadar, Kanikkar, Kurumbar, Palliyan, Panyan, Puliyan, Urali
Moi (Anu-chu, Jarai, or Montagnards)
Alorese, Pantarese, some Timorese
Loinang, Laki, etc.
Papua New Guinea: pygmies of the Sepik source area; pygmies of the Torricelli mts.; pygmies of the Gogol and Ramu river areas; Normanby island pygmies; the population in the interior of the Gazelle peninsula (New Britain)
The Philippines: Tiruray, Ata, Upland Bagobo
Bathurst and Melville islanders (Negritoid traces)
Barrineans (pygmies of the Atherton plateau, Queensland Negritos)

With the sole exception of the Andamanese, all other Negrito, Vedda and Veddoid groups have lost their original languages. The Vedda (from a Sinhala word meaning "hunter") of Sri Lanka have been speaking dialects of their neighbours' Indo-European Sinhala language for a very long time. Possible remnants of the prehistoric Vedda language were reported in the late 19th century by the Sarasin cousins, who also expressed the hope that someone would do a systematic analysis of their evidence. Nobody has done so. The Sarasins noted a number of local terms used by some (but not all) Vedda groups that could well be survivors. Among them are three synonyms, tambela, galrekki and malakedde, for "axe", but also words used by some Vedda groups for "bow" and "arrow." The words vary from group to group, which may reflect the existence of several original Vedda languages or, as the Sarasins themselves warned, the words could be local creations of ultimately Sinhala origin. Until someone does that rigorous analysis a century after it has been requested, we have no way of telling.

There are also two Veddoid groups speaking unclassified languages that may not be related to the languages of their neighbours: the inaccessible Shompen of Great Nicobar speak a language about which virtually nothing is known (even if it is usually classified as Nicobarese, for lack of somewhere else to put it), while the unclassified Lom language from the interior of Bangka island in Indonesia (off the eastern coast of Sumatra) was last reported in the 19th century and has not been heard of since (5). Off the west coast of Sumatra, on the tiny island of Enggano, a language is spoken that undoubtedly belongs to the Austronesian family but is said to be "extremely aberrant." If all this looks as if there are some serious gaps in our knowledge, appearances are not deceiving. Indeed not.

As far as surviving traces of Negrito languages are concerned, things do look a little brighter. Traces of what must be the original Negrito language have been reported among the Semang of Malaya and the Philippine Negrito. That their change of language happened during prehistoric times is hinted at by the fact that the Malaysian Negrito speak languages of the Aslian branch of the Austro-Asiatic family, a family that dominated the area until two thousand years ago but has since been replaced on the peninsula, except for isolated pockets, by Malay and other Austronesian languages. Among the Malay Semang, Negrito words that cannot be traced to any other language have been reported.
Among them are jebeg ("bad"), chog or seneng ("bag"), lebeh ("bamboo"), boo ("big"), kawod ("bird"), herpai ("coconut"), keto ("day"), kam ("frog"), chas ("hand"), napeg ("pig"), jekob ("snake"), wayd ("squirrel"), takob ("yam") and many others (6).

Recent painstaking research by Lawrence A. Reid (7) has also uncovered a wealth of evidence for the existence of several prehistoric non-Austronesian Negrito languages in the Philippines. Reid has even found evidence suggesting a likely sequence for the beginning of the change of languages by the Negritos: around 5,000 years ago the first Austronesian agriculturalists appeared in the Philippines, probably from Taiwan, and somehow forced or persuaded local Negrito hunter-gatherers to labour in their new fields. Out of the need for a means of communication between immigrants and Negritos, a pidgin language quickly arose; over the following centuries it was creolised, finally to such an extent that the modern Negrito languages of the Philippines came to bear a close resemblance to the neighbouring Austronesian languages. A similar sequence very likely also took place in Sri Lanka, on the Malay peninsula and elsewhere.

Reid has found many unique (i.e. non-Austronesian) terms. The record holder is a term shared by no fewer than four modern Negrito languages: lati means "rattan" in the North Agta, Central Agta, Alta and Arta Negrito languages. Other unique sample words are shared among three Negrito languages each: litid ("vein"), tapur ("bury", "inter") and babak ("snake"); still many others occur in only one or two languages.

Traces of ancient and extinct Negrito languages found so far show no obvious relationship with Andamanese, and no cognates have been found. That would indeed be asking rather a lot. Even 5,000 years ago the Negritos of Malaysia and the Philippines must have been out of contact with their Andamanese brethren for untold millennia – their languages, if they ever were related, having drifted apart into profound mutual unintelligibility. Yet the hope remains that a painstaking analysis of the available evidence will one day reward a hardworking linguist with a discovery, and with the inevitable controversy to follow.

Let us once more return to the Andamanese. Why did they, alone, retain their own languages? A relatively isolated island habitat without interfering migrants or dominant neighbours must have been a major reason, but it cannot have been the whole story. The Indian archaeologist Zarine Cooper has done fieldwork in the Andamans and has pushed the archaeological evidence for Negrito occupation back to around 2,500 years (8). This is not a long time in view of the antiquity of the Negritos, but it is a long time for a culture to show no change or outside influence despite its location athwart shipping lanes busy with commerce since early times.

For the past 1,300 years and possibly much longer, the Andamanese have been hostile to outsiders. Ships trying to replenish their fresh water, looking for wood to make repairs or just seeking shelter from storms were attacked, as was any mariner unfortunate enough to be wrecked on one of the many Andamanese reefs. If the visitors were too numerous or well-armed, the natives hid in the dense undergrowth and waited until the coast was, literally, clear again. No wonder the Andamans have always had a terrible reputation. The setting up of a British penal colony did nothing to improve it.
Even today, two of the three surviving Andamanese groups still more or less follow the same behaviour: they will not allow anyone to approach them, and they still employ technologies that elsewhere went out with the mammoth and the sabre-toothed tiger. What caused this extreme hostility in the first place can only be guessed at. Slave-raiders seem to have played a part and continued to play it until less than 130 years ago. Whatever the reasons in detail, there is good genetic evidence that the Andamanese have been isolated for a very long time. Indeed, theirs is by far the most complete and longest-lasting isolation of any human group alive today.

In 1858 the Andamanese were tragically pulled back into the broad stream of world history. The British rulers of India needed an inaccessible and escape-proof place in which to lock up thousands of prisoners fresh from the atrocities of the "Great Mutiny" of 1857. The Andaman islands were just what they were looking for: hostile natives, an atrocious climate, surrounded by reefs and a large expanse of stormy seas. From the Andamanese point of view, the British invasion was the beginning of the end of the world. Long isolation from the rest of humanity had not prepared them for the many "new" diseases that came with the British jailers and their Indian and Burmese prisoners.

After initial resistance, some Great Andamanese groups grudgingly came to accept the presence of outsiders, and some even turned actively friendly. Contact became closer and more common. The consequences of such friendship were not long in coming: epidemics of measles, pneumonia and syphilis started in the 1870s and never let up again. The friendly groups are now extinct but for two dozen survivors of mixed Indian-Burmese-Andamanese ancestry who exist on Indian government handouts on tiny Straits island. They have preserved Great Andamanese cultural traditions only in fading traces. The Straits islanders use a kind of mongrelised Aka-Jeru amongst themselves, which the Indians have taken to calling "Andamanese". For external contact Hindi is used, as the Indian social workers do not speak Andamanese. This essentially new language is the result of mixing the 30-odd Great Andamanese survivors on the reservation in the 1950s; since Aka-Jeru speakers were in the majority, their language dominated. An ever-increasing number of Hindi words and expressions is also constantly being added to the mix. This "Andamanese" is not likely to have a long-term future.

Fig. 4. The distribution of the Andamanese tribes at the time of the British annexation in 1858. Little is known about the areas occupied by Jarawas at that time. Before the 1790s, when the first short-lived British attempt to set up a colony introduced new diseases among them, they occupied the southern coastlines as well as the interior and seem to have been more numerous. When the British returned in 1858, the coastal Jarawa had disappeared and been replaced by Great Andamanese. The remaining Jarawa in the interior were found in the 1860s to be hostile to all outsiders, and they have remained so until the present. Today they live further north, along the west coast of South and Middle Great Andaman.

The three groups that did not become friendly soon after 1858 are still with us today, mostly healthy and with their culture intact. One has become friendly since and is now fading, physically and culturally. Two of them are still hostile. The point should be borne in mind when humanitarian plans to cultivate "friendship" with the hostile Andamanese are discussed.
Friendship in the Andamans has been deadly. The Onge on Little Andaman have been peaceful only since the 1890s, when M.V. Portman managed to pacify them by diplomatic means and the force of his personality. They did not entirely escape the fate of the earlier "friendlies", even though their island was left largely alone by the British and they did not have to accept permanent resident outsiders until the 1950s. Only then did Indian refugee settlers start to arrive in numbers. Still, the number of Onge has shrunk from an estimated 700 in the 1860s to 150 in 1951, and is now down to just under 100. Looked after by Indian doctors, their problem today is not so much the obvious diseases as their very low birth and infant-survival rates, the causes of which have not been established.

Two other groups are both hostile and unapproachable. The Jarawa originally lived on the southern tip of South Great Andaman but today are spread out in the dense jungles along the west coast of South and Middle Great Andaman. Few years pass without a number of dead Jarawas and Indian settlers, the result of recurrent clashes. Keeping the Jarawa, on the one hand, from raiding the Indian farmers for iron and the farmers, on the other hand, from poaching in Jarawa territory is the difficult task of a special Indian bush police. The number of Jarawa is thought to be less than 200 and can be estimated only by flying over their territory in the early evening to count the smoke plumes from cooking fires. Two small Jarawa local groups have been contacted and are getting accustomed to taking coconuts and other gifts from visitors without the traditional bloodbath. Most Jarawa remain utterly unapproachable, however. The few visitors are mostly Indian researchers and security staff, who have to go through a strict medical check-up before being allowed into Jarawa territory. Unfortunately, tourists without medical check-ups are occasionally brought in to meet the friendly Jarawas in return for some baubles for the aborigines and hard cash for the guides. The first such tourist with a runny nose could well wipe out the entire local group. Nor are such visits without risk to the visitors, since Jarawa behaviour remains quite unpredictable.

Fig. 5. Youths from one of the few approachable Jarawa groups look over a rare photographer from the outside world. While they look cheerful and harmless enough, on several similar occasions the mood has changed dramatically, and for no apparent reason, within seconds to frothing fury. We are a long way from understanding these people.

A third Andamanese group, the Sentineli, have been observed only from afar. Some magnificent photographs have been taken of them from an off-shore boat by the Indian photographer Raghubir Singh and first published in 1975 (9). The Sentineli live on North Sentinel island, an isolated place surrounded by a nearly unbroken line of dangerous reefs. Virtually nothing is known of them, and they still defend their beaches, as they must have done for centuries, by shooting seriously hostile arrows at approaching boats. Recently some have mellowed a little and have accepted gifts of coconuts thrown into the water. One Sentineli party has even climbed briefly aboard a small Indian vessel containing delighted Indian anthropologists. Perhaps closer contact with them will be possible in future. The Indians are working at establishing a line of communication with the Sentineli in case of a shipwreck, an oil spillage or other accident.
The authorities could not, for example, let the survivors of an aircraft crash or shipwreck simply be slaughtered in the traditional manner. A way must be found to persuade the Sentineli to let such people live until they can be removed from the island. There is no argument against this if saving lives and scientific research are indeed the only motives. Suspicions are raised, however, by reported plans to establish coconut plantations on the island, which are said to be required "to feed the Sentineli" and to establish "friendship". That they need feeding will no doubt come as a surprise to the Sentineli, since they have fed themselves for centuries with no apparent ill effects. Even the surrealistic argument that the Sentineli are "voters in a democracy" and need their rights as citizens explained to them has been trotted out. Apparently, only waves of social workers and plantation managers landing on their beaches could do so. Such a move would be an exact repetition of the mistake made with the Onge in the 1950s, only now there would not be the excuse of inexperience. Their island is difficult to reach, and access to it can be easily controlled. Such dubious help would put an end to the only stable Palaeolithic human community on earth today with a chance of long-term survival.

M.V. Portman, the centenary of whose two important publications (10, 11) on the Andamanese we are celebrating, was in charge of the first known landing on North Sentinel island in 1880. He stayed for a fortnight. As the Sentineli would do so many times later, they evaded the unwelcome and well-armed visitors by simply vanishing into the jungle. A woman and four children were nevertheless captured by chance and kept for a few days aboard the expedition ship, after which the woman and one child were loaded with presents and released. A few days later, an old man, a woman and a child were also caught, and they, with the three children from the earlier "bag", were brought to Port Blair for observation. There, as so often happened with captured Andamanese, the two adults sickened and died within a few days. The children were hurriedly returned to their home island, given presents and released. As Mr. Portman himself admitted later, this was hardly the right way to go about establishing friendly relations. Rather unconvincingly, he blamed a lack of reliable intelligence for the unfortunate tactics, and he took a little revenge on the elusive Sentineli by describing them as habitually wearing a "peculiarly idiotic expression". Today, little more is known about them than when Mr. Portman went ashore there 120 years ago.

Mr. Portman was much more successful with the Great Andamanese. His book of 1898 on the languages of the southern Great Andamanese (11) is one of only a handful of sources on these languages; another is a supplement by A.J. Ellis contained in the major anthropological (but not linguistic) work on the Andamanese by E.H. Man (12). The Aka-Bea tribe's territory included what was to become the British penal colony at Port Blair, so that they were the first and longest in contact with the intruders and became the best-documented of all Great Andamanese tribes. We know little about the languages of the northern Great Andamanese tribes, some of which were not discovered until just before 1900. A.R. Radcliffe-Brown, later a famous anthropologist, gained his first field experience in the Andamans in 1906–1908. In 1922 he published one of the few major works on Andamanese anthropology (13).
He dealt with the languages only in a brief but important appendix which, together with an earlier article, gives us practically all the information we have today on the northern languages apart from Aka-Jeru. With the outbreak of war in 1914, field research into the Andamanese languages ceased and was not resumed until Indian independence in 1947. By that time, the Great Andamanese had become culturally, and very nearly physically, extinct. Indian efforts therefore concentrated on the only accessible living Andamanese, the Onge of Little Andaman. That the Great Andamanese languages on the one hand and the Onge language on the other have been researched and documented at such widely different times and by such widely different people tends to make comparison between the sources difficult. While it is clear that all are members of the same Andamanese language family, it is by no means clear just how the two main groups are related.

Besides Onge there are three other languages in the Onge-Jarawa group. Manuscripts on the Jarawa language exist and are gathering dust in libraries and museums; little has been published on the subject. That Jarawa is closely related to Onge is not in doubt: the Jarawa (whose name means "stranger" in Aka-Bea) call themselves ya-eng-nga which, without the characteristic Jarawa prefix ya-, is very close to what the Onge call themselves: en-nge (which, not unexpectedly, means "human being"). Nothing is known of the Sentineli language, but some information might be gathered in future. A relationship between Onge and Sentineli can be assumed from what is known of their technology. Jangil, sometimes called Rutland-Jarawa, will remain forever unknown, and few scientific works even mention its existence. Although it has never been documented, the existence of a separate Jangil language was reported by M.V. Portman (10) and cannot be seriously doubted. The last reported sighting of a Jangil man took place in 1895; in the 1920s the Jangil territory in the interior of Rutland island was found to be empty.

Since the days when Greenberg had trouble finding any information on the Onge language, some good Indian publications on the subject have appeared (15). Even video cassettes for teaching yourself Onge and "Andamanese" (presumably the Aka-Jeru-based "Andamanese" of Straits island is meant) have been announced (16) – but perhaps not published. Neither this author nor several of his Indian bookshops have been able to get hold of these mythical cassettes despite years of effort.

Fig. 6. How the Andamanese languages are related. Andamanese tribes were (and are) loose groupings of "local groups", held together by a common language but split into forest and shore dwellers and riven by many local feuds. There was no central authority, no chief. A feeling of belonging together arose only after the arrival of the outsiders (the British, their prisoners – and their diseases) in 1858.

Before we go on to discuss the Andamanese languages in more detail, a few words must be said about the "tribes" of the Great Andamanese (the Onge-Jarawa group did not know these divisions). The centre of all traditional Andamanese life was the local group, a sort of semi-nomadic village community of between 30 and 50 persons. Some of these were allied to form "septs". The Great Andamanese tribes in normal times were merely collections of local groups speaking the same language and bound by ties of friendship as well as feuds; that is, the tribe was basically a linguistic unit.
People from outside the tribe did not really exist and, if met, would be ignored or killed. This system broke down rapidly after 1858, but this is how traditional Andamanese society had organised itself before that date. One would have thought that "tribes" numbering between only 100 and 700 people would be too small to split up further. But they did. The Great Andamanese seem to have had an insatiable and rather self-destructive urge for splitting up and excluding others. Most Great Andamanese tribes were split into two sub-groups (some even managed three, but let that be): the Aryoto and the Eremtaga. The former were people living along the shores, the latter in the interior. Within each tribe the language of the two sub-groups, apart from minor dialectal differences, was the same, but the basis of their food collection was obviously different. The Aryoto thought themselves superior to the Eremtaga, and the two heartily disliked each other, rarely mixing or speaking. Children could be adopted from the Eremtaga to the Aryoto but never the other way round. A clear case of palaeolithic snobbery.

Scientific study of the Andamanese and their languages commenced with British officers of the penal colony. Initial interest was motivated by the problems of setting up a penal colony. Later, ways had to be found to stop the natives from murdering escapees and to make the natives hand them over to the authorities for a civilised hanging. Man and Portman were both in turn appointed "Officers in Charge of the Andamanese", two of a mere handful of officers in this brutal environment who developed a liking for, and a genuine interest in, their unusual charges. They could not escape their time and place entirely, however, and a remark in Portman's obituary in The Times of London in 1935 sums up the mixture of paternalism and brutality by stating that Portman "judged them and if necessary he hanged them" (actually, only one Andamanese man was ever legally executed – for multiple murder). Despite occasional brutalities, which the Andamanese took for granted from each other and from the outside world, both Man and Portman were popular with the Andamanese. Each of the two men kept up the surface civilities towards the other, as befitted gentlemen in the service of Her Majesty the Queen. In reality, each was intensely jealous of the other's scientific reputation and influence over the Andamanese. Scientific knowledge was the chief beneficiary of the resulting competition to collect and publish. Without the two men's original field work, our knowledge of traditional Great Andamanese society and language before the onslaught of the great epidemics and the following cultural disintegration would be close to zero.

The southern and middle Great Andamanese tribes had a common legend that points to a site called Wota-Emi (see the legend at the end of this article) on the north-eastern corner of Baratang island (the largest island between South and Middle Great Andaman) as the place where humans were created and fire was later brought to them by the god Biliku. It is interesting to note that this spot was in the territory of the A-Pucikwar tribe, whose name (in Aka-Bea, Akar-Bale, Oko-Juwoi and Aka-Kol, as well as in their own language) means "they speak Andamanese". Whether this points to a myth of great antiquity relating to the origin of all Andamanese Negrito, or is merely the residue of the splitting up of an originally much larger tribe centred on the A-Pucikwar in less remote times, we have no way of knowing.
It is not clear whether the northern tribes, whose territories were not contiguous with that of the A-Pucikwar (the Aka-Kede, Aka-Jeru, Aka-Bo, Aka-Kora and Aka-Cari), subscribed to the same myth. Pointing in the direction of the second, more recent possibility is the fact that the northern Andamanese languages have some features in common with Onge: only Onge and Aka-Jeru are known to use infixes, and while Aka-Bea is strict about prefixes and some neutral suffixes, Onge and Aka-Jeru are much more relaxed.

© 1998, George Weber, Switzerland
The MythBusters chose NASA's Marshall Space Flight Center as one of several NASA locations for an episode debunking the notion that NASA never landed on the moon. The cast conducted tests involving a feather, a weight, a lunar-soil boot print, and a flag in a vacuum. A team of Marshall scientists helped with the tests.

One hoax claim is that the non-parallel shadows in Apollo surface photographs betray multiple studio lights rather than a single, distant Sun. The MythBusters built a small-scale replica of the lunar landing site with a flat surface and a single distant spotlight to represent the Sun. They took a photo, and all the shadows in it were parallel, just as the myth proposed a single light source should produce. They then adjusted the topography of the model surface to include a slight hill around the location of the near rocks, so that the shadows fell on a slope instead of a flat surface. The resulting photograph had the same shadow directions as the original NASA photograph from Apollo 14.

Another claim holds that a famous Apollo 11 photograph of an astronaut standing in the lander's shadow is too well lit to have been taken with a single light source. To test this, they built a much larger-scale (1:6) replica of the landing site, including a dust surface with a color and albedo similar to lunar soil. The MythBusters then took a photograph which was nearly identical to the original NASA photo from Apollo 11. The MythBusters explained that the astronaut was visible because of light being reflected off the Moon's surface.
The Okolona series consists of deep, well-drained, very slowly permeable soils on uplands of the Blackland Prairie Major Land Resource Area. These are nearly level to gently sloping soils that formed in calcareous clayey material underlain by marly clay and chalk. These soils have very high shrink-swell potential. Slopes range from 0 to 5 percent.
Earth's Rotation Teacher Resources

Find Earth's Rotation educational ideas and activities.

Earth's Rotation Changes and the Length of the Day (Grades 7-10, Math). In this Earth's rotation and day-length worksheet, students are given a table with the period of geological time, the age of the Earth and the total days per year. Students calculate the number of hours per day in each geological era...

Day and Night: Interdisciplinary Study of Cyclic Change (Grade 8). Eighth graders conduct a "Length of Day Symposium." They complete a variety of activities and explorations regarding the earth's rotation, its revolution around the sun and the cyclic changes in climate and energy distribution on the...
Global Change Master Directory. NASA, Goddard Space Flight Center. Large website with data sets, web information, and technical reports on a wide variety of topics related to global climate change, including agriculture, atmosphere, oceans, paleoclimate, climate indicators, sun-earth interactions, and more. There are also links to 200 sites for background information, links to more than 70 lesson plans, and links to more than 150 classroom activities.

Climate Change Kids Site. U.S. Environmental Protection Agency. EPA's website contains information, online animations, and games related to climate and global warming, including Climate Change, Greenhouse Effect, Climate Systems, Climate's Come a Long Way (history of climate change), Climate Detectives, Can We Change the Climate?, and We Can Make a Difference. Online word searches, puzzles, and quizzes are also offered.

Global Change. U.S. Geological Survey. Grades 4-6. Activities are presented to assist in teaching concepts of global change. Includes the sections Introduction and Activities, Teacher's Guide, Time and Cycles, Change and Cycles, and Earth as a Home.

Stabilization Wedges Game. Connecticut Energy Education. Grades 9-12. This lesson and game was created to emphasize the need for early action in order to find solutions to the greenhouse gas problem. The game introduces the concept that no single action will be sufficient and that only through a combination of many actions will a doubling of atmospheric carbon dioxide over the next 50 years be avoided.

Carbon and Climate. University of Wisconsin-Madison. This site provides brief explanations, with some images, of climate change relative to the carbon cycle, atmosphere, fossil fuels, land use, land uptake, and ocean uptake. The site has an interactive applet in which users can change the amount of CO2 emitted from sources or put into sinks through the year 3000, and then see what the global temperature change will be based on those changes.

Environment. U.S. Energy Information Administration (EIA). This website provides official statistics for U.S. CO2 emissions (and other emissions) by state, region, and country, along with international data; power plant emissions; carbon emission factors; an annual emissions report (with graphs and maps that can be useful for teaching); and many technical reports and projections of emissions trends and energy use.

Climate Change. U.S. Environmental Protection Agency. This site offers a wealth of data concerning various aspects of climate change, including data (e.g., emissions inventories, calculators, etc.) and information about climate-change basics, FAQs, the science of climate change, climate policy, greenhouse gases, health and environment issues, climate economics, and what you can do to limit your carbon footprint. Lots of information.

Intergovernmental Panel on Climate Change (IPCC). This is the panel that has been leading international awareness of, and concern about, global climate change (global warming). The site offers the reports on the panel's findings concerning climate change, the physical-science basis of climate change, the impacts of climate change, and the mitigation of climate change, along with press releases, graphics, and more information.

Carbon Dioxide Information Analysis Center (CDIAC). Oak Ridge National Laboratory, U.S. Department of Energy. The Carbon Dioxide Information Analysis Center (CDIAC) is the primary climate-change data and information analysis center of the U.S. Department of Energy.
The site provides data and reports on concentrations of CO2 and other greenhouse gases in the atmosphere; the role of the terrestrial biosphere and the oceans in the biogeochemical cycles of greenhouse gases; emissions of carbon dioxide to the atmosphere; long-term climate trends; the effects of elevated carbon dioxide on vegetation; and the vulnerability of coastal areas to rising sea level.

Climate Program Office. U.S. National Oceanic and Atmospheric Administration. NOAA's climate goal is to "understand and describe climate variability and change to enhance society's ability to plan and respond." The website contains summaries of the various programs and datasets available through NOAA, including observations and analyses, climate forcing, predictions and projections, and climate and ecosystems. Other features include spotlight events and information, featured events and research, and climate outlooks.

Global Warming Art. Wikipedia. A collection of useful graphics (graphs and maps) to help teach global climate change.

Global Warming Images. ScienceDaily. This site offers images and commentary on various aspects of global warming and climate that might help in teaching global climate change.

Goddard Space Flight Center. This site offers educational materials in both the space and earth sciences. The Mission to Planet Earth project is a global research effort with interagency and international partners investigating patterns in climate that will allow better prediction of, and response to, environmental events such as floods and severe winters.

Exploring the Environment, NASA. A NASA Classroom of the Future. This site provides an Earth on Fire module for middle and high school students under the subheading Modules and Activities. The module examines humans' impact on the environment and provides information under the subheadings Carbon Cycle, the Culprits, and Solutions.

Institute on Climate and Planets, NASA. This is a research, science education, and minority outreach program of the Goddard Space Flight Center. It is aimed at pre-college and undergraduate students and seeks to foster collaborations with students, teachers, and schools. Under the subheading Education Strategies, the site provides numerous atmospheric, climate change, and weather modules, free software, data sets, and analysis tools. Under the subheading Climate Research, the site summarizes current scientific research, abstracts, etc., concerning climate change.

U.S. Global Change Research Information Office. Provides links to a collection of resources about global climate change and environmental education resources for K-12 educators and students.

Aspen Global Change Institute, Colorado. Provides a search interface that allows you to select different types of global-change information by the audience and format you need, summaries of scientific articles on global change, and a list of publications, videos, and programs for K-12 teachers, which are for sale.

Center for the Study of Carbon Dioxide and Global Change. This web site was created "to disseminate factual reports and sound commentary... on the climatic and biological consequences of the ongoing rise in the air's CO2 content... In this endeavor it attempts to separate reality from rhetoric in the emotionally-charged debate that swirls around the subject of carbon dioxide and global change." This site often offers another opinion in ongoing debates about global climate change, with reviews of journal articles and editorials.
The site has an interesting world temperature-trend calculator, in which you input a beginning year (as far back as 1880), an ending year, and a range of latitude and longitude, and the calculator plots a graph of the temperature change over that period. Under the section U.S. Climate Data, you can also plot temperature and precipitation trends for 1,221 locations in the United States across the same time period. There is also an interesting experiment section that shows students how to collect and measure CO2.
Is Common Core's Effect on Achievement Fading?

A NAEP-score analysis raises the question

The common core's impact on student achievement may have peaked early and already tapered off, according to a new analysis of national test scores by the Brookings Institution's Brown Center on Education Policy.

"Most people when they think about common core, they think we won't see an impact for 10 years," said Tom Loveless, a nonresident senior fellow at the Brookings Institution and the author of the report. "This is telling me the opposite."

Most states adopted the common standards in 2010, although they may not have fully implemented them in classrooms for some time after. According to this year's Brown Center Report on American Education, 4th and 8th grade students in states that adopted the Common Core State Standards outperformed their peers on the National Assessment of Educational Progress between 2009 and 2013. But between 2013 and 2015, students in non-adoption states made larger gains than those in common-core states. This means that "common core may have already had its biggest impact," said Loveless.

However, other experts say it's still much too early to be drawing conclusions about how the common core is affecting student assessment data. States are in all different stages of implementation, said Mike Kirst, the president of the California State Board of Education. "At least speaking only for California, in the early part, we were doing very little," he said. "We were in such a primitive state of implementation ... there isn't enough treatment to do the measurement."

The report, the 15th in the Brown Center series, also looks at whether the common-core standards really are altering classroom instruction—and finds evidence that they are. The common core aims to get students reading more nonfiction than they have previously. In 4th grade, about half the texts students read should be fiction and half should be nonfiction, the standards say. By 8th grade, the balance should tip toward nonfiction. Looking at NAEP survey data, Loveless found that many teachers appear to be making this change. In 8th grade, just 25 percent of teachers said they put a heavy emphasis on nonfiction reading in 2009. Six years later, that share was up to 36 percent. "The dominance of fiction is waning," writes Loveless.

Math and Course-Taking

The study also shows that 4th grade teachers are not teaching as much data and geometry as they did previously—a shift that also aligns with the common core. And the common core may be changing 8th graders' course-taking habits, the study finds. For decades, there have been concerted efforts in many places to get more 8th graders taking Algebra I, traditionally a high school course. But Loveless writes that, "from 2011 to 2013, the relative growth of advanced courses stopped dead in its tracks." Then between 2013 and 2015, 8th graders' enrollment in Algebra I declined from 48 percent to 43 percent, according to NAEP data, while enrollment in general math increased. That's likely because the common core delineates a single 8th grade math course for all students, Loveless explains. Common-core experts have noted that the 8th grade math course is a much tougher course than what was traditionally taught at that level—it now includes many concepts that students used to learn in Algebra I. So getting to advanced math early is now a tougher climb.
Overall, Loveless says these findings in 4th and 8th grades indicate that "curriculum and instruction are changing at the ground level of schooling." That's not a hugely surprising finding, many say. But the Brown Center analysis likely doesn't tell the whole story. The NAEP teacher survey data on implementation is all self-reported, and the study only looks at small slices of the common core at two grade levels.

"It's interesting to see [common core] is grabbing hold," said Kirst. "However, what he has is pretty superficial. Common core features analysis, synthesis, interpretation, modeling, communication, extrapolation. ... [For a full picture of implementation] you'd have to measure really deeply how things are being taught and changed and what's going on in classrooms in terms of instruction at a deeper level than this report has."

In analyzing how NAEP scores and common-core implementation are linked, Loveless divided states into three categories: strong implementers, medium implementers, and nonadopters of the common core. States that planned to have fully implemented the English/language arts common-core standards by the end of the 2012-13 school year were considered strong implementers. "I used that as a proxy for the level of commitment state officials had to implementing the standards," Loveless explained. Those with slower adoption timelines were in the medium category.

However, some experts have questioned those labels. "Frankly, the timeline states set up may or may not have a relationship to when the standards were implemented in classrooms," said Mike Cohen, the president of Achieve, which led the development of the common-core standards.

The nonadopters category includes seven states, three of which initially adopted but then reversed that decision. Indiana and South Carolina both reversed adoption, but then ended up approving new standards that look very similar to the common core. "If you count Indiana as a nonadopter but don't look at the standards, you're not characterizing it the right way," said Cohen.

Loveless said it's a "legitimate concern" that the nonadopters ended up with common-core-like standards. "But what went on in Indiana was a political controversy, which even if they winded up adopting the same standards and giving them a different name, that controversy may have had an impact on classrooms and curriculum," he said.

Chris Minnich, the executive director of the Council of Chief State School Officers, which facilitated the development of the common core, pointed out that all states have raised the expectations for students in recent years—even those that never adopted the common core. In terms of common core versus noncommon-core states, "we're not really looking at it that way anymore," he said.

And yet despite those higher expectations, NAEP scores overall declined from 2013 to 2015—for the first time in about two decades. "We weren't celebrating in the early years of NAEP" after states raised their standards, said Minnich. "The bigger thing for us is the longer-term view of performance."

Other critics of the Brown Center report noted that NAEP may not be the best means for measuring the common core's effect. A recent report by the NAEP Validity Studies Panel, an independent panel run by the American Institutes for Research, found that NAEP is reasonably, though not entirely, aligned with the common core.
For 4th grade math, the researchers found that 79 percent of NAEP's test items matched material from the common-core standards at or below that grade level.

"There's real dispute as to whether NAEP is an appropriate and complete assessment to measure common core," said Kirst. "If we're teaching stuff in 5th grade that they're testing in 4th grade, that's a problem."

Loveless agreed NAEP may not be a perfect measure. "There's some truth to that," he said, "but we don't have any other national assessment to judge what's going on."

Vol. 35, Issue 26, Page 6
1) General Questions on SQL Server

What is RDBMS?
Relational Database Management Systems (RDBMS) are database management systems that maintain data records and indices in tables. Relationships may be created and maintained across and among the data and tables. In a relational database, relationships between data items are expressed by means of tables; interdependencies among these tables are expressed by data values rather than by pointers. This allows a high degree of data independence. An RDBMS has the capability to recombine the data items from different files, providing powerful tools for data usage.

What are the properties of relational tables?
Relational tables have six properties:
- Values are atomic.
- Column values are of the same kind.
- Each row is unique.
- The sequence of columns is insignificant.
- The sequence of rows is insignificant.
- Each column must have a unique name.

What is Normalization?
Database normalization is a data design and organization process applied to data structures, based on rules that help in building relational databases. In relational database design, the process of organizing data to minimize redundancy is called normalization. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.

What are the different normalization forms?
- 1NF: Eliminate Repeating Groups. Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
- 2NF: Eliminate Redundant Data. If an attribute depends on only part of a multi-valued key, remove it to a separate table.
- 3NF: Eliminate Columns Not Dependent on the Key. If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.
- BCNF: Boyce-Codd Normal Form. If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
- 4NF: Isolate Independent Multiple Relationships. No table may contain two or more 1:n or n:m relationships that are not directly related.
- 5NF: Isolate Semantically Related Multiple Relationships. There may be practical constraints on information that justify separating logically related many-to-many relationships.
- ONF: Optimal Normal Form. A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
- DKNF: Domain-Key Normal Form. A model free from all modification anomalies is said to be in DKNF.

Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database. (A small worked sketch follows below.)

What is De-normalization?
De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while providing physical storage of data that is tuned for high performance. De-normalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access.
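As a concrete illustration of the normal forms just listed, here is a small, hypothetical T-SQL sketch. The schema and all table and column names are invented for this example and are not from the original article. A single flat table mixing customers, orders, and products violates 1NF-3NF; splitting it up fixes that:

```sql
-- Un-normalized: one flat table with repeating groups (Product1,
-- Product2, ...) and customer facts repeated on every order row:
--   FlatOrders(OrderID, CustomerName, CustomerCity,
--              Product1, Price1, Product2, Price2, ...)

-- Normalized to 3NF: every non-key column depends on the key,
-- the whole key, and nothing but the key.
CREATE TABLE dbo.Customers (
    CustomerID INT PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL,
    City       NVARCHAR(100)
);

CREATE TABLE dbo.Products (
    ProductID INT PRIMARY KEY,
    Name      NVARCHAR(100) NOT NULL,
    Price     DECIMAL(10, 2) NOT NULL
);

CREATE TABLE dbo.Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL REFERENCES dbo.Customers (CustomerID),
    OrderDate  DATETIME NOT NULL
);

-- Resolves the many-to-many link between orders and products:
-- 1NF (no repeating product columns) and 2NF (Quantity depends
-- on the whole composite key, not part of it).
CREATE TABLE dbo.OrderLines (
    OrderID   INT NOT NULL REFERENCES dbo.Orders (OrderID),
    ProductID INT NOT NULL REFERENCES dbo.Products (ProductID),
    Quantity  INT NOT NULL,
    PRIMARY KEY (OrderID, ProductID)
);
```

Each fact now lives in exactly one place: changing a customer's city or a product's price touches a single row instead of every order that mentions it.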
What is a Stored Procedure?
A stored procedure is a named group of SQL statements that has been previously created and stored in the server database. Stored procedures accept input parameters, so that a single procedure can be used over the network by several clients using different input data; and when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and improve performance, and they can be used to help ensure the integrity of the database. (Sketches of this and the following objects appear at the end of this section.)

What is a Trigger?
A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed directly; the DBMS automatically fires the trigger as a result of a data modification to the associated table. Triggers can be viewed as similar to stored procedures in that both consist of procedural logic stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the procedure, while triggers are implicitly executed. In addition, triggers can also execute stored procedures.

Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of a data modification, it can cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.

What is a View?
A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as for updating or deleting rows. Rows updated or deleted through the view are updated or deleted in the table the view was created from. It should also be noted that as data in the original table changes, so does the data in the view, as views are a way of looking at part of the original table. The results of using a view are not permanently stored in the database. The data accessed through a view is actually constructed using a standard T-SQL SELECT command and can come from one or many different base tables, or even from other views.

What is an Index?
An index is a physical structure containing pointers to the data. Indexes are created on an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. Users cannot see the indexes; they are just used to speed up queries. Effective indexes are one of the best ways to improve performance in a database application. A table scan happens when there is no index available to help a query; in a table scan, SQL Server examines every row in the table to satisfy the query results. Table scans are sometimes unavoidable, but on large tables, scans have a terrific impact on performance.

What is a Linked Server?
Linked servers are a concept in SQL Server by which we can add another SQL Server to a group and query both SQL Server databases using T-SQL statements. With a linked server, you can create very clean, easy-to-follow SQL statements that allow remote data to be retrieved, joined, and combined with local data. The system procedure sp_addlinkedserver is used to register a new linked server, and sp_addlinkedsrvlogin to map logins for it.

Reference: Pinal Dave (http://blog.SQLAuthority.com)
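The following minimal T-SQL sketches illustrate the objects discussed in this Q&A. They assume the small invented Customers/Orders/Products schema from the normalization sketch earlier; all procedure, trigger, index, and server names here are likewise invented for illustration, not taken from the original post.

A parameterized stored procedure, and a call that any client can reuse with different input:

```sql
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
GO

EXEC dbo.GetOrdersByCustomer @CustomerID = 42;
```

An AFTER UPDATE trigger that reacts to a data modification by writing an audit row. Note that it reads the inserted and deleted pseudo-tables, so it correctly handles multi-row updates in a single firing:

```sql
CREATE TABLE dbo.PriceAudit (
    ProductID INT,
    OldPrice  DECIMAL(10, 2),
    NewPrice  DECIMAL(10, 2),
    ChangedAt DATETIME DEFAULT GETDATE()
);
GO

CREATE TRIGGER dbo.trgProductPriceAudit
ON dbo.Products
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.PriceAudit (ProductID, OldPrice, NewPrice)
    SELECT d.ProductID, d.Price, i.Price
    FROM deleted  AS d
    JOIN inserted AS i ON i.ProductID = d.ProductID
    WHERE i.Price <> d.Price;   -- log only real price changes
END;
GO
```

A view over a join, plus an index on the join column to spare SQL Server a table scan:

```sql
CREATE VIEW dbo.vOrderSummary
AS
SELECT c.Name AS Customer, o.OrderID, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders    AS o ON o.CustomerID = c.CustomerID;
GO

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID);
```

Registering a (hypothetical) remote server and querying it with a four-part name:

```sql
EXEC sp_addlinkedserver
    @server     = N'REMOTESRV',       -- invented server name
    @srvproduct = N'SQL Server';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'REMOTESRV',
    @useself     = N'FALSE',
    @locallogin  = NULL,
    @rmtuser     = N'report_reader',  -- invented remote credentials
    @rmtpassword = N'********';

-- Remote data can now be joined and combined with local data.
SELECT TOP (10) *
FROM REMOTESRV.SalesDb.dbo.Orders;
```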
Objectives
Students will:
- review the possible consequences of making risky choices,
- become familiar with statistics involving teens, and
- create and play a board game designed to show how bad choices can lead to setbacks.

Materials
- Computer with Internet access
- Poster board, markers, paper, index cards, and bottle caps or other game tokens

Procedures
- After watching the video, ask students: What are some choices they will be faced with that could affect the rest of their lives? (Whether to drive safely; to finish high school; to lie, cheat, or steal; and to treat others with consideration.)
- Review some of the statistics presented in the program and listed below. Then ask the class what the statistics mean to them. Do they know of teens who have been in car accidents or dropped out of school? What has happened to them? How might cheating or plagiarizing a paper affect a student's future? Why do students think teens are more likely to make risky choices than adults? Do they think teens consider consequences when they make a bad choice?
  - Teen drivers are twice as likely to be involved in a fatal accident as other drivers.
  - Every year 6,000 teens die and 600,000 are hurt in car crashes.
  - Those who don't finish high school are more susceptible to health, economic, and social problems than those who do.
  - High school dropouts are twice as likely to have incomes below the poverty level as those who finish school.
  - It is more likely that a violent crime will be committed by a teen than by an adult in the United States.
  - The percentage of students who admit to cheating in school is 97 percent.
- To help students realize the effects decisions can have, they will develop a board game called Choices and Consequences. Divide the class into groups of four. Each group will come up with 24 different choices and outcomes: 12 good and 12 bad. The choices will be real-life ones; the outcomes will either move the player forward in the game (for a good choice) or set him or her back (for a bad choice). Some examples of choices and outcomes are listed here. Students can decide the number of spaces a player will be moved forward or back according to how big a boon or setback a choice might be. To keep the game moving, though, they should probably limit setbacks to no more than three spaces, and not include too many such choices.
  - You cheat on a math quiz. Move back one space.
  - You help a younger student practice reading. Move ahead a space.
  - You drive too fast and run a red light. Move back two spaces.
  - You refuse to shoplift a CD even though your friend urges you to do it. Move ahead two spaces.
  - You drink at a party and are involved in a car accident. Move back three spaces.
  - You stay in school and earn a graduate degree. Move ahead three spaces.
- The game board the students create should have a starting space and at least 40 steps or moves to reach the end goal: in this case, a bright future! Game boards might be designed to look like a ladder in which players advance up rungs, a path with stepping stones, or a staircase in which players move up and down steps. They should draw their game board on poster board and write each of the 24 choices and outcomes on an index card. Use simple objects such as bottle caps as tokens for each player.
- To play, shuffle the index cards and place them face down on the game board. Students take turns drawing cards and moving their tokens along the board. They must draw a good outcome card to make the first move. If they get moved back to the start, they'll need to get another good outcome card to start again.
Continue drawing cards and making moves. Reshuffle the cards once they've all been used and continue until one player reaches the end: a bright future.
- Should students need some fodder to come up with their choices and outcomes, the resources listed below will come in handy.
- Some of the topics covered in this program are suited for older students, since only high school students can drive or drop out of school. To make the game activity more appropriate for them, eliminate the game board and simply add and subtract points (the same as moving forward or back a certain number of spaces) based on the outcomes of the choices they draw from the shuffled stack of cards.
- Making good choices builds good character. You'll find character-building activities, handouts, links, and information for students of all ages at the web site of Character Counts: http://www.charactercounts.org/howto/teaching-tools.htm. Goodcharacter.com at http://www.goodcharacter.com is another site with many resources for planning extension activities to strengthen character.
- Education equals earning power. Let students make the connection by researching average incomes and education levels. A useful publication that downloads in PDF format is at the U.S. Census Bureau page http://www.census.gov/prod/2002pubs/p23-210.pdf. Students can read the publication, examine the numerous charts and graphs, and draw their own conclusions about the correlation between education and income.

Evaluation
Use the following three-point rubric to evaluate students' work during this lesson.
- Three points: Students were highly engaged in class discussions and devised outstanding choices and outcomes for their board game.
- Two points: Students participated in class discussions and devised adequate choices and outcomes for their board game.
- One point: Students participated minimally in class discussions and failed to develop enough choices and outcomes to complete their board game.

Vocabulary
character
Definition: An assessment of a person's values, traits, and abilities.
Context: Sean was a boy of good character; he unselfishly helped others.

consequence
Definition: The result of a decision or course of action.
Context: When Tiffany decided to drop out of school, she didn't consider the consequences of not being able to earn a good living.

hazing
Definition: Humiliating or punishing someone, often as a rite of initiation.
Context: Freshmen at Josh's school were often victims of hazing in which upperclassmen stole their books or made them sing or dance in public.

plagiarism
Definition: Using someone else's written work without attributing it.
Context: Turning in a composition downloaded from the Internet is plagiarism. It can get a student a failing grade or suspension from school.

reputation
Definition: The image, either good or bad, that others have of someone.
Context: Sean had always had a reputation as a good student until he was caught cheating on a test.

Academic Standards
The National Science Education Standards provide guidelines for teaching science as well as a coherent vision of what it means to be scientifically literate for students in grades K-12. To view the standards, visit http://books.nap.edu. This lesson plan addresses the following national standards:
- Science as Inquiry: Abilities necessary to do scientific inquiry; Understandings about scientific inquiry
- Science in Personal and Social Perspectives: Personal health; Risks and benefits

Rhonda Lucas Donald, curriculum writer, editor, and consultant
Let's be clear: This is a parlor trick, not neuroscience. Nonetheless, with the help of some friends, I was able to make a toy shark fly through the air using brain waves. So even if it's a parlor trick, it's a trick worth doing!

First things first. When attempting to make something fly using your mind, it is important to choose a target object that compels attention. It's also important that the object have the power to move itself in some way. This project uses brain waves to control an object's movements; we cannot move the object directly with our minds. This is not the Force, after all. So I chose to use an Air Swimmers toy: a remote-controlled helium balloon that's shaped like a shark. When you press some buttons on a remote control, the shark swishes its tail and "swims" through the air with a mesmerizing motion.

I started by modifying the remote control. Some friends and I opened up the remote and soldered wires to the connections of the various push buttons. We then attached the wires to the pins on an Arduino microcontroller. By sending commands from a PC to the Arduino, I could pull the voltage of the pins high or low, making the shark respond as if I had pressed the corresponding button on the remote.

Now I had to feed the PC information from my brain. The key piece of hardware needed for mind control is an electroencephalograph system. EEG systems use electrodes placed on the scalp to pick up electrical signals produced by brain activity. Usually, they are bulky and expensive. A while back, however, I had a hand in helping my friends at OpenBCI develop a low-cost, open-source EEG system that can be easily hooked up to a computer.

The current version of OpenBCI's kit is a US $450 board built around a 32-bit PIC microcontroller. The 6- by 6-centimeter board can record up to eight EEG channels at once. Microvolt-level signals from the electrodes are amplified and fed into a 24-bit, low-noise, analog-to-digital converter chip. EEG data can be stored locally on an SD memory card or transmitted in real time via a Bluetooth connection.

The hardest part of this mind control project was figuring out how to interpret the data streaming in from the board. EEG interpretation is not easy because, to be technical, EEG signals are a crazy mess. EEG recordings are a jumble of the signatures of many brain processes. Detecting conscious thoughts like "Shark, please swim forward" is way beyond even state-of-the-art equipment. The electrical signature of a single thought is lost in the furious chatter of 100 billion neurons.

To accommodate this limitation, I chose to alter my expectations for how the system would work. Instead of looking for specific thoughts, I looked for an EEG signature that would be naturally easy to detect and that I could use to signal intent. The easiest such signal occurs whenever you close your eyes: For most people, when the eyes are closed, a strong 10-hertz brain wave begins across the back of the head, where the brain's visual processing centers are located. The 10-Hz brain wave is such an obvious feature that it was one of the first signals identified when the EEG was initially developed (which is why waves with a similar frequency are called alpha waves).

So to control my shark, I decided to focus on the brain signature of closing my eyes. I pulled out my OpenBCI EEG kit and connected two electrodes to it. I placed one EEG electrode on the back of my head and then the other on a neutral location (my earlobe) to provide a reference signal.
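For readers who want to build something similar, here is a minimal sketch of what the Arduino side described above can look like. It is my illustrative reconstruction, not the project's actual code: the pin numbers and the one-character serial protocol are invented, and whether a simulated "press" means driving a pin high or low depends on how the particular remote's buttons are wired.

```cpp
// Hypothetical wiring: one Arduino pin soldered to each remote button.
const int PIN_FORWARD = 2;
const int PIN_LEFT    = 3;
const int PIN_RIGHT   = 4;
const int PIN_UP      = 5;
const int PIN_DOWN    = 6;
const int pins[] = { PIN_FORWARD, PIN_LEFT, PIN_RIGHT, PIN_UP, PIN_DOWN };
const int NUM_PINS = 5;

void setup() {
  Serial.begin(115200);               // serial link to the PC
  for (int i = 0; i < NUM_PINS; i++) {
    pinMode(pins[i], OUTPUT);
    digitalWrite(pins[i], LOW);       // all "buttons" released at start
  }
}

void loop() {
  if (Serial.available() > 0) {
    int c = Serial.read();
    // The PC sends '0'..'4' to press a button, 'x' to release them all.
    if (c >= '0' && c < '0' + NUM_PINS) {
      digitalWrite(pins[c - '0'], HIGH);   // simulate pressing one button
    } else if (c == 'x') {
      for (int i = 0; i < NUM_PINS; i++) {
        digitalWrite(pins[i], LOW);        // release everything
      }
    }
  }
}
```

The PC-side program then only has to write a single character down the serial port ('0' for forward, say) whenever it detects the right brain signal, as described next.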
On my computer, I used software developed by OpenBCI to receive the data and to convert the raw time-varying signal data into the frequency domain, which made it much easier to look for peaks of activity at specific frequencies. I modified the software to look for a peak at 10 Hz and, if detected, to send a shark command out to the Arduino. As a result, whenever I closed my eyes, the shark swam forward.

While this worked great for commanding a single action, the shark is capable of five different motions: forward, left, right, up, down. One way to take advantage of this might be to map five distinct brain signals to the five different shark actions. The brain is not so easily read, however. I could not find five distinct yet easy-to-detect signals.

As before, the solution was to alter my expectations. Rather than control the shark by myself, I decided it would be more fun to control it as a group. So I enlisted four friends. Connecting five people at once to a traditional EEG is not really possible: Traditional systems permit only a single reference electrode against which all other channels are measured. But OpenBCI is built to be flexible, and each of its eight EEG channels can have its own reference electrode. Connecting five people to one OpenBCI board is not a problem.

With all five people hooked up, the computer looked for eyes-closed alpha waves in each person's data stream. I modified the software to associate each data stream with one specific shark command. So, depending upon which shark motion we wanted, the correct person simply had to close his eyes.

As you can imagine, our coordination was poor. It was like those three-legged races, but with five people instead of two, and with our brains tied together instead of our legs. The outcome of this near-chaos was hilarity as the shark lurched through the air. But we did it. Five-person mind control! One heck of a parlor trick.

This article originally appeared in print as "Mind Control."
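For readers who want to see roughly what that frequency-domain check looks like, here is a minimal Python sketch of the eyes-closed detector, assuming a 250 Hz sample rate, a five-channel window of samples, and a made-up power threshold; the actual OpenBCI software differs in detail.

```python
# A minimal sketch of the detection logic described above; the sample
# rate, threshold, and channel-to-command mapping are assumptions.
import numpy as np

FS = 250                      # samples per second (assumed)
BAND = (8.0, 12.0)            # alpha band around the 10 Hz peak
CHANNEL_COMMANDS = ["forward", "left", "right", "up", "down"]

def band_power(window: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Mean spectral power of one channel within [lo, hi] Hz."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(window.size))) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[mask].mean())

def eyes_closed_commands(samples: np.ndarray, threshold: float = 50.0):
    """samples: (5 channels, N) window of EEG data. Yields the shark
    command for every channel whose alpha power crosses the threshold,
    i.e. every person whose eyes appear to be closed."""
    for ch, window in enumerate(samples):
        if band_power(window, FS, *BAND) > threshold:
            yield CHANNEL_COMMANDS[ch]
```

Each detected command would then be handed to the serial helper shown earlier, so that one person's closed eyes translates into one button press on the remote.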
U.S. Drought Monitor Update for February 12, 2013
According to the February 12, 2013 U.S. Drought Monitor, moderate to exceptional drought covers 55.7% of the contiguous United States, a decrease from last week's 56.8%. The worst drought categories (extreme to exceptional drought) also decreased, from 19.1% to 17.7%. Potent weather systems brought beneficial rain and snow, which improved drought conditions in the Southeast and parts of the Plains, while improving snowpack conditions led to slight drought reductions in the West. In addition to Drought.gov, you can find further information on the current drought at the National Drought Mitigation Center. The most recent U.S. Drought Outlook is available from NOAA's Climate Prediction Center, and the U.S. Department of Agriculture's World Agricultural Outlook Board provides information about the drought's influence on crops and livestock.
Phrasal Verb – Carry On
The phrasal verb 'carry on' means to continue with something.
Examples:
- Don't give up now; you must carry on with your plans to open a new business.
- Carry on to the end of the road and then turn right.
- Carry on quietly with your work until the teacher arrives.
The phrasal verb 'carry on' has another meaning, which is 'to behave badly'.
Example: The children have been carrying on all morning and driving me mad.
Details about The Oxford Handbook of Analytical Sociology:
Analytical sociology is a strategy for understanding the social world. It is concerned with explaining important social facts such as network structures, patterns of residential segregation, typical beliefs, cultural tastes, and common ways of acting. It explains such facts not merely by relating them to other social facts, but by detailing in clear and precise ways the mechanisms through which the social facts were brought about. Making sense of the relationship between micro and macro thus is one of the central concerns of analytical sociology. The approach is a contemporary incarnation of Robert K. Merton's notion of middle-range theory and represents a vision of sociological theory as a tool-box of semi-general theories, each of which is adequate for explaining certain types of phenomena. The Handbook of Analytical Sociology brings together some of the most prominent sociologists in the world in a concerted effort to move sociology in a more analytical and rigorous direction. Some of the chapters focus on action and interaction as the cogs and wheels of social processes, while others consider the dynamic social processes that these actions and interactions bring about.
Rent The Oxford Handbook of Analytical Sociology 1st edition today, or search our site for other textbooks by Peter Hedstrom. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Oxford University Press.
A Venn diagram is two or more intersecting/overlapping circles that students use to identify similarities and differences between given topics. Characteristics unique to each idea are written in the non-intersecting portions of the circles. Characteristics that apply to both of the given topics are written in the intersection of the circles.

Procedures:
- Choose topics for comparison.
- Create large overlapping circles that provide plenty of writing space.
- Write the topics as headings on each circle.
- Have students use their knowledge and data from a reading selection to write the similarities and differences between the topics.
- Use the diagram to help students find patterns and relationships.

1. Invite students to compare and contrast different species. How are manatees and dugongs alike and different? How are whooping cranes and Sandhill cranes alike and different? How are monarch and viceroy butterflies alike and different? How are caribou and deer alike and different?
2. Adaptations: Students compare and contrast how different species adapt to environmental factors to meet their survival needs.
3. By Land, By Sea, By Air: How are the migratory journeys of various species alike and different? Ask students to find the similarities and differences of species that fly through the skies, swim in the seas, or roam across land during migration.

1. Create overlapping sections with different shapes, such as rectangles or ovals. These shapes provide additional writing space for detailed charts.
2. Invite students to draw Venn-style diagrams with pictures specific to the topic. For example, overlapping clouds could be drawn to compare and contrast migratory species that take flight.

An outline of a butterfly makes a great Venn diagram. Students write distinctive characteristics about two different kinds of butterflies, such as the monarch and viceroy. They record similarities on the body of the butterfly. This picture graph could also be used to compare butterflies and moths.

Reading Strategies: Compare and Contrast Ideas, Classify Information, Make Connections, Synthesize Ideas, Summarize Main Ideas and Details
— This article by Jerry Cates, first published on 12 September 2010, was last revised on 12 August 2013. © Bugsinthenews Vol. 11:09(01).

- The following exposition covers an infestation of non-biting midges and tetragnathid spiders at Walden Marina, on Lake Conroe, in southeast Texas. Populations of these concomitant organisms at many Texas lakes have been on a steady incline for at least a decade; in August 2007, for example, the simultaneous bloom of chironomid midges and tetragnathid spiders produced a gigantic spider web along a section of Lake Tawakoni State Park, at Wills Point in northeast Texas, that attracted worldwide scientific and media attention (Quinn & Garde, 2007). Two years later and some 200 miles to the south, at Lake Conroe and Lake Houston, populations of these organisms — which had been increasing each year since at least 1997 — reached unbearable proportions. That's when Allison Harpold, the manager of Walden Marina, contacted me to see if her midge/spider infestation could be resolved without applying toxic pesticides. As a charter member of the Certified Clean Texas Marina program, she insisted on following not just the letter, but the spirit, of that program.
- This report describes the midges and spiders that were involved, the initial palliative measures that were taken until a workable, non-toxic solution could be developed, and the ultimate approach that is slowly, but successfully, returning Walden Marina to the pristine condition that existed before the midge population began its steady incline. Today, at Walden Marina, only localized pockets of the midge and spider infestation observed in 2009 remain, while along the surrounding shoreline of Lake Conroe populations of both organisms continue to soar unabated. Most important — as Allison insisted — not a single drop of toxic pesticide has been applied at Walden Marina in the process.

It is difficult to imagine a more idyllic setting. Before you stretch the placid waters of Lake Conroe, a large, inland, freshwater lake in southeast Texas. It is a warm, sunny day, and the breeze smells sweet. You're ready — to the point of excited impatience — to take your watercraft out for a spin on the water.

To you, the waterways are more than a source of recreation; they're also a precious natural resource. Like most boaters, you do your part to protect them, and you know that Allison Harpold, the conscientious, hard-working manager at Walden Marina, does everything in her power to make it as clean and green as possible. The honors she's garnered over the years for Walden Marina, as a long-standing, charter Certified Clean Texas Marina, testify to her competence. Over the years she has managed to win the coveted Clean Texas Marina of the Year award several times. In fact, in 2011, she won that award for the fourth time, a record that no other Texas marina can boast. Then, in 2012, she won it for the fifth time, adding to her laurels.

Allison engages firms like EntomoBiotics Inc. to help keep Walden Marina as pesticide-free as humanly possible, a factor that 2011 Marina Association of Texas president Jodi Looper, of Lakeway Marina, and Clean Texas Marina project chairman Dewayne Hollin, of Texas A&M University, took into account when making the award. In the photo at left, Allison's son (middle) accepts the award from Jodi Looper, MAT President (right), and Dewayne Hollin, CTM Chair, Texas A&M University (left), at the annual Marina Association of Texas banquet.
Allison could not attend, as the low lake levels produced by the drought of 2011 — which wreaked havoc on lakes and waterways all across Texas — had created such monumental challenges at the marina that she had to stay behind to deal with them.

So, here you are, at Walden Marina… It's a great feeling to know you, your boat, and this lake are in such good hands. Seriously, one poorly managed marina can produce a lot of pollution in a year's time. Imagine what a few decades can cause… But not here. Not on Allison Harpold's watch.

A few days before, you'd made a special trip to check your boat's hull and superstructure, tidy up your part of the dock, and stock up on provisions. Now you look forward to enjoying the fruits of those labors. As you approach the slip where your boat is berthed, you notice that things have changed since your last visit. It's hard to believe it — you were here just a few days ago — but thick spider webs are everywhere. Not only that, but a huge cloud of annoying flying insects swarms around your boat and your face. You raise your boat out of the water for a closer inspection of the hull, and find streamers of gooey, snot-like gel hanging from parts that had been underwater.

While daydreaming a little about your upcoming stint on the lake, you hadn't paid much attention to the dock and the other boats on the way down. Now you look around, and discover that all the other boats are just like yours. Covered with flies and spiders. Immediately you realize what has happened. Since your last visit the cold days of late winter and early spring have gone, replaced by balmy breezes spawned by a warm, summer-like sun. Along with that transition have come a few unwelcome changes in the local fauna: the lake flies are back, and this year they've come back early…

Depending on where you live, you might call these insects sand flies, muckleheads, muffleheads, blind mosquitoes, or chizzywinks — if you call them anything at all. Regardless, if they don't bite, and under close scrutiny some (the males) are seen to have plume-like antennae, they're one or more species of non-biting midges in the Chironomidae family. Scientists generally refer to them as chironomids, as though the moniker refers to a very specific kind of distinctive animal. In a way, it does, as the various species are all so much alike that many cannot be distinguished solely by their outward appearances. Still, worldwide some 5,000 species of these flies have been described to date, 700 or so unique to North America. Most likely, as with similar infestations elsewhere, a number of species of chironomids are represented at Walden Marina.
Along with them, seemingly overnight as well, have come several species of spiders. One genus of spiders (Tetragnatha) in particular — the longjawed orb weavers of the Tetragnathidae family — seems determined to enmesh everything in sight in the webbing that the spiders found here are busily spinning. These spiders thrive on the abundant midges, chowing down with gusto on every fly that gets caught in their webs. They mate early and often, and lay eggs in specially constructed casings plastered to boats, posts, canvas coverings, and anything else that lies still for the indignity. In that sequence, they quickly blossom into huge, semi-social spider colonies housed within the protective confines of expansive, unkempt silken snares.

Their webs are so large, and are woven with such abandon, that they threaten to wrap even you into their domains if you hesitate in one spot for even a moment. You think this as you flick a spider off your clothing and unwind a long sash of webbing from your midsection. Gad… they're everywhere!

These flies and the spiders that feed on them can get out of hand quickly. You saw it happen last year, and it wasn't pretty. The underlying cause — the water conditions that spawn these flies and, as a consequence, attract so many spiders — isn't any prettier. As you think this, you peer into the water around your boat and confirm with your eyes what your mind already expected; the water is murky, glazed with a surface layer of dead flies, egg masses, and a bunch of other, indescribable gunk.

Chironomid midges result from lake pollution of one kind or another, the kind that results in brown-colored, turbid water. Deep below the water surface, at the lake floor itself, they require a nutrient-rich muck to hatch into and feed on while developing as larvae, and to pupate in while awaiting the changes that transform their bodies from larvae to adult flies.

Often the source of lake water pollution is strictly a function of ordinary, natural events that don't involve humans. Mother Nature is not above polluting her own waterways, under certain circumstances, often with a seeming careless insensitivity that overshadows the pollution caused by man. It is a well-known fact that some of the most severe chironomid fly infestations in the world are endemic to what might well be described as "pristine" lakes. Pristine, because they are located in the middle of perfectly natural settings, unsullied by human habitations and industry, where only native animals, birds, and marine fauna live, breed, and — yes — pollute. A good example is Midge Lake, on Livingston Island, Antarctica, which — along with a number of similar lakes on the island — is home to some of the largest midge infestations ever recorded on earth, yet lies thousands of miles from the nearest human-caused pollution sources (Toro, et al. 2006).

In other locales, though, humans easily — and often unwittingly — play an important, if not overarching, part in helping these midges thrive by contributing to natural pollution sources and making things worse. Everything used on the water — or even within miles of the water, where rain-water runoff washes everything that floats or dissolves into creeks and streams that carry them on to the lake — to wash, paint, and maintain our automobiles, lawns, homes, boats, and docks should be considered a potential water pollutant. It takes serious thought and conscientious effort to make sure nothing you do creates conditions that are hurtful to the water, the myriad forms of wildlife that depend on it, or your fellow boaters. Pesticides, in particular, around lakes and other bodies of water almost always cause more problems than they solve, and tend — in almost every case we have examined — to make bad situations worse.

Because Allison reminds every boater at Walden Marina regularly about their pledge not to use pollutants here, the source of the pollution these chironomids thrive on is almost certainly not the marina itself. Chances are the original source was something as seemingly innocent as the chemical fertilizers used to create and maintain the beautiful, emerald green lawns at the residential homes and apartment complexes ringing the marina's northwestern perimeter.
Only a thin layer of nutrient-rich matter, easily produced by fertilizers and similar chemicals that are washed into the marina by a few heavy rains, is needed to start a chironomid midge population going. They initially concentrate their populations where the shadows of the marina's docks and slips block the streams of cleansing sunlight from reaching the lake floor. This allows the nutrient-rich muck to thicken even more, while the growing chironomid population's life-cycle activities, and those of their natural predators — the tetragnathid spiders — start to turn the water a deepening brown that eventually blocks the sunlight even in the open water. Once that state is reached, the chironomids can develop anywhere on the lake floor.

As the midge population develops, its life stages create their own pollution. When introduced into the water, millions of jelly-like egg masses, midge larvae, pupal casings, and bodies of expired adults provide all the pollutants needed to support a life cycle that is destined to explode upward toward more and more midges as the years go by. It may take years for midge populations to surge from their small beginnings to the point that they become major polluters of the waterways they spring from, but once the cycle gets started, it slowly, but surely, spirals out of control. By September of 2009 the midge population at Walden Marina had reached the breaking point.

So, from a biological perspective, what are these lake flies? And what kinds of spiders do their infestations tend to attract, at least in the subtropical climes of Texas? Why are they associated with our freshwater and coastal lakes? How should they be dealt with, not only to relieve boat-owners, swimmers, and fishing enthusiasts of their annoying presence, but in ways that avoid adding to the pollution of the environment, or harming the fish and other aquatic wildlife, not to mention the water, at such locations? These and other questions will be discussed below.

This narrative and its accompanying photos describe how this beautiful marina became the scene of an ugly infestation of chironomid midges and longjawed orb weaver spiders. Details are also provided on the research we carried out to determine what caused the infestation, and the corrective measures that were eventually charted to successfully bring that infestation under control.

Chironomid lake flies are non-biting midges in the insect order Diptera (Greek δι-, "dye-" = two + -πτερον, "-(puh)TARE-on" = wings; thus "two wings"), a reference to the fact that true flies, such as houseflies, midges, and mosquitoes, have only one pair of wings. These midges are further classed under the suborder Nematocera (Greek νεμα-, "NEE-ma-" = thread- + -κερας, "-SEHR-as" = -horns, a reference to their thin, segmented antennae which, in the males, tend to look like plumes and thus are described as plumose).

Though flies that bite (such as mosquitoes) are almost always important vectors of disease, these lake flies — in the Chironomidae family (from the Greek expression χειρονομεω, "keer-AWN-oh-moh" = 'I gesticulate,' a reference to the exaggerated, plumose antennae possessed by some of the males in this family, which are displayed to seduce the females of their species) — are non-biting midges. Their larvae hatch from eggs — laid in the water by adult females — that float on the surface for a time before sinking to the lake bottom.
On hatching, they first feed on the nutritious gel of the egg mass, and then, on the bottom of the lake, they burrow into the lake floor's oxygen-poor, nutrient-rich muck to feed on sediment nutrients until they are ready to pupate. The pupated fly transitions to the adult form while still buried in the muck, then — once it fully matures inside its pupal casing — breaks out and floats upward to the surface of the water, where — if it doesn't get gobbled up first by a minnow, a tadpole, or a fish — it emerges as an adult fly.

Both the larvae and pupae of flies in the Chironomidae family serve as food for game fish. Some feed on them while they are still in the muck, and others feed on them while they move toward the surface to emerge as adult flies. In this regard the midges are useful, even critical, members of the freshwater marine ecosystem. Exterminating them from Lake Conroe entirely would not be a good thing. Any control measure adopted at Walden Marina would have to bear that in mind.

Adult flies are not dangerous to humans or other animals, though their swarms can be quite disruptive of human congregations. Their swarms are annoying when the flies buzz around our heads, land on our shirts and trousers, and drop into our drinks and onto exposed food. They are also destructive when they stain walls, boat hulls and covers, and articles of clothing with fecal excretions. Accumulations of dead midges are known to clog drains, obscure windshields, and mar the surfaces on which they collect. Under certain circumstances they reportedly produce allergic, asthmatic reactions in humans who aspirate them into their lungs or get them into their eyes. Worst of all, they attract the host of previously mentioned predatory spiders that erect nasty sheets of webbing to ensnare and feed on them. And, unlike the midges, the spiders bite. And though their venom isn't considered deadly to humans, the bite is painful, and the angry, swollen wounds they leave behind are — to say the least — unsightly.

The precipitating cause of the chironomid midge population that began to increase at Lake Conroe in the mid-to-late 1990s is open to conjecture. Regardless of the underlying cause, those early population increases set the stage for continued development. Over a period of at least ten years of slow but steady growth, the initially small number of midges grew into large swarms, producing a plague of near-Biblical proportions that — each year — generally peaked in the months of August and September.

As the midge populations grew, so did those of their natural predators. All of the solitary spiders in the Araneidae family (the common orb weavers that spin the webs commonly seen in forests, woodlands, meadows, and residential yards) tend to be found in larger than usual numbers wherever large numbers of flies can be found. Jumping spiders in the Salticidae family are beneficiaries of large fly populations as well. So long as these solitary spiders are not challenged by large numbers of predatory spiders from other families, they thrive, albeit in densities that fall short of those found among the much rarer social spiders. The populations of solitary spiders remain at relatively low densities because of their intrinsically competitive natures, which make them fiercely antagonistic toward one another. In close quarters, conflict inevitably results in the death — and consumption — of one by the other.
Certain other spiders, however, particularly certain species of longjawed orb weavers in the family Tetragnathidae, tend to leave one another alone, to the point that they willingly build and share what may be considered communal webs. Often such spiders even cooperate in capturing and consuming prey.

- The genus Tetragnatha derives its name from the Greek τετρας-, "tetras-" = four + -γναθος, "-gnathos" = jaw, which means, of course, "four-jaws," and thus seems to refer to a solpugid rather than a spider.
- Because the jaws of tetragnathid spiders are paired like those of all other spiders, the etymology of the word τετραγναθος, as applied to these spiders in particular by ancient writers such as Aelian, Strabo, and Pliny, and much later, in 1804, by the French arachnologist Latreille, is somewhat obscure; Ubick et al., 2005, p. 323, sheds some light on the mystery, and the reader may wish to consult that resource for more details.
- Tetragnathids sharing communal webs may be able to ensnare more prey than would be possible with less expansive webbing; the presence of several spiders, together, may confer added advantages such as the ability to incapacitate large prey more quickly. In any case, populations of tetragnathids sharing communal nests tend to increase rapidly in the presence of large numbers of flying insects.
- Another spider, the southeastern social cobweb spider (Quinn: Theridiidae: Anelosimus studiosus Hentz 1850), native to a large area stretching from New England to Argentina, is prone to creating large social groupings when abundant prey makes spider sociality propitious. However, though this species was initially suspected of being involved in the spider infestations involving chironomid midges at Lake Tawakoni State Park, assays of the spiders there and at Lake Conroe have not found many of them present at either location.

It is common for the female spiders from most families outside the Tetragnathidae to fiercely defend their territories. Under ordinary circumstances such pugnacity contributes to survival of the spider, but in the presence of multiple tetragnathids that engage in cooperative attacks on antagonistic species, it becomes a detractor. Solitary, pugnacious orbweavers are soon either devoured or displaced by more sociable spiders, some of whom — though not organically ill-tempered — often bring rather formidable weapons to whatever fight they are invited to. Interestingly, the males of those other families spend much of their time wandering about in the territories of others, seeking mates of their species, which may explain why most of the non-tetragnathid spiders found at Walden Marina tended to be males.

The dynamics that lead to large populations of tetragnathid spiders in lake, pond, and marina settings are more complicated than this brief exposition suggests, but suffice it to say that tetragnathid spiders and chironomid midges tend to go together, at least in the subtropical climes found throughout much of Texas, along the Gulf coast to Florida, and westward to southern California.

At the height of the seasonal infestation that took place during the summer of 2009, literally millions of newly emergent adult midges took flight each night, just from the waters of the marina itself.
Then, besides leaving ugly spots on boats, docks, and furnishings with their fecal matter, they polluted the lake water with their feces, pupal casings, dead bodies, and the huge masses of sticky, jelly-like clumps of eggs laid by female midges after they mated. Large quantities of such impurities were added to the water each day by this process, increasing its turbidity and making it impossible to peer into the water's depths with any degree of clarity. Secchi disk measurements of the clarity of the water inside Walden Marina, in September 2009, revealed that sunlight — even on a bright, sunny, cloudless day — could not penetrate beyond a few inches below the water's surface anywhere in the marina.

Sunlight is one of nature's most important purifying cleansers. The rays of the sun break down pollutants like nothing else, but wherever sunlight cannot penetrate, pollutants decay so slowly that they accumulate and produce habitat for organisms, like chironomid midges, that feed on the pollutants and, later, add to them.

The tetragnathid spiders attracted to Walden Marina by the chironomid midges, besides festooning docks, boats, and surfaces of the water with their extensive webbing, produced large numbers of bright white egg cases that were glued to the hulls of boats, the exposed surfaces of the docks, and everything else their webs came into contact with. Obviously, the need to find a means of eradicating both the lake flies and the spiders became a high priority for Allison.

Eradicating the midges and spiders from this marina was not destined to be an easy task, though. Even the most innocent kinds of pest management here would have to be carried out in consonance with the highest standards of the Clean Texas Marina (CTM) program. As with all "environmentally friendly" programs, the challenges presented by the situation at Walden Marina were daunting. Fortunately, though, the various parts of the puzzle fit together neatly, so corrections in one area tended to produce corresponding corrections elsewhere. Chironomid midge infestations are not separate from, but rather the underlying cause of, the tetragnathid spider infestation. Wherever the midge infestation was resolved, the population of tetragnathid spiders immediately dropped below the threshold of annoyance. The ultimate goal, therefore, was to bring the midges under control.

Since EntomoBiotics Inc. is a pioneer in the use of bio-rational PestAvoidance methods that eradicate pests through habitat modification rather than with pesticides, we viewed the constraints of the CTM program as a positive good. Besides charting a long-term solution to the midge infestation, immediate stop-gap measures were instituted to mitigate the effects associated with the tetragnathid spiders. Pesticides being out of the question, we supplied Walden Marina with 5-gallon pails of the non-toxic, plant-oil-based cleansers that we manufacture, to be applied to boats and docks to remove spider webbing, egg masses, and fecal spotting. Though these products are not pesticides, their use made conditions more bearable for the boaters at Walden Marina until the core problem — the midges themselves — could be dealt with. Important modifications were made to the formulas for these natural plant-oil cleansers, first to ensure that they would not mar the plastics and fabrics of the boats, as well as to minimize the amount of d-Limonene the cleansers contained.
Eventually all traces of d-Limonene were removed from all our cleansers, making them eminently suitable for use in an aquatic environment. Scientific studies suggest that d-Limonene, in large quantities, besides being toxic to fish and other aquatic organisms, may have an adverse effect on certain aquatic ecosystems due to its mediation of water nitrogen levels. Such effects are generally temporary, and involve high concentrations. Though the quantities we used at Walden Marina were never expected to come close to the thresholds cited in those studies, we preferred to err on the side of prudence and eliminate that group of terpenes altogether.

From September through November of 2009 we focused on cleaning up the docks and boats at Walden Marina with a reformulated essential plant oil cleanser (without d-Limonene) while carrying out basic research on the biological implications of severe chironomid midge infestations within marina environments. Throughout 2010 our research on these issues continued while, behind the scenes, we held a number of discussions with industry experts, measured the temperature and oxygen levels of the water and lake floor of the Walden Marina estuary, and documented the marina basin's bathymetry.

For a number of months our discussions with industry experts appeared quite positive. Though the remedies these experts recommended were expensive to buy, install, and service, the quoted prices of several remained within Allison's budgetary limits. It was discouraging, but not surprising, to learn that none of these approaches promised a quick fix to the midge infestation. Much like the cyclic process that preceded it, the decline these methods offered would have to be gradual and measured, provided they worked at all. But would they work?

Bio-rational control measures targeting chironomid midge populations elsewhere in the U.S., Canada, Australia, and New Zealand, among others, have traditionally sought to disrupt the life cycles of the midge's larval and pupal stages through the use of one or several mechanical devices. Those devices were developed to enable marina, lake, and pond managers to alter conditions in the bodies of water under their control to make them less attractive and nurturing to midges, eventually reaching the point where the essential ingredients needed for the midges to thrive were no longer present.

Any hope of disrupting the chironomid midge's life cycle must consider the nutrients, temperatures, and oxygen levels at the lake floor itself. At Walden Marina, that meant, in general, the oxygen levels existing 18-24 feet below the surface of the water. Low oxygen and high nutrient levels at the lake floor offer the highest potential for midge propagation. Traditional eradication methods use mechanical processes that increase the amount of dissolved oxygen, and reduce the quantity of nutrients, at depth, as a first step. One highly successful approach to disrupting the midge's life cycle circulates the water in order to mix upper levels with lower levels, tending toward a more or less uniform oxygenation level throughout. A second approach artificially oxygenates the water at depth without physically mixing the various water levels, usually by infusing the water with a steady stream of air bubbles, to accomplish much the same objective. Increasing the oxygen levels at the lake floor leads to the oxidation of nutrients that would otherwise be available to the midges.
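Equipment of this kind has to be sized to the basin it serves, a point that becomes important below. The sketch that follows shows the kind of first-pass arithmetic involved; every figure in it (basin volume, target oxygen increase, per-unit transfer rate) is hypothetical, and real sizing would also have to account for transfer efficiency at depth and water temperature.

```python
# Rough first-pass aeration sizing; all figures are hypothetical.
import math

def diffusers_needed(volume_m3: float, do_deficit_mg_l: float,
                     hours: float, unit_kg_o2_per_h: float) -> int:
    """How many diffuser units are needed to raise dissolved oxygen by
    do_deficit_mg_l across volume_m3 within the given number of hours."""
    # 1 mg/L equals 1 g/m^3, so total oxygen mass required, in kg:
    o2_kg = volume_m3 * do_deficit_mg_l / 1000.0
    required_rate = o2_kg / hours                  # kg O2 per hour
    return max(1, math.ceil(required_rate / unit_kg_o2_per_h))

# e.g. a 6 m deep, 200 m x 100 m basin that is short 3 mg/L of oxygen,
# to be corrected over 48 hours with units rated at 1.5 kg O2/h each:
volume = 200 * 100 * 6                             # 120,000 m^3
print(diffusers_needed(volume, 3.0, 48, 1.5))      # -> 5 units
```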
In the face of the nutrient losses that oxygenation produces, the larvae are starved before they can pupate. Over time, then, the midge population declines. Being environmentally friendly, non-pesticidal approaches to the midge problem, both methods have received well-deserved acclaim. But they don't always work, particularly if certain conditions apply where they are to be used. In other cases, even where they have a fighting chance of working, the necessary energy and equipment required may exceed all bounds of practicality. Before choosing which of these solutions to recommend to Walden Marina, we first needed to make a few important calculations.

Deciding how many of each of these mechanisms would be needed to achieve the desired life-cycle disruptions depended on the size of the body of water involved. Because Walden Marina is an ancillary part of a much larger body of water (Lake Conroe), any evaluation done here must also take into consideration the manner in which the waters of the two associated bodies mix. Ideally, from the standpoint of these oxygenation devices, Walden Marina should have the character of a closed basin whose deepest water consists of an entirely captive sub-pool that does not mix with the water of Lake Conroe at all. If, contrariwise, all the water in Walden Marina mixes regularly with Lake Conroe, any effort to enrich the oxygen levels of the marina would fail simply because all that oxygenated water would quickly flow out into the much larger lake.

On measuring water temperatures in the marina at one-foot intervals, we discovered that, beyond the first few feet of surface water, the temperatures were surprisingly constant. This was a puzzling development. It seemed to imply that none of the water in the marina is captive, but instead — in its entirety — is a dynamic extension of Lake Conroe proper. To test this hypothesis — with help from one of the marina's boat owners — we took soundings throughout the marina basin, including the inlet to Lake Conroe. The results confirmed what the uniform water temperatures implied. Walden Marina is blessed with an inlet channel that is as deep as its deepest interior floor. No portion of its estuary is captive, with an internal circulation separate from that of Lake Conroe. That discovery ruled out the use of oxygenators and water circulators as a means of achieving control of local populations of chironomid midges. As one of our consultants put it, "You simply cannot oxygenate the whole of Lake Conroe."

At first blush, that seemed to leave few options beyond the use of pesticidal toxicants. One of the options that many marinas have found quite attractive involves a system of intermittent misters, fed through tubes emanating from a central mist generator that periodically pressurizes the tubing, ejecting a pesticidal mist from a series of nozzles placed aloft and regularly spaced throughout the marina's environs. This system works well as a means of exterminating the tetragnathid spiders, but it has little or no effect on the chironomid midges. Several marinas, resorts, and similar venues that use such systems have informed me that, while generally free of spiders, they continue to experience a steady increase in chironomid midges with each passing year.

One other pesticidal option remained — at least provisionally — on the table for Walden Marina. It involved the application of insect growth regulators (IGRs).
Though some IGRs are toxic and others are not, the EPA regulates even the non-toxic IGRs under the same basic rubric used to regulate toxic pesticides. Two IGRs presently on the market are known to be effective against insects that exhibit, in their life cycles, the complete metamorphosis that occurs in dipteran organisms such as chironomid midges and mosquitoes.

One of these, the mildly toxic and somewhat exotic IGR pyriproxyfen (4-phenoxyphenyl (RS)-2-(2-pyridyloxy)propyl ether, also written as 2-[1-(4-phenoxyphenoxy)propan-2-yloxy]pyridine), is extremely effective against both chironomid midges and mosquitoes. However, because pyriproxyfen is toxic to many aquatic organisms, degrades slowly in water, has the potential to bio-accumulate in aquatic organisms, and is not labeled for applications to waterways, we never considered using it at Walden Marina.

Another, an IGR which happens to be essentially non-toxic to mammals, has a long history of performing well against dipteran organisms. It is a rather ho-hum, non-exotic synthetic terpenoid — specifically an acyclic sesquiterpene — that was initially derived from botanical sources. It has a pleasant, floral fragrance and a well-established mode of action. This IGR molecule is known as methoprene (11-methoxy-3,7,11-trimethyl-2,4-dodecadienoic acid 1-methylethyl ester), and functions in arthropod biology strictly as a juvenile growth hormone analog. It is not otherwise toxic, rapidly biodegrades in water, and does not bio-accumulate in aquatic organisms. Furthermore, its manufacturer, Wellmark International, markets methoprene under the brand name Altosid® for use in bodies of water, primarily against mosquitoes. This relatively expensive, but efficacious, formulation binds the methoprene, in a proprietary process, to heavy granules. When released in water, the granules carry it downward, directly to the lake floor. There, in a slow-release process, the methoprene acts to prevent dipteran larvae from transitioning from larva to adult. As such, methoprene is widely touted as being a safe, effective means of protecting large populations of humans from mosquito-borne diseases such as West Nile virus.

After considerable research into the utility and safety of methoprene when applied as Altosid® granules for mosquito control, an experiment was carried out with it in March 2011 at Walden Marina. Though in earlier years mosquitoes and midges did not begin to emerge from Lake Conroe before the month of May, in 2011 we found large numbers of adult flies at Walden Marina in early March. This suggested that, unless stringent efforts were taken early that year, the marina faced an onslaught of these pests in greater proportions than in previous years. Accordingly, that month, the first methoprene application was conducted. Soon afterward, the populations of these dipterans noticeably declined.

An April 2011 survey of the docks at Walden Marina observed small swarms of live chironomid midges at three isolated locations. Elsewhere they were found in small numbers if a thorough search was carried out, but nowhere — beyond the three locations noted above — were they abundant. A second application of methoprene IGR was carried out that month, with emphasis on the locations where the isolated swarms had been found previously. However, only half the total amount of methoprene was applied in comparison with the application carried out the previous month, in consonance with our commitment to using the least amount necessary to meet the marina's objectives.
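The dosing taper we followed, a full application in March, half of that in April, and half again in May, amounts to simple geometric halving. The sketch below captures that schedule with a placeholder starting quantity, not the actual amounts or label rates used at Walden Marina.

```python
# Generate a geometrically tapered application schedule; the starting
# quantity and number of rounds are placeholders, not real label rates.
def taper_schedule(initial_lbs: float, rounds: int,
                   factor: float = 0.5) -> list[float]:
    """Per-application amounts: a full dose, then repeated halving."""
    return [initial_lbs * factor ** i for i in range(rounds)]

# March = full, April = half, May = a quarter of the original amount:
print(taper_schedule(100.0, 3))   # -> [100.0, 50.0, 25.0]
```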
A subsequent survey of Walden Marina, conducted in May 2011, found isolated specimens of chironomid midges. No swarms were found anywhere on the docks or anywhere along the shoreline. Very few tetragnathid spiders were observed as well, indicating that these arachnids were not catching abundant prey. Their webs were clear of dipteran carcasses, supporting that conclusion. A third application of methoprene was carried out, reducing the total amount applied to half the amount applied in the month of April.

Once the mosquitoes and chironomid midges stopped emerging from the waters of Walden Marina in large numbers on a nightly basis, the clarity of the water improved remarkably. This was partly unexpected, as a number of long-time residents at the marina had expressed the opinion that the lake water circulating from Lake Conroe itself was perennially turbid and had been so for as many years as they could remember. However, during the survey conducted in May 2011 one resident pointed out that, for the first time in the past six years, he could now clearly see the underwater light that is submerged about ten feet below the surface of the dock where his boat is berthed. He recalled that when he first arrived at the marina several years before that, the light had been perfectly visible. A few years later it could no longer be seen, though at night a dull glow betrayed the fact that it was still there.

Our research at Walden Marina was discontinued at the end of 2011. The assistance provided by EntomoBiotics Inc. was geared toward helping Allison Harpold maintain Walden Marina in full compliance with the Clean Texas Marina program. It accomplished that objective, but at a high cost that eventually made it too expensive to continue.

Had it been possible to continue this program, we would have focused on the optimal mix and quantity of safe and effective habitat modification procedures and materials needed to keep the chironomid midges at Walden Marina under control. Execution of control protocols must, however, avoid disrupting the ecosystem in the marina and Lake Conroe in a negative way. Lake flies provide food for game fish. Elimination of all the midges on the periphery of Walden Marina is inconsistent with good environmental stewardship, though total cessation of midge activity within the marina itself is a worthwhile, environmentally sound, and — judging from our experiences in 2011 — realistically achievable objective.

The Itch Effect

One of the lessons learned during the 2011 phase of this project was that eliminating 90-95% of the chironomid midges at Walden Marina is unacceptable. Total elimination within the marina itself is mandatory. Because of what we have come to call the Itch Effect, allowing even 5% of the midge populations of past years to remain elicits a severe, and unacceptable, annoyance response in boat owners who have experienced swarms of these midges in the past. Think how it would be if you had a serious, chronic itch of some months' or years' duration. Question: What good would it do to reduce that itch by 95%? Answer: Very little. It all has to go, or the little that remains will remind you so much of the old, chronic itch that you don't feel much better. On the way up, things are not so bad (the famous "boiling the frog" effect) until the breaking point is reached. On the way down, after crossing back over the breaking point, it all has to go.
What the Itch Effect means is that an extended chironomid midge abatement program at Walden Marina would have to be even more aggressive in the future than it had been in the past. For reasons explained below, however, that cannot mean increasing the amount of methoprene applied here. Our challenge, instead, was to find new ways to modify the underwater habitat at Walden Marina, despite the realization that no part of the marina basin is captive, while simultaneously avoiding pesticidal products of all kinds, including all non-toxic juvenile hormone analogues.

The Postulated Methoprene/Methyl Farnesoate Connection

The methoprene molecule — an acyclic sesquiterpene analogous to the dipteran juvenile hormone JHB3 (a.k.a. JH III bisepoxide) — though non-toxic to humans and other mammals, has the potential to affect non-target aquatic organisms. It has been found, for example, that methoprene is chemically similar to the crustacean juvenile hormone, methyl farnesoate, which is secreted by certain species of lobsters and crayfish.

The crayfish species Procambarus clarkii Girard 1852 is widespread in this region. We have not found it in the waters at Walden Marina, where its usual habitat preferences are lacking, but it seems likely to be close by, plying the bottoms of nearby streams, creeks, and ponds. While famous as a crucial ingredient in Cajun cuisine, P. clarkii is often considered an invasive pest, is nowhere regarded as endangered, and practically everywhere is infamous for wreaking havoc on the habitat it occupies. For example, its burrows damage water courses and rice crops, and its voracious feeding antics disrupt native ecosystems. Worse, it is a known vector for infectious fungal, bacterial, and helminthic agents, including the crayfish plague fungus Aphanomyces astaci, the bacterial disease vibriosis, and a long list of parasitic worms that attack vertebrates, including man. Regardless of this litany of negatives, P. clarkii remains a prized food item for some people.

And, by happenstance, it is also known to secrete the juvenile hormone methyl farnesoate as part of its molting process. That chemical, while similar to methoprene, differs from it in lacking an epoxide group (a cyclic ether with three ring atoms). Many authorities assert that this disparity, by itself, is enough to prevent methoprene from posing a risk to crustaceans, like P. clarkii, that are exposed to it. Some (e.g., Felterman & Zou, 2011) have even concluded that, for many if not most crustaceans, the presence of exogenous methyl farnesoate in the environment does not negatively impact their development. A thorough review of the extant literature suggests, however, that a measurable potentiality of such a risk, while evidently quite low, still remains.

As mentioned above, that risk, even if confirmed, had no effect on our work at Walden Marina, as no crustaceans have been observed there. Still, the program we carried out at Walden Marina was intended to be applicable to as many other locales as possible, including those with thriving populations of crustaceans. That, coupled with the imperative of erring on the side of caution, made the postulated methoprene/methyl farnesoate connection problematic. Accordingly, we embarked on a serious research and development program to replace the use of methoprene with more strenuous mechanical methods of habitat modification, in concert with the insertion of non-toxic, essential plant oils, surgically introduced in ultra-low concentrations into the beds of lakes and watercourses.
This approach requires the development of proprietary substrates that allow us to target limited portions of the lake or watercourse bed directly, without actually introducing the plant oils into the water itself. In the process of cleansing the underwater floor, this approach has no impact whatever on the water that covers it or on the nearby, but untouched, lake or watercourse bed. In this manner it enables us to produce — in select portions of the lake or watercourse bed itself — an environment that neither nurtures nor attracts pollutant-related organisms, such as chironomid midges, that thrive by subsisting on pollution-mediated nutrients. And it does so precisely as other ordinary cleansers and soaps do, wherever they are used, but without introducing foreign chemicals or pollutants into the water itself:

- Unlike most other cleansers and soaps, the proprietary architecture of the micro-cleansers being transported, and the substrates that transport them, ensures that the intended cleansing action is strictly limited to the microhabitat into which it is placed. There it metabolizes, disperses, oxidizes, and/or washes away the pollutant-mediated nutrients with which it comes into contact. These micro-cleansers are not pesticides; they do nothing more than cleanse the micro-habitat into which they are introduced, without exterminating, repelling, or mitigating the organisms occupying that habitat.

As we observed in the course of the 2009-2011 experiment carried out at Walden Marina, the positive effects produced by this micro-cleansing action extended far beyond the micro-habitat itself, and included such things as reducing overall water turbidity, eliminating the flotsam associated with the aquaculture of chironomid midges, and eliminating the jetsam produced by the latter's natural predators, the tetragnathid spiders.

Droughts and Other Challenges

Conditions at Lake Conroe — which was built in 1973 on the West Fork of the San Jacinto River as a reserve water reservoir for the Houston metropolitan area — are not static, either from month to month, year to year, or even decade to decade. At no time has that been more obvious than during the 2009-2011 drought, during which all of Texas, including the host watershed for every drop of water that flows into Lake Conroe, suffered under drought conditions more dire than any ever before recorded.

The two side-by-side photos of the Walden Marina dock ramp, shown above, were taken on 7 October (left) and 9 November (right) of 2011. Ordinarily, all of the "land" shown in the foreground should be under at least four feet of water, but the continuing drought brought lake levels down, exposing more and more of the lake bed. Notice how much more of the lake bed was exposed in the span of only one month and two days. The photos directly at left, of the lake water depth gauge, were taken on those same dates as the photos of the dock ramp. They reveal the rest of the story. On 7 October lake levels were down about 6½ feet from normal, but by 9 November the lake had dropped to almost 8 feet below normal.

Walden Marina is the deepest marina on Lake Conroe. That's a good thing, because — even with an 8 ft. drop in lake levels — all of the water vessels docked here could still navigate from their slips to the lake. Lake Conroe itself stood at 67% of capacity as of 11 November (NOAA Drought Information Statement 11-11-2011).
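A quick bit of arithmetic on those two gauge readings shows why every marina manager on the lake was watching the numbers so closely. The navigation threshold in the sketch below is an assumed figure, used only to illustrate the projection.

```python
# Average drawdown between the two 2011 gauge readings, and a naive
# straight-line projection; the 10 ft navigation limit is assumed.
from datetime import date

d1, level1 = date(2011, 10, 7), -6.5    # feet relative to normal pool
d2, level2 = date(2011, 11, 9), -8.0

days = (d2 - d1).days                   # 33 days between readings
rate = (level2 - level1) / days         # about -0.045 ft per day
print(round(rate, 3))                   # -> -0.045

limit = -10.0                           # assumed navigation threshold
days_left = (limit - level2) / rate     # at the same rate of decline
print(round(days_left))                 # -> 44 more days
```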
Meanwhile, rainfall records for the city of Conroe were 35 inches below normal, and though some rainfall was expected from isolated thunderstorm activity, the long-range forecast through January of 2012 was not promising. Drought conditions were expected to continue through mid-winter, as climate signals were indicating the commencement of a new La Niña episode, with warmer than normal temperatures and below normal rainfall in the coming months. In other words, Allison Harpold, along with every other marina manager on Lake Conroe, was busily studying her options in case worst came to worst and lake levels dropped below the threshold needed for maritime navigation here. That kind of situation was very painful to contemplate…

Though enormously complicated in the past, the conditions at Lake Conroe are likely to present dramatically more acute challenges in the future. The level of this 21,000-acre lake has always been subject to wide fluctuations. The lake depends entirely on rainfall within the Lake Conroe watershed for all of the water that flows into it. It must respond immediately to demands placed by authorities in the Houston metropolitan area, a 10-county region that owns two-thirds of the water rights of Lake Conroe. Those demands are huge now, and destined to grow dramatically in the coming years. Though the worst of the drought ended in 2012, and a welcome increase in rainfall that year and in the first half of 2013 managed to bring lake levels back to normal and keep them there, the future for reservoirs like Lake Conroe appears bleak, at best.

According to the 2010 census, the Houston–Sugar Land–Baytown metropolitan area was the sixth-largest in the United States, containing at that time nearly 6 million people. The population here is projected to grow much faster than that of most of the nation through 2030, when estimates are that it will become the fifth-largest metropolitan area in the U.S. At this rate, before long even a minor drought in this locale will impose unsustainable demands on Lake Conroe until and unless major additions and improvements are made to regional water utilization rates, transport infrastructure (it is said that over 50% of the clean water that enters Houston's water distribution system is lost due to leakage), and the size and number of regional water collection reservoirs. These are daunting challenges, indeed.

Epidemics of chironomid midges and tetragnathid spiders are serious issues, and must be dealt with to improve the quality of life for those living, boating, and working in areas where these organisms abound. In the process, we must also keep in mind the sensitive nature of the water reservoir and those who depend on it for a myriad of interrelated life processes. Emphasis on the use of Integrated Reduced Impact Methods, to achieve PestAvoidance through Habitat Modification (IRIM-PAHM™), will, as always, overshadow and govern everything we do.

- Aiken, Marie, and Frederick A. Coyle. 2000. Habitat Distribution, Life History and Behavior of Tetragnatha Spider Species in the Great Smoky Mountains National Park. J. Arach. 28:97-106.
- Ali, Arshad, et al. 2008. Population Survey and Control of Chironomidae (Diptera) in Wetlands in Northeast Florida, USA. Florida Entomologist 91(3).
- Ali, Arshad. 1991. Activity of New Formulations of Methoprene Against Midges (Diptera: Chironomidae) in Experimental Ponds. J. American Mosquito Control Assn. 7(4).
- Ali, Arshad. 1995. A Concise Review of Chironomid Midges (Diptera: Chironomidae) as Pests and Their Management. J. Vector Ecology 21(2).
<urn:uuid:04d9241d-8b25-4cb6-a29b-f6f82effcc4f>
CC-MAIN-2016-26
http://bugsinthenews.info/?p=2353
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00143-ip-10-164-35-72.ec2.internal.warc.gz
en
0.945094
11,734
2.90625
3
Satellite image of sea ice concentration on the narwhal wintering grounds in Baffin Bay, February 2002. Arctic Climate Change and Narwhals Dr. Kristin Laidre, Polar Science Center Applied Physics Lab, University of Washington Dr. Mads Peter Heide-Jørgensen Greenland Institute of Natural Resources The sea around Greenland is known for cyclical climate fluctuations (alternating periods of cooling and warming). The sea temperature in Greenland rose rapidly in the 1920s and remained high until the late 1960s. Then it fluctuated in the 1970s and 1980s, and since the 1990s it has been steadily increasing, with the highest temperatures on record occurring in 2005. Between 1952 and 2002, the amount of sea ice increased in Baffin Bay, and the area of open water was reduced. This seems contrary to ‘global warming’, but some regions (e.g., Baffin Bay) have cooled or gained sea ice even as the global climate has changed. Recent observations (2002-2005) indicate that the 50-year cooling trend may have slowed, or even reversed, and there has been less ice in recent years. The relationship between narwhals and sea ice is close and has existed for many thousands of years. Narwhals partition their annual cycle between coastal ice-free summering grounds and offshore wintering grounds covered in dense pack ice. During the autumn, fast ice is created in the narwhal’s summering localities. Before sea ice forms, narwhals leave coastal areas and migrate towards Baffin Bay. The migration ends in mid-November when the narwhals arrive in central Baffin Bay. With the exception of fast ice without cracks (ice attached to land), sea ice is not a barrier to the movements of narwhals. Narwhals choose the same wintering grounds year after year independent of sea ice conditions. Narwhal wintering grounds provide the best opportunity to look at the relationship between a winter whale and sea ice. Narwhals are well adapted to a life in the pack ice, as indicated by how little open water there is in their winter habitat. There is often less than 5% open water on the wintering grounds (between 1 February and 15 April), and minimum estimates are 0.5%. This corresponds to only 150 to 400 km² of open water available for breathing within a 25,000 km² area. The reason narwhals return year after year to an area with such dense sea ice cover is unclear. Although some believe these whales are seeking refuge from killer whales, it is more likely that narwhals need access to predictable prey. Narwhal survival on the northbound spring migration and female condition during calving and nursing in late spring/early summer may depend on food intake during the winter. Therefore the reliable Greenland halibut resources of Baffin Bay provide an attractive food source for surviving the harsh arctic winter. The winter habitat of the narwhal in Baffin Bay and Davis Strait is generally covered in ice, with only a few leads and cracks available for breathing. Although narwhals spend much of their time in heavy ice, they are vulnerable to unique events called ice entrapments. During an ice entrapment, hundreds of whales might become trapped in a small opening in the sea ice, and they often die. This occurs when sudden changes in weather conditions (such as shifts in wind or quick drops in temperature) freeze shut the leads and cracks they were using. Narwhals occupy dense pack ice half of every year, and they lack any ability to break holes in the ice. 
There have been no direct observations of narwhal ice entrapments in central Baffin Bay because the area the whales routinely occupy is hundreds of kilometers from shore and is rarely visited by humans. There are, however, reports of large coastal ice entrapments in areas near where humans live. It is well documented from ice core samples that the climate around Greenland changed rapidly and drastically many times over the last hundred thousand years. Narwhals evolved as a species sometime during the late Pleistocene (roughly 500,000 years ago), a period when temperature and climate changed dramatically. The Pleistocene is also the period in which mammoths, mastodons, saber-tooth cats, and many other large mammals and birds evolved and went extinct. Narwhals have survived periods of high environmental variability and glaciations that covered the species’ whole geographic range with ice. In conclusion, increases in the amount of sea ice might exclude winter whales from habitat they are now using, and may also expose them to increased mortality from sea ice entrapments. Decreases in the amount of sea ice will open new options, but a large change in habitat may end up being detrimental to the fish species that the whales need.
<urn:uuid:6e413264-d076-4b16-8b33-ae23eb75d66c>
CC-MAIN-2016-26
http://oceanexplorer.noaa.gov/explorations/06arctic/background/climate/climate.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00113-ip-10-164-35-72.ec2.internal.warc.gz
en
0.944635
1,186
3.75
4
Spectacular Brasilia becomes the legacy of master architect Niemeyer Oscar Niemeyer, a towering patriarch of modern architecture who shaped the look of modern Brazil and whose inventive, curved designs left their mark on cities worldwide, died late Wednesday. He was 104. Niemeyer had been battling kidney ailments and pneumonia for nearly a month in a Rio de Janeiro hospital. His death was confirmed by a hospital spokesperson. Starting in the 1930s, Niemeyer's career spanned nine decades. His distinctive glass and white-concrete buildings include such landmarks as the United Nations Secretariat in New York, the Communist Party headquarters in Paris and the Roman Catholic Cathedral in Brasilia. He won the 1988 Pritzker Architecture Prize, considered the Nobel Prize of architecture, for the Brasilia cathedral. Its Crown of Thorns cupola fills the church with light and a sense of soaring grandeur despite the fact that most of the building is underground. It was one of dozens of public structures he designed for Brazil's made-to-order capital, a city that helped define space-age style. After flying over Niemeyer's pod-like Congress, futuristic presidential palace and modular ministries in 1961, Yuri Gagarin, the Soviet cosmonaut and first man in space, said the impression was like arriving on another planet. In his home city of Rio de Janeiro, Niemeyer's many projects include the Sambadrome stadium for Carnival parades. Perched across the bay from Rio is the flying saucer he designed for the Niteroi Museum of Contemporary Art. The collection of government buildings in Brasilia, though, remains his most monumental and enduring achievement. Built from scratch in a wild and nearly uninhabited part of Brazil's remote central plateau in just four years, Brasilia opened in 1960.
<urn:uuid:cde814fa-9f4e-4f9a-8a87-95c610ab1a70>
CC-MAIN-2016-26
http://en.mercopress.com/2012/12/06/spectacular-brasilia-becomes-the-legacy-of-master-architect-niemeyer
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00138-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951182
377
3.140625
3
Trigonometric Double-Angle and Half-Angle Formulas Written by tutor Michael B. In this section, you will learn formulas that establish a relationship between the basic trigonometric values (sin, cos, tan) for a particular angle and the trigonometric values for an angle that is either double or half the first angle. These relationships can be very useful in proofs and also in problem solving because they can often be used to simplify an equation. Given the trigonometric values of an angle α, we would like to be able to determine the trigonometric values for another angle 2α: This can be easily accomplished by realizing that 2α = α + α, and utilizing the trigonometric summation formulas. Recall the three summation formulas: sin(α + β) = sin(α)cos(β) + cos(α)sin(β), cos(α + β) = cos(α)cos(β) − sin(α)sin(β), and tan(α + β) = (tan(α) + tan(β)) / (1 − tan(α)tan(β)). From these, we can derive the double-angle formulas for sin(2α), cos(2α), and tan(2α): sin(2α) = 2sin(α)cos(α), cos(2α) = cos²(α) − sin²(α), and tan(2α) = 2tan(α) / (1 − tan²(α)). In addition, the cos(2α) formula has two alternate but common forms. By utilizing the identity sin²(α) + cos²(α) = 1, we can also derive the two formulas cos(2α) = 2cos²(α) − 1 and cos(2α) = 1 − 2sin²(α). It is also important to note that the following relationships are NOT true: sin(2α) ≠ 2sin(α), cos(2α) ≠ 2cos(α), and tan(2α) ≠ 2tan(α). Just as with the double-angle formulas, when given the trigonometric values of an angle α, we would like to be able to determine the trigonometric values for another angle α/2: By solving for sin and cos from the alternate forms of cos(2α), and then substituting α = α/2, we obtain: sin(α/2) = ±√((1 − cos(α)) / 2) and cos(α/2) = ±√((1 + cos(α)) / 2). There is one important thing to note about these two equations. Normally, when one sees the "±" symbol in a math equation, it means to use both the positive and the negative answer. For example, in the quadratic formula, there are two answers - one for the positive version and one for the negative version of the radical. However, in this case, only one answer (either positive or negative) should be selected. The choice is not arbitrary – the student must use information available from the given problem to determine which answer is correct. This is typically done by determining which quadrant the angle α/2 is located in, as the sign of each trigonometric function is strictly determined by the quadrant of the angle (ASTC). The tangent half-angle formula also has three versions that may be useful in different scenarios: tan(α/2) = ±√((1 − cos(α)) / (1 + cos(α))) = sin(α) / (1 + cos(α)) = (1 − cos(α)) / sin(α). Given an angle for which sin(α) = -3/5 in Quadrant III, determine the values for sin(2α), cos(2α), tan(2α), sin(α/2), cos(α/2), and tan(α/2). Use identities to simplify and write an exact expression for each of the following using a single trigonometric function:
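As a quick numerical check of the worked example just stated, the following short Python sketch (the angle construction and variable names here are illustrative additions, not part of the original lesson) confirms the double- and half-angle values for sin(α) = -3/5 with α in Quadrant III:

    import math

    # Construct a Quadrant III angle with sin(a) = -3/5: a = pi + asin(3/5).
    a = math.pi + math.asin(3/5)
    assert math.isclose(math.sin(a), -3/5)   # sin(α) = -3/5
    assert math.isclose(math.cos(a), -4/5)   # cos(α) = -4/5 (negative in QIII)

    # Double-angle values.
    assert math.isclose(math.sin(2*a), 24/25)   # 2 sin(α) cos(α)
    assert math.isclose(math.cos(2*a), 7/25)    # cos²(α) - sin²(α)
    assert math.isclose(math.tan(2*a), 24/7)    # 2 tan(α) / (1 - tan²(α))

    # Half-angle values: α lies in QIII, so α/2 lies in QII, where sine is
    # positive and cosine is negative (ASTC); that fixes the ± signs.
    assert math.isclose(math.sin(a/2),  math.sqrt((1 - math.cos(a)) / 2))  #  3/√10
    assert math.isclose(math.cos(a/2), -math.sqrt((1 + math.cos(a)) / 2))  # -1/√10
    assert math.isclose(math.tan(a/2), -3.0)    # sin(α) / (1 + cos(α)) = -3

All six values carry exactly the signs predicted by the quadrant rule discussed above.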
<urn:uuid:c940cdc7-8bff-45ff-9e0a-047f5eae38eb>
CC-MAIN-2016-26
https://www.wyzant.com/resources/lessons/math/trigonometry/half-angle-double-angle-formulas
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.892336
579
4.3125
4
Shin splints is a fairly vague, non-medical term used by runners to describe pain felt in the general area of the shin or front of the lower leg. The pain could originate from a number of sources but is frequently the result of inflammation in the thin protective layer surrounding the shin bone (tibia), caused by the repetitive impact of running on hard surfaces. How do I know I have shin splints? Pain will be felt in the lower leg. The affected area, generally the front or inner edge, may be tender to touch, or pain may be felt each time the foot hits the ground when walking or running. In some cases, pain experienced at the start of a run may ease off as the muscles warm up but will return afterwards, often with a vengeance the following morning! In some cases, swelling may be visible in the affected area and this will be painful when touched. It may be possible to feel actual lumps and bumps when touching the affected area, especially along the inner edge of the shin bone. In some cases, the inflammation may cause redness to appear in the skin over the affected area. What should I do if I have shin splints? The inflammation is the result of repetitive stress, so continuing to exercise will only exacerbate the situation. Rest is a vital part of the recovery process and no attempt should be made to run until you are pain free. Applying ice treatment is one of the easiest and most effective ways to reduce inflammation and should be done on a daily basis until the affected area is no longer swollen or painful to touch. If possible, stretch the affected area by kneeling on the floor then sitting back gently onto your heels, keeping the tops of your feet flat on the floor. Hold for a count of ten. Repeat the stretch several times and at least three times each day. Seek medical advice If symptoms fail to improve after several days of appropriate treatment, a visit to your doctor is advised to check for any other, more sinister, reasons for your shin pain. What are the causes and how can I prevent shin splints? The most common causes of shin splints in runners are listed below: - Repetitive stress through running on hard surfaces - Making sudden or dramatic changes to your training programme e.g. increasing the mileage or the pace - Running in inadequately cushioned running shoes - Overpronation: an exaggerated inward roll of the foot which places greater stress on the lower leg when running - Oversupination: an exaggerated outward roll of the foot which affects its ability to absorb shock naturally - Continuing to run in worn out running shoes A visit to a specialist running shoe store with a gait analysis facility will help to rule out many of the above causes of shin splints, as gaining an understanding of your running style makes choosing an appropriately cushioned and supportive shoe much easier. Problems such as overpronation or oversupination, once identified, can be corrected by using orthotics prescribed by a sports podiatrist, and then suitable running shoes can be chosen to accommodate them. It then becomes your responsibility not to undo all of your efforts by reading too much into the sales blurb of your new ‘go-further-go-faster’ trainers. Make changes to your training programme gradually and incorporate some softer-surface, off-road running into your routine – even if it means getting a bit of mud on your go-faster stripe! Thank you for reading and I hope the information we provide will be helpful to you. 
<urn:uuid:5722a46c-78e3-4349-9712-cb66b4021f45>
CC-MAIN-2016-26
http://www.runaddicts.net/health-nutrition/a-runners-guide-to-shin-splints
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00104-ip-10-164-35-72.ec2.internal.warc.gz
en
0.932559
750
3.1875
3
This is the VOA Special English DEVELOPMENT REPORT. In nineteen-eighty-eight, world health leaders started a campaign to end the disease polio around the world. The World Health Organization, the United Nations Children's Fund, the United States Centers for Disease Control and the group Rotary International organized the campaign. It is called the Global Polio Eradication Initiative. In nineteen-eighty-eight, officials estimated three-hundred-fifty-thousand children around the world had polio. Recently, the W-H-O reported only five-hundred-thirty-seven new cases of polio in ten countries last year. This is the lowest rate of polio in history. It is also a sign that the campaign to end the disease has been almost a complete success. Polio is an infectious disease caused by a virus. It can affect people at any age. But polio usually affects children under age three. The virus enters through the mouth and then grows inside the throat and intestines. Signs of polio include a high body temperature, stomach sickness, and pain in the head and neck. Once the poliovirus becomes established in the intestines, it can spread to the blood and nervous system. As a result, victims of polio often become unable to move their bodies. This paralysis is almost always permanent. In very serious cases, the paralysis can lead to death because victims are not able to breathe. There is no cure for polio, so the best treatment is prevention. A few drops of a powerful vaccine medicine will protect a child for life. The vaccine must be given over several years to be fully effective. Last year, international health groups gave the vaccine to more than five-hundred-seventy-five-million children in ninety-four countries. That vaccine effort is continuing. The W-H-O wants to stop the spread of polio by the end of this year. The countries with the highest rates of polio are India, Pakistan, Afghanistan, Nigeria and Niger. Countries with lower rates of polio are Angola, Sudan, Somalia, Ethiopia, and Egypt. However, efforts to finally end the disease are being threatened by conflicts in several parts of the world. In Angola, for example, civil war has prevented vaccine medicine from reaching children. If the campaign succeeds, polio will become the second disease in history to be ended by a medical campaign. The first disease that was ended around the world was smallpox. This VOA Special English DEVELOPMENT REPORT was written by Jill Moss.
<urn:uuid:af989ec1-3a18-49db-b1d0-9b664a3ee95b>
CC-MAIN-2016-26
http://www.manythings.org/voa/medical/5022.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00136-ip-10-164-35-72.ec2.internal.warc.gz
en
0.960772
518
3.75
4
Media affect the evolution of knowledge in society. A suitable hypertext publishing medium can speed the evolution of knowledge by aiding the expression, transmission, and evaluation of ideas. If one aims, not to compete with the popular press, but to supplement journals and conferences, then the problems of hypertext publishing seem soluble in the near term. The direct benefits of using a hypertext publishing medium should bring emergent benefits, helping to form intellectual communities, to build consensus, and to extend the range and efficiency of intellectual effort. These benefits seem numerous, deep, and substantial, but are hard to quantify. Nonetheless, rough estimates of benefits suggest that development of an adequate hypertext publishing medium should be regarded as a goal of first-rank importance. From Social Intelligence, Vol. 1, No. 2, pp. 87-120 (1991); an edited version of a paper originally submitted to the Hypertext 87 conference. Knowledge is valuable and grows by an evolutionary process. To gain valuable knowledge more rapidly, we must help it evolve more rapidly. Evolution proceeds by the variation and selection of replicators. In the evolution of life, the replicators are genes; they vary through mutation and sexual recombination and are selected through differential reproductive success. In the evolution of knowledge, the replicators are ideas; they vary through human imagination and confusion and are likewise selected through differential reproductive success - that is, success in being adopted by new minds. (These ideas are memes, in Richard Dawkins' terminology.) Evolutionary epistemology maintains that knowledge grows through evolution. Animals - and even plants - can be said to know of certain regularities in their environments; this knowledge, embodied genetically, certainly evolved. Like genes, folk traditions are passed on from generation to generation; surviving traditions tend to embody knowledge that aids survival. Karl Popper describes science in evolutionary terms, as a process of conjecture and refutation, that is, of variation and selection. The scientific community evolves knowledge with unusual effectiveness because it has evolved traditions and institutions that foster the effective replication, variation, and selection of ideas. Teaching, conferences, and journals replicate ideas; the lure of recognition helps bring forth new ideas; peer review, refereeing, calculation, and direct experiment all help select ideas for acceptance or rejection. Every community evolves ideas, but science is distinguished by unusually rigorous and reality-based mechanisms for selection - by the nature of its critical discussion. To improve critical discussion and the evolution of knowledge, we can seek to improve the variation, replication, and selection of ideas. To aid variation, we can seek to increase the ease and expressiveness of communication. To aid replication, we can seek to speed distribution, to improve indexing, and to ensure that information, once distributed, endures. To aid selection, we can seek to increase the ease, speed, and effectiveness of evaluation and filtering. The nature of media affects each of these processes, for better or worse. The nature of a medium can clearly affect critical discussion and hence the evolution of knowledge. Consider how the lack of modern print media would hinder the process: Imagine research and public debate in a world where all publications took ten years to appear, or had to contain at least a million words apiece. 
Or imagine a world that never developed the research library, the subject index, or the citation. These differences would hinder the evolution of knowledge by hindering the expression, transmission, and evaluation of new ideas. If these changes would be for the worse, then a medium permitting faster publication of shorter works in accessible archives with better indexing and citation mechanisms should bring a change for the better. The naive idea that media are unimportant in evolving knowledge - that only minds matter - seems untenable. The effects of media on variation, replication and selection can be described in more familiar terms as effects on expression, on transmission, and on evaluation. These categories provide an analytical framework for examining how media affect critical discussion and the evolution of knowledge. The newest of major media is television, but it seems poorly suited for critical discussion. Its cost limits access, limiting the range of ideas expressed; political regulation worsens the problem. Its nature - a stream of ephemeral sounds and images pouring past on multiple channels - does not lend itself to the expression of complex, interconnected bodies of information. Transmission of new information is often very fast, but in a form awkward to file, index, and retrieve. Viewers cannot easily or effectively correct televised misinformation. It is hard to imagine researching anything by watching television, save television itself. Similar remarks apply to radio. The medium of paper publishing does better. It is relatively open, inexpensive, and expressive. Paper books and journals have been the medium of choice for expressing humanity's most complex ideas. Published items endure, and they can be copied, filed, and quoted. Paper books and journals, however, suffer from sluggish distribution and awkward access. Paper publishing's greatest weakness lies in evaluation. Here, refereed journals are best - but consider the delay between having a bad idea and receiving public criticism, that is, the cycle-time for public critical discussion: An author's (bad) idea leads to a write-up, then to submission, review, rewriting, resubmission, publication, and distribution: only then (after months' delay) does it become public. This then leads to reading by a critic, an idea for a refutation, write-up, submission, publication, and distribution: only then (after further months) has the idea received public criticism. This cycle can easily take a year or more, though the total thinking-time required may be only a matter of days. And even then the original publication exists in a thousand libraries, unchanged and unmarked, waiting to mislead future readers. The sluggishness of paper publishing forces heavy reliance on communication in small groups. There, cycles of expression, transmission, and evaluation are fast and flexible, but operate within the narrow bounds of the community. This limits both the criticism of bad ideas and the spread of good ones. Computer conferencing systems aim to combine the speed of electronic media with the text-handling abilities of paper media. They can combine some of the virtues of small-group interactions with those of wide distribution. The better computer conferencing systems have much in common with hypertext publishing systems, though all presently lack one or more essential characteristics. Since they are diverse and rapidly evolving, it seems better to describe what they might become than to try to take a snapshot of their present state. 
A hypertext publishing medium is a system in which readers can follow hypertext links across a broad and growing body of published works. Hypertext publishing therefore involves more than the publication of isolated hypertexts, such as HyperCard stacks. This paper follows Jeff Conklin in taking 'a facility for machine support of arbitrary cross-linking between items' as the primary criterion of hypertext. Hypertext publishing systems can provide an open, relatively inexpensive medium having the expressiveness of print augmented by links. Electronic publication of reference-links, indexes, and works will speed the transmission of ideas; criticism-links and filtering mechanisms will speed their evaluation. The nature and value of such systems is the topic of the balance of this paper. Randy Trigg has stated an ambitious long-term goal for computer media and publishing: - In our view, the logical and inevitable result will be the transfer of all such activities to the computer, transforming communication within the scientific community. All paper writing, critiquing, and refereeing will be performed online. Rather than having to track down little-known proceedings, users will find them stored in one large distributed computerized national paper network. New papers will be written using the network, often collaborated on by multiple authors, and submitted to online electronic journals. In spirit, this embraces a broader goal: transforming communication within the community of serious thinkers, including those outside the scientific community. It also embraces a narrower goal: transforming communication within smaller communities which still must use paper media to publicize their results. None of these goals entail competing with local newspapers, glossy magazines, or popular books; they aim only at providing better tools for communities of knowledge workers. Kinds of hypertext With these goals in mind, it may help to distinguish among several sorts of hypertext. Full vs. semi-hypertext: Full hypertext supports links, which can be followed in both directions; semi-hypertext supports only pointers or references, which can be followed in only one direction. As we shall see, true links are of great value to critical discussion, and hence to the evolution of knowledge. Fine-grained vs. coarse-grained hypertext: This embraces two issues. First, can one efficiently publish short works, such as brief comments on other works? Second, can a critic link to paragraphs, sentences, words, and links - or only to author-defined chunks of text? Fine-grained linking has value chiefly in a critical context: given fine-grained publishing, authors can structure their work to match their ideas, but critics will often want to pick nits or blast small, vital holes in parts of an author's structure - parts that may not be separate objects. To do so neatly requires fine-grained linking. Public vs. private hypertext: A public hypertext system will be a hypertext publishing system - if it is any good. A public system must be open to an indefinitely large community, scalable to large sizes, and distributed both geographically and organizationally; no central organization can control access or content. Closed or centrally-controlled systems are effectively private. Public systems will aid public discussion. Filtered vs. bare hypertext: A system that shows users all local links (no matter how numerous or irrelevant) is bare hypertext. 
A system that enables users to automatically display some links and hide others (based on user-selected criteria) is filtered hypertext. This implies support for what may be termed social software, including voting and evaluation schemes that provide criteria for later filtering. 'Hypertext publishing': This paper will use the terms hypertext publishing and hypertext medium as shorthand for filtered, fine-grained, full-hypertext publishing systems. The lack of any of these characteristics would cripple the case made here for the value of hypertext in evolving knowledge. Lack of fine-grained linking would do injury; lack of any other characteristic would be grievous or fatal. Most important is that the system be public: the difference between using a small, private system and using a large, public system will be like the difference between using a typewriter and filing cabinet and using a publisher and a major library. To support the evolution of knowledge effectively, a hypertext publishing medium must meet a variety of conditions. Nelson [6,7] and Hanson have specified some of them; the following describes an overlapping set and relates it to the evolution of knowledge. Several conditions are included because they conflict with common practice in computer systems administration, yet seem necessary for a functioning publishing system. Must support effective criticism Hypertext publishing must support links across a distributed network of machines, and these links must be visible regardless of the wishes of the linked-to author. The resulting medium can greatly enhance the effectiveness of critical discussion. Since this conclusion is pivotal to the argument of this paper, it deserves detailed consideration. Consider how the critical process works in paper text, the current medium of choice, and how it may be expected to work in hypertext: In each case, we start with a published paper making a plausible statement on an important issue - but a statement that happens to be wrong. Imagine the results in the medium of paper text and in hypertext. In both, some readers see that the statement is wrong. In both, a few know how to say why, clearly and persuasively. Then the cases diverge. Faced with a paper publication, these critics may (1) fume, (2) complain to an officemate or spouse, (3) scribble a cryptic note in the margin, or (4) write a critical letter that may (5) eventually be published in a subsequent issue. Steps (1-3) contribute little to critical discussion in society: they fail to reach a typical reader of the offending paper and leave no public record. Step (4) is an expensive gamble in time and effort: it demands not only the effort of handling paper and addressing an envelope, but that of describing the context, specifying the objectionable points, and stating what may (to the critic) seem a stale truism that everyone should know already. Depending on editorial whim, step (5) then may or may not result. Even at best, readers won't see the critical letter until weeks or months after they have read and absorbed the offending paper. In a hypertext publishing medium, critics can be more effective for less effort. Those who wish to can write a critical note and publish it immediately. They can avoid handling papers and envelopes because the tools for writing will be electronic (and at hand). They can avoid describing the context and the objectionable points because they can link directly to both. 
They can quote a favorite statement of the truism by linking to it, rather than restating it; if its relevance is clear enough, they needn't even write an explanatory note. And not only is all this easier than in paper text, but the reward is greater: publication is assured and prompt, and links will show the criticism to readers while they are reading the erroneous document, rather than months later. In short, criticism will be easier, faster, and far more effective; as a consequence, it will also be more abundant. Abundant, effective criticism will decrease the amount of misinformation in circulation (thereby decreasing the generation of further misinformation). Abundant, effective criticism of criticism will improve its quality as well. Reflection on the ramifying consequences of this suggests that the improvement in the overall quality of critical discussion could be dramatic. Must serve as a free press To maximize the effectiveness of criticism, a hypertext publishing system must serve as a genuine free press. In addition to being scalable, open, and having diverse ownership, it should allow anonymous reading (and perhaps authoring under partially-protected pseudonyms). These conditions all facilitate broad participation with a minimum of constraints, aiding expression and criticism. As Ithiel de Sola Pool notes, in the U.S., restrictions on free speech in new media have typically stemmed from their identification as tools of commerce, rather than as forms of speech or publication. To reduce the chance of bad legal decisions regarding First Amendment rights in hypertext publishing, we should recognize that the participants are authors, publishers, libraries, and readers; we should avoid commercial terms such as information providers, vendors, and buyers. Must handle machine-use charges To have a free press, it seems that one must charge for machine use. Computer time and storage space have become cheap and abundant, but not free and unlimited. Even cheap and abundant resources must be rationed - imagine a hacker deciding to store the integers from one to infinity on a 'free' system. The choice is not whether to ration, but how. One can ration machine resources by reserving them for free use by a small, subsidized elite that is implicitly subject to strong social controls: this is a solution used by institutions on the ARPANET. One can ration storage space by having a privileged editor delete authors' material: this is a solution used by many computer conferences and bulletin boards. One can ration by imposing wasteful costs on people, making them wait in lines long enough to cut demand to match supply. Or, one can charge what the service costs, so that additional users will pay for additional machines, allowing indefinite expansion and access without editing or discrimination. Charging is the solution that has made on-line services available to high-school kids and retired farmers. It is worth noticing just how low those charges can be. The cost of long-term storage of data on a spinning disk drive is now in the range of cents per kilobyte - this makes text cheaper to store than to write, even if one types at full speed without thinking and values one's time at minimum wage. The cost of an hour's rental of a processor and a megabyte of RAM is again a fraction of minimum wage. (Telecommunications is a greater expense, but its charges are harder to fudge.) In short, the main cost of using computers (telecommunications aside) is already the value of the time one spends. 
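A rough back-of-the-envelope check of the storage-versus-writing claim may help. The figures below (typing speed, word length, and the late-1980s US minimum wage) are my own assumptions for illustration, not numbers from the paper:

    # All figures are illustrative assumptions, roughly matching the paper's era.
    typing_speed_wpm = 60        # fast, non-stop typing, in words per minute
    chars_per_word = 6           # an average word plus a trailing space
    wage_per_hour = 3.35         # US federal minimum wage, late 1980s, in dollars

    chars_per_hour = typing_speed_wpm * chars_per_word * 60   # 21,600 characters
    writing_cost_per_kb = wage_per_hour / (chars_per_hour / 1000)

    print(f"writing costs ~{100 * writing_cost_per_kb:.0f} cents per kilobyte")
    # ~16 cents/KB to write, versus storage "in the range of cents per kilobyte":
    # even at minimum wage, text is indeed cheaper to store than to write.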
On the whole, charging will increase openness and convenience, as it does in the free-press system of conventional publishing. Must handle royalties To have the familiar incentives of a free press, hypertext publishing must handle royalties. Royalties can eventually enable people to make a living as writers, and will encourage the production of boring but valuable works, such as indexes. The experience of conventional publishing suggests that royalties will be inexpensive for readers: if a hardcover book costs twenty dollars and takes six hours to read, typical author's royalties amount to roughly fifty cents per reading-hour. Paperback royalties and magazine-writer's earnings are less. Must support flexible filtering An open publishing medium with links presents a major problem: garbage. If anyone can comment on anything, important works will become targets for hundreds or thousands of links, most bearing comments that readers will regard as worthless or redundant. A bare hypertext system would become useless precisely where its content is most interesting. To deal with this problem, authors must have exclusive rights to unique names, so readers can use those names as indicators of quality. Readers must be able to rate what they read, so that their judgments can aid later readers' choices. Readers must be able to use automatic filters (configured to match their preferences) to sift sets of links and choose which are worth displaying. Making it easy for readers to send each other pointers to documents would aid personal recommendation. Further, readers should be able to attach triggers to items - for example, a trigger that sends a message whenever a (highly-rated) item appears in a place of special interest. This could dramatically reduce the effort of scanning and re-scanning the key writings in a field to find links to relevant advances. Without such mechanisms, critical discussion would choke on masses of low-quality material. With them, as we shall see, effective processes seem possible. As important as functions are inabilities - in some ways, they are more important, because they are harder to add as afterthoughts. The above goals imply that no one should be able to: - retract or alter publications, save by annotation - hide published comments on a piece of work - read works from libraries without paying royalties - monitor who is reading published documents - trace pseudonymous authors without a warrant - publish under another's unique name or pseudonym Original web version prepared by Russell Whitaker.
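The flexible-filtering requirement lends itself to a small illustration. The Python sketch below is one possible reading of it, not a design from the paper: the rating scale, the threshold rule, and the treatment of unrated links are all invented for the example.

    from dataclasses import dataclass, field

    @dataclass
    class Link:
        # A published link plus the reader ratings it has accumulated so far.
        target: str
        author: str
        ratings: list = field(default_factory=list)   # reader scores, say 1-5

        def mean_rating(self):
            return sum(self.ratings) / len(self.ratings) if self.ratings else None

    def filter_links(links, min_mean=3.5, trusted_authors=()):
        # Show a link if its average rating clears the reader's threshold, if
        # its author is one the reader trusts, or if it is still unrated (so
        # that fresh criticism is hidden only after readers judge it poor).
        shown = []
        for link in links:
            mean = link.mean_rating()
            if link.author in trusted_authors or mean is None or mean >= min_mean:
                shown.append(link)
        return shown

    links = [
        Link("rebuttal-17", "critic_a", [5, 4, 5]),
        Link("me-too-note", "anon_9", [1, 2, 1]),
        Link("fresh-comment", "critic_b"),            # no ratings yet
    ]
    for link in filter_links(links, trusted_authors={"critic_b"}):
        print(link.target)                            # rebuttal-17, fresh-comment

Note that such filtering only hides links from one reader's view; nothing is deleted or retracted, which keeps the scheme consistent with the inabilities listed above.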
<urn:uuid:43379505-a5bf-4cac-a60e-3294c1fe2b86>
CC-MAIN-2016-26
http://e-drexler.com/d/06/00/Hypertext/HPEK1.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00166-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92573
3,960
3.375
3
There was no Tech Universe on Monday 27 January 2014. Tech Universe: Tuesday 28 January 2014 - HOUSE IN PRINT: Imagine you’re having a new house built. Workers prepare the building site, then a 6 metre tall crane-like gantry is brought in and installed on rails either side of the house. The machine rolls back and forth extruding fast-drying concrete and building up your house layer by layer. In 24 hours it’s done, including conduits for electrical, plumbing and air-conditioning, and then the gantry is removed. The machine can create a 230 square metre house over a couple of working days. This 3D printer is being developed at the University of Southern California and uses contour crafting, a method of building by layering. The technology could be used for many purposes, including emergency housing and building habitats on other planets or the Moon. Just how much concrete is there on the Moon? NDTV. - BOUNCING BALLS OF LIGHT: It takes a lot to put a mirror in space, so they’re necessarily smaller than some astronomers would like. Scientists at the Swiss Federal Institute of Technology are exploring an idea that could create a huge mirror in space using lasers. They used a single laser to trap polystyrene beads 150 micrometres across against a sheet of glass. Because the beads were grouped together the light didn’t bounce off in all directions but instead created a flat reflective surface that acted exactly like a mirror. The researchers hope that in future a mirror 35 metres across yet weighing only 100 grams could be possible, but acknowledge there are quite a few problems to solve first. And as for the notion of releasing polystyrene beads in space … New Scientist. - FIRE THE MICROPARTICLES: After a heart attack, inflammatory cells may turn up and damage the muscle tissue. Researchers at the University of Sydney found they could prevent major damage with an injection of microparticles less than 24 hours after the heart attack. What they injected were balls of a biodegradable compound, poly lactic-co-glycolic acid, 200 times smaller than the thickness of a human hair. The microparticles are picked up by the inflammatory cells and diverted to waste disposal systems and to the spleen. The microparticles could also help reduce inflammatory damage with problems like multiple sclerosis, inflammatory bowel disease, peritonitis, viral inflammation of the brain and kidney transplant. Clinical trials should begin within a couple of years. That’s clever: distracting the inflammatory cells on their way to create mayhem. University of Sydney. - SWEET EYES: Using miniature electronics embedded in a contact lens, researchers at Google[x] hope to change how people with diabetes monitor their blood sugar. They’ve developed a lens that has a tiny wireless chip and a miniaturised glucose sensor embedded between two layers of soft contact lens material. Prototypes may be able to generate a reading once per second. The developers hope others will join them to take the prototype and develop apps and working lenses to change lives. OK, but somehow it will need to be charged up. Google Blog. - POWER STRIP: Researchers at the University of Illinois created piezoelectric strips that generate 0.2 microwatts per square centimetre of electricity when attached to a beating heart in animals roughly the same size as humans. That’s enough energy to power a pacemaker. The lead zirconate titanate on a flexible silicone base conforms to the changing shape of a moving organ. 
Having demonstrated that the strips can successfully generate power, the researchers now need to test what happens when the strips stay inside the body for a long time, perhaps years. There’s a start to a wired body. New Scientist. Tech Universe: Wednesday 29 January 2014 - NOT THE EDIBLE KIND OF SPAM: You might not be too surprised if a friend’s computer were compromised and used to send out spam, but what if you heard it was their smart TV or fridge that did it? Earlier this year a spam attack sent out around 750,000 messages, of which 25% didn’t pass through laptops, desktops or smartphones. Instead, kitchen appliances, home media systems and web-connected TVs were infected by malware and used to send out spam. Many such devices have poor security, are poorly configured or use default passwords, so they can be compromised by smart spammers. Oh great: now we’ll have to set up, remember and use passwords for all our appliances too? BBC. - HEAT TO LIGHT: Conventional photovoltaic cells collect energy directly from some wavelengths of sunshine. Researchers at MIT, though, believe photovoltaic cells could be much more efficient and are working on solar thermophotovoltaic cells. An outer array of multiwalled carbon nanotubes very efficiently absorbs a broad spectrum of sunlight and turns it to heat. Bonded to that array is a layer of photonic crystal which collects the heat and glows with infrared light that can be collected by a conventional photovoltaic cell. That whole process allows the solar panel to collect energy from wavelengths of light that ordinarily go to waste, improving performance. Hey, if the sun’s shining it’s only fair to make the most of it. MIT News. - WALK SOFTLY: Some people with neuromuscular disorders of the foot and ankle must wear a brace to help them walk, but over time their muscles can atrophy rather than being simply supported. A rigid exoskeleton may help but also restricts the motion of the foot. Researchers at Carnegie Mellon are working on a soft orthotic device with artificial tendons and pneumatic artificial muscles. Because it’s soft, it’s harder to control, so it uses a touch-sensitive artificial skin made of rubber sheets whose microchannels are filled with a liquid metal alloy. Stretching or pressing the sheet causes changes in the electrical resistance of the alloy. The device needs more development before it can be tested on patients though. For one thing, its artificial muscles are very bulky. Carnegie Mellon University. - AT A STRETCH: Sensors to measure strain, pressure, human touch and bioelectronic signals such as electrocardiograms are often somewhat fragile: try bending or stretching them and they’ll break. That limits their usefulness. Scientists at North Carolina State University took an insulating material and screen printed silver nanowires onto it to create highly conductive and elastic sensors. The sensors respond in only 40 milliseconds, so they can be used to monitor strain, pressure and finger touch in real time. As the sensors can be stretched to 150% or more of their original length without losing functionality, they could be useful in controlling robotic or prosthetic devices. No word on how often the sensors can be stretched. North Carolina State University. - PARTIAL PRINTS: It can be annoying to print an entire page when all you want is an address or coupon. The tiny 220 gram Cocodori prints only what you’ve selected on screen onto 75mm wide roll paper. 
Two types of paper are available: a memo roll suitable for printing coupons and a Fusen type that is slightly sticky like a Post-it. But does it connect to a smartphone or tablet or only a PC? Akihabara News. Tech Universe: Thursday 30 January 2014 - BREATHE NORMALLY: To use a standard snorkel mask you need to breathe only through your mouth, which may not come easily to nose-breathers. The Easybreath mask is a full-face snorkel mask that offers the wearer an unobstructed 180 degree field of vision, and uses a double air-flow system to prevent fogging. The wearer can breathe normally inside the mask, while a special mechanism plugs the top of the snorkel tube if it goes under water. It sounds like the new standard to meet. Tribord. - FROZEN RABBIT: China’s Jade Rabbit Lunar Rover landed in mid-December 2013 for a 3 month mission of geological surveys and astronomical observations. Unfortunately it has now suffered a mechanical control abnormality that may prevent it from closing its solar panels for the upcoming 2 week lunar night. The lunar daytime temperature can reach 100 degrees Celsius, while at night it plunges to minus 180C. If the rover can’t close the panels, vital internal electrical components may freeze and stop working even after the rover wakes up again. It sure is a harsh environment up there. South China Morning Post. - A CLEAR BENEFIT: SolTech roof panels collect heat from the sun, but they aren’t standard solar panels. Instead they’re tiles made of clear glass laid over a black nylon canvas that absorbs the sun’s rays. Below that layer of canvas are columns of air that absorb the heat and in turn warm water that is connected to the house’s heating system via an accumulator. The system generates about 350 kWh of heat per square metre. A glass roof to catch the sun: a simple but clever idea. InHabitat. - IF THE SUIT FITS: Buying clothes online can be a risky business: should you choose the Small or Medium size, or perhaps the Large, and would you like a tight or loose fit? Fits.me is a virtual fitting room that aims to help online shoppers try on clothes before they buy. Customers take a photo of themselves, then upload it to the site. They tell the computer where their hands and feet are and provide information about their height, weight, age and gender. A server cleans up the photo to remove the background and works out the buyer’s body shape. Using data provided by the retailer, the software then recommends the correct size for the shopper and shows the garment on a mannequin. That could boost online sales of clothing enormously. BBC. - SMART PHONE, CLEAN PHONE: The Gorilla Glass in your smartphone already resists cracks and scratches, but in future it will kill bacteria too. Corning announced they will add silver ions to the mix that creates Gorilla Glass. Since silver has natural antimicrobial properties, that should help keep the nasties that might accumulate on your phone at bay. Washing your hands could help too. A New Domain. Tech Universe: Friday 31 January 2014 - GAME, SET, DATA MATCH: The Babolat Play Pure Drive is a flash name for a tennis racquet, but it does a few interesting things apart from allowing you to hit the ball. The handle of the racquet includes sensors that detect string vibration and movement and analyse your game. The racquet connects to a smartphone via Bluetooth or a computer via USB. The racquet counts swings such as forehand and backhand, the spin you put on the ball and other features of your game. 
The rules of the International Tennis Federation allow for the use of Player Analysis Technology like this racquet in games, but players may only access the data once the match is over. The next problem, of course, is sporting espionage, where a competitor is able to spy on a player’s data and use it to their advantage. How about using this to create virtual tennis matches where players don’t even need to be in the same country? BBC. - FIT FOR WORK: Data shows that in the US, adults may spend up to 11 hours per day sitting while they work on a computer or watch TV. They are also likely to add around 1 kg of weight each year. More exercise would help stop that weight gain. Researchers at Penn State University had test subjects use a compact elliptical device to increase physical activity while sitting in a standard office chair. The device is low cost, quiet and takes only a small amount of space. They found that the majority of the participants could expend enough energy in one hour a day to prevent weight gain. Add a little generator and perhaps you could pedal to charge your phone too. Penn State. - SOMETHING IN THE AIR: Firefighters have a challenging job that could be helped with an accurate view of a fire, and that’s where drones come in. Dubai Civil Defence aim to use 15 quadcopters to patrol high-risk areas, such as industrial zones, to monitor and record fires. The drones can be deployed from patrol bikes, and start imaging a fire while the firefighters are still on their way. Flying in smoke and heat will be challenging for the little robots. The National. - SOMETHING’S AFOOT: Swedish researchers have developed a system to help track firefighters as they move around a burning building. Sensors inside the boot include an accelerometer and gyroscope, along with a processor. Data goes to a wireless module on the shoulder and then on to operational command. In practice the system worked even when firefighters were 25 metres below ground. Precise information about locations and movements helps emergency coordinators ensure that firefighters remain effective and safe in extremely dangerous conditions. The current system puts sensors in the heel of a boot, but further development aims to use them in an insole that would allow more flexibility and more uses. That wireless module on the shoulder seems to be a point of weakness though. KTH Royal Institute of Technology. - LIFT THE GAME: What does the lift in your building know about you? In the Microsoft Research Centre, a smart lift can figure out where you’re going without prior programming or facial recognition. Instead the lift studies the motions of people in hallways and learns that certain types of people go to certain places at certain times of the day. After 3 months of training, the lift correctly intuited the destinations of its passengers in a trial. The developers say the system could be made even more accurate with the addition of more sensors. And when it gets it wrong? Would you like to start your work day fighting with the lift? io9.
<urn:uuid:ef31f530-e7e6-4ffb-800b-78c198921c8f>
CC-MAIN-2016-26
http://knowit.co.nz/2014/02/27-to-31-january-2014-tech-universe-digest
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00158-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928369
2,860
3.046875
3
Financial statements reveal a lot about a company's financial health. Different types of companies have different types of financial statements. If you are interested in analyzing the balance sheets of different types of companies, you need to understand the key differences. For example, merchandising companies and service companies share the same balance sheet format. However, there are some important differences in the types of accounts listed on each. Merchandising companies deal with the resale of items. Typically, merchandising companies are referred to as retailers or wholesalers. Wholesale companies sell products to retailers. Retailers, in turn, sell the product to the end consumer (the customer) at a higher price than they paid when they purchased it. Merchandising companies usually have two types of expenses -- expenses related to the products they are selling, called cost of goods sold, and expenses related to the day-to-day operations of the business. The latter would include rent, utilities, office supplies and staff salaries. Service companies also deal in products. However, their products are usually intangible. Service companies provide services for their customers. This type of company includes law firms, accounting firms, salons and spas, among others. For example, a service product is a tax return prepared by an accounting firm. A product for a salon could include a haircut or manicure. Typically, service companies have only expenses relating to the daily operations of the business. Balance Sheet Differences Because merchandising companies and service companies sell different things, they also have some balance sheet differences. The balance sheet lists all of the company's assets, liabilities and equity. Both types of company will still maintain these sections. However, there is one main difference in the accounts listed. This difference is found in the asset section. Merchandising companies will have an asset for inventory, whereas service companies do not. This is listed as a current asset. Other differences can include the types of accounts payable a merchandising company has. For example, a merchandising company may have a standing account payable to a wholesale company for the purchase of its products. A service company may have a service revenue receivable account for expected payment for services provided. Balance Sheet Similarities Even though merchandising companies and service companies have one main difference on their respective balance sheets, overall the balance sheets are nearly the same. The balance sheet is still divided into "assets" and "liabilities and equity." In the assets section, similar items remain, such as buildings, accumulated depreciation, vehicles and prepaid insurance. In the liabilities and equity section, many of the usual balance sheet items can be found on the balance sheets of both types of company. These can include notes payable, accounts payable and retained earnings.
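To make the one structural difference concrete, here is a minimal Python sketch; the account names echo those mentioned above, but every figure is invented purely for illustration and comes from no real company:

    # Hypothetical balance sheets; all figures are illustrative only.
    merchandiser = {
        "assets": {
            "cash": 20_000,
            "inventory": 35_000,        # present only on merchandising books
            "buildings": 100_000,
            "accumulated_depreciation": -15_000,
        },
        "liabilities": {"accounts_payable": 30_000, "notes_payable": 40_000},
        "equity": {"retained_earnings": 70_000},
    }

    service_firm = {
        "assets": {
            "cash": 25_000,
            "service_revenue_receivable": 10_000,  # expected payment for services
            "buildings": 100_000,
            "accumulated_depreciation": -15_000,
        },
        "liabilities": {"accounts_payable": 20_000, "notes_payable": 40_000},
        "equity": {"retained_earnings": 60_000},
    }

    def balances(company):
        # Check the accounting equation: assets = liabilities + equity.
        assets = sum(company["assets"].values())
        claims = sum(company["liabilities"].values()) + sum(company["equity"].values())
        return assets == claims

    for name, co in [("merchandiser", merchandiser), ("service firm", service_firm)]:
        print(name, "balances:", balances(co))   # True for both

Both hypothetical companies satisfy the accounting equation; the only structural difference is the inventory line on the merchandiser's books and the service revenue receivable on the service firm's.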
Empire State Building

The Empire State Building is a privately owned office building completed in 1931, during the Great Depression, as the tallest building in the world, a distinction it enjoyed for decades until it was surpassed by the World Trade Center in 1972. It is the tallest building in New York City. To this day, polls of Americans rank the Empire State Building as their favorite piece of American architecture. It is also considered the most fire-proof building in America. Architect William F. Lamb (1883 – 1952) was the principal designer.

The Empire State Building is the symbol of New York, just as the Eiffel Tower is to Paris and Big Ben is to London. It is still the biggest tourist attraction in the biggest tourist city in the world. Beautifully finished in the Art Deco style, the vertical lines of the structure give it the appearance of a soaring spire that rises one-fifth of a mile high. This famous skyscraper is also an engineering masterpiece supported by an elastic steel skeleton. The design was the finest work of architect William Lamb, chief designer for the firm of Shreve, Lamb and Harmon. The "tallest building" when it was finished in April 1931, it lost the title in the 1970s, first to the World Trade Center in New York and then to Chicago's Sears Tower. It remains, however, New York City's most widely recognized architectural symbol.

The Empire State Building contains steel reinforced by concrete, making it extremely fire-proof. Deputy Chief Vincent Dunn (ret.) of the Fire Department of New York (F.D.N.Y.) wrote:

- "The more mass the more fire resistance. The best fire resistive building in America is a concrete structure. The structures that limit and confine fires best, and suffer fewer collapses, are reinforced concrete pre-WWII buildings such as housing projects and older high-rise buildings like the Empire State Building. The more concrete, the more fire resistance; and the more concrete the less probability of total collapse. The evolution of high-rise construction can be seen by comparing the Empire State Building to the World Trade Center. The estimate is the ratio of concrete to steel in the Empire State Building is 60/40."

In 1945, a B-25 bomber crashed into the 79th floor of the Empire State Building in heavy fog, killing 14 persons and igniting a fire. But there was only $1 million in damage, and the structure of the building easily survived the impact and the resultant fire. Deputy Chief Dunn noted:

- "A plane that only weighed 10 tons struck the Empire State Building and the high-octane gasoline fire quickly flamed out after 35 minutes. When the firefighters walked up to the 79th floor most of the fire had dissipated. The Empire State Building in my opinion, and most fire chiefs in New York City, is the most fire safe building in America. I believe it would have not collapsed like the WTC towers. I believe the Empire State Building, and for that matter any other skeleton steel building in New York City, would have withstood the impact and fire of the terrorist's jet plane better than the WTC towers."

- Empire State Building Celebrates 75th Anniversary. The U.S. Department of State's Bureau of International Information Programs.
- Steel Building System. Extract from "Great Buildings: The Empire State Building," by Gini Holland. Wayland (Publishers) Limited, England, 1997.
Coastal and Marine Geology

Figure 1. Map of proposed pipelines, geographic, and geologic features mentioned in this report. LAX is the Los Angeles International Airport; AES is the name of a company.

This report examines the geologic hazards that could affect the OceanWay Secure Energy Project, a proposal by Woodside Natural Gas to build liquefied natural gas (LNG) facilities offshore of the Palos Verdes Peninsula in southern California. These facilities would include a Deepwater Port (DWP), including submersible buoys, manifolds, and risers, which would be situated in 3,000 feet of water about 23 miles offshore. The DWP would be connected to onland facilities by 35 miles of pipeline, which would come onshore near the Los Angeles International Airport. This report also examines the geologic hazards that could affect a proposed alternate location for the DWP that would be located approximately 20 miles offshore of Orange County, with the pipeline making landfall near the AES energy plant at Huntington Beach (note: AES is the company's name, not an acronym).

The U.S. Geological Survey (USGS) does not make any recommendation for or against the OceanWay Secure Energy Project. Instead, it is the USGS's goal to provide accurate and up-to-date geologic information for use by public policy officials involved in the approval process and for use by engineers in the design process if such a project does go forward.

As part of the Deepwater Port license application, Fugro West, Inc., has prepared a document discussing geologic hazards in the area, titled "Exhibit B Topic Report 6 – Geological Resources" (Fugro West, Inc., 2007); hereafter, this will be called the "Geological Resources document." Our report summarizes the regional geologic hazards, reviews the Geological Resources document, and makes recommendations for future work to more fully assess the geologic hazards.

The LNG facility is proposed to lie in a region of known geologic hazards, including the seismic, tsunami, and sediment transport hazards discussed below.

The regional geologic hazards and the Geological Resources document were reviewed by 27 scientists from the USGS and the California Geological Survey (CGS). Overall, the reviewers found that the Geological Resources document represents most of the geologic hazards in the project area. However, there are also some hazards not completely represented. We note that there are new consensus seismic hazard reports that have been released since the Geological Resources document was written. In some cases, as detailed throughout the rest of the report, additional scientific studies are recommended to improve geological hazard assessments. New scientific assessments based on our recommendations will not necessarily reveal increased hazard. For example, the Geological Resources document calculates greater seismic hazard in the project area than do the updated National Seismic Hazard Maps (Petersen and others, 2008). Conversely, we make recommendations for more detailed assessment of hazards posed by tsunamis and sediment transport events because we believe that the impact of such events may be underrepresented in the Geological Resources document. This enhanced scientific information would provide a better basis for evaluating this application and for the engineering design of the project should it go forward.

Download this report as a 66-page PDF file (of2008-1344.pdf; 1.2 MB). For questions about the content of this report, contact Stephanie Ross.
Honrath Receives 2006 Research Award

August 8, 2006— Richard Honrath, whose work has helped shed light on some of the fundamental processes behind atmospheric change, has been selected to receive Michigan Tech's 2006 Research Award.

Honrath, a professor of civil and environmental engineering, was most recently honored for his efforts to establish--and then give away--the PICO-NARE atmospheric research station in the Azores.

Honrath spearheaded the construction of PICO-NARE (short for Pico International Atmospheric Chemistry Observatory-North Atlantic Regional Experiment) in 2001. The sauna-sized observatory sits atop Mt. Pico, the highest point in the Azores and the only spot in the mid-Atlantic where the air is high enough to escape the effects of the ocean environment. This has made it a perfect place to measure pollution drifting from North America.

Since PICO-NARE was built, it has been generating a stream of data on atmospheric pollution, and in its five years of existence, two surprising discoveries have come to light. First, far more of the greenhouse gas and air pollutant ozone is being generated by the population centers of the Atlantic Seaboard than previously believed. And secondly, forest fires play a much greater role in atmospheric chemistry than anyone imagined.

Honrath and his colleagues measured large spikes in carbon monoxide drifting over the Azores and determined that they had originated in wildfires from as far west as Alaska and even Siberia. The fires had generated even more CO than activities such as industrial pollution and tailpipe emissions.

"You might think that it's amazing that natural things can pollute, but these wildfires might not be strictly natural," Honrath says. "Fires have increased, and this increase is probably related to global climate change, because the summers are hotter."

Research Chemist David Parrish, of the National Oceanic and Atmospheric Administration, which funded PICO-NARE, praised both the quality of Honrath's work and the extraordinary efforts he made to engage in it. "Richard literally is willing to go to the ends of the world to conduct the science that he views as important," Parrish said. Despite the obstacles posed by the PICO-NARE site--"no road access, no electrical power, no facilities of any kind"--Honrath created a remarkable observatory and equally remarkable research, he said. "Richard's data sets and resulting papers have had a strong impact on our understanding of long-range transport of air pollutants," said Parrish.

In June 2006, Honrath arranged to transfer ownership of PICO-NARE to the Azores, with the goal of establishing it as a permanent observatory. Scientists anticipate that the station will eventually be part of Global Atmosphere Watch, a United Nations-sponsored network of more than 20 observatories worldwide that provide high-quality atmospheric data.

Before his work on the Mt. Pico summit, Honrath focused his attention on an even colder, lonelier place than Mt. Pico: Greenland, where his work sparked an entirely new line of scientific inquiry. "Richard has made a very high impact on the field of polar atmospheric chemistry," said Eric Wolff, principal investigator for climate and chemistry at the British Antarctic Survey. "Indeed, I would say that he spawned an entire industry."

Honrath and his colleagues discovered that the snow in Greenland was giving off the key ingredients of smog: NO and NO2, collectively known as NOx. "We were measuring way higher than what we expected," Honrath recalled.
"We found two to three times as much NO and NO2 as we should have, and 10 times as much in the snow." After leaving the tailpipe, NOx turns into nitric acid in the atmosphere and then precipitates out in rain or snow. Thus, when Honrath and his team started studying Greenland's snow and ice, the scientific community expected the air to be free of NOx and the snow to contain nitric acid. As it turns out, they found NOx coming out of the snow instead. Sunlight reacts with nitric acid in snow and turns it back into NO and NO2, changing what was thought to be a permanent sink into a source of new NOx. And that doesn't happen only in Greenland. "We did a study in Ahmeek," Honrath said. In the Keweenaw as in Greenland, snow was giving off NOx too, by the same process. NOx is highly reactive in the atmosphere, so the team's discoveries have precipitated numerous other studies. "It gets chemistry going in the atmosphere that's the same as in a place where you have tailpipes. It's become a whole field that has taken off," Honrath said. As for receiving the Research Award, "It's really nice to be recognized for what you work hard on," said Honrath. "Everyone has been really appreciative of everything we've done, and it's great to know that people think you're doing something useful." Neil Hutzler, professor and chair of the Department of Civil and Environmental Engineering, nominated Honrath for the Research Award. "Richard has been a leader in promoting research at Michigan Tech, and the research he's done has given us international visibility," he said. "His work puts him in a pretty select group of scientists." "He also contributes to the department, and he provides leadership to new faculty, particularly regarding what it takes to conduct a successful research program," Hutzler added. "Plus he has an excellent group of students working with him, and he gives them quite a bit of responsibility. Richard is an example of a good researcher who is also a good teacher." Honrath said his success is due in part to the nature of Michigan Tech. "Working among departments is easier here than elsewhere," he said. "That's why the atmospheric sciences program is so successful." Honrath came to the university in 1992. Since then, he has received over $3.1 million in support of his atmospheric chemistry research program. He has authored or coauthored 36 journal publications, which have been cited over 1,000 times; in particular, his 2002 paper on the photochemistry of the Greenland snowpack was named a "hot new paper" by the research analysis firm Thomson Scientific. The Research Award carries a cash honorarium of $2,500. Michigan Technological University (www.mtu.edu) is a leading public research university developing new technologies and preparing students to create the future for a prosperous and sustainable world. Michigan Tech offers more than 120 undergraduate and graduate degree programs in engineering; forest resources; computing; technology; business; economics; natural, physical and environmental sciences; arts; humanities; and social sciences.
In the economic devastation that followed the Civil War, Greensborough's reputation as a city of comfortable hotel accommodations was badly bruised. Earlier fine hotels such as the Southern Hotel on West Street and the Planter Hotel on East Street were becoming worn and did not meet the improved standards set by wealthy northern industrial and port cities. Recognizing the diminished state of Gate City hostelries, Dr. D. W. C. Benbow erected the Benbow House one block south of Court House Square on South Street (South Elm Street). The hotel was built at a cost of $40,000, a lofty sum during the Reconstruction Era.

The Benbow opened with great fanfare in late May 1871, featuring a dedication speech by former New York Governor David Hill, with former North Carolina Governor Zebulon Vance being the first to register as a guest. The building was an architectural masterpiece in the French-inspired Second Empire style, featuring tall windows popularized in Paris and a diamond-patterned slate mansard roof enclosing the uppermost floor. Classical brick quoins at the corners of the building, balustrades and balconies centered above the front door, a wide modillion cornice, and a chorus line of dormer windows completed the continental European design.

When the hotel opened, an early guest from New York was disheartened to learn that no rooms had private baths. No worries! Upon discovering this inconvenience, he visited Odell Hardware on South Street and purchased a tin bathtub there for $2.50. He returned with his newly purchased tub and earned the honor of occupying Greensboro's first hotel room with a private bath!

Tragically, the Benbow House hotel burned (with no loss of life) at noon on June 17, 1899. By nightfall, only the tall brick walls remained of Greensboro's prominent hotel. The ruin was purchased by B. H. Merrimon, his wife Nellie S. Merrimon, and E. P. Wharton and rebuilt at a cost of $80,000. The facility was rechristened the Hotel Guilford. The new building was even more grand than the original, complete with Wilton carpets, marble floors, and, at last, private baths!

The Hotel Guilford was demolished around 1930 and replaced with a building occupied by F. W. Woolworth's, today under development as the International Civil Rights Museum on South Elm Street.
Elon Musk is the most interesting technology entrepreneur in the world right now. He's the CEO of electric car company Tesla, CEO of space exploration company SpaceX, and chairman of solar power installation company Solar City. In his spare time, he says he came up with a way to get people from San Francisco to Los Angeles in 30 minutes. He calls it the "Hyperloop," and he says he's going to reveal his plan for how it could work on August 12.

Last July, he outlined his vision for the Hyperloop: "This system I have in mind, how would you like something that can never crash, is immune to weather, it goes 3 or 4 times faster than the bullet train. It goes an average speed of twice what an aircraft would do. You would go from downtown LA to downtown San Francisco in under 30 minutes. It would cost you much less than an air ticket than any other mode of transport. I think we could actually make it self-powering if you put solar panels on it, you generate more power than you would consume in the system. There's a way to store the power so it would run 24/7 without using batteries. Yes, this is possible, absolutely."

While it sounds like a crazy dream, there's research to suggest it's a possibility. In 1972, Rand Corporation scientist R.M. Salter published a 17-page report detailing how something like the Hyperloop could be a reality. Salter called his transportation system the Very High Speed Transit, or VHST.

"The general principles are fairly straightforward: electromagnetically levitated and propelled cars in an evacuated tunnel," said Salter. The VHST, as envisioned by Salter, would consist of vacuum-sealed tubes buried underground. They would go from Los Angeles to Amarillo, Texas, to Chicago, to New York City. At each major stop there would be offshoots to take you to major cities.

"The VHST's 'tubecraft' ride on, and are driven by, electromagnetic waves much as a surfboard rides the ocean's wave," said Salter. "The EM waves are generated by pulsed, or by oscillating, currents in electrical conductors that form the roadbed structure in the evacuated tube way. Opposing magnetic fields in the vehicle are generated by means of a loop of superconducting cable carrying on the order of a million amperes of current."

The VHST could travel as fast as 14,000 miles per hour, allowing for a nonstop trip between Los Angeles and New York City in under 30 minutes.

Importantly, Salter didn't believe there was much holding back the VHST from a technological perspective: "The technical problems associated with the VHST development are manifold and difficult - but no scientific breakthroughs are required," said Salter at the time. He considered construction of the VHST a political problem. Digging giant tunnels underground isn't something that every town is willing to accept.

When Musk finally reveals his master plan for the Hyperloop, let's hope he includes a plan for how to get local, state, and federal government on board. Because without political support, the Hyperloop is destined to be as much a reality as the VHST.
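As a rough sanity check on Salter's 14,000 mph figure quoted above (this arithmetic is mine, not from his report, and it assumes a straight-line distance of roughly 2,450 miles between Los Angeles and New York):

$$
t = \frac{d}{v} \approx \frac{2450\ \text{miles}}{14{,}000\ \text{miles per hour}} \approx 0.175\ \text{hours} \approx 10.5\ \text{minutes}.
$$

Even with generous time budgeted for acceleration and deceleration at each end, that leaves plenty of room inside a 30-minute coast-to-coast trip.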
Benefits of the Death Penalty

Have you ever thought about whether the person next to you is a killer or a rapist? If he is, what would you want from the government if he had killed someone you know? He should receive the death penalty! Murderers and rapists should be punished for the crimes they have committed and should pay the price for their wrongdoing. Having the death penalty in our society is humane; it helps the overcrowding problem and gives relief to the families of victims who have had to go through an event such as murder.

First, people should know the history of the death penalty. The death penalty has a long history dating back to the 16th century BC. "In 16th Century BC Egypt, a death sentence was ordered for members of nobility, who were accused of magic. They were ordered to take their own life. The non-nobility was usually killed with an ax" (Burns). During the 18th century BC, King Hammurabi of Babylon had a code that arranged the death penalty for 25 different crimes, although murder was not one of them (Burns). The death penalty was thus already in use by the time of Jesus Christ.

In America, executions have been recorded from the 1600s to present times. From about 1620, executions per year in the US increased steadily until the 1930s; the number then dropped to zero in the early 1970s before rising again. Some US citizens argued that the death penalty was unconstitutional because they believed it was "cruel and unusual" punishment (Amnesty International). In the 1970s, executions per year fell to between zero and one, then started to rise again in the 1980s. In the year 2000, there were nearly one hundred executions in the US (News Batch).

On June 29, 1972, the death penalty was suspended because the existing laws were no longer considered valid. However, four years after this occurred, several cases came about in Georgia, Florida, and Texas in which lawyers sought the death penalty. These cases produced new laws in those states, and the Supreme Court later decided that the death penalty was constitutional under the Eighth Amendment (Amnesty International). The first legal executions in America came during the Revolutionary War against Great Britain, when British soldiers hanged Nathan Hale for espionage (Farrell). The reason I have included this history is to show that the death penalty has long been part of our legal tradition; if something has been working, why stop it?

Some people say that sending murderers to death row is inhumane because these people deserve a right to live. This is wrong, because they have given up their right to live through the horrible and heinous crimes they committed. There has also been the problem of overcrowding in prisons and jails. Some say that building more jails will solve this problem. It may help, but the death penalty effectively stops the drain on taxpayers' money used to house murderers. These murderers get three warm meals a day, do nothing all day, and have a place to sleep, all because taxpayers fund these facilities. Murderers on death row do not deserve a place to stay. They deserve to have their lives taken away from them because of the atrocious crimes they have committed.

The people on death row come from all races. The national death row population is 3,525, split between 3,477 men and 48 women. The ethnicity is much more varied: as of August 5, 2003, there are 1,610 whites, 1,490 blacks, 344 Latinos, 39 Native Americans, 41 Asians, and 1 unknown (death row statistics). The total number of executions since 1976 is 870, which seems like a lot, but in reality it is a small number compared to the 3,525 inmates still on death row (Farrell). Regardless of their race, those who committed murder should be executed. The statistics above show that people of any race can be put on death row, so there should be no problem with putting them to death.

Several countries use the death penalty, including China, Iraq, Iran, the U.S.A., and Saudi Arabia. In the United States, twelve states do not allow the death sentence: Michigan, Wisconsin, Maine, Minnesota, North Dakota, Hawaii, Alaska, Iowa, West Virginia, Massachusetts, Rhode Island, and Vermont. These states say it is inhumane to kill someone, but I believe that if a murderer kills someone, he should be killed as well. Less than one percent of murderers are sentenced to death, while only two percent of death row inmates are executed. This is relevant because keeping this many people on death row drains taxpayers' money. Today more than 75 inmates have sat on death row for more than 20 years. If an inmate has been on death row for over 20 years, then he deserves to die, because that person is draining the taxpayers' money. In May 2000, a study on the death penalty found that 65 percent of the US supports the death penalty (Farrell). With that many people supporting the death penalty, there should be no problem putting murderers to death, because the majority favors it.

Many people say that the death penalty does not even help because not enough people are being executed. One major way the death penalty helps is that it can bring relief to a family when someone is murdered and the convicted criminal is put to death. A perfect example was Timothy McVeigh, who was put to death in 2001 in the first federal execution since 1963 (CNN.com).

Some say the death penalty is good because the inmates who deserve to be killed should be killed. This is a circular argument, which is a logical fallacy. A circular argument is when someone reaches a conclusion by assuming it is true rather than proving it with facts; in other words, the argument chases its own tail. I believe that if people simply argue in circles, they will never gain ground in getting rid of the death penalty.

In the year 2002, at least 1,526 people were executed in 31 countries, and at least 3,248 people were sentenced to death in 67 countries. In addition, 81 percent of the executions took place in China, Iran, and the U.S.A. (Farrell). Those figures are for 2002 alone, and the number of people executed during that time was large. The death penalty is helping cut down the population of inmates on death row. There is no reason to believe that it is not helping: the numbers may not seem large at first, but the statistics show that they actually are (Justice For All).

The death penalty gives relief to the friends and families of murder victims. It helps solve the overcrowding problem, and the process is humane. Many people are losing their tax dollars to the government to pay for death row murderers, while these murderers should receive death instead. These murderers do not deserve to live with all of their expenses paid after committing those crimes. Now, why should anyone agree with abolishing the death penalty? They should not! The death penalty helps resolve many problems, such as overcrowding. The process is humane, and the persons who perform this task are not playing God; in the Bible, God has said that the people should uphold the law (Holy Bible). In the future, many problems could be resolved by keeping the death penalty rather than getting rid of it.

Works Cited

Amnesty International. "The Death Penalty." 22 Oct. 2003. http://web.amnesty.org/pages/deathpenalty_index_eng.
Burns, Kari Sable. "Death Penalty." KariSable.com. 27 Oct. 2003. http://www.karisable.com/crpundeath.htm.
CNN.com. "Timothy McVeigh Dead." 11 June 2001. 14 Nov. 2003. http://www.cnn.com/2001/LAW/06/11/mcveigh.01/index.html.
Death Penalty Information Center. "Death Penalty Info." 21 Oct. 2003. http://www.deathpenaltyinfo.org/.
Death Penalty Information Center. "The History of the Death Penalty." 21 Oct. 2003. http://www.deathpenaltyinfo.org/article.php?did=199&scid=15.
Farrell, Mike. "Death Penalty." Death Penalty Focus. 22 Oct. 2003. http://www.deathpenalty.org/facts/other/facts_statistics.shtml.
The Holy Bible.
Justice For All. "Pro-Death Penalty." 22 Oct. 2003. http://www.prodeathpenalty.com/Resources.htm.
News Batch. 15 Oct. 2003. 27 Oct. 2003. http://www.newsbatch.com/deathpenalty.htm.
I don't know of any other subject which is taught in such an anti-historical way as mathematics. Although mathematicians are often fairly scrupulous in giving credit to the original discoverers of theorems, they also are energetic in restating these theorems in terms of concepts which the original discoverers would have been completely unfamiliar with.

When Emil Artin taught Galois Theory, he did apparently discuss Galois's own approach. He tells an anecdote to the effect that he asked one of his classes how much of his book on the subject Galois himself would have recognized, and one of his students suggested that probably the title would have been the only recognizable thing in the whole book. And then another student said, "No, he probably would say, 'Okay, Galois, that's me, but who's this guy Theory?'"

Artin's teaching in this respect was exceptional. In general, the teaching of mathematics gives students little way of understanding where mathematical ideas have come from and what the original motivation for the development of various mathematical topics was. Graduate students learn all sorts of high-powered concepts and theorems about Banach spaces, for instance, before they ever have any idea of why mathematicians ever got interested in such spaces or what the theory they are learning is good for. (Many students never do learn this.) In my opinion this has a lot to do with the fact that today we see a splintering of mathematics into zillions of tiny little subspecialties, many of whose practitioners know almost nothing about any mathematics except their own little splinter.

I am not a historian of mathematics by any means. Here, I simply present a brief sketch of the development of modern Algebra (sometimes called Abstract Algebra) taken from the book by Bourbaki, Elements of the History of Mathematics (French title, Éléments d'Histoire des Mathématiques).

We generally don't teach students how revolutionary the axiomatic approach is. Typically, in an undergraduate course in Modern Algebra (what is often referred to as a Herstein-level course), we simply take the axiomatic approach for granted from Day One. This approach is so familiar and so comfortable to a contemporary mathematician that we seldom give much thought to how bewildering it is to students whose previous experience of mathematics has been limited to courses like calculus. (Sometimes students have seen a little of the axiomatic approach in Linear Algebra, but with very little explanation about why it was ever decided to deal with a seemingly concrete topic like vector spaces in such an abstract way.)

The axiomatic approach is not simply a matter of using axioms in mathematics. The use of axioms, after all, goes back as far as Euclid. But Euclid, before giving his axioms, starts out by defining the primitive notions of geometry. A point is defined as, roughly speaking, "That which has position but no size." And a line is defined as, "That which has length but no breadth." (I don't remember how Euclid goes about defining the concept of straightness. I don't think he defined a straight line as being the shortest distance between two points.)

The contemporary axiomatic approach, on the other hand, is basically the attitude that when we do mathematics, we don't need to know what the things we are working with are. We only need to know what the rules are. (So that in geometry, we don't need to know what a point or a line is. We only need to know the axioms.)
This is very different from the teaching of mathematics in grammar school and high school and college calculus courses. There, it is considered very important that students understand what numbers are (albeit in a way that to mathematicians is shockingly informal) and what addition, subtraction, and multiplication are, before learning the rules that enable one to actually do arithmetic. And it is very important to thoroughly master arithmetic before going on to represent it in symbolic form in high school algebra. And it is very important to be familiar with a number of specific examples of functions and to understand the concepts of differentiation and integration before going on to learn the rules which enable one to actually differentiate and integrate functions. (In fact, calculus teachers are often annoyed when students, inventing the axiomatic approach on their own, as it were, discover that it is not really necessary to understand the concepts in order to do the calculations.)

But a typical undergraduate course in Modern Algebra starts out saying something like, "A group consists of a set of elements which can be multiplied in such a way that the following three axioms are satisfied." (Four axioms if one includes closure, which was in fact the key axiom and to some extent was the only one in the original development of group theory, associativity and the existence of an identity element and inverses being taken as pretty much self-evident. The axioms themselves are written out at the end of this passage.) It is understandable that a student might ask in bewilderment, "But what are these elements? And how does this multiplication work?" And the answer given by the Axiomatic Approach is, "It doesn't matter. Only the rules matter."

This attitude, that it is possible to study things without knowing what one is talking about, is an incredible cognitive leap, and it is the real foundation (not the mathematical foundation, but the psychological foundation) of abstract mathematics.

Of course, in order to provide students with the sense that there is some tangible reality to what we are talking about, we immediately provide them with some familiar examples of groups (or rings, or whatever). And one strategy students might use when they can't cope with the level of abstraction they are given is to say, "Okay, when the professor says 'group,' I'm going to think he's talking about the integers. And when he says 'multiplication,' I'm going to think about addition." (I've used this strategy myself sometimes, when learning a new kind of mathematics.)

But this strategy doesn't work very well. It's misleading, because any particular example will have a number of special properties that will not be true of groups in general. (Addition of integers is commutative, for example, and the integers form a cyclic group.) And so if one needs to have concrete things to think about (and I think that almost all of us do), one needs to think not just in terms of one example, but in terms of a number of very dissimilar examples. And my own experience was that even after I got to be good at this sort of abstract thinking, I would still come to certain concepts (such as the concept of the free product of non-abelian groups, or that of the tensor product) that were so abstract and where it was so difficult to find any natural examples, that for quite a while I still found them very difficult to think about. But here the axiomatic approach can rescue you on a higher level.
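For reference, here are those group axioms (all four, counting closure) in one standard modern formulation. This is my paraphrase of the usual textbook statement, not a quotation from Bourbaki or from any historical source. A group is a set $G$ with a binary operation $\cdot$ such that:

$$
\begin{aligned}
&\text{(closure)} && a \cdot b \in G \quad \text{for all } a, b \in G;\\
&\text{(associativity)} && (a \cdot b) \cdot c = a \cdot (b \cdot c) \quad \text{for all } a, b, c \in G;\\
&\text{(identity)} && \text{there is an } e \in G \text{ with } e \cdot a = a \cdot e = a \text{ for all } a \in G;\\
&\text{(inverses)} && \text{for each } a \in G \text{ there is an } a^{-1} \in G \text{ with } a \cdot a^{-1} = a^{-1} \cdot a = e.
\end{aligned}
$$

Notice how completely the axioms are silent about what the elements are or how the multiplication is computed; that silence is the whole point.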
You don't really need to think about what a free product or a tensor product actually is ("what it looks like," in my words). You simply need to find a set of axioms that describe the way it behaves; the usual characterization of the tensor product along these lines is sketched below. (This has become pretty much the standard way of thinking about tensor products, and I always felt a sort of contempt for mathematicians who proved theorems about tensor products by starting with the construction.)

This is certainly one advantage of the axiomatic approach: that one can work with quite complicated objects (and most mathematical constructions, even the natural numbers, are actually quite complicated) without needing to think about what they "look like." But the primary advantage of the approach is that usually a single set of axioms will describe a very large number of vastly dissimilar mathematical systems, and so by starting from axioms, one can prove theorems that apply to a huge number of different things. (Most of us don't make a big point of using the axiomatic characterization of the real numbers, for instance, because the field of real numbers is the only mathematical thing to which that complete set of axioms applies.)

But how did this revolutionary new way of mathematical thinking come about? It came about, actually, in a very gradual and somewhat natural way. It came about because over the course of the 19th century, mathematicians started becoming more and more interested in a new kind of subject matter having to do with algebra, but not algebra in the sense of solving equations (although the interest in solving algebraic equations was certainly one of the roots of this new interest). Rather, this was algebra in more or less the sense we use the word today (but without thinking of it in abstract terms), namely the study of structures in which one could work in very much the same way that traditional algebra operates in the realm of rational numbers, real numbers, or complex numbers.

Some of these structures were: the complex numbers, the quaternions, various algebraic number rings (certain subrings of the complex numbers), in addition to the algebra of matrices developed by Sylvester and Cayley and the algebra of logic developed by Boole. In addition, there was the study of permutation groups, which was originally not thought of as being algebra at all, I believe, but where the basic concepts were developed by Legendre, Abel, and Galois as an approach to understanding the solution of algebraic equations.

All of these subjects were originally studied for very natural and practical reasons having to do with questions in geometry, analysis, number theory, and the theory of equations. What was new about all these subjects was the interest primarily in the structure as a whole, rather than in doing calculations within that structure. This was perhaps especially clear in the work of Legendre, Abel, and Galois on permutation groups, where what was important was the set of subgroups rather than the individual permutations.

Bourbaki identifies three main streams leading to the development of modern Algebra:
(1) The theory of algebraic numbers, developed by Gauss, Dedekind, Kronecker, and Hilbert.
(2) The theory of groups of permutations (and, later, groups of geometric transformations), where the work of Galois and Abel was fundamental.
(3) The development of linear algebra and hypercomplex systems.
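Returning for a moment to the tensor product mentioned above, here is the promised sketch of its behavioral characterization (stated in modern language, which of course none of the 19th-century algebraists would have used). For modules $M$ and $N$ over a commutative ring $R$, the tensor product is a module $M \otimes_R N$ together with a bilinear map $\varphi \colon M \times N \to M \otimes_R N$ such that

$$
\text{for every bilinear map } f \colon M \times N \to P \text{ there is a unique linear map } \tilde{f} \colon M \otimes_R N \to P \text{ with } f = \tilde{f} \circ \varphi.
$$

Everything one wants to prove about tensor products follows from this property alone; one never has to look at the construction.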
But as group theory was further developed by other mathematicians (Galois himself, of course, was killed in a duel, apparently because of his political activism, immediately after finishing his treatise), gradually it started becoming clear that the study of permutation groups actually had very little to do with permutations themselves. And Jordan in 1868 began the study of infinite groups, specifically groups consisting of transformations of geometric space. This study was continued by Felix Klein and Poincaré, and was especially encouraged by Felix Klein's Erlanger Program for geometry. (At this point, there were a number of different kinds of geometry, such as Euclidean geometry, non-Euclidean geometry, projective geometry, affine geometry, and differential geometry. Klein suggested that each particular form of geometry should be characterized as the study of those properties which are invariant under a particular group of transformations. For instance, Euclidean geometry consists of the study of those geometric properties which are not changed by rigid motions.) The concepts and theorems which had been developed for permutation groups applied just as well to these groups of transformations.

By the late 19th century, Cayley and Dedekind and many other mathematicians were becoming very aware that what was really relevant in group theory was the law of composition (multiplication) in a group and not the nature of the objects making up the group. But the importance of groups at this point still had to do with their concrete applications. Groups were still seen as consisting of operators of some sort, and Dedekind and Cayley stopped short of defining groups in an axiomatic way and seeing them as structures which were of interest for their own sake.

The theory of algebraic numbers was further developed by Dirichlet, Hermite, Kummer, Kronecker, and Dedekind. Kronecker and Dedekind used two different methods (which although very dissimilar are ultimately equivalent) to introduce certain "ideal numbers" into algebraic number rings to remedy the lack of unique factorization. Dedekind's method was the invention of what we today call, in an arbitrary ring, ideals. In his work, Dedekind basically established the foundations of modern commutative ring theory. However, the methods of Dedekind and Kronecker fell short of providing a proof of Fermat's Last Theorem, although they did enable proofs in many special cases.

The other main thread leading to modern commutative ring theory came from algebraic geometry, and I won't really discuss that here except to mention that mathematicians were becoming very aware that the algebra of functions defined on an algebraic curve or surface had a great deal in common with algebraic number rings. Here we see the importance of the fact that mathematicians working in what originally seemed very different specialties were familiar with each other's work and influenced by it. (There existed still a third major example of commutative rings, namely those consisting of functions defined by power series.)

Bourbaki identifies the 142-page article by Steinitz in 1910 titled The Algebraic Theory of Fields as having given birth to the modern concept of Algebra. (One can also note that much earlier, Peano, in 1888, gave the axiomatic definition of a real vector space and defined the concept of a linear transformation between vector spaces.)

The word "field" had first been used by Dedekind, whose concern was with certain fields contained within the complex numbers (algebraic number fields). And it was Dedekind and Hilbert who had first seen Galois Theory as a correspondence between subfields and subgroups of the Galois group. (Dedekind was the first to think of the Galois group as consisting of the automorphisms of the field extension rather than permutations of the roots of the polynomial in question.) Steinitz in his 1910 article developed the notions of prime field (by this time there had been a lot of work on the theory of finite fields), separable extension, and transcendence degree, and proved that every field has an algebraically closed extension. But what makes his article thoroughly modern is that instead of defining a field as a set of complex numbers or congruence classes or the like, Steinitz simply defined a field to be a structure consisting of a set of elements in which two operations are defined (to be referred to as addition and multiplication) satisfying a certain set of rules.

The concept of a ring was first used by Dedekind, who used the word "order" (or rather, of course, its German equivalent, "Ordnung"). The word "ring" (which is actually the same in German and in English) was introduced by Hilbert. The point is that in an algebraic number ring (or any finite integral extension of a base ring), if one looks at the powers of an element then one finds a point where subsequent powers can be expressed as linear combinations of the preceding ones. Thus the multiplication in a certain sense turns back on itself in a way that is somewhat like a geometric ring. But it was not until 1914 that the first paper appeared in which the general notion of a ring is defined axiomatically: "On Zero Divisors and Decomposition of Rings," by Fraenkel. Although this gave the general definition, the paper itself was concerned with commutative artinian local rings where the unique prime ideal is principal. In the same year, 1914, Hausdorff, in his Grundzüge der Mengenlehre, gave an axiomatic definition of general topology. Of course this was a time when algebraists, analysts, and topologists talked to each other, were interested in each other's work, were influenced by each other, and in many cases were actually the same individuals.

In 1878, Frobenius proved that the quaternions were the only possible (finite-dimensional) associative extension of the complex numbers in which division was possible and the only (finite-dimensional) non-commutative extension of the real numbers in which division was possible. This was independently proved two years later by C. S. Peirce. (Gauss had been convinced that the field of complex numbers was the only finite-dimensional commutative field extension of the real number system. This was subsequently proved by Weierstrass.) Later Cayley noted that there exists a set of two-by-two matrices satisfying the multiplication table of the quaternions. (The concept of a matrix is due to Sylvester, who introduced matrices as a shorthand for substitutions of variables, i.e. what we now call linear transformations.) But not until about 1870 was it noted, by the Americans B. Peirce and C. S. Peirce, that the set of square matrices of a given size form an algebraic system which permits addition, subtraction, and multiplication (i.e., in modern terminology, a ring). The term "an algebra" seems to have been used by the Americans and British in pretty much its modern sense, i.e.
a ring which is a finite-dimensional vector space over the complex numbers (or real numbers). On the other hand, the Germans generally preferred the term "hypercomplex system." Aside from Hamilton's quaternions, the main example before 1850 was Grassmann's "exterior algebras," but the analogy to the quaternions and other algebras was only seen much later. Other examples of algebras over the complex numbers were seen during the period 1850 to 1860, but the general study of algebras (and thus the roots of non-commutative ring theory) begins only in 1870 in the work of B. Peirce and C. S. Peirce, who introduced the concepts of idempotent and nilpotent elements and the decomposition of an idempotent element into a sum of orthogonal primitive idempotents. Cayley and Sylvester and other British and American mathematicians then started working on the problem of classifying algebras of small dimension over the complex numbers.

During this time, the development of Lie groups and algebras (which are non-associative) was proceeding, and some of the fundamental concepts in the theory of associative algebras (the concept of the radical, for instance) were developed first for Lie algebras. Another key source of ideas and examples was the concept of a group algebra, which had been essentially defined by Dedekind in 1896, in a letter to Frobenius. Dedekind was very clear on the relation of this to the general theory of algebras, although the theory of group representations as developed by Burnside and Schur (around 1905) did not at that time explicitly use ring-theoretic methods.

The concept of a simple algebra over the complex numbers had been defined in 1893 by the German mathematician T. Molien, who then proved the first version of the Wedderburn Theorem, i.e. that a simple algebra over the complex numbers is isomorphic to the ring of n by n matrices over the complex numbers. It was at this point that the concept of a two-sided ideal became current, and a lot of theorems were proved about them. But it is not clear from the Bourbaki survey whether the word "ideal" was originally used, and it is possible that the analogy to Dedekind's work on commutative rings was not immediately seen. The concept of a semi-simple algebra was introduced by Élie Cartan. (Unfortunately, I don't have a date.)

The development around 1900 of the theory of finite fields by the American mathematicians E. H. Moore and L. E. Dickson was what motivated the generalization of the theory of algebras to the case where the base field was unrestricted. Wedderburn, another American, in 1905 proved that every finite skew field (a.k.a. division algebra) is in fact commutative.

In 1903, in a memoir on the algebraic solution of differential equations, Poincaré had defined the concepts of left ideal and right ideal for an algebra. (As mentioned, two-sided ideals had been essentially known since Molien's paper in 1893.) In this memoir, Poincaré proves that the minimal left ideals in the ring of n by n matrices have dimension n. However, this result was not noticed by the algebraists. The notions of left and right ideals were rediscovered in 1907 by Wedderburn, who proved that the radical was the largest nilpotent left ideal and proved his well known "Wedderburn Theorem" (later generalized by Emil Artin), which states that every semi-simple algebra over an arbitrary base field is a direct product of matrix rings over skew fields.
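In modern notation (an anachronism, of course; this formulation postdates both Wedderburn's and Artin's original papers), the theorem says that such an algebra $A$ decomposes as

$$
A \;\cong\; M_{n_1}(D_1) \times M_{n_2}(D_2) \times \cdots \times M_{n_k}(D_k),
$$

where each $D_i$ is a skew field and $M_{n_i}(D_i)$ denotes the ring of $n_i \times n_i$ matrices over $D_i$. Molien's earlier result is the special case where the base field is the complex numbers, which forces every $D_i$ to be $\mathbb{C}$ itself.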
In 1920, Emmy Noether and W. Schmeidler used the concepts of left ideal and right ideal in a paper devoted to rings of differential operators. But otherwise, these concepts were ignored after Wedderburn's paper until 1927, when Emmy Noether and Brauer (and later A. A. Albert and Hasse) resumed the study of them. By 1934, the basic theory of semi-simple rings was essentially complete.

Bourbaki's summary statement is, "The axiomatization of algebra was begun by Dedekind and Hilbert, and then vigorously pursued by Steinitz (1910). It was then completed in the years following 1920 by Artin, Noether and their colleagues at Göttingen (Hasse, Krull, Schreier, van der Waerden). It was presented to the world in complete form by van der Waerden's book (1930)."

What we see from all this (at least in my view) is that the development of modern Algebra was never motivated by mathematicians seeking abstraction for its own sake. Instead, algebraists working on quite concrete problems were trying to invent tools that might help with their investigation of these problems, and slowly (very slowly, as we look at their work in retrospect) began to notice that the same logical patterns recurred over and over again in different examples.
Manufacturing is moving beyond assembly lines that make standardized products to customized equipment and parts. The demand for customization may best be met by the growing trend of additive manufacturing—and the next generation of engineers able to utilize the technology.

The Ex One Co., a national manufacturing organization, has donated a ProMetal RX-D 3-D printer to the Earl W. Brinkman Laboratory at Rochester Institute of Technology. The equipment will serve as a teaching tool for RIT's students and as a resource for campus researchers striving to further the breakthrough technology.

The 3-D printer is a key component of today's additive manufacturing technologies, which have evolved from rapid prototyping and other three-dimensional laser technologies. According to David Burns, president of Ex One Co., RIT is positioned to be a leader in the further development of additive manufacturing due to its digital printing, optical capture and micro-engineering capabilities. All the foundation, or predecessor, technologies needed to develop additive manufacturing techniques can be found at RIT, he says.

"RIT is a world leader in research in both optical capture and conversion to digital modeling. The necessary steps that precede additive manufacturing are the creation or capture of digital information. No matter how you get it, either capture it or create it, RIT is right on the leading edge of those technologies," says Burns, a member of the RIT Board of Trustees and former chief executive officer of Gleason Corp.

Early developers of additive manufacturing technologies used plastics and other composite materials to build prototypes of equipment such as automobile parts. Using computer-aided design technology, the objects are drawn, first through 3-D imaging and optical capture, then input into the 3-D printer. The machine prints the object layer upon layer. Today, some companies, including Ex One, have capabilities to develop products using glass or metals through the additive manufacturing process.

"Although we have had the ability to print plastic parts in the Brinkman Lab for some time now, this donation is very exciting in that it will allow us to directly print metallic components for the first time," says Denis Cormier, the Earl W. Brinkman Professor of Machining and Manufacturing in RIT's Kate Gleason College of Engineering.

Cormier is one of the foremost scholars in the area of additive manufacturing. He is working on a research project funded by the U.S. Department of Energy for "Science-based Nano-structure Design and Synthesis of Heterogeneous Functional Materials for Energy Systems." The research continues his work in rapid prototyping and additive manufacturing with a focus on printing nano-inks to produce energy devices such as fuel cells and batteries.

"Additive manufacturing, by its definition, means that you can make precisely what you want, when you want it, in the quantity that you want, and it gives manufacturers the opportunity to have each separate piece be uniquely configured," says Burns. "So with the culture of the U.S., where we are depending more and more on speed and customization, it is a remarkable field to study. At the end of the day, additive manufacturing may be about the re-emergence and survival of the manufacturing base in this country."
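The "layer upon layer" idea at the heart of this process is easy to illustrate in code. Here is a toy sketch (purely illustrative; the real slicing software that drives a binder-jetting printer like the ProMetal is far more sophisticated) showing how a solid shape reduces to a stack of printable cross-sections:

```python
import math

def slice_sphere(radius: float, layer_height: float):
    """Toy 'slicer': intersect a sphere with evenly spaced horizontal planes,
    returning one (height, cross-section radius) pair per printed layer."""
    layers = []
    z = -radius
    while z <= radius:
        # Radius of the circular cross-section where the plane at height z
        # cuts the sphere (zero at the poles, maximal at the equator).
        layers.append((z, math.sqrt(max(radius**2 - z**2, 0.0))))
        z += layer_height
    return layers

# A 10 mm sphere printed at 0.1 mm per layer: the machine would deposit
# each of these cross-sections in order, from the bottom up.
layers = slice_sphere(radius=10.0, layer_height=0.1)
print(f"{len(layers)} layers")
print(f"first layer radius: {layers[0][1]:.3f} mm, "
      f"middle layer radius: {layers[len(layers) // 2][1]:.3f} mm")
```

A real system does the same thing to an arbitrary CAD model rather than a sphere, and then generates the machine instructions for depositing each layer in turn.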
When the minister of finance reported on the state of government debt a few days ago, saying that it was going off the scale, President Ma Ying-jeou's (馬英九) reaction was: "Government debt in some countries stands at 200 percent of GDP, but they are doing just fine. That is why a lot of people wonder what we are afraid of, saying 'Go on, borrow some more.' Why can't we issue more debt?" This response is deeply regrettable and a cause for great concern given the uncertain future of today's young and future generations.

On the face of it, Taiwan's national debt stands at 38.6 percent of GDP, a lot less than in other high-debt countries. This seems to imply that Taiwan's debt situation is both sound and safe. However, in reality this hides the risk of potential national bankruptcy, because the government's definition of national debt differs from the international standard. This also means that when the government makes debt comparisons with other countries, the results are often ridiculous because of the different calculation standards.

According to Articles 4 and 5 of the Public Debt Act (公共債務法), the government issues non-self-liquidating debt with a maturity of one year or more, excluding short-term debt with a maturity of less than one year. Furthermore, in 2010, the Control Yuan issued a correction to the Cabinet because it found that although the formula for calculating national debt meets the legal requirements, it does not comply with the international definition and is therefore not suitable for making comparisons with other countries.

In June, the legislature passed an amendment to the Public Debt Act that replaces GNP with GDP as the basis for debt calculation and amends the debt ceilings for the national and local governments so that debt may not exceed 40.6 percent and 9.4 percent, respectively, of the average nominal GDP for the previous three fiscal years. These changes are the main ways in which the government wants to restructure local governments, amend the debt restriction structure for the different government levels and meet international standards.

However, this is nothing but a flawed recipe for issuing more debt. This flawed amendment, together with the repeatedly raised debt ceiling, will not be a positive recipe for handling the deteriorating debt situation. It is more like a sugar-coated poison pill that will do nothing to control the situation, but will instead increase debt by NT$72 billion (US$2.43 billion).

It has been 20 years since the government was first told that it must understand that the taxes paid by the public are the precondition for the government services that they need and enjoy, and that continuously borrowing money will not promote public happiness or satisfy more needs. The transformation and upgrading of the country's industrial structure is not going very well, unscrupulous businesses create scandal after scandal, overall economic performance over the past few years has started out promising much and ended in desperation, and the government is only capable of promoting tax incentives, as if tax cuts were a cure-all. The result has been that Taiwan's tax burden is among the world's lowest. This in turn has contributed to the constantly deteriorating fiscal situation, which in the future will force the government to issue debt simply to be able to finance its policies and budgets.
This has led to a situation in which the government needs to raise debt to service debt, just as so-called “credit card slaves” must rely on their cards to make ends meet, placing the government in a vicious circle from which it is unable to extract itself.
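To make the new ceiling formula concrete, here is a minimal sketch of the calculation described above, under which outstanding debt may not exceed 40.6 percent (national) or 9.4 percent (local) of the average nominal GDP of the previous three fiscal years. The GDP figures below are invented placeholders, not actual Taiwanese statistics.

```python
# Illustrative sketch of the Public Debt Act ceiling described above.
# GDP inputs are hypothetical placeholders in NT$ trillions.

def debt_ceiling(gdp_last_three_years, ratio):
    """Maximum allowable debt: ratio times the three-year average nominal GDP."""
    avg_gdp = sum(gdp_last_three_years) / len(gdp_last_three_years)
    return avg_gdp * ratio

gdp = [14.0, 14.5, 15.0]  # hypothetical nominal GDP for three fiscal years

print(f"National ceiling: NT${debt_ceiling(gdp, 0.406):.2f} trillion")  # 40.6% cap
print(f"Local ceiling:    NT${debt_ceiling(gdp, 0.094):.2f} trillion")  # 9.4% cap
```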
<urn:uuid:b13b4fb5-3d5f-47a0-83ec-31f30a57c73a>
CC-MAIN-2016-26
http://www.taipeitimes.com/News/editorials/archives/2013/11/18/2003577120/1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00180-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961191
753
2.578125
3
Science is a pretty big subject to teach. There are so many variables that it can be difficult to know where to begin. Supercharged Science was started by Aurora Lipper, a real rocket scientist, about 10 years ago. Aurora is committed to helping students become excited about learning Science instead of being bored with it. Crew Members were given 6 months of access to the e-Science Premium Membership.

Supercharged Science is an online science curriculum for grades K-12 that allows parents access to 20 units of study which include video instruction presented by Aurora herself, along with step-by-step videos detailing each science experiment, and even shopping lists so you can plan ahead for the lessons you want to do. The program works in a way that students can work at their own pace, and you as the parent do not need a background in Science. Aurora does the teaching for you. With hands-on learning, Science can be fun again!

Here is what you get with the e-Science program:
- Self-guiding lessons
- Detailed video-based instruction taught by a real science teacher (Aurora)
- Step-by-step videos showing how to do each experiment, activity and project
- Comprehensive teacher guides
- Textbook readings
- Exercises & Quizzes
- Unlimited support
- A safe self-contained learning environment

Click on the banner below to read the Crew Reviews! A big thank you to Becca Carroll of C Family of 6 for writing this introductory post.
<urn:uuid:5de13738-ee7a-4be7-ad9b-ffe5e4e400c2>
CC-MAIN-2016-26
http://schoolhousereviewcrew.com/supercharged-e-science-review/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00180-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951756
309
3
3
DEKATRON reborn: Full details on World's Oldest Digital Computer
OK iPad kids, let's see how you do on paper tape

Feature The world’s oldest working digital computer was rebooted on Tuesday following a painstaking three-year refurbishment. The slow-but-steady machine will now be used to educate school-age programmers.

The Harwell Dekatron – AKA the WITCH computer – crunched its first calculations 61 years ago and was used to build Britain’s first nuclear reactors. It has been restored by The National Museum of Computing (TNMOC) at Bletchley Park. At an unveiling ceremony following the rebuild work, one of the Dekatron's early users, Bart Fosey, 85, raced against the machine using a modern hand calculator to see which would complete its task first. The race was declared a draw.

Now students visiting TNMOC can build programs on a PC and convert the code to paper tape to be run on the cleaned-up computer.

The paper tape output by the machine

“They get to produce something and then run their program on the actual hardware,” Kevin Murrell, the TNMOC fellow who restored the Dekatron, told The Reg. He explained the appeal of this old clunker: “Because the machine was never designed to be fast, we can follow its thinking.”

Murrell has something of a life-long relationship with the Dekatron: he first saw it in a museum in the 1970s. At the time the physics student was building his own computer kits, and eventually defected to computing. TNMOC is regularly visited by students who program its collection of BBC Micros, and with so many school groups coming through its doors it’s booked past Christmas. Now the WITCH refurb project will not just save one of the first digital computers from the scrap heap, it will also inspire others as a hands-on exhibit.

When you think of old room-sized computers, your mind may turn to Colossus - the beast designed by Tommy Flowers that was used to crack the German military's encryption codes during the Second World War. A Colossus replica sits in the low-rise maze of war-era brick huts at TNMOC. Or perhaps you may think of the US Army’s Electronic Numerical Integrator And Computer (ENIAC), considered the world’s first general-purpose electronic computer. It was designed and built around the same time as the Dekatron, and used to calculate missile trajectories; it was also operated by a team of women whose job title was “computer” – a first in a sector then, as now, dominated by men. The last living member of the pioneering team was Jean Bartik, who died last year.

The restored Dekatron is not a replica: it has been rebuilt from original parts and its use by visitors is encouraged - unlike the ENIAC on display at the Computer History Museum in Mountain View, California: you can look but you can’t touch that monster – it’s for display purposes only.

The WITCH is a simple beast. Rather than follow the architects of ENIAC and build a general-purpose computer, its creators kept it simple: they built a 2.5-ton calculator that ate numbers and spat out answers. Of the original Dekatron team, Ted Cooke-Yarborough designed the electronics; Dick Barnes made the relays, and control and timing electronics; and Gurney Thomas created the Dekatron memory.

“This was purely a mathematics machine,” museum spokesperson Stephen Fleming told The Reg.
“It’s the number cruncher of its time.”

Dekatron man: Murrell with the machine he helped rebuild

The Atomic Energy Research Establishment - known simply as “Harwell” after its base near Harwell, Oxfordshire - commissioned the cruncher to automate the tedious work assigned to a team of six university-educated mathematicians pounding Brunsviga calculators, work in which mistakes were easy to make. The machine was used in fundamental research and to design early power stations producing electricity for the national grid, including the Calder Hall nuclear plant. It calculated properties such as the ideal thickness of the nuclear reactor’s concrete housing.

A simple division would take the Dekatron more than 10 seconds to complete, but speed was never the issue - as proved by this Tuesday's test. Reliability was the problem.

The Dekatron isn’t unique. It’s a relay computer not unlike the Imperial College Counting Engine from the same era and the German Z3 of 1941. What helped it stand out, though, was its particular use of electronic components made of gas-filled glass tubes that could count to ten. These Dekatron devices were in use from the 1940s until the 1970s for computing, calculating and frequency-division jobs.

Each Dekatron is packed with a ring of paired electrodes that emit a soft neon glow when powered up. The glow advances one position around the ring each time an input pulse is detected, producing what looks like a moving dot that completes revolutions inside its tube; one full revolution indicates that 10 electrical pulses have been received. Each electrode pair is twinned with a cathode that outputs a signal when the electrode is lit up. These signals can feed into other tubes as input pulses, thus building up a chain of counters and frequency dividers.
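As a rough illustration of the counting scheme just described, here is a toy software model of a chain of decade tubes: each tube advances its glowing dot one position per input pulse and passes a carry to the next tube on every full revolution. This is an illustrative sketch only, not a circuit-level emulation of the WITCH.

```python
# Toy model of a chain of Dekatron decade tubes, as described above.
# Each tube holds one decimal digit; a full revolution (10 pulses)
# carries one pulse into the next tube, just like a ripple counter.

class DekatronTube:
    def __init__(self):
        self.position = 0  # which of the 10 cathodes is glowing

    def pulse(self):
        """Advance the glow one position; return True on a full revolution."""
        self.position = (self.position + 1) % 10
        return self.position == 0  # carry into the next tube

class DekatronCounter:
    def __init__(self, n_tubes=3):
        self.tubes = [DekatronTube() for _ in range(n_tubes)]

    def pulse(self):
        # Ripple the carry from the least significant tube upward.
        for tube in self.tubes:
            if not tube.pulse():
                break  # no carry, stop propagating

    def value(self):
        return sum(t.position * 10 ** i for i, t in enumerate(self.tubes))

counter = DekatronCounter()
for _ in range(127):
    counter.pulse()
print(counter.value())  # -> 127
```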
<urn:uuid:cf5f3159-96c0-4bfe-b9b4-d2ba5f596f45>
CC-MAIN-2016-26
http://www.theregister.co.uk/2012/11/21/harwell_dekatron_reboot/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959527
1,127
2.96875
3
BERLIN — Last year Germany produced a record amount of energy from solar panels installed on rooftops and in fields across the country. With a total of about 25 gigawatts of installed panels, Germany now has half of the world's entire solar energy capacity. An unprecedented 7.5 gigawatts of panels were added to the country’s energy system in 2011, twice the government’s target.

One would think that for a country firmly committed to ambitious targets for renewable energy and emissions reductions, this would be a good thing. Instead, politicians in Berlin are furiously negotiating to find a way to slow down this rapid expansion, due to the huge costs involved in paying for solar power.

Two ministries in Berlin have been at loggerheads over their different priorities in tackling the problem. While Economics Minister Philipp Roesler, leader of the Free Democrats, wants to make energy costs cheaper, the Environment Ministry, headed by Norbert Roettgen, a member of Chancellor Angela Merkel’s Christian Democrats, has to balance economic interests with Germany’s climate goals.

Solar, it appears, will be the likely loser, as both ministries foresee less support for the industry in the future. The current debate is focused on the solar subsidies that are largely paid for out of consumers’ pockets. Critics argue that the costs have shot up as solar expanded from just 1 percent of energy in 2009 to 3.5 percent in 2011. It’s on target to rise to as much as 4.5 percent this year. This expansion is pushing up energy costs in general for the German economy. Supporters say prices are already starting to fall and that the majority of expenses have already been invested. They say that support for the industry needs to be maintained now more than ever.

In a sense, Germany’s solar energy policy is a victim of its own success. Over a decade ago, politicians enacted a complicated subsidy system designed to kick-start the green-energy sector, which faced enormous competitive barriers when pitted against mature energy sources like coal and nuclear. The Renewable Energy Sources Act of 2000 set up a system under which energy companies were obliged to buy all electricity generated by green-energy producers at elevated prices, known as feed-in tariffs. The government established feed-in tariffs, or FITs, that gradually decreased over a 20-year period, to reward rapid adoption of renewables while also ensuring that green-power generation would eventually become competitive.

In practice, the cost of producing solar dropped more quickly than the tariff cuts. That meant that customers were in effect simply ploughing money into massive profits for the booming solar industry. Politicians in Berlin have become alarmed that the policy is becoming harmful for Germany. The rapid rate of panel installation has translated into high costs for utility companies and customers. Moreover, overcapacity and competition from cheaper solar modules produced in Asia mean that many domestic companies can’t compete. Around half of the panels now being installed in Germany are imported from China. The result has been a spate of German solar companies going bust.

To address the problem, the government has already doubled the pace of tariff decreases each year from 5 percent to around 10 percent. The Environment Ministry is now proposing cutting the tariffs more rapidly so that they keep pace with the dwindling costs.
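As a rough illustration of the tariff mechanism just described, here is a minimal sketch of a feed-in tariff with annual degression; the gap between the guaranteed tariff and the wholesale price is what ends up on consumers' bills. All numbers are invented for illustration and are not the actual German EEG rates.

```python
# Illustrative feed-in-tariff degression sketch. All figures are
# hypothetical placeholders, not actual German tariff or market data.

market_price = 0.05        # EUR per kWh, assumed wholesale price
tariff = 0.43              # EUR per kWh, assumed starting feed-in tariff
annual_degression = 0.10   # assumed 10% cut per year for new installations

for year in range(2000, 2012):
    surcharge = max(tariff - market_price, 0.0)  # cost passed on to consumers
    print(f"{year}: tariff {tariff:.3f} EUR/kWh, consumer surcharge {surcharge:.3f} EUR/kWh")
    tariff *= (1 - annual_degression)  # degression for the next year's panels
```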
Roettgen has told solar industry representatives that he would like to reduce these subsidies on a monthly basis instead of twice a year. However, the economics minister is proposing much more drastic measures. In a draft bill Roesler sent to lawmakers in January, he envisages capping German solar panel installations at just 1 gigawatt a year on average through 2020. The proposal has alarmed the already struggling solar manufacturers. Environmentalists say it could undermine efforts to develop renewables to compensate for the nuclear stations that Chancellor Merkel has decided, since Fukushima, should close by 2022. Germany’s current target is for green technologies to provide 35 percent of the country’s energy needs by 2020.

For Roesler, the issue is political. He is hoping his opposition to solar subsidies will boost his profile. His party is languishing in the polls, and the issue gives him an opportunity to argue that he is protecting consumers. There are many - not only in his own party but also in Roettgen and Chancellor Angela Merkel’s party - who sympathize with his view.

Roesler’s position is supported by the pro-industry Rhine-Westphalia Institute for Economic Research, or RWI, which has calculated that solar panels installed in Germany between 2000 and 2011 will cost consumers a staggering 100 billion euros ($130 billion) over 20 years. “The most important reason to cut the solar subsidies is that from an economic perspective, they are simply a waste of money,” RWI expert Manuel Frondel told GlobalPost. “We estimate that the average German household will have to pay 1,000 euros over the next 20 years as a result of the photovoltaic panels installed in Germany up to now.”

Frondel said that the attraction of installing solar panels has to be urgently reduced. “Otherwise the costs for the energy consumer will continue to rise massively.” He points out that wind power is far cheaper to produce than solar, and suggests that an alternative quota system for green technology would see utility companies opting for wind over solar.

Keeping the solar industry on life support with ongoing tariffs is pointless, Frondel said. “Many of these companies are facing bankruptcy. And of course one could delay their demise for a while by continuing subsidization, whereby the energy consumer pays high subsidies for solar energy, but it makes no sense from an economic point of view. You cannot prevent the collapse of the German solar industry.”

This view is forcefully opposed by Germany’s strong ecological movement, which sees solar as a vital part of Germany’s energy mix. Capping installation or abolishing subsidies would “bring to a standstill the switch to clean-energy sources,” the BEE Renewable Energy lobby and its green movement allies wrote to the chancellor in late January.

Companies that have invested heavily in solar are deeply concerned by the push from the Economics Ministry. An installation cap would mean “photovoltaic is dead in Germany,” said Franz Fehrenbach, CEO of Robert Bosch, the German car-parts supplier that has branched out into the solar industry in recent years.

For environmentalists, the fact that Roesler wants to kill off the support is incomprehensible. “It is as if we sowed seeds and plants grew and finally started bearing fruit and then shortly before the fruit was ripe, they come with their tractors and mowed them down,” said Gerd Rosenkranz of the non-profit German Environmental Aid.
Rosenkranz admits that solar power has been expensive but argues that most surveys have shown green-conscious Germans are happy to pay a bit more for green technology if it helps the environment. And while the figure of 100 billion euros may seem massive, he points out that this is spread over 20 years for the whole population. “It is not actually that dramatic, which is why people are not rebelling against it.”

He argues that the FITs enabled the German solar industry to eventually produce much cheaper panels, something that benefits not just Germany but the world as it fights climate change. “We cannot change the fact that it was expensive in the past, but what is coming now is not as costly,” Rosenkranz said. Cutting off the subsidies at this point, however, would help kill off an industry that is on the cusp of being viable without FITs, he said, adding that setting an installation cap just as Germany abandons nuclear power is simply “not logical." And while wind is cheaper than solar, there is currently insufficient infrastructure in Germany to get the energy from the coast to the rest of the country, whereas solar panels can be installed anywhere.

Solar is now on target to make up around 10 percent of the energy mix by 2020. Yet even with the flood of cheap Chinese solar panels, that development could be put in jeopardy if much of the industry in Germany is allowed to collapse, environmentalists say. Even solar supporters admit that many companies became lazy on the back of the subsidies and did not invest enough in R&D. “Whenever there is such rapid development then there is going to be a phase of consolidation, during which some companies fail,” Rosenkranz said. “That is terrible for the owners and the employees but that is something that always happens when developments are so fast.”

The companies that remain are dealing with constant uncertainty, awaiting the result of the political wrangling in Berlin. They complain that the constant tinkering with the system makes it difficult to plan or invest. “It would be crazy if an industry which has been developed with the money of many citizens is driven to the wall before it can really be economically viable,” Rosenkranz said. “It would really be the dumbest thing Germany could do.”
<urn:uuid:00af857c-9759-4fe0-9cb9-e1736a396bbf>
CC-MAIN-2016-26
http://www.globalpost.com/dispatch/news/regions/europe/germany/120217/germany-battles-over-the-future-solar-energy?page=0,1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00044-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96696
1,928
3.046875
3
Transition to Adulthood

Transition: Planning for Life Beyond High School

Transition is the official term for the coordinated, systematic set of activities that creates a bridge between school and adult life for students with disabilities age 14 to 21. Transition services help students become a part of the adult community – get ready for work and other aspects of adult life, obtain further education, etc.

The Transition years (age 14 to 21) can be a challenging time for parents of students with disabilities. Some of the questions parents ask include:
- What will my child do after graduation?
- Will he or she be able to get a job?
- Which agencies will help my child?
- Should my child receive more education or training? What is available and how much does it cost? Is help available for the cost?
- Will my child be able to live independently in the community? Which agencies can help me in this process?
- Which public benefits will my child be eligible for? Will working affect his/her public benefits?
- What supports are available for adults with the most significant disabilities?

Transition Starts at Age 14

Transition is a requirement of federal law (Individuals with Disabilities Education Improvement Act [IDEIA], 2004) and the Pennsylvania Special Education Regulations and Standards. School districts are responsible for the education of students with disabilities through age 21, unless the student graduates before age 21. Beginning at age 14, students with disabilities must have a Transition Plan with measurable annual goals as part of the Individualized Education Program (IEP). Between age 14 and graduation, the Transition plan may change as the achievement and interests of the student change. But even if the student does not know what he/she wants to do after graduation, a Transition plan must be developed.

Transition-age students should participate in the IEP process to the extent that they are able. Participation helps students define realistic outcomes and identify adults who can help them reach their goals after high school. Participating in the IEP process also helps students learn to advocate for themselves. Even if the student does not attend the IEP meeting, the school must take steps to ensure that the student’s preferences and interests are considered.

Ongoing Transition Planning: Ages 15 – 21

To prepare your child for post-high school activities and services, make sure that psychological evaluations are up to date. The college admission and placement processes for students with disabilities, as well as most transition and vocational programs, often require the results of these tests to be less than three years old at the time of application. Because of the complexities of public benefit systems for adults with disabilities (Social Security, Office of Vocational Rehabilitation, Medicaid waivers, etc.), the student, parents, educators, and other professionals must work collaboratively on a Transition plan that includes all components of young adult life. This collaborative planning ensures that the necessary services are in place and that the student develops the skills needed to be successful upon graduation. During the Transition years, students and their parents must learn about public benefits for adults with disabilities (Social Security, Office of Vocational Rehabilitation, Medicaid waivers, etc.) and the community agencies that may be part of the young adult’s life.
<urn:uuid:8d9f9f46-b721-4d41-83ae-1aca5d0280a0>
CC-MAIN-2016-26
http://www.familyresourceguide.org/trans-to-adulthood/index.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393093.59/warc/CC-MAIN-20160624154953-00199-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950127
663
3.328125
3
Much of the difference in approaches to controller algorithms and tuning can be traced back to assumptions made about the type and importance of disturbances. Each method has merits based on the disturbance frequency, location, and time lag. Here we gain an understanding of how to reduce process variability from upsets originating from changes in raw materials, production rates, weather, operating conditions, or other loops.

The emphasis in the control literature is on setpoint response. When the ability to handle disturbances is studied, a step disturbance is typically shown as entering the loop at the process output at the point of the measurement. Often sensor and measurement delays and lags, filter time, and PID module execution time are not included. Consequently, the disturbance appears immediately at the PID input. This approach simplifies the mathematical analysis and the dynamic compensation of feedforward signals, and it shows the advantage of model-based algorithms such as internal model control.

For a PI-on-error structure, the tuning for a step disturbance in the process output is the same as for a setpoint change. In contrast, the tuning for a step disturbance at the process input (a load disturbance) is more aggressive, to minimize peak and integrated errors. We can retain this aggressive load-disturbance tuning for setpoint changes by using an alternate structure, such as two degrees of freedom (2DOF), or by introducing a setpoint filter or lead-lag to reduce overshoot without excessively increasing the time to reach setpoint. For step disturbances on the process output, the lack of such options has led to the development of different tuning rules and special algorithms. The rules and algorithms also depend on whether the process has a self-regulating, dead-time-dominant, near or true integrating, or runaway open loop response. As you can imagine, this sets us up for a remarkable spectrum of proposed solutions and a considerable difference of opinion. Often the proponents of a particular rule or algorithm are focused on a specific disturbance location and type of open loop response.

The most commonly encountered disturbance of practical interest enters the process upstream of the process dynamics and is a change in flow, typically feed flow. A disturbance entering the process at about the same point as the manipulated flow is termed a load disturbance. Changes in composition or temperature of the feed or manipulated flow are also considered load disturbances, but these are usually much slower.

For a given size of disturbance, the impact increases with the rate of change of the disturbance. The worst case is the step disturbance seen throughout the literature. Step disturbances result from compressors, fans, or pumps starting or stopping and from relief valves or on-off valves opening or closing. These actions are typically initiated by manual actions, sequences (e.g., batch operations and automated startups and transitions), and safety instrumented systems. Snubbers (restrictors in the air lines) can be used to slow down the stroke of on-off valves, but the adjustment is not as accessible or flexible as the tuning settings and analog output (AO) block setpoint rate limits in a PID.

Most disturbances are not a step change because flows are typically manipulated by a PID with reset action. If a flow loop is used, the PID tuning uses more integral action than proportional action
(e.g., PID gain = 0.2 and reset time = 2 seconds) to deal with the valve nonlinearities. The flow control closed loop time constant (lambda), and thus the disturbance time constant for the process loops affected by the flow change, is about 10 seconds. If there is no secondary flow loop, the feedback action of the primary process composition and temperature loops has even larger closed loop time constants.

However, for setpoint changes rather than load disturbances, there is a large initial step from proportional action and a kick from derivative action for a structure with PID on error. If a secondary flow loop is not used, the primary PID output changes are immediately passed on as abrupt valve position or speed changes, and hence flow changes, to affected loops. For continuous operations there are not many setpoint changes. The disruptive nature of setpoint changes is more of an issue for batch operations and automated startups and transitions in product grade or type. Note that for many batch pressure and temperature loops, the time to reach setpoint is more important for reducing batch cycle time than minimizing steps in utility flows. An analog output (AO) block setpoint rate-of-change limit can be used with external reset feedback to slow down the action of the valve, making the upset to utility systems less abrupt. Thus, the advantage of a secondary flow loop extends beyond isolating the primary process loop from valve and speed nonlinearities: it slows down the most prevalent fast disturbance (flow), and it enables flow feedforward control (e.g., flow ratio control) that compensates for most of the disturbance before it affects other primary process loops.

The fastest reasonable response is a lambda equal to one dead time. Due to unknowns, a lambda equal to two to four times the dead time is used. For a disturbance that ramps due to a near or true integrating process, the open loop error (process variable error if the PID is in manual) is replaced by the open loop ramp rate (process variable ramp rate if the PID is in manual). The time units of this open loop error are cancelled out by the time units in the integrating process gain in the equations for the integrated error and peak error for closed loop control (PID in automatic or cascade mode). Slow load disturbances will exhibit longer recovery times (a slow, protracted return to setpoint). An increase in integral action (a decrease in reset time) can help the PID deal with a load that continues to increase with time.

Oscillatory disturbances are particularly problematic because a perpetual state of upset is created and the possibility of resonance exists. If the period of the disturbance is near the ultimate period of a loop, closed loop control will increase the amplitude (resonance). The best solution is of course to eliminate the oscillatory disturbance. Most often these oscillatory disturbances are caused by inappropriate tuning, valves with excessive backlash or stiction, batch operations, and on-off control. PID control should replace on-off control (e.g., level measurement and PID control instead of level switches). In terms of tuning, the most common mistake is a reset time that is too small, particularly for level loops. For surge tank level control, the transfer of a change in inlet flow from batch operations to the manipulated outlet flow can be smoothed to take advantage of available inventory.
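As a concrete illustration of the structure options mentioned above, here is a minimal sketch of a two-degrees-of-freedom PI controller: the load-rejection tuning stays aggressive, while a setpoint weight on the proportional term softens the proportional step on setpoint changes. This is an illustrative sketch only (ideal non-interacting form, fixed execution period); industrial PIDs add anti-windup, output limits, and signal filtering, and the tuning values shown are arbitrary.

```python
# Minimal two-degrees-of-freedom PI sketch (ideal form, fixed sample time).
# The setpoint weight beta < 1 softens the proportional kick on setpoint
# changes, while a load change (which moves only the process variable)
# still sees the full, aggressive proportional and integral action.

class TwoDofPI:
    def __init__(self, kc, reset_time, beta=0.5, dt=0.1):
        self.kc = kc          # controller gain, tuned for load rejection
        self.ti = reset_time  # reset (integral) time, seconds
        self.beta = beta      # setpoint weight on the proportional term
        self.dt = dt          # execution period, seconds
        self.integral = 0.0

    def update(self, setpoint, pv):
        # Proportional term uses the weighted setpoint; the integral term
        # uses the full error so there is no steady-state offset.
        e_p = self.beta * setpoint - pv
        e_i = setpoint - pv
        self.integral += (self.kc / self.ti) * e_i * self.dt
        return self.kc * e_p + self.integral

pid = TwoDofPI(kc=2.0, reset_time=20.0, beta=0.4)
out = pid.update(setpoint=50.0, pv=48.0)  # one execution of the algorithm
```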
For valves, the use of rotary valves designed for tight shutoff with piston actuators is the most frequent culprit. If the disturbance period is significantly less than twice the ultimate period, the amplification can be reduced by tuning the affected PID slower (smaller gain and rate time and greater reset time). Feedforward can provide preemptive action, reducing the need for feedback control and the consequences of slowing down the PID tuning. If the disturbance period is much larger than twice the ultimate period, the tuning solution is to make the PID faster (larger gain and rate time and smaller reset time). The application of special notch filters is highly dependent upon accurate knowledge of the noise frequency. The careful, judicious use of the standard DCS first-order filter offers the greatest general utility.
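For reference, here is a minimal sketch of the standard first-order (exponential) measurement filter mentioned above. The sample time and filter time constant are arbitrary illustrative values; in practice the filter time is kept small relative to the loop dynamics so the filter does not add significant lag.

```python
# Sketch of a standard first-order (exponential) filter, as commonly
# configured in a DCS to attenuate measurement noise. Values are
# illustrative only.

import math

def first_order_filter(raw_values, filter_time, dt):
    """Apply y += a*(x - y), with a derived from the filter time constant."""
    a = 1.0 - math.exp(-dt / filter_time)  # exact discrete equivalent
    y = raw_values[0]
    out = []
    for x in raw_values:
        y += a * (x - y)
        out.append(y)
    return out

# Noisy measurement sampled every 0.5 s, filtered with a 2 s time constant.
noisy = [50.0, 50.8, 49.1, 50.9, 49.2, 50.7, 49.3]
print(first_order_filter(noisy, filter_time=2.0, dt=0.5))
```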
<urn:uuid:3da394be-3b11-4e10-b903-47ceb29075df>
CC-MAIN-2016-26
http://www.controlglobal.com/blogs/controltalkblog/effect-of-disturbance-dynamics-perspective-tips/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395613.65/warc/CC-MAIN-20160624154955-00040-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924474
1,476
3.015625
3
Gay communities across Africa often run into the sharp end of prejudice against their sexual orientation, yet a transvestite fraternity in the South African coastal city of Cape Town has gained a level of acceptance that allows them to publicly practice their lifestyle with minimal fear of retribution. The rights of gays are enshrined in the country's constitution, but the murder of homosexuals and the "corrective rape" of lesbians often feature in the headlines; now, the city's "moffie culture" - a term for the mainly coloured, or mixed race, transvestites - is managing to transcend these barriers to a degree. Marlow Valentine, deputy director of the Cape Town-based Triangle Project, a support group for lesbian, gay, bisexual and transgender (LGBT) people, told IRIN that transvestites had been living openly in the city since the early 1960s. "The moffie subculture emerged in District Six in Cape Town during the 1940s and '50s, an inner-city area that truly reflected the idea of a 'rainbow nation', as it was home to people of different ethnic backgrounds and religious beliefs during the earlier days of apartheid," he said. District Six was demolished by the apartheid government, which saw the mixed community of blacks, coloureds and whites living cheek-by-jowl as flouting the official policy of racial separation. "Even though homosexuality was a criminal offence at the time [during apartheid], men who cross-dressed and participated in drag shows were accepted. It seems gay men who retained a level of masculinity were not accepted, but effeminate men were, as their sexual orientation was not seen as threatening," Valentine said. "When the apartheid regime began to racially segregate communities in Cape Town, people from District Six, including the transvestites, were moved out into the coloured townships, and this was how the moffie subculture became established in the wider communities," he explained. Valentine believes the flamboyant drag queen personas taken on by many transvestites, and the perception that they are successful business owners, have been key to transgender people's ability to integrate more successfully than the general gay population. "Transvestites are still known for putting on drag shows in their local communities, and many straight people go to these shows because of the entertainment value, as the shows provide a level of comic relief that is affordable," he said. "They also often run successful businesses, like hair salons and beauty parlours, which usually affords them a level of respect in their communities, because of the high unemployment that exists there. These factors have created a situation ... in which the so-called moffies, or transvestites, have become accepted rather than shunned."
"I live openly as a transgender person and most people accept me for who I am - those who do have issues with my lifestyle are more concerned about the sexual aspect of it rather than the cultural side, which involves cross-dressing and drag shows," he said. "People who have real issues with the way I live often come from a religious background and are usually men - they seem to feel that by embracing my feminine side I have betrayed them as men, but I think that is just ignorance," Jumah commented. When asked why African communities were less accepting of the transgender subculture than people from coloured communities, he said he thought it was linked to the morals and values of the different groups. "I've got a lot of gay African friends who suffer jibes from their own people. It is much easier to be transgender in the coloured communities, and I think this is because our cultures are very different," Jumah said. Valentine said although transvestites were more accepted in the coloured community, "When it comes to discussing the deeper issues around homosexuality, the same community members who accept transvestites will be openly homophobic. The broader idea of homosexuality is still not accepted by the majority of people."
<urn:uuid:e7448fc1-7992-4f22-be84-fa74ba710a2d>
CC-MAIN-2016-26
http://www.irinnews.org/news/2010/04/08/gay-lifestyle-okay-being-gay-not
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.980269
948
2.546875
3
Writing Better Objective Tests
Joseph Ryan, Department of Education

The objective test is only one of many ways in which students can be evaluated. Tests can be formal or informal, oral or written; and no one form of testing is necessarily better or worse than another. Objective tests, however, do offer some advantages over other forms of testing. By definition, these testing procedures are more objective than other procedures. That is, they are less dependent on personal opinion than some other forms of testing. Objective tests also tend to be more reliable than other types of testing; and the objective format allows instructors to test a large number of students on a wide range of topics in a relatively brief period of time.

Before an appropriate test can be written, the knowledge or skills taught in class need to be defined with some care. Instructors must determine the content or material that has been taught and the type of skill a student should be able to demonstrate with respect to the content or material. This is illustrated by the three test items described below. Each of these three items deals with the same content, namely the freezing temperature of water. The items differ, however, in the intellectual skills that must be applied to that content. In Item 1 the student must recognize the freezing temperature of water. In Item 2 the student must recall this temperature. In Item 3 the student must demonstrate comprehension of the freezing temperature of water. When writing objective tests, review each item after it has been written to judge whether the content and skill it requires have, in fact, been taught in class.

Multiple Choice Items

Multiple choice questions contain two major parts: the stem, which presents the problem, and several alternative answers. The following checklist can be used to create or evaluate multiple choice questions.
- The stem, not the responses, should introduce what is expected of the student.
- The stem should be free of irrelevant material.
- All the options should be plausible and homogeneous.
- All the options should be grammatically consistent with the stem.
- Obvious verbal associations between the stem and the correct answer should be eliminated.
- Overlapping options should be eliminated.
- All options should be approximately the same length.

True-False Items

A true-false test item is written in the form of a declarative sentence. The student must judge whether the sentence is a true or a false statement. Some instructors prefer to use the true-false format with the additional requirement that students indicate how the false items can be changed to make them true. This adaptation requires that the instructor provide very clear standards for scoring these answers. Use the following checklist to create or evaluate true/false items.
- The language of the items should be simple and clear.
- The statement should be specific enough to allow a judgement to be made.
- The statement should be clearly true or false.
- Specific determiners (e.g., always, never, sometimes, ever) should be avoided.
- Use only a single idea in each statement.
- The number of true statements and false statements should be approximately equal.

Matching Items

The matching item is a modification of the multiple choice question. In a matching test item, a list of words or phrases is presented in a column, generally on the left side of the page. These words or phrases are called the premises of the item. A second column, generally on the right side of the page, contains words or phrases called responses that are to be matched with the premises.
When there are exactly as many premises as there are responses and when each response is used once and only once in the matching process, the test item is said to have perfect matching. When some of the responses are used more than once or not at all, the item is said to have imperfect matching. Imperfect matching makes guessing more difficult. Following are suggestions for writing matching test items.
- Clearly explain in the directions the basis on which the matching is to be made.
- Make sure that the directions make clear whether each response can be used only once or not at all. It is usually better to have more responses than premises and to state that each response may be used more than once and that some responses may not be used at all.
- Keep the lists of premises and responses short (5 or 6). If the lists are too long, the items will be testing the students' memory and reading skills.
- Keep the lists of premises and responses relatively homogeneous.
- Write the responses in the form of short phrases, single words, numbers, or symbols and arrange them in an obvious order--alphabetical, chronological, etc.

The preceding sections offer specific recommendations for improving the writing of three types of objective test questions. In addition, the following general guidelines may be useful when preparing any type of objective test item.
- Design each item to measure an important learning outcome as defined by course objectives.
- Include only one central idea in each test item.
- Write the stem and options for each item in simple, clear language.
- Do not make items more difficult through the use of tricks or ambiguity. Increase the difficulty level by changing the stems or options.
- Make each test item independent of other items on the test.
- If negatives are used, the negative should be emphasized by capitalization or underlining (e.g., NOT, NONE).

Reference: Ryan, J., Lackey, G., & Bell (1981). Improving your classroom tests: Writing better objective questions. University of South Carolina, Department of Educational Research.
<urn:uuid:a3bc7887-73b8-4dd9-873c-bbf5e91ac9e1>
CC-MAIN-2016-26
http://www.abacon.com/lefton/objective.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00145-ip-10-164-35-72.ec2.internal.warc.gz
en
0.868003
1,260
3.859375
4
Lacock Abbey was the first building 'that was ever known to have drawn its own picture', wrote William Henry Fox Talbot in The Pencil of Nature, the first published photographically illustrated book. Talbot was one of the most influential inventors of photography; his two-step negative-positive method was particularly suitable for book illustration because multiple copies of photographs could be produced from one negative.
<urn:uuid:98be06e5-72c1-4ddc-8d64-75058e563b63>
CC-MAIN-2016-26
http://www.bl.uk/collections/early/victorian/photogra/photog1.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00127-ip-10-164-35-72.ec2.internal.warc.gz
en
0.980881
79
3.0625
3
Friday, February 12, 2010

Centralization & Decentralization

Centralization is the extent to which authority is concentrated at the top management levels. It is the act or process of centralizing, or the state of being centralized; the act or process of combining or reducing several parts into a whole; as, the centralization of power in the general government; the centralization of commerce in a city. It describes a situation in which decision-making power is at the top of an organization and there is little delegation of authority. It is the opposite of decentralization.

Centralization and decentralization are really a matter of degree. Full centralization means minimum autonomy and maximum restrictions on the operations of the subunits of the organization. As an organization grows in size and complexity, decentralization is generally considered to be effective and efficient.

Centralisation, or centralization, is the process by which the activities of an organisation, particularly those regarding planning and decision-making, become concentrated within a particular location and/or group. In political science, this refers to the concentration of a government's power - both geographically and politically - into a centralised government. In neuroscience, centralization refers to the evolutionary trend of the nervous system to be partitioned into a central nervous system and a peripheral nervous system. In business studies, centralisation and decentralisation are about where decisions are taken in the chain of command.

Decentralization is the extent to which authority is delegated to lower management levels - the delegation of decision-making to the subunits of an organization. It is a matter of degree. The lower the level where decisions are made, the greater is the decentralization. Decentralization is most effective in organizations where subunits are autonomous and costs and profits can be independently measured. The benefits of decentralization include: (1) decisions are made by those who have the most knowledge about local conditions; (2) greater managerial input in decision-making has a desirable motivational effect; and (3) managers have more control over results. The costs of decentralization include: (1) managers have a tendency to look at their division and lose sight of overall company goals; (2) there can be costly duplication of services; and (3) the costs of obtaining sufficient information increase.

1. The devolution of ‘decision-making powers to the lowest levels of government authority…to promote democracy and participation, such that local people are directly involved in decisions and developments which affect them personally’ (Nel and Binns, Geography 88). 2. A process counteracting the growth of urban areas, known also as counter-urbanization. Even while the city is still growing, it has many negative externalities such as congestion, noise, pollution, crime, and high land values. Such problems are a spur to spontaneous movement away from the cities, which has been compounded by the increasing locational freedom of shops, offices, and industries to move to out-of-town shopping centres, office parks, and industrial estates, respectively, together with the increase in numbers of white-collar workers, the consequent rise in incomes, and mass car ownership. Research in the late 1970s indicated that a number of city regions in the UK and north-west Europe were undergoing absolute or relative decline in their cores while growth continued in their hinterlands, and by the mid-1980s similar trends were observed in Mediterranean cities, especially in Italy.
On a national scale, governments may favour decentralization to restore the fortunes of declining regions which are suffering from out-migration to the extent that services and infrastructure may be under-used. Governments may attempt to decentralize by discouraging new investment at the centre and encouraging growth in the depressed areas. Incentives for such relocation include grants, loans, tax concessions, and the provision of industrial premises.

Decentralization, or decentralisation, is the process of dispersing decision-making and governance closer to the people and/or citizens. It includes the dispersal of administration or governance in sectors or areas like engineering, management science, political science, political economy, sociology and economics. Decentralization is also possible in the dispersal of population and employment. Law, science and technological advancements lead to highly decentralized human endeavours. "While frequently left undefined (Pollitt, 2005), decentralization has also been assigned many different meanings (Reichard & Borgonovi, 2007), varying across countries (Steffensen & Trollegaard, 2000; Pollitt, 2005), languages (Ouedraogo, 2003), general contexts (Conyers, 1984), fields of research, and specific scholars and studies." (Dubois and Fattore 2009)

A central theme in decentralization is the difference between a hierarchy, based on authority (two players in an unequal-power relationship), and an interface (a lateral relationship between two players of roughly equal power). The more decentralized a system is, the more it relies on lateral relationships, and the less it can rely on command or force. In most branches of engineering and economics, decentralization is narrowly defined as the study of markets and interfaces between parts of a system. This is most highly developed as general systems theory and neoclassical political economy.

Decentralization in history

Decentralization and centralization are themes that have played major roles in the history of many societies. An excellent example is provided by the gradual political and organizational changes that have occurred in European history. During the rise and fall of the Roman Empire, Europe went through major centralization and decentralization. Although the leaders of the Roman Empire created a European infrastructure, the fall of the Empire left Europe without a strong political system or military protection. Viking and other barbarian attacks further led rich Romans to build up their latifundia, or large estates, in a way that would protect their families and create a self-sufficient living place. This development led to the growth of the manorial system in Europe. This system was greatly decentralized, as the lords of the manor had power to defend and control the small agricultural environment that was their manor. The manors of the early Middle Ages slowly came together as lords took oaths of fealty to other lords in order to have even stronger defense against other manors and barbarian groups. This feudal system was also greatly decentralized, and the kings of weak "countries" did not hold much significant power over the nobility. Although some view the Roman Catholic Church of the Middle Ages as a centralizing factor, it played a strong role in weakening the power of the secular kings, which gave the nobility more power. As the Middle Ages wore on, corruption in the church and new political ideas began to slowly strengthen the secular powers and bring together the extremely decentralized society.
This centralization continued through the Renaissance and has been changed and reformed into the present centralized system, which is thought to strike a balance between central government and a decentralized balance of power.

Decentralization—the transfer of authority and responsibility for public functions from the central government to subordinate or quasi-independent government organizations and/or the private sector—is a complex and multifaceted concept. It embraces a variety of concepts. Different types of decentralization show different characteristics, policy implications, and conditions for success. Typologies of decentralization have flourished (Dubois & Fattore 2009). Political, administrative, fiscal, and market decentralization are the main types. Drawing distinctions between these various concepts is useful for highlighting the many dimensions of successful decentralization and the need for coordination among them. Nevertheless, there is clearly overlap in defining these terms, and the precise definitions are not as important as the need for a comprehensive approach (see Sharma, 2006). Political, administrative, fiscal and market decentralization can also appear in different forms and combinations across countries, within countries and even within sectors.

Administrative decentralization seeks to redistribute authority, responsibility and financial resources for providing public services among different levels of governance. It is the transfer of responsibility for the planning, financing and management of public functions from the central government or regional governments and their agencies to local governments, semi-autonomous public authorities or corporations, or area-wide, regional or functional authorities. The three major forms of administrative decentralization -- deconcentration, delegation, and devolution -- each have different characteristics. Deconcentration, often considered the weakest form, redistributes decision-making authority and financial and management responsibilities among different levels of the central government itself.

Delegation is a more extensive form of decentralization. Through delegation central governments transfer responsibility for decision-making and administration of public functions to semi-autonomous organizations not wholly controlled by the central government, but ultimately accountable to it. Governments delegate responsibilities when they create public enterprises or corporations, housing authorities, transportation authorities, special service districts, semi-autonomous school districts, regional development corporations, or special project implementation units. Usually these organizations have a great deal of discretion in decision-making. They may be exempted from constraints on regular civil service personnel and may be able to charge users directly for services.

Devolution is an administrative type of decentralisation. When governments devolve functions, they transfer authority for decision-making, finance, and management to quasi-autonomous units of local government with corporate status. Devolution usually transfers responsibilities for services to local governments that elect their own functionaries and councils, raise their own revenues, and have independent authority to make investment decisions. In a devolved system, local governments have clear and legally recognized geographical boundaries over which they exercise authority and within which they perform public functions. Administrative decentralization underlies most cases of political decentralization.

Dispersal of financial responsibility is a core component of decentralisation.
If local governments and private organizations are to carry out decentralized functions effectively, they must have an adequate level of revenues - either raised locally or transferred from the central government - as well as the authority to make decisions about expenditures. Fiscal decentralization can take many forms, including self-financing or cost recovery through user charges; co-financing or co-production arrangements through which the users participate in providing services and infrastructure through monetary or labor contributions; expansion of local revenues through property or sales taxes, or indirect charges; intergovernmental transfers that shift general revenues from taxes collected by the central government to local governments for general or specific uses; and authorization of municipal borrowing and the mobilization of either national or local government resources through loan guarantees. In many developing countries local governments or administrative units possess the legal authority to impose taxes, but the tax base is so weak and the dependence on central government subsidies so ingrained that no attempt is made to exercise that authority.

The contrast can be summed up as follows:
- Centralization: subordinates are not trusted; decisions are always made by top management.
- Decentralization: there is close participation between employees and top management; decisions are made throughout the organization.
<urn:uuid:f7f436d8-e29c-46c1-81b5-397e26b99216>
CC-MAIN-2016-26
http://management4best.blogspot.com/2010/02/centralization-decentralization.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00134-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94256
2,186
3.4375
3
Transparent electronics is an emerging technology that focuses on producing optoelectronic devices and invisible electronic circuitry. It relies on transparent conducting oxides (TCOs) such as In2O3, SnO2, ZnO, and CdO. Transparent electronics dramatically changes the use and look of electronic devices. It increases the potential for innovative products by adding features like the ability to display information on an automobile windshield or to surf the Web on top of a tea table.

In the last decade, the market for the different materials used in transparent electronic applications has grown robustly. This emerging market is expected to grow at a rapid rate in the future. Due to their large numbers of tech-savvy people, a majority of this growth is expected from North America and Europe. The market has been dominated by transparent conducting oxides (TCOs) due to their wide range of applications in touch display panels, flat panel displays, solar cells, defrosters, heaters, optical coatings, and smart windows. TCOs can be used as passive optical or electrical coatings. The silicon compound segment dominates the transparent electronic materials market. Zinc oxide is the cheapest and most environmentally friendly compound; it also has significant potential in non-volatile flash memory.

Applications of transparent electronics include transportation, consumer electronics, energy sources and others. Owing to stringent regulations on energy consumption and emission reduction, the demand for building-integrated photovoltaics (BIPV) has increased, which will accelerate the demand for transparent electronic materials. Transparent electronics reduce the size and increase the memory capacity of electronic devices. Moreover, market growth is further accelerated by technology innovation and new product developments, which will reduce the cost and size of electronic devices. A lack of skilled manpower across the disciplines needed to achieve the desired application-specific properties of the final product (pure and applied science, physics, chemistry, and electrical, electronics and circuit engineering) is a major challenge for this industry.

Oregon State University has developed zinc-tin-oxide-based resistive random access memory (RRAM), also referred to as a 'memristor'. This is a new transparent technology in which the computer memory operates on resistance. Products incorporating this technology become cheaper, smaller, and faster.

This research report analyzes this market by market segment, major geography, and current market trends. Geographies analyzed under this research report include:
- North America
- Asia Pacific
- Rest of the World

This report provides comprehensive analysis of:
- Market growth drivers
- Factors limiting market growth
- Current market trends
- Market structure
- Market projections for upcoming years

This report is a complete study of current trends in the market, industry growth drivers, and restraints. It provides market projections for the coming years. It includes analysis of recent technological developments, Porter's five forces model analysis and detailed profiles of top industry players. The report also includes a review of the micro and macro factors essential for existing market players and new entrants, along with detailed value chain analysis.
Reasons for Buying this Report
- This report provides pinpoint analysis of changing competitive dynamics
- It provides a forward-looking perspective on the different factors driving or restraining market growth
- It provides a technological growth map over time to understand the industry growth rate
- It provides a seven-year forecast assessed on the basis of how the market is predicted to grow
- It helps in understanding the key product segments and their future
- It provides pinpoint analysis of changing competition dynamics and keeps you ahead of competitors
- It helps in making informed business decisions by providing complete insights into the market and in-depth analysis of market segments
- It provides distinctive graphics and exemplified SWOT analysis of major market segments
<urn:uuid:11cfadde-4a08-40da-8ec7-0d69d02173c3>
CC-MAIN-2016-26
http://www.transparencymarketresearch.com/transparent-electronics-market.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00067-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934109
770
2.9375
3
Discovering Addiction Genes Using the Candidate Gene Approach
In Pedigree Investigator, you can follow a family with nicotine addiction and identify a gene that may increase their susceptibility. Below, you can learn more about the candidate gene approach used in that activity.
Searching among all of our 21,000 or so genes for a few that are involved in a complex disease can be quite costly and time consuming. That is why researchers often use what they and others know about a disease to narrow their focus down to a smaller number of "candidate genes."
Where Do We Begin?
A smart way to start narrowing down the number of candidate genes is by looking at what we already know. As part of the scientific process, researchers routinely publish their discoveries in journals. Other researchers read the journal articles and then build on these past discoveries.
Putting Our Heads Together
Researchers often work together, or collaborate, to tackle a problem from several angles at once. This allows researchers to pool their areas of expertise to everyone's benefit. In order to determine whether a gene is associated with nicotine addiction, a protein biochemist might need to collaborate with both a clinician and a DNA analyst.
Protein biochemist - Studies protein structure and function, and how proteins interact with one another. Provides a basic understanding of the problem.
Clinician - Treats patients for nicotine addiction. Provides access to patients and collects DNA samples from nicotine addicts and their family members.
DNA Analyst - Can rapidly analyze the DNA sequence from hundreds of blood samples. Analyzes and compares the DNA sequences of nicotine addicts and non-addicts.
How Genes Affect Behavior
How can a difference in someone's gene sequence cause a behavioral change, like being more susceptible to nicotine addiction? And why isn't the link 100%?
After collecting DNA from several people with nicotine addiction and ADHD, you analyze their CHRNA4 gene sequences to determine which alleles they carry. Upon comparing addicts with ADHD to non-addicts and addicts without ADHD, you determine that allele 2 seems to be significantly more common in addicts with ADHD. However, not everyone who has allele 2 is addicted, and not everyone who is addicted has allele 2. Clearly, there is more to this story than this one gene. Such is the nature of complex diseases. Nicotine addiction and ADHD are complex diseases affected by many genes and environmental factors. CHRNA4 is just one gene that may play a role in addiction. The next step is to identify other addiction genes and determine how they work together to produce this complex disease. Understanding these genes' roles and how they interact can help lead to more effective treatments for nicotine addiction.
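To make the comparison step concrete, here is a minimal sketch of the kind of association test a DNA analyst might run on allele counts. The counts are invented, and the choice of a chi-square test from SciPy is an illustrative assumption, not a description of the actual study.

from scipy.stats import chi2_contingency

# Hypothetical allele-2 counts for a candidate gene such as CHRNA4.
# Rows: allele 2 carriers / non-carriers.
# Columns: addicts with ADHD / non-addicted controls.
counts = [[45, 20],
          [55, 80]]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

A small p-value says allele 2 turns up in the addicted group more often than chance alone would explain. As the passage notes, that is evidence of association, not a guarantee: carriers can be unaffected and non-carriers can be addicted, because many genes and environmental factors are in play.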
<urn:uuid:d5ddff82-c013-4f50-9046-248f03e893c1>
CC-MAIN-2016-26
http://learn.genetics.utah.edu/content/addiction/candidate/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00100-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937708
541
2.515625
3
The IPCC Summary for Policymakers (after all, they got to approve every sentence of the thing) is occupying the blogs, with continued focus on climate sensitivity, ocean heat content, sea level rise and more. To Eli, an important bottom line can be found in the boxed statement on the last page:
Cumulative emissions of CO2 largely determine global mean surface warming by the late 21st century and beyond. Most aspects of climate change will persist for many centuries even if emissions of CO2 are stopped. This represents a substantial multi-century climate change commitment created by past, present and future emissions of CO2.
which is expanded on below:
A large fraction of anthropogenic climate change resulting from CO2 emissions is irreversible on a multi-century to millennial time scale, except in the case of a large net removal of CO2 from the atmosphere over a sustained period. Surface temperatures will remain approximately constant at elevated levels for many centuries after a complete cessation of net anthropogenic CO2 emissions. Due to the long time scales of heat transfer from the ocean surface to depth, ocean warming will continue for centuries. Depending on the scenario, about 15 to 40% of emitted CO2 will remain in the atmosphere longer than 1,000 years.
Bunnies have to keep in mind that the most optimistic of the scenarios, RCP 2.6, the one where the world gets serious about climate change, requires huge reductions in carbon dioxide emissions:
By 2050, annual CO2 emissions derived from Earth System Models following RCP2.6 are smaller than 1990 emissions (by 14% to 96%). By the end of the 21st century, about half of the models infer emissions slightly above zero, while the other half infer a net removal of CO2 from the atmosphere.
The Süddeutsche Zeitung reports that this originally said that by 2050 emissions had to be halved to avoid the 2 C boundary, but that the Saudis insisted this be changed to "(by 14% to 96%)". The scientific consensus limited the damage, because in a meeting of nations credibility is important and the full Inhofe only produces sniggering.
Global surface temperature change for the end of the 21st century is likely to exceed 1.5°C relative to 1850 to 1900 for all RCP scenarios except RCP2.6. It is likely to exceed 2°C for RCP6.0 and RCP8.5, and more likely than not to exceed 2°C for RCP4.5. Warming will continue beyond 2100 under all RCP scenarios except RCP2.6. Warming will continue to exhibit interannual-to-decadal variability and will not be regionally uniform
and pay careful attention to that word exceed:
Increase of global mean surface temperatures for 2081–2100 relative to 1986–2005 is projected to likely be in the ranges derived from the concentration driven CMIP5 model simulations, that is, 0.3°C to 1.7°C (RCP2.6), 1.1°C to 2.6°C (RCP4.5), 1.4°C to 3.1°C (RCP6.0), 2.6°C to 4.8°C (RCP8.5). The Arctic region will warm more rapidly than the global mean, and mean warming over land will be larger than over the ocean (very high confidence).
Somebunnies will attempt to use the wide range of projections, 0.3°C to 4.8°C, to cast doubt. Point out that the range for no action, 2.6°C to 4.8°C, is much more precise, very scary, and in no way a walk in the park for their kids' kids.
Summaries at Real Climate.
<urn:uuid:e155e8f9-93c7-4f68-a344-6772003a28d7>
CC-MAIN-2016-26
http://rabett.blogspot.com/2013/09/looking-to-future.html?showComment=1380494555562
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00199-ip-10-164-35-72.ec2.internal.warc.gz
en
0.918207
774
3.25
3
I-80 150 Years Old
It may sound ludicrous to say that Interstate 80 is 150 years old in 1996, but its antecedent highway, the Mormon Trail, was established in 1846. It essentially followed the same route as modern I-80, along the great Platte River. The Interstate's more immediate predecessor, the automobile highway, had rather inauspicious beginnings. A 1926 article by a writer for the Western Newspaper Union provides an interesting status report of conditions a mere seventy years ago. At that time there were only 6000 miles in the state highway system, and 1600 miles were still unpaved. Maps in 1926 indicate that between Omaha, Lincoln, and Grand Island most roads were surfaced, while elsewhere only short segments met the "all-weather" rating. Officials noted that traffic through Nebraska from out-of-state cars was increasing rapidly, and was perhaps "50% greater than in the average state." In 1916 100,000 automobiles were estimated to have traveled across Nebraska, while after eleven months of 1926 over 337,000 had been counted. Nebraska already had the reputation of being the "Gateway to the West." Boosters in Missouri and Kansas tried to divert traffic from the northern route to their southern one by advertising. However, Nebraska retained most of the traffic. In 1925, when federal and state officials met to number and mark the main routes across America, seven of these ran through Nebraska, counting both east-west and north-south roads. Neat metal markers, together with warning signs, soon replaced the old painted telephone pole method of marking routes. The covered wagon or prairie schooner was chosen to grace these markers in Nebraska. The Western Newspaper Union journalist expressed confidence that Nebraska would retain its title of "Gateway to the West." Today's Interstate 80 suggests that this is still the case.
<urn:uuid:3df0e5f0-04d6-4be2-91b8-51f1d9e5b855>
CC-MAIN-2016-26
http://www.nebraskahistory.org/publish/publicat/timeline/I-80_150_years_old.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00128-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973535
378
3.359375
3
Details about Philosophic Values and World Citizenship:
In Philosophic Values and World Citizenship: Locke to Obama and Beyond, Alain Locke, the central promoter of the Harlem Renaissance, America's most famous African American pragmatist, and the cultural referent for Renaissance movements in the Caribbean and Africa, is placed in conversation with leading philosophers and cultural figures in the modern world. The contributors to this collection compare and contrast Locke's views on values, tolerance, cosmopolitanism, and American and world citizenship with those of philosophers and leading cultural figures ranging from Aristotle, Immanuel Kant, James Farmer, William James, John Dewey, José Vasconcelos, Hans-Georg Gadamer, Friedrich Nietzsche, Horace Kallen, and LeRoi Jones (Amiri Baraka) to the cultural and political figure of Barack Obama. This important collection of essays eruditely presents Locke's views on moral, emotional, and aesthetic values; the principle of tolerance in managing value conflict; and his rhetorical style, which conveyed his views of cultural reciprocity and tolerance in the service of the values of citizenship and cosmopolitanism. For teachers and students of contemporary debates in pragmatism, diversity, and value theory, these conversations define new and controversial terrain.
Rent Philosophic Values and World Citizenship 1st edition today, or search our site for other textbooks by Leonard Harris. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Lexington Books.
<urn:uuid:c8cf3a69-fdd1-4f9f-a975-529b5ada95ed>
CC-MAIN-2016-26
http://www.chegg.com/textbooks/philosophic-values-and-world-citizenship-1st-edition-9780739148037-0739148036?om_ss=1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00082-ip-10-164-35-72.ec2.internal.warc.gz
en
0.869721
325
2.65625
3
Cheap sensors that help cars avoid collisions could emerge from research into a lens-less imaging system. US scientists have used metamaterials to build the imaging system, which samples infra-red and microwave light. Metamaterials are materials that have properties purposefully designed rather than determined by their chemistry. The sensor also compresses the images it captures, in contrast to current compression systems, which only squash images after they are taken. Most imaging systems, such as those found in digital cameras, use a lens to focus a scene on a chip studded with millions of tiny sensors. More sensors means more detail is captured and, generally, produces a higher resolution image. The imaging system developed by graduate student John Hunt and colleagues at Duke University in North Carolina has no lens and instead combines a metamaterial mask or aperture and complicated mathematics to generate an image of a scene. The aperture is used to focus different wavelengths of light in different parts of a scene onto a detector. The different frequencies in the scene are sampled sequentially. This sampling helped to work out the distribution and mix of light wavelengths found in a scene and their relative intensities, said Mr Hunt. "Then we use some very elegant maths which was developed in computational imaging to turn that data into a 2D picture," he told the Science podcast. The wavelength sampling is done electronically, so it happens very fast, he added. Currently the imaging system can capture about 10 images per second, he said. In addition, the imaging system compresses the information as it is gathered. Most other image compression systems, such as the widely used Jpeg format, are applied after an image has been snapped. While imaging systems that capture infra-red and microwave wavelengths already existed, said Mr Hunt, they were typically expensive, bulky or complicated to build. By contrast, the Duke imaging system uses a thin strip of metamaterial mated with some electronics and processing software. Although it does not yet work with visible wavelengths of light, Mr Hunt said it could lead to a range of cheap, small, portable sensors that could find a role in many different fields. "You could build an imager into the body of a car to do collision-avoidance imaging," he said, "or you could have a cheap handheld device to look through walls for wires and pipes." A research paper detailing the work has appeared in the journal Science. – BBC
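The "elegant maths" Mr Hunt mentions is a compressive-sensing style reconstruction, and the idea can be sketched generically. The toy code below illustrates that principle only; it is not the Duke team's actual algorithm. A random matrix stands in for the way the metamaterial aperture weights the scene differently at each sampled frequency, and the scene is then recovered from fewer measurements than pixels.

import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64          # unknown scene, flattened to a vector
n_measurements = 48    # one reading per sampled frequency

# A sparse toy scene: mostly dark with a few bright points.
scene = np.zeros(n_pixels)
scene[[5, 20, 41]] = [1.0, 0.5, 0.8]

# Each row models how one frequency's aperture pattern weights the scene.
A = rng.normal(size=(n_measurements, n_pixels))
measurements = A @ scene   # what the single detector records, frequency by frequency

# Minimum-norm least-squares recovery; real compressive imagers use
# sparsity-aware solvers (e.g. l1 minimisation) to do better than this.
recovered, *_ = np.linalg.lstsq(A, measurements, rcond=None)
print(np.argsort(np.abs(recovered))[-3:])   # indices of the brightest recovered pixels

The point of the sketch is the shape of the problem: the measurements are already a compressed encoding of the scene, so the compression happens during capture rather than after it, which is exactly the contrast the article draws with Jpeg-style compression.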
<urn:uuid:116e3934-acd6-48fc-8747-291b202ac26c>
CC-MAIN-2016-26
http://timesofpakistan.pk/technology/2013-01-18/lens-less-camera-emerges-from-metamaterials-work/70631/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00180-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962986
492
3.71875
4
BEIJING— The Chinese city of Shanghai will reduce the energy intensity of its economy by 3 percent this year by shifting from coal to natural gas and will limit the growth of carbon dioxide emissions to 8.5 million tons, the city government said. The city said in an energy-saving and climate-change plan for 2014 that it would curb growth in year-on-year energy consumption to 4 million tons of standard coal equivalent, keeping it on track to meet a total consumption cap in 2015 of 34.64 million tons. However, the energy generated would be cleaner as dirty coal would be replaced by alternative sources, according to the plan, posted on a municipal government website. Shanghai will “increase electricity imports, increase the use of natural gas, encourage distributed gas and renewable energy like wind, solar and biomass,” the city government said. New manufacturing facilities for iron and steel, building materials and non-ferrous metals would not be allowed in 2014, it said. While pollution in Shanghai is generally not as severe as in the capital, Beijing, the city is seeing more days when thick smog settles over the land. Authorities in China regularly publish policies and plans aimed at addressing increasingly severe environmental problems but they have long struggled to bring big polluting industries and growth-obsessed administrations to heel. Shanghai has an overall target of cutting energy intensity to 18 percent below 2010 levels by 2015. Energy intensity refers to the energy use per unit of gross domestic product. Under Shanghai's plan, carbon dioxide emissions, which contribute to climate change, from new energy sources would rise by 8.5 million tons this year, but it did not give estimates for changes in emissions from existing sources. As one of seven regions picked by the central government to pilot carbon trading, Shanghai last November launched an emissions trading scheme capping CO2 emissions from nearly 200 facilities in power generation, manufacturing, petrochemicals, aviation and ports. Under the plan, the city government said Shanghai would be seeking to expand its market by opening up for trading with other regions, but it provided no details. Permits in the Shanghai market traded Tuesday at 39 yuan ($6.29).
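The intensity target at the top of the piece is simple arithmetic, since energy intensity is just energy used divided by GDP. The sketch below uses invented figures purely for illustration; only the definition comes from the article.

# Hypothetical numbers to illustrate the energy-intensity calculation.
energy_2013 = 110.0    # million tonnes of standard coal equivalent (made up)
gdp_2013 = 2.20        # trillion yuan (made up)
energy_2014 = 114.0    # total consumption can still grow under the cap
gdp_2014 = 2.40

intensity_2013 = energy_2013 / gdp_2013
intensity_2014 = energy_2014 / gdp_2014
change = (intensity_2014 - intensity_2013) / intensity_2013
print(f"intensity change: {change:.1%}")

With these numbers intensity falls by about 5 percent even though consumption rises, which is how a city can cap energy growth and cut intensity at the same time: GDP simply has to grow faster than energy use.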
<urn:uuid:aac0d5ca-26ab-4a1b-bae2-b049d8e677f0>
CC-MAIN-2016-26
http://www.voanews.com/content/reu-shanghai-aims-for-cleaner-energy-lower-co2-growth/1874295.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00169-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940365
448
2.875
3
Thromboangiitis Obliterans
Also known as Buerger's Disease. Arteries, most often in the legs and feet, become inflamed and occluded, causing burning, numbness, and tingling; if inadequate blood supply continues, phlebitis, tissue damage, and possible gangrene may follow.
The alternative names, causes, incidence and risk factors of the condition.
Vascular Associates of Bangalore Newsletter
The etiology and pathophysiology, clinical presentation, diagnostic studies and duplex scanning of Buerger's disease.
An e-mail list for those who have Buerger's disease.
<urn:uuid:3883327a-deb9-4924-b9a8-5ba2750d09af>
CC-MAIN-2016-26
http://www.dmoz.org/Health/Conditions_and_Diseases/Cardiovascular_Disorders/Vascular_Disorders/Thromboangiitis_Obliterans/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00179-ip-10-164-35-72.ec2.internal.warc.gz
en
0.811768
163
2.515625
3
James Ford Rhodes (1848–1927). History of the Civil War, 1861–1865. 1917.
for those who could not in the winter of 1862–63 see with the eyes of to-day. Had his other qualities been enhanced by Washington's dignity of manner, not so many had been deceived; but as it was we cannot wonder that his contemporaries failed to appreciate his greatness. Since his early environment in fostering his essential capabilities had not bestowed on him the external characteristics usually attributed to transcendent leaders of men, it was not suspected that, despite his lowly beginning, he had developed into a man of extraordinary mental power. Seward, with his amiable and genial manners, was an agreeable man in council. Fertile in suggestion, he must, in spite of his personal failings, have been exceedingly helpful to Lincoln, whose slow-working mind was undoubtedly often assisted to a decision by the various expedients which his Secretary of State put before him; for it is frequently easier for an executive to choose one out of several courses than to invent a policy. The members of the Cabinet who filled the public eye were Seward, Chase and Stanton and they demand a proportionate attention from the historian. It was either on Seward or Stanton that the President leaned the most; and the weight of evidence, confirmed by the fact of his urbanity, points to the Secretary of State as his favorite counsellor. Though Lincoln made up his mind slowly, once he had come to a decision, he was thenceforth inflexible. By gradual steps he had evolved the policy of emancipation and he was determined to stick to it in spite of the defeat of his party at the ballot-box and of his principal army in the field during the hundred days that intervened between the preliminary proclamation of September 22 and the necessary complement of January 1, 1863. Although the form of the preliminary proclamation implied that some of
<urn:uuid:f683d9dc-0335-4692-94f2-c6713c156f25>
CC-MAIN-2016-26
http://www.bartleby.com/252/pages/page196.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00092-ip-10-164-35-72.ec2.internal.warc.gz
en
0.99335
386
2.75
3
HERE LIES THE BLITHE SPRING
by: Thomas Dekker
HERE lies the blithe Spring,
Who first taught birds to sing,
Yet in April herself fell a-crying:
Then May growing hot,
A sweating sickness she got,
And the first of June lay a-dying.
Yet no month can say,
But her merry daughter May
Stuck her coffins with flowers great plenty:
The cuckoo sung in verse
An epitaph o'er her hearse,
But assure you the lines were not dainty.
'Here Lies the Blithe Spring' was originally published in The Sun's Darling (1656).
<urn:uuid:165b47bb-b2e3-4f43-ad3b-2f4751b3e44e>
CC-MAIN-2016-26
http://www.poetry-archive.com/d/here_lies_the_blithe_spring.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.903459
169
2.875
3
"If kids can't be healthy, they can't be successful."
Los Angeles, California (PRWEB) March 15, 2013
"Now that a greater proportion of kids are suffering from obesity than ever before, GERD has surprisingly become common," says Dr. Michael Omidi, co-founder of the Children's Obesity Fund. "Unfortunately, GERD is something that can have a corrosive impact on an adult's health if not addressed by either healthy lifestyle choices or medical intervention."
Obese children are at increased risk for adult health problems including diabetes and high blood pressure. We tend to think of GERD as another condition primarily suffered by adults. However, research from a study at the Baylor College of Medicine now links childhood obesity to gastroesophageal reflux disease, also known as GERD. A recent report published in the Journal of Clinical Gastroenterology confirms that acid reflux can occur in children suffering from obesity.
Babies spit up; that's just what they do. Every new parent keeps a towel draped over their shoulders to catch the inevitable spit up when picking up a new baby that has just fed. Because the human gastrointestinal tract is not fully formed until nearly a year after birth, babies commonly spit up directly after feeding, or whenever the infant coughs or has been jostled excessively. Spitting up can be reduced if parents adjust feeding times and amounts. However, if there are additional symptoms such as reluctance to feed, trouble breathing, traces of blood in the spit up, or failure to "grow out of it," there could be cause for concern about other digestive issues.
Persistent acid reflux can eventually lead to esophageal scarring later in life. Symptoms include heartburn, chest pain, bad breath and hoarseness. If dietary modifications are not successful, then prescription or over-the-counter antacids may be employed under a doctor's supervision. In less severe cases, the avoidance of acidic foods such as citrus, tomatoes, peppermint and caffeinated beverages can help. Regular exercise, proper hydration and a healthy diet can help adults and children either avoid or eliminate GERD.
Co-founded by Julian Omidi and Michael Omidi, M.D., the Children's Obesity Fund (http://www.childrensobesityfund.org) hopes to help reverse the trend of rising obesity rates in America. The goal of the non-profit charity is to help people fully understand the obesity issue and its dire impacts on individuals and society as a whole -- and to use that knowledge to encourage children to grow up strong and healthy. Children's Obesity Fund partners with other organizations to educate and support parents, educators and others so that we can all work together to raise healthy, active, social, and happy children. While the organization does not accept donations, it does encourage direct contributions of money and talents to the associations featured on our website. Children's Obesity Fund is on Facebook at: http://www.facebook.com/pages/Childrens-Obesity-Fund/264244577009536?fref=ts and can also be found on Google+, Twitter and Pinterest.
<urn:uuid:afbd2f05-f3de-4d48-adc4-8fee10949534>
CC-MAIN-2016-26
http://www.prweb.com/releases/prweb2013/3/prweb10530712.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00128-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946311
666
3.1875
3
Functions of Teeth – Humans use teeth to tear, grind, and chew food in the first step of digestion. Teeth also play a role in human speech. Additionally, teeth provide structural support to muscles in the face and form the human smile and other facial expressions. So, broadly, the main functions of the teeth can be summarized as follows:
1. Helps in mastication.
2. Aids in articulation and speech.
3. Gives shape and beauty to the face.
4. Helps in giving facial expressions.
5. As in animals, may be used for self-protection and attack.
MASTICATORY Functions of Teeth
One of the main functions of teeth is the mastication of food. For the proper and faster digestion of food, the act of swallowing is preceded by cutting, chopping and grinding by the teeth. So, the first step of digestion involves the mouth and teeth. Food enters the mouth and is immediately broken down into smaller pieces by our teeth. Each type of tooth serves a different function in the chewing process. Incisors cut foods when you bite into them. The sharper and longer canines tear food. The premolars, which are flatter than the canines, grind and mash food. Molars, with their points and grooves, are responsible for the most vigorous chewing. All the while, the tongue helps to push the food up against our teeth. As we chew, salivary glands in the walls and floor of the mouth secrete saliva, which moistens the food and helps break it down even more. Saliva makes it easier to chew and swallow foods (especially dry foods), and it contains enzymes that aid in the digestion of carbohydrates. Once food has been converted into a soft, moist mass, it is pushed into the throat (or pharynx) at the back of the mouth and is swallowed. When we swallow, the soft palate closes off the nasal passages from the throat to prevent food from entering the nose. So, the process of chewing in the oral cavity not only helps in tearing the food into swallowable pieces, but also allows enzymes and lubricants to be released in the mouth to further digest, or break down, food. Without our teeth—which are structurally so strong that they are found in great condition in fossils when the body's skin and bones have disappeared—we'd have to eat nothing but soft, mashed food.
ARTICULATION AND SPEECH Functions of Teeth
The mouth—especially the teeth, lips, and tongue—is essential for speech, one of the very important functions of teeth. The teeth, lips, and tongue are used to form words by controlling airflow through the mouth. The tongue, which allows us to taste, also enables us to form words when we speak. The lips that line the outside of the mouth both help hold food in while we chew and help pronounce words when we talk. With the lips and tongue, teeth help form words by controlling air flow out of the mouth. The tongue strikes the teeth as certain sounds are made. The th sound, for example, is produced by the tongue being placed against the upper row of teeth. If your tongue touches your teeth when you say words with the s sound, you may have a lisp. Speech has, during the last 500,000 years, superseded chewing as the main function of the mouth. Simpson (1968) states that "Language has become far more than a means of communication in man.
It is also one of the principal means of thought, memory, introspection, problem solving and other mental activities." Recently, a very experienced dentist who was watching small children shift the tongue to its natural nose-breathing position by singing said, "We have to come to accept that the mandible is undergoing a change in function. It is no longer designed for chewing, but for speech." Human tongues, along with their associated nerves, the respiratory system, and the teeth and lips, are much more versatile than those of other animals, giving humans the ability to speak unlike any other species on Earth.
Functions of Teeth in FACIAL SHAPE AND BEAUTY
The importance of the face in social interaction is widely recognized. The teeth play an important role in giving facial fullness and aesthetically pleasing facial shapes. Absence of teeth, due to any reason, not only hampers the masticatory activity of the individual, but also affects the facial features to a great extent, affecting the person psychologically, emotionally and socially.
Functions of Teeth in FACIAL EXPRESSIONS
Your smile, formed by your mouth at your brain's command, is often the first thing people notice when they look at you. It is the facial expression that most engages others. With the help of the teeth—which provide structural support for the face muscles—your mouth also forms your frown and lots of other expressions that show on your face. Facial expressions can set the mood in many situations and usually tell us what people are thinking or feeling. For example, if we walk toward someone with a smile on our face, we are much more inviting than if we wear an expression of a scowl and pursed lips. Without a mouth and its structures, we would not be able to display our emotions through our expressions. Our lips, teeth, jaws, cheeks, and facial muscles all play an important role in creating facial expressions. We are able to make such varied facial expressions because of the complex muscular structure of the face. We have 22 muscles on either side of the face; humans have more facial muscles than any other animal.
SELF PROTECTION AND ATTACK
This function of teeth is not of much importance in the modern era; however, it played a significant role in the survival of early man, and it remains important for animals. Many carnivorous (meat-eating) animals, such as tigers, have developed long, sharp teeth for clamping down on and killing prey. Beavers have chisel-like front teeth that they use to cut down large trees for building dams.
<urn:uuid:7a1e642d-b26d-472f-b313-cbb2f9797db6>
CC-MAIN-2016-26
http://drmuna.com/functions-of-teeth/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00091-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950844
1,294
3.703125
4
That last chapter was a monster, so let's tackle something easier. We've seen how to define buffer-local mappings and options, so let's apply the same idea to abbreviations.
Open your foo and bar files again, switch to foo, and run the following command:
:iabbrev <buffer> --- —
While still in foo, enter insert mode and type the following text:
Hello --- world.
Vim will replace the --- for you. Now switch to bar and try it. It should be no surprise that it's not replaced, because we defined the abbreviation to be local to the foo buffer.
Let's pair up these buffer-local abbreviations with autocommands to set them, and make ourselves a little "snippet" system. Run the following commands:
:autocmd FileType javascript :iabbrev <buffer> iff if ()<left>
:autocmd FileType python     :iabbrev <buffer> iff if:<left>
Open a JavaScript file, enter insert mode, and type iff. Then open a Python file and try it there too. Vim will perform the appropriate abbreviation depending on the type of the current file.
Create a few more "snippet" abbreviations for some of the things you type often in specific kinds of files. Some good candidates are return for most languages, and similar shortcuts for HTML files. Add these snippets to your ~/.vimrc.
Remember: the best way to learn to use these new snippets is to disable the old way of doing things. Running :iabbrev <buffer> return NOPENOPENOPE will force you to use your abbreviation instead. Add these "training" snippets to match all the ones you created to save time.
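As a concrete version of that last piece of advice, here is how a snippet and its matching "training" abbreviation might sit together in a vimrc. The rtn shortcut and the choice of Python are illustrative assumptions, not something the chapter prescribes:
" A hypothetical snippet plus its training partner (illustrative only)
:autocmd FileType python :iabbrev <buffer> rtn return
:autocmd FileType python :iabbrev <buffer> return NOPENOPENOPE
Typing rtn in a Python buffer now expands to return, while typing return the old way produces NOPENOPENOPE, which annoys you into using the new snippet until the habit sticks.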
<urn:uuid:4129d92f-4a4c-4823-9e87-5ff70b256121>
CC-MAIN-2016-26
http://learnvimscriptthehardway.stevelosh.com/chapters/13.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397636.15/warc/CC-MAIN-20160624154957-00012-ip-10-164-35-72.ec2.internal.warc.gz
en
0.840644
315
2.65625
3
Wednesday, 22 August 2012
Questions Within The Lesson
Q.1. What is the difference between multiple cropping and modern farming methods?
Ans. Multiple cropping and modern farming are two ways of increasing production from the same piece of land. Under multiple cropping, production is increased by growing more than one crop on a piece of land during the year. It is the most common way of raising agricultural production. Under the modern farming method, production is increased by using modern technology in place of traditional agricultural practices. Under this method, high yielding varieties (HYVs) of seeds are used in place of simple seeds. HYV seeds promise to produce much greater amounts of grain on a single plant. Again, chemical fertilisers are used in place of cow dung and other natural manures.
Q.2. The following table shows the production of wheat and pulses in India after the Green Revolution in units of million tonnes. Plot this on a graph. Was the Green Revolution equally successful for both the crops? Discuss.
Table 1.2 : Production of pulses and wheat (million tonnes)
Year      Pulses  Wheat
1965-66   10      10
1970-71   12      24
1980-81   11      36
1990-91   14      55
2000-01   11      70
Ans. Graph showing production of pulses and wheat (a plotting sketch is given at the end of this question set). The graph clearly shows that the Green Revolution was more successful for the wheat crop. In fact, there was nothing like a Green Revolution in the case of pulses.
Q.3. What is the working capital required by the farmer using modern farming methods?
Ans. Working capital required by the farmer using modern farming includes the following: (i) HYV seeds (ii) chemical fertilisers (iii) pesticides (iv) water (v) diesel (vi) cash or money in hand.
Q.4. What kind of farming methods — modern, traditional or mixed — do the farmers use? Write a note.
Ans. In India, some farmers (mainly large farmers) use modern methods of farming. Farmers of Punjab, Haryana and western U.P. use these methods. However, small and marginal farmers all over the country still use traditional methods of cultivation, though some of them have begun to use better seeds, chemical fertilisers, etc. In fact, we find farmers using modern methods alongside farmers who still use traditional methods.
Q.5. What are the sources of irrigation?
Ans. (i) Canals (ii) tubewells (iii) tanks. A majority of the farmers in India continue to depend on rains as the source of irrigation.
Q.6. How much of the cultivated land is irrigated? (very little/nearly half/majority/all)
Ans. Nearly half.
Q.7. From where do farmers obtain the inputs that they require?
Ans. Farmers obtain the required inputs from the traders.
Q.8. Why are farm labourers like Dala and Ramkali poor?
Ans. Both Dala and Ramkali are among the poorest people in village Palampur. Dala is a landless farm labourer who works on daily wages.
He fails to get regular work in the fields because of the mechanisation of agriculture. Similarly, Ramkali expects to get less work even during the harvesting season this year; last year she worked for less than five months in the entire year. Due to past debt, the village moneylender has refused to give her any more loans. So Dala and Ramkali are poor.
Q.9. Gosaipur and Majauli are two villages in north Bihar. Out of a total of 850 households in the two villages, there are more than 250 men who are employed in rural Punjab and Haryana or in Delhi, Mumbai, Surat, Hyderabad or Nagpur. Such migration is common in most villages across India. Why do people migrate? Can you describe (based on your imagination) the work that the migrants of Gosaipur and Majauli might do at the places they migrate to?
Ans. Some people (250 in number) of Gosaipur and Majauli have migrated to the rural areas of Punjab and Haryana and to cities such as Mumbai and Nagpur. The migrants are employed by the large farmers of these regions either as regular workers or as daily wage workers.
Q.10. What does Tejpal Singh do with his earnings?
Ans. Tejpal Singh — a large farmer of the village — deposits most of his earnings in the bank. Then he uses this accumulated money for lending to poor farmers like Savita. He also uses this money to arrange for the working and fixed capital for cultivation.
Q.11. (a) What capital did Mishrilal need to set up his jaggery manufacturing unit? Who provides the labour in this case? (b) Can you guess why Mishrilal is unable to increase his profit? (c) Can you think of any reasons why he might face a loss? (d) Why does Mishrilal sell his jaggery to traders in Shahpur and not in his village?
Ans. (a) A sugarcane crushing machine and sugarcane. (b) Mishrilal is unable to increase his profit because of the high price of sugarcane. (c) He might face a loss when (i) the sugarcane price rises further or (ii) the demand for jaggery declines. (d) Mishrilal sells his jaggery to traders in Shahpur because he gets a better price there.
Q.12. (a) In what ways are Kareem's capital and labour different from Mishrilal's? (b) Why didn't someone start a computer centre earlier? Discuss the possible reasons.
Ans. (a) Mishrilal's capital is used to produce jaggery (gur), while Kareem's capital is used in the production of a service. Similarly, Mishrilal employs unskilled labour, whereas Kareem has employed technically trained workers. (b) There was no computer centre in the village before Kareem's. Also, there were no degree-holders in computer applications in the village before, and computers have become a popular subject only in recent years.
- ASSET (2) - Class IX (11) - Class IX - Multiple Choice Questions (10) - Class IX - NCERT EXERCISE (21) - Class IX - Power Point Presentation (17) - Class IX - Questions Within The Lesson (7) - Class IX - Short Notes (18) - Class VIII (26) - Class X (8) - Class X - NCERT EXERCISE (22) - Class X - Power Point Presentation (15) - Class X - Short Notes (29) - General Knowledge (64) - IESO (29) - MCQ - Class IX (24) - MCQ - Class X (30) - NTSE (132) - NTSE (CLASS VIII) (3) - NTSE Old Papers (6) - OTBA (Open Text Based Assessment) (1) - Power Sharing (1) - PSA Class IX (1) - Short Notes (1) - STSE (10) - Summative Assessment (2) - Summative Assessment - Class IX (19) - Summative Assessment - Class X (23)
<urn:uuid:dd54d873-0cae-4945-814d-75071fb46278>
CC-MAIN-2016-26
http://socialscience4u.blogspot.com/2012/08/ix-story-of-village-palampur-questions.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00064-ip-10-164-35-72.ec2.internal.warc.gz
en
0.923924
2,015
3.8125
4
The Paleogene record of Himalayan erosion: Bengal Basin, Bangladesh. EARTH PLANET SC LETT, 1-14.
A knowledge of Himalayan erosion history is critical to understanding crustal deformation processes, and the proposed link between the orogen's erosion and changes in both global climate and ocean geochemistry. The most commonly quoted age of India-Asia collision is ~50 Ma, yet the record of Paleogene Himalayan erosion is scant - either absent or of low age resolution. We apply biostratigraphic, petrographic, geochemical, isotopic and seismic techniques to Paleogene rocks of the Bengal Basin, Bangladesh, of previously disputed age and provenance. Our data show that the first major input of sands into the basin, in the > 1 km thick deltaic Barail Formation, occurred at 38 Ma. Our biostratigraphic and isotopic mineral ages date the Barail Formation as spanning late Eocene to early Miocene, and the provenance data are consistent with its derivation from the Himalaya but inconsistent with Indian cratonic or Burman margin sources. Detrital mineral lag times show that exhumation of the orogen was rapid by 38 Ma. The identification of sediments shed from the rapidly exhuming southern flanks of the eastern-central Himalaya at 38 Ma provides a well dated, accessible sediment record 17 Myr older than the previously described 21 Ma sediments in the foreland basin in Nepal. Discovery of Himalayan detritus in the Bengal Basin from 38 Ma: 1) resolves the puzzling discrepancy between the lack of erosional evidence and the Paleogene crustal thickening that is recorded in the hinterland; 2) invalidates previously proposed evidence of diachronous collision that was based on the tenet that Himalayan-derived sediments were deposited earlier in the west than in the east; 3) enables models of Himalayan exhumation (e.g. by mid-crustal channel flow) to be revised to reflect vigorous erosion and rapid exhumation by 38 Ma; and 4) provides evidence that rapid erosion in the Himalaya was coincident with the marked rise in marine Sr-87/Sr-86 values since ~40 Ma. Whether 38 Ma represents the actual initial onset of vigorous erosion from the southern flanks of the east-central Himalaya, or whether older material was deposited elsewhere, remains an open question. (C) 2008 Elsevier B.V. All rights reserved.
Title: The Paleogene record of Himalayan erosion: Bengal Basin, Bangladesh
Keywords: Bengal Basin, Himalayan erosion, Barail Formation, Bangladesh, detrital thermochronology, Surma Basin, INDIA-ASIA COLLISION, FORELAND BASIN, TECTONIC EVOLUTION, SOUTHERN TIBET, MASS-SPECTROMETRY, DETRITAL MODES, NORTHERN INDIA, SYLHET TROUGH, RIVER SYSTEM, DECCAN TRAPS
UCL classification: UCL > School of BEAMS > Faculty of Maths and Physical Sciences
UCL > School of BEAMS > Faculty of Maths and Physical Sciences > Earth Sciences
UCL > VP Research
<urn:uuid:301789ee-4034-425a-be7a-ca40aeeff071>
CC-MAIN-2016-26
http://discovery.ucl.ac.uk/46471/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00097-ip-10-164-35-72.ec2.internal.warc.gz
en
0.867029
683
2.90625
3
Ross River virus disease risk increases in south-west The Department of Health is warning residents and visitors in the southwest of the State to avoid mosquito bites following the detection of Ross River virus (RRV) activity in mosquito populations for the first time this season. The Department of Health's Managing Scientist of Environmental Health Hazards, Dr Michael Lindsay said the Department's mosquito and virus surveillance program (undertaken by the University of Western Australia) has now detected RRV at coastal mosquito breeding sites in the south-west. Symptoms of RRV include painful or swollen joints, sore muscles, skin rashes, fever, fatigue and headaches. Symptoms can last for weeks or months and the only way to properly diagnose the viruses is by having a specific blood test. There is no cure for RRV so it is very important that people take care to prevent being bitten by mosquitoes. "Above average rainfall this spring has enabled breeding of mosquitoes in large numbers in many coastal and inland areas of the south-west and wheatbelt", Dr Lindsay said. The Department now has evidence that RRV is active in coastal mosquito populations. This activity may also spread to other regions where mosquito populations have already established as a result of the recent rains. Local Government mosquito management programs have been underway since August in some areas and will continue in regions with a recognised risk of RRV. "However, it is not realistic to rely on mosquito management programs to keep mosquitoes below nuisance levels, especially when unfavourable environmental conditions reduce the effectiveness of control methods. Therefore, people need to take their own precautions to avoid mosquito bites," Dr Lindsay said. People living in or travelling to mosquito-affected areas in the southwest of WA should take extra precautions, such as: - avoiding outdoor exposure particularly around dawn and dusk (and the first few hours after dark) - wearing protective (long, loose-fitting, light coloured) clothing when outdoors - applying a personal repellent containing 20% diethyl toluamide (DEET) or picaridin to exposed skin or clothing. The most effective and long-lasting formulations are lotions or gels. Natural or organic repellents may not be as effective as DEET or picaridin, or may need to be reapplied more frequently - ensuring insect screens are installed and in good condition. The use of bed nets will offer further protection - using mosquito nets or mosquito-proof tents when camping or sleeping outdoors - ensuring infants and children are adequately protected against mosquito bites, preferably with suitable clothing, bed nets or other forms of insect screening. With summer approaching, it is also a timely reminder for residents to minimise mosquito breeding around the home by taking some simple steps to remove or modify breeding sites. Residents should: - Dispose of all containers which hold water - Stock ornamental ponds with fish and keep vegetation away from the water's edge - Keep swimming pools well chlorinated, filtered and free of dead leaves - Fill or drain depressions in the ground that hold water - Fit mosquito proof covers to vent pipes on septic tank systems. 
Seal all gaps around the lid and ensure leach drains are completely covered - Screen rainwater tanks with insect proof mesh, including inlet, overflow and inspection ports - Ensure guttering does not hold water - Empty pot plant drip trays once a week or fill them with sand - Empty and clean animal and pet drinking water bowls once a week. Media contact: (08) 9222 4333
<urn:uuid:b12ddd21-a43e-434b-9c08-749a5b9f050a>
CC-MAIN-2016-26
http://www.health.wa.gov.au/press/view_press.cfm?id=1362
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00015-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939824
718
3.0625
3
In scientific terms an occult is a very useful astronomical event. The word has obvious connotations, but it actually means a concealment (look it up, I kid you not). It happens when something passes in front of something else. Usually it's important and we'd call it an eclipse, but on other occasions an occultation is little more than a curiosity. When the moon of Pluto, Charon, passed in front of a star, scientists could look at the changes in light coming from the star to determine first of all what sort of atmosphere it has and secondly what it's made of. The theory is that the light passing through a gas gets split a little bit, and you can measure what light frequencies come out the other side. Something went a bit odd. When the occultation happened, the light didn't disperse but just switched on and off. This can only mean one thing... there is no atmosphere on Charon. What a pity. A lifeless place without an atmosphere, it's a bit like our local council meetings.
Pluto and Charon are also the target of a NASA mission. The New Horizons spacecraft is expected to be launched in their direction on 17 January, and it is expected to arrive in 2015; that's a 9 year trip. At the end the spacecraft will probably be doing about 90,000mph.
When I look up at the night sky I can't help but be amazed at the sheer immensity of it all. Something travelling at 90,000mph takes 9 years to get where it's going. Aeroplanes take 24hrs at about 450mph to go half way around the world. To get to Pluto on a 747 would take roughly 900 years. It makes you feel really small doesn't it?
One of the interesting results of the findings was how big Charon is. It is thought to be about 700 miles in diameter. Something 700 miles big, and they are still able to observe an occultation from 3660 million miles away.
Space is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the drug store, but that's just peanuts to space.
All of this was brought into being with just the command of a word? Whoa!
For since the creation of the world God's invisible qualities, his eternal power and divine nature have been clearly seen, being understood from what has been made, so that men are without excuse.
Is there more of a blatantly obvious statement of how big God is than the entire of creation? Concealment? Not likely.
<urn:uuid:2f0de35b-f25c-430d-82c4-67a7fba9f729>
CC-MAIN-2016-26
http://rollo75.blogspot.com/2006/01/horse-475-space-is-big.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00198-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966424
545
2.609375
3
22M:001 Basic Algebra I (3 s.h.)
Students who wish to enroll: Link to enrollment information
Contact us for more information about Guided Independent Study and other programs available through the Center for Credit Programs, or if you are having technical problems with our Web site.
About the coursewriter and instructor
Foster Baker received his B.S. in Education from Northwest Missouri State University and his M.S. from the University of Kansas, with additional work at the University of Kansas City and The University of Iowa. Mr. Baker taught for over thirty years in the secondary schools of Missouri, Kansas, and Iowa. He has also taught at Kirkwood Community College in Cedar Rapids. Since 1965, Mr. Baker has instructed and written study guides for several Guided Independent Study courses at The University of Iowa.
This course provides an introduction to elementary college algebra. Among the topics covered are: operations with real numbers, linear equations and inequalities, polynomials, quadratic equations, and rational expressions. This course does not count toward the total credit required for graduation at The University of Iowa.
Required textbook and materials
The required textbook may be ordered from the bookseller of your choice or from Iowa Book, L.L.C., Iowa City, IA:
Please do not order textbooks before you receive your study guide and the textbook order form provided, as texts and editions of texts may change.
- Johnson, L. Murphy, and Arnold R. Steffansen. Elementary Algebra, second edition. Glenview, IL: Scott, Foresman and Company, 1989.
Each section of the textbook includes a number of examples with solutions provided. These examples should be thoroughly studied. Note that answers to odd-numbered exercises appear at the back of the textbook.
There are 20 lessons in this study guide, each consisting of four parts: reading assignment, discussion, practice exercises, and written assignment. The required reading assignments are in the course textbook, although some optional material is included in Appendices A–E of this study guide. The coursewriter's discussion is designed to highlight the important aspects of the textbook and in some cases to supplement its discussion of certain topics. You should read the discussion section before turning to the reading assignment; you may wish to review the discussion during or after your reading. The discussion sometimes refers you to specific pages in one of the five appendixes (Fractions; Ratio and Proportion; Percentages; Perimeter, Circumference, Area; Positive and Negative Numbers). You may wish to work through each appendix in its entirety if you feel you need such a review. Completion of the exercises in the appendixes is recommended for review purposes, though this is not required.
The third section of each lesson assigns practice exercises from the textbook. The practice exercises are always odd-numbered problems, and, as noted, answers to these can be found at the back of the textbook (pp. 397–448). You should work through all of the practice exercises before submitting your written assignment. Practice exercises should not be submitted for grading. However, if you have a question about any of the practice exercises, copy the exercise (do not merely refer to its number) on a sheet of paper and submit it along with the written assignment for the lesson. Your instructor will then furnish you with a solution to the exercise.
The written assignment for each lesson consists of selected even-numbered problems from the textbook. In the preparation of a written assignment, use a good grade of 8½- by 11-inch white paper and a soft pencil or pen. Please use coordinate (graph) paper for graphing. Write only on one side of the paper and record the number of each problem and the page of the textbook on which it is to be found. Be sure to show your work so that the instructor can find any errors you may have made. The yellow Assignment Identification Sheet at the end of each lesson must be attached to your work before mailing. If you submit more than one lesson in a mailing, be certain to staple or clip each lesson separately.
This course is available on the Web; to access this course, go to GIS Online at . Although some material is available to the public, to access the lessons themselves, a username and password are required. These may be obtained by calling or e-mailing our office (800-272-6430 or ). Technical assistance, including FAQs, software demos and downloads, and contact information and e-forms, is provided on our SOS pages (Support for Online Students) at . To access the course Web pages, you will need Adobe Acrobat Reader; if you do not have Acrobat Reader installed on your system, it may be downloaded free from the Adobe Web site.
There are three two-hour supervised examinations. Examination 1 covers Lessons 1–6; Examination 2 covers Lessons 7–13; the final examination covers the entire course, with emphasis on the material in Lessons 14–18. The exams consist of problems to be solved. You may not schedule an examination until all the written assignments prior to that exam have been submitted and returned to you. Directions for arranging examinations follow Lessons 6, 13, and 20.
You will receive letter grades of A, B, C, D, or F in this course. Your final course grade will be determined primarily by the following rules:
1. Written assignments and examinations alike will be graded according to the following scale:
A = 90–100 percent correct
B = 80–89 percent correct
C = 65–79 percent correct
D = 55–64 percent correct
F = 0–54 percent correct
Partial credit will be given on test items, provided your work is shown.
2. Your course grade will be determined by a weighted combination of the written assignments (10 percent) and the three examinations, including Examination #3 (the final).
3. If two of the examinations are graded F, the course grade will be F, notwithstanding rule 2 above.
Your final grade in this course will be recorded on your official transcript.
Minimum completion time
There is a limit to the number of assignments that may be submitted at one time. Under current regulations, the minimum completion time for a Guided Independent Study course is two weeks per semester hour of credit, or six weeks for this three semester hour course. Since there are twenty lessons in this course, no more than three lessons (written assignments) will be graded in one week, although you should expect it may take up to two weeks to receive graded assignments back. (If you have not done so already, please read the notice on time limitations in the General Directions.) Plan your work so that lessons and examinations are completed at least two weeks before any deadline that you have to meet (such as arranging to have a transcript mailed for graduation, certification, or eligibility). This amount of time is needed for evaluation of your papers and for processing the final grade report.
Although every effort will be made to correct and return your papers in a timely manner, neither the instructor nor GIS can assume responsibility for meeting your deadlines.
List of lessons
Lesson 1 Fundamental Concepts
Lesson 2 Real Numbers
Lesson 3 The Distributive Law
Lesson 4 Linear Equations
Lesson 5 Applied Problems
Lesson 6 Inequalities, Graphing
Examination 1 (Two Hours)
Lesson 7 Equations of Lines
Lesson 8 Systems of Linear Equations
Lesson 9 Linear Inequalities - Polynomials
Lesson 10 Special Products
Lesson 11 Factoring Polynomials
Lesson 12 Solving Quadratic Equations
Lesson 13 Addition and Multiplication of Rational Expressions
Examination 2 (Two Hours)
Lesson 14 Fractional Equations
Lesson 15 Ratio, Complex Fractions
Lesson 16 Radicals
Lesson 17 Radical Equations
Lesson 18 Factoring
Lesson 19 Quadratic Equations
Lesson 20 Applications
Final Examination (Two Hours)
<urn:uuid:43429e48-9fcb-49e8-a165-64b00e203701>
CC-MAIN-2016-26
http://softmath.com/tutorials/special-products-and-factoring-solver-program-download.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00046-ip-10-164-35-72.ec2.internal.warc.gz
en
0.900336
1,738
2.765625
3
Elizabeth Plourde looks at the dangers of sunscreens for people and the environment – Book announcement
Elizabeth Plourde, CLS, PhD, is a California-based scientist who has spent her career researching various medical topics. While swimming in Hawaii and contemplating the supposed effect of global warming on the loss of coral habitat, she noticed that the ocean was actually a lot colder than she remembered. When she began investigating this, she came across data showing that the chemicals in sunscreens can kill coral in 96 hours. This inspired her to complete more research on the subject. She discovered that not only do sunscreens fail to protect us against cancer, they may actually increase the risk of it. Her book, Sunscreens – Biohazard: Treat as Hazardous Waste, provides extensive evidence of the dangers of sunscreens and their negative effect on the environment. At a recent presentation at the Cancer Control Society Conference in Hollywood, California, Dr. Plourde explained why this is so.
According to her research, many sunscreens only protect against UVB rays. These UVB rays are the ones that cause the burning sensation in the epidermis, the outer layer of the skin. Protecting against these rays does indeed stop the burning sensation and coloring effect, but it gives a false sense of safety, as it encourages people to stay in the sun longer. Many sunscreens do not stop the sun's UVA and infrared rays. These other rays penetrate into deeper layers of tissue and are more strongly linked with melanoma. While some newer sunscreens offer protection against UVA, none protect against infrared. Dr. Plourde presented research data showing that levels of malignant melanoma and all skin cancers increased significantly as the percentage of sunscreen users rose over time.
Many of the chemicals in sunscreens are known carcinogens and also endocrine-disrupting chemicals (EDCs). These EDCs have properties that disrupt both androgens and estrogens. In areas where there has been much exposure to EDCs, coral and other sea populations have died off and the prevalence of dual-sexed fish has risen. Dr. Plourde presented research on mice and sunscreen exposure that showed increases in both pup and maternal mortality, as well as reproductive issues in subsequent generations.
To make matters worse, most sunscreen manufacturers use nanoparticles of titanium and zinc oxide in their formulas. The FDA currently has no requirement for noting the presence of nanoparticles on cosmetic product labels. Nanoparticles are so small that they can penetrate cell walls and cross the blood-brain barrier. This leads to more cell oxidation and damage, further increasing the possibility of skin cancer and other potential long-term side effects.
Sadly, sunscreen residues have polluted many of our water sources, including not just oceans but inland lakes, rivers, and municipal drinking water. Testing has shown that 97% of Americans have sunscreen chemicals in their blood. These chemicals can pass through the placenta and are found in breast milk. EDCs can alter male/female sex differentiation, cause men's breasts to grow, impact brain development, disrupt thyroid function, and impair both male and female fertility. Sunscreen use also leads to vitamin D deficiency, which is increasingly linked to reduced immunity, cancer, auto-immune issues, and a number of other health problems.
What can you do?
- Stop using sunscreens and petition manufacturers to remove toxic chemicals from their products.
- If a tan is your objective, your body's melanin is your best protection. Start with 10-20 minutes per day, gradually increasing sun exposure over time.
- Use hats and clothing to cover up.
- Make your own sunscreen. There are several great recipes on the internet. Be careful about recipes that use zinc oxide – you may need to ask the product manufacturer whether it is really free of nanoparticles. There is also some evidence that zinc oxide is harmful to fish.
- Sunshine provides vitamin D, a nutrient critical to our health. The sun is not our enemy, and we need to use it wisely.
- Learn more and spread the word. Read Dr. Plourde's book, Sunscreens – Biohazard: Treat as Hazardous Waste, available at Amazon.
<urn:uuid:74df3b0e-3198-468e-8e8e-1e88eba7931e>
CC-MAIN-2016-26
http://www.faim.org/sunscreen-as-biohazard
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946305
857
3.25
3
The Mendicant by Robert Wechsler is a project that presents three different-sized cubes made by notching and joining pennies in perfect orientation to one another.
About the project: Joined at perpendicular angles, the coins create a lattice structure allowing tunnel-like passages of light from certain angles. As one moves around them, the cubes seem to fluctuate from material to ethereal. The number of pennies grows rapidly with the size of the cube. The Mendicant 26,982 includes pennies from all years featuring the Lincoln Memorial (1959-2008). The Mendicant 3,672 includes Lincoln Memorial pennies from before the shift from copper to zinc production (1959-1982). The Mendicant 540 was built from pennies retrieved from a wishing well. The colorful patina of these coins is the result of exposure to the water and chemicals over time.
With fifty billion currently in circulation, the penny is one of humanity's most numerous objects, but despite its commonality, it is an extraordinarily rich artifact. As a symbol of American culture, it is on par with the Statue of Liberty. It is a monument to a beloved president. It is a proclamation of a national faith and creed. It is a time-stamped record of our civilization. As much ornament as legal tender, the penny is equal parts form and function. It defines elegance, just as its ubiquity, low monetary value, and high symbolic value define humility.
Mendicant is a term for one who has no possessions, is supported by the goodwill of others, and relies exclusively on charity to survive. Typically a position assumed after living a productive life and attending to all worldly concerns, a mendicant is considered honorable. To be a mendicant is to make a conscious choice to sacrifice conventional concerns in favor of humility, modesty, and enlightenment.
All images © Robert Wechsler – Website
<urn:uuid:e6dac4f7-4f5b-4c44-b0c0-bb7053e7b6bb>
CC-MAIN-2016-26
http://www.dejoost.com/the-mendicant-by-robert-wechsler/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396959.83/warc/CC-MAIN-20160624154956-00149-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924445
389
3.1875
3
Sleeping Satyr, Munich Glyptothek, c. 220 BC
Satyrs (Satyri) in Greek mythology are half-man, half-beast nature spirits that haunt the woods and mountains, companions of Pan and Dionysus. Although they are not mentioned in Homer, in a fragment of Hesiod they are called brothers of the mountain nymphs and Kuretes, and an idle and worthless race. They are strongly connected with the cult of Dionysus. Satyrs are his male followers; the female followers of Dionysus are maenads. Satyrs bear on their foreheads small bony protuberances that in a goat would grow into horns. On Attic painted vases, satyrs are strongly built with flat noses, large pointed ears, long curled (archaic?) hair, full beards, and horses' tails or goats' tails. Sometimes they have teat-like protuberances (pherea) on the neck.
Maenad and Satyr dancing with the infant Dionysus, terracotta relief, British Museum
Wine Production by Satyrs
Wine Production by Silenoi and Maenads
They were a roguish but faint-hearted folk: subversive and dangerous, but shy and cowardly. As Dionysiac creatures they are lovers of wine and women, roaming to the music of pipes (auloi) and cymbals, castanets and bagpipes, dancing with the nymphs or pursuing them, and striking terror into men. They tended to engage in revelry with Dionysus and play only minor roles in myths and legends. They had a special form of dance called the sikinnis. They are instinctively ready for every physical pleasure. Wreaths of vine or ivy circle their heads. They are naked, with erect phalluses ('ithyphallic'), but drape themselves with spotted panther skins, goatskins, or fawn skins, like Dionysus. Unlike immortal creatures, the satyrs do grow old. On painted vases and in other Greek art, satyrs are represented in the three stages of a man's life. Mature satyrs are bearded, and they are shown as balding, a humiliating and unbecoming disfigurement in Greek culture. The older ones are commonly sileni, who may be distilled to a single personification of satyr-like dotage, drunken Silenus, the tutor of Dionysus. They are often represented with a winecup in hand, and satyrs appear often in the decoration of winecups. Satyrs often carry the thyrsus, the rod of Dionysus tipped with a fir cone. They are depicted in a number of ways, the most common being that of the upper half of a man and the lower half of a goat, sometimes possessing horns. They were less often depicted with the lower halves of horses. In either form, however, they possessed a long thick tail and a constantly erect penis. As time progressed they were depicted as 'more human' with fewer animalistic characteristics, until only the tail remained to show that they were satyrs.
A Maenad uses the Thyrsus against a Satyr, Euphronios Painter, Berlin Museum
In earlier Greek art they appear as old and ugly, but in later art, especially in works of the Attic school, this savage character is softened into a more youthful and graceful aspect. There is a famous statue, supposed to be a copy of a work of Praxiteles, representing a graceful satyr leaning against a tree with a flute in his hand. In Attica there was a species of drama known as the satyric; it parodied the legends of gods and heroes, and the chorus was composed of satyrs. Euripides's play the Cyclops is the only extant example of this kind of drama. The older satyrs were called Sileni, the younger Satyrisci. By the Roman poets they were often confounded with the Fauns. The symbol of the shy and timid satyr was the hare.
In some districts of modern Greece the spirits known as Calicantsars offer points of resemblance to the ancient satyrs; they have goats' ears and the feet of asses or goats, are covered with hair, and love women and the dance. The herdsmen of Parnassus believe in a demon of the mountain who is lord of hares and goats. In the Authorized Version of Isa. xiii. 21 and xxxiv. 14, the word "satyr" is used to render the Hebrew sh'irim, "hairy ones." A kind of demon or supernatural being known to Hebrew folk-lore as inhabiting waste places is meant; a practice of sacrificing to the sh'irim is alluded to in Lev. xvii. 7, where the E.V. has "devils." They correspond to the "shaggy demon of the mountain-pass" (azabb al-akaba) of old Arab legend.
In the Athenian satyr plays (q.v.) of the 5th century BC, a chorus of satyrs and sileni commented on the action. This 'satyric drama' burlesqued the serious events of the mythic past with lewd pantomime and subversive mockery. One complete satyr play from the 5th century BC survives: the Cyclops of Euripides. A papyrus bearing a long fragment of a satyr play by Sophocles, given the title 'Tracking Satyrs' (Ichneutae), was found at Oxyrhynchus in Egypt in 1907.
Roman satyrs were confounded in the popular and poetic imagination with Latin spirits of the woodland, the Fauns. Satyrs might also be associated with the attendants of the rustic spirit Pan, called the Panes. Roman satyrs were reimagined as goatlike from the haunches to the hooves. They were often pictured with larger horns, even ram's horns. Christian mythology demonized all pagan nature spirits such as satyrs by associating them with demons and devils, though in fairness they do resemble the Jewish goat-man demon Azazel, to whom the scapegoats were sent.
Roman satire (q.v.) is a literary form, a poetic essay that was a vehicle for biting, subversive social and personal criticism. Though Roman satire is sometimes thoughtlessly linked to the Greek satyr plays, satire's only connection to the satyric drama is through the subversive nature of the satyrs themselves, as forces in opposition to urbanity, decorum, and civilization itself.
Harry Thurston Peck, Harper's Dictionary of Classical Antiquities (1898): 'Faunus', 'Pan', 'Silenus'.
<urn:uuid:d5c27198-ec81-4349-8e17-fdb6adbfe3cd>
CC-MAIN-2016-26
http://www.mlahanas.de/Greeks/Mythology/Satyr.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.965682
1,405
3.03125
3
Tooth decay that results in rotten teeth may also contribute to oral cancer.
Tooth decay, being an infectious ailment, can cause various problems, not only for the person suffering from it but also for those close to them. The bacteria that cause it may be transmitted to other people through various activities, such as sharing food or utensils, and also by kissing. This can result in serious infections in various parts of the body, especially among people with weakened immune systems.
Tooth decay, as well as the related bone and gum diseases of the mouth, is also associated with the development of oral cancer. When decay spreads to other teeth, it needs to be treated by a dentist. But if it is not diagnosed early and people keep ignoring it, it can create a serious problem, possibly leading to oral cancer in the future. In such cases, the decay spreads to the root canal, and consequently the blood vessels and the nerves become infected. The infection may also escape to other parts of the body and can result in several degenerative diseases. It is therefore very important to see a dentist at the first sign of tooth decay, because a problem ignored now may take the shape of oral cancer later.
Though this has not been clearly established, the condition is thought to be capable of leading to oral cancer because the bacteria responsible for rotting teeth can spread through the body.
<urn:uuid:b7ed4ba5-dcea-4dad-9e51-f8371c59de91>
CC-MAIN-2016-26
http://www.rocketswag.com/medicine/disease-prevention/cancer/oral-cancer/Can-Rotting-Teeth-Cause-Oral-Cancer.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00106-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950196
404
3.34375
3
In 2011, a team of palaeontologists led by Nancy Stevens unearthed a single molar in Tanzania's Rukwa Rift Basin. It was a tiny fossil, but its distinctive crests, cusps, and clefts told Stevens that it belonged to a new species. What's more, it belonged to the oldest known Old World monkey—the group that includes modern baboons, macaques, and more. They called it Nsungwepithecus. A year later, and 15 kilometres away, the team struck palaeontological gold again. They found another jawbone fragment, this one containing four teeth. Again, a new species. And again, an old and distinctive one. The teeth represent the oldest fossils of any hominoid or 'ape'. They called it Rukwapithecus. Together, these two new species fill in an important gap in primate evolution. Based on the genes of living species, we know that Old World monkeys and apes must have diverged from each other between 25 and 30 million years ago. But until now, there weren't any fossils from either group during that window; the ones we had found were all 20 million years old or younger. Nsungwepithecus and Rukwapithecus, however, were both found in sediments that could be precisely dated to 25.2 million years ago. They imply that apes had already split away from Old World monkeys by that time. Finally, fossils had corroborated the story that genes were telling. And they suggested that the split between these two groups took place against a backdrop of geological upheaval. I wrote about the discoveries for The Scientist, so head over there for the full story.
<urn:uuid:4d81b491-0fa7-4cac-9558-7bae0ff5b687>
CC-MAIN-2016-26
http://phenomena.nationalgeographic.com/2013/05/15/two-new-fossils-reveal-details-of-apemonkey-split/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00081-ip-10-164-35-72.ec2.internal.warc.gz
en
0.97196
350
3.96875
4
Let me be a free man—free to travel, free to work, free to follow the religion of my forefathers, and I will obey every law or submit to the penalty. -- Chief Joseph, Nez Perce.
Indigenous rights are never freely given—they must be demanded, wrested away, then vigilantly protected. That is the essence of freedom. -- Walter Echo-Hawk, Pawnee.
INDIAN RIGHTS IN THE U.S. ARISE from a foundation fashioned in the 19th century. Much of that foundation remains sound today and should be retained, especially the "inherent tribal sovereignty" doctrine of Worcester v. Georgia (1832) and its "protectorate framework" for protecting Indian nations that exist in the Republic as "domestic dependent nations." However, other foundational principles are embarrassingly outmoded and make Indian rights vulnerable. Those include the doctrines of discovery, conquest, and unlimited Congressional power in Indian affairs, as well as ingrained legal fictions that deem Indians racially and culturally inferior. Rights that spring from that dark well are forever vulnerable, and invariably discriminatory. A stronger, more just foundation for Indian rights is needed--one grounded in a modern world that rejected colonialism long ago. We must find justifications for Supreme Court decisions other than conquest, colonization, or racial superiority. The pivotal question becomes: What should the new foundation for Native rights be?
That new foundation is provided by precepts of the UNDRIP, listed in the preambular paragraphs at the beginning of this international instrument. The Indigenous rights guaranteed in the UNDRIP are founded upon values that spring from the human rights framework of contemporary international law. These UNDRIP principles allow us to reconceptualize the foundation for Native American rights in the United States:
* Equality: Indigenous peoples are "equal to all other peoples" and they "should be free from discrimination of any kind." Racism is rejected as an illegitimate source upon which to base Indian rights: "All doctrines, policies, and practices based on or advocating superiority of peoples or individuals on the basis of national origin or racial, religious, ethnic or cultural differences are racist, scientifically false, legally invalid, morally condemnable and socially unjust."
* Inherent Rights: Indigenous rights are "inherent rights" that derive from Indigenous peoples' "political, economic and social structures and from their cultures, spiritual traditions, histories and philosophies, especially their rights to their lands, territories, and resources." These rights are not "given" to them by nation-states, but already belong to them (akin to fundamental rights enjoyed by other peoples under natural law). Recognition of Indigenous rights is an important nation-building process that enhances harmonious and cooperative relations between the State and Indigenous peoples based on principles of justice, democracy, respect for human rights, non-discrimination, and good faith.
* Self-determination: The centerpiece of Indigenous rights is "control by Indigenous peoples" over developments that affect them and their lands, a control that enables them to strengthen their institutions, cultures, and traditions and to promote development in accordance with their needs and aspirations. Integration of Indigenous peoples into the fabric of society through this means strengthens consensual partnerships between Indigenous peoples and nations.
By contrast, colonialism and dispossession are invalid sources for defining Indigenous rights, because they are sources of "historic injustice" that deny Indigenous peoples their right to self-determination and prevent them from exercising the right to development in accordance with their needs and aspirations. These precepts can supplement the Worcester foundation for Indian rights in the United States if incorporated into federal Indian law during the implementation of the UNDRIP, and they can replace the nefarious principles that have long weakened Indian rights. A sounder foundation for Indian rights arises from notions of justice and human rights found in contemporary international law. We will begin examining the minimum standards of the UNDRIP next week.
<urn:uuid:3ac2f395-68d5-409f-a5b0-a3f8fa08d992>
CC-MAIN-2016-26
http://www.walterechohawk.com/?page=blog&action=viewPost&postID=24
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00190-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930207
793
3.25
3
Research on observer rating of memory in children is examined in relation to the potential to develop screening instruments to improve efficiency in memory assessment, to shed light on the area of everyday memory in children, and to develop observer rating to the point where it may substitute for objective assessment. Several scales including the Parent Memory Questionnaire, the Children's Memory Questionnaire, the Observer Memory Questionnaire - Parent Form and the Working Memory Rating Scale are reviewed. Only the Working Memory Rating Scale has been published. Some of the other scales have good internal consistency and test-retest reliability but none have proven to be effective screening instruments and none can yet be recommended for clinical application. Relationships with objective test results have been at best modest, an issue that requires more detailed analysis if such instruments are to become effective screeners or even substitutes for objective assessment. Further observer rating research will shed light on everyday memory in children including its relationship to objective assessment and its place in models of memory. It remains to be established whether observer ratings add unique information to memory assessment or whether they can become a reliable, cost-effective substitute for objective assessment.
<urn:uuid:0659597d-37e9-4bd7-a576-5ba31c7cf36a>
CC-MAIN-2016-26
http://nova.newcastle.edu.au/vital/access/manager/Repository/uon:11346?f0=subject%3A%22children%22&sort=normalizedDate%2Ftitle%2F
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00152-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93139
222
2.625
3
Introduction to HTF Pumps for Solar Installations - Design Criteria and Selection
Heat transfer fluid (HTF) pumps must be capable of pumping a variety of high-temperature, high-pressure heating media around a solar power installation. The common media are synthetic oil for cylindrical parabolic receivers and molten salts for central receiver and linear solar Fresnel plants. This is an article on heat transfer pumps, their specifications, and their operating parameters. Here we shall examine several pumps from different manufacturers; we begin, then, by having a quick look at the three different types of concentrated solar power systems in operation today.
Concentrated Solar Power (CSP) Systems – an Overview
There are three types of CSP systems.
1. Solar Power Plant with a Central Receiver
This solar plant consists of numerous flat mirrored heliostats that automatically track the sun as it moves across the sky. They beam the reflected, concentrated radiation to a central receiver fixed at a predetermined height on a tower. The receiver normally holds molten salts, a mixture of nitrates that melts easily and withstands the high temperatures required by the system. The HTF pump circulates the salts from the receiver to a heat exchanger that uses this high-temperature medium to convert water to superheated steam in a two-pass process. The superheated steam is then used to drive the power generators.
2. Solar Parabolic Collector Power Plant
This type of solar power plant uses mirrored, parabolic-shaped troughs to track the sun across the sky and concentrate its radiation onto an integral metal tube, enclosed in a glass tube under a vacuum. The HTF pump circulates the medium, usually synthetic oil, through numerous parabolic components, raising its temperature before feeding it through a heat exchanger. The heat exchanger converts water to superheated steam, which is used to run steam turbines and drive power generators.
3. Linear Solar Fresnel Power Plant
This solar power plant uses numerous linear mirrors to reflect the sun's radiance onto a receiver tube, usually located above the array of linear mirrors. The HTF pump circulates molten salts from the receiver tube through a heat exchanger that converts water to superheated steam. As with the other two CSP power plants, the superheated steam runs the turbines used to drive power generators.
HTF Pumps for Solar Installation – Design & Specification Criteria
The pumps are generally horizontal, but can also be of vertical design, and are centrifugal, single- or multi-stage. They have durable stainless and/or duplex steel internals to withstand the sometimes abrasive and hazardous fluids, and they must always handle temperatures and pressures to suit the medium that they circulate. HTF pumps are specifically designed for the CSP power plants and as such have special seals, bearings, drive shafts, impellers, and casings. Some solar installations have a main and an auxiliary HTF pump, both built to meet the plant conditions. The main pump is located on the discharge from the panels to a tower receiver, with the auxiliary one on the return line from the heat exchanger.
High-temperature fluid pumps must be fully compliant with ISO 13709/API 610. Some of the components encompassed are listed below:
- Pump casing and baseplate design and dimensions.
- Allowable nozzle loads.
- Pump impellers.
- Capacity, heads, temperature, and pressure.
- Wear-ring diameters.
- Casing gaskets.
- Coupling guard.
- Ball and radial bearings.
- Shaft mechanical and gas seals.
- Seal chamber.
- Drive shafts.
- Vibration levels.
- Materials of construction.
- Net Positive Suction Head (NPSH).
The API 610 standard specifies the HTF pump design. Below are some HTF pump manufacturers' design and operating parameters:
Vertical Turbine Pump
- Flowrate 13,600 m³/hr / 60,000 US gal/min
- Heads 530 m / 1,740 ft
- Pressures 100 bar / 1,450 psi
- Temperatures 600°C / 1,100°F
- Temperatures to and above 400°C
- Pressures of 100 bar and above
- Ready for heat transfer fluid (HTF)
- Equipped for variable speeds
- Suitable for 50 and 60 Hz applications
3. Sulzer Pumps
- Temperatures 450°C / 840°F
- Flowrate 2,600 m³/hr / 11,000 US gal/min
- Pressure 100 bar / 1,400 psi
- Head 300 m / 1,000 ft
- Core pump technologies designed in particular to handle fluids at temperatures of over 400°C whilst enduring extremely high and changing temperatures and pressures every day
Selection of a Heat Transfer Pump for a Solar Power Plant
The selection of the pump very much depends on its application and the type of CSP power plant it is to be used in. Most of the horizontal centrifugal pumps can be designed to circulate oil or molten salts, but the vertical pumps are more suited to molten salts.
I have sailed as a ship's engineer in charge of the operation and maintenance of Sulzer marine engines and found them to be an excellent source of main power. I have also worked as an engineer in the offshore oil and gas industry and installed Sulzer pumps on most of the production platforms fabricated at the offshore construction yard. Again, these were good examples of engineering, and therefore I would be inclined to select a Sulzer HTF pump because of my personal experience of working with them.
Summary
Heat transfer pumps are used in concentrated solar power plants to circulate a heating medium through receivers. These receivers are bombarded by the sun's radiance reflected from parabolic or linear mirrors or from flat mirrored heliostats, depending on the system. The medium is usually synthetic oil or molten salts, and the HTF pumps have to cope with high temperatures and pressures, as well as high heads when circulating molten salts through a receiver fixed on a high solar tower.
HTF pumps must be fully compliant with ISO 13709/API 610, both in their design and in their operating parameters. This ensures that the HTF pumps for concentrated solar plants and large-tract mirrored receivers are capable of pumping molten salts and synthetic oils at high temperatures and pressures.
For information about concentrating solar power technologies, please see Dr. Harlan Bengtson's four-part series here at Bright Hub.
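As a rough plausibility check on headline figures like those above, the hydraulic power a pump must deliver is P = ρ·g·Q·H. The sketch below applies this to the vertical turbine pump numbers quoted earlier, treating the maximum flow and head as if they occurred together; the molten-salt density is an assumed typical value for nitrate salts, not a figure from this article, and actual shaft power would be higher once pump efficiency is included.

```python
RHO_SALT = 1800.0   # kg/m^3 -- assumed density of a nitrate molten salt
G = 9.81            # m/s^2

def hydraulic_power_mw(flow_m3_per_h, head_m, rho=RHO_SALT):
    """Hydraulic power P = rho * g * Q * H, returned in megawatts."""
    q = flow_m3_per_h / 3600.0   # volumetric flow in m^3/s
    return rho * G * q * head_m / 1e6

# Vertical turbine pump envelope quoted above: 13,600 m^3/hr at 530 m head.
print(f"{hydraulic_power_mw(13_600, 530):.1f} MW")  # roughly 35 MW
```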
<urn:uuid:0c3baff0-180c-45ce-bf14-390d563f1746>
CC-MAIN-2016-26
http://www.brighthubengineering.com/power-plants/94664-htf-pumps-for-solar-installations-design-criteria-and-selection/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00078-ip-10-164-35-72.ec2.internal.warc.gz
en
0.904152
1,375
3.015625
3
What Parents Need to Know
Parents need to know that women are depicted as exceptionally intelligent and respected compared with those in other movies of the era. Helen speaks of the prejudice she faced as a Mexican woman, and Amy listens sympathetically.
- Families can talk about how everyone seems to have a different reason for not helping Will. How many can you identify? Which reasons seem the best to you? Which seem the worst?
- What makes Amy change her mind?
- Why does Will throw his badge in the dirt? Do you think the screenwriter chose the name "Will" for any special reason?
- How do you decide when to stay and fight and when to run? How do you evaluate the risks?
- What should the law be?
<urn:uuid:733721a6-94d0-4e8e-a1e0-1e09fbad053d>
CC-MAIN-2016-26
http://www.movies.com/movie-reviews/high-noon-review/m60999
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394605.61/warc/CC-MAIN-20160624154954-00129-ip-10-164-35-72.ec2.internal.warc.gz
en
0.975748
146
3.046875
3
04 Jun 2013: Nanofilter System Can Deliver Clean Water to Rural Families for $2.50
Indian scientists have developed a filter system they say can provide clean water to rural families for less than $2.50 per year and help reduce the incidence of diarrhea, which causes tens of thousands of deaths in the developing world annually. Writing in the Proceedings of the National Academy of Sciences, researchers from the Indian Institute of Technology Madras (IITM) describe the filter, which contains a composite of nanoparticles, held within a sieve, that emits a stream of silver ions that eradicate water-borne microbes. In producing the filter, the team used a material called aluminium oxyhydroxide-chitosan, which, because of its structure and the diameter of the silver nanoparticles, is optimal for releasing the silver ions at temperatures between 5 and 35 degrees C. In addition, the material is widely available and environmentally friendly, and it keeps concentrations of the silver ions below safe drinking water standards, lead author Thalappil Pradeep told SciDev.Net. So far, the scientists have installed the filters in water treatment plants in West Bengal, but are now seeking a company to produce the devices for widespread use.
<urn:uuid:6ed06e5e-7242-4bd7-808b-7f79c980ccec>
CC-MAIN-2016-26
http://e360.yale.edu/content/digest.msp?id=3860
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00019-ip-10-164-35-72.ec2.internal.warc.gz
en
0.872413
538
3.234375
3
Alaska Science Center
Part of the United States Geological Survey's Biological Resources Division, this center plays a pivotal role in conducting research on wildlife, including sea otter populations in Alaska.
California Department of Fish and Game
This state agency protects fish and wildlife, and is one of the collaborators in research into what's killing California sea otters. The agency's core of sea otter specialists work out of its Marine Wildlife Veterinary Care and Research Center in Santa Cruz.
http://www.dfg.ca.gov/
Defenders of Wildlife
A national advocacy organization based in Washington, D.C., Defenders of Wildlife is dedicated to the protection of all native wild animals and plants in their natural communities. It focuses on the accelerating rate of extinction of species, with the associated loss of biological diversity, and on habitat alteration and destruction.
Friends of the Sea Otter
This organization, based in Pacific Grove, CA, is an advocacy group dedicated to actively working with state and federal agencies to maintain the current protections for sea otters as well as to increase and broaden these preservation efforts.
Marine Mammal Center
The center, based in the Marin Headlands just north of the Golden Gate Bridge linking San Francisco and Marin County, has rescued and treated more than 9,000 ill, injured, or orphaned marine mammals at its facility since 1975. It returns as many as possible to the wild and, through scientific inquiry, increases knowledge of marine mammals, their health, and their environment, through education.
Monterey Bay Aquarium
Another collaborator in the research to find out what's killing California sea otters, the Monterey Bay Aquarium is active in sea otter rescue, rehabilitation, research, and education. Its sea otter exhibit is one of the most popular in the aquarium. It runs a sea otter rescue and rehabilitation program. Its Web site has a sea otter Web cam and provides detailed information about California sea otters.
National Science Foundation – Ecology of Infectious Disease
Some of the material presented on this site was funded by a National Science Foundation Ecology of Infectious Disease grant. This link takes you to the NSF-EID multimedia in the news section and highlights our research efforts as well as others.
The Otter Project
The project, based in Marina, CA, was set up to promote the rapid recovery of the California sea otter by facilitating research and communicating research results to the public.
This site is the largest resource on the Internet for information about the Earth's 13 species of otters, including river otters, as well as habitat overviews for the five continents on which otters live, including which otters live in each country, the threats they face, and their conservation status.
U.S. Fish and Wildlife Service
The California sea otter is listed as a threatened species under the federal Endangered Species Act. This government agency protects sea otters, and its site has information on threatened and endangered species as well as aspects of wildlife conservation.
UC Davis Wildlife Health Center
The Wildlife Health Center is a multidisciplinary program within the School of Veterinary Medicine at UC Davis that focuses on the health of free-ranging and captive terrestrial and aquatic wild animals. It is the umbrella organization under which faculty, staff, students, and other partners come together to address the complex issues surrounding conservation in a changing world.
Western Ecological Research Center
A part of the United States Geological Survey, the center's Santa Cruz field station (http://www.werc.usgs.gov/santacruz) is one of the collaborators in research into what's killing California sea otters. Its Web site provides information about the natural history of sea otters, current research, and status. The center focuses on providing research, scientific understanding, and technology needed to support sound management of Pacific Southwestern ecosystems.
<urn:uuid:80c6cd0c-e7cb-42ce-b186-cefb3ed9954a>
CC-MAIN-2016-26
http://www.vetmed.ucdavis.edu/whc/seaotters/links.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00022-ip-10-164-35-72.ec2.internal.warc.gz
en
0.90065
793
3.171875
3
Beliefs That Count
by Georgia Harkness
Georgia Harkness was educated at Cornell University and Boston University School of Theology, and studied at Harvard and Yale theological seminaries and at Union Theological Seminary of New York. She taught at Elmira College and Mount Holyoke, and for twelve years was professor of applied theology at Garrett Biblical Institute. In 1950 she became professor of applied theology at the Pacific School of Religion, in Berkeley, California.
Published by The Graded Press, Nashville, Tennessee, 1961. This material was prepared for Religion Online by Ted & Winnie Brock.
Chapter 11: We Believe in Divine Judgment
God is not only the Creator but he is also the Judge of all the earth. All men and nations stand before His judgment bar. The moral law and the Christian ethic judge both sinner and saint. Beyond all human laws, customs, and opinions there is one divine Law which remains absolute and unchanging. Men may break themselves and their civilizations upon that Law but the Law itself stands forever. The judgments of the Almighty are true and everlasting.
The Judge of All the Earth
One of the early stories recorded in the Book of Genesis has some searching words: "Shall not the Judge of all the earth do right?" (Genesis 18:25.) It is the story of the projected destruction of Sodom for its sinfulness, and Abraham protests that the Judge of all the earth will surely not slay the righteous with the wicked! Thus in one sentence both the judgment and the mercy of God are suggested. These two motifs are found throughout the Bible, and together they are imbedded in the Christian faith. To take away either is to withdraw from the other something vital and indispensable. Yet God's mercy and judgment have not always been held in proper balance. The experienced fact of human sinfulness and the promise of salvation through the unmerited forgiveness of sin have placed much emphasis on divine judgment in traditional Christian thinking. Both the Old and New Testaments refer many times to the wrath of God. The apocalyptic passages in the New Testament also contain a number of statements of which this one at the conclusion of the parable of the weeds is typical: "The Son of man will send his angels, and they will gather out of his kingdom all causes of sin and all evildoers, and throw them into the furnace of fire; there men will weep and gnash their teeth. Then the righteous will shine like the sun in the kingdom of their Father." (Matthew 13:41-43.) It is not surprising, therefore, that there developed very early a doctrine of heaven and hell with a sharp separation of the righteous from the wicked at death and the eternal punishment and torment of the latter. So deeply imbedded is this concept that, as was noted earlier, many people have trouble in thinking of salvation in any other terms than those of escaping hell and reaching heaven after death. They cannot conceive of it in any other fashion. In recent years the belief in hell has waned among Protestants, partly because of the difficulty of locating it in space but more from the conviction that a loving God would not want to condemn anyone -- even a hardened sinner, to say nothing of a kind and highly moral person who is not a Christian -- to endless torment. The ancient question "Shall not the Judge of all the earth do right?" leads us, as it led Abraham, to think that it would not be right for God to be so destructive.
As a consequence, we sometimes get too sentimental about the loving-kindness of God and forget that he is exacting as well as loving. The belief in a "fire and brimstone" hell we may well surrender; we shall say more about this later. But we cannot overlook the belief in divine judgment -- and with it divine punishment -- without distorting the Christian faith. So, what may we believe about it?
What Is Divine Judgment?
Divine judgment will be clearer, perhaps, if we think of it in terms of the justice of God. Both words are derived from the Latin jus, which means "right" or "law." A true judgment is passed when the decision reached is one that is right and just. We do not need to think of the judgments of God legalistically, as we do when a human judge instructs the jury in a court case; yet the judgments of God are directly related to the laws of God. Likewise, we do not need to think of God's punishment as vindictive, retributive, or retaliatory. Our human tendency is to think of justice as "getting even," as one small boy strikes another and the other strikes back, or as a supposedly mature individual or nation thinks it must give back to enemies either the treatment received or something more severe. This ancient idea of the lex talionis -- "an eye for an eye and a tooth for a tooth" -- was explicitly repudiated by Jesus. But it still persists in our society even when our better sentiments recoil from it.
How, then, is punishment justified? Our best analogy is the human family, although even this must, of course, fall short of the infinite love of God. The child who is always indulged and never punished becomes a "spoiled brat" and grows up with less strength of character than one who is firmly and justly disciplined. Brutal or arbitrary punishment will not do; loving and just punishment is a necessity for the fullest achievement of character.
God is infinitely loving and just. He takes sin seriously, and all men are sinners. God would not be a God worthy of our worship -- certainly not the God of Jesus -- if he smiled indulgently upon our sins, bypassed them, and let us go on sinning with no evidence of divine disfavor. The Hebrew prophets proclaimed again and again the judgment of God on a sinful nation. These words of Amos still ring in our ears: "Thus says the Lord: . . ."
Yet there is another side of this message -- a note of hope in the midst of doom. Again we read the words of Amos: "Seek good, and not evil, that you may live; and so the Lord, the God of hosts, will be with you." (Amos 5:14.)
Today, as in the eighth century before Christ, we find greed, exploitation, callous indifference to human need, and vast amounts of conflict and strife between nations and social groups. These situations always cause tension and suffering; sometimes they break out in war. But we must remember that these wars do not occur because God desires them. They take place because a just God has so ordered his world that sin inevitably brings suffering and distress in its wake. God respects the freedom he has imparted to his children and does not interrupt our sinning by any forced conformity to his will. But God the Judge is always the Ruler of his world. There is a moral order in the world. When we break the laws of God, we are broken upon them. This does not always seem apparent in individual lives, however, for an obviously sinful person may seem to get along pretty well. Therefore we are inclined to ask Jeremiah's question, "Why does the way of the wicked prosper?" (Jeremiah 12:1.)
Yet inwardly there is a difference between the love, joy, and peace of the dedicated Christian and the person who demands more and more for himself in defiance of God and at the expense of other persons. Whether in a sense of guilt and inner unrest which drives many to psychiatrists or in the perhaps more terrible lethargy that drugs conscience to insensibility, punishment for unrepented sin is an inescapable fact of life. Sometimes this judgment is interpreted as automatic punishment that goes on without God's concern simply because the world is made this way. But such a view of an inflexible moral order is not enough to express the full meaning of divine judgment. The personal God who punishes in love does not simply leave us to our own destruction. He yearns to save us, and his just condemnation never cancels his mercy. This is why he sent his Son, Jesus Christ, for our redemption, and this is the major message of our faith.
Law and Grace
Beyond all human laws, customs, and opinions there is one divine Law which remains absolute and unchanging. Men may break themselves and their civilizations upon that Law but the Law itself stands forever. Because all human laws, customs, and opinions change from time to time and vary from place to place, we tend to think of right and wrong as relative to the particular culture in which we live. To cite a familiar example, some Christians think it is perfectly all right to drink a cocktail occasionally if they do not get drunk; others see this as a sin against God. Extend this dilemma to problems of family life and business dealings, to the moot problems of school integration and the use of atomic and hydrogen bombs in war, and it becomes evident that there is no unanimity among Christians as to the will of God in concrete matters of ethical decision. When we look to the Bible for an absolute set of rules, we fail to find it. There are, to be sure, many sources of guidance in the Bible, but neither the Ten Commandments nor the words of Jesus tell us everything. If we take everything in the Bible literally as a mandate for today, we run into strange developments. For example, Deuteronomy 25:5 specified that if a man died without having a son, his brother must marry the widow and try to beget a son who would bear the dead man's name. And he had to do this regardless of whether or not he had a wife already! I do not know of any Christian in our time, however much of a biblical literalist, who feels obligated to keep this command. Such factors may lead us into an ethical relativism regarding our duty as Christians. But ought this to happen?
The statement quoted affirms that "there is one divine Law which remains absolute and unchanging." This is true and vitally important. There is only one absolute and unchanging law laid down for us by God through Christ. This is the law of love. Some would not call it a law, since love is not subject to command. But in any case, it is a supreme obligation. Jesus stated duties on which "depend all the law and the prophets" when he answered the inquiring lawyer's question with the words: "You shall love the Lord your God with all your heart, and with all your soul, and with all your mind. This is the great and first commandment. And a second is like it, You shall love your neighbor as yourself. . . ." (Matthew 22:37-39.) Under all circumstances, the Christian is obligated to do the most loving, serving thing he can. This will not always be the same thing under differing circumstances.
Words that cut must sometimes be spoken if healing is to take place, while under other circumstances the same words would simply be unloving or even spiteful. Christians will not always agree as to what is the most loving course of action to take, as Christians today are not in agreement over participation in war. Yet love stands always as the one supreme Christian obligation. And how is love related to justice? No end of theological and ethical writing has been done on this theme, and we cannot go into all the issues here. In brief, love and justice must be united, even as judgment and mercy are united in the nature of God. Love without justice becomes sentimentality; justice without love is no longer just, but vindictive. Then the coercive power that is a necessary instrument of justice replaces concern for persons. Many of the world’s major tangles today stem from attempts to preserve justice by force without the love of neighbor which alone makes force justifiable. Apply this idea to the international scene, to labor disputes, to racial tensions, or to almost any other social problem, and it becomes evident. This brings us to the idea of law and grace. Law is the instrument of justice, whether human or divine, although, as we have seen, there is only one supreme and unchanging divine law. Love is the expression of grace. The overflowing and, on our part, unmerited, love of God forgives our sin even though we still stand under the divine judgment, and the love of God for us enables us to love our neighbor. "We love, because he first loved us." (1 John 4:19.) Amid the relativities and clashing opinions of our time, the law of God stands forever. It is a law that is more than law because its source is the grace of God. It is a justice that is more than judgment because it springs from divine mercy. From this fountainhead Christians are called to love all men as brothers and to treat all men with a justice that finds its criterion and springs of action in love. The judgments of the Almighty are true and everlasting. We come now to say a few words about that disputed subject, the reality of hell and the possibility of everlasting punishment meted out by God. Here opinions differ greatly among Christians, and anything we say must be tentative. As we shall observe more fully in the next chapter, eternal life is a basic conviction of Christian faith. It is thought of in different ways, but this faith and hope are central in the faith of the Christian church. We do not know all about heaven because it lies beyond our observation and the Bible does not tell us all we should like to know. But most Christians believe that God provides such an eternal dwelling place for those who love him. Eternal life in this affirmative sense, as indicated in the Gospel of John, is not simply continuance after death; it is a quality of life which begins here and is endless. Because this is true, can we not then assume that the rejection of the call to love and serve God lies also on both sides of death? Hell in this life is certainly a reality; there is no sufficient reason to think that it ends with death. Hell must not be thought of as physical torment or endless burning in a sea of fire. This is pictorial imagery like the pearly gates and streets of gold with which heaven is often pictured. 
The basic ideas in the meaning of hell are alienation and separation from God by persistent rejection of him, the tighter forging of the chains of sin as we misuse our freedom, and the loneliness, remorse, and inner turmoil which are sin's worst punishment. It is unwarranted to suppose that the Judge of all the earth remits these penalties in life or beyond death if persons persistently and impenitently refuse his grace. We have said that the wrath of God must not be taken to mean vindictiveness. It means God's inevitable condemnation and terrible judgment upon sin. It is because sin is so serious and divine judgment is so real that hell in the sense just indicated is a reality upon earth and may well be after death. God forces no man to love and serve him; but when we refuse his invitation, we bear the penalty. Will all men after death eventually be won to acceptance? Some noted theologians think so on the ground that otherwise the redemptive purpose of the ever-loving God would be unfulfilled. Others, including the present writer, believe that human freedom is so basic to personality that its misuse in rejecting God's grace, whether in this life or the next, will always be possible. We do not know; we must leave this in the hands of a just and loving God. "The judgments of the Almighty are true and everlasting" -- yes, and righteous altogether. It is as good to be aware of these stern certainties as it is to have the equal assurance that in God's grace we shall find our peace.
<urn:uuid:9634a5fa-0281-4f31-a0d9-cd5866956fce>
CC-MAIN-2016-26
http://www.religion-online.org/showchapter.asp?title=582&C=780
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.960214
3,237
2.671875
3
Decision Counseling Program
Important health decisions may relate to lifestyle changes, early detection tests, risk assessment, treatment, quality of life, or survivorship. Making good health decisions can be difficult for patients and providers, especially when there is a lot of information to understand and there is uncertainty about possible outcomes. Informed decision making by patients and shared decision making that involves patients and health care providers are recognized as hallmarks of quality medical care. Decision counseling enables patients and providers to explore the pros and cons associated with challenging health care decisions, identify important factors that influence decision making, weigh the influence of relevant factors, clarify personal preference among available options, and encourage selection of an option that makes sense.
Outcomes of decision counseling may include:
- Increased patient and provider awareness and understanding of available health care options
- Increased provider awareness and understanding of patient preferences
- Increased patient and provider satisfaction
- Decreased time required by providers for patient education and support
- Improved clinical outcomes
<urn:uuid:a42daa7b-3474-489e-8e8a-bc4ee7a33761>
CC-MAIN-2016-26
http://www.jefferson.edu/university/jmc/departments/medical_oncology/divisions/population_science/center_for_health_decisions/decision_counseling.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00194-ip-10-164-35-72.ec2.internal.warc.gz
en
0.921229
199
2.71875
3
The purpose of this short text is to give the reader a basic understanding of the various temperaments and tunings used on keyboard instruments (harpsichord, organ) in the past. It will not give detailed tuning instructions (my next project -- some practical tuning instructions in French are available, intended for legal paper size) nor much more than general indications on the suitability of the various historical temperaments in different contexts.
When discussing temperaments, one cannot avoid being a bit technical. However, I have also tried to be practical by discussing temperaments that can be useful to modern keyboardists, and by stressing their important acoustical properties (i.e. how they sound) rather than getting into some complex theories. No special skills in mathematics are required of the reader (footnotes will be used to convey some extra material). This text is complemented by a Java applet that demonstrates tunings and temperaments (I'm also thinking of a virtual instrument one could actually practice tuning on), and by some short musical examples in various tunings and temperaments (MIDI and some .au files). A PDF version of this document (about 65K -- you'll need Adobe's Acrobat reader to read it) is also available for printing (will give you much better graphics).
Anyone seriously interested in temperaments must read Margo Schulter's remarkable Pythagorean Tuning and Medieval Polyphony. Notwithstanding the title, her discussion of Baroque meantones and irregular temperaments is also very thorough. In addition to the physics of the problem, she also addresses the musical and musicological implications, with many references to the sources of the time.
Copyright 1978, 1998 (yes, 20 years!) by Pierre Lewis. Version 1.2 (incorporates a few changes inspired by Margo Schulter's article).
To have false octaves has always been unthinkable, so we will have to accept that some of the other intervals will be more or less out of tune, often deliberately so as when we temper an interval (i.e. tune it slightly false in the process of setting a temperament). We will see later what compromises can be made.
From acoustics, we know that pure intervals correspond to simple ratios (of the frequencies involved) such as, for example, 3/2 for the fifth. Table 1 gives the size in cents of the pure consonant intervals, computed (see footnote 2) from the given ratios:
Table 1. Pure consonant intervals (sizes rounded to the nearest cent)
octave 2/1 = 1200 cents
fifth 3/2 = 702 cents
fourth 4/3 = 498 cents
major third 5/4 = 386 cents
minor third 6/5 = 316 cents
major sixth 5/3 = 884 cents
minor sixth 8/5 = 814 cents
Sizes in cents (as with semitones) can be added or subtracted as we add or subtract the corresponding intervals. For example, the major second obtained by tuning a pure fifth up then a pure fourth down is 702 - 498 = 204 cents (or 7 - 5 = 2 semitones, ratio 9/8).
We can now express the examples above (section 1.1) in cents. If, from C, we tune 12 pure fifths up, we will form an interval of 12 × 702 = 8424 cents. On our keyboards, this corresponds to 7 octaves or 7 × 1200 = 8400 cents. The difference between the two is 24 cents and is known as the ditonic comma. Similarly, four pure fifths up from C give 4 × 702 = 2808 cents. Subtracting two octaves (2 × 1200 = 2400), we find that the third thus formed is 408 cents, 22 cents larger than a pure third. This difference is known as the syntonic comma, and this wide third is known as the Pythagorean third and sounds quite harsh (or tense, depending on the point of view).
A tuning is laid out with nothing but pure intervals, leaving the comma to fall as it must.
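The cents arithmetic above is easy to reproduce. The short sketch below computes interval sizes and the two commas directly from the ratios; it simply illustrates the formula cents = 1200 × log2(ratio), and the unrounded values it prints differ slightly from the whole-cent approximations (702, 386, 24, 22) used throughout this text.

```python
import math

def cents(ratio):
    """Size of an interval in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

fifth = cents(3 / 2)    # ~701.96, rounded to 702 in the text
third = cents(5 / 4)    # ~386.31, rounded to 386

# Twelve pure fifths overshoot seven octaves by the ditonic comma.
ditonic = 12 * fifth - 7 * 1200          # ~23.46 cents
# Four pure fifths, less two octaves, overshoot a pure major third
# by the syntonic comma.
syntonic = 4 * fifth - 2 * 1200 - third  # ~21.51 cents

print(f"fifth={fifth:.2f}  third={third:.2f}")
print(f"ditonic comma={ditonic:.2f}  syntonic comma={syntonic:.2f}")
```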
A temperament involves deliberately mistuning some intervals to obtain a distribution of the comma that will lead to a more useful result in a given context. Solutions can be grouped into three main classes: tunings, in which all intervals are left pure and the comma falls where it must; regular temperaments, in which all the fifths but one are tempered by the same amount; and irregular temperaments, in which the fifths are tempered by varying amounts. The choice of a particular solution depends on many factors, such as the repertoire to be played, how freely it must modulate, and the ease of tuning. We will look in more detail at some of the more important solutions to this problem after some further preparation.

To compute the size of the other intervals, we will consider them as being "formed" of those fifths (or fourths if going counterclockwise) which separate the two notes of the interval on the circle. We will use the shortest route to simplify the computations, but the other way around would give the same results. This does not, in general, correspond to the actual process of tuning, in particular when one of the fifths involved is a wolf fifth. Figure 1 shows how the major and minor thirds are considered to be "formed" in terms of fifths or fourths.

We have already seen that four pure fifths give a Pythagorean major third of 408 cents. If some or all of the fifths forming (contained in) a given major third are tempered, we obtain the size of the major third by simply adding the deviations of the fifths (negative if they are flattened, as is normally the case) to 408 cents. For example, if all four fifths of a major third are tempered by -2 cents, the major third will be 408 + 4 × -2 = 400 cents (the equal-tempered third).

A minor third is formed with three ascending fourths. If the fourths are pure, the minor third will be 3 × 498 = 1494 cents; subtracting an octave, we find that the minor third thus formed is 294 cents, 22 cents flat: the Pythagorean minor third. As above, if the fourths are tempered, we add their deviations (which are opposite to those of the corresponding fifths) to 294 cents to obtain the size of the resulting third. The sizes of other intervals can be obtained similarly.

As a consonant interval deviates more and more from pure, it eventually becomes a wolf interval, i.e. too false to be musically useful. The point at which this happens depends on what hearers are used to and are willing to tolerate. For thirds, we usually take the deviation of the Pythagorean third as the limit, i.e. about 22 cents. For fifths, half a syntonic comma, i.e. 11 cents, is about the limit. These numbers are derived from what is found in old temperaments, i.e. what appears to have been accepted at some time. That is not to say that modern ears will accept those limits.

Dissonant intervals will not be considered much here since it seems to matter little whether a dissonant interval is in tune or not. Nevertheless, dissonant intervals can sound different in the different temperaments: one that strikes me in particular is the tritone of Aaron's meantone (more on this later).

The best-known solution today is equal temperament, in which the ditonic comma is distributed equally among the 12 fifths, each being flattened by 2 cents. However, equal temperament has not been the obvious solution in all contexts: the complete freedom to modulate has not always been necessary, and the sameness of all keys found in equal temperament has not always been appreciated. In addition, equal temperament is one of the more difficult temperaments to tune. Equal temperament was used quite early on fretted instruments (it's the only arrangement that works, because many strings share a fret).
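The rule given earlier for sizing thirds from their constituent fifths is mechanical enough to capture in two small helpers; a sketch under my own naming, with the rounded constants used in the text:

```python
PYTH_MAJOR_THIRD = 408  # four pure fifths, two octaves removed
PYTH_MINOR_THIRD = 294  # three pure fourths, one octave removed

def major_third(fifth_deviations):
    """Major third from the deviations (in cents) of its four fifths."""
    assert len(fifth_deviations) == 4
    return PYTH_MAJOR_THIRD + sum(fifth_deviations)

def minor_third(fifth_deviations):
    """Minor third from the deviations of its three fifths; the fourths'
    deviations are opposite to the fifths', hence the sign flip."""
    assert len(fifth_deviations) == 3
    return PYTH_MINOR_THIRD - sum(fifth_deviations)

print(major_third([-2, -2, -2, -2]))          # 400: the equal-tempered third
print(major_third([-5.5, -5.5, -5.5, -5.5]))  # 386: pure (fifths 5.5 flat)
print(minor_third([0, 0, 0]))                 # 294: Pythagorean minor third
```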
Historically, the first important system is Pythagore's tuning (which is not a temperament, as no interval is tempered). It is obtained by tuning a series of 11 pure fifths, typically from Eb to G#, the remaining fifth (a diminished sixth, really) receiving all of the ditonic comma and therefore being 24 cents flat. The resulting diagram is shown in Figure 2.

All thirds, major or minor, except those which include the diminished sixth, will be Pythagorean thirds, since they include only pure fifths, and will therefore be quite tense (harsh). The four major thirds (diminished fourths) which include the diminished sixth will be 408 - 24 = 384 cents, nearly pure (2 cents flat). Similarly for the minor thirds. In brief, except for one wolf fifth, all intervals are usable, if not pleasant.

In the common keys, the thirds will be harsh, which makes this tuning unsatisfactory for tonal music; but it can be quite effective for medieval music, where in fact the tenseness of the thirds was musically important. Around 1400, the four nearly pure thirds were put to good use, contrasting with the usual tense thirds, by placing the wolf between B and Gb; triads such as D-F#-A became nearly just. The sharp keys thus moved to the new Renaissance ideal (stable thirds), while the flat keys stayed with the old ideal. Also, the semitones are of two different sizes (90 and 114 cents), which lends a characteristic expressiveness to this tuning.

Pythagore's tuning was prevalent in most of the Gothic era. It is ironic that the most modern temperament, equal temperament, is in fact quite close to Pythagore's tuning, with its nearly pure fifths and fairly tense thirds; it is therefore quite effective for medieval music.

We will now take a look at just intonation. Just intonation is based only on pure octaves, fifths and thirds, i.e. simple-ratio intervals: any note can be obtained from any other by tuning pure fifths and/or thirds. This tuning is mostly of theoretical interest, since any attempt to impose it upon fixed-intonation instruments necessarily leads to serious flaws which make it impractical. As Barbour said in Tuning and Temperament, "it is significant that the great music theorists ... presented just intonation as the theoretical basis of the scale, but temperament as a necessity".

We will look at Marpurg's monochord number 1, which Barbour presented as the model form of just intonation. Figure 3 shows how the various notes are obtained from one another by tuning pure intervals: horizontal lines represent pure fifths, vertical lines pure major thirds, and diagonal lines pure minor thirds. For example, B is obtained from C by tuning a fifth to G, then a third. This results in the circle shown in Figure 4. The various segments of pure fifths are linked by pure thirds. Notice that, besides the diminished sixth, there are three bad fifths, which were necessary to obtain the pure thirds (their deviation corresponds to the difference between a pure and a Pythagorean third). In particular, there is always one bad fifth between C and E, typically D-A (as here) or G-D, which is a serious flaw! Any triad whose notes are neighbours in Figure 3 will be pure, e.g. F-A-C or C-Eb-G, but others, such as D-F#-A, will be unusable.
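The flaw is easy to exhibit numerically. The following sketch builds a just major scale from pure fifths (3/2) and pure thirds (5/4) in the spirit of Figure 3 -- note that this is only a diatonic slice of my own construction, not Marpurg's full chromatic monochord -- and shows the bad fifth D-A:

```python
import math

def cents(ratio):
    return round(1200 * math.log2(ratio), 1)

# Each note is reached from C by pure fifths and thirds; for example,
# B = fifth + third = 3/2 * 5/4 = 15/8, as described for Figure 3.
just = {"C": 1, "D": 9 / 8, "E": 5 / 4, "F": 4 / 3,
        "G": 3 / 2, "A": 5 / 3, "B": 15 / 8}

for note, ratio in just.items():
    print(note, cents(ratio))

# D-A comes out a syntonic comma flat: 40/27 instead of 3/2.
print("D-A:", cents((5 / 3) / (9 / 8)))   # 680.4 cents, ~22 cents flat
```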
In meantone temperaments, the fifths are deliberately tempered so that the thirds come out purer. In Aaron's meantone, each of the 11 fifths from Eb to G# is flattened by a quarter of the syntonic comma, i.e. by -5.5 cents, so that every four consecutive fifths form a pure major third (408 + 4 × -5.5 = 386 cents). This results in the circle shown in Figure 5 (in just intonation, one fifth in every four was 22 cents flat, a whole comma -- compare Figures 4 and 5). The total deviation of the 11 tempered fifths is 11 × -5.5 = -60.5 cents (please bear with the fractional cents; this excessive precision is maintained only to make the major thirds exactly pure). Hence, the remaining wolf fifth (diminished sixth) will have to be 36.5 cents sharp to bring the total deviation around the circle to -24 cents. This wolf fifth is conventionally placed between G# and Eb, but it is frequently placed elsewhere, depending on the music to be played (on the harpsichord, a few notes can easily be retuned between pieces).

The major thirds that do not include the diminished sixth are pure by design. Those that include it (they are, in fact, diminished fourths) are 408 + 3 × -5.5 + 36.5 = 428 cents (42 cents sharp) and are not usable as major thirds. Similarly with the minor thirds. In meantone, only 16 out of 24 possible major and minor triads are usable, which severely restricts modulation (notes cannot be used enharmonically, e.g. a G# will not do where an Ab is wanted). However, the good triads sound more harmonious than in equal temperament because of the pure thirds, even though the fifths are tempered nearly three times as much; this makes meantone interesting for music which does not modulate beyond its bounds (or does so intentionally).

In a way, this is a worse solution than Pythagore's tuning: it has more wolves, and the wolf fifth is much worse; this was the price to pay for the stable thirds. The diminished fourth F#-Bb (enharmonically equivalent to a major third in equal temperament) is a wolf in meantone when one tries to use it as a third, but it is usable in a context where it is intended, such as in the tremblement appuyé on A found in the last measures of the Sarabande by d'Anglebert shown in Figure 6 (from the second suite in G minor). Of course, such a sequence will not sound the same as in equal temperament (.au files demonstrating this: in equal temperament, in Aaron's meantone).

Let us, as an aside, compute the size of the tritone F-B. It is formed of six fifths. If they were pure, the size of the tritone would be 6 × 702 cents; each fifth being 5.5 cents flat, the size is therefore 6 × (702 - 5.5) = 4179 cents. Subtracting 3 octaves, we get 4179 - 3600 = 579 cents. This happens to be close to the size of a pure interval whose ratio is 7/5, simple enough to be perceived, and which corresponds to 583 cents. This explains why it sounds different from the tritone of equal temperament (600 cents).
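All of the numbers above -- the pure thirds, the 428-cent diminished fourths, the 738.5-cent wolf, the 579-cent tritone -- fall out of laying the meantone circle end to end; a sketch whose layout and names are mine:

```python
# Aaron's meantone: 11 fifths of 702 - 5.5 = 696.5 cents from Eb to G#,
# every note reduced into the octave above C (C = 0).
NOTES = ["Eb", "Bb", "F", "C", "G", "D", "A", "E", "B", "F#", "C#", "G#"]

pitch = {}
cents_from_c = (-3 * 696.5) % 1200   # Eb lies three fifths below C
for note in NOTES:
    pitch[note] = cents_from_c
    cents_from_c = (cents_from_c + 696.5) % 1200

def interval(low, high):
    """Ascending interval in cents between two notes of the octave."""
    return (pitch[high] - pitch[low]) % 1200

print(interval("C", "E"))    # 386.0: a pure major third
print(interval("G#", "Eb"))  # 738.5: the wolf fifth, 36.5 cents sharp
print(interval("F#", "Bb"))  # 428.0: a wolf diminished fourth
print(interval("F", "B"))    # 579.0: the meantone tritone, near 7/5
```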
Table 2 summarizes the properties of some of the more important regular temperaments: the numbers represent deviations from pure in cents. The table also gives the difference between enharmonic equivalents (such as G# and Ab), a positive number indicating that the sharp enharmonic (such as G#) is the lower of the two; this number is also the difference between the chromatic and diatonic semitones. The last row will be explained in the forthcoming practical tuning instructions.

The "ultimate" regular temperament is of course equal temperament: the wolf is gone, replaced by a fifth of the same size as all the others. But it is atypical and uncharacteristic of regular temperaments because of its rather wide thirds and the absence of any difference between enharmonics.

The last group of temperaments we shall look at are the irregular temperaments (also known as well temperaments), which are now believed to have been very important in the past (especially during the Baroque). They are characterized by having more than one size of good fifth (and thus of third), by having no wolf intervals to limit modulation (unlike all the previous temperaments except equal), and by having a more or less orderly progression in the acoustic quality of the triads from near to remote keys, i.e. a tonal palette. Generally speaking, the ditonic comma (-24 cents) is distributed unevenly around the circle: most of it is given to the fifths of the near keys, and little, if any, to the fifths of the remote keys. (In some cases, such as the French temperament ordinaire, the first fifths are tempered a bit too much, with the result that the last fifths of the circle have to be a bit sharp -- a waste.)

The consequence of this arrangement is that, in the near keys, the thirds are much purer and the fifths less so than in the remote keys. In the near keys, irregular temperaments resemble meantone, and in the remote keys they resemble (the near keys of) Pythagore's tuning, with its tense thirds. This gives added variety to modulation, which was appreciated in the past, and probably explains the different characters of the different keys mentioned in the literature of the time. This kind of variety is absent in the regular temperaments, including, of course, equal temperament. Such a temperament can also be rotated halfway around the circle to obtain a "Well Pythagore", i.e. a temperament that is Pythagorean in its near keys, yet usable in all keys.

The "ultimate" irregular temperament is of course equal temperament, with all fifths equal. But it is atypical of irregular temperaments because it is completely regular and all keys are musically equivalent, with their uniform and active thirds (and no stable thirds as in meantone). It has one color and, from a Renaissance/Baroque point of view, the wrong color. The inequality of equal temperament is 0.0; in practice, however, the inequality of a competent tuner's work might be around 1.0 (based on data in Grove's, 1965, article on tuning).

If one plays music typical of the 18th century in one of the irregular temperaments above, one will find that, on average (weighted), the thirds are about 9 cents sharp, as opposed to 14 cents sharp, always, in equal temperament. The thirds therefore sound purer, but the fifths are more tempered.
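To make the idea of a tonal palette concrete, here is a sketch of Werckmeister III -- a well-known irregular temperament chosen purely as an illustration; it is not one of the temperaments named above. Four fifths (C-G, G-D, D-A and B-F#) each absorb a quarter of the ditonic comma (-6 cents); the other eight stay pure:

```python
# Werckmeister III: four fifths 6 cents flat, the rest pure; the circle
# closes because 4 * -6 = -24 cents, the whole ditonic comma.
NOTES = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "Eb", "Bb", "F"]
TEMPERED = {("C", "G"), ("G", "D"), ("D", "A"), ("B", "F#")}

pitch, total = {}, 0
for i, note in enumerate(NOTES):
    pitch[note] = total % 1200
    nxt = NOTES[(i + 1) % 12]
    total += 696 if (note, nxt) in TEMPERED else 702

# The major third on each degree spans four steps along the circle.
for i, note in enumerate(NOTES):
    third = (pitch[NOTES[(i + 4) % 12]] - pitch[note]) % 1200
    print(f"{note:>2}: major third of {third} cents")
```

The printed thirds grow steadily from 390 cents around C and F to the full Pythagorean 408 cents in the remotest keys: exactly the meantone-to-Pythagore gradient described above, with no wolf anywhere.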
Many other interesting temperaments exist, and we might close by proposing that the interested reader study a few of them on his or her own. For example, in Marpurg's I temperament, three fifths are tempered by 8 cents each and placed symmetrically around the circle, the others being pure; this results in an approximation of equal temperament. In Grammateus' temperament, the diatonic notes are tuned according to Pythagore's tuning, and the chromatic notes are placed halfway between neighboring diatonics. A complete table of cent values for the temperaments discussed here (and then some) is available as an annex.

My main work is software development (telephony signalling protocols) at Alcatel-Lucent. But, in a previous life, I was very much into (early) music, and also into tuning (as a semi-professional harpsichord tuner); this led me to a study of historical tunings and temperaments.