In an era of designer mice and complex multi-component DNA constructs, it is difficult to imagine that there was a time (not so long ago) when biologists lacked the ability to manipulate DNA sequences. The dawn of the modern molecular biology era was brought about by a series of influential innovations known as the Molecular Biology Revolution. Here are a few of them:
In the early 1950s, two groups described “restriction factors” in bacteria that prevented bacteriophage infection1,2, but it wasn’t until 15 years later that the first restriction enzymes were isolated by Arber and Linn, and independently by Smith and Wilcox3,4. Since then, numerous restriction enzymes have been isolated and become commercially available.
In the early 1960s, the work of Jean Weigle, Matthew Meselson, and Grete Kellenberger demonstrated that DNA molecules could reassemble by ligation5,6. In 1967, enzymes with this ligase activity were isolated by several groups7-11. For the first time, DNA fragments could be assembled in a test tube in specific arrangements.
The power of restriction enzymes and ligases was quickly harnessed by Paul Berg, who assembled the first truly recombinant DNA molecules using DNA from E. coli, bacteriophage and Simian virus 4012. At nearly the same time, Cohen et al. performed the first complete demonstration of the power of modern molecular biology: They used restriction digestion, ligation, and transformation to transfer an engineered, functional DNA molecule into a bacterial strain13.
In the early 1980s, two key advances were made that revolutionized molecular biology even further. First, Marvin Caruthers developed phosphoramidite DNA synthesis, which made automated oligonucleotide synthesis practical14,15. Second, Kary Mullis published the polymerase chain reaction (PCR)16. Using oligo synthesis and PCR together, researchers suddenly had the ability to selectively amplify, and therefore clone, virtually any target DNA sequence. The use of many oligos with complementary overlaps, coupled with ligation and PCR, also allowed large DNA fragments with de novo sequence to be created from scratch.
Most of the innovations leading to the Molecular Biology Revolution are now considered common knowledge among biologists. Custom DNA constructs have become ubiquitous tools in numerous types of experiments, and the assembly of a recombinant plasmid is no longer considered a significant scientific achievement. In fact, many labs have begun using external resources, such as core facilities, plasmid repositories, and commercial cloning services for their cloning needs, abandoning molecular cloning in their lab altogether.
VectorBuilder is the latest addition to the Molecular Biology Revolution. It is a highly innovative web tool that allows you to design complex custom DNA vectors with just a few mouse clicks. You can then purchase the physical vector for as little as $100, and get it mailed to your lab in a week or two. You can choose from a wide range of vector systems, including regular plasmids, lentiviral vectors, shRNA vectors, CRISPR/Cas9 vectors, and many more.
A DNA vector is just a reagent, not a research project. So why spend weeks cloning your own vector when you can get it so cheaply and quickly from VectorBuilder?
Come join the new Molecular Biology Revolution… Come join VectorBuilder!
We will respond to you in 1-2 business days.
Climate of India
Analysed according to the Köppen system, the climate of India resolves into six major climatic subtypes; their influences give rise to desert in the west, alpine tundra and glaciers in the north, humid tropical regions supporting rain forests in the southwest, and Indian Ocean island territories that flank the Indian subcontinent. Regions have starkly different—yet tightly clustered—microclimates. The nation is largely subject to four seasons: winter (December to February), summer (March to May), a monsoon (rainy) season (June to September), and a post-monsoon period (October and November).
India's geography and geology are climatically pivotal: the Thar Desert in the northwest and the Himalayas in the north work in tandem to effect a culturally and economically important monsoonal regime. As Earth's highest and most massive mountain range, the Himalayan system bars the influx of frigid katabatic winds from the icy Tibetan Plateau and northerly Central Asia. Most of North India is thus kept warm or is only mildly chilly or cold during winter; the same thermal dam keeps most regions in India hot in summer.
Though the Tropic of Cancer—the boundary between the tropics and subtropics—passes through the middle of India, the bulk of the country can be regarded as climatically tropical. As in much of the tropics, monsoonal and other weather patterns in India can be wildly unstable: epochal droughts, floods, cyclones, and other natural disasters are sporadic, but have displaced or ended millions of human lives. There is widespread scientific consensus that such climatic events, and the unpredictability that attends them, are likely to change in frequency and increase in severity across South Asia. Ongoing and future vegetative changes, current sea level rise, and the attendant inundation of India's low-lying coastal areas are other impacts, current or predicted, that are attributable to global warming.
- 1 History
- 2 Regions
- 3 Seasons
- 4 Statistics
- 5 Disasters
- 6 Extremes
- 7 Global warming
- 8 Atmospheric pollution
- 9 Notes
- 10 Citations
- 11 References
- 12 Further reading
- 13 External links
During the Triassic period of some 251–199.6 Ma, the Indian subcontinent was part of a vast supercontinent known as Pangaea. Despite its position within a high-latitude belt at 55–75° S—as opposed to its current position between 5 and 35° N, latitudes now occupied by Greenland and parts of the Antarctic Peninsula—India likely experienced a humid temperate climate with warm and frost-free weather, though with well-defined seasons. India later merged into the southern supercontinent Gondwana, a process beginning some 550–500 Ma. During the Late Paleozoic, Gondwana extended from a point at or near the South Pole to near the equator, where the Indian craton (stable continental crust) was positioned, resulting in a mild climate favourable to hosting high-biomass ecosystems. This is underscored by India's vast coal reserves—much of which dates from the late Paleozoic sedimentary sequence—the fourth-largest in the world. During the Mesozoic, the world, including India, was considerably warmer than today. With the coming of the Carboniferous, global cooling stoked extensive glaciation, which spread northwards from South Africa towards India; this cool period lasted well into the Permian.
Tectonic movement by the Indian Plate caused it to pass over a geologic hotspot—the Réunion hotspot—now occupied by the volcanic island of Réunion. This resulted in a massive flood basalt event that laid down the Deccan Traps some 60–68 Ma, at the end of the Cretaceous period. This may have contributed to the global Cretaceous–Paleogene extinction event, which caused India to experience significantly reduced insolation. Elevated atmospheric levels of sulphur gases formed aerosols such as sulphur dioxide and sulphuric acid, similar to those found in the atmosphere of Venus; these precipitated as acid rain. Elevated carbon dioxide emissions also contributed to the greenhouse effect, causing warmer weather that lasted long after the atmospheric shroud of dust and aerosols had cleared. Further climatic changes 20 million years ago, long after India had crashed into the Laurasian landmass, were severe enough to cause the extinction of many endemic Indian forms. The formation of the Himalayas resulted in blockage of frigid Central Asian air, preventing it from reaching India; this made its climate significantly warmer and more tropical in character than it would otherwise have been.
India is home to an extraordinary variety of climatic regions, ranging from tropical in the south to temperate and alpine in the Himalayan north, where elevated regions receive sustained winter snowfall. The nation's climate is strongly influenced by the Himalayas and the Thar Desert. The Himalayas, along with the Hindu Kush mountains in Pakistan, prevent cold Central Asian katabatic winds from blowing in, keeping the bulk of the Indian subcontinent warmer than most locations at similar latitudes. Simultaneously, the Thar Desert plays a role in attracting moisture-laden southwest summer monsoon winds that, between June and October, provide the majority of India's rainfall. Four major climatic groupings predominate, into which fall seven climatic zones that, as designated by experts, are defined on the basis of such traits as temperature and precipitation. Groupings are assigned codes according to the Köppen climate classification system.
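The grouping-to-code relationship amounts to a simple lookup. The sketch below is illustrative only: the subtype names paraphrase the zones described in this article, and the Köppen codes are the standard labels for those climate types, not an official IMD chart.

```python
# Illustrative sketch: India's major climatic subtypes paired with the
# standard Koppen codes for those climate types. These pairings follow the
# general Koppen scheme; they are not an official IMD designation.
KOPPEN_CODES = {
    "tropical wet (monsoon)": "Am",
    "tropical wet and dry (savanna)": "Aw",
    "arid (desert)": "BWh",
    "semi-arid (steppe)": "BSh",
    "humid subtropical": "Cwa",
    "montane (alpine)": "E",
}

def koppen_code(subtype: str) -> str:
    """Return the Koppen code for a named climatic subtype."""
    return KOPPEN_CODES[subtype]

if __name__ == "__main__":
    for zone, code in sorted(KOPPEN_CODES.items(), key=lambda kv: kv[1]):
        print(f"{code:<4} {zone}")
```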
A tropical rainy climate governs regions experiencing persistent warm or high temperatures, which normally do not fall below 18 °C (64 °F). India hosts two climatic subtypes that fall under this group. The most humid is the tropical wet climate—also known as a tropical monsoon climate—that covers a strip of southwestern lowlands abutting the Malabar Coast, the Western Ghats, and southern Assam. India's two island territories, Lakshadweep and the Andaman and Nicobar Islands, are also subject to this climate. Characterised by moderate to high year-round temperatures, even in the foothills, its rainfall is seasonal but heavy—typically above 2,000 mm (79 in) per year. Most rainfall occurs between May and November; this moisture is enough to sustain lush forests and other vegetation for the rest of the mainly dry year. December to March are the driest months, when days with precipitation are rare. The heavy monsoon rains are responsible for the exceptionally biodiverse tropical wet forests in parts of these regions. In India a tropical wet and dry climate is more common. Noticeably drier than areas with a tropical monsoon climate, it prevails over most of inland peninsular India except for a semi-arid rain shadow east of the Western Ghats. Winter and early summer are long and dry periods with temperatures averaging above 18 °C (64 °F). Summer is exceedingly hot; temperatures in low-lying areas may exceed 50 °C (122 °F) during May, leading to heat waves that can each kill hundreds of Indians.
The rainy season lasts from June to September; annual rainfall averages between 750 and 1,500 mm (30–59 in) across the region. Once the dry northeast monsoon begins in September, most precipitation in India falls on Tamil Nadu, leaving other states comparatively dry. The state's normal annual rainfall is about 945 mm (37.2 in), of which 48% is delivered by the northeast monsoon and 32% by the southwest monsoon. Since the state is entirely dependent on rains for recharging its water resources, monsoon failures lead to acute water scarcity and severe drought. Tamil Nadu is classified into seven agro-climatic zones: northeast, northwest, west, southern, high rainfall, high altitude hilly, and the Kaveri delta, the last being the most fertile agricultural zone. The Ganges Delta lies mostly in the tropical wet climate zone: it receives between 1,500 and 2,000 mm (59 and 79 in) of rainfall each year in the western part, and 2,000 to 3,000 mm (79 to 118 in) in the eastern part. The coolest month of the year, on average, is January; April and May are the warmest months. Average temperatures in January range from 14 to 25 °C (57 to 77 °F), and average temperatures in April range from 25 to 35 °C (77 to 95 °F). July is on average the wettest month: over 330 mm (13 in) of rain falls on the delta.
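As a quick arithmetic check, Tamil Nadu's stated monsoon shares can be converted into millimetre totals against the 945 mm normal quoted above:

$$0.48 \times 945\ \text{mm} \approx 454\ \text{mm (northeast monsoon)}, \qquad 0.32 \times 945\ \text{mm} \approx 302\ \text{mm (southwest monsoon)},$$

leaving roughly $0.20 \times 945 \approx 189$ mm (20%) for the remainder of the year.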
A tropical arid and semi-arid climate dominates regions where the rate of moisture loss through evapotranspiration exceeds that from precipitation; it is subdivided into three climatic subtypes. The first, a tropical semi-arid steppe climate, predominates over a long stretch of land south of the Tropic of Cancer and east of the Western Ghats and the Cardamom Hills. The region, which includes Karnataka, inland Tamil Nadu, western Andhra Pradesh, and central Maharashtra, receives 400–750 millimetres (15.7–29.5 in) of rainfall annually. It is drought-prone, as it tends to have less reliable rainfall due to sporadic lateness or failure of the southwest monsoon. Karnataka is divided into three zones – coastal, north interior, and south interior. Of these, the coastal zone receives the heaviest rainfall, averaging about 3,638.5 mm per annum, far in excess of the state average of 1,139 mm (45 in). Bucking the regional norm, Agumbe in the Shivamogga district receives the second-highest annual rainfall in India. North of the Krishna River, the summer monsoon is responsible for most rainfall; to the south, significant post-monsoon rainfall also occurs in October and November. In December, the coldest month, temperatures still average around 20–24 °C (68–75 °F). The months from March to May are hot and dry; mean monthly temperatures hover around 32 °C (90 °F), with 320 millimetres (13 in) of precipitation. Hence, without artificial irrigation, this region is not suitable for permanent agriculture.
Most of western Rajasthan experiences an arid climatic regime. Cloudbursts are responsible for virtually all of the region's annual precipitation, which totals less than 300 millimetres (11.8 in). Such bursts happen when monsoon winds sweep into the region during July, August, and September. Such rainfall is highly erratic; regions experiencing rainfall one year may not see precipitation for the next couple of years or so. Atmospheric moisture is largely prevented from precipitating due to continuous downdrafts and other factors. The summer months of May and June are exceptionally hot; mean monthly temperatures in the region hover around 35 °C (95 °F), with daily maxima occasionally topping 50 °C (122 °F). During winters, temperatures in some areas can drop below freezing due to waves of cold air from Central Asia. There is a large diurnal range of about 14 °C (25.2 °F) during summer; this widens by several degrees during winter.
To the west, in Gujarat, diverse climate conditions obtain. The winters are mild, pleasant, and dry with average daytime temperatures around 29 °C (84 °F) and nights around 12 °C (54 °F) with virtually full sun and clear nights. Summers are hot and dry with daytime temperatures around 41 °C (106 °F) and nights no lower than 29 °C (84 °F). In the weeks before the monsoon temperatures are similar to the above, but high humidity makes the air more uncomfortable. Relief comes with the monsoon. Temperatures are around 35 °C (95 °F) but humidity is very high; nights are around 27 °C (81 °F). Most of the rainfall occurs in this season, and the rain can cause severe floods. The sun is often occluded during the monsoon season.
East of the Thar Desert, the Punjab-Haryana-Kathiawar region experiences a tropical and sub-tropical steppe climate. Haryana's climate resembles that of the other states of the northern plains: extreme summer heat of up to 50 °C and winter cold as low as 1 °C. May and June are hottest; December and January are coldest. Rainfall is varied, with the Shivalik Hills region being the wettest and the Aravali Hills region being the driest. About 80% of the rainfall occurs in the monsoon season of July–September, which can cause flooding. The Punjabi climate is also governed by extremes of hot and cold. Areas near the Himalayan foothills receive heavy rainfall whereas those farther from them are hot and dry. Punjab's three-season climate sees summer months that span from mid-April to the end of June. Temperatures typically range from −2 °C to 40 °C, but can reach 47 °C (117 °F) in summer and −4 °C in winter. The zone, a transitional climatic region separating tropical desert from humid sub-tropical savanna and forests, experiences temperatures that are less extreme than those of the desert. Average annual rainfall is 300–650 millimetres (11.8–25.6 in), but is very unreliable; as in much of the rest of India, the southwest monsoon accounts for most precipitation. Daily summer temperature maxima rise to around 40 °C (104 °F); as a result, the natural vegetation typically comprises short, coarse grasses.
Most of Northeast India and much of North India are subject to a humid subtropical climate. Though they experience hot summers, temperatures during the coldest months may fall as low as 0 °C (32 °F). Due to ample monsoon rains, India has only one subtype of this climate under the Köppen system: Cwa. In most of this region, there is very little precipitation during the winter, owing to powerful anticyclonic and katabatic (downward-flowing) winds from Central Asia.
Humid subtropical regions are subject to pronounced dry winters. Winter rainfall—and occasionally snowfall—is associated with large storm systems such as "Nor'westers" and "Western disturbances"; the latter are steered by westerlies towards the Himalayas. Most summer rainfall occurs during powerful thunderstorms associated with the southwest summer monsoon; occasional tropical cyclones also contribute. Annual rainfall ranges from less than 1,000 millimetres (39 in) in the west to over 2,500 millimetres (98 in) in parts of the northeast. As most of this region is far from the ocean, the wide temperature swings more characteristic of a continental climate predominate; the swings are wider than those in tropical wet regions, ranging from 24 °C (75 °F) in north-central India to 27 °C (81 °F) in the east.
India's northernmost areas are subject to a montane, or alpine, climate. In the Himalayas, the rate at which an air mass's temperature falls per kilometre (3,281 ft) of altitude gained (the dry adiabatic lapse rate) is 9.8 °C/km. In terms of environmental lapse rate, ambient temperatures fall by 6.5 °C (11.7 °F) for every 1,000 metres (3,281 ft) rise in altitude. Thus, climates ranging from nearly tropical in the foothills to tundra above the snow line can coexist within several hundred metres of each other. Sharp temperature contrasts between sunny and shady slopes, high diurnal temperature variability, temperature inversions, and altitude-dependent variability in rainfall are also common. The northern side of the western Himalayas, also known as the trans-Himalayan belt, is a region of barren, arid, frigid, and wind-blown wastelands. Most precipitation occurs as snowfall during the late winter and spring months.
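As a worked example of the environmental lapse rate quoted above (the station altitudes here are hypothetical): a valley floor at 300 m and a ridge at 4,300 m differ by $\Delta h = 4\ \text{km}$, so

$$\Delta T \approx \Gamma_{\text{env}}\,\Delta h = 6.5\ ^{\circ}\text{C}\,\text{km}^{-1} \times 4\ \text{km} = 26\ ^{\circ}\text{C}.$$

A 30 °C afternoon in the foothills thus corresponds to roughly 4 °C on the ridge, spanning near-tropical to alpine conditions over a short horizontal distance.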
Areas south of the Himalayas are largely protected from cold winter winds coming in from the Asian interior. The leeward side (northern face) of the mountains receives less rain while the southern slopes, well-exposed to the monsoon, get heavy rainfall. Areas situated at elevations of 1,070–2,290 metres (3,510–7,510 ft) receive the heaviest rainfall, which decreases rapidly at elevations above 2,290 metres (7,510 ft). The Himalayas experience their heaviest snowfall between December and February and at elevations above 1,500 metres (4,921 ft). Snowfall increases with elevation by up to several dozen millimetres (about 2 in) for each 100-metre (330 ft) gain. Elevations above 6,000 metres (19,685 ft) never experience rain; all precipitation falls as snow.
- Winter, occurring from December to March. The year's coldest months are December and January, when temperatures average around 10–15 °C (50–59 °F) in the northwest; temperatures rise as one proceeds towards the equator, peaking around 20–25 °C (68–77 °F) in mainland India's southeast.
- Summer or pre-monsoon season, lasting from April to June (April to July in northwestern India). In western and southern regions, the hottest month is April; for northern regions, May is the hottest month. Temperatures average around 32–40 °C (90–104 °F) in most of the interior.
- Monsoon or rainy season, lasting from July to September. The season is dominated by the humid southwest summer monsoon, which slowly sweeps across the country beginning in late May or early June. Monsoon rains begin to recede from North India at the beginning of October. South India typically receives more rainfall.
- Post-monsoon or autumn season, lasting from October to November. In northwestern India, October and November are usually cloudless. Tamil Nadu receives most of its annual precipitation in the northeast monsoon season.
The Himalayan states, being more temperate, experience an additional season, spring, which coincides with the first weeks of summer in southern India. Traditionally, Indians note six seasons or Ritu, each about two months long. These are the spring season (Sanskrit: vasanta), summer (grīṣma), monsoon season (varṣā), autumn (śarada), winter (hemanta), and prevernal season (śiśira). These are based on the astronomical division of the twelve months into six parts. The ancient Hindu calendar also reflects these seasons in its arrangement of months.
Once the monsoons subside, average temperatures gradually fall across India. As the Sun's vertical rays move south of the equator, most of the country experiences moderately cool weather; temperatures change by about 0.6 °C (1.08 °F) per degree of latitude. December and January are the coldest months, with mean temperatures of 10–15 °C (50–59 °F) in the Indian Himalayas. Mean temperatures are higher in the east and south, where they reach 20–25 °C (68–77 °F).
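The 0.6 °C-per-degree figure gives a usable rule of thumb. For instance, New Delhi (about 28.6° N) lies roughly 15.5 degrees of latitude north of Chennai (about 13.1° N), so their winter means should differ by approximately

$$15.5 \times 0.6\ ^{\circ}\text{C} \approx 9\ ^{\circ}\text{C},$$

consistent with the roughly 10 °C gap between the winter means quoted above for the northwest (10–15 °C) and the south and east (20–25 °C).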
In northwestern India, virtually cloudless conditions prevail in October and November, resulting in wide diurnal temperature swings; as in much of the Deccan Plateau, they register at 16–20 °C (61–68 °F). However, from January to February, "western disturbances" bring heavy bursts of rain and snow. These extra-tropical low-pressure systems originate in the eastern Mediterranean Sea. They are carried towards India by the subtropical westerlies, the prevailing winds at North India's range of latitudes. Once their passage is hindered by the Himalayas, they can proceed no further, releasing significant precipitation over the southern Himalayas.
There is huge variation in the climatic conditions of Himachal Pradesh due to variation in altitude (450–6500 metres). The climate varies from hot and sub-humid tropical (450–900 metres) in the southern low tracts, through warm and temperate (900–1800 metres) and cool and temperate (1900–2400 metres), to cold, glacial, and alpine (2400–4800 metres) in the high northern and eastern mountain ranges. By October, nights and mornings are very cold. Snowfall at elevations of nearly 3000 m totals about 3 m and lasts from early December to late March. Elevations above 4500 m support perpetual snow. Spring lasts from mid-February to mid-April, when the weather is pleasant and comfortable. The rainy season starts at the end of June, turning the landscape lush, green, and fresh; streams and natural springs are replenished. The heavy rains of July and August cause extensive damage through erosion, floods, and landslides. Of all the state's districts, Dharamsala receives the highest rainfall, about 3,400 mm (134 in). Spiti is the driest area of the state, with annual rainfall below 50 mm. The six Himalayan states (Jammu and Kashmir in the extreme north, Himachal Pradesh, Uttarakhand, Sikkim, northern West Bengal, and Arunachal Pradesh) experience heavy snowfall; Manipur and Nagaland, though not located in the Himalayas, also experience snowfall. In Jammu and Kashmir, blizzards occur regularly, disrupting travel and other activities.
The rest of North India, including the Indo-Gangetic Plain, almost never receives snow. Temperatures in the plains occasionally fall below freezing, though never for more than one or two days. Winter highs in Delhi range from 16 to 21 °C (61 to 70 °F). Nighttime temperatures average 2–8 °C (36–46 °F). In the plains of Punjab, lows can fall below freezing, dropping to around −6 °C (21 °F) in Amritsar. Frost sometimes occurs, but the hallmark of the season is the notorious fog, which frequently disrupts daily life; fog grows thick enough to hinder visibility and disrupt air travel 15–20 days annually. In Bihar, in the middle of the Ganges plain, hot weather sets in early and summer lasts until the middle of June, with the year's highest temperatures usually registered in May, the hottest month. Like the rest of the north, Bihar also experiences dust storms, thunderstorms, and dust-raising winds during the hot season. Dust storms with velocities of 48–64 km/h (30–40 mph) are most frequent in May, with a secondary maximum in April and June. The hot winds (loo) of the Bihar plains blow during April and May with an average velocity of 8–16 km/h (5–10 mph); they greatly affect human comfort during this season. The rainy season, brought by the southwest monsoon, begins in June; the rainiest months are July and August. Bihar has three distinct areas where rainfall exceeds 1,800 mm (71 in): two in the northern and northwestern portions of the state, and a third in the area around Netarhat. The southwest monsoon normally withdraws from Bihar in the first week of October. Eastern India's climate is much milder, with moderately warm days and cool nights. Highs range from 21 °C (70 °F) in Patna to 23 °C (73 °F) in Kolkata (Calcutta); lows average from 7 °C (45 °F) in Patna to 9 °C (48 °F) in Kolkata.
Frigid winds from the Himalayas can depress temperatures near the Brahmaputra River. The Himalayas have a profound effect on the climate of the Indian subcontinent and the Tibetan plateau by preventing frigid and dry Arctic winds from blowing south into the subcontinent, which keeps South Asia much warmer than corresponding temperate regions on the other continents. The range also forms a barrier for the monsoon winds, keeping them from travelling northwards, and causing heavy rainfall in the Terai region instead. The Himalayas are indeed believed to play an important role in the formation of Central Asian deserts such as the Taklamakan and Gobi. The mountain ranges prevent western winter disturbances from Iran from travelling further east, resulting in much snow in Kashmir and rainfall for parts of Punjab and northern India. Though the Himalayas are a barrier to the cold northerly winter winds, the Brahmaputra valley receives part of the frigid winds, thus lowering the temperature in Northeast India and Bangladesh. The Himalayas, often called "the Roof of the World", contain the greatest area of glaciers and permafrost outside of the poles. Ten of Asia's largest rivers flow from there. The two Himalayan states in the east, Sikkim and Arunachal Pradesh, receive substantial snowfall. The extreme north of West Bengal, centred around Darjeeling, experiences snowfall, but only rarely. Parts of Uttar Pradesh are also affected by snowfall of several metres in places. Rainfall in that state ranges from 1,000–2,000 mm (39–79 in) in the east to 600–1,000 mm (24–39 in) in the west.
In South India, particularly the hinterlands of Maharashtra, Madhya Pradesh, parts of Karnataka, and Andhra Pradesh, somewhat cooler weather prevails. Minimum temperatures in western Maharashtra, Madhya Pradesh, and Chhattisgarh hover around 10 °C (50 °F); in the southern Deccan Plateau, they reach 16 °C (61 °F). Coastal areas—especially those near the Coromandel Coast and adjacent low-elevation interior tracts—are warm, with daily high temperatures of 30 °C (86 °F) and lows of around 21 °C (70 °F). The Western Ghats, including the Nilgiri Range, are exceptional; lows there can fall below freezing. This compares with a range of 12–14 °C (54–57 °F) on the Malabar Coast; there, as is the case for other coastal areas, the Indian Ocean exerts a strong moderating influence on weather. The region averages 800 millimetres (31 in) of rainfall per year, most of which falls between October and December. The topography of the Bay of Bengal and the staggered weather pattern prevalent during the season favour the northeast monsoon, which tends to deliver cyclonic storms rather than steady precipitation. As a result, the coast is hit by inclement weather almost every year between October and January.
Summer in northwestern India lasts from April to July, and in the rest of the country from March to June. The temperatures in the north rise as the vertical rays of the Sun reach the Tropic of Cancer. The hottest month for the western and southern regions of the country is April; for most of North India, it is May. Temperatures of 50 °C (122 °F) and higher have been recorded in parts of India during this season. Another striking feature of summer is the Loo: strong, gusty, hot, dry winds that blow during the day in northwestern India. Direct exposure to these winds may be fatal. In cooler regions of North India, immense pre-monsoon squall-line thunderstorms, known locally as "Nor'westers", commonly drop large hailstones. In Himachal Pradesh, summer lasts from mid-April till the end of June, and most parts become very hot (except the alpine zone, which experiences a mild summer), with average temperatures ranging from 28 °C (82 °F) to 32 °C (90 °F). Winter there lasts from late November till mid-March. Snowfall is generally common in alpine tracts above 2,200 metres (7,218 ft), especially those in the higher- and trans-Himalayan regions. Near the coast the temperature hovers around 36 °C (97 °F), and the proximity of the sea increases the level of humidity. In southern India, temperatures are higher on the east coast by a few degrees compared to the west coast.
By May, most of the Indian interior experiences mean temperatures over 32 °C (90 °F), while maximum temperatures often exceed 40 °C (104 °F). In the hot months of April and May, western disturbances, with their cooling influence, may still arrive, but rapidly diminish in frequency as summer progresses. Notably, a higher frequency of such disturbances in April correlates with a delayed monsoon onset (thus extending summer) in northwest India. In eastern India, monsoon onset dates have been steadily advancing over the past several decades, resulting in shorter summers there.
Altitude affects the temperature to a large extent, with higher parts of the Deccan Plateau and other areas being relatively cooler. Hill stations, such as Ootacamund ("Ooty") in the Western Ghats and Kalimpong in the eastern Himalayas, with average maximum temperatures of around 25 °C (77 °F), offer some respite from the heat. At lower elevations, in parts of northern and western India, a strong, hot, and dry wind known as the Loo blows in from the west during the daytime, bringing very high temperatures, in some cases up to around 45 °C (113 °F); it can cause fatal cases of sunstroke. Tornadoes may also occur, concentrated in a corridor stretching from northeastern India towards Pakistan. They are rare, however; only several dozen have been reported since 1835.
The southwest summer monsoon, a four-month period when massive convective thunderstorms dominate India's weather, is Earth's most productive wet season. A product of southeast trade winds originating from a high-pressure mass centred over the southern Indian Ocean, the monsoonal torrents supply over 80% of India's annual rainfall. Attracted by a low-pressure region centred over South Asia, the mass spawns surface winds that ferry humid air into India from the southwest. These inflows ultimately result from a northward shift of the local jet stream, which itself results from rising summer temperatures over Tibet and the Indian subcontinent. The void left by the jet stream, which switches from a route just south of the Himalayas to one tracking north of Tibet, then attracts warm, humid air.
The main factor behind this shift is the high summer temperature difference between Central Asia and the Indian Ocean. This is accompanied by a seasonal excursion of the normally equatorial intertropical convergence zone (ITCZ), a low-pressure belt of highly unstable weather, northward towards India. This system intensified to its present strength as a result of the Tibetan Plateau's uplift, which accompanied the Eocene–Oligocene transition event, a major episode of global cooling and aridification which occurred 34–49 Ma.
The southwest monsoon arrives in two branches: the Bay of Bengal branch and the Arabian Sea branch. The latter extends towards a low-pressure area over the Thar Desert and is roughly three times stronger than the Bay of Bengal branch. The monsoon typically breaks over Indian territory by around 25 May, when it lashes the Andaman and Nicobar Islands in the Bay of Bengal. It strikes the Indian mainland around 1 June near the Malabar Coast of Kerala. By 9 June, it reaches Mumbai; it appears over Delhi by 29 June. The Bay of Bengal branch, which initially tracks the Coromandel Coast northeast from Cape Comorin to Orissa, swerves to the northwest towards the Indo-Gangetic Plain. The Arabian Sea branch moves northeast towards the Himalayas. By the first week of July, the entire country experiences monsoon rain; on average, South India receives more rainfall than North India. However, Northeast India receives the most precipitation. Monsoon clouds begin retreating from North India by the end of August; the monsoon withdraws from Mumbai by 5 October. As India further cools during September, the southwest monsoon weakens. By the end of November, it has left the country.
Monsoon rains impact the health of the Indian economy; as Indian agriculture employs 600 million people and accounts for 20% of the national GDP, good monsoons correlate with a booming economy. Weak or failed monsoons (droughts) result in widespread agricultural losses and substantially hinder overall economic growth. When they do arrive, the rains reduce temperatures and can replenish groundwater tables, rivers, and lakes.
During the post-monsoon months of October to December, a different monsoon cycle, the northeast (or "retreating") monsoon, brings dry, cool, and dense Central Asian air masses to large parts of India. Winds spill across the Himalayas and flow to the southwest across the country, resulting in clear, sunny skies. Though the India Meteorological Department (IMD) and some other sources refer to this period as a fourth ("post-monsoon") season, others designate only three seasons. Depending on location, this period lasts from October to November, after the southwest monsoon has peaked. Less and less precipitation falls, and vegetation begins to dry out. In most parts of India, this period marks the transition from wet to dry seasonal conditions. Average daily maximum temperatures range between 28 and 34 °C (82 and 93 °F).
The northeast monsoon, which begins in September, lasts through the post-monsoon season, and ends only in March. It carries winds that have already lost their moisture while crossing Central Asia and the vast rain-shadow region lying north of the Himalayas. They cross India diagonally from northeast to southwest. However, the large indentation made by the Bay of Bengal into India's eastern coast means that the flows are humidified before reaching Cape Comorin and the rest of Tamil Nadu; as a result, the state, and also some parts of Kerala, experience significant precipitation in the post-monsoon and winter periods. Parts of West Bengal, Orissa, Andhra Pradesh, Karnataka, and Northeast India also receive minor precipitation from the northeast monsoon.
Shown below are temperature and precipitation data for selected Indian cities; these represent the full variety of major Indian climate types. Figures have been grouped by the four-season classification scheme used by the IMD;[N 1] year-round averages and totals are also displayed.
Climate-related natural disasters cause massive losses of Indian life and property. Droughts, flash floods, cyclones, avalanches, landslides brought on by torrential rains, and snowstorms pose the greatest threats. Other dangers include frequent summer dust storms, which usually track from north to south; they cause extensive property damage in North India and deposit large amounts of dust from arid regions. Hail is also common in parts of India, causing severe damage to standing crops such as rice and wheat.
Floods and landslides
In the Lower Himalaya, landslides are common. The young age of the region's hills results in labile rock formations, which are susceptible to slippage. Rising population and development pressures, particularly from logging and tourism, cause deforestation. The resulting denuded hillsides exacerbate the severity of landslides, since tree cover impedes the downhill flow of water. Parts of the Western Ghats also suffer from low-intensity landslides. Avalanches occur in Kashmir, Himachal Pradesh, and Sikkim.
Floods are the most common natural disaster in India. The heavy southwest monsoon rains cause the Brahmaputra and other rivers to distend their banks, often flooding surrounding areas. Though they provide rice paddy farmers with a largely dependable source of natural irrigation and fertilisation, the floods can kill thousands and displace millions. Excess, erratic, or untimely monsoon rainfall may also wash away or otherwise ruin crops. Almost all of India is flood-prone, and extreme precipitation events, such as flash floods and torrential rains, have become increasingly common in central India over the past several decades, coinciding with rising temperatures. Mean annual precipitation totals have remained steady due to the declining frequency of weather systems that generate moderate amounts of rain.
Tropical cyclones, which are severe storms spun off from the Intertropical Convergence Zone, may affect thousands of Indians living in coastal regions. Tropical cyclogenesis is particularly common in the northern reaches of the Indian Ocean in and around the Bay of Bengal. Cyclones bring with them heavy rains, storm surges, and winds that often cut affected areas off from relief and supplies. In the North Indian Ocean Basin, the cyclone season runs from April to December, with peak activity between May and November. Each year, an average of eight storms with sustained wind speeds greater than 63 km/h (39 mph) form; of these, two strengthen into true tropical cyclones, which have sustained winds greater than 117 km/h (73 mph). On average, a major (Category 3 or higher) cyclone develops every other year.
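The storm counts above rest on two wind-speed thresholds, which lend themselves to a small classifier. The sketch below uses only the figures given in this paragraph (63 km/h and 117 km/h); actual IMD practice distinguishes finer categories, and the labels here are simplified.

```python
# Minimal sketch of North Indian Ocean storm bucketing, using only the two
# thresholds cited in the text. Real IMD categories are more fine-grained.

def classify_storm(sustained_wind_kmh: float) -> str:
    """Bucket a storm by its sustained wind speed in km/h."""
    if sustained_wind_kmh < 63:
        return "depression"           # below the 63 km/h storm threshold
    if sustained_wind_kmh <= 117:
        return "cyclonic storm"       # 63-117 km/h; ~8 such storms per year
    return "tropical cyclone"         # above 117 km/h; ~2 per year on average

if __name__ == "__main__":
    for wind_kmh in (45.0, 80.0, 130.0):
        print(f"{wind_kmh:>5.0f} km/h -> {classify_storm(wind_kmh)}")
```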
During summer, the Bay of Bengal is subject to intense heating, giving rise to humid and unstable air masses that morph into cyclones. The 1737 Calcutta cyclone, the 1970 Bhola cyclone, and the 1991 Bangladesh cyclone rank among the most powerful cyclones to strike India, devastating the coasts of eastern India and neighbouring Bangladesh. Widespread death and property destruction are reported every year in the exposed coastal states of West Bengal, Orissa, Andhra Pradesh, and Tamil Nadu. India's western coast, bordering the more placid Arabian Sea, experiences cyclones only rarely; these mainly strike Gujarat and, less frequently, Kerala.
Cyclone 05B, a supercyclone that struck Orissa on 29 October 1999, was the deadliest in more than a quarter-century. With peak winds of 160 miles per hour (257 km/h), it was the equivalent of a Category 5 hurricane. Almost two million people were left homeless; the lives of another 20 million were disrupted by the cyclone. Officially, 9,803 people died from the storm; unofficial estimates place the death toll at over 10,000.
Indian agriculture is heavily dependent on the monsoon as a source of water. In some parts of India, the failure of the monsoons results in water shortages and below-average crop yields. This is particularly true of major drought-prone regions such as southern and eastern Maharashtra, northern Karnataka, Andhra Pradesh, Orissa, Gujarat, and Rajasthan. In the past, droughts have periodically led to major Indian famines. These include the Bengal famine of 1770, in which up to one third of the population in affected areas died; the 1876–1877 famine, in which over five million people died; the 1899 famine, in which over 4.5 million died; and the Bengal famine of 1943, in which over five million died from starvation and famine-related illnesses.
All such episodes of severe drought correlate with El Niño-Southern Oscillation (ENSO) events. El Niño-related droughts have also been implicated in periodic declines in Indian agricultural output. Nevertheless, ENSO events that have coincided with abnormally high sea surface temperatures in the Indian Ocean—in one instance during 1997 and 1998 by up to 3 °C (5.4 °F)—have resulted in increased oceanic evaporation, resulting in unusually wet weather across India. Such anomalies have occurred during a sustained warm spell that began in the 1990s. A contrasting phenomenon is that, instead of the usual high-pressure air mass over the southern Indian Ocean, an ENSO-related oceanic low-pressure convergence centre forms; it then continually pulls dry air from Central Asia, desiccating India during what should have been the humid summer monsoon season. This reversed air flow causes India's droughts. The degree to which an ENSO event raises sea surface temperatures in the central Pacific Ocean influences the degree of drought.
Extreme Temperatures: Low
India's lowest recorded temperature was −45 °C (−49 °F) in Dras, Ladakh, in eastern Jammu and Kashmir; the reading was taken with non-standard equipment. Figures as low as −30.6 °C (−23 °F) have been taken in Leh, further east in Ladakh. However, temperatures on the disputed but Indian-controlled Siachen Glacier near Bilafond La (5,450 metres or 17,881 feet) and Sia La (5,589 metres or 18,337 feet) have fallen below −55 °C (−67 °F), while blizzards bring wind speeds in excess of 250 km/h (155 mph), or hurricane-force winds ranking at 12—the maximum—on the Beaufort scale. These conditions, not hostile actions, caused more than 97% of the roughly 15,000 casualties suffered among Indian and Pakistani soldiers during the Siachen conflict.
Extreme Temperatures: High
The highest reliable temperature reading was 50.6 °C (123.1 °F) in Alwar, Rajasthan in 1955. The India Meteorological Department doubts the validity of 55 °C (131 °F) readings reported in Orissa during 2005.
The average annual precipitation of 11,871 millimetres (467 in) in the village of Mawsynram, in the hilly northeastern state of Meghalaya, is the highest recorded in Asia, and possibly on Earth. The village, which sits at an elevation of 1,401 metres (4,596 ft), benefits from its proximity to both the Himalayas and the Bay of Bengal. However, since the town of Cherrapunji, 5 kilometres (3.1 mi) to the east, is the nearest town to host a meteorological office—none has ever existed in Mawsynram—it is officially credited as being the world's wettest place. In recent years the Cherrapunji-Mawsynram region has averaged between 9,296 and 10,820 millimetres (366 and 426 in) of rain annually, though Cherrapunji has had at least one period of daily rainfall that lasted almost two years. India's highest recorded one-day rainfall total occurred on 26 July 2005, when Mumbai received more than 650 mm (26 in); the massive flooding that resulted killed over 900 people.
Remote regions of Jammu and Kashmir, such as Baramulla district in the east and the Pir Panjal Range in the southeast, experience exceptionally heavy snowfall. In southern areas around Jammu the climate is typically monsoonal, though the region is sufficiently far west to average 40–100 mm (2–4 in) of rain monthly between January and March. In the hot season, Jammu city is very hot, with temperatures reaching up to 40 °C (104 °F), while in July and August very heavy—though erratic—rainfall occurs, with monthly extremes of up to 650 millimetres (26 in). Rainfall declines in September; by October conditions are extremely dry, with temperatures of around 29 °C (84 °F). Across the Pir Panjal range, the South Asian monsoon is no longer a factor and most precipitation falls in the spring from southwestern cloudbands. Because of its closeness to the Arabian Sea, Srinagar receives as much as 635 mm (25 in) of rain from this source, the wettest months being March to May with around 85 mm (3.3 in) per month.
Ladakh and the Zanskars
North of the main Himalaya Range, even the southwestern cloudbands break up or founder; hence the climate of Ladakh and the Zanskars is extremely dry and cold. Annual precipitation is only around 100 mm (4 in) and humidity is very low. The region lies almost entirely above 3,000 metres (9,843 ft) above sea level; winters are therefore extremely cold. In the Zanskars, the average January temperature is −20 °C (−4 °F), with extremes as low as −40 °C (−40 °F). All rivers freeze over; locals cross unbridged rivers in winter, since summer glacier melt deepens the waters and inhibits fording. Summer in Ladakh and the Zanskars is a pleasantly warm 20 °C (68 °F), but the low humidity and thin air can render nights cold. Kashmir's highest recorded monthly snowfall occurred in February 1967, when 8.4 metres (27.6 ft) fell in Gulmarg, though the IMD has recorded snowdrifts up to 12 metres (39.4 ft) in several Kashmiri districts. In February 2005, more than 200 people died when, in four days, a western disturbance brought up to 2 metres (6.6 ft) of snowfall to parts of the state.
Current sea level rise, increased cyclonic activity, increased ambient temperatures, and increasingly fickle precipitation patterns are effects of global warming that have affected or are projected to impact India. Thousands of people have been displaced by ongoing sea level rises that have submerged low-lying islands in the Sundarbans. Temperature rises on the Tibetan Plateau are causing Himalayan glaciers to retreat, threatening the flow rate of the Ganges, Brahmaputra, Yamuna, and other major rivers; the livelihoods of hundreds of thousands of farmers depend on these rivers. A 2007 World Wide Fund for Nature (WWF) report states that the Indus River may run dry for the same reason.
Severe landslides and floods are projected to become increasingly common in such states as Assam. Ecological disasters, such as a 1998 coral bleaching event that killed off more than 70% of corals in the reef ecosystems off Lakshadweep and the Andamans and was brought on by elevated ocean temperatures tied to global warming, are also projected to become increasingly common. Meghalaya and other northeastern states are also concerned that rising sea levels will submerge much of Bangladesh and spawn a refugee crisis. If severe climate change occurs, Bangladesh and parts of India that border it may lose vast tracts of coastal land.
The Indira Gandhi Institute of Development Research has reported that, if the predictions relating to global warming made by the Intergovernmental Panel on Climate Change come to fruition, climate-related factors could cause India's GDP to decline by up to 9%. Contributing to this would be shifting growing seasons for major crops such as rice, production of which could fall by 40%. Around seven million people are projected to be displaced due to, among other factors, submersion of parts of Mumbai and Chennai if global temperatures were to rise by a mere 2 °C (3.6 °F). Such shifts are not new. Earlier in the Holocene epoch (4,800–6,300 years ago), parts of what is now the Thar Desert were wet enough to support perennial lakes; researchers have proposed that this was due to much higher winter precipitation, which coincided with stronger monsoons. Kashmir's erstwhile subtropical climate dramatically cooled 2.6–3.7 Ma and experienced prolonged cold spells starting 600,000 years ago.
Thick haze and smoke originating from burning biomass in northwestern India and air pollution from large industrial cities in northern India often concentrate over the Ganges Basin. Prevailing westerlies carry aerosols along the southern margins of the sheer-faced Tibetan Plateau towards eastern India and the Bay of Bengal. Dust and black carbon, which are blown towards higher altitudes by winds at the southern margins of the Himalayas, can absorb shortwave radiation and heat the air over the Tibetan Plateau. The net atmospheric heating due to aerosol absorption causes the air to warm and convect upwards, increasing the concentration of moisture in the mid-troposphere and providing positive feedback that stimulates further heating of aerosols.
- The IMD-designated post-monsoon season coincides with the northeast monsoon, the effects of which are significant only in some parts of India.
- Ravindranath, Bala & Sharma 2011.
- Rowley 1996.
- Chumakov & Zharkov 2003.
- CIA World Factbook.
- Grossman et al. 2002.
- Sheth 2006.
- Iwata, Takahashi & Arai 1997.
- Karanth 2006.
- Wolpert 1999, p. 4.
- Chang 1967.
- Posey 1994, p. 118.
- NCERT, p. 28.
- Heitzman & Worden 1996, p. 97.
- Chouhan 1992, p. 7.
- Farooq 2002.
- Caviedes 2001, p. 124.
- Singhvi & Kar 2004.
- Kimmel 2000.
- Das et al. 2002.
- Carpenter 2005.
- Singh & Kumar 1997.
- India Meteorological Department B.
- Allaby, M. (1999), A Dictionary of Zoology, retrieved 30 May 2012.
- Hatwar, Yadav & Rama Rao 2005.
- Hara, Kimura & Yasunari.
- Government of Bihar.
- Air India 2003.
- Singh, Ojha & Sharma 2004, p. 168.
- Blasco, Bellan & Aizpuru 1996.
- Changnon 1971.
- Pisharoty & Desai 1956.
- Peterson & Mehta 1981.
- Collier & Webb 2002, p. 91.
- Bagla 2006.
- Caviedes 2001, p. 118.
- Burroughs 1999, pp. 138–139.
- Burns et al. 2003.
- Dupont-Nivet et al. 2007.
- India Meteorological Department A.
- Vaswani 2006b.
- BBC 2004.
- BBC Weather A.
- Caviedes 2001, p. 119.
- Parthasarathy, Munot & Kothawale 1994.
- Library of Congress.
- O'Hare 1997.
- BBC Weather B.
- Weather Channel.
- Weather Underground.
- Balfour 2003, p. 995.
- Allaby & Garratt 2001, p. 26.
- Allaby 1997, pp. 15, 42.
- Goswami et al. 2006.
- AOML FAQ G1.
- AOML FAQ E10.
- Typhoon Warning Centre.
- BAPS 2005.
- Nash 2002, pp. 22–23.
- Collier & Webb 2002, p. 67.
- Kumar et al. 2006.
- Caviedes 2001, p. 121.
- Caviedes 2001, p. 259.
- Nash 2002, pp. 258–259.
- Caviedes 2001, p. 117.
- McGirk & Adiga 2005.
- Ali 2002.
- Desmond 1989.
- Mago 2005.
- NCDC 2004.
- BBC Giles.
- Kushner 2006.
- BBC 2005.
- The Hindu 2006.
- Vaswani 2006a.
- GOI Ministry of Home Affairs 2005.
- Harrabin 2007.
- Times of India 2007.
- BBC 2007.
- Dasgupta 2007.
- Aggarwal & Lal.
- Normile 2000.
- Union of Concerned Scientists.
- Hossain 2011, p. 130.
- Sethi 2007.
- Enzel et al. 1999.
- Pant 2003.
- Badarinath et al. 2006.
- Lau 2005.
- Ali, A. (2002), "A Siachen Peace Park: The Solution to a Half-Century of International Conflict?", Mountain Research and Development (November 2002) 22 (4): 316–319, doi:10.1659/0276-4741(2002)022[0316:ASPPTS]2.0.CO;2, ISSN 0276-4741
- Badarinath, K. V. S.; Chand, T. R. K.; Prasad, V. K. (2006), "Agriculture Crop Residue Burning in the Indo-Gangetic Plains—A Study Using IRS-P6 AWiFS Satellite Data" (PDF), Current Science 91 (8): 1085–1089, retrieved 1 October 2011
- Bagla, P. (2006), "Controversial Rivers Project Aims to Turn India's Fierce Monsoon into a Friend", Science (August 2006) 313 (5790): 1036–1037, doi:10.1126/science.313.5790.1036, ISSN 0036-8075, PMID 16931734
- Blasco, F.; Bellan, M. F.; Aizpuru, M. (1996), "A Vegetation Map of Tropical Continental Asia at Scale 1:5 Million", Journal of Vegetation Science (Journal of Vegetation Science, published October 1996) 7 (5): 623–634, doi:10.2307/3236374, JSTOR 3236374
- Burns, S. J.; Fleitmann, D.; Matter, A.; Kramers, J.; Al-Subbary, A. A. (2003), "Indian Ocean Climate and an Absolute Chronology over Dansgaard/Oeschger Events 9 to 13", Science 301 (5638): 635–638, Bibcode:2003Sci...301.1365B, doi:10.1126/science.1086227, ISSN 0036-8075, PMID 12958357
- Carpenter, C. (2005), "The Environmental Control of Plant Species Density on a Himalayan Elevation Gradient", Journal of Biogeography 32 (6): 999–1018, doi:10.1111/j.1365-2699.2005.01249.x
- Chang, J. H. (1967), "The Indian Summer Monsoon", Geographical Review 57 (3): 373–396, doi:10.2307/212640, JSTOR 212640
- Changnon, S. A. (1971), "Note on Hailstone Size Distributions", Journal of Applied Meteorology 10 (1): 168–170, Bibcode:1971JApMe..10..169C, doi:10.1175/1520-0450(1971)010<0169:NOHSD>2.0.CO;2, retrieved 6 April 2007
- Chumakov, N. M.; Zharkov, M. A. (2003), "Climate of the Late Permian and Early Triassic: General Inferences" (PDF), Stratigraphy and Geological Correlation 11 (4): 361–375, retrieved 1 October 2011
- Das, M. R.; Mukhopadhyay, R. K.; Dandekar, M. M.; Kshirsagar, S. R. (2002), "Pre-Monsoon Western Disturbances in Relation to Monsoon Rainfall, Its Advancement over Northwestern India and Their Trends" (PDF), Current Science 82 (11): 1320–1321, retrieved 1 October 2011
- Dupont-Nivet, G.; Krijgsman, W.; Langereis, C. G.; Abels, H. A.; Dai, S.; Fang, X. (2007), "Tibetan Plateau Aridification Linked to Global Cooling at the Eocene–Oligocene Transition", Nature 445 (7128): 635–638, doi:10.1038/nature05516, ISSN 0028-0836, PMID 17287807
MUSIC THEORY GRADE 1
Module ID: PMT1
In this grade you will:
- Understand Tones and Semitones
- Understand Sharps and Flats
- Understand Octaves
- Learn the notes in the Note Circle (from memory)
- Learn the Open String Note Names (from memory)
You’ll be pleased to hear that this first grade is very easy! Most people should be able to complete this stage in a couple of weeks - some in as little as a few hours if they already have some knowledge of theory or are super smart.
We’ll be looking at some practical exercises to help you remember the most important bits of information. This is to prepare you for Grade 2, where you’ll use what you’ve learnt here to work out the name of every note on the guitar neck. And that’s a pretty big deal.
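If you like to tinker, here's a rough Python sketch (not part of the course) of exactly what Grade 2 will have you do by hand: it assumes the standard-tuning open-string names (E A D G B E), writes the Note Circle with sharps only, and names the note at any fret by counting semitones around the circle.

```python
# A toy sketch of the Grade 2 exercise: Note Circle + open strings = every
# note on the neck. The circle below uses sharps only for simplicity; the
# lesson's circle also shows the flat spellings.

NOTE_CIRCLE = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
OPEN_STRINGS = ["E", "A", "D", "G", "B", "E"]  # 6th (thickest) string first

def note_at(string_number: int, fret: int) -> str:
    """Name the note at `fret` on string `string_number` (6 = thickest)."""
    open_note = OPEN_STRINGS[6 - string_number]
    start = NOTE_CIRCLE.index(open_note)
    # One fret = one semitone = one step around the Note Circle.
    return NOTE_CIRCLE[(start + fret) % 12]

print(note_at(6, 5))  # 5th fret on the low E string -> "A"
print(note_at(2, 3))  # 3rd fret on the B string -> "D"
```

Check a few frets against your own guitar; if the names match, you've already got the core idea of the next grade.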
The Note Circle
This is the Note Circle. It shows the names of the 12 notes (or pitches) that are used in Western music. They are the very foundation of the theory of music and so...
Tones And Semitones
Each step around the Note Circle represents the interval of a SEMITONE (shown in blue) which is equal to a one-fret step on the guitar. Every fret on the guitar ...
Sharps And Flats
A sharp sign (#) raises the pitch of a note by a semitone and a flat sign (b) lowers the note by one semitone. You can remember this easily by remembering that if you ...
Note Circle: By Rote
It’s a pretty old school approach, but I’ve found that most students find it useful to write out the Note Circle a bunch of times. It might seem a bit over the top, ...
Note Circle: Speaking Out Loud
Another very effective way to try and learn the Note Circle is by saying the names of the notes out loud. Start without the sharps and flats by saying: A - B - C - ...
Note Circle: With A Jam Buddy
If you are lucky enough to have a jam buddy to practise with, you can help each other learn the Note Circle by testing each other. It’s really simple but really effect...
Open String Note Names
Knowing the names of the notes of the open strings of the guitar is very important. We’ll be combining these with the note circle in the next grade. The focus here i...
Your Open String Note Mnemonics
Making up your own mnemonics for the Open String Note Names is a good idea. I always encourage students to make up their own, as well as using mine, as it will help yo... |
Laryngeal theory
The laryngeal theory proposes that:
- Proto-Indo-European language (PIE) had a series of phonemes beyond those reconstructed with the comparative method.
- These phonemes, according to the most-accepted variant of the theory, were "laryngeal" consonants of an indeterminate place of articulation towards the back of the mouth.
The theory aims to:
- Produce greater regularity in the reconstruction of PIE phonology than the comparative method alone yields.
- Extend the general occurrence of the Indo-European ablaut to syllables with reconstructed vowel phonemes other than *e or *o.
In its earlier form (see below), the theory postulated two sounds in PIE. Combined with a reconstructed *e or *o, the sounds produce vowel phonemes that would not otherwise be predicted by the rules of ablaut. The theory received considerable support after the decipherment of Hittite, which revealed it to be an Indo-European language.
Many Hittite words were shown to be derived from PIE, with a phoneme represented as ḫ corresponding to one of the hypothetical PIE sounds. Subsequent scholarship has established a set of rules by which an ever-increasing number of reflexes in daughter languages may be derived from PIE roots. The number of explanations thus achieved and the simplicity of the postulated system have both led to widespread acceptance of the theory.
In its most widely accepted version, the theory posits three phonemes in PIE: h₁, h₂ and h₃ (see below). Daughter languages outside Anatolian inherited only derived sounds, the products of the laryngeals' merger with PIE short vowels and their subsequent loss.
The phonemes are now recognised as consonants, related to articulation in the general area of the larynx, where a consonantal gesture may affect vowel quality. They are conventionally known as laryngeals, but the actual place of articulation of each consonant remains a matter of debate (see below).
The laryngeals get their name because they were believed by Hermann Möller and Albert Cuny to have had a pharyngeal, epiglottal, or glottal place of articulation, involving a constriction near the larynx. While this is still possible, many linguists now think of "laryngeals", or some of them, as having been velar or uvular.
The evidence for their existence is mostly indirect, as will be shown below, but the theory serves as an elegant explanation for a number of properties of the PIE vowel system that made no sense before it, such as the "independent" schwas (as in *pəter- 'father'). Also, the hypothesis that the PIE schwa *ə was actually a consonant, not a vowel, provides an elegant explanation for some apparent exceptions to Brugmann's law in Indic languages.
The beginnings of the theory were proposed by Ferdinand de Saussure in 1879, in an article chiefly devoted to something else altogether (demonstrating that *a and *o were separate phonemes in PIE).
In the course of his analysis, Saussure proposed that what had then been reconstructed as long vowels *ā and *ō, alternating with *ǝ, was actually an ordinary type of PIE ablaut. That is, it was an alternation between e-grade and zero grade like in "regular" ablaut (further explanations below), but followed by a previously unidentified element. This "element" accounted for both the changed vowel color and the lengthening (short *e becoming long *ā or *ō).
So, rather than reconstructing *ā, *ō and *ǝ as others had done before, Saussure proposed something like *eA alternating with *A and *eO with *O, where A and O represented the unidentified elements. Saussure called them simply coefficients sonantiques, which was the term for what are now in English more usually called resonants; that is, the six elements present in PIE which can be either consonants (nonsyllabic) or vowels (syllabic) depending on the sounds they are adjacent to: *y w r l m n.
These views were accepted by a few scholars, in particular Hermann Möller, who added important elements to the theory. Saussure's observations, however, did not achieve any general currency, as they were still too abstract and had little direct evidence to back them up.
This changed when Hittite was discovered and deciphered in the early 20th century. Hittite had a sound or sounds written with symbols from the Akkadian syllabary conventionally transcribed as ḫ, as in te-iḫ-ḫi "I put, am putting". This consonant did not appear to be clearly related to any of the consonants then reconstructed for PIE, and various unsatisfactory proposals were made to explain this consonant in terms of the PIE consonant system as it had then been reconstructed.
It remained for Jerzy Kuryłowicz (Études indoeuropéennes I, 1935) to propose that these sounds lined up with Saussure's conjectures. He suggested that the unknown consonant of Hittite was in fact a direct reflex of the coefficients sonantiques that Saussure had proposed.
Their appearance explained some other matters as well; they explained, for example, why verb roots containing only a consonant and a vowel always have long vowels. For example, in *dō- "give", the new consonants allowed linguists to decompose this further into *deh₃. This not only accounted for the patterns of alternation more economically than before (by requiring fewer types of ablaut), but also brought the structure of these roots into line with the basic PIE pattern which required roots to begin and end with a consonant.
The lateness of the discovery of these sounds by Indo-Europeanists is largely because Hittite and the other Anatolian languages are the only Indo-European languages where at least some of them are attested directly and consistently as consonantal sounds. Otherwise, their presence is to be inferred mostly through the effects they have on neighboring sounds, and on patterns of alternation that they participate in. When a laryngeal is attested directly, it is usually as a special type of vowel and not as a consonant.
Varieties of laryngeals
There are many variations of the laryngeal theory. Some scholars, such as Oswald Szemerényi, reconstruct just one laryngeal. Some follow Jaan Puhvel's reconstruction of eight or more (in his contribution to Evidence for Laryngeals, ed. Werner Winter).
Basic Laryngeal Set
Most scholars work with a basic three:
- *h₁, the "neutral" laryngeal
- *h₂, the "a-coloring" laryngeal
- *h₃, the "o-coloring" laryngeal
Some scholars suggest the existence of a fourth consonant, *h₄, which differs from *h₂ in not being reflected as Anatolian ḫ but being reflected, to the exclusion of all other laryngeals, as Albanian h when word-initial before an originally stressed vowel.
E.g. PIE *h₄órǵʰiyeh₂ "testicle" yields Albanian herdhe "testicle" but Hittite arki- "testicle" whereas PIE *h₂ŕ̥tkos "bear" yields Alb. ari "bear" but Hittite hart(ag)ga- (=/hartka-/) "cultic official, bear-person".
When there is an uncertainty whether the laryngeal is *h₂ or *h₄, the symbol *ha may be used.
*h₁ doublet
Another such theory, but much less generally accepted, is Winfred P. Lehmann's view, on the basis of inconsistent reflexes in Hittite, that *h₁ was actually two separate sounds. (He assumed that one was a glottal stop and the other a glottal fricative.)
Direct Evidence for Laryngeals
Some direct evidence for laryngeal consonants comes from Anatolian: PIE *a is a fairly rare sound, and in an uncommonly large number of good etymologies it is word-initial. Thus PIE (traditional) *anti "in front of and facing" > Greek antí "against"; Latin ante "in front of, before"; Sanskrit ánti "near; in the presence of". But in Hittite there is a noun ḫants "front, face", with various derivatives (ḫantezzi "first", and so on), pointing to a PIE root-noun *h₂ent- "face" (of which *h₂enti would be the locative singular). (It does not necessarily follow that all reconstructed forms with initial *a should automatically be rewritten *h₂e.)
Similarly, the traditional PIE reconstruction for 'sheep' is *owi- (a y-stem, not an i-stem) whence Sanskrit ávi-, Latin ovis, Greek ὄϊς. But Luwian has ḫawi-, indicating instead a reconstruction *h₃ewis.
Considerable debate still surrounds the pronunciation of the laryngeals, and various arguments have been given to pinpoint their exact place of articulation. Firstly, the effect these sounds have had on adjacent phonemes is well documented. The evidence from Hittite and Uralic is sufficient to conclude that these sounds were "guttural", i.e. pronounced rather far back in the vocal tract. The same evidence is also consistent with the assumption that they were fricative sounds (as opposed to approximants or stops), an assumption which is strongly supported by the behaviour of laryngeals in consonant clusters.
It has been suggested by Beekes (1995) that *h₁ is a glottal stop [ʔ]. However, Winfred P. Lehmann instead theorized, based on inconsistent reflexes in Hittite, that there were two *h₁ sounds: a glottal stop [ʔ] and an h sound [h] as in English hat.
Jens Elmegård Rasmussen (1983) suggested a consonantal realization for *h₁ as the voiceless glottal fricative [h] with a syllabic allophone [ə] (mid central unrounded vowel). This is supported by the closeness of [ə] to [e] (with which it coalesces in Greek), its failure (unlike *h₂ and *h₃) to create an auxiliary vowel in Greek and Tocharian when it occurs between a semivowel and a consonant, and the typological likelihood of a [h] given the presence of aspirated consonants in PIE.
In 2004, Alwin Kloekhorst argued that the Hieroglyphic Luwian sign no. 19 (𔐓, conventionally transcribed á) stood for /ʔa/ (distinct from /a/, sign no. 450: 𔗷 a) and represents the reflex of */h₁/; this would support the hypothesis that */h₁/, or at least some cases of it, was [ʔ]. Later, Kloekhorst (2006) claimed that Hittite, too, preserves PIE *h₁ as a glottal stop [ʔ], visible in words like Hittite e-eš-zi 'he is' < PIE *h₁és-ti, where an extra initial vowel sign is used (so-called plene spelling). This hypothesis has met with serious criticism (e.g. Rieken 2010, Melchert 2010 and Weeden 2011). Recently, however, Simon (2010) has supported Kloekhorst's thesis by suggesting that plene spelling in Cuneiform Luwian can be explained in a similar way. Additionally, Simon's 2013 article revises the Hieroglyphic Luwian evidence and concludes that "although some details of Kloekhorst's arguments could not be maintained, his theory can be confirmed."
An occasionally advanced idea that the laryngeals were dorsal fricatives corresponding directly to the three traditionally reconstructed series of dorsal stops ("palatal", velar, and labiovelar) suggests a further possibility, a palatal fricative [ç].
From what is known of such phonetic conditioning in contemporary languages, notably Semitic languages, *h₂ (the "a-colouring" laryngeal) could have been a pharyngeal fricative such as [ħ] or [ʕ]. Pharyngeal consonants (like the Arabic letter ح (ħ) as in Muħammad) often cause a-coloring in the Semitic languages. Uvular fricatives may also colour vowels, however, so [χ] is also a noteworthy candidate. Weiss (2016) suggests that this was the case in Proto-Indo-European proper, and that a shift from uvular to pharyngeal [ħ] may have been a common innovation of the non-Anatolian languages (before the consonant's eventual loss). Rasmussen (1983) suggested a consonantal realization for *h₂ as a voiceless velar fricative [x], with a syllabic allophone [ɐ], i.e. a near-open central vowel.
Likewise it is generally assumed that *h₃ was rounded (labialized) due to its o-coloring effects. It is often taken to have been voiced based on the perfect form *pi-bh₃- from the root *peh₃ "drink". Rasmussen has chosen a consonantal realization for *h₃ as a voiced labialized velar fricative [ɣʷ], with a syllabic allophone [ɵ], i.e. a close-mid central rounded vowel. Kümmel instead suggests [ʁ].
Support for theory from daughter languages
The hypothetical existence of laryngeals in PIE finds support in the body of daughter-language cognates that can be most efficiently explained through simple rules of development.
Direct reflexes of laryngeals
Reflexes of h₂ in Anatolian:
|PIE root||Meaning||Anatolian reflex||Cognates|
|*peh₂-(s)-||'protect'||Hittite paḫḫs-||Sanskrit pā́ti, Latin pascere (pastus), Greek patéomai|
|*dʰewh₂-||'breath/smoke'||Hittite tuḫḫāi-||Sanskrit dhūmá-, Latin fūmus, Greek thūmos|
|*h₂ent-||'front'||Hittite ḫant-||Sanskrit ánti, Latin ante, Greek antí|
|*h₂erǵ-||'white/silver'||Hittite ḫarki-||Sanskrit árjuna, Latin argentum, Greek árguron, Tocharian A ārki|
|*h₂owi-||'sheep'||Luwian hawi-||Sanskrit ávi-, Latin ovis, Greek ó(w)is|
|*péh₂wr̥||'fire'||Hittite paḫḫur, Luwian pāḫur||English fire, Tocharian B puwar, Greek pûr|
|*h₂stér-||'star'||Hittite ḫasterz||English star, Sanskrit stā́, Latin stella, Greek astḗr|
|*h₂ewh₂os||'grandfather'||Hittite ḫuḫḫa-, Luwian ḫuḫa-, Lycian χuge-||Gothic awo, Latin avus, Armenian haw|
|*h₁ésh₂r̥||'blood'||Hittite ēšḫar, Luwian āšḫar||Greek éar, Latin sanguīs, Armenian aryun, Latvian asinis, Tocharian A ysār|
Some Hittitologists have also proposed that h₃ was preserved in Hittite as ḫ, though only word-initially and after a resonant. Kortlandt holds that h₃ was preserved before all vowels except *o, while Kloekhorst believes it was lost before resonants as well.
Reflexes of h₃ in Anatolian:
|PIE root||Meaning||Anatolian reflex||Cognates|
|*welh₃-||'to hit'||Hittite walḫ-||Latin vellō, Greek ealōn|
|*h₃esth₁||'bone'||Hittite ḫaštāi||Latin os, Greek ostéon, Sanskrit ásthi|
|*h₃erbʰ-||'to change status'||Hittite ḫarp-||Latin orbus, Greek orphanós|
|*h₃eron-||'eagle'||Hittite ḫara(n)-||Gothic ara, Greek ὄρνῑς|
|*h₃pus-||'to have sex'||Hittite ḫapuš-||Greek opuíō|
Reconstructed instances of *kw in Proto-Germanic have been explained as reflexes of PIE *h₃w (and possibly *h₂w), a process known as Cowgill's law. The proposal has been challenged but is defended by Don Ringe.
In the Albanian language, a minority view proposes that some instances of word-initial h continue a laryngeal consonant.
|PIE root||Meaning||Albanian||Other cognates|
|*h₂erǵʰi-||'testicles'||herdhe||Greek orkhis|
In Western Iranian
Martin Kümmel has proposed that some initial [x] and [h] in contemporary Western Iranian languages, commonly thought to be prothetic, are instead direct survivals of *h₂, lost in epigraphic Old Persian but retained in "marginal dialects" ancestral among others to Modern Persian.
- sic, with *h₁ (Kümmel's "h", versus "χ" = *h₂).
Proposed indirect reflexes
In all other daughter languages, comparison of the cognates can support only hypothetical intermediary sounds derived from PIE combinations of vowels and laryngeals. Some indirect reflexes are required to support the examples above where the existence of laryngeals is uncontested.
|PIE||Intermediary||Reflexes|
|eh₂||ā||ā, a, ahh|
|uh₂||ū||ū, uhh|
|h₂e||a||a, ā|
|h₂o||o||o, a|
The proposals in this table account only for attested forms in daughter languages. Extensive scholarship has produced a large body of cognates which may be identified as reflexes of a small set of hypothetical intermediary sounds, including those in the table above. Individual sets of cognates are explicable by other hypotheses, but the sheer bulk of data and the elegance of the laryngeal explanation have led to widespread acceptance in principle.
Vowel coloration and lengthening
In the proposed Anatolian-language reflexes above, only some of the vowel sounds reflect PIE *e. In the daughter languages in general, many vowel sounds are not obvious reflexes. The theory explains this as the result of:
- 1 H-coloration. PIE *e is 'coloured' (i.e. its sound-value is changed) before or after h₂ and h₃, but not when next to h₁.
|Laryngeal precedes||Laryngeal follows|
|h₁e > h₁e||eh₁ > eh₁|
|h₂e > h₂a||eh₂ > ah₂|
|h₃e > h₃o||eh₃ > oh₃|
- 2 H-loss. Any of the three laryngeals (symbolised here as H) is lost before a short vowel. Laryngeals are also lost before another consonant (symbolised here as C), with consequent lengthening of the preceding vowel.
|Before vowel||Before consonant|
|He > e||eHC > ēC|
|Ha > a||aHC > āC|
|Ho > o||oHC > ōC|
|Hi > i||iHC > īC|
|Hu > u||uHC > ūC|
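Taken together, the two rules behave like a small ordered rewrite system, which can be made concrete in code. The sketch below is purely illustrative: it assumes an ASCII notation (h1/h2/h3 for the laryngeals, a trailing colon for a lengthened vowel) and handles only the simple forms cited in this article, not PIE in general.

```python
import re

# Toy derivation: apply H-coloration, then the two cases of H-loss.
COLOR = {"h2": "a", "h3": "o"}  # h1 leaves an adjacent *e unchanged

def develop(form: str) -> str:
    # 1. H-coloration: *e is coloured next to h2 or h3.
    for h, v in COLOR.items():
        form = form.replace(f"e{h}", f"{v}{h}")
        form = form.replace(f"{h}e", f"{h}{v}")
    # 2a. H-loss before a vowel: the laryngeal simply disappears.
    form = re.sub(r"h[123](?=[aeiou])", "", form)
    # 2b. H-loss elsewhere: the preceding vowel lengthens (written ":").
    form = re.sub(r"([aeiou])h[123]", r"\1:", form)
    return form

print(develop("peh2s"))   # -> "pa:s"  (cf. Latin pāscere, Hittite paḫḫas)
print(develop("deh3"))    # -> "do:"   (cf. Latin dōnum, Greek dôron)
print(develop("h2enti"))  # -> "anti"  (cf. Greek antí, Latin ante)
```

The order of the two loss rules matters: prevocalic loss must apply before the lengthening rule, just as in the statement of the rules above.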
The results of H-coloration and H-loss are recognised in daughter-language reflexes such as those in the tables below.
After vowels:
|Change||PIE||Latin||Sanskrit||Greek||Hittite|
|*iH > ī||*gʷih₂-wós||vīvus||jīva||bíos|| |
|*uH > ū||*dʰweh₂-||fūmus||dhūma||thūmós||tuwaḫḫaš|
|*oH > ō||*sóh₂wl̥||sōl||sū́rya||hḗlios|| |
|*eh₁ > ē||*séh₁-mn̥||sēmen|| ||hêma|| |
|*eh₂ > ā||*peh₂-(s)-||pāscere (pastus)||pā́ti||patéomai||paḫḫas|
|*eh₃ > ō||*deh₃-r/n||dōnum||dāna||dôron|| |

Before vowels:
|Change||PIE||Latin||Sanskrit||Greek||Hittite|
|*Hi > i||*h₁íteros||iterum||ítara|| || |
|*Hu > u||*pélh₁us||plūs||purú-||polús|| |
|*Ho > o||*h₂owi-||ovis||ávi||ó(w)is||Luwian ḫawa|
|*h₁e > e||*h₁ésti||est||ásti||ésti||ēšzi|
|*h₂e > a||*h₂ent-||ante||ánti||antí||ḫant-|
|*h₃e > o||*h₃érbh-||orbus||arbhas||orphanós||ḫarp-|
Greek triple reflex vs schwa
In three phonological contexts, Greek reflexes display a regular vowel pattern that is absent from the supposed cognates in other daughter languages. Before the development of laryngeal theory, scholars compared Greek, Latin and Sanskrit (then considered the earliest daughter languages) and inferred the existence in these contexts of a schwa (ə) vowel in PIE, the so-called schwa indogermanicum. The contexts are: 1. between consonants (short vowel); 2. word-initially before a consonant (short vowel); 3. combined with a liquid or nasal consonant [r, l, m, n] (long vowel).
- 1 Between consonants
- Latin displays a and Sanskrit i, whereas Greek displays e, a or o
- 2 Word initial before a consonant
- Greek alone displays e, a or o
- 3 Combined with a liquid or nasal
- Latin displays a liquid/nasal consonant followed by ā; Sanskrit displays either īr/ūr or the vowel ā alone; Greek displays a liquid/nasal consonant followed by ē, ā (in dialects such as Doric) or ō
Laryngeal theory provides a more elegant general description than reconstructed schwa by assuming that the Greek vowels are derived through vowel colouring and H-loss from PIE h₁, h₂, h₃, constituting a so-called triple reflex.
| ||*CHC||*HC-||*r̥H||*l̥H||*m̥H||*n̥H|
|*h₁: Greek||e||e||rē||lē||mē||nē|
|*h₁: Latin||a||lost||rā||lā||mā||nā|
|*h₁: Sanskrit||i||lost||īr/ūr||īr/ūr||ā||ā|
|*h₂: Greek||a||a||rā||lā||mā||nā|
|*h₂: Latin||a||lost||rā||lā||mā||nā|
|*h₂: Sanskrit||i||lost||īr/ūr||īr/ūr||ā||ā|
|*h₃: Greek||o||o||rō||lō||mō||nō|
|*h₃: Latin||a||lost||rā||lā||mā||nā|
|*h₃: Sanskrit||i||lost||īr/ūr||īr/ūr||ā||ā|
- 1 Between consonants
- An explanation is provided for the existence of three vowel reflexes in Greek corresponding to single reflexes in Latin and in Sanskrit
- 2 Word initial
- The assumption of *HC- in PIE explains a dichotomy exhibited below: cognates in the Anatolian, Greek and Armenian languages show reflexes with an initial vowel, while cognates in the remaining daughters lack that syllable. The theory assumes initial *h₂e in the PIE root, lost in most of the daughter languages.
- *h₂ster- 'star': Hittite hasterza, Greek astḗr, Armenian astí, Latin stella, Sanskrit tár-
- *h₂wes 'live, spend time': Hittite huis- 'live', Greek á(w)esa 'I spent a night', Sanskrit vásati 'spend the night', English was
- *h₂ner- 'man': Greek anḗr, Armenian ayr (from *anir), Oscan niir, Sanskrit nár
- 3 Combined with a liquid or nasal
- These presumed sonorant reflexes are completely distinct from those deemed to have developed from single phonemes.
| ||*r̥||*l̥||*m̥||*n̥|
|Greek||ra, ar||la, al||a||a|
|Latin||or||ul||em||en|
|Sanskrit||r̥||r̥||a||a|
The phonology of the sonorant examples in the previous table can only be explained by the presence of an adjacent phoneme in PIE. Assuming that phoneme to be a following h₁, h₂ or h₃ allows the same rules of vowel coloration and H-loss to apply to both PIE *e and PIE sonorants.
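Because these correspondence tables are essentially lookups, they can be restated compactly in code. The sketch below is illustrative only: the names TRIPLE_REFLEX and predict_vowel are inventions for this example, and only the interconsonantal (*CHC) column of the triple-reflex table above is modelled.

```python
# Vowel each daughter language shows for a syllabic laryngeal between
# consonants (*CHC), copied directly from the triple-reflex table above.
TRIPLE_REFLEX = {
    "h1": {"Greek": "e", "Latin": "a", "Sanskrit": "i"},
    "h2": {"Greek": "a", "Latin": "a", "Sanskrit": "i"},
    "h3": {"Greek": "o", "Latin": "a", "Sanskrit": "i"},
}

def predict_vowel(laryngeal: str, language: str) -> str:
    """Expected vowel for a given syllabic laryngeal in the *CHC context."""
    return TRIPLE_REFLEX[laryngeal][language]

# Only Greek keeps the three laryngeals apart in this position, which is
# why the Greek evidence was decisive against a single PIE schwa:
assert {predict_vowel(h, "Greek") for h in TRIPLE_REFLEX} == {"e", "a", "o"}
assert {predict_vowel(h, "Latin") for h in TRIPLE_REFLEX} == {"a"}
assert {predict_vowel(h, "Sanskrit") for h in TRIPLE_REFLEX} == {"i"}
```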
Support from Greek ablaut
The hypothetical values for sounds with laryngeals after H-coloration and H-loss (such as those seen above in the triple reflex) draw much of their support from the regularisation they allow in ablaut patterns, specifically the uncontested patterns found in Greek.
Ablaut in the root
In the following table, each row shows undisputed Greek cognates sharing the three ablaut grades of a root. The four sonorants and the two semi-vowels are represented as individual letters, other consonants as C, and the vowel or its absence as (V).
|root shape||e-grade||o-grade||zero-grade||meaning|
|C(V)C||πέτεσθαι|| || ||'fly'|
|C(V)iC||λείπειν|| || ||'leave'|
|C(V)uC||φεύγειν|| || ||'flee'|
|C(V)r||δέρκομαι|| || ||'see clearly'|
|C(V)l||πέλομαι|| || ||'become'|
|C(V)m||τέμω|| || ||'cut'|
|C(V)n||γένος|| || || |
The reconstructed PIE e-grade and zero-grade of the above roots may be arranged as follows:
|root shape||e-grade||zero-grade|
|C(V)C||*pet||*pt|
|C(V)iC||*leikʷ||*likʷ|
|C(V)uC||*bʰeug||*bʰug|
|C(V)r||*derk||*drk|
|C(V)l||*kʷel||*kʷl|
|C(V)m||*tem||*tm|
|C(V)n||*gen||*gn|
An extension of the table to PIE roots ending in presumed laryngeals allows many Greek cognates to follow a regular ablaut pattern.
|root shape||e-grade||zero-grade||meaning||cognates|
|C(V)h₁||*dʰeh₁||*dʰh₁||'put'||I: ē: τίθημι (títhēmi); II: e: θετός (thetós)|
|C(V)h₂||*steh₂||*sth₂||'stand'||I: ā: Doric ἳστᾱμι (hístāmi); II: a: στατός (statós)|
|C(V)h₃||*deh₃||*dh₃||'give'||I: ō: δίδωμι (dídōmi); II: o: δοτός (dotós)|
Ablaut in the suffix
The first row of the following table shows how uncontested cognates relate to reconstructed PIE stems with e-grade or zero-grade roots, followed by e-grade or zero-grade of the suffix -w-. The remaining rows show how the ablaut pattern of other cognates is preserved if the stems are presumed to include the suffixes h₁, h₂, h₃.
|root + suffix (I, II, III)||meaning||cognates|
|*gen+w-, *gn+ew-, *gn+w-||'knee'||I: Hittite genu; II: Gothic kniu; III: γνύξ (gnuks)|
|*gen+h₁-, *gn+eh₁-, *gn+h₁-||'become'||I: γενετήρ (genetḗr); II: γνήσιος (gnḗsios); III: γίγνομαι (gígnomai)|
|*tel+h₂-, *tl+eh₂-, *tl+h₂-||'lift, bear'||I: τελαμών (telamṓn); II: ἔτλᾱν (étlān); III: τάλας (tálas)|
|*ter+h₃-, *tr+eh₃-, *tr+h₃-||'bore, wound'||II: τιτρώσκω (titrṓskō); III: ἔτορον (étoron)|
In the preceding sections, forms in the daughter languages were explained as reflexes of laryngeals in PIE stems. Since these stems are judged to have contained only one vowel, the explanations involved H-loss either when a vowel preceded or when a vowel followed. However, the possibility of H-loss between two vowels is present when a stem combines with an inflexional suffix.
It has been proposed that PIE H-loss resulted in hiatus, which in turn was contracted to a vowel sound distinct from other long vowels by being disyllabic or of extra length.
Early Indo-Iranian disyllables
A number of long vowels in Avestan were pronounced as two syllables, and some examples also exist in early Sanskrit, particularly in the Rig Veda. These can be explained as reflexes of contraction following a hiatus caused by the loss of intervocalic H in PIE.
Proto-Germanic trimoric o
The reconstructed phonology of Proto-Germanic (P-Gmc), the presumed ancestor of the Germanic languages, includes a long *ō phoneme, which is in turn the reflex of PIE ā. As outlined above, laryngeal theory has identified instances of PIE ā as reflexes of earlier *h₂e, *eh₂ or *aH before a consonant.
However, a distinct long P-Gmc *ō phoneme has been recognised with a different set of reflexes in daughter Germanic languages. The vowel length has been calculated by observing the effect of the shortening of final vowels in Gothic.
|length||P-Gmc||Gothic|
|one mora||*a, *i, *u||∅, ∅, u|
|two morae||*ē, *ī, *ō, *ū||a, i?, a, u?|
|three morae||*ê, *ô||ē, ō|
Reflexes of trimoric or overlong *ô are found in the final syllable of nouns or verbs, and are thus associated with inflectional endings. Four P-Gmc sounds are therefore proposed, shown here with their Gothic reflexes (Old English reflexes appear in the ending tables below):
|P-Gmc||Gothic reflex|
|bimoric oral *ō||-a|
|trimoric oral *ô||-ō|
|bimoric nasal *ō̜||-a|
|trimoric nasal *ǫ̂||-ō|
A somewhat different contrast is observed in endings with final *z:
|P-Gmc||Gothic reflex|
|bimoric *ōz||-ōs|
|trimoric *ôz||-ōs|
Two derivational paths account for the trimoric vowel (restated in the sketch after this list):
- by H-loss *oHo > *oo > *ô;
- by H-coloration and H-loss *eh₂e > *ae > *â > *ô.
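The two derivations just listed are themselves a short chain of rewrites, which the following hedged sketch restates mechanically; the function name and the ASCII spellings (H for any laryngeal, ô for the trimoric vowel) are conveniences of this example only.

```python
# Restate the two derivational paths for trimoric *ô as string rewrites.
def derive_trimoric(ending: str) -> str:
    ending = ending.replace("eh2e", "ae")  # H-coloration + intervocalic H-loss
    ending = ending.replace("oHo", "oo")   # plain intervocalic H-loss
    ending = ending.replace("ae", "â")     # contraction of the hiatus
    ending = ending.replace("oo", "ô")
    return ending.replace("â", "ô")        # *â > *ô in Proto-Germanic

print(derive_trimoric("oHo"))   # -> "ô"
print(derive_trimoric("eh2e"))  # -> "ô"
```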
Trimoric endings:
|PIE||Reflexes||P-Gmc||Germanic reflexes|
|*-oHom (all stems)||Sanskrit -ām (often disyllabic in the Rig Veda); Greek -ῶν (ô̜:n)||*-ǫ̂||Gothic -ō; Old English -a|
|*-eh₂es||Sanskrit -ās||*-ôz||Gothic -ōs; Old English -a|
Bimoric endings:
|PIE||Reflexes||P-Gmc||Germanic reflexes|
|*-oh₂ (thematic verbs, 1st person singular)||Latin -ō||*-ō||Gothic -a; Old English -u|
|*-eh₂||Sanskrit -ā||*-ō||Gothic -a; Old English -u|
|*-eh₂m||Sanskrit -ām||*-ō̜||Gothic -a; Old English -e|
|*-eh₂ns||Sanskrit -ās; Latin *-ans > -ās||*-ōz||Gothic -ōs; Old English -e|
(Trimoric *ô is also reconstructed as word-final in contexts that are not explained by laryngeal theory.)
Balto-Slavic long vowel accent
The reconstructed phonology of the Balto-Slavic languages posits two distinct long vowels in almost exact correspondence to bimoric and trimoric vowels in Proto-Germanic. The Balto-Slavic vowels are distinguished not by length but by intonation; long vowels with circumflex accent correspond to P-Gmc trimoric vowels. A significant proportion of long vowels with acute accent (also described as with acute register) correspond to P-Gmc bimoric vowels. These correspondences have led to the suggestion that the split between them occurred in the last common ancestor of the two daughters.
It has been suggested that acute intonation was associated with glottalisation, a suggestion supported by glottalised reflexes in Latvian. This could lend support to a theory that laryngeal consonants developed into glottal stops before their disappearance in Balto-Slavic and Proto-Germanic.
H-loss adjacent to other sounds
After stop consonants
PIE resonants (sonorants) *r̥, *l̥, *m̥, *n̥ are predicted to become consonantal allophones *r, *l, *m, *n when immediately followed by a vowel. Using R to symbolise any resonant (sonorant) and V for any vowel, *R̥V > *RV. Instances in the daughter languages of a vocalic resonant immediately followed by a vowel (RV) are explained as reflexes of PIE *R̥HV, with a laryngeal between the resonant and the vowel giving rise to a vocalic allophone. This original vocalic quality was preserved following H-loss.
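Stated as ordered rules, this conditioning is again mechanical. In the sketch below, which is an illustration rather than standard notation, a dot marks syllabicity and h1/h2/h3 are written in ASCII.

```python
import re

RESONANTS = "rlmn"

def surface(form: str) -> str:
    # *R̥V > *RV: a syllabic resonant loses its syllabicity before a vowel...
    form = re.sub(f"([{RESONANTS}])\\.(?=[aeiou])", r"\1", form)
    # ...unless a laryngeal intervenes: *R̥HV keeps the resonant syllabic,
    # and the laryngeal is then lost.
    form = re.sub(f"([{RESONANTS}]\\.)h[123]", r"\1", form)
    return form

print(surface("gn.os"))    # -> "gnos"  (consonantal allophone before a vowel)
print(surface("gn.h1os"))  # -> "gn.os" (syllabicity survives H-loss)
```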
Next to semi-vowels
(see Holtzmann's law)
Laryngeal theory has been used to explain the occurrence of a reconstructed sound change known as Holtzmann's law or sharpening (German Verschärfung) in North Germanic and East Germanic languages. Existing theory explains that PIE semivowels *y and *w were doubled to P-Gmc *-yy- and *-ww-, and that these in turn became -ddj- and -ggw- respectively in Gothic, and -ggj- and -ggw- in early North Germanic languages. However, existing theory had difficulty in predicting which instances of PIE semivowels led to sharpening and which failed to do so. The new explanation proposes that words exhibiting sharpening derive from PIE words with laryngeals.
|PIE||early P-Gmc||later P-Gmc||Reflexes|
|*drewh₂yo||*trewwjaz||with sharpening: *triwwjaz||Gothic triggws; Old Norse tryggr|
| || ||without sharpening: *triuwjaz||Old English trēowe; Old High German gitriuwi|
Many of these techniques rely on the laryngeal being preceded by a vowel, and so they are not readily applicable for word-initial laryngeals except in Greek and Armenian. However, occasionally languages have compounds in which a medial vowel is unexpectedly lengthened or otherwise shows the effect of a following laryngeal. This shows that the second word originally began with a laryngeal, and that this laryngeal still existed at the time the compound was formed.
Laryngeals in the Uralic languages
Further evidence of the laryngeals has been found in Uralic languages. While Proto-Uralic and PIE have not been demonstrated to be genetically related, some word correspondences between Uralic and Indo-European have been identified as likely borrowings from very early Indo-European dialects to early Uralic dialects. One example is the widespread word family including on the Uralic side e.g. Hungarian méz, Finnish and Estonian mesi, met(e)-, Mari мӱ /my/, Komi ма /ma/ 'honey', suggesting Proto-Uralic *meti; and on the Indo-European side, English mead, Greek methu 'wine', German Met 'honey wine', Slavic medъ and Sanskrit mádhu 'honey' etc.
There are several criteria for dating such borrowings, the most reliable coming from historical phonology. For example, Finnic porsas, Erzya пурцос /purt͡sos/ and Mokša пурьхц /pur̥ʲt͡s/ 'piglet' presuppose a common proto-form *porćas at an earlier stage of development. This is etymologized as a loanword from PIE *porḱ-, which gives Latin porcus 'hog', Slavic porsę 'pig', OE fearh (> Engl. farrow 'young pig'), Lithuanian par̃šas 'piglet, castrated boar'. Here the borrowing must predate the depalatalisation of the centum languages, as well as the later development into Baltic *š (reflected as Finnish h in borrowings) and medial Iranian *c (reflected as Finnish t). If the PIE distinction between palatovelars and plain velars is instead reconstructed as one of velars and uvulars, a lower limit can also be set for the loan: it must postdate the satemization of *ḱ into a palatalized stop or affricate.
Work particularly associated with research of the scholar Jorma Koivulehto has identified a number of additions to the list of Finnic loanwords from an Indo-European source or sources whose particular interest is the apparent correlation of PIE laryngeals with three post-alveolar phonemes (or their later reflexes) in the Finnic forms. If so, this would point to a great antiquity for the borrowings, since no attested Indo-European language neighbouring Uralic has consonants as reflexes of laryngeals. And it would bolster the idea that laryngeals were phonetically distinctly consonantal.
However, Koivulehto's theories are not universally accepted and have been sharply criticized (e.g. by the Finno-Ugricist Eugene Helimski), because many of the reconstructions involve a great deal of far-fetched hypotheses and the chronology is not in good agreement with the history of Bronze Age and Iron Age migrations in Eastern Europe established by archaeologists and historians.
Three Uralic phonemes have been posited to reflect PIE laryngeals. In post-vocalic positions both the post-alveolar fricatives that ever existed in Uralic are represented: firstly a possibly velar one, theoretically reconstructed much as the PIE laryngeals (conventionally marked *x), in the very oldest borrowings and secondly a grooved one (*š as in shoe becoming modern Finnic h) in some younger ones. The velar plosive k is the third reflex and the only one found word-initially. In intervocalic position the reflex k is probably younger than either of the two former ones. The fact that Finno-Ugric may have plosive reflexes for PIE laryngeals is to be expected under well documented Finnic phonological behaviour and does not mean much for tracing the phonetic value of PIE laryngeals (cf. Finnish kansa 'people' < PGmc *xansā 'company, troupe, party, crowd' (cf. German Hanse), Finnish kärsiä 'suffer, endure' < PGmc *xarđia- 'endure' (cf. E. hard), Finnish pyrkiä < PGmc. *wurk(i)ja- 'work, work for' etc.).
The correspondences do not differentiate between h₁, h₂ and h₃. Thus
- PIE laryngeals correspond to the PU laryngeal *x in wordstems like:
- Finnish na-inen 'woman' / naa-ras 'female' < PU *näxi-/*naxi- < PIE *[gʷnah₂-] = */gʷneh₂-/ > Sanskrit gnā́ 'goddess', OIr. mná (gen. of ben), ~ Greek gunē 'woman' (cognate to Engl. queen)
- Finnish sou-ta- ~ Samic *sukë- 'to row' < PU *suxi- < PIE *sewh-
- Finnish tuo- 'bring' ~ Samic *tuokë- ~ Tundra Nenets tāś 'give' < PU *toxi- < PIE *[doh₃-] = */deh₃-/ > Greek didōmi, Lat. dō-, Old Lith. dúomi 'give', Hittite dā 'take'
- Note the consonantal reflex /k/ in Samic.
- PIE laryngeals correspond to Finnic *h, whose normal origin is a Pre-Finnic fricative *š in wordstems like:
- Finnish rohto 'medical plant, green herb' < PreFi *rošto < PreG *groH-tu- > Gmc. *grōþu 'green growth' > Swedish grodd 'germ (shoot)'
- Old Finnish inhi-(m-inen) 'human being' < PreFi *inši- 'descendant' < PIE *ǵnh₁-(i)e/o- > Sanskrit jā́- 'born, offspring, descendant', Gmc. *kunja- 'generation, lineage, kin'
- PIE laryngeals correspond to Pre-Finnic *k in wordstems like:
- Finnish kesä 'summer' < PFS *kesä < PIE *h₁es-en- (*h₁os-en-/-er-) > Balto-Slavic *eseni- 'autumn', Gothic asans 'summer'
- Finnish kaski 'burnt-over clearing' < Proto-Finnic *kaski < PIE/PreG *[h₂a(h₁)zg-] = */h₂e(h₁)sg-/ > Gmc. *askōn 'ashes'
- Finnish koke- 'to perceive, sense' < PreFi *koki- < PIE *[h₃okw-ie/o] = */h₃ekw-ie/o/ > Greek opsomai 'look, observe' (cognate to Lat. oculus 'eye')
- Finnish kulke- 'to go, walk, wander' ~ Hungarian halad- 'to go, walk, proceed' < PFU *kulki- < PIE *kʷelH-e/o- > Greek pelomai '(originally) to be moving', Sanskrit cárati 'goes, walks, wanders (about)', cognate Lat. colere 'to till, cultivate, inhabit'
- Finnish teke- 'do, make' ~ Hungarian tëv-, të-, tesz- 'to do, make, put, place' < PFU *teki- < PIE *dʰeh₁ > Greek títhēmi, Sanskrit dádhāti 'put, place', but 'do, make' in the western IE languages, e.g. the Germanic forms do, German tun, etc., and Latin faciō (though OE dón and into Early Modern English still sometimes means "put", and still does in Dutch and colloquial German).
This list is not exhaustive, especially when one also considers a number of etymologies with laryngeal reflexes in Finno-Ugric languages other than Finnish. For most cases no other plausible etymology exists. While some single etymologies may be challenged, the case for this oldest stratum itself seems conclusive from the Uralic point of view, and corresponds well with all that is known about the dating of the other most ancient borrowings and about contacts with Indo-European populations. Yet acceptance for this evidence is far from unanimous among Indo-European linguists, some even regard the hypothesis as controversial (see above).
PIE Laryngeals and Proto-Semitic
Several linguists posited a relationship between PIE and Semitic almost immediately after the discovery of Hittite, among them Hermann Möller, though a few, such as Richard Lepsius in 1836, had argued for such a relationship long before the 20th century. The postulated correspondences between the PIE laryngeals and Semitic consonants have been taken as further evidence of the laryngeals' existence. Given here are a few lexical comparisons between the two respective proto-languages.
- Semitic ʼ-b-y 'to want, desire' ~ PIE *[hyebʰ-] 'to fuck'
- Semitic ʼ-m-m/y ~ PIE *[h₁em-] 'to take'
- Semitic ʼin-a 'in', 'on', 'by' ~ PIE *[h₁en-] > Sanskrit ni, ~ Greek enōpḗ
- Semitic ʼanāku ~ PIE *h₁eǵ(hom)- 'I'
- Semitic ʻ-d-w 'to pass (over), move, run' ~ PIE *[weh₂dʰ-] 'to pass through'
- Semitic ʻ-l-y 'to rise, grow, go up, be high' ~ PIE *[h₂el-] 'to grow, nourish'
- Semitic ʻ-k-w: Arabic ʻakā 'to rise, be big' ~ PIE *[h₂ewg-] 'to grow, nourish'
- Semitic ʻl 'next, in addition' ~ PIE *[h₂el-] 'in'
- Semitic: Arabic ʻanan 'side', ʻan 'from, for; upon; in' ~ PIE *[h₂en h₂e/u-] 'on'
Explanation of ablaut and other vowel changes
A feature of Proto-Indo-European morpheme structure was a system of vowel alternations termed ablaut ("alternate sound") by early German scholars and still generally known by that term (except in French, where the term apophonie is preferred). Several different such patterns have been discerned, but the commonest one, by a wide margin, is e/o/∅ alternation found in a majority of roots, in many verb and noun stems, and even in some affixes (the genitive singular ending, for example, is attested as *-es, *-os, and *-s). The different states are called ablaut grades; e-grade and o-grade are together "full grades", and the total absence of any vowel is "zero grade".
Thus the root *sed- "to sit (down)" (roots are traditionally cited in the e-grade, if they have one) has three different shapes: *sed-, *sod-, and *sd-. This kind of patterning is found throughout the PIE root inventory and is transparent:
- *sed-: in Latin sedeō "am sitting", Old English sittan "to sit" < *set-ja- (with umlaut) < *sed-; Greek hédrā "seat, chair" < *sed- (Greek systematically turns word-initial prevocalic s to h, i.e. rough breathing).
- *sod-: in Latin solium "throne" (in Latin l sporadically replaces d between vowels, said by Roman grammarians to be a Sabine trait) = Old Irish suideⁿ /suðʲe/ "a sitting" (all details regular from PIE *sod-yo-m); Gothic satjan = Old English settan "to set" (causative) < *sat-ja- (umlaut again) < PIE *sod-eye-. PIE *se-sod-e "sat" (perfect) > Sanskrit sa-sād-a per Brugmann's law.
- *sd-: in compounds, as *ni- "down" + *sd- = *nisdos "nest": English nest < Proto-Germanic *nistaz, Latin nīdus < *nizdos (all regular developments); Slavic gnězdo < *g-ně-sd-os. The 3pl (third person plural) of the perfect would have been *se-sd-ṛ whence Indo-Iranian *sazdṛ, which gives (by regular developments) Sanskrit sedur /seːdur/.
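The three shapes just listed can be generated mechanically, since e/o/∅ ablaut only manipulates the root vowel. The sketch below assumes plain-ASCII citation forms and covers only simple roots of the *sed- type with a single *e.

```python
# Generate the three ablaut grades of a simple CeC root cited in e-grade.
def ablaut_grades(root: str) -> dict[str, str]:
    return {
        "e-grade": root,
        "o-grade": root.replace("e", "o", 1),
        "zero-grade": root.replace("e", "", 1),
    }

print(ablaut_grades("sed"))  # {'e-grade': 'sed', 'o-grade': 'sod', 'zero-grade': 'sd'}
print(ablaut_grades("pet"))  # {'e-grade': 'pet', 'o-grade': 'pot', 'zero-grade': 'pt'}
```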
Roots *dō and *stā
In addition to the commonplace roots of consonant + vowel + consonant structure, there are also well-attested roots like *dhē- "put, place" and *dō- "give" (mentioned above): these end in a vowel, which is always long in the categories where roots like *sed- have full grades; and in those forms where zero grade would be expected, if before an affix beginning with a consonant, we find a short vowel, reconstructed as *ə, or schwa (more formally, schwa primum indogermanicum). An "independent schwa", like the one in PIE *pǝter- "father", can be identified by the distinctive cross-language correspondences of this vowel that are different from the other five short vowels. (Before an affix beginning with a vowel, there is no trace of a vowel in the root, as shown below.)
Whatever caused a short vowel to disappear entirely in roots like *sed-/*sod-/*sd-, it was a reasonable inference that a long vowel under the same conditions would not quite disappear, but would leave a sort of residue. This residue is reflected as i in Indic while dropping in Iranian; it gives variously e, a, o in Greek; it mostly falls together with the reflexes of PIE *a in the other languages (always bearing in mind that short vowels in non-initial syllables undergo various developments in Italic, Celtic, and Germanic):
- *dō- "give": in Latin dōnum "gift" = Old Irish dán /daːn/ and Sanskrit dâna- (â = ā with tonic accent); Greek dí-dō-mi (reduplicated present) "I give" = Sanskrit dádāmi; Slavic damъ 'I give'. But in the participles, Greek dotós "given" = Sanskrit ditá-, Latin datus all < *də-tó-.
- *stā- "stand": in Greek hístēmi (reduplicated present, regular from *si-stā-), Sanskrit a-sthā-t aorist "stood", Latin testāmentum "testimony" < *ter-stā- < *tri-stā- ("third party" or the like), Slavic sta-ti 'to stand'. But Sanskrit sthitá-"stood", Greek stásis "a standing", Latin supine infinitive statum "to stand".
Conventional wisdom lined up roots of the *sed- and *dō- types as follows:
|Full Grades||Weak Grades||Meaning|
|sed-, sod-||sd-||"sit"|
|dō-||də-||"give"|
But there are other patterns of "normal" roots, such as those ending with one of the six resonants (*y w r l m n), a class of sounds whose peculiarity in Proto-Indo-European is that they are both syllabic (vowels, in effect) and consonants, depending on what sounds are adjacent:
Root *bher-/bhor-/bhṛ- ~ bhr-
- *bher-: in Latin ferō = Greek phérō, Avestan barā, Sanskrit bharāmi, Old Irish biur, Old Norse ber, Old English bere all "I carry"; Slavic berǫ 'I take'; Latin ferculum "bier, litter" < *bher-tlo- "implement for carrying".
- *bhor-: in Gothic and Scandinavian barn "child" (= English dial. bairn), Greek phoréō "I wear [clothes]" (frequentative formation, *"carry around"); Sanskrit bhâra- "burden" (*bhor-o- via Brugmann's law); Slavic vyborъ 'choice'.
- *bhṛ- before consonants: Sanskrit bhṛ-tí- "a carrying"; Gothic gabaurþs /gaˈbɔrθs/, Old English ġebyrd /jəˈbyɹd/, Old High German geburt all "birth" < *gaburdi- < *bhṛ-tí; Slavic bьrati 'to take'.
- *bhr- before vowels: Ved bibhrati 3pl. "they carry" < *bhi-bhr-ṇti; Greek di-phrós "chariot footboard big enough for two men" < *dwi-bhr-o-.
Saussure's insight was to align the long-vowel roots like *dō-, *stā- with roots like *bher-, rather than with roots of the *sed- sort. That is, treating "schwa" not as a residue of a long vowel but, like the *r of *bher-/*bhor-/*bhṛ-, an element that was present in the root in all grades, but which in full grade forms coalesced with an ordinary e/o root vowel to make a long vowel, with "coloring" (changed phonetics) of the e-grade into the bargain; the mystery element was seen by itself only in zero grade forms:
|Full Grades||Zero Grade||Meaning|
|bher-, bhor-||bhṛ- / bhr-||"carry"|
|deX, doX-||dẊ- / dX-||"give"|
(Ẋ = syllabic form of the mystery element)
Saussure treated only two of these elements, corresponding to our *h₂ and *h₃. Later it was noticed that the explanatory power of the theory, as well as its elegance, were enhanced if a third element were added, our *h₁, which has the same lengthening and syllabifying properties as the other two but has no effect on the color of adjacent vowels. Saussure offered no suggestion as to the phonetics of these elements; his term for them, "coefficients sonantiques", was not however a fudge, but merely the term in general use for glides, nasals, and liquids (i.e., the PIE resonants) as in roots like *bher-.
As mentioned above, in forms like *dwi-bhr-o- (etymon of Greek diphrós, above), the new "coefficients sonantiques" (unlike the six resonants) have no reflexes at all in any daughter language. Thus the compound *mṇs-dheH- "to 'fix thought', be devout, become rapt" forms a noun *mṇs-dhH-o- seen in Proto-Indo-Iranian *mazdha- whence Sanskrit medhá- /mēdha/ "sacrificial rite, holiness" (regular development as in sedur < *sazdur, above), Avestan mazda- "name (originally an epithet) of the greatest deity".
There is another kind of unproblematic root, in which obstruents flank a resonant. In the zero grade, unlike the case with roots of the *bher- type, the resonant is therefore always syllabic (being always between two consonants). An example would be *bhendh- "tie, bind":
- *bhendh-: in Germanic forms like Old English bindan "to tie, bind", Gothic bindan; Lithuanian beñdras "chum", Greek peĩsma "rope, cable" /pêːsma/ < *phenth-sma < *bhendh-smṇ.
- *bhondh-: in Sanskrit bandhá- "bond, fastening" (*bhondh-o-; Grassmann's law) = Old Icelandic bant, Old English bænd; Gothic band "he tied" < *(bhe)bhondh-e.
- *bhṇdh-: in Sanskrit baddhá- < *bhṇdh-tó- (Bartholomae's law), Old English gebunden, Gothic bundan; German Bund "league". (English bind and bound show the effects of secondary (Middle English) vowel lengthening; the original length is preserved in bundle.)
This is all straightforward, and such roots fit directly into the overall patterns. Less straightforward are certain roots that sometimes seem to behave like the *bher- type and sometimes to be unlike anything else, with (for example) long syllabics in the zero grades while at times pointing to a two-vowel root structure. These roots are variously called "heavy bases", "dis(s)yllabic roots", and "seṭ roots" (the last being a term from Pāṇini's grammar; it will be explained below).
Root *ǵen, *ǵon, *ǵṇn-/*ǵṇ̄
For example, the root "be born, arise" is given in the usual etymological dictionaries as follows:
- (A) *ǵen-, *ǵon-, *ǵṇn-
- (B) *ǵenə-, *ǵonə-, *ǵṇ̄-
The (A) forms occur when the root is followed by an affix beginning with a vowel; the (B) forms when the affix begins with a consonant. As mentioned, the full-grade (A) forms look just like the *bher- type, but the zero grades always and only have reflexes of syllabic resonants, just like the *bhendh- type; and unlike any other type, there is a second root vowel (always and only *ə) following the second consonant:
- (A) PIE *ǵenos- neut s-stem "race, clan" > Greek (Homeric) génos, -eos, Sanskrit jánas-, Avestan zanō, Latin genus, -eris.
- (B) Greek gené-tēs "begetter, father"; géne-sis < *ǵenə-ti- "origin"; Sanskrit jáni-man- "birth, lineage", jáni-tar- "progenitor, father", Latin genitus "begotten" < genatos.
- (A) Sanskrit janayati "beget" = Old English cennan /kennan/ < *ǵon-eye- (causative); Sanskrit jána- "race" (o-grade o-stem) = Greek gónos, -ou "offspring".
- (B) Sanskrit jajāna 3sg. "was born" < *ǵe-ǵon-e.
- (A) Gothic kuni "clan, family" = OE cynn /künn/, English kin; Rigvedic jajanúr 3pl.perfect < *ǵe-ǵṇn- (a relic; the regular Sanskrit form in paradigms like this is jajñur, a remodeling).
- (B) Sanskrit jātá- "born" = Latin nātus (Old Latin gnātus, and cf. forms like cognātus "related by birth", Greek kasí-gnētos "brother"); Greek gnḗsios "belonging to the race". (The ē in these Greek forms can be shown to be original, not Attic-Ionic developments from Proto-Greek *ā.)
On the term "seṭ". The Pāṇinian term "seṭ" (that is, sa-i-ṭ) is literally "with an /i/". This refers to the fact that roots so designated, like jan- "be born", have an /i/ between the root and the suffix, as we've seen in Sanskrit jánitar-, jániman-, janitva (a gerund). Cf. such formations built to "aniṭ" ("without an /i/") roots, such as han- "slay": hántar- "slayer", hanman- "a slaying", hantva (gerund). In Pāṇini's analysis, this /i/ is a linking vowel, not properly a part of either the root or the suffix. It is simply that some roots are in effect in the list consisting of the roots that (as we would put it) "take an -i-".
But historians have the advantage here: the peculiarities of alternation, the "presence of /i/", and the fact that the only vowel allowed in second place in a root happens to be *ə, are all neatly explained once *ǵenə- and the like were understood to be properly *ǵenH-. That is, the patterns of alternation, from the point of view of Indo-European, were simply those of *bhendh-, with the additional detail that *H, unlike obstruents (stops and *s) would become a syllable between two consonants, hence the *ǵenə- shape in the Type (B) formations, above.
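The historical account of the seṭ/aniṭ split can be caricatured in a few lines of code. In this purely illustrative sketch, a root-final H stands for the laryngeal, and the outputs are simplified ASCII stand-ins for the Sanskrit forms cited above.

```python
# seṭ roots end in a laryngeal, which surfaces as Sanskrit -i- before a
# consonant-initial suffix; aniṭ roots attach the suffix directly.
def sanskrit_stem(root: str, suffix: str) -> str:
    if root.endswith("H"):
        return root[:-1] + "i" + suffix  # seṭ: *ǵenH- type
    return root + suffix                 # aniṭ: han- type

print(sanskrit_stem("janH", "tar"))  # -> "janitar" (cf. jánitar- 'progenitor')
print(sanskrit_stem("han", "tar"))   # -> "hantar"  (cf. hántar- 'slayer')
```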
The startling reflexes of these roots in zero grade before a consonant (in this case, Sanskrit ā, Greek nē, Latin nā, Lithuanian ìn) is explained by the lengthening of the (originally perfectly ordinary) syllabic resonant before the lost laryngeal, while the same laryngeal protects the syllabic status of the preceding resonant even before an affix beginning with a vowel: the archaic Vedic form jajanur cited above is structurally quite the same (*ǵe-ǵṇh₁-ṛ) as a form like *da-dṛś-ur "they saw" < *de-dṛḱ-ṛ.
Incidentally, redesigning the root as *ǵenH- has another consequence. Several of the Sanskrit forms cited above come from what look like o-grade root vowels in open syllables, but fail to lengthen to -ā- per Brugmann's law. All becomes clear when it is understood that in such forms as *ǵonH- before a vowel, the *o is not in fact in an open syllable. And in turn that means that a form like jajāna "was born", which apparently does show the action of Brugmann's law, is actually a false witness: in the Sanskrit perfect tense, the whole class of seṭ roots, en masse, acquired the shape of the aniṭ 3sing. forms. (See Brugmann's law for further discussion.)
There are also roots ending in a stop followed by a laryngeal, as *pleth₂-/*pḷth₂- "spread, flatten", from which Sanskrit pṛthú- "broad" masc. (= Avestan pərəθu-), pṛthivī- fem., Greek platús (zero grade); Skt. prathimán- "wideness" (full grade), Greek platamṓn "flat stone". The laryngeal explains (a) the change of *t to *th in Proto-Indo-Iranian, (b) the correspondence between Greek -a-, Sanskrit -i- and no vowel in Avestan (Avestan pərəθwī "broad" fem. in two syllables vs Sanskrit pṛthivī- in three).
- Caution has to be used in interpreting data from Indic in particular. Sanskrit remained in use as a poetic, scientific, and classical language for many centuries, and the multitude of inherited patterns of alternation of obscure motivation (such as the division into seṭ and aniṭ roots) provided models for coining new forms on the "wrong" patterns. There are many forms like tṛṣita- "thirsty" and tániman- "slenderness", that is, seṭ formations to unequivocally aniṭ roots; and conversely aniṭ forms like píparti "fills", pṛta- "filled", to securely seṭ roots (cf. the "real" past participle, pūrṇá-). Sanskrit preserves the effects of laryngeal phonology with wonderful clarity, but looks upon the historical linguist with a threatening eye: for even in Vedic Sanskrit, the evidence has to be weighed carefully with due concern for the antiquity of the forms and the overall texture of the data. (It is no help that Proto-Indo-European itself had roots which varied somewhat in their makeup, as *ǵhew- and *ǵhewd-, both "pour"; and some of these "root extensions" as they're called, for want of any more analytical term, are, unluckily, laryngeals.)
Stray laryngeals can be found in isolated or seemingly isolated forms; here the three-way Greek reflexes of syllabic *h₁, *h₂, *h₃ are particularly helpful, as seen below. (Comments on the forms follow.)
- *h₁ in Greek ánemos "wind" (cf. Latin animus "breath, spirit; mind", Vedic aniti "breathes") < *anə- "breathe; blow" (now *h₂enh₁-). Perhaps also Greek híeros "mighty, super-human; divine; holy", cf. Sanskrit iṣirá- "vigorous, energetic".
- *h₂ in Greek patḗr "father" = Sanskrit pitár-, Old English fæder, Gothic fadar, Latin pater. Also *meǵh₂ "big" neut. > Greek méga, Sanskrit máhi.
- *h₃ in Greek árotron "plow" = Welsh aradr, Old Norse arðr, Lithuanian árklas.
The Greek forms ánemos and árotron are particularly valuable because the verb roots in question are extinct in Greek as verbs. This means that there is no possibility of some sort of analogical interference, as for example happened in the case of Latin arātrum "plow", whose shape has been distorted by the verb arāre "to plow" (the exact cognate to the Greek form would have been *aretrum). It used to be standard to explain the root vowels of Greek thetós, statós, dotós "put, stood, given" as analogical. Most scholars nowadays probably take them as original, but in the case of "wind" and "plow", the argument can't even come up.
Regarding Greek híeros, the pseudo-participle affix *-ro- is added directly to the verb root, so *ish₁-ro- > *isero- > *ihero- > híeros (with regular throwback of the aspiration to the beginning of the word), and Sanskrit iṣirá-. There seems to be no question of the existence of a root *eysH- "vigorously move/cause to move". If the thing began with a laryngeal, and most scholars would agree that it did, it would have to be *h₁-, specifically; and that's a problem. A root of the shape *h₁eysh₁- is not possible. Indo-European had no roots of the type *mem-, *tet-, *dhredh-, i.e., with two copies of the same consonant. But Greek attests an earlier (and rather more widely attested) form of the same meaning, híaros. If we reconstruct *h₁eysh₂-, all of our problems are solved in one stroke. The explanation for the híeros/híaros business has long been discussed, without much result; laryngeal theory now provides the opportunity for an explanation which did not exist before, namely metathesis of the two laryngeals. It is still only a guess, but it is a much simpler and more elegant guess than the guesses available before.
The syllabic *h₂ in *ph₂ter- "father" might not really be isolated. Certain evidence shows that the kinship affix seen in "mother, father" etc. might actually have been *-h₂ter- instead of *-ter-. The laryngeal syllabified after a consonant (thus Greek patḗr, Latin pater, Sanskrit pitár-; Greek thugátēr, Sanskrit duhitár- "daughter") but lengthened a preceding vowel (thus, say, Latin māter "mother", frāter "brother") — even when the "vowel" in question was a syllabic resonant, as in Sanskrit yātaras "husbands' wives" < *yṆt- < *yṇ-h₂ter-.
Laryngeals in morphology
Like any other consonant, laryngeals feature in the endings of verbs and nouns and in derivational morphology, the only difference being the greater difficulty of telling what's going on. Indo-Iranian, for example, can retain forms that pretty clearly reflect a laryngeal, but there is no way of knowing which one.
The following is a rundown of laryngeals in Proto-Indo-European morphology.
- *h₁ is seen in the instrumental ending (probably originally indifferent to number, like English expressions of the type by hand and on foot). In Sanskrit, feminine i- and u-stems have instrumentals in -ī, -ū, respectively. In the Rigveda, there are a few old a-stems (PIE o-stems) with an instrumental in -ā; but even in that oldest text the usual ending is -enā, from the n-stems.
- Greek has some adverbs in -ē, but more important are the Mycenaean forms like e-re-pa-te "with ivory" (i.e. elephantē? -ě?)
- The marker of the neuter dual was *-iH, as in Sanskrit bharatī "two carrying ones (neut.)", nāmanī "two names", yuge "two yokes" (< yuga-i? *yuga-ī?). Greek to the rescue: the Homeric form ósse "the (two) eyes" is manifestly from *h₃ekʷ-ih₁ (formerly *okʷ-ī) via fully regular sound laws (intermediately *okʷye).
- *-eh₁- derives stative verb senses from eventive roots: PIE *sed- "sit (down)": *sed-eh₁- "be in a sitting position" (> Proto-Italic *sed-ē-ye-mos "we are sitting" > Latin sedēmus). It is clearly attested in Celtic, Italic, Germanic (the Class IV weak verbs), and Baltic/Slavic, with some traces in Indo-Iranian (In Avestan the affix seems to form past-habitual stems).
- It seems likely, though it is less certain, that this same *-h₁ underlies the nominative-accusative dual in o-stems: Sanskrit vṛkā, Greek lúkō "two wolves". (The alternative ending -āu in Sanskrit cuts a small figure in the Rigveda, but eventually becomes the standard form of the o-stem dual.)
- *-h₁s- derives desiderative stems as in Sanskrit jighāṃsati "desires to slay" < *gʷhi-gʷhṇ-h₁s-e-ti- (root *gʷhen-, Sanskrit han- "slay"). This is the source of Greek future tense formations and (with the addition of a thematic suffix *-ye/o-) the Indo-Iranian one as well: bhariṣyati "will carry" < *bher-h₁s-ye-ti.
- *-yeh₁-/*-ih₁- is the optative suffix for root verb inflections, e.g. Latin (old) siet "may he be", sīmus "may we be", Sanskrit syāt "may he be", and so on.
- *h₂ is seen as the marker of the neuter plural: *-h₂ in the consonant stems, *-eh₂ in the vowel stems. Much leveling and remodeling is seen in the daughter languages that preserve any ending at all, thus Latin has generalized *-ā throughout the noun system (later regularly shortened to -a), Greek generalized -ǎ < *-h₂.
- The categories "masculine/feminine" plainly did not exist in the most original form of Proto-Indo-European, and there are very few noun types which are formally different in the two genders. The formal differences are mostly to be seen in adjectives (and not all of them) and pronouns. Both types of derived feminine stems feature *h₂: a type that is patently derived from the o-stem nominals; and an ablauting type showing alternations between *-yeh₂- and *-ih₂-. Both are peculiar in having no actual marker for the nominative singular, and at least as far as the *-eh₂- type, two things seem clear: it is based on the o-stems, and the nom.sg. is probably in origin a neuter plural. (An archaic trait of Indo-European morpho-syntax is that plural neuter nouns construe with singular verbs, and quite possibly *yugeh₂ was not so much "yokes" in our sense, but "yokage; a harnessing-up".) Once that much is thought of, however, it is not easy to pin down the details of the "ā-stems" in the Indo-European languages outside of Anatolia, and such an analysis sheds no light at all on the *-yeh₂-/*-ih₂- stems, which (like the *eh₂-stems) form feminine adjective stems and derived nouns (e.g. Sanskrit devī- "goddess" from deva- "god") but unlike the "ā-stems" have no foundation in any neuter category.
- *-eh₂- seems to have formed factitive verbs, as in *new-eh₂- "to renew, make new again", as seen in Latin novāre, Greek neáō and Hittite ne-wa-aḫ-ḫa-an-t- (participle) all "renew" but all three with the pregnant sense of "plow anew; return fallow land to cultivation".
- *-h₂- marked the 1st person singular, with a somewhat confusing distribution: in the thematic active (the familiar -ō ending of Greek and Latin, and Indo-Iranian -ā(mi)), and also in the perfect tense (not really a tense in PIE): *-h₂e as in Greek oîda "I know" < *woyd-h₂e. It is the basis of the Hittite ending -ḫḫi, as in da-aḫ-ḫi "I take" < *-ḫa-i (original *-ḫa embellished with the primary tense marker with subsequent smoothing of the diphthong).
- *-eh₃ may be tentatively identified in a "directive case". No such case is found in Indo-European noun paradigms, but such a construct accounts for a curious collection of Hittite forms like ne-pi-ša "(in)to the sky", ták-na-a "to, into the ground", a-ru-na "to the sea". These are sometimes explained as o-stem datives in -a < *-ōy, an ending clearly attested in Greek and Indo-Iranian, among others, but there are serious problems with such a view, and the forms are highly coherent, functionally. And there are also appropriate adverbs in Greek and Latin (elements lost in productive paradigms sometimes survive in stray forms, like the old instrumental case of the definite article in English expressions like the more the merrier): Greek ánō "upwards", kátō "downwards", Latin quō "whither?", eō "to that place"; and perhaps even the Indic preposition/preverb â "to(ward)" which has no satisfactory competing etymology. (These forms must be distinguished from the similar-looking ones formed to the ablative in *-ōd and with a distinctive "fromness" sense: Greek ópō "whence, from where".)
Throughout its history, the laryngeal theory in its various forms has been subject to extensive criticism and revision.
The original argument of Saussure was not accepted by any of the Neogrammarians, the school, primarily based at the University of Leipzig, then reigning at the cutting-edge of Indo-European linguistics. Several of them attacked the Mémoire savagely. Osthoff's criticism was particularly virulent, often descending into personal invective.
For the first half-century of its existence, the laryngeal theory was widely seen as ‘an eccentric fancy of outsiders’. In Germany it was totally rejected. Among its early proponents were Hermann Möller, who extended Saussure’s system with a third, non-colouring laryngeal, Albert Cuny, Holger Pedersen and Karl Oštir. The fact that these scholars were engaged in highly speculative long-range linguistic comparison further contributed to its isolation.
Although the founding fathers were able to provide some indirect evidence of a lost consonantal element (for example, the origin of the Indo-Iranian voiceless aspirates in *CH sequences and the ablaut pattern of the so-called heavy bases, *CeRə- ~ *CR̥̄- in the traditional formulation), the direct evidence so crucial for the Neogrammarian thinking was lacking. Saussure’s structural considerations were foreign to the leading contemporary linguists.
After Kuryłowicz’s convincing demonstration that the Hittite language preserved at least some of Saussure’s coefficients sonantiques, the focus of the debate shifted. It was still unclear how many laryngeals are to be posited to account for the new facts and what effect they have had exactly. Kuryłowicz, after a while, settled on four laryngeals, an approach further accepted by Sapir, Sturtevant, and through them much of American linguistics. The three-laryngeal system was defended, among others, by Walter Couvreur and Émile Benveniste. Many individual proposals were made, which assumed up to ten laryngeals (André Martinet). While some scholars, like Heinz Kronasser and Giuliano Bonfante, attempted to disregard Anatolian evidence altogether, the ‘minimal’ serious proposal (with roots in Pedersen’s early ideas) was put forward by Hans Hendriksen, Louis Hammerich and later Ladislav Zgusta, who assumed a single /H/ phoneme without vowel-colouring effects.
By the 2000s, however, a widespread, though not unanimous, agreement was reached in the field on reconstructing Möller’s three laryngeals. One of the last major critics of this approach was Oswald Szemerényi, who subscribed to a theory similar to Zgusta’s (Szemerényi 1996).
- Zair, N., The Reflexes of the Proto-Indo-European Laryngeals in Celtic (Brill, 2012), pp. 3-4.
- Mallory, J. P.; Adams, Douglas Q. (2006). The Oxford Introduction to Proto-Indo-European and the Proto-Indo-European World. Oxford University Press. p. 55. ISBN 978-0-19-929668-2.
- Encyclopedia of Indo-European culture By J. P. Mallory, Douglas Q. Adams Edition: illustrated Published by Taylor & Francis, 1997 ISBN 1-884964-98-2, ISBN 978-1-884964-98-5 pp. 9-10, 13-14, 55.
- Rasmussen (1999), p. 77
- Rasmussen (1999), p. 71
- Rasmussen (1999), p. 76
- Kloekhorst, Alwin (2004). "The Preservation of *h₁ in Hieroglyphic Luwian. Two Separate a-Signs". Historische Sprachforschung. 117: 26–49.
- Kloekhorst, Alwin (2006). "Initial Laryngeals in Anatolian". Historische Sprachforschung. 119: 77–108.
- Rieken, Elisabeth (2010). "Review of A. Kloekhorst, Etymological Dictionary of the Hittite Inherited Lexicon". Kratylos. 55: 125–33.
- Melchert, Craig (2010). "Spelling of Initial /a-/ in Hieroglyphic Luwian". In Singer, Itamar (ed.). Ipamati kistamati pari tumatimis. Tel Aviv University: Institute of Archaeology. pp. 147–58.
- Weeden, Mark (2011). "Spelling, phonology and etymology in Hittite historical linguistics". Bulletin of the School of Oriental and African Studies. 74: 59–76. doi:10.1017/s0041977x10000716.
- Simon, Zsolt (2010). "Das Problem der phonetischen Interpretation der anlautenden scriptio plena im Keilschriftluwischen". Babel und Bibel. 4: 249–65.
- Simon, Zsolt (2013). "Once again on the Hieroglyphic Luwian sign *19 〈á〉". Indogermanische Forschungen. 118: 1–22, page 17. doi:10.1515/indo.2013.118.2013.1.
- Watson, Janet C. E. (2002). The Phonology and Morphology of Arabic. Oxford Univ. Press. p. 46. Retrieved 2012-03-18.
- Weiss, Michael (2016). "The Proto-Indo-European Laryngeals and the Name of Cilicia in the Iron Age". In Byrd, Andrew Miles; DeLisi, Jessica; Wenthe, Mark (eds.). Tavet Tata Satyam: Studies in honor of Jared H. Klein on the Occasion of His Seventieth Birthday. Ann Arbor: Beech Stave Press. pp. 331–340.
- Kümmel, Martin (November 2012). "On historical phonology, typology, and reconstruction" (PDF). Enlil.ff.cuni.cz. Institute of Comparative Linguistics, Charles University, Prague. p. 4. Retrieved 17 June 2019.
- Clackson p. 56.
- Clackson p. 58.
- Ringe pp. 68–70
- Kümmel, Martin (2016). "Is ancient old and modern new? Fallacies of attestation and reconstruction (with special focus on Indo-Iranian)". Proceedings of the 27th Annual UCLA Indo-European Conference. Bremen: Hempen.
- Ramat p. 41.
- Clackson p. 57.
- Clackson p. 58
- Palmer pp. 216–218
- Palmer pp. 219–220
- Ringe pp. 73–74
- Ringe pp. 74–75
- http://inslav.ru/images/stories/books/BSI1988-1996(1997).pdf (in Russian)
- De Mauro, Tullio (1972). "Notes bibliographiques et critiques sur F. de Saussure". Cours de linguistique générale. By de Saussure, Ferdinand. Paris: Payot. pp. 327–328. ISBN 2-22-850070-4.
- Szemerényi 1996, p. 123.
- Szemerényi 1996, p. 134.
- Cuny, Albert (1912). "Notes de phonétique historique. Indo-européen et sémitique". Revue de phonétique. 2.
- Kuryłowicz, Jerzy (1927). "ə indo-européen et ḫ hittite". In Taszycki, Witold; Doroszewski, Witold (eds.). Symbolae grammaticae in honorem Ioannis Rozwadowski. Kraków: Gebethner & Wolff.
- Kuryłowicz, Jerzy (1935). "Sur les éléments consonantiques disparus en indoeuropéen". Études indoeuropéens. Kraków: Gebethner & Wolff.
- Meier-Brügger, Michael (2003). Indo-European Linguistics. Berlin/New York: De Gruyter. p. 107. ISBN 3-11-017433-2.
- Lehrman, Alexander (2002). "Indo-Hittite laryngeals in Anatolian and Indo-European". In Shevoroshkin, Vitaly; Sidwell, Paul (eds.). Anatolian languages. Canberra: Association for the History of Language. ISBN 0-95-772514-0.
- Voyles, Joseph; Barrack, Charles (2015). On Laryngealism. A Coursebook in the History of a Science. München: Lincom. ISBN 3-86-288651-4.
- Beekes, Robert S. P. (1969). The Development of Proto-Indo-European Laryngeals in Greek (Thesis). The Hague: Mouton.
- Beekes, Robert S. P. (1995). Comparative Indo-European Linguistics: An Introduction. Amsterdam: John Benjamins. ISBN 1-55619-504-4.
- Clackson, James (2007). Indo-European Linguistics: An Introduction. Cambridge: Cambridge University Press. ISBN 978-0-521-65367-1.
- Feuillet, Jack (2016). "Quelques réflexions sur la reconstruction du système phonologique indo-européen". Historische Sprachforschung. 129: 39–65. doi:10.13109/hisp.2016.129.1.39.
- Koivulehto, Jorma (1991). Uralische Evidenz für die Laryngaltheorie, Veröffentlichungen der Komission für Linguistik und Kommunikationsforschung nr. 24. Wien: Österreichische Akademie der Wissenschaften. ISBN 3-7001-1794-9.
- Koivulehto, Jorma (2001). "The earliest contacts between Indo-European and Uralic speakers in the light of lexical loans". In C. Carpelan; A. Parpola; P. Koskikallio (eds.). The earliest contacts between Uralic and Indo-European: Linguistic and Archeological Considerations. Helsinki: Mémoires de la societé Finno-Ougrienne 242. pp. 235–263. ISBN 952-5150-59-3.
- Lehmann, Winfred P. (1993). Theoretical Bases of Indo-European Linguistics, see pp. 107-110. London: Routledge.
- Lindeman, Frederik Otto (1970). Einführung in die Laryngaltheorie. Berlin: Walter de Gruyter & Co.
- Lindeman, Frederik Otto (1997). Introduction to the Laryngeal theory. Innsbruck: Institut für Sprachwissenschaft der Universität Innsbruck.
- Möller, Hermann (1970). Vergleichendes indogermanisch-semitisches Wörterbuch. Göttingen: Vandenhoek & Ruprecht.
- Palmer, F.R. (1995). The Greek Language. London: Bristol Classical Press. ISBN 1-85399-466-9.
- Ramat, Anna Gicalone & Paolo (1998). The Indo-European Languages. Abingdon & New York: Routledge. ISBN 978-0-415-41263-6.
- Rasmussen, Jens Elmegård (1999). "Determining Proto-Phonetics by Circumstantial Evidence: The Case of the Indo-European laryngeals". Selected Papers on Indo-European Linguistics. Copenhagen: Museum of Tusculanum Press. pp. 67–81. ISBN 87-7289-529-2.
- Ringe, Don (2006). From Proto-Indo-European to Proto-Germanic (A Linguistic History of English Volume 1). New York: Oxford University Press. ISBN 978-0-19-955229-0.
- Rix, Helmut (1976). Historische Grammatik der Griechischen: Laut- und Formenlehre. Darmstadt: Wissenschaftliche Buchgesellschaft.
- Saussure, Ferdinand de (1879). Mémoire sur le système primitif des voyelles dans les langues indo-européennes. Leipzig: Vieweg.
- Szemerényi, Oswald (1996). Introduction to Indo-European Linguistics. Oxford: Clarendon Press.
- Sihler, Andrew (1996). New Comparative Grammar of Greek and Latin. Oxford: Oxford University Press.
- Winter, Werner, ed. (1965). Evidence for Laryngeals (2nd. ed.). The Hague: Mouton.
- "Proto-Indo-European phonology (Nonstandard and Theoretical)". Retrieved 11 November 2005.
- Kortlandt, Frederik (2001): Initial laryngeals in Anatolian (pdf)
- Lexicon of Early Indo-European Loanwords Preserved in Finnish
The hash() function returns the hash value of an object as a fixed-size integer. The object can be of various types, including numbers, strings, tuples, or custom objects that have implemented the __hash__() method. The hash value is computed using a hashing algorithm specific to the object’s type. It’s worth noting that hash() is a built-in function in Python and doesn’t require any import statements. Hash values are useful for efficient comparison of dictionary keys during lookups.
The hash() function takes a single argument, object, which represents the object from which to obtain the hash value. The object can be of any hashable type, such as numbers, strings, tuples, or custom objects that implement the __hash__() method.
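As a minimal illustration (the exact integers are implementation-dependent, and string hashes vary between interpreter runs because of Python’s hash randomization):

# Hash values of some built-in hashable types
print(hash(42))          # in CPython, small integers hash to themselves
print(hash("Hello"))     # varies between runs due to hash randomization
print(hash((1, 2, 3)))   # a tuple is hashable if all its elements are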
The example below begins by defining a class called MyClass with an attribute called value. The __hash__() method is implemented to customize the hashing behavior, based on the value attribute, using the built-in hash() function.
Two instances of MyClass, obj1 and obj2, are created with different values. The hash() function is used to calculate the hash values of these objects, and these values are then printed to the console.
This example demonstrates how to customize the hash function for a class using the __hash__() method. The hash() function allows us to obtain the hash value of an object, which is an integer used for quick comparison and dictionary key lookups.
# Define a class
class MyClass:
    def __init__(self, value):
        self.value = value

    def __hash__(self):
        return hash(self.value)

# Create instances of MyClass
obj1 = MyClass(42)
obj2 = MyClass("Hello")

# Check the hash values
print(hash(obj1))
print(hash(obj2))
In the example below, we define my_tuple as (1, 2, 3). Subsequently, we use the hash() function to obtain the hash value of my_tuple, followed by a call to print the result.
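A sketch of what that example might look like (the printed integer is implementation-dependent):

# Define a tuple
my_tuple = (1, 2, 3)

# Obtain and print the tuple's hash value
print(hash(my_tuple))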
A Note On Vectors...
A vector is an object in a multidimensional space. It is represented by its components, measured on a reference system. A reference system is a series of vectors from which the entire space can be generated.
A commonly used mathematical notation for a vector is a lowercase bold letter – v, for example. So, if the set of vectors u1,...,un is a reference system for a space with n dimensions, then any vector of the space can be written as

v = v1u1 + ··· + vnun,

where v1,…,vn are real numbers in the case of a real space. These numbers are called the components of the vector.
A matrix is a linear operator that maps vectors from one space to vectors in another space, not necessarily of the same dimension. This means that the application of a matrix to a vector is another vector. A matrix is commonly represented with an uppercase bold letter – M, for example. The application of the matrix M to the vector v is denoted by M • v. The fact that a matrix is a linear operator means that
M • (αu + βv) = αM • u + βM • v,
for any matrix M, any vectors u and v, and any numbers α and β.
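A quick numerical check of this linearity property, as a minimal sketch in Python with NumPy (the matrix and vectors are arbitrary illustrative values):

import numpy as np

# An arbitrary 2x3 matrix: maps vectors of a 3-dimensional space
# to vectors of a 2-dimensional space
M = np.array([[1.0, 2.0, 0.0],
              [0.5, -1.0, 3.0]])

# Two arbitrary vectors and two arbitrary scalars
u = np.array([1.0, 0.0, 2.0])
v = np.array([-1.0, 4.0, 0.5])
alpha, beta = 2.0, -3.0

# Applying M to a linear combination equals the same
# linear combination of the individually transformed vectors
left = M @ (alpha * u + beta * v)
right = alpha * (M @ u) + beta * (M @ v)
print(np.allclose(left, right))  # prints: True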
Chapter 7: Pedagogical differences between media
7.2.1 The unique pedagogical features of text
Ever since the invention of the Gutenberg press, print has been a dominant teaching technology, arguably at least as influential as the spoken word of the teacher. Even today, textbooks, mainly in printed format, but increasingly also in digital format, still play a major role in formal education, training and distance education. Many fully online courses still make extensive use of text-based learning management systems and online asynchronous discussion forums.
Why is this? What makes text such a powerful teaching medium, and will it remain so, given the latest developments in information technology?
7.2.1.1 Presentational features
Text can come in many formats, including printed textbooks, text messages, novels, magazines, newspapers, scribbled notes, journal articles, essays, online asynchronous discussions and so on.
The key symbol systems in text are written language (including mathematical symbols) and still graphics, which would include diagrams, tables, and copies of images such as photographs or paintings. Colour is an important attribute for some subject areas, such as chemistry, geography and geology, and art history.
Some of the unique presentational characteristics of text are as follows:
- text is particularly good at handling abstraction and generalisation, mainly through written language;
- text enables the linear sequencing of information in a structured format;
- text can present and separate empirical evidence or data from the abstractions, conclusions or generalisations derived from the empirical evidence;
- text’s linear structure enables the development of coherent, sequential argument or discussion;
- at the same time text can relate evidence to argument and vice versa;
- text’s recorded and permanent nature enables independent analysis and critique of its content;
- still graphics such as graphs or diagrams enable knowledge to be presented differently from written language, either providing concrete examples of abstractions or offering a different way of representing the same knowledge.
There is some overlap of each of these features with other media, but no other medium combines all these characteristics, or is as powerful as text with respect to these characteristics.
Earlier (Chapter 2, Section 2.7.3) I argued that academic knowledge is a specific form of knowledge that has characteristics that differentiate it from other kinds of knowledge, and particularly from knowledge or beliefs based solely on direct personal experience. Academic knowledge is a second-order form of knowledge that seeks abstractions and generalizations based on reasoning and evidence.
Fundamental components of or criteria for academic knowledge are:
- codification: knowledge can be consistently represented in some form (words, symbols, video);
- transparency: the source of the knowledge can be traced and verified;
- reproduction: knowledge can be reproduced or have multiple copies;
- communicability: knowledge must be in a form such that it can be communicated and challenged by others.
Text meets all four criteria above, so it is an essential medium for academic learning.
7.2.1.2 Skills development
Because of text’s ability to handle abstractions, and evidence-based argument, and its suitability for independent analysis and critique, text is particularly useful for developing the higher learning outcomes required at an academic level, such as analysis, critical thinking, and evaluation.
It is less useful for showing processes or developing manual skills, for instance.
7.2.2 The book and knowledge
Although text can come in many formats, I want to focus particularly on the role of the book, because of its centrality in academic learning. The book has proved to be a remarkably powerful medium for the development and transmission of academic knowledge, since it meets all four of the components required for presenting academic knowledge, but to what extent can new media such as blogs, wikis, multimedia, and social media replace the book in academic knowledge?
New media can in fact meet some of these criteria just as well, and indeed provide added value, such as speed of reproduction and ubiquity, but the book still has some unique qualities. A key advantage of a book is that it allows for the development of a sustained, coherent, and comprehensive argument, with evidence to support the argument. Blogs can do this only to a limited extent (otherwise they cease to be blogs and become articles or a digital book).
Quantity is sometimes important: books allow for the collection of a great deal of evidence and supporting argument, and for a wider exploration of an issue or theme, within a relatively condensed and portable format. A consistent and well-supported argument, with evidence, alternative explanations or even counter-positions, requires the extra ‘space’ of a book. Above all, books can provide coherence, or a sustained, particular position or approach to a problem or issue – a necessary balance to the chaos and confusion of the many new forms of digital media that constantly compete for our attention, but in much smaller ‘chunks’ that are overall more difficult to integrate and digest.
Another important academic feature of text is that it can be carefully scrutinised, analysed and constantly checked, partly because it is largely linear, and also permanent once published, enabling more rigorous challenge or testing in terms of evidence, rationality, and consistency. Multimedia in recorded format can come close to meeting these criteria, but text can also provide more convenience and, in media terms, more simplicity. For instance, I repeatedly find analysing video, which incorporates many variables and symbol systems, more complex than analysing a linear text, even if both contain equally rigorous (or equally sloppy) arguments.
7.2.2.1 Form and function
Does the form or technological representation of a book matter any more? Is a book still a book if downloaded and read on an iPad or Kindle, rather than as printed text?
For the purposes of knowledge acquisition, it probably isn’t any different. Indeed, for study purposes, a digital version is probably more convenient because carrying an iPad around with maybe hundreds of books downloaded on it is certainly preferable to carrying around the printed versions of the same books. There are still complaints by students about the difficulties of annotating e-books, but this will almost certainly become a standard feature available in the future.
If the whole book is downloaded, then the function of a book doesn’t change much just because it is available digitally. However, there are some subtle changes. Some would argue that scanning is still easier with a printed version. Have you ever had difficulty finding a particular quotation in a digital book compared with the printed version? Sure, you can use the search facility, but that means knowing exactly the correct words or the name of the person being quoted. With a printed book, I can often find a quotation just by flicking the pages, because I am using context and rapid eye scanning to locate the source, even when I don’t know exactly what I am looking for. On the other hand, searching when you do know what you are looking for (e.g. a reference by a particular author) is much easier digitally.
When books are digitally available, users can download only the selected chapters that are of interest to them. This is valuable if you know just what you want, but there are also dangers. For instance, in my book on the strategic management of technology (Bates and Sangrà, 2011), the last chapter summarizes the rest of the book. If the book had been digital, the temptation would be to download just the final chapter. You’d have all the important messages in the book, right? Well, no. What you would be missing is the evidence for the conclusions. The book on strategic management is based on case studies, so it would be really important to check how the case studies were interpreted to reach the conclusions, as this will affect the confidence you would have as a reader in the conclusions that were drawn. If only the last chapter is downloaded, you also lose the context of the whole book. Having the whole book gives readers more freedom to interpret and add their own conclusions than just having a summary chapter.
In conclusion, then, there are advantages and disadvantages of digitizing a book, but the essence of a book is not greatly changed when it becomes digital rather than printed.
7.2.2.2 A new niche for books in academia
We have seen historically that new media often do not entirely replace an older medium, but the old medium finds a new ‘niche’. Thus television did not lead to the complete demise of radio. Similarly, I suspect that there will be a continued role for the book in academic knowledge, enabling the book (whether digital or printed) to thrive alongside new media and formats in academia.
However, books that retain their value academically will likely need to be much more specific in their format and their purpose than has been the case to date. For instance, I see no future for books consisting mainly of a collection of loosely connected but semi-independent chapters from different authors, unless there is a strong cohesion and edited presence that provides an integrated argument or consistent set of data across all the chapters. Most of all, books may need to change some of their features, to allow for more interaction and input from readers, and more links to the outside world. It is much less likely, though, that books will survive in a printed format, because digital publication allows for many more features to be added, reduces the environmental footprint, and makes text much more portable and transferable.
Lastly, this is not an argument for ignoring the academic benefits of new media. The value of graphics, video and animation for representing knowledge, the ability to interact asynchronously with other learners, and the value of social networks, are all under-exploited in academia. But text and books are still important.
For another perspective on this, see Clive Shepherd’s blog: Weighing up the benefits of traditional book publishing.
7.2.3 Text and other forms of knowledge
I have focused particularly on text and academic knowledge, because of the traditional importance of text and printed knowledge in academia. The unique pedagogical characteristics of text, though, may matter less for other forms of knowledge. Indeed, multimedia may have many more advantages in vocational and technical education.
In the K-12 or school sector, text and print are likely to remain important, because reading and writing are likely to remain essential in a digital age; the study of text (digital and printed) will therefore remain important, if only for developing literacy skills.
Indeed, one of the limitations of text is that it requires a high level of prior literacy skills to be used effectively for teaching and learning, and indeed much of teaching and learning is focused on the development of skills that enable rigorous analysis of textual materials. We should, though, be giving as much attention to developing multimedia literacy skills in a digital age.
If text is critical for the presentation of knowledge and development of skills in your subject area, what are the implications for assessment? If students are expected to develop the skills that text appears to develop, then presumably text will be an important medium for assessment. Students will need to demonstrate their own ability to use text to present abstractions, argument and evidence-based reasoning.
In such contexts, composed textual responses, such as essays or written reports, are likely to be necessary, rather than multiple-choice questions or multimedia reports.
7.2.5 More evidence, please
Although there has been extensive research on the pedagogical features of other media such as audio, video and computing, text has generally been treated as the default mode, the base against which other media are compared. As a result, print in particular is largely taken for granted in academia. We are now, though, at the stage where we need to pay much more attention to the unique characteristics of text in its various formats, in relation to other media. Until we have more empirical studies on the unique characteristics of text and print, text will remain central to at least academic teaching and learning.
Activity 7.2 Identifying the unique pedagogical characteristics of text
1. Take one of the courses you are teaching. What key presentational aspects of text are important for this course? Is text the best medium for representing knowledge in your subject area; if not, what concepts or topics would be best represented through other media?
2. Look at the skills listed in Section 1.2 of this book. Which of these skills would best be developed through the use of text rather than other media? How would you do this using text-based teaching?
3. What do you think about books for learning? Do you think the book is dead or about to become obsolete? If you think books are still valuable for learning, what changes, if any, do you think should be made to academic books? What would be lost if books were entirely replaced by new media? What would be gained?
4. Under what conditions would it be more appropriate for students to be assessed through written essays and under what conditions would multimedia portfolios be more appropriate for assessment?
5. Can you think of any other unique pedagogical characteristics of text?
Although there are many publications on text, in terms of typography, structure, and its historical influence on education and culture, I could find no publications where text is compared with other modern media such as audio or video in terms of its pedagogical characteristics, although Koumi (2015) has written about text in combination with audio, and Albert Manguel’s book is also fascinating reading from an historical perspective.
However, I am sure that my lack of references is due to my lack of scholarship in the area. If you have suggestions for readings, please send me an email. Also, a study of the unique pedagogical characteristics of text in a digital age might make for a very interesting and valuable Ph.D. thesis.
Koumi, J. (1994) Media comparisons and deployment: a practitioner’s view, British Journal of Educational Technology, Vol. 25, No. 1.
Koumi, J. (2006). Designing video and multimedia for open and flexible learning. London: Routledge
Koumi, J. (2015) Learning outcomes afforded by self-assessed, segmented video-print combinations Academia.edu (unpublished)
Manguel, A. (1996) A History of Reading London: Harper Collins
Read the assignment. In particular, look for task words like those listed below. These are action words used to explain what the assignment requires.
- What topic(s) do you need to cover?
- Does the assignment have a minimum word count?
- Is there a specific citation style that should be used (e.g., MLA, APA, Chicago)?
- Is there a minimum number of sources that need to be used, and are there specific types of sources required (peer reviewed articles, primary sources, books, news articles)?
Ask the professor about aspects of the assignment that are not clear to you.
Account for: Explain, clarify, give reasons for. Important: This is different from 'Give an account of' which asks you to describe something in detail.
Analyze: Break an issue down into its key components, discuss them, and show how they are related to each other.
Assess: Consider the value or importance of something. Pay attention to positive, negative and/or disputable aspects.
Argue: Present a case based on evidence for and/or against a given point of view.
Comment on: This is more than simply describing or summarizing a topic. It requires an analysis or assessment as well.
Compare: Identify the characteristics or qualities of two or more things and describe their commonalities and differences.
Contrast: Similar to comparing, but with an emphasis on the differences between two or more things.
Criticize: Personally judge the value or truth of something. Indicate the criteria on which you base your judgment and cite specific instances of how the criteria apply in this case.
Define: Make a statement as to the meaning or interpretation of something. If necessary, provide detail as to how it can be distinguished from similar things.
Describe: Spell out the main aspects of an idea or topic, or the sequence in which a series of things occurred.
Discuss: Investigate or examine a topic through an argument. Examine key points and possible interpretations, providing reasons for and against them. Draw a conclusion.
Evaluate: Appraise the worth/quality of something in light of its apparent truth; include your personal opinion. Similar to 'assess'.
Enumerate: List relevant items, possibly in continuous prose rather than note form, and 'describe' them (see above) if necessary.
Examine: Provide an in-depth analysis of a topic and investigate related implications.
Explain: Examine how something works or how it came to be the way it is. This may include a need to 'describe' and 'analyze' (see above).
To what extent...?: Explore the case for a stated proposition or explanation, similar in manner to 'assess' and 'criticize' (see above), but arguing for a less than total acceptance of the proposition.
How far: Similar to 'to what extent...?' (see above)
Identify: Pick out the key features of something, making clear the criteria you use if necessary.
Illustrate: Similar to 'explain' (see above), but includes the use of specific examples, statistics, maps, graphs, sketches, etc.
Interpret: Clarify something or 'explain' (see above), indicating how something relates to a different thing or perspective.
Justify: Express reasons for accepting or rejecting a particular interpretation or conclusion. May include the need to 'argue' (see above) or provide evidence.
Outline: Indicate the main features of a topic or sequence of events, possibly setting them within a structure or framework to show how they are interrelated.
Prove: Demonstrate the truth of something by offering irrefutable evidence and/or a logical sequence of statements leading from evidence to a conclusion.
Reconcile: Show how two seemingly opposed or mutually exclusive ideas or propositions are similar in important respects. Involves the need to 'analyze' and 'justify' (see above).
Relate: Either 'explain' (see above) how things happened or how they are connected in a cause-and-effect sense. May imply 'compare' and 'contrast' (see above).
Review: Survey a topic, with an emphasis on 'assess' rather than 'describe' (see above).
State: Express the main points of an idea or topic in the manner of 'describe' or 'enumerate' (see above).
Summarize: 'State' (see above) the main features of an argument in an efficient and concise manner.
Trace: Identify the connection between one thing and another either in a developmental sense over a period of time, or in a cause-and-effect sense. May imply both 'describe' and 'explain' (see above).
If You Are to Select a Topic
- It should meet all the requirements of the assignment.
- It should be broad enough to give you several research options.
- It should be focused enough so as to not be overwhelming.
Begin Gathering Information
If you are unfamiliar with a topic, learn more about the fundamental aspects of the topic so you can decide how it can be narrowed down. Examples of sources that can provide this type of background information include:
Keywords are words that describe the topic. When using databases to find resources, keywords can be used to find search results relevant to your topic. Some databases also use subject headings, so that all of the articles on a particular topic have the same heading assigned to them.
Places to Find Keywords and Subject Headings
- The Library of Congress list of Subject Headings shows the terms used for topics by the Library of Congress in Washington, D.C. and by libraries all over the United States.
- Medical Subject Headings are the Subject Headings used by the National Library of Medicine in Bethesda, Maryland for topics included in the PubMed database and other medical databases.
- Individual Database subject terms can be found in many research databases. They are often listed in a "Thesaurus" in the database.
EVIDENCE VALIDATES SUMERIAN TALES OF “GODS” FROM NIBIRU by Sasha Alex Lessin, Ph.D. (Anthropology, U.C.L.A.)
Nibiru acted “as a spacecraft that sailed past all the other planets, gave them a chance at repeated close looks.” The Nibirans labeled the planets from the farthest from the sun (Pluto) to the closest (Mercury), one to twelve, with Earth as seven–counting the sun and Earth’s moon as planets; hence Sitchin’s title, The Twelfth Planet [Genesis: 19, 46].
Sumerians lacked telescopes and couldn’t have seen the orbits of Uranus and Neptune that the route maps show. The Nibiran-dictated maps prove the Sumerians had astronomical information they could not have obtained on their own. The maps accurately detail the entire Earth from space, a perspective impossible for ancient Sumerians by themselves. A clay tablet found in the ruins of the Royal Library at Nineveh shows how to travel through the inner solar system along Commander Enlil’s route to Earth: the line that inclines at 45° shows “the spaceship’s descent from a point high, high, high, high through vapor clouds and a lower zone that is vaporless, toward the horizontal point, where the skies and ground meet.” [12th Planet: 275]
“In the skies near the horizontal line, the instructions are ‘set, set, set’ their instruments for the final approach, which should be raised up before reaching the landing point because it had to pass over rugged terrain.” [12th Planet: 276]
Sumerians began their list of solar system planets with those most distant from the sun and beyond human vision: Nibiru, the most distant; then Pluto, Neptune (only found by modern astronomers in 1846) and Uranus (re-discovered in 1781). They next listed planets seen from Earth without telescopes–Saturn, Jupiter, Mars, Earth, and then Earth’s Moon (which they counted as a planet). They listed last Venus, then Mercury, the planets closest to Solaris. The Sumerians wrote that the Nibirans told them of the planets beyond unassisted human vision. The sequence of planets the Nibirans listed reflected their experience when they came to Earth from Nibiru–from beyond the inner solar system toward the sun. Their sequence therefore adds to the evidence that the Nibirans were indeed extraterrestrial astronauts. [Lloyd: 2-39; Time: 4-6; UFOTV: Are We Alone? Genesis Revisited http://enkispeaks.com/2012/09/03/1603/]
The astronomical carving from 5000 BCE on the walls of Fodhla’s tomb in Oldcastle, Ireland, shows the same accurate depiction of our inner solar system that we find on ancient Sumerian seals.
Datum 2: ET “GODS” SAID THEY SAW, & WE MUCH LATER CONFIRMED, WATER ON PLANETS & MOONS
Clay tablets from Sumer say Mars had water. They show water on asteroids, comets, Neptune, Uranus, Venus, Saturn, Jupiter, the rings of Saturn and on Saturn’s and Jupiter’s moons. Our astronomers recently confirmed water where the Sumerian scribes said it was. “Mars once had surface water several meters deep over the whole planet. There’s enough water in Mars’ crust to flood the planet 1000 miles deep. Martian canyons have flowing water below the dry riverbeds. Mars, Venus and Earth confirm Sumerian texts of water ‘below the firmament’ on inner planets.” [Genesis: 53-55; Dark Star: 1-16-17]
Uranus: Our scientists only recently validated water on Uranus. Sumerian scribes long ago said Nibirans said water covered Uranus. Before Voyager 2 proved otherwise, our astronomers dismissed the Sumerian “myth” of water on Uranus. They thought Uranus was made of gas only. Voyager 2 showed Uranus covered with a 6000-mile layer of “superheated water.” [Genesis: 12]
Neptune: Sumerian scribes wrote that Nibiran goldminers marked the orbit, water surface and swamp vegetation on Neptune, three billion miles from Earth, long before Le Verrier and Galle “discovered” Neptune in 1846 (when wobbles in the orbit of Uranus–closer to Earth than Neptune–augured “another celestial body beyond it”). Before Voyager 2 showed Neptune’s “floating slurry mixture of ice water,” Sitchin published Sumerian records of Neptune as “blue-green, watery, with patches of swamplike vegetation.” [Genesis: 5-9]
Datum 3: NIBIRANS NOTED TWIN TRAITS OF URANUS AND NEPTUNE 6000 YEARS AGO

Sumerians recorded Nibirans’ observations that Neptune and Uranus were “twins.” Rings surround both, satellites orbit both, water covers both and both show a blue-green color. Both planets have 16-hour days and “extreme inclination relative to the planets’ axes of rotation.” Sumerians recorded this in 4000 B.C.; NASA didn’t get it till 1989, 6000 years later [Genesis: 13-14].
Datum 4: NIBIRAN MOON INFO PREDATED OURS
Sumerians recorded Nibirans’ hypothesis that satellite moons evolved to planets with their own orbits around the sun instead of around the planet whose satellite they’d been. Modern astronomers too came to this after they saw the twin qualities of Uranus and Neptune, studied Pluto’s orbit, and after Pioneer and Voyager spacecraft showed that “in the past decade Titan, the largest moon of Saturn, was a planet-in-the-making whose detachment from Saturn was not completed.” [Genesis: 16-18]
Scholars dismissed as myths Sumerian tales of moons on outer planets. But Sumerians said the Anunnaki saw moons circling Mars, Jupiter, Saturn, Uranus, and Neptune. In 1610 A.D., Galileo saw four of Jupiter’s; before that “it was unthinkable for a celestial body to have more than one moon, since Earth had just one.” Mars has 2; Jupiter, more than 16; Saturn, more than 21; Uranus, up to 15; and Neptune, 8. What Sumerians said about outer planets’ moons supports the hypothesis that they saw these moons from beyond the inner solar system. [Genesis: 50]
Datum 5: SUMERIANS KNEW FIRST HOW THE MOON FORMED
4.6 billion years ago, when Tiamat–the proto-Earth–orbited Solaris beyond Mars, Tiamat’s moon, Kingu, almost attained solar orbit. But 600 million years later, Nibiru entered the inner Solar System. Nibiru’s moon, Evil Wind, knocked Tiamat into orbit within that of Mars and left Kingu circling Earth. Our Pioneer and Voyager probes sent back evidence that Kingu formed from Tiamat, the planet that became Earth. Tiamat, then beyond Mars, generated Kingu. Glassy material with nickel in the Moon’s rocks validates the likelihood that a moon of Nibiru impacted Kingu 500 million years after Kingu grew into Tiamat’s satellite, when Kingu had almost attained planetary orbit around the Sun. “Tiamat was split in two; one half shattered [and became the asteroids]; the other half, accompanied by Kingu, thrust into a new orbit to become the Earth and its moon.” [Wood, J., 1984, The Origin of The Moon; ZS, 1990, Genesis: 107-131]
The collision depleted most of Kingu’s iron, “resulting in a decrease in its density.” The mass of the Moon’s core “bears the mark of the ‘big whack’” that compressed the moon, just as the Sumerians related. Contrary to views that the moon was always inert, it was found in the 1970s and 1980s to possess all the attributes of a planet except independent orbit around the Sun: rugged mountains, plains and seas formed by water [or] molten lava. It retained a magnetic field caused by rotation of a molten iron core, heat and water, as true of Earth and other planets, until the Evil Wind struck it. “The Moon witnesses the accuracy of ancient knowledge.” Nibirans, who ruled the Sumerians, knew the Moon’s history long before our scientists did.
Datum 6: NIBIRANS GOT ASTEROID MAKEUP RIGHT FIRST
Nibirans described asteroids as pieces of Earth knocked into space when, four billion years ago, a moon of Nibiru struck Tiamat. “Debris from the lower half of Tiamat stretched into space.” Sumerian texts and the biblical version thereof said the asteroid belt, a bracelet of debris, orbited the sun between Jupiter and Mars, “but our astronomers were not aware of that” until in 1801 Piazzi found the first asteroid, Ceres. “It’s taken modern astronomy centuries to find out what Sumerians knew 6,000 years ago.” [Genesis: 51]
Datum 7: SUMERIAN STORY PREDICTED MODERN FINDINGS OF EARTH’S MAKE-UP AND HISTORY
“Earth’s crust, plate tectonics, differences between the continental and oceanic crusts, emergence of Pangaea from under the waters, the primordial encircling ocean: the findings of modern science corroborated ancient knowledge. The only explanation of the way Earth’s landmasses, oceans and atmosphere evolved is a cataclysm four billion years ago. What was that cataclysm? Mankind has possessed the Sumerian answer for six thousand years: The Celestial Battle” between the planets “Nibiru/Marduk and Tiamat.” [Genesis: 88-106]
The Sumerian tale predicted Earth’s geo-features. “In the aftermath of the Celestial Battle, Earth evolved into an independent planet and attained the shape of a globe dictated by the forces of gravity. Waters gathered into the cavity on the torn-off side. Dry land appeared on the other side of the planet. Earth’s crust is 12 miles to 45 miles thick; but in parts taken up by oceans the crust is only 3.5 miles thick. While the average elevation of continents is 2,300 feet, the average depth of oceans is 12,500 feet. The thicker continental crust reaches much further down into the mantle [rock layer], whereas the oceanic crust is a thin layer of solidified sediments. In the Pacific, the crust has been gouged out at some points 7 miles. If we could remove from the Pacific’s floor the crust built up over the last 200 million years, we would arrive at depths 12 miles below the water’s surface and 60 miles below the surface.” [Genesis: 93-98]
Datum 8: NIBIRANS LONG AGO TOLD HOW LIFE EVOLVED ON EARTH
“Scientists now believe Earth’s atmosphere reconstituted initially from gasses spewed out from wounded Earth. Clouds thrown up from these eruptions shielded Earth and it began to cool, the vaporized water condensed and came down in torrential rains. Oxidation of rocks and minerals provided the first reservoir of higher levels of oxygen on Earth; plant life added both oxygen and carbon dioxide to the atmosphere and started the nitrogen cycle with the aid of bacteria. The fifth tablet of the Enuma elish describes the gushing lava as Tiamat’s “spittle” as it poured forth, “assembling the water clouds; after that the foundations of Earth were raised and the oceans gathered” just as the verses of Genesis reiterated. Thereafter life appeared: green herbage upon the continents and “swarms” in the waters.” [Genesis: 134 (Genesis condenses Enuma.)]
3.4 billion years ago, “clays acted as chemical laboratories where inorganic materials were processed into more complex molecules. Inorganic proto-organisms in the clay acted as a template from which living organisms [one-celled microscopic algae like today’s blue-green algae] evolved. Defects in the clays acted as sites where stored energy and chemical directions for the formation of proto-organisms developed.” Green algae was “the precursor of chlorophyllic plants that use sunlight to convert their nutrients to organic compounds, emitting oxygen in the process” after algae spread upon dry land. “For plantlike forms to process oxygen, they needed rocks containing iron to bind the oxygen; free oxygen was still poison to life forms. Such banded-iron formations sank into ocean bottoms as sediments, and the single-celled organisms evolved into multicelled ones in the water. The covering of the lands with algae preceded the emergence of maritime life.” [Genesis: 136-139]
Crick and Orgel, our Nobel laureate scientists, say in “Directed Panspermia” [Icarus, vol. 19] that a technologically advanced society on another planet, in a spaceship with due protection and a life-sustaining environment, seeded Earth. Crick and Orgel “rule out the possibility that the essential genetic material had time to evolve on Earth.” They found the same twenty amino acids in all living organisms on Earth. All Earth’s organisms, when they evolved, incorporated within themselves the same four nucleotides, “that and no other.” [Genesis: 152]
The Nibirans “figured out evolution on Earth.” Maritime vertebrates came 500 million years ago; land vertebrates, 100 million years later. 225 million years ago, fish filled the waters. Sea plants and amphibians moved from water to land. Plants lured amphibians to land; amphibians adapted into egg-laying reptiles. Some reptiles evolved into birds; reptiles on land grew into dinosaurs. 65 million years ago, dinosaurs died out. There is “full agreement here” among the Enuma, Genesis and modern science. [Genesis: 141-145]
Datum 9: NIBIRANS AND EARTHLINGS SHARE DNA
“300,000 years ago, the Anunnaki jumped the gun on evolution and, using genetic engineering, upgraded a hominid, Homo Erectus, to an intelligent, tool-handling Homo Sapiens to be their serf.” It happened in the Great Rift Valley zone of southeast Africa, just north of the goldmining land. “The wild hominid of the Abzu had DNA similar enough to the Anunnaki’s that just a little genetic mixing produced a Being that, according to Sumerians and the Bible, was akin to the ‘gods’ both inwardly and outwardly except for their longevity.”
“All life on Earth, from birds to fishes, flora to algae, and down to bacteria and viruses–all have the very same DNA, the four nucleic acid letters from which all genes and genomes are made. The DNA of the Anunnaki was the same as the DNA of all life on Nibiru. The DNA on Earth and the DNA on Nibiru were the same.” Our genome–less than 30,000 genes–holds 223 genes without evolutionary predecessors. These 223 genes, absent in vertebrate evolution, regulate the human body and mind. The theory of panspermia, that Earth was “seeded from elsewhere,” was incised in clay tablets millennia ago. Nibiru gave Earth its DNA during the Celestial Battle. This “explains how life could begin on Earth in the relatively immediate aftermath of the cataclysm. Since Nibiru, at the time of the collision, already possessed formed DNA, evolution began there much earlier. Just 1% earlier would mean a head start of 45,000,000 Earth-years–more than enough evolutionary time for Nibiru’s astronauts to meet Homo Erectus on Earth.” The planet Nibiru is the “Creator of the Primeval Seed” who “furnished the Seed of Earth,” culminating with “the Seed of All People,” all life stemming from the same DNA. [Giants: 153-160]
We Homo Sapiens “showed up suddenly, 200,000 years ago” in the fossil record, with differences from “any other anthropoid or hominoid” which evolved on Earth. This sudden appearance supports the hypothesis that creationism–which Enki, Ninmah and Ningishzidda claimed on tablets their scribes wrote–was a component of our history (as are both devolution and evolution of earlier humans before the Anunnaki). “Darwin principles do not apply to our unique genesis and subsequent development except as a minor theme in our climatic and incidental regional adaptation.”
Freer lists our differences from the other humans on Earth before Enki et al. created us modern Earthlings: “we have foreheads, hardly any brow ridges, eye sockets far more rectangular than round; relatively tiny nasal passages; small flat mouths and a chin; far less muscular strength and bone density; our skin, sweat process and glands, body hair, throats, and salt management are completely different. Human females do not have an estrus cycle. We are bipedal. Our brains are different. We are a product of a melding of two racial gene codes where quality control was conditioned by practical purposes [creating obedient slaves], and we have some four thousand genetic defects, rather than none as in other species.” [Freer, Sapiens Arising]
Datum 10: NIBIRANS JUGGLED GENOMES BEFORE WE DID
Long before our scientists understood evolution, Nibirans knew the developmental sequence of organisms on Earth. More than 300,000 years ago, they decoded the pan-human genome. They isolated their own, various animals’ and Homo Erectus’ deoxyribonucleic acid (DNA) and mitochondrial DNA (mtDNA) chromosomal sequences. Enki’s symbol, entwined serpents, “emulated the structure of the genetic code, the secret knowledge that enabled Enki to create the Adam and then grant Adam and Eve the ability to procreate.”
Enki built a sterile lab; its air conditioning is “the source of the biblical assertion that after having fashioned the Adam, Elohim ‘blew in his nostrils the breath of life.’” Enki, Ninmah and Ningishzidda mapped chromosomes, genes and genomes. They fertilized ova in test-tube flasks with sperm soaked in Nibiran blood serum and mineral nutrients. They experimented with cloning, cell fusion and recombinant technology–cutting DNA strands with enzymes and targeted viruses, soaking sperm in genetic material to be used for fertilization, and splicing in DNA patches of other species to create, at first, hybrids unable to reproduce. Then Ningishzidda isolated the XX and XY chromosomes that allowed the creation of fertile Nibiran/Erectus mineslaves. [Genesis: 158 – 182, 202]
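In outline, the cut-and-splice procedure attributed to Enki’s team is what modern recombinant work does with restriction enzymes and ligase. A toy sketch, assuming an EcoRI-style recognition site and made-up sequences (nothing here comes from the tablets):
```python
# Toy recombination: cut a host sequence at a recognition site and
# ligate a donor "patch" into the gap, restriction-enzyme style.
SITE = "GAATTC"  # assumption: EcoRI-style recognition sequence, for illustration

def splice(host: str, patch: str, site: str = SITE) -> str:
    """Cut host just after the G of the first site and ligate patch in."""
    cut = host.index(site) + 1  # EcoRI cuts between G and AATTC
    return host[:cut] + patch + host[cut:]

host = "ATGCCGAATTCGGTA"    # hypothetical host DNA
patch = "TTTAAA"            # hypothetical donor patch
print(splice(host, patch))  # ATGCCGTTTAAAAATTCGGTA
```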
Datum 11: NIBIRANS MAPPED ANTARCTIC LAND THE DELUGE DIVULGED
“13,000 years ago, the Ice Age abruptly ended; Antarctica was freed of its ice cover. Its coasts, bays and rivers were seen.” Nibiran Goldmining Expedition personnel, from spacecraft orbiting Earth, saw the Antarctic landmass after the icecap slid into the South Sea. Our ancestors didn’t even know the Antarctic continent existed before A.D. 1820, when British and Russian sailors discovered it. It was then, as it is now, covered by a massive layer of ice; we know the continent’s true shape under the icecap only by means of radar. Yet, as noted in 1958, Antarctica appears–ice-free–on world maps from as early as the fourteenth century A.D., hundreds of years before the discovery of Antarctica.
Datum 12: NIBIRANS MAPPED EDEN’S RIVERS—EVEN IF BURIED & MOVED
The first Nibiran to reach Earth, the deposed Nibiran King Alalu, reported his locale as near the confluence of four rivers, two of which are nowadays extinct. Alalu splashed down in a marsh and came ashore (on land which is nowadays underwater) on the Persian Gulf, where the waters of four rivers met: the Gehon (Karun), which flowed through Iran, and the Pishon, which flowed through Northern Arabia, joined the Tigris and Euphrates in Iraq and flowed together into the Persian Gulf marshes.
Datum 13: ETS LEFT HUGE DOODLES & ROCKET TAKEOFF LINES
Evidence of the last Nibiran spaceport on Earth includes over 800 straight take-off trails atop and next to over 70 huge scraped drawings [geoglyphs] of known and imaginary animals and birds. There are over 150 geometrical drawings.
The lines and geoglyphs were made by removing the topsoil several inches. Each geoglyph is executed with one continuous line that curves and twists without crossing over itself. Attempts to show that a horde of workers, working at ground level and using scrapers, could have created these images failed. Someone airborne used a soil-blasting device to doodle on the ground below.
Nibiran pilots used “the Nazca flatlands in their final spaceport, doodling for fun while killing time before takeoffs.” [Journeys: 192 – 211] The Nazca area is rich in nitrates, a probable component of Anunnaki rocket fuel.
The Astronaut glyph has one hand pointing to the sky, one to the Earth.
A huge spider glyph depicts the constellation Orion; the extension of the spider’s leg on the viewer’s right locates the star Sirius. The glyphs, and the mandalas formed by the layout of Anunnaki structures, may have been signs for off-worlder viewers. In addition to the geoglyphs, “the Nazca Lines run straight and stretch–sometimes narrow, wide, short, or long–over hill and vale no matter the shape of the terrain.” Von Daniken’s researchers found very powerful electromagnetic currents eight feet under the Nazca lines they studied.
The straight lines “crisscross each other, sometimes running over and ignoring the animal drawings.” These were not made with handheld ray guns. The lines are not horizontally level: they run straight over uneven terrain, ignoring hills, ravines and gullies. They are not runways; they are embedded in soil too loose to hold anything as heavy as an airplane. They may be the result of craft taking off, the rocket-engine exhausts leaving the ground “lines.” Childress says the lines are laid out for aircraft incoming over the Pacific Coast to turn left and follow the way to Tiahuanaco, to which the lines point. [AAS5D2]
In one part of Nazca, Linda Howe measured a 6-mile-long, 24-inch-deep perfect triangle that something very heavy pressed into the earth. [Ancient Aliens, Season 5] On a nearby mountain, lines of grooves outline a landing corridor; “circles and squares form a cross, as in a modern heliport.” [Journeys: 212 – 213]
Datum 14: ROCKET IMAGES, DESCRIPTIONS, ROUTE MAPS AND CALCULATIONS
Ancient engravings show spaceports, rockets, launch towers, helicopters, flying saucers, and accounts of take-offs, landings and journeys. Rocket and airplane journeys of Anu, Enlil, Enki/Ea, Anzu, Marduk and Inanna abound.
Nibirans on Earth had “craft that appear over a place, hover awhile, and disappear from sight again.”
Ezekiel, on the banks of the Khabur in northern Mesopotamia, reported “a helicopter consisting of a cabin resting on four posts. The craft sported rotary wings.” They called it a “whirlbird.”
The epic of Gilgamesh details an “ancient account of launching a rocket. First the tremendous thud as the rocket engines ignited (‘the heavens shrieked’), accompanied by the shaking of the ground (‘the earth boomed’). Clouds of smoke and dust enveloped [the Sinai Spaceport] the launching site (‘daylight failed, darkness came’). Then the brilliance of the ignited engines showed through (‘lightning flashed’); as the rocket began to climb skyward, ‘a flame shot up.’ The cloud of dust and debris ‘swelled’ in all directions; then as it began to fall down, ‘it rained death!’ Now the rocket was high in the sky, streaking heavenward (‘the glow vanished; the fire went out’). The rocket was gone from sight; and the debris ‘that had fallen had turned to ashes’” [ZS, 12th Planet: 128 -172]
Israel’s King Solomon flew “in a heavenly car” between his palace in Jerusalem, Queen Makeda’s palace in Ethiopia, and mountaintop platforms in Persia, Kashmir and Tibet. [Childress, 2000: 155 – 156]
Indian texts describe vimanas–Earth-travel and interplanetary Anunnaki rockets, motherships, dirigibles and fighter-planes made of “very light, heat-absorbing metals,” “impregnable, unbreakable, non-combustible, indestructible, capable of coming to a dead stop in a twinkling.” Pilots could shield their vehicles from sight. They could see, hear, record and even paralyze crews inside enemy craft. Hindu literature cites aircraft of the Anunnaki, whom Childress calls “The Rama.” The Mahabharata told of a vimana “with sides of iron and wings.” The Ramayana describes a vimana as a double-decked cylindrical aircraft with portholes and a dome that gave off a humming noise. The Vaimanika Shastra (4th Century BCE) included information on steering, precautions for long flights, protection of airships from storms and lightning, and how to switch the drive to solar or free energy. Vimanas took off straight up and could hover. Vimana pilots controlled the climate within their craft. Vimanas parked in hangars located all over the globe, including Rapa Nui, opposite the Indus Valley civilization. The craft “were propelled by a yellowish-white liquid.” [Childress, 2000: 166 – 168]
Datum 15: NIBIRANS MOVED HEAVIER STONES
Our science still can’t cut, move and fit huge rocks as well as the Anunnaki could. They cut stones as large as 10 tons with huge cutting tools run on power pulled from the earth, stored and amplified by crystals that broadcast energy within pyramids such as the Great Pyramid at Giza and Enki’s pyramids in South Africa. They used white powder of monoatomic gold to lighten iron-laden, magnetically charged stones for transport to construction sites.
The Anunnaki, other extraterrestrials, and then the ancestors of humanity on every continent built megalithic–big-rock–sites on Earth’s ley lines: oscillating telepathic-internet fields where all who worship fuse their consciousness into one group mind and can communicate with the people, and draw on the knowledge, at other megalithic sites.
“Giant telluric [mind-harmonizing telepathic] waves, undulating vertically and linked to the geomagnetic field of Earth, create a network of crisscrossing lines all around the planet.” The builders marked crossing points with standing stones. Elongated shapes–menhirs, steeples or towers–acted as antennae that attracted “cosmo-telluric waves,” which continue to flow through the worldwide network.
“These cosmo-telluric lines” let the Anunnaki, other ETs on Earth and our ancestors “gather a whole body of knowledge.” Hardy contends we can, at these sites, “trigger a shift to a heightened and more spiritual state of consciousness.” Though the ETs chose the sites, our ancestors in every era and on every land anchored them with stones and buildings. Once a site was fixed, our forefathers reinforced its broadcasting power when they prayed there. Our genitors experienced “planetary consciousness when they did rituals at the big-stone sites.” [Sacred Network: 4 – 8]
The chief Nibiran architect, Ningishzidda, planned–and Earthlings built–the gigantic astro-navigation landmark pyramids and Sphinx at Giza. Nibirans made spaceports at Sippar, then on the Sinai Peninsula and the Nazca Plateau in Peru. Ningishzidda directed Lagash’s King Gudea, who built a temple for Ninurta. Nibirans used their know-how and Earthling labor to build rocket silos and airplane hangars in the cities and temple-complexes of Sumer.
Ancient Aliens on the History Channel pictures a rocket landing on the Baalbek platform.
Nibirans built a launch tower at Baalbek, Lebanon for the goldmining expedition, “on a vast horizontal platform, artificially created 4,000 feet above sea level, surrounded by a wall. The enclosed squarish area, 2,500 feet long, over five million square feet, built before the Flood” [13,000 years ago] was “held together without mortar, rising stage after stage, to incredible heights, placed on a vast stone platform. The massive stones formed an enclosure that surrounded a cavity, a hollow within which stood the rocket about to be launched. The encompassing walls were multileveled, rising in stages to enable servicing the rocketship, its payload, [&] a command module.
Arriving rocketships landed on the vast stone platform adjoining the launch tower, then would be put in place–as had been done with the colossal stone blocks–within the massive stone enclosure, ready for launching. “Baalbek was incorporated into the post-Diluvian Landing Corridor of the Anunnaki when they planned a spaceport in the Sinai to replace the one in Mesopotamia wiped out by the Deluge. They ran a line from the peaks of Ararat through Baalbek and extended it to Giza, where they built the pyramids. They placed the Great Pyramid and the anchor in Sinai so that, in the end, the Landing Corridor was delineated equidistant from Baalbek.”
Baalbek included “stone blocks of incredible size, precisely cut and placed, including three colossal stone blocks that are the largest in the world, the Trilithon. The stone blocks that make up the Trilithon weigh more than 1,100 tons each and are placed upon older immense stone blocks–over sixty feet long, with sides of twelve to fourteen feet, cut to have a slanting face–that weigh 500 tons each. There is even now no man-made machine, no crane, vehicle or mechanism that can lift such a weight of 1,000 to 1,200 tons–to say nothing of carrying such an immense object over valley and mountainside and placing each slab in its precise position, many feet above the ground. There are no traces of any roadway, causeway, ramp or other earthworks that could suggest hauling these megaliths from their quarry, several miles away.”
The stone blocks that comprise the platform are “so tightly put together that no one has been able to penetrate it and study the chambers, tunnels, caverns and substructures hidden beneath,” though Arabs did penetrate a “460-feet long tunnel at the southeast corner of the platform.” They proceeded through “a long vaulted passage like a railway tunnel” under the great platform, in total darkness broken by green lights from puzzling “laced windows.”
Nibirans “not only lifted and placed such colossal stone blocks but also carried them from a quarry several miles away. The quarry has been located, and in it one of those colossal stone blocks, never completed, still lies partly attached to the native rock; its size exceeds the Trilithon blocks.” [Stairway: 168 – 176; Expeditions: 166 – 179]
GOBEKLI TEPE, Turkey, an ancient (9000 BCE) amphitheater, focuses on the star Deneb in the constellation The Swan (or The Vulture). Gobekli sits on a ridge above the Plain of Harran–turf of Moon God Nannar (aka Sin-El-Allah after Noah’s flood).
Thoth organized goldmining and refining at Teotihuacan in southern Mexico. Some of the beardless Indians from northern South America–descendants of Cain, whom Thoth had genetically marked with beardlessness–worked in Mexico for Thoth and his staff. Collectively, the Nibirans and their Earthlings in Yucatan, Guatemala and El Salvador created the advanced Mayan civilization. At Palenque, the focus is the twelve-foot king buried under a pyramid:
- Palenque Spaceman, Lord Pacal, in spaceship; model made on the basis of carvings atop Lord Pacal’s sarcophagus [from Ancient Aliens, Season 4, Disk 1, Tsoukalos narration]
In Mayan statues, we see Kukulkan deplane from the front gate of his aircraft (the serpent’s mouth, in Mayan depiction). When he left Yucatan, the night, as Jaguar, swallowed both Kukulkan and his fire-spewing craft.
Mayan cities all had large playing courts with frescos of bearded Nibirans, their Tree of Life, and their Sumerian overseers. (At Chichen Itza, Maya beheaded losing captains.)
Kukulkan showed the Maya phonetic and character writing and how to make ink and paper books. They made huge sculptures, jade art, weapons, flamethrowers and tools fitted with mirrors. They carried lights.
Kukulkan taught the Maya how to use the ideas of place value and zero for astronomy, and then how to build stepped pyramids angled to measure planetary and solar events. Then they’d know when Nibiru neared Earth. For, when Nibiru neared, the gods would return.
Kukulkan gave the Maya calendars. The calendars showed how Venus circles the sun every 6,000 years. He taught the 26,000-year Precession of the Equinoxes. The accuracy of Mayan predictions supports the case for Nibirans’ presence and influence on Earth.
- Bolam Yakte (Enki/Ptah)
- Kukulkan Emerging from Flying Serpent craft
- The night (jaguar) eats Kukulkan
In South America, Ningishzidda and Adad directed the construction of Mochica, Chan-Chan, Cuzco, Machu Picchu, Chavin, Ollantaytambo and “the Baalbek of the New World”–a metallurgical, temple, and observatory complex at Tiahuanaco. 2,000 years before the Great Flood of 11,000 BCE, Adad-Viracocha and his brother Ninurta–Champion of Enlil, Commander of the Nibiran Goldmining Expedition to Earth–created a huge landing platform and initial base camp at Pumapunku (in Bolivia), 12,000 feet high, half a mile from the later city of Tiahuanaco, on the south shore of what was then a much larger Lake Titicaca.
Sitchin reports harbor facilities at Pumapunku, with four giant hollowed-out stone structures inlaid with gold plates held together with gold nails. Adad, Ninurta, and later Ningishzidda built Tiahuanaco with help from Earthlings from Mesopotamia, Africa and Pakistan–as well as Greys. Adad kept his aircraft in a base, Huanaco, beneath Lake Titicaca. From Tiahuanaco, Adad’s pilots ran nitrate mining on the Nazca Plains, 400 miles south, and ferried the nitrates out.
On Mars, Nibirans manned a spaceport and lasered a monument to one of their kings.
Datum 16: SOIL ANALYSIS DOCUMENTS THE 2024 BCE NUKING OF SINAI
Nuking, which soil analysis documents, changed the soil of Sinai in 2024 BCE, the year Enki said Nergal and Ninurta bombed there. Sitchin [http://www.sitchin.com/evilwind.htm] wrote that in 1999, scientists determined that abrupt climate change depopulated Sumer [Geology, April 2000; Science, April 27, 2001].
DeMenocal cited an abrupt change in the area’s vegetation correlated with the appearance of rocks called tephra–“burnt-through pieces of blackened gravel-like rock” usually associated with volcanos. Tephra still cover Sinai–which lacks volcanos. Sinai’s tephra resulted when Ninurta bombed the Spaceport. The bombs left a huge black scar on the Sinai plain (where the shuttlecraft runway and launch platform had been), so large you can see it only from satellites. You can find millions of black-blasted rocks north-northeast of the scar, in an area where all the surrounding rocks are of other colors. [See photos, ZS, Wars: 332 – 334]
Datum 17: ETS TAUGHT SUMERIANS THE 2,160-YEAR PRECESSION OF THE EQUINOXES
The detailed, multigenerational observation Sumerians needed to predict the sun’s apparent shift backward along the zodiac requires 2,160 years for each house (30 degrees) of shift. [Lloyd: 2 – 36] The sun’s apparent movement from north to south and back–due to the 23.5-degree tilt, and the cause of the seasons–results from the fact that Earth’s axis tilts relative to the plane of its orbit around the sun; it is associated with the equinoxes, when the sun passes over the Earth’s equator (once going and once coming back), times when daytime and nighttime are equal.
“Because of the wobble, the Earth’s orbit around the Sun is retarded each year; the retardation amounts to 1 degree in seventy-two years. Observance of sunrise on the spring equinox after 2,160 years is in the preceding [not the following] zodiacal house. In zodiacal time, matching the clockwise direction of Nibiru [all the other planets orbit counterclockwise], on Earth the Past is the Future.” Enki devised the division of the ecliptic (the plane of planetary orbits around the sun) into twelve, to conform to the twelve-member Solar System composition he invented. That gave 30° per house, so the retardation through each house added up to 2,160 years, and the complete Precessional Cycle or “Great Year” added up to 25,920 years. Relating 2,160 to the 3,600-year cycle of Nibiru’s return to the inner Solar System, Enki gave us the sexagesimal system of mathematics, which multiplied 6 by 10 by 6 by 10. [Sitchin, Time: 20 – 26]
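The chain of figures checks out exactly as quoted; a quick verification using only the numbers in the passage:
```python
# Verify the precessional arithmetic quoted above.
years_per_degree = 72              # 1 degree of retardation per 72 years
degrees_per_house = 360 // 12      # Enki's twelve-house division: 30 degrees each
years_per_house = years_per_degree * degrees_per_house
great_year = years_per_house * 12

print(years_per_house)   # 2160 years per zodiacal house
print(great_year)        # 25920-year Great Year
print(6 * 10 * 6 * 10)   # 3600, the sexagesimal tie-in to Nibiru's orbit
```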
The Precessional movement is so slow that ordinary people could not possibly notice it in single lifetimes. Sumerians said Nibirans gave them the 2,160-year formula so they could determine which of the twelve zodiacal houses–each an indication of which Nibiran ruled–was ascendant.
Datum 18: EVIDENCE OF ANCIENTS’ LIGHTING–CHEM-LIGHTS: GALVANIC CELLS FROM ANCIENT IRAQ & EGYPT
Image from Ancient Aliens, Season 5
Ancient Mesopotamian galvanic 4-volt cell lamps sat in the Baghdad Museum before its looting. The Baghdad bulb is stand-alone. The Dendera bulb below, from Hathor’s Temple in Egypt, is plugged into a power source.
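A 4-volt lamp implies several cells wired in series; a rough sketch, assuming Baghdad-style copper/iron cells in a vinegar-like electrolyte that each deliver about half a volt (the 0.5 V per cell is my assumption from published replica tests, not from the text):
```python
import math

# How many Baghdad-style cells in series would reach the claimed 4 volts?
volts_per_cell = 0.5  # assumption: typical replica output, Cu/Fe in vinegar
target_volts = 4.0    # the output the text attributes to the lamps

print(math.ceil(target_volts / volts_per_cell))  # 8 cells in series
```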
Datum 19: EVIDENCE OF ANCIENTS’ MASTERY OF ELECTRICITY
They mastered electricity. Our ancestors’ art, nuclear residues, and a multitude of references in the Enuma Elish and other ancient documents all show the Anunnaki collected, amplified and directed energy, and could use that energy as electricity wherever they chose.
Christopher Dunn and John Cadman [Ancient Aliens, Season 5] have shown how the Great Pyramid at Giza, using water from the Nile, could easily produce vast, directed energy aimed toward the belt of the constellation Orion. Cadman traced Nile water pumped through a channel to a pool under the pyramid. The pump pushed pressure upward and vibrated a low-pressure rarefaction wave toward the so-called King’s Chamber. Water traces and chips from the pressure wave let Dunn sequence the way the pyramid gave the Ancients their working power, and perhaps even a portal to other dimensions–but that’s another story.
Dunn shows how, in the salt-lined Queen’s Chamber of the Great Pyramid, hydrochloric acid and hydrated zinc (they left residues he recorded) flowed together down shafts. The zinc ran down a channel, the Northern Shaft; the dilute hydrochloric acid, down the Southern Shaft. From the chemical reaction–which Dunn duplicated in the lab–hydrogen gas rose inside the pyramid. The hydrogen gas, lighter than air, streamed through all the pyramid’s upper chambers. It went from the Queen’s Chamber to the King’s Chamber. Then the vibrations of the Earth itself and of the underground pool energized the hydrogen atoms into a microwave energy beam which shot up 27 vertical shafts (shown below). This system kept sending a microwave that sustained itself for hundreds of years, a microwave strong enough to power their tools, cities and spacecraft. “The evidence shows use of hydrogen in the King’s Chamber in a shaft, 8.4 by 4.8 centimeters.”
This channel guided the MASER–Microwave Amplification by Stimulated Emission of Radiation–wave through crystal power-amplification devices.
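The reaction Dunn invokes is ordinary acid-on-zinc hydrogen generation: Zn + 2HCl → ZnCl2 + H2. A minimal stoichiometry sketch of the gas yield (the 1 kg zinc charge is an arbitrary assumption for illustration):
```python
# Hydrogen yield from zinc + hydrochloric acid: Zn + 2HCl -> ZnCl2 + H2
MOLAR_MASS_ZN = 65.38    # g/mol
MOLAR_VOLUME_STP = 22.4  # liters per mole of gas at STP

zinc_grams = 1000.0                    # assumption: 1 kg of zinc, for illustration
moles_h2 = zinc_grams / MOLAR_MASS_ZN  # 1 mol of H2 per mol of Zn
print(f"{moles_h2 * MOLAR_VOLUME_STP:.0f} liters of H2 at STP")  # ~343 L
```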
M. Tellinger has documented corroborative evidence of ET-run energy from an Earth-grid extending over most of the south of Africa. Enki and company mastered sonar technology and lifter technology with monoatomic gold powder. Evidence accumulates that the Anunnaki could change the form of energy to suit their purposes.
Datum 20: ANCIENT STRUCTURES ALL ORIENT TO ORION
Ancient Tomorrow forwarded the following on Facebook:
Structures at Carnac, Stonehenge, Tiahuanaco, Teotihuacan, Angkor Wat and Draco locales point to Orion’s Belt in 10,500 BCE: “Megalithic sites throughout the world, all constructed by different cultures, all varying in construction and complexity, share the simple fact that their buildings correlate with a mirrored constellation, and all on the winter solstice of 10,500 BCE. Such structures include Carnac, Stonehenge, Teotihuacan, Tiwanaku, Easter Island, the magnificent Angkor Wat in Cambodia, and Draco.
“All of these sites align on a globe in a grid pattern which correlates with other sacred and holy sites. Dr. Graham Hancock believes this measured the circumference of the earth. The perimeter of the base of the Great Pyramid of Giza, compared to its height, shows the same proportion as the circumference of the earth to its radius. This is similar to Teotihuacan, where the bases of its pyramids relate to their heights by the same ratio of the Earth’s circumference to its radius. You find the same features, separated by six millennia and the Atlantic Ocean, at Teotihuacan and at Xian, China, whose pyramids align perfectly with, and mirror, Orion’s Belt, circa 10,500 BCE.” [http://onlythechanges.blogspot.com/2012/11/10500-bce-lost-epic-of-man.html?m=1]
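Hancock’s proportion claim is easy to test: base perimeter over height should equal circumference over radius, i.e. 2π. A quick check, assuming the standard survey figures for Giza (the 230.4 m base and 146.6 m original height are my assumptions, not from the post):
```python
import math

# Base perimeter / height of the Great Pyramid vs. 2*pi
# (circumference / radius of any circle, the Earth included).
base_side = 230.4  # meters, assumption: standard survey figure
height = 146.6     # meters, assumption: original height with capstone

print(4 * base_side / height)  # ~6.287
print(2 * math.pi)             # ~6.283
```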
Drill and saw cuts from ancient Egypt
Amphitheaters like Gobekli abound in this part of Anatolia. The builders made them ovoid-shaped, with stone-and-mortar benches that face two central monolith-pillars. At Gobekli, the amphitheater surrounds central monoliths standing 18 feet high. They weigh 15 – 20 tons apiece and face each other across a space where a priest communed with the gods.
A century after the builders made a theater, it no longer (due to the precession of the equinoxes) aligned with Deneb. So they buried the theater and, along the same ridge, built a new one atop or near the old, out-of-date one. They built ever-smaller theaters each century, as the Earth moved off its alignment with Deneb.
Central pillars in the middle of these theaters feature anthropomorphic statues. The statues’ arms wrap around them like those on the Rapa Nui statues that the Anunnaki and the Capensis ETs erected with flood survivors. The builders of both Gobekli and Rapa Nui carved snakes on the statues.
Swidarians, Collins says–flood survivors from the North Mountains: Poland, the Carpathians and Belarus–came down to Turkey and got local hunters and gatherers to work for them and build these structures. Swidarians were tall hybrids with Capensis as well as Neanderthals; some of the Swidarians had ELONGATED HEADS and left traces of their DNA in human populations.
Collins considers Deneb–the brightest star in The Swan and the star to which the amphitheaters orient–the home in the stars to which the old megalithic structures in Turkey, Ireland, the UK and the Baltics point. Deneb points to “the exact point where the Milky Way splits into two to form the Great Rift–the Cygnus Rift.”
MALTA (Part of Sicily when these structures were built.)
The Hypogeum, a temple cut into and beneath a limestone hill near Malta’s capital Valletta, dates back either to before Noah’s flood of 11,000 BCE or to before Thera erupted and killed Minoan hegemony in the eastern Mediterranean. One large room in the Hypogeum, the Oracle Room, amplified sound a hundred times. People curled up in small rounded cubicles carved into the walls to listen.
The part of the Hypogeum Malta lets us see holds 30 rooms linked by passages, stairs and halls. The builders cemented the walls with a concrete of compacted rock dust and water. In the rooms, they smoothed the walls with imported flint instruments. [Coppens, 2012]
At Enki’s palace–Great Zimbabwe–we see a 12-fathom-high tower and the lab he shared with Ninmah and Ningishzidda.
There, above the gold mines of the inland plains between the Limpopo and Zambezi rivers, the genetics team first adapted their Nibiran genome to Earth to create Ti-Amat, the first adapted Homo Sapiens woman, and Adamu, the first Homo Sapiens man.
The bricks were made from a mixture of granitic sand and clay. Great Zimbabwe’s 11-meter-high outer wall extends 250 meters. Enki’s palace sits among thousands of miles of twelve-foot-wide lanes lined with granite boulders. Enki engineered sonar systems to transport water, gold diggings and machines over these lanes.
Ningishzidda designed the 5,955,000-ton Great Pyramid at Giza with eight concave faces. With sonar technology and the help of monoatomic gold to lighten the stone blocks, Ningishzidda and his assistants stacked 2,300,000 stone blocks–averaging some 2.5 tons apiece–into the pyramid. The pyramid’s outer mantle featured 144,000 polished, flat casing stones, 100 inches thick and 15 tons each. The pyramid “covered 592,000 square feet in area.”
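The tonnage and block count quoted here imply an average block weight; a one-line division using only the figures in the text:
```python
# Average block mass implied by the quoted Great Pyramid figures.
total_tons = 5_955_000
block_count = 2_300_000
print(f"{total_tons / block_count:.2f} tons per block, on average")  # ~2.59
```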
“Granite used in the Great Pyramid gives off a significant electromagnetic charge, contains 25% quartz and has the ability to enhance sound. The magnetite in the granite is a natural magnet that creates a magnetic field around it. The Giza pyramids’ core consists of red granite, one of the most conductive rocks on earth due to its high content of quartz, iron, and magnetite. It exudes natural radioactivity. This core is encased in a rough limestone with a high content of magnesium that acts as an electrical conductor. The limestone was dressed on the outside with Tura limestone–finer-grained and highly polished–and because it contains only minute traces of magnesium, unlike the inner limestone, it serves as an insulator, keeping the energy inside the temple. This energy seeks to escape to the top of the structure, so the tips of the pyramids were capped with a stone of diorite and covered with electrum, a two-thirds mix of gold and silver, making it an excellent conductor.” [Silva, 2012b: 203, 204]
The Great Pyramid aligned with Alpha Draconis, the Pole Star at the time. The pyramid aligns also with the center of the land mass of the earth: the east/west parallel that crosses the most land and the north/south meridian that crosses the most land intersect here. “The Great Pyramid divides the oceans and the continents into two equal parts.” It “is a giant sundial. The shadows thrown from mid-October to early March indicate the seasons and the length of the year. The length of the stone slabs that surround the Great Pyramid corresponds perfectly to the length of the shadow on one day.” [Von Daniken: 236]
Casing stones of polished limestone covered the outside and reflected sunlight that people could see, without magnification, from Israel–and even from the moon.
The Great Pyramid’s entrance had a 20-ton swivel door, nearly invisible when closed and lacking any grasp from the outside. The pyramid’s cornerstones have ball-and-socket construction that adjusts to heat expansion and earthquakes. The temperature inside stayed at 68 degrees Fahrenheit.
The relationship between Pi (π) and Phi (φ) is expressed in the fundamental proportions of the Great Pyramid. [Ancient, 2014]
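Both constants can be checked against the same assumed survey figures used above; a short sketch:
```python
import math

base_side = 230.4  # meters, assumption: standard survey figure
height = 146.6     # meters, assumption: original height

# Pi: half the base perimeter, divided by the height, approximates pi.
print(2 * base_side / height)      # ~3.143 vs. math.pi ~3.1416

# Phi: the slant height (apothem) over half the base approximates phi.
apothem = math.hypot(height, base_side / 2)
print(apothem / (base_side / 2))   # ~1.618
print((1 + math.sqrt(5)) / 2)      # ~1.618, the golden ratio
```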
From 3300 to 1900 BCE, 1,000 cities in the Indus Valley extended over an area twice the size of France. The people of this civilization kept cattle, sheep and goats. Each village had a crafts area, markets and jewelers’ stalls. Inanna’s Indus Valley cities–Harappa, Mohenjo Daro and Dholavira, with their agricultural village hinterlands–stretched over a million square kilometers, 3,000 kilometers along the Indus River. In Mohenjo Daro, 40,000 people concentrated in one square kilometer.
Indus Valley Civilization lacked caches of weapons and had instead a multitude of toys, musical instruments, metal tools, scales, pottery, jewelry, cloth, wheeled carts, statuary and caches of grain for commerce.
Boats from Dholavira traded all the way to the Arabian Peninsula. Indus Valley civilization smelted and traded copper, bronze, lead and tin.
Roads that intersected at right angles connected residential blocks in Mohenjo Daro. Gutters and rubbish bins lined the roads. The main street kept a nine-meter width. Atop the highest hill, where residents retreated from periodic Indus flooding, Mohenjo featured a huge public bath. Mohenjo had eighty public toilets and a sewage system that serviced the entire city. In the residential area, every house had its own tile bathtub and its own well.
Along the Indus, Inanna’s people grew barley, wheat, melons, peas, sesame and cotton, and raised cattle, buffalo, sheep and goats.
Harappa featured identical two-story baked-brick houses with flat roofs, each with windows that overlooked a courtyard. Outside walls lacked windows. Each home had its own private drinking well and its own private bathroom. Clay pipes ran from the bathrooms to sewers under the roads. The sewers drained into streams.
Sixteen-meter-high brick walls surrounded Dholavira, a commercial city of 48 acres with a population of 20,000. Dholavira contained grain-storage bins and reservoirs with flood-control dams. The largest reservoir was 7 meters deep and 79 meters long.
The dams kept water around the city for grape (they made raisins), barley, wheat, pea, cotton and sesame crops; water flowed downhill from the highest reservoir to lower ones. Rainwater channeled down to a city-wide collector reservoir. Dikes diverted an ancient river–the Ghaggar–to water the area between the Indus and the Ganges.
Indus Valley cities featured reservoirs and multi-storied fired-brick buildings laid out along a grid of wide brick-paved streets with run-off gutters.
Jerusalem: SOLOMON’S TEMPLE
To ready the Jerusalem site for Enlil’s temple, Israel’s King David–whom Enlil forbade to build the temple himself–prepared it for his successor, Solomon. David had thirty-three hundred foremen guide seventy thousand carriers and eighty thousand stonecutters in the hills as they cut large blocks of quality stone for the Temple’s foundation.
In 957 BCE Solomon built Enlil-Yahweh’s first permanent temple on huge stones–too heavy, the argument goes, to move and fit in place without Anunnaki technology, so that is how the stones must have been moved.
The Temple’s east-west axis aligned with the equinoxes: Solomon set the temple so that the sun at dawn entered the Tabernacle at the spring and autumn equinoxes.
The temple featured a 100-by-200-foot main hall and a smaller room for Moses’ Ark. Solomon put the Ark on the rock where Abraham had readied himself to kill his son Isaac to prove his loyalty to Enlil. The new temple replaced the portable one Moses made in the desert, as well as the local sanctuaries and altars in the hills.
The Temple complex had a large basin (called the “Brazen Sea”), 10 cubits wide brim to brim, 5 cubits deep, with a circumference of 30 cubits around the brim, resting on the backs of twelve oxen. The basin held 3,000 baths for the purification by immersion of the priests.
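The basin’s quoted dimensions imply a value of π, a classic observation about this verse; the division uses only the numbers in the text:
```python
# Implied pi from the Brazen Sea: 30 cubits around, 10 cubits across.
circumference_cubits = 30
diameter_cubits = 10
print(circumference_cubits / diameter_cubits)  # 3.0, vs. pi ~3.1416
```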
The Temple Palace, 40 cubits long, had walls lined with cedar carved with figures of cherubim, palm-trees and open flowers, all overlaid with gold. Fir-wood overlaid with gold covered the Temple floor. Olive-wood doorposts held doors, also of olive, boasting carved cherubim, palm-trees and flowers, all overlaid with gold.
Egypt’s Pharaoh Sheshonk I sacked the Temple a few decades later.
In 931 BCE, when Solomon died, Abraham’s descendants split their turf into the kingdoms of Judea in the south and Israel in the north.
In 835 BCE Jehoash, King of Judah, renovated the Temple, but in 700 BCE Assyrian King Sennacherib stripped it again. In 586 BCE, Nebuchadnezzar, Marduk’s Babylonian king, sacked Jerusalem and destroyed the Temple.
In 539 BCE, Cyrus of Persia, whom Marduk welcomed, conquered Babylon and returned Nebuchadnezzar’s hostages to Jerusalem. Cyrus built the Second Temple from 538 to 515 BCE.
This second temple narrowly avoided destruction in 332 BCE when the Jews refused to recognize Macedonian King Alexander as a god. Ptolemies ruled Judea and the Temple from Egypt after Alexander died.
Seleucid King Antiochus III defeated Egypt in 198 BCE. He prompted a short-lived rebellion in 187 BCE when he introduced Marduk-Zeus and the Greek pantheon into the temple. Antiochus IV Epiphanes again pushed the Greek gods on the Temple; when the Jews rebelled and he crushed them, he forbade circumcision, which marked Jews as followers of Enlil. Antiochus banned the Jewish Sabbath, put a statue of Zeus in the temple and had Greek priests sacrifice pigs there. When a Greek ordered the Jewish priest Mattathias to perform a Hellenic sacrifice, Mattathias killed him.
In 167 BCE the Jews rose up behind Mattathias and his five sons to fight and win their freedom from Seleucid authority. Mattathias’ son Judas Maccabeus re-dedicated the temple in 165 BCE and the Jews celebrate this event to this day as a major part of the festival of Hanukkah.
During the Roman era, Pompey entered (and desecrated) the Holy of Holies in 63 BCE, but left the Temple intact.
In 54 BCE, Crassus looted the Temple treasury. The Jews revolted again, but the Romans subdued them again in 43 BCE.
In the last revolt of the Jews against the Romans, 132 – 135 CE, Simon bar Kokhba and Rabbi Akiva led a failed uprising, after which the Romans banned Jews from Jerusalem.
Pyramids flattened on top for aircraft landings abound in northern and central China (though China’s bosses hide this); they rival Egypt’s and Central America’s pyramids for age–12,000 years–and size.
Chinese emperors claimed descent from these “skymen-godpeople” who landed in “flying dragons” from another planet. The pyramids show astronomic alignment that dates them to the times the Anunnaki girded Earth with these structures. Records of that time speak of the emperors descending from heaven in flying dragons.
The tallest pyramid reported rises 300 meters high. Its sides measure 500 meters long–roughly twice the linear dimensions of the Great Pyramid at Giza and about ten times its volume. The Chinese and Giza pyramids both align their baselines north-south and west-east. Stones once covered the Chinese pyramids, but now only a few lie at the bottom; both sets of pyramids have water channels from nearby rivers.
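The volume comparison is worth running: with the dimensions given here and the standard Giza survey figures (my assumption, not from the text), the ratio comes out near ten:
```python
# Volume ratio: reported Chinese pyramid vs. the Great Pyramid.
def pyramid_volume(base_side_m: float, height_m: float) -> float:
    """V = (1/3) * base_area * height for a square-based pyramid."""
    return base_side_m ** 2 * height_m / 3

chinese = pyramid_volume(500, 300)   # dimensions as reported in the text
giza = pyramid_volume(230.4, 146.6)  # assumption: standard survey figures

print(f"{chinese / giza:.1f}x the volume of Giza")  # ~9.6x
```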
Pyramids of Xian, China and Giza, Egypt mirror Orion in the sky.
In addition to the Anunnaki, the Chinese relate, ETs they call the “Dropa”–stranded when an exploration expedition crashed on Earth some 12,000 years ago–settled in the Baian-Kara-Ula Mountains on the Chinese-Tibetan border.
They left stones the Chinese call “Dropa stones.”
The Dropa covered cavern walls with pictograms that show our Solar System and map routes among the stars with intersecting lines and dots. Their pictograms illustrate frail beings with round helmet-like bowls on their heads. In one of their caves a stone disc had a thin spiral groove filled with hieroglyphics. Another disc shows aliens with bulging heads and withered bodies. The disks had a hole in the middle and, when spun, emitted electrical impulses. [Coppins, 2010; Aym, 2011]
Angkor Wat’s huge city complex centers on a rectangular (5,000 feet by 4,000 feet), walled, moated temple. The temple sits atop an ancient Anunnaki landing platform, power station and metallurgy plant. The temple complex features red-sandstone-paved causeways “lined with stone figures who pull a hooded serpent.” The moat is 623 feet wide; the walls on each side, a mile long. Within the walls, paved courtyards join three galleries. A large tower caps the highest, center gallery.
Before the worldwide Deluge of 11,000 BCE, Preah Pisnokar, a part-Earthling, part-Anunnaki, built a landing platform, power station and gold-processing plant at Cambodia’s Angkor Wat. He built Angkor with advanced technology from Nibiru, his mother’s planet.
Preah and later kings reduced local sandstone to a slushy concrete. They poured the sandstone concrete into molds that formed continuous mile-long walls on the four sides of the temple complex. The walls nestle within a canal that surrounds the temple. Preah poured “magical water” onto stone, which turned the stone into a concrete that hardened in place as blocks in the structures of Angkor.
He probably used technology like that Dunn demonstrated at Egypt’s Giza pyramids. Angkor generated power as part of a worldwide grid of Anunnaki pyramidal power stations and monoliths that accessed and augmented power for aircraft, communication, lighting and computing.
Here’s how the Anunnaki, and probably Angkor’s builders, generated power: from the canal around Angkor’s walls, they piped water into a stagnant pool under the pyramids. They pressured the stagnant pools with pumping and/or sonar devices. This created a powerful vibration moving up the structures. In sealed chambers above the vibrating pools, they exploded hydrogen (from hydrochloric acid they mixed with hydrated zinc). [Dunn, C., in http://enkispeaks.com/2013/12/20/mars-and-earths-pyramid-parallel-power-stations-statues-spaceports/]
After Enki ordered Ningishzidda to cede Egypt to Marduk, Ningishzidda built a stone observatory at Stonehenge, a site he chose for it. Ningishzidda–architect of the Anunnaki, the goldminers from the planet Nibiru who came to Earth 450,000 years ago–built, and then, with Middle Eastern and Black Olmec crews from Central America, rebuilt Stonehenge II and III between 2100 and 2000 BCE on the Salisbury Plain in Britain (80 miles from London), among people who had inhabited the area since 30,000 BCE.
“Stonehenge, built initially around 2900 BCE (Stonehenge, Phase I), is the most elaborate of nine hundred ancient stone, wooden and earthen circles in the British Isles, as well as the largest and most complicated one in Europe.” Stonehenge, a planned astronomical observatory adjusted for latitude, let its builders foretell eclipses, solstices and the movements of the moon, as well as the changing apparent positions of stars.
Stonehenge features thirty upright stones, of which seventeen remain: “pairs of huge upright stone blocks, each about thirteen feet high, connected at the top by a massive lintel stone to form free-standing Trilithons erected in a semicircle, surrounded in turn by a massive circle of similar giant stones connected at the top by lintels carved to form a continuous ring around the paired uprights. Inside this massive stone ring, smaller stones (bluestones)–brought 250 miles over land and two miles down the Avon River from southwestern Wales, of which 29 are still there–form the Bluestone Circle outside the Trilithons and a bluestone semicircle. Within this second ring stood five pairs of trilithons, making up the Sarsen Horseshoe of ten massive sarsen blocks. The innermost circle consisted of nineteen bluestones that form the Bluestone Horseshoe. Within this innermost compound, on the axis of the whole Stonehenge complex, stood the Altar Stone–a sixteen-foot-long dressed block of blue-grey sandstone, half-buried under an upright and the lintel of one of the Trilithons.”
“The rings of stone are in turn centered within a large framing circle: a deep and wide ditch whose excavated soil was used to raise its banks, forming an encompassing ring, three hundred feet in diameter, around the whole Stonehenge complex. A circle of fifty-six deep pits (the Aubrey Holes) surrounds the inner bank of the ditch. Two stones on opposite sides of the ditch’s inner embankment and, further down the line, two circular mounds with holes that once held stones akin to the first two–the four together called the Station Stones–outline, when connected by lines, a perfect rectangle.”
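One reason the fifty-six Aubrey Holes read as an eclipse predictor is Fred Hoyle’s proposal: advance a marker three holes per year and the ring tracks the 18.61-year regression of the moon’s nodes, where eclipses occur. A sketch of Hoyle’s arithmetic (his scheme, not anything stated in this post):
```python
# Hoyle's reading of the 56 Aubrey Holes as an eclipse computer:
# a marker moved 3 holes per year laps the ring once per nodal cycle.
holes = 56
holes_per_year = 3

print(holes / holes_per_year)  # ~18.67 years per lap
print(18.61)                   # actual lunar nodal regression period, years
```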
The embankment had a wide gap that opened into the concentric rings of stones, holes and earthworks. The opening in the ditch, oriented northeast, leads to a causeway (the Avenue). Two parallel embankment ditches outline this avenue, leaving a passage thirty feet wide for a third of a mile, where it branches: one branch runs northward toward the Cursus, an elongated earthwork at an angle to the Avenue; the other curves toward the Avon River.
A line drawn through the center of the Avenue passes through the center of the circles and holes to form the structure’s axis, along which marker stones, among them the Heel Stone, were placed.
Stonehenge began with a ditch and a berm: an earthen circle 1,050 feet in circumference, twelve feet wide and six feet deep at its bottom, whose diggings raised two banks; within this outer ring are 56 pits. Ningishzidda left the northeast part of the dirt ring undug as an entrance to the middle of the circle. Two (now missing) gateway stones flanked the entrance. The entrance stones gave the Heel Stone–a massive boulder set four feet underground and rising sixteen feet above it at a 24-degree angle–points on which to sight lines down the Avenue, using movable pegs set into holes on the entrance stones. Ningishzidda put four rounded Station Stones within the circle to form a perfect rectangle. That was the extent of Stonehenge I–the earthen ring, an entranceway axis, seven stones, and wooden pegs.
About 2100 BCE, Ningishzidda directed the Wessex people to add four-ton bluestones to Stonehenge, now called Stonehenge II. A double Bluestone Circle thus surrounded Stonehenge II. The builders shifted the Heel Stone and widened and realigned the Avenue to keep up with the changes the Earth’s shifting tilt made in the sunrise point. Ningishzidda and the Wessex moved the “Altar Stone” when the remodeling began.
Stonehenge III: Around 2000 BCE, Ningishzidda re-erected the Heel Stone and dug holes for new sightings. He completely dismantled the Bluestone Circle of Stonehenge II. With Anunnaki sonar technology, he brought 77 fifty-ton sarsen stones from Marlborough Downs, forty miles away. He incorporated nineteen of the bluestones in a new inner oval topped by lintel stones and set the other bluestones ready to be inserted in holes dug for two new circles (never completed). He replaced the old entrance stones with two huge new ones. [Time: 39 – 180]
NEWGRANGE (County Meath): One of Ireland’s many Anunnaki observatories, Newgrange is a large circular mound of stone with a long hall and inner rooms. Anunnaki flood survivors built Newgrange and other stone observatories in Ireland to track the moon, the sun, and the precession of the equinoxes in relation to the constellation Cygnus (aka The Swan or the Northern Cross), where they could see Anunnaki craft and Nibiru nearing Earth.
Grass grows atop the layers of earth and stones that make the Newgrange mound. The mound measures 249 feet across and 39 feet high. Inside, a hallway of large stone slabs engraved with star maps stretches 60 feet to three small chambers off a larger central chamber with a high corbelled vault roof.
Each of the smaller chambers has a large flat basin stone. Once a year, at the winter solstice, the rising sun shines directly along the long passage and lights up the inner chamber: for seventeen minutes, sunlight enters the passage through a specially contrived opening, known as the roofbox, directly above the main entrance.
Sunlight focuses on a triple spiral star map on the front wall of the chamber.
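The solstice alignment can be sanity-checked from latitude alone, using the standard sunrise-azimuth relation cos(A) = sin(declination)/cos(latitude) and ignoring refraction and horizon altitude; a rough sketch (Newgrange’s ~53.7° N latitude is my assumption):
```python
import math

# Approximate azimuth of winter-solstice sunrise at Newgrange's latitude,
# from cos(A) = sin(declination) / cos(latitude); flat horizon assumed.
latitude = math.radians(53.7)       # assumption: Newgrange, Ireland
declination = math.radians(-23.44)  # the sun at winter solstice

azimuth = math.degrees(math.acos(math.sin(declination) / math.cos(latitude)))
print(f"{azimuth:.0f} degrees east of north")  # ~132, well south of due east
```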
13,000 years ago Earth’s climate deteriorated in the run-up to the perigee of 10,500 years ago that caused Noah’s flood. Enlil, Commander of the goldmining expedition from the planet Nibiru to Earth (the Anunnaki), ordered a second interplanetary spaceport atop the Andes, where his son Adad had built a landing platform around 15,000 BCE [Lost Realms: 222]. Enlil wanted the second rocket base in case Marduk and his ex-astronaut allies (the Igigi) overran the Sinai rocketport that Enlil’s grandson Utu ran.
Enlil, under orders from his father, King Anu, back on Nibiru, expected to return to Nibiru with the Nibirans he commanded as well as enough gold to powder into Nibiru’s atmosphere to protect it from destructive dissipation.
Enlil warned Anu that Marduk, the son of Enlil’s rival, Chief Scientist Enki, had created alliances not only with the ex-astronauts, but also with the hybrid Erectus-Nibiran Earthling miners and slaves Enki had made from the Nibiran genome. Marduk and his allies, Enlil and Anu realized, could push the claim of Marduk to rule Nibiru by dint of the treaty Anu had sworn with Marduk’s mother’s father, Alalu, Anu’s predecessor on the Nibiran throne.
Enlil sent his youngest son, Adad-Viracocha, and Adad’s older half-brother, General Ninurta, to the Andes to scout a potential second spaceport. They found their ideal site at Lake Titicaca, Earth’s highest great lake (about 12,500 feet above sea level)–perfect for boats–20 by 44 miles in extent, 100 to 1,000 feet deep and dotted with over 41 islands.
Waters running from the lake gave the Anunnaki placer gold and cassiterite tin and bronze for their European and Middle Eastern centers. The Desaguadero river flows from the southwest corner of Lake Titicaca into the satellite lake, Lake Poopo, 260 miles to the south; “there is copper and silver all the way to the Pacific Coast, where Bolivia meets Chile.” [Lost Realms: 242 – 243]
A moat surrounded Pumapunku and connected to a canal system that ran to Lake Titicaca, fifteen miles away through level ground. Upheavals–probably from the same disturbances that destroyed the Nile area in 1450 BCE (about which Enlil forewarned Moses), caused by the nearing of Nibiru or its Lagrange points–destroyed the huge landing platform and scattered its H-shaped, 400-ton, twelve-by-ten-foot, two-foot-thick red sandstone blocks, which Adad had quarried ten miles from Pumapunku.
Adad’s workers survived Noah’s flood on Titicaca and Coati Islands in the sheltered southern portion of the lake.
Adad and his Sumerian foremen had them build, a quarter-mile from Pumapunku, Tiahuanacu, aka “Tin City” [Anaku = “tin granted by the Anunnaki”], a two-square-mile metallurgical, temple, and observatory complex on the shore, powered by electricity and with subterranean chambers. Tin supplies had run out in Europe after 2600 BCE; then Adad’s Cassite [Kossean] Earthlings, related to the Hittites and Hurrians, flowed vast amounts of tin from South America to the Near East. [Lost Realms: 243 – 245]
The Anunnaki employed “portable power plants” and “rotating magnetic fields” that gave Tiahuanacu AC power. They “set up hydroelectric or wave stations to generate a large amount of power to send via microwaves to satellites and then redirect it to the remote parts of the earth as a form of usable power.” They sent cargoes of precious metals and dried or honey-packed psychedelic mushrooms around the world. [Childress, 2012: 151]
The Anunnaki smelted, at high temperature, alloys including platinum, and extracted mercury from the mineral cinnabar. They used the mercury to extract nearby silver. They also built an underwater city, Huanacu, some 80 feet down, hewn into the northern side of Titicaca Island. The “builders planned Tiahuanacu in advance, with diverted rivers, water reservoirs on the tops of pyramids (on or in which water washed ores) and massive stone [refining] structures with gigantic solid-stone doors. Pumapunku, the original New World El Dorado-Ophir city (the one to which Israel’s King Solomon flew over the Pacific from the Java Sea), featured gigantic walls covered in sheets of gold, golden masks, sun disks, gold-woven tapestries and drill holes to attach sheets of gold and other gold items.”
Nearby, Bolivians recovered the Fuente Magna Bowl, which bears ancient Sumerian cuneiform writing of circa 3000 BCE alongside the Aymara language that the descendants of Ka-in developed from Proto-Elamite or Akkadian. [Childress, 2012: 86, 109, 129 – 131, 146, 150]
Tiahuanacu set off Pumapunku with a grand gate, the “Gate of the Sun,” originally a doorway with a solid granite door sized for a nine-foot-tall person or a person with an elaborate headdress.
The door led to a smashed 400-by-450-foot rectangular astronomical observatory called the “Kalasasaya,” which a moat had surrounded. The building, like a similar one at Pumapunku, had been destroyed, and the door frame was moved to form an arch leading to Pumapunku.
The Nibirans cut and shaped the gate, as it stood in Tiahuanacu, from a single hundred-ton, 10-by-20-foot stone block that features a carving of Adad with golden tears. The tears, which represent the molten gold, tin, iron, platinum and mercury Tiahuanacu refined, run down his cheeks. The figure wears an elaborate headdress and holds Adad’s symbol, the forked lightning, and the zodiac of the Sumerian Anunnaki. Reliefs of 30 “bird men” on Adad’s right–probably Nibiran astronauts–run toward him; one of them holds the trumpet-like object the Anunnaki used to move large stones.
After the building that contained the gate broke apart, the Anunnaki reconstructed it and incorporated it as an arch to Pumapunku, by then a pilgrimage site for Andean “Indians.” Next to the gate stands a wall into which the builders sculpted heads of the various Earthling and ET types that visited the site. [Childress, 2012: 88; Lost Realms: 210, 216 – 217]
Relief on Pumapunku wall (left) looks like a contemporary Grey (right).
Around 3800 BCE, Nibiru’s King Anu and Queen Antu flew with their grandson Ninurta from Sumer to Tiahuanacu, where a gold-plated enclosure (held together with solid gold nails) he and Adad had built awaited them. They saw the Spaceport on the 200-square-mile Pampa plain below. On the runway, “Anu and Antu’s celestial chariot stood ready; with gold to the brim it was loaded.” Anu pardoned Marduk for his last offensive against Enlil; then the King and his Queen rocketed off to Mars, then on to Nibiru. Enlil ordered Adad to guard the Enlilite South American facilities from Marduk while he and the other Nibiran Earth Mission leaders returned to Sumer. [Enki: 272 – 276; Lost Realms: 255; Journeys: 206]
By 2200 BCE, as supplies of tin for bronze dwindled in Europe, Adad sent tin aplenty from Tiahuanaco back to Sumer through his Hittite-Cassite subjects in Turkey. Descendants of these Middle Easterners still dwell on Titicaca and Coati Isles. Tiahuanaco, after most of the Anunnaki returned to Nibiru, became a pilgrimage site for the growing “Indian” population. There, Adad directed the construction of Mochica, Chan-Chan, Cuzco, Machu Picchu, Chavin and Ollantaytambo, and tutored a couple he chose to create Machu Picchu. [Time: 247]
From Lake Titicaca and Tiahuanacu, the Anunnaki spread megalithic culture–landing platforms, metallurgical plants, pyramid power plants, astronomical observatories, palaces, canals, homes, statues, city walls, roads, bridges and quarries. Everywhere they settled, they left deep, extensive tunnels that moderns have not yet explored. Anunnaki culture spread north into ancient, pre-Inca Cusco, Ollantaytambo, Machu Picchu and Chavin. The Anunnaki mined copper and gathered gold and alluvial cassiterite–oxidized, water-washed tin–from the eastern coast of Lake Titicaca and the Lake Poopo area southeast of La Paz (down the Desaguadero River from Tiahuanacu).
The Anunnaki brought successive waves of descendants of Ka-in–the South American Indians–to settlements along the Peruvian coast. At Paracas Bay, Adad blasted a huge image of his metal tool, with its forked lightning, to welcome incoming boats and aircraft from the Pacific.
Ollantaytambo rests 637 km from Pumapunku, 45 miles north of Cuzco. It lies northwest along the Urubamba, on an exact 45-degree line between Titicaca Island, off Tiahuanacu, and the Equator. “A 45-degree line originating at Tiahuanacu, combined with squares and circles, embraced all the key ancient sites between Tiahuanacu, Cuzco and Ollantaytambo.” Earth’s tilt (obliquity) when the Anunnaki laid out this grid–24 degrees 08’ in 3172 BCE–places the work in the Age of Taurus [Enlil’s Age], between 4000 BCE and 2000 BCE. [Lost Realms: 199 – 205]
Ollantaytambo, a landing platform (probably for the gold refinery at Sacsahuaman, 60 miles to the southeast), rests “atop a steep mountain spur” and overlooks “an opening between the mountains” that rises where the Urubamba-Vilcanota and Patcancha rivers meet.
On the summit “megalithic structures begin with a retaining wall built of fashioned polygonal stones.
Through a gateway cut of a single stone block, one reaches a platform supported by a second retaining wall of polygonal stones of a larger size. On one side, an extension of this wall becomes an enclosure with twelve trapezoid openings–two as doorways and ten false windows. On the other side of the wall stands a massive gate to the main structures.” The Anunnaki channeled a stream through Ollantaytambo’s structures. Childress suggests Ollantaytambo featured a control building for an airport below the plaza along the river. [2012: 315]
“A row of six colossal monoliths stand on the topmost terrace. The gigantic stone blocks are from eleven to fourteen feet high, six or more feet in width and vary in thickness from three to over six feet. These 50-ton or so blocks stand joined together without mortar with long dressed stones inserted between the colossal blocks to create an even thickness. The megaliths stand as a single wall oriented southeast. One of the monoliths touts a relief of the “Stairway symbol” of Tiahuancu” that shows the connection between Earth and Sky.
Something interrupted Ollantaytambu’s construction; “stone blocks lie strewn about,” with T-cuts for poured metal clamps in them to hold the blocks together during earthquakes. The clamp-cuts duplicate those at Tiahanacu. “A levitation device made the stones leap down the road from the quarry to slides, where the stones would be pushed over the edge and retrieved at the bottom. They would again be made to jump to the river and across, then up to the plaza. During this process, certain stones were “lazy” and could not be made to jump properly and were therefore abandoned.” [Childress, 2012: 303]
The Anunnaki carved huge blocks of very hard red [porphyry] granite, which holds large-grained quartz crystal, from Kachiqhata, the mountainside opposite Ollantaytambo’s valley. There the builders hewed and shaped the blocks, then, with inverse piezoelectricity, moved them over two streams to slides on each side of the rivers, then up to Ollantaytambo, where they raised, precisely placed and fused the blocks together. Though they brought many blocks across the river, the builders left 40 or more on the river’s eastern side. [Childress, 2012: 259 – 303; Lost Realms: 199 – 205]
MACHU PICCHU (Tampu-Tocco)
On the eastern slope of the Andes, 7,585 feet above the sea, Machu covers 32,500 hectares, 4,000 feet above a bend in the Urubamba River, “which forms a horseshoe gorge half encircling the city’s perch,” 75 miles northwest of Cuzco. Machu “was situated to control access to Ollantaytambo and Tiahuanacu.”
Machu Picchu “first served as a model for Cuzco, then emulated it.” Both Machu and Cuzco “consisted of twelve wards”: royal-priestly groupings on the west and residential-functional ones, occupied by the Virgins and clan hierarchies, on the east, separated by wide terraces. Common people tilled and cultivated the mountainsides. They lived outside the city and in the surrounding countryside.
“Royal residences are built of ashlars [squared facing stones] laid in courses, finely cut and dressed.”
In the most ancient area, the Temple of Three Windows, Sacred Plaza [landing platform?] and Principal Temple display huge, precisely-cut stone blocks locked together without mortar.
“One of the stones has 32 angles. Cutting, shaping and angling of the hard granite stones was as though they were soft putty. White granite stones had to be brought from great distances, through rough terrain and rivers, down valleys and up mountains.”
“The Temple of Three Windows has only three walls” and, on its open western side, faces a 7-foot-tall pillar for “astronomical sighting purposes.”
“Winding steps lead from the northern edge of the Sacred Plaza up a hill whose top was flattened to serve as a platform for the Intihuatana, a stone cut with precision to measure the movements of the sun, determine the solstices and make the sun return, lest Earth return to the darkness that occurred before.”
“At the end of the western part of Machu Picchu,” the semicircular Torreon, built of ashlars, “creates its own sacred enclosure at the center of which there is a rock that’s been cut and shaped and incised with grooves,” like the rock in Jerusalem’s Temple Mount and Mecca’s black stone.
Beneath Machu lies a huge cave “enlarged and shaped artificially to precise geometric forms,” with masonry of white granite ashlars. This is the cave from which the Anunnaki sent the first Inca king to found Cuzco, 75 miles southeast of Machu. [Childress, 2012: 319 – 343; Lost Realms: 140 – 154]
Cuzco, which the Anunnaki built sometime after Noah’s Deluge of 10,500 BCE, sits on a promontory called Sacsahuaman (11,500 feet above sea level) that rises above the Tullumayo and Rodadero Rivers. The site panned gold and featured aircraft landing facilities.
Cuzco’s “older edifices were built of perfectly cut, dressed and shaped stones of brown trachyte, stones of great size and the oddest shapes that fitted one into another’s angles with precision and without mortar.”
Some of Cuzco’s megalithic stones had been melted, with added oxygen, at temperatures over 1,100 degrees. This glazed their silicate surfaces, so the “surfaces, even if irregular, feel smooth to touch.” The builders put each newly placed, still-hot stone against the already cool, hardened, prior-placed jigsaw polygonal blocks, and the new stone stayed fixed in perfect precision against them. The new stone became its own separate block of granite, which would then have more blocks fitted into interlocking positions in the wall. [Childress, 2012: 249]
The Sacsahuaman promontory, “shaped like a triangle with its base to the northwest,” rises eight hundred feet above the city below. Gorges form Sacsahuaman’s sides and “separate it from the mountain chain which it rejoins at its base.”
Tunnels, niches and grooves perforate huge rock outcroppings, cut and shaped into giant platforms. Siphon-fed aqueducts channeled water to wash ores. Childress speculates that one of the tunnels connects Cuzco with Tiahuanacu, though moderns who explored the tunnels never returned to the surface. “Cuzco started out as a mining camp and processing area, then became a temple.” [Childress, 2012: 246]
A flattened area “hundreds of feet wide and long”–probably an aircraft landing strip–marks the promontory’s middle. From here, aircraft lofted away the nuggets the structures panned. “The narrower edge, elevated above the rest of the promontory, contained circular and rectangular structures under which run passages, tunnels and openings beneath a maze cut into natural rock”–all part of Cuzco’s gold-panning operation.
Three massive walls of massive stones “rise one behind the other, each one higher than the one in front of it, to a combined height of sixty feet.” The walls run parallel to each other in a zigzag and protect this area from the rest of the promontory. Earth-fills behind each wall created terraces. The lowest, first [Anunnaki-built] wall is built of colossal boulders that weigh ten to twenty tons, many fifteen feet high and fourteen feet long and thick. One boulder in this wall reaches twenty-seven feet tall and weighs over 300 tons. As in the city below, the faces of these boulders have been artificially dressed to perfect smoothness and beveled at the edges. The massive blocks lie atop one another, sometimes separated by a thin stone slab.
Everywhere the stones are polygonal, odd sides and angles fitting without mortar into the odd and matching shapes of the adjoining stone blocks.
The builders quarried the gigantic stone blocks miles away and moved them “over mountains, valleys, gorges and streams.” At the center of the front wall, the Gate of Viracocha made a four-foot opening. “Steps then led to a terrace between the first and second walls, from which a passage opened against a transverse wall at a right angle” and led to the second terrace. There, two entrances at an angle to each other led to the third wall and “could be blocked by lowering large, specifically fitted stones into the openings.”
On a nearby plateau, Sitchin noted a cut rock that once held “a mechanical contraption.” Walls, conduits, receptacles and channels form a series of water-channeling structures one above the other; rain or spring water could flow from level to level. A huge circular area enclosed by megalithic ashlars lies underground at a level permitting the water to run off from the circular area–a large-scale gold-panning facility. “The water flowed off through the sluice-chamber and away through the labyrinth. In the stone vats, what remained was gold.”

Facing the cyclopean walls across the wide, open flat area stands the Chingana (labyrinth), a cliff whose natural features have been artificially enlarged into passages, corridors, chambers, niches and hollowed-out spaces, featuring “rocks dressed and shaped into horizontal, vertical, and inclined facings, openings, grooves cut in precise angles and geometric shapes, holes drilled down.”

Sitchin says it was the megalithic builders of Tiahuanacu, rather than the very recent Incas, who built Cuzco, long before Inca times. “One of the Inca master masons decided to haul up a stone where the original builders had dropped it. More than 20,000 Indians dragged it with great cables.” But the rock rolled down the slope and killed four thousand Indians.

Coricancha

The Coricancha [conflated into the “Temple of the Sun” by the Spanish] is an Anunnaki temple of which a semicircular wall survives; Sitchin wrote that it honored Adad. The Coricancha adjoined auxiliary temples for Nannar, Inanna and other Anunnaki. Next to an enclosure, the Acilla-Huasi, we see “a secluded enclave where virgins dedicated to the Great God lived.” [Childress, 2012: 209 – 254; Lost Realms: 120 – 131]

CHAVIN DE HUANTAR
The nation of Chavin appeared suddenly, around 1500 BCE or earlier. The main city, Chavin De Huantar–probably a ceremonial center–sits at an elevation of 10,000 feet in the Cordillera Blanca range of the northwestern Andes of northern Peru, between the coast and the Amazon basin.
There, in a mountain valley where tributaries of the Maranon River form a triangle, an area of 300,000 square feet was flattened and terraced for complex structures precisely laid out. Buildings and plazas form precise rectangles and squares aligned with east-west as the major axis. The builders ingeniously used the two levels of the tributaries to create a flow for panning gold. The site once held ultramodern machinery.
The site yielded artifacts with motifs from Ningishzidda’s Mayans as they retreated south–jaguars, condors, entwined fangs–Egyptian motifs–the Eye of Marduk/Ra, serpents, pyramids–Mesopotamian motifs–winged disks, Anunnaki headdresses, and trophy statues of Sumerians in pain–and portraits of black African Olmecs holding mining tools.
A nearby Peruvian site shows Gilgamesh of Uruk, Sumer, in Mesopotamia, wrestling two lions–good evidence that the same people inhabited both places.
The Sumerian trophy statues show straight-nosed Indo-European men from Asia Minor, Elam and the Indus Valley–the “giants” with metal tools–perhaps part of two invasions, one by Naymlap, who landed at La Plata Island and Ecuador. Inca histories say Adad and his Sumerian assistants massacred these newcomers.
The three main buildings rose from terraces that elevated them and leaned them against a forty-foot-high outer western wall that ran 500 feet, encompassed the complex on three sides and left the site open to the river on the east side.
The southeast corner building–the site’s largest (240 x 250 feet in area)–rose three stories, made of smooth-faced, incised masonry stone blocks. “From a terrace on the east a monumental stone stairway led to a gate up to the main building.” Two cylindrical columns flanked the gate. “Adjoining vertical stone blocks supported a thirty-foot horizontal lintel made of a single monolith. A double stairway led to towers atop the building.”
Steps led from the eastern terrace at Chavin De Huantar to a sunken plaza surrounded on three sides by rectangular platforms. A large flat boulder with seven grind holes and a rectangular niche stood “outside the southwestern corner of the sunken plaza.”
The three buildings featured corridors and maze-like inside passages, connecting galleries, rooms and staircases faced with decorated stone slabs. The stone slabs that roofed the passages were set so as to support the buildings.
The Tello Obelisk
This monolith in the main building is engraved with Chavin’s tales of figures with “human bodies and faces with feline hands, fangs or with wings,” as well as animals, birds, trees, gods emitting rocketlike rays, and geometric designs.
The Raimondi Monolith
This stone column, inscribed with Adad’s bull, in Chavin De Huantar’s middle building sticks through a hole in the floor above it. [Lost Realms: 184 – 196]
Behold the feet-deep ’Candelabra’ in the nearby Bay of Paracas, Peru, symbol of Adad-Viracocha, the Great God of South America.
Candelabra, Adad/Viracocha’s trademark, Bay of Paracas, Peru
From South America, Ningishzidda surveyed Yucatan and the Valley of Mexico for gold and then brought his Olmec and Sumerian aides to organize Indians to mine and refine gold, silver and other minerals. [Lost Realms: 237 – 250]
On Easter Island (Rapa Nui), a native informant told Ancient Aliens [Season 3, Disk 2, 44:53] that a god, wearing an Eagle Helmet (helmet of the Nibiran Astronaut Corps [Igigi]) transported the huge statues through the air to their platforms on the hill.
After Enlil nuked Sinai and radiated Sumer in 2024 BCE, Ningishzidda and his team–Anunnaki assistants, black Olmecs and bearded Mesopotamians–brought descendants of Ka-in (“Indians” to American anthropologists) across the Atlantic to Yucatan and then to Teotihuacan in the Valley of Mexico. In both Central America’s Yucatan and in the Valley of Mexico, the Anunnaki team first built their megalithic pyramidic power stations, then had their Indians build megalithic structures and statues. In Yucatan, the Maya, guided by Olmec technicians and Sumerian overseers, built (at Dzibilchaltun, Palenque, Tikal, Uxmal, Izamal, Mayapan, Chichen Itza, Copan, Tolan and Izapa) huge stepped-stone temples like Sumer’s. [Lost Realms: 86 – 110]
The Olmec-Maya culture spread across Central America and into the Valley of Mexico till Ningishzidda* left them in 311 BCE, when he said he’d return December 21, 2012, with other Anunnaki, to wrest control of Earth from its current matrix of war, miscegenation and debt-slavery. [*Ningishzidda is also known as Hermes, Thoth, Votan, Quetzalcoatl, Kukulkan, Itzamna, Mercury.]
Olmec structures appeared suddenly throughout Yucatan, without prior development. The ceremonial center at Itza aligned with three-mile markers along a north-south line. Heads were buried around 1 CE as the Olmecs retreated south. Pyramids were laid out south to north to allow transit-sightings.
Toltecs moved to Itza after they left Tolan, near Mexico City. The Toltec pyramid to the Plumed Serpent reaches 185 feet high; it duplicated the pyramid the Toltecs left at Tula.
Toltec centers in the Yucatan featured ballcourts where opposing teams enacted the astronomical events depicted in the Anunnaki account of Earth’s creation. Toltecs beheaded the losing captain of the losing team to mimic how Nibiru, the Anunnaki homeplanet, decapitated Tiamat, the Proto-Earth.
The Maya decorated the main ballcourt at Chichen Itza “with scenes of the Sumerian Tree of Life and the standard winged and bearded Anunnaki.”
Itzas mined. They employed cutting tools and lights for their mines. They–perhaps after Ningishzidda left them–threw maidens, as well as mirrors, gold, silver, refined tin and bronze ornaments engraved with bearded Mesopotamians and Anunnaki gods, into the well [Itzas were beardless descendants of Ka-in].
Olmec ponds connected through subterranean conduits.
Itza statues bore glyph writing and a calendar starting 3113 BCE. Their engravings showed miners and metalworkers with tunneling and metalworking tools [a metal-cutting flame thrower], as well as Anunnaki flight (winged people).
Archeologists unearthed sixteen buried massive stone Negroid heads the Olmecs had moved through over sixty miles of jungle and swamp. The buried heads measured 5 to 10 feet high and 21 feet in circumference and weighed up to 25 tons. The Olmecs, as they retreated south to survive Indian attackers, buried the stone heads. [Lost Realms: 86 – 110]
“Maya cities were open-ended ceremonial centers surrounded by a population of administrators, artisans and merchants supported by an extensive rural population.” From a base abutting the Gulf of Mexico, the cities of La Venta, Tres Zapotes and San Lorenzo formed the area of Olmec settlement, which cut southward toward the Pacific Coast of Mexico and Guatemala. By AD 900 the realm of the Maya extended from the Pacific Coast to the Gulf of Mexico and the Caribbean. Mayan civilization spread southward across Mesoamerica by 800 BCE. Thoth organized the Maya into four domains, each with a capital: Palenque was capital of the West; Calakmul was capital of the North, whose rulers conquered Palenque; Copan was capital of the South; and Tikal, the East.
Palenque with plumbing
From Palenque, a focus on the twelve-foot king buried under a pyramid:
Kukulkan emerges from rocket
Itza and the other Mayan centers featured ballcourts where the Celestial Battle was acted out by opposing teams. The losing captain was decapitated, enacting, Sitchin suggests, how the Anunnaki homeplanet, Nibiru, decapitated Tiamat, the Proto-Earth.
Ningishzidda tutored the Maya in both phonetic and character writing; they manufactured ink and paper books. The Maya produced monumental sculptured art, carved jade, hand-held lights, weapons, flamethrowers and tools fitted with mirrors. Thoth taught them the principles of place value and zero, which, with the advanced astronomy he dictated, let them know when Nibiru neared Earth and when Nibiran “gods” came and went.
He showed them how to make calendars that showed Venus circled the sun every 6,000 years. Thoth also taught the Mayans the more-than-26,000-year Precession of the Equinoxes on Earth.
He showed them how to make and use telescopes, and had astronomer apprentices raised in the dark to better see the heavens through the telescopes. The accuracy of their predictions over time validates Nibiran presence and influence on Earth. [Tsarion, M., 2012]
Thoth left Earth. He said he and his father Balam Yokte (Enki) would return on December 21, 2012, and challenge the forces of evil on Earth. Thoth and his accompanying Nibirans “left, presumed to be swallowed by the ruler of the night, the Jaguar; and the image of Thoth was henceforth covered by the jaguar’s mask through which serpents, his symbol, emerge.”
Priests encouraged blood sacrifice, ostensibly to bring back the gods, but then used it to execute prisoners and control people. Kings and priests kept up blood sacrifice–at first of rulers, then of enemy rulers, then of anyone they needed to control.
From 1000 to 450 BCE, Chichen Itza, near where Thoth first landed in Yucatan, became “the principal sacred city of Yucatan.”
Chichen Itza, by 450 BCE “the principal sacred city of Yucatan,” boasted a sacred well. The Itzas, migrants from the south, built their ceremonial center–the great central pyramid and the observatory–near where Thoth first came ashore in Yucatan. The Toltecs, who had migrated from central Mexico, gradually populated Teotihuacan, then migrated south to Chichen Itza, to be near the place Thoth came ashore and would return.
At Chichen Itza, the Toltecs reproduced sculptures of the Sumerian stories of the Celestial Battle, including the astronomical events that killed Tiamat, the proto-Earth, and resurrected Tiamat as Earth. The art shows the exact position of the Earth from the outside to the inside of the inner solar system. They built a 9-stage pyramid dedicated to Thoth, decorated with carvings of him. The decorations incorporated calendrical aspects into its structure, and the pyramid duplicates the one at Tula, capital of the Toltecs from Teotihuacan.
Toltecs, in the sacred well at Chichen Itza, threw 40 virgin girls as sacrifices, as well as gold, silver and copper ornaments made from metals refined from ores Mesoamerica lacked. Art on the ornaments showed bearded Sumerian types, as well as Sky Gods [Nibirans].
The Olmecs gradually retreated south as Indians moving down from the North attacked them. Olmecs first fled their older metropolitan center near the Gulf, circa 300 BCE. They gave up their more southern centers last. The Indians killed both the negroid Olmecs and the “Bearded Ones” from the Eastern Mediterranean.
In Mexico, Ningishzidda, Sumerian overseers, Olmec foremen and Indian laborers built two unadorned pyramids at Teotihuacan.
Teotihuacan is thirty miles north of Mexico City.
Stone markers, two miles out, lined up with the Temple of the Sun on its east-west axis; its other axis pointed north-west at the time it was built on the model of the Giza Pyramids. Teotihuacan’s Sun Pyramid measures 745 feet per side at its base. The Anunnaki built the Sun Pyramid of “mud bricks, adobe, pebbles and gravel held together by a sheath of crude stones and stucco,” with an aggregate mass of 10,000,000 cubic feet. The Sun Pyramid rises 250 feet. [Lost Realms: 49]
Ningishzidda brought the Maya, then the Toltecs, to Teotihuacan. In 987 CE, the Toltecs moved to the Yucatan Peninsula.
The Anunnaki Council of Enlil, Commander of the goldmining expedition from the planet Nibiru to Earth, exiled Marduk–once heir to rule Nibiru–from Egypt to North America in 8670 BCE for his attack on their forces.
Marduk and his cohorts–thousands of 7-to-10-foot-tall, blond- and red-haired people, the so-called “Mound Builders”–built pyramids, effigies, fortresses, underground tunnels, energy-capacitating chambers, astronomical observatories and megalithic cities, as well as more recent burial mounds for tens of thousands of inhabitants, all over North America.
Map of Mound Builder Sites [Vieira, 2013]
The Anunnaki mined gold and copper and used electromagnetic and sonar energy to move huge stones along many-mile lanes of magnetically-charged rocks like those Marduk’s father, Chief Scientist Enki, developed in southern Africa. Anunnaki sites in North America yielded iron implements, cuneiform writing, incised designs, woven fabrics and evidence of advanced metallurgy.
The Anunnaki took descendants of Ka-in from Asia and of Adamu from Africa across both the Atlantic and Pacific. These Anunnaki, and perhaps other extraterrestrials–the moundbuilders–far preceded both the Eskimo and “Indians” who crossed from Asia, as well as the South American “Indians”–also descendants of Ka-in and Adamu–who came north from the Lake Titicaca area. Some of these giants and their hybrid offspring may have stayed on as rulers of the more recent arrivals to North America, the ones who call themselves “Indians” and came to occupy the mounds of the ETs. Some of the giants had double rows of both upper and lower teeth and may have been cannibals.
On Mars, Nibirans manned a spaceport and lasered a monument to Alalu, one of their kings, who was exiled to Mars and died there. Nibiran Chief Medical Officer Princess Ninmah, on orders from King Anu of Nibiru, had Pilot Anzu create the statue and left 20 men with him to construct Marsbase for transshipment of gold from Earth to Nibiru.
Datum 22: BOSNIAN PYRAMIDS SHOW STONE MADE INTO CONCRETE, THEN BLOCKS
Semir Osmanagich describes the pre-Deluge Bosnian Pyramids–the biggest 700 feet high (vs. Giza’s 450 feet). Nibirans probably built these structures and the labyrinth beneath them. The Bosnian Pyramids are made of huge blocks of very fine poured concrete. The Deluge buried the 5 pyramids about 13,000 years ago. Under the Bosnian Pyramid complex, Osmanagich found a labyrinth of tunnels with fresh running water.
Under the pyramid Osmanagich calls Bosnia’s Pyramid of the Sun, he reports, are mysterious little generators that still run on Tesla technology. The pyramid still sends an ever-growing energy wave up and releases a high-frequency sound wave (like the one Tellinger reports from the South African structures and lanes).
All who’ve worked inside these tunnels come out euphoric; the vibrations inside release endorphins and pleasurable sensations in them. The vibrations also purify the water that runs through the pyramid complex.
Datum 23: ANUNNAKI COMPUTERS
The Master Computer
In 400,000 BCE the master computers and command modules that controlled space and Earth communication sat in Enlil’s place at Nippur in southern Iraq. In 380,000 BCE, Anzu stole crystals [computers] and MEs [portable command modules]. Each ME controlled a certain function, and only its possessor could operate it. Anzu now had the ME of Enlil’s Command of Earth–the ME called The Brilliance, or ME of Enlilship–until Ninurta retook them for Enlil.
After the Deluge of 11,000 BCE that destroyed Nippur, Ningishzidda installed the master computer controls in the Great Pyramid (the E.Kur) in Egypt. The Great Pyramid was directed by different gods in turn. During the Second Pyramid War [8670 BCE], the Great Pyramid was the temple abode of Marduk-Ra. After Ninmah-Ninharsag-Hathor brokered a peace agreement that ended the war, she was appointed mistress of the new Ekur and got the title Goddess of the Rocket Ships. Later, “Isis became Mistress of the Pyramid.” Finally, Ningishzidda, when he reigned in Egypt, took the title “Guardian of the Secrets of the Ekur.” In 3450 BCE, when Enki ordered Ningishzidda to yield to Marduk, Ningishzidda hid the master computer programs and the plans to the Pyramid. [DNA: 55 – 57]
PORTABLE, DEDICATED “ME” COMPUTING DEVICES
Anunnaki royals carried crystal computers a few inches large–MEs–in purses they wore on their wrists. They sewed MEs they controlled into their clothes or worked them into their scepters and hand-held weapons, as well as into weapons and devices they ensconced on their aircraft and in their control rooms. Inanna seduced Enki for ninety MEs that Anu ruled she could keep.
“MEs give their depositories or the persons in possession of them powers such as the control of interplanetary travels and communications, scientific and technological knowledge and quasi-magical potency. In Sumer, the proprietor of a specific ME is the only one to have full power over it and to bend its operations according to his or her will: the proprietor is the unique master of the system.” [DNA: 58 – 75]
References abound for Anunnaki devices–probably atomic–in their structures, rockets and even in the Ark of the Covenant.
Ninurta, after he defeated Marduk in the Giza pyramid, smashed a technological device called “The Heart of the Ekur” which “emitted a net force.”
Enki, drunk and in sexual heat with Inanna, gave her computer devices (MEs) he later regretted but couldn’t get back. These devices–including the Exalted Tiara, the Exalted Scepter and Staff, and the Exalted Shrine–set in jewels and crystals and affixed to her clothes, a scepter or shrine, could “emit and control a force field,” or a ME sewn into her clothes “could emit a luminous field” that made her “look clothed in radiance” and let her proclaim herself queen, clothed in radiance. For centuries afterward, Earthlings with whom she coupled were dead the next morning–perhaps a combination of radiation and Inanna’s sexual practices.
A computer dated one hundred years before Jesus, discovered in 1900 off the island of Antikythera near Crete, contained a system of differential gears not known to have been used until the sixteenth century.
Compare and contrast the variants of Gardiner, Pye, Tellinger, Sitchin, Childress, Cremo, Thompson, Pravupad and others for the elements they have in common. Use the principle of parsimony–which explanations best account for all the data and leave the fewest ooparts (data and artifacts that the explanations fail to comprehend)? All our theories are hypothetical formulations, words and mathematical models we employ to account for our observations. Our observations are in turn directed by our theories. In science we test the null hypothesis–what data would disprove our theories.
Consensus determines social reality, but does it predict the chemical composition of asteroids or the shape of the landmass beneath the Antarctic icepack the way ancient Sumerian tablets do? We’re left with alternate explanations of much. Enjoy them, wonder, and keep asking what to cut away with Occam’s razor. We grasp the elephant of reality from varying perspectives. See them all and get a clearer picture of the beast and of the blind who generalize from their particular vantages to the nature of the whole and its context. An explanation or theory that most parsimoniously (simplest, with the fewest words, numbers and adumbrations) accounts for all the data and makes more accurate predictions of future behavior, as well as of past accumulated data, is more useful for our understanding than one that uses more words and symbols and must exclude exceptions to work.
Thus, Copernicus’s heliocentric explanation of the apparent movement of planets takes less math than Ptolemy’s epicycle system, though the latter can also predict apparent planetary movement. Freer writes: “I am convinced of the correctness of Sitchin’s thesis and of Sir Laurence Gardiner’s by utter coherence; they are the only explanations which contain no inexplicable elements, no contradictions and in which all the facts dance together in total consort. Our species’ internecine violence is a product of Babel-factoring for crowd control that has carried through to great wars and the religious mayhem of crusades, jihads, inquisitions and persecutions, and not intrinsically of human nature. The Roman Church, a continuation of the fear-of-the-god-Enlil [Yahweh] type of subservient religion, came into ascendance by an alliance with, and gradual assimilation of, the Roman empire and adopting its practices. Suppression of our true history through promulgation of the Hebrew Old Testament forgeries, done to make Enlil their single monotheistic god, effected a racial amnesia; the ancient Sumerian culture was forgotten and only rediscovered in the late 1800s. Military and political controllers have suppressed the knowledge and data about alien presence on this planet by denial and ridicule.”
ADVANCED CHEMISTRY CHAPTER 5: BONDING I: BASIC CONCEPTS OF BONDING
Purpose • To understand the basic concepts of bonding in metal atoms, nonmetal atoms, and ions. • Learn how to draw accurate Lewis structures and indicate formal charges. • Know how to name molecules. • Know how to calculate bond enthalpies.
Bonding • A chemical bond is an attraction between atoms that allows the formation of chemical substances that contain two or more atoms. • The three basic types of bonds are: • Ionic - electrostatic attraction between ions. • Covalent - sharing of electrons. • Metallic - metal atoms bonded to several other atoms.
Valence Electrons • Valence electrons are electrons in the outermost shell of an atom; electrons in the highest occupied energy level. • The number of valence electrons determines the properties of an element.
Lewis Dot Symbols • G.N. Lewis postulated that atoms achieve greater stability by bonding and procuring a noble gas electronic structure. • When forming compounds, atoms tend to add or subtract electrons until they are surrounded by eight valence electrons (the octet rule). • He devised a system, the Lewis dot system, where dots represent the valence electrons, and are moved or combined to represent processes and bonding.
Formation of Cations • A positively charged ion, called a cation, is produced when an atom loses one or more valence electrons. • e.g. Li → Li⁺ + e¯
Formation of Anions • A negatively charged ion is called an anion and is produced when an atom gains one or more electrons. Note that the names of anions end with –ide. • Cl + e¯ → Cl¯ chloride
Formation of Ionic Bonds & Ionic Compounds • An ionic compound is one formed between a cation and an anion. Although they are composed of ions, ionic compounds are electrically neutral. • Cations and anions have opposite charges and therefore attract each other by means of electrostatic forces, forming bonds called ionic bonds. • Because compounds are electrically neutral, the total positive charge of the cations must balance the total negative charge of the anions.
Naming Cations • Cations (except ammonium, NH₄⁺) take their names unchanged from the metals they are derived from. When a metal forms cations with two different charges, the suffixes –ous and –ic are used to designate the lower and higher oxidation states respectively.
Naming Anions • Anions take the element’s name but with the suffix changed to –ide.
Naming Oxyanion Compounds • The one with the fewest oxygens has the prefix hypo- and ends in –ite (ClO−: hypochlorite). • The one with the second fewest oxygens ends in –ite (ClO₂‾: chlorite). • The one with the second most oxygens ends in –ate (ClO₃‾: chlorate). • The one with the most oxygens has the prefix per- and ends in –ate (ClO₄‾: perchlorate).
Lattice Energy • Lattice Energy is the energy required to completely separate a mole of a solid ionic compound into its gaseous ions. • Lattice energy, then, increases with the charge on the ions. • It also increases with decreasing size of ions.
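Both trends can be seen numerically with a minimal Python sketch of the Coulomb proportionality E ∝ |Q₁Q₂|/d. The scaling constant is arbitrary and the ionic radii are approximate values assumed for illustration; the output shows relative magnitudes only, not measured lattice energies.

```python
# Relative lattice-energy comparison from Coulomb's law: E ~ k * |Q1 * Q2| / d.
def coulomb_trend(q1, q2, d_pm):
    k = 1.0e5  # arbitrary scaling constant, for comparison only
    return k * abs(q1 * q2) / d_pm

# (cation charge, anion charge, approximate cation + anion radii in pm)
pairs = {
    "KCl  (+1/-1)": (1, -1, 138 + 181),
    "NaCl (+1/-1)": (1, -1, 102 + 181),  # smaller cation than K+
    "MgO  (+2/-2)": (2, -2, 72 + 140),   # doubled charges, small ions
}
for name, (q1, q2, d) in pairs.items():
    print(f"{name}: relative lattice energy ~ {coulomb_trend(q1, q2, d):.0f}")
# MgO >> NaCl > KCl: higher charges and smaller ions give larger lattice energies.
```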
Properties of Ionic Compounds • Exist in the solid state. • Form crystal lattices, not molecules. • Good insulators. • High melting points/boiling points. • Conduct electricity when dissolved in water or as a liquid; are strong electrolytes. Solids do not conduct electricity. • Are insoluble in organic solvents. • Are very reactive. • Most are brittle and break under stress.
Metallic Bonding • Metals consist of closely packed cations and loosely held valence electrons. • The valence electrons in the metal can be modeled as a sea of electrons. The valence electrons are mobile and can drift from one part of the metal to another. • Metallic bonds are forces of attraction between the free-floating delocalized valence electrons and the positively charged metal ions. These bonds hold metals together.
Properties of Metals • Conduct heat & conduct electricity • Generally high melting and boiling points • Strong • Malleable (can be hammered or pressed out of shape without breaking) • Ductile (able to be drawn into a wire) • Metallic luster • Opaque (reflect light)
Alloys • An alloy is a mixture or metallic solid solution composed of two or more elements. • Alloys are important because their properties are often superior to those of their components.
Examples • Duralumin - copper and aluminium • Wood’s metal - bismuth, lead, tin and cadmium • Bronze - copper and tin • Brass - copper and zinc • Rose gold - copper and gold • Solder - lead and tin • Steel - iron and carbon, often other metals as well, such as Mn, B, and Cr • Nichrome - chromium, iron, nickel • Surgical stainless steel - iron, carbon, chromium, molybdenum, nickel
Molecules & Molecular Compounds • Ionic bonds are formed by giving up and accepting electrons. Atoms that are held together by sharing electrons are joined by a covalent bond. • A molecule is composed of atoms held together by chemical bonds. • Molecular compounds are composed of different atoms bonded together by covalent bonds and almost always contain only nonmetals. • Polyatomic molecules are molecules in which more than two atoms are bonded.
Representing Molecules • A molecular formula shows how many atoms of each element a substance contains. • Structural formulas show the order in which atoms are bonded. • Perspective drawings also show the three-dimensional array of atoms in a compound. • We use molecular models to visualise and represent molecules. These include the space-filling molecular model and the ball-and-stick molecular model.
Lewis Structures • Lewis suggested that a bond between atoms is the sharing of electrons, called a covalent bond, represented in Lewis structures as: • Molecules that are comprised of covalent bonds only are called covalent compounds. When there are many valence electrons, only the unpaired electrons are involved in bonding, the others remain unbonded and are called lone pairs or unshared pair.
Covalent Bonds • Procuring 8 electrons is called the octet rule. • Single bonds are where two atoms share one pair of electrons. • Double bonds when two atoms share two pairs of electrons. • Triple bonds when three pairs are shared. • Multiple bonds are more stable than single bonds. • There are several electrostatic interactions in these bonds • Attractions between electrons and nuclei, • Repulsions between electrons, • Repulsions between nuclei.
Coordinate (Dative) Covalent Bonds • In a coordinate covalent bond, the shared electron pair comes from one of the bonding atoms.
Naming Molecular Compounds • Nomenclature is similar to that of ionic compounds. Greek prefixes are used to indicate multiplicity. The mono- prefix is not used for the first element, and in oxides the final vowel of the prefix is often left out. With hydrides, trivial names and non-prefixed names are in common use. • Naming • Binary Compounds • Acids and Bases • Hydrates • Organic Compounds
Naming Binary Compounds Nomenclature • The ending of the more electronegative element is changed to –ide (CO₂: carbon dioxide and CCl₄: carbon tetrachloride). • If the prefix ends with a or o and the name of the element begins with a vowel, the two successive vowels are often elided into one (N₂O₅: dinitrogen pentoxide).
Naming Acids and Bases Nomenclature • An acid produces a hydrogen cation (proton) when dissolved in water. • Anions whose names end in –ide form acids that are prefixed with “hydro” and whose suffix becomes “ic.” • Oxoacids are those that have a central atom, oxygens, and hydrogens toward the outside. The main acid usually has the “ic” suffix. Adding an O changes the name to “per…ic” acid. • Removing an O gives the “-ous” acid (e.g. HClO₂: chlorous acid), and removing two O’s produces the “hypo…ous” acid. • Rules for naming anions: • If all the protons are removed from the normal “-ic” acid, the “-ate” suffix serves. If all the protons from the “-ous” acid are removed, the suffix is “-ite.”
Naming Hydrates • Hydrates are compounds that bind water when crystallised. • They are named with the number of water molecules indicated afterwards. • A hydrate is named by naming the cation first, then the anion, followed by “hydrate” with a Greek prefix to indicate the number of water molecules. • E.g. CuSO₄·5H₂O is named copper(II) sulphate pentahydrate.
Naming Organic Compounds Nomenclature • Organic chemistry is the study of carbon compounds. • Straight-chain hydrocarbons are named with the Greek numerical prefix for the number of carbon atoms and the suffix “-ane.” • The first four do not use the Greek prefix: methane, ethane, propane and butane.
Bond Polarity • In a covalent bond the electrons are shared between two atoms. • In an ionic bond the electrons are practically transferred completely from one atom to the other, creating ions of opposite polarity. • Bond polarity is a measure of how equally or unequally the electrons in any covalent bond are shared. • A nonpolar covalent bond is one in which the electrons are shared equally, as in Cl₂ and N₂. • A polar covalent bond is one in which one of the atoms exerts a greater attraction for the bonding electrons than the other.
Electronegativity • Atoms of widely varying electronegativities form ionic bonds, and those of closer electronegativities form polar covalent bonds. • Atoms in a bond with an electronegativity difference greater than 2.0 generally form an ionic bond; with a smaller difference, a polar covalent bond.
Dipole Moments • A dipole occurs when electrons are not shared equally in a covalent bond. When two atoms share electrons unequally, a bond dipole results. • The dipole moment, μ, produced by two equal but opposite charges, Q, separated by a distance, r, is calculated: μ = Qr • It is measured in debyes (D). • The dipole is indicated by a crossed arrow pointing toward the more electronegative atom, and leads to partial charges at the various atoms.
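A worked example makes μ = Qr concrete. This is the classic HCl case; the 1.27 Å bond length and 1.08 D measured dipole moment are standard textbook values.

```python
# Dipole moment of HCl from mu = Q * r, assuming a full electron transfer.
e = 1.602e-19       # elementary charge, in coulombs
r = 1.27e-10        # H-Cl bond length, in meters (1.27 angstroms)
debye = 3.336e-30   # 1 debye, in coulomb-meters

mu_full = e * r / debye   # dipole moment if the charges were a full +e and -e
mu_measured = 1.08        # experimentally measured dipole moment of HCl, in D
print(f"mu for full charge separation: {mu_full:.2f} D")                 # ~6.10 D
print(f"fraction of full charge shifted: {mu_measured / mu_full:.2f}")   # ~0.18
# HCl behaves as if only ~18% of an electron has moved to Cl: polar covalent.
```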
Attraction between Molecules • Intermolecular attractions are weaker than either ionic or covalent bonds. These include • Van der Waals Forces - the collective name for weak intermolecular attractions between molecules. • Dipole Interactions - two dipolar molecules interact with each other through space. • Dispersion Forces - attractions arising from instantaneous and induced dipoles; interactions between ions, dipoles, and induced dipoles account for many properties of molecules. • Hydrogen Bonds - a special type of dipole-dipole attraction which occurs when a hydrogen atom bonded to a strongly electronegative atom exists in the vicinity of another electronegative atom with a lone pair of electrons.
Drawing the Lewis Structure • Find the sum of valence electrons of all atoms in the polyatomic ion or molecule. • If it is an anion, add one electron for each negative charge. • If it is a cation, subtract one electron for each positive charge. • e.g. in PCl₃, # of valence electrons is: 5 + 3(7) = 26 • The central atom is the least electronegative element that isn’t hydrogen. Connect the outer atoms to it by single bonds. Keep track of the electrons: 26 − 6 = 20 • Fill the octets of the outer atoms. Keep track of the electrons: 26 − 6 = 20; 20 − 18 = 2 • Fill the octet of the central atom. Keep track of the electrons: 26 − 6 = 20; 20 − 18 = 2; 2 − 2 = 0 • If you run out of electrons before the central atom has an octet…form multiple bonds until it does.
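The step-by-step electron bookkeeping above can be mirrored in a few lines of Python for the PCl₃ example (hand-coded counts for this one molecule, not a general Lewis-structure solver):

```python
# Electron bookkeeping for PCl3, following the steps above.
valence = {"P": 5, "Cl": 7}
remaining = valence["P"] + 3 * valence["Cl"]   # step 1: 5 + 3(7) = 26
remaining -= 3 * 2    # three P-Cl single bonds use 6 electrons -> 20 left
remaining -= 3 * 6    # three lone pairs on each Cl use 18 electrons -> 2 left
remaining -= 2        # the last lone pair completes P's octet
print(remaining)      # 0 -> every valence electron has been placed
```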
Formal Charges • For each atom, count the electrons in lone pairs and half the electrons it shares with other atoms. • Subtract that from the number of valence electrons for that atom: the difference is its formal charge.
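That rule translates directly into code. A minimal sketch follows; the O=C=O electron counts used in the example are the standard ones.

```python
# Formal charge = valence electrons - (lone-pair electrons + shared electrons / 2).
def formal_charge(valence, lone_pair_electrons, shared_electrons):
    return valence - (lone_pair_electrons + shared_electrons // 2)

# Example: O=C=O. Carbon has no lone pairs and shares 8 electrons (two double bonds).
print(formal_charge(4, 0, 8))   # 0
# Each oxygen has two lone pairs (4 electrons) and shares 4 electrons (one double bond).
print(formal_charge(6, 4, 4))   # 0 -> all formal charges zero, a good structure
```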
Resonance Structures • Resonance occurs when two or more Lewis structures are required to represent a compound. Resonance structures are imaginary models that try to describe the real structure, which is somewhere in between.
Exceptions to the Octet Rule • Ions or molecules with an odd number of electrons. • Ions or molecules with less than an octet. • Ions or molecules with more than eight valence electrons (an expanded octet).
Bond Energies • Bond enthalpy is measured by determining how much energy is required to break a bond. • The energy required to break the bond between two covalently bonded atoms is called the bond dissociation energy. • Average bond enthalpies are positive, because bond breaking is an endothermic process.
Bond Enthalpies & the Enthalpy of Reactions • To estimate ∆H for a reaction, we compare the bond enthalpies of the bonds broken to the bond enthalpies of the new bonds formed. • ∆Hrxn = Σ D(bonds broken) − Σ D(bonds formed) • CH₄(g) + Cl₂(g) → CH₃Cl(g) + HCl(g) • ∆H = [D(C—H) + D(Cl—Cl)] − [D(C—Cl) + D(H—Cl)] = [(413 kJ) + (242 kJ)] − [(328 kJ) + (431 kJ)] = (655 kJ) − (759 kJ) = −104 kJ
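The same estimate as a short Python sketch, using the average bond enthalpies quoted above:

```python
# Estimate of delta-H for CH4 + Cl2 -> CH3Cl + HCl from average bond enthalpies (kJ/mol).
D = {"C-H": 413, "Cl-Cl": 242, "C-Cl": 328, "H-Cl": 431}

broken = D["C-H"] + D["Cl-Cl"]   # bonds broken: one C-H and one Cl-Cl
formed = D["C-Cl"] + D["H-Cl"]   # bonds formed: one C-Cl and one H-Cl
print(broken - formed)           # -104 (kJ): the reaction is exothermic
```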
References • Lecture: Brown, T. E., LeMay, H. E., Bursten, B. E., Murphy, C. M., Woodward, P. (2011). Chemistry: The Central Science (12th ed.). Prentice Hall, USA. • Laboratory: Brown, T. E., Nelson, J. H., Kemp, K. C., Stoltzfus, M. (2011). Laboratory Experiments for Chemistry: The Central Science (12th ed.). Prentice Hall, USA. • Brown, T. E. (2011). Solutions to Exercises for Chemistry: The Central Science. Prentice Hall, USA. • McMurry, J., Fay, R. C. (2010). Chemistry (4th ed.): Prentice Hall Companion Website. http://wps.prenhall.com/esm_mcmurry_chemistry_4/9/2408/616516.cw/index.html • Chemistry Online at http://preparatorychemistry.com/Bishop_Chemistry_First.htm • Chemistry and You at http://www.saskschools.ca/curr_content/science9/chemistry/index.html • Teachers Notes • http://chemwiki.ucdavis.edu/
CPU and Memory are two important aspects of a computer. CPU stands for Central Processing Unit and is responsible for processing data. Memory is responsible for storing data. Both CPU and Memory are important for a computer to function properly.
Let’s get into detail how Central Processor Unit (CPU) and Memory works.
Central Processor Unit (CPU)
The core component of any digital computer system, the central processing unit (CPU), is typically made up of the main memory, control unit, and arithmetic-logic unit. It serves as the actual brain of the entire computer system, to which various peripheral components, such as input/output devices and auxiliary storage units, are connected. The central processing unit (CPU) of modern computers is housed on a microprocessor, an integrated circuit chip.
The CPU is constantly executing computer programs that give it instructions on which data to process and in what order. We couldn’t use a computer to run programs without a CPU.
A CPU is the central processing unit of a computer: the main chip that performs all the calculations and operations of the machine. The speed and power of a CPU determine how fast a computer can run. A typical home PC has a CPU that runs at around 3 GHz, or three billion cycles per second.
Inside the CPU
At the hardware level, a CPU is an integrated circuit, usually referred to as a chip. An integrated circuit “integrates” millions or billions of tiny electrical components into circuits and packs everything into a small space. The processor is installed and fastened into the motherboard’s compatible CPU socket.
To ensure correct insertion into the CPU socket, the CPU chip is typically square with one notched corner. Numerous connector pins that match the socket holes are located on the chip’s bottom.
Intel and AMD have also made slot processors. These were much larger than socketed chips and slid into a slot on the motherboard. Many different socket types have appeared on motherboards over the years.
A CPU can be described in layers: some of those layers are real components, like transistors and chips, while others are abstractions, like gates and logic circuits.
Computer chips are built from components such as logic gates, which are in turn built from transistors. A transistor functions as a straightforward switch that either prevents data from passing through or permits it to do so. The bits of this information are either 0 (low) or 1 (high). More sophisticated information can be represented by combining these bits. Combining transistors creates logic gates, which are capable of performing extremely basic tasks.
CPUs are made up of millions, or today billions, of transistors: tiny switches that control the flow of electricity.
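As a toy illustration of how those switch-like transistors compose into gates and then into useful circuits, here is a minimal Python sketch (a pure software analogy, nothing hardware-specific):

```python
# Bits as 0/1; basic gates as boolean operations on those bits.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Gates compose into circuits, e.g. XOR, and from XOR a 1-bit half adder.
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = 10 in binary
```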
What does CPU do?
The CPU coordinates the activities of all other components of the computer system and performs all the arithmetic and logical operations applied to data.
The primary job of the CPU is to interpret input from a computer program or peripheral devices (such as a keyboard, mouse, printer, etc.). Next, the CPU either sends data to your monitor or carries out the peripheral’s desired action.
Functions of CPU
▶ Controls and handles all instructions and calculations.
▶ Controls all the operations.
▶ Fetches, decodes, executes and stores data.
Parts of CPU
The three basic components of the CPU are
→ ALU ( Arithmetic Logic Unit )
→ CU ( Control Unit )
→ Registers
ALU ( Arithmetic Logic Unit )
An ALU is a digital circuit that performs arithmetic and logic operations on data. It is the place where the actual execution of instructions takes place during processing operations.
▬ The data and instructions stored in primary storage prior to processing are transferred as and when needed to the ALU, where processing takes place.
▬ Data may move from primary storage to ALU and back again to memory many times before the processing is over.
▬ After processing is complete, the final results, which are stored in the main memory unit, are released to an output device.
The ALU is the fundamental building block of the central processing unit (CPU) of a computer, and it is generally composed of electronic circuits that implement logic gates and flip-flops.
The most common operations performed by an ALU are addition, subtraction, multiplication, division, and logic operations such as AND, OR, NOT, and XOR.
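A minimal sketch of that idea in Python: an opcode selecting one of a handful of arithmetic and logic operations (the opcode names are invented for illustration):

```python
# Toy ALU: an opcode selects which arithmetic or logic operation to perform.
def alu(op, a, b=0):
    ops = {
        "ADD": a + b, "SUB": a - b,
        "MUL": a * b, "DIV": a // b if b else 0,  # guard against divide-by-zero
        "AND": a & b, "OR": a | b,
        "XOR": a ^ b, "NOT": ~a,
    }
    return ops[op]

print(alu("ADD", 6, 7))            # 13
print(alu("AND", 0b1100, 0b1010))  # 8, i.e. binary 1000
```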
CU (Control Unit)
Control Unit manages the flow of data between the various components of a computer.
The control unit is also responsible for handling interrupts, which are signals that indicate that an event has occurred that needs attention.
It communicates with all other devices.
It obtains instructions from the program stored in main memory, interprets the instructions and issues signals that cause other units of the system to execute them.
Registers
▬ Registers hold various types of information such as data, instructions, addresses and intermediate results of calculations.
▬ They hold the information the computer is currently working with. As soon as a particular instruction or piece of data is finished with, the next one immediately replaces it, and information that results from the processing is returned to main memory.
▬ The size of the registers can affect the speed and performance of the processor.
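To make the interplay of control unit, ALU, registers and main memory concrete, here is a toy fetch-decode-execute loop in Python (a four-instruction machine invented for illustration, not a real instruction set):

```python
# Toy fetch-decode-execute cycle: a program in "memory", an accumulator
# register (acc), a program counter (pc), and one main-memory data cell.
program = [("LOAD", 5), ("ADD", 3), ("STORE", 0), ("HALT", None)]
data = [0]      # main memory for results
acc, pc = 0, 0  # accumulator register and program counter

while True:
    op, arg = program[pc]   # fetch the next instruction
    pc += 1
    if op == "LOAD":        # decode and execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "STORE":
        data[arg] = acc     # result returned to main memory
    elif op == "HALT":
        break

print(data[0])  # 8
```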
Computers use a variety of memory types to store data and programs. There are two main types of memory: primary and secondary.
Primary Memory is the main storage area for a computer’s processor. It is where the operating system, application programs, and data in current use are kept so that they can be quickly accessed by the computer’s processor.
Primary memory is computer memory that is accessed directly by the CPU.
It is located on the motherboard.
There are two types of primary memory:
Random Access Memory (RAM)
Read-Only Memory (ROM)
Random Access Memory (RAM)
It is used to hold the program and data during computation. It stores temporary data.
RAM is volatile, meaning it only stores data temporarily. Data is retained only as long as a continuous power supply is provided.
Any cell can be accessed in any order at the same speed, if its address is known.
The most important thing to know about primary memory is that it is very fast. This is because the processor can access any location in RAM directly, without having to search through a mass of data.
Read-Only Memory (ROM)
ROM is non-volatile, meaning it stores data permanently.
Information can be read by the user but cannot be modified.
Generally stores BIOS (Basic Input Output System).
Secondary memory refers to the storage devices that are used to store data and programs permanently. It is also known as external memory. It is non-volatile in nature.
Secondary memory is slower than primary memory, but it can store more data.
It is also more expensive than primary memory.
Secondary memory is used to store data and programs that are not in use currently. When you want to use a program or data from secondary memory, you have to first transfer it to primary memory.
Types of Secondary Memory
Magnetic Storage Devices
Optical Storage Devices
USB Flash Drive
Solid State Drive
Sequential Access Storage Device
It belongs to a group of data storage systems that read stored data in a sequence.
In contrast to random access memory (RAM), where data can be accessed in any sequence, a sequential device must read through the data in order; magnetic tape is the most popular sequential access storage medium.
Magnetic Tape: A long, narrow strip of plastic film is covered in a thin, magnetizable coating to create magnetic tape, which is a medium for magnetic recording. Tape recorders and video tape recorders are devices that use magnetic tape to record and playback audio and video. A tape drive is a device that uses magnetic tape to store computer data.
It was a crucial technology component in the creation of the first computers because it made it possible to automatically record vast amounts of data, store it for a long time, and access it quickly.
Direct Access Storage Devices
Hard disc drives, optical drives, and the majority of magnetic storage devices are examples of secondary storage devices that store data in discrete locations, each with a unique address.
- Magnetic discs: A magnetic disc is a type of storage that allows for the writing, rewriting, and accessing of data through the process of magnetization. It has a magnetic layer over it and stores data as tracks, spots, and sectors. Magnetic discs are frequently seen in the form of hard drives, zip disks, and floppy discs.
Floppy Disk: A floppy disc consists of a flexible disc with a magnetic coating, enclosed in a protective plastic envelope. These are some of the earliest portable storage devices, capable of holding up to 1.44 MB of data, but they are no longer in use because they hold so little memory.
Hard Disk Drive (HDD): A hard disc drive is made up of several circular discs, or platters, stacked one over the other about ½ inch apart around a spindle. The discs are made of aluminum alloy, a non-magnetic substance, and are then coated with a 10–20 nm layer of magnetic material. Early discs had diameters as large as 14 inches; they rotate at speeds ranging from 4200 rpm for personal computers to 15000 rpm for servers.
Data is saved by magnetizing or demagnetizing the magnetic coating. A magnetic read/write arm reads data from and writes data to the discs. Most current HDDs have capacities measured in terabytes (TB).
- CD Drive: CD stands for Compact Disk. CDs are circular disks that use optical rays, usually lasers, to read and write data. They are very cheap as you can get 700 MB of storage space for less than a dollar. CDs are inserted in CD drives built into the CPU cabinet. They are portable as you can eject the drive, remove the CD and carry it with you. There are three types of CDs:
- CD-ROM (Compact Disk – Read Only Memory): The manufacturer recorded the data on these CDs. Proprietary Software, audio or video are released on CD-ROMs.
- CD-R (Compact Disk – Recordable): The user can write data once on the CD-R. It cannot be deleted or modified later.
- CD-RW (Compact Disk – Rewritable): Data can repeatedly be written and deleted on these optical disks.
- DVD Drive: DVD stands for Digital Versatile Disc (originally Digital Video Disc). A DVD is an optical device that can store about 15 times the data held by a CD. DVDs are usually used to store rich multimedia files that need high storage capacity. They also come in three varieties – read-only, recordable and rewritable.
- Blu Ray Disk: Blu Ray Disk (BD) is an optical storage media that stores high definition (HD) video and other multimedia files. BD uses a shorter wavelength laser than CD/DVD, enabling the writing arm to focus more tightly on the disk and pack in more data. BDs can store up to 128 GB of data.
- Memory Storage Devices: A memory device contains trillions of interconnected memory cells that store data. These cells hold transistors that are switched on or off to represent the 1s and 0s of binary code, allowing a computer to read and write information. Memory storage devices include USB drives, flash memory devices, and SD and memory cards, which you’ll recognize as the storage medium used in digital cameras.
- Flash Drive: A flash drive is a small, ultra-portable storage device. USB flash drives became essential for easily moving files from one device to another. Flash drives connect to computers and other devices via a built-in USB Type-A or USB-C plug, making the drive a combined USB device and cable.
Flash drives are often referred to as pen drives, thumb drives, or jump drives. The terms USB drive and solid-state drive (SSD) are also sometimes used, but most of the time, those refer to larger, not-so-mobile USB-based storage devices like external hard drives.
These days, a USB flash drive can hold up to 2 TB of storage. They’re more expensive per gigabyte than an external hard drive, but they have prevailed as a simple, convenient solution for storing and transferring smaller files.
A pen drive has the following advantages in computer organization:
- Transfer Files: A pen drive is a device plugged into a USB port of the system that is used to transfer files, documents, and photos to a PC and vice versa.
- Portability: The lightweight nature and smaller size of a pen drive make it possible to carry it from place to place, making data transportation an easier task.
- Backup Storage: Most pen drives now come with the feature of password encryption, so important information related to family, medical records, and photos can be stored on them as a backup.
- Transport Data: Professionals or Students can now easily transport large data files and video, audio lectures on a pen drive and access them from anywhere. Independent PC technicians can store work-related utility tools, various programs, and files on a high-speed 64 GB pen drive and move from one site to another.
- Memory Card: A memory card or memory cartridge is an electronic data storage device used for storing digital information, typically using flash memory. These are commonly used in portable electronic devices, such as digital cameras, mobile phones, laptop computers, tablets, PDAs, portable media players, video game consoles, synthesizers, electronic keyboards and digital pianos. They allow adding memory to such devices without compromising ergonomics, as the card is usually contained within the device rather than protruding like a USB flash drive.
The lesson below is still images taken from the fully animated PowerPoint I created that is available in my Teachers Pay Teachers store page. If the PowerPoint is not one of my freebies, you may also head over to my YouTube channel to see the slideshow fully animated and can pause as needed to be sure you grasp each concept before moving forward (each lesson will always be free there!).
In this lesson we introduce the concept of exponent. The slides build off the idea of repeated addition and use this topic to get students thinking about repeated multiplication. Then the lesson continues on to describe what the power and base refer to and utilizes a chart to have students already pondering powers of 10.
So without further ado, read through the slides below to get a feel for what exponents are and how to evaluate them!
Phew, that’s a lot to take in. Once you’ve gone over this and found some practice problems to cement in your head what exponents are, you may move on to the next lesson below!
Just in case the slides aren’t your thing, here is a text outline of the main points of the lessons above!
- The basics
- By the end of this lesson you should feel comfortable:
- Identifying the parts of an exponential number
- Evaluating exponents
- Way back when you first learned to multiply, it was through repeated addition:
- We then used multiplication facts to more quickly recognize problems such as 7×9 or 12×8 without the need for repeated addition.
- Remember we mathematicians are lazy!
- If we can repeat addition, though, is it possible to repeat multiplication?
- First, let’s learn a little notation.
- We start with a number called the base.
- Then we raise it to a power called an exponent.
- This problem would be read five to the power of three or five cubed.
- This problem would be expanded as follows: 5³ = 5 × 5 × 5 = 125.
- The exponent tells us how many times to multiply a number together with itself.
- So let’s look at a few examples of repeated multiplication with exponents:
- Our first column holds our exponents.
- The second shows the expression with a base of 10 in each case.
- Then the pattern follows: 10¹ = 10, 10² = 100, 10³ = 1,000, and so on; each increase of the exponent by one adds another zero.
- Special Case
- There is a question we should always ask ourselves in math, however.
- What about zero?
- The simple answer is that any number raised to the zero power is always equal to 1.
- The reasoning involves rules of exponents which you will learn later, so for now just enjoy knowing that this is an easy case!
- A number can be raised to a power to represent repeated multiplication of the same number.
- The number being raised to a power and multiplied by itself is called the base.
- The power is called the exponent.
- Any number raised to the zero power will always equal 1. (A short code sketch of these ideas follows below.)
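If you like to see ideas as code, here is a tiny sketch (my own addition in Python, not part of the slides) showing how repeated multiplication produces exponents, including the zero-power special case:

```python
def power(base, exponent):
    # Evaluate base**exponent by repeated multiplication.
    result = 1                      # the empty product: this is why n**0 == 1
    for _ in range(exponent):
        result *= base
    return result

print(power(5, 3))    # 125, i.e. 5 * 5 * 5
print(power(10, 4))   # 10000 -- the exponent counts the zeros
print(power(7, 0))    # 1 -- any base to the zero power is 1
```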
Unit 1- The Intellectual Virtues
The purpose of this unit is to help you to begin to realize your own understanding of the Intellectual Virtues.
Aloha and welcome!
To create courses infused with critical thinking, we must begin to think about critical thinking itself. To think critically, we will use reasoning: asking questions, gathering information, and making inferences and assumptions to come to conclusions about how to infuse critical thinking into online discussions.
- We must read and watch the content not as a lot of information to remember, but as tools to think about critical thinking. Thinking about critical thinking is about using empathy, attentive awareness and reasoning by asking questions, gathering information, making inferences and assumptions to come to conclusions about how things should or ought to be, which things are good or bad, and which actions are right or wrong.
- We must become clear about the purposes of thinking critically online. We must begin to ask questions, and recognize the questions being asked, about critical thinking to improve how we think about critical thinking.
- We must begin to sift through information and draw conclusions about critical thinking.
- We must begin to question where information about critical thinking comes from.
- We must notice how the different interpretations that are formed about critical thinking give meaning to it.
- We must question those interpretations to understand them better. We must begin to question the implications of various critical thinking interpretations and begin to see how reasoning is used to come to conclusions.
- We must begin to look at the world and develop the viewpoint of how to think critically online.
- We must read and watch the content looking explicitly for the elements of thought.
- We must actively ask questions about critical thinking from a critical-thinking perspective.
- We must begin paying attention to our own thinking about critical thinking in our everyday lives.
- We must make thinking about critical thinking a more explicit and prominent part of our thinking.
First, we will work through the first of three major dimensions of Paulian Critical Thinking – Intellectual Virtues. This is a significant part of the theory that is often neglected in other approaches to critical thinking.
The focus is on the effective, fairminded application of critical thinking. A key development point is to begin acknowledging and overcoming our egocentric nature. People can go through formal education, receive multiple degrees, and be highly successful in their careers, and yet not possess important intellectual virtues such as humility and fair-mindedness.
Unit 1- Practice
1. Watch the following two videos
2. Read the following resources to help guide your thinking
- Miniature Guide to Critical Thinking Concepts and Tools pages 2-7
- The Aspiring Thinker’s Guide to Critical Thinking pages 3-9
3. Complete the following practice: Unit 1-Intellectual Virtues Activity.
Unit 1- Reflection & Badge
Complete the following reflection: Unit 1 Reflection & Badge
The origins of proof
What is proof? Philosophers have argued for centuries about the answer to this question, and how (and if!) things can be proven; no doubt they will continue to do so! Mathematicians, on the other hand, have been using "working definitions" of proof to advance mathematical knowledge for equally long.
Starting in this issue, PASS Maths is pleased to present a series of articles introducing some of the basic ideas behind proof and logical reasoning and showing their importance in mathematics.
In this article, we shall present a brief introduction to deductive reasoning, and take a look at one of the earliest known examples of mathematical proof.
Given a set of facts that are known or assumed to be true, deductive reasoning is a powerful way of extending that set of facts. In deductive reasoning, we argue that if certain premises (P) are known or assumed, a conclusion (C) necessarily follows from these. For example, given the following (rather famous!) premises
P: All men are mortal.
P: Socrates is a man.
then the conclusion
C: Socrates is mortal.
follows via deductive reasoning. In this case, the deductive step is based on the logical principle that if A implies B, and A is true, then B is true, a principle that mediaeval logicians called modus ponens.
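For readers who enjoy checking such things mechanically, here is a small sketch (my own illustration, in Python, not part of the original article) that verifies the validity of modus ponens by enumerating every truth assignment:

```python
from itertools import product

def implies(a, b):
    # Material implication: "A implies B" is false only when A is true and B is false.
    return (not a) or b

# Keep only the assignments where both premises ("A implies B" and "A") hold,
# then check that the conclusion "B" holds in every such case.
cases = [(a, b) for a, b in product([False, True], repeat=2) if implies(a, b) and a]
print(all(b for _, b in cases))  # True: no assignment makes the premises true and B false
```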
Of course, deductive reasoning is not infallible: the premises may not be true, or the line of reasoning itself may be wrong! This is how you can sometimes "prove" something that isn't actually true. There are any number of ways of "proving" that 1 = 2, for example. Here's an old favourite:
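One classic version starts by assuming that $a = b$ and proceeds as follows:

$$
\begin{aligned}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a+b &= b \\
b+b &= b \qquad (\text{since } a = b) \\
2b &= b \\
2 &= 1
\end{aligned}
$$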
Can you spot the flaw in the argument?
Now, if a conclusion doesn't follow from its premises, the argument is said to be invalid and no reliable judgement can be made about whether the conclusion is true, regardless of whether or not the premises are true.
If the argument is valid but the premises are not true, then again the conclusion may or may not be true, but the argument can't help us decide this.
Finally, if the argument is valid and the premises are true, then the argument is described as sound, and we deem the conclusion to be true. From a pragmatic point of view, we can be said to have proved something if we can find a sound argument for it.
Table 1 summarises these different kinds of deductive arguments, and table 2 provides an example of each.
Table 1: Different kinds of deductive arguments.
- Invalid argument (premises true or not): no reliable judgement can be made about the conclusion.
- Valid argument, untrue premises (unsound): the conclusion may or may not be true; the argument can't decide it.
- Valid argument, true premises (sound): the conclusion is deemed true.
Table 2: Some example deductive arguments.
- Invalid, premises false: P: Fish are mammals. P: Fish are warm-blooded. C: Mammals are warm-blooded.
- Valid but unsound: P: Mammals are cold-blooded. P: Humans are mammals. C: Humans are cold-blooded.
- Invalid, premises true: P: Fish are cold-blooded. P: Humans are not fish. C: Humans are not cold-blooded.
- Valid and sound: P: Humans are warm-blooded. P: Fishermen are human. C: Fishermen are warm-blooded.
As the two invalid arguments in table 2 suggest, the conclusion of an invalid argument doesn't necessarily have to be false - it's just unproven by that particular argument!
In the beginning: Euclid's geometry
Euclid was born about 365 BC in Alexandria, Egypt, and died about 300 BC. Little is known of his life other than that he taught mathematics in Alexandria.
Euclid wrote a number of treatises, but the most famous is his Elements, a work on geometry that has been used as a textbook for over 2000 years! Rather than representing Euclid's original work, the Elements are now thought to be a summary of the geometrical knowledge current during his time. However, they represent one of the earliest uses of proof in the history of mathematics.
In his Elements, Euclid begins with a list of twenty-three definitions describing things like points, lines, plane surfaces, circles, obtuse and acute angles and so on (these are reproduced in the appendix below).
Euclid's definitions are neither true nor false: they simply act as a kind of dictionary, explaining what is meant by the various terms he will be using.
He then presents a set of ten assumptions. Five of these are not specific to geometry, and he calls them common notions:
- 1. Things which are equal to the same thing are also equal to one another.
- 2. If equals be added to equals, the wholes are equal.
- 3. If equals be subtracted from equals, the remainders are equal.
- 4. Things which coincide with one another are equal to one another.
- 5. The whole is greater than the part.
The other five assumptions are specifically geometric, and he calls them postulates :
- 1. It is possible to draw a straight line from any point to any point.
- 2. It is possible to produce a finite straight line continuously in a straight line.
- 3. It is possible to describe a circle with any centre and distance.
- 4. All right angles are equal to one another.
- 5. If a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles.
(Note that Playfair's Axiom (originally due to Proclus) that "through a point not on a given line, there passes not more than one parallel to the line" is a rather neater way of expressing postulate 5! In addition, in the nineteenth century, Legendre went on to show that postulate 5 is equivalent to the postulate that "the sum of the angles of a triangle is equal to two right angles").
Together, these common notions and postulates represent the axioms of Euclid's geometry. An axiom is a logical principle which is assumed to be true rather than proven, and which can be used as a premise in a deductive argument.
Euclid's set of axioms, or axiomatic system, represents a collection of "first principles" from which other principles can be produced using deductive reasoning. Of course, any deductive arguments are only sound if Euclid's common notions and postulates really are true!
An example proposition and proof
Euclid goes on in his Elements to present various geometric propositions, and shows them to be true using deductive inference within his axiomatic system.
An example is proposition 6: "If in a triangle two angles be equal to one another, the sides which subtend the equal angles will also be equal to one another."
Euclid's actual proof of this proposition is as follows:
Figure 1: Euclid's proposition 6.
"Let ABC be a triangle having the angle ABC equal to the angle ACB;
I say the side AB is also equal to the side AC. For, if AB is unequal to AC, one of them is greater. Let AB be greater; and from AB the greater, let DB be cut off equal to AC the less; let DC be joined.
Then, since DB is equal to AC, and BC is common, the two sides DB, BC are equal to the two sides AC, CB respectively, and the angle DBC is equal to the angle ACB.
Therefore, the base DC is equal to the base AB, and the triangle DBC will be equal to the triangle ACB, the less to the greater, which is absurd. Therefore AB is not unequal to AC; it is therefore equal to it."
Are Euclid's postulates true?
Both the Greeks of Euclid's time, and later Arabic mathematicians, had an intuition that the fifth postulate could actually be proven using the definitions and common notions and the first four postulates.
Many attempts to prove the fifth postulate in this manner were made, and often a putative proof would be accepted for a long period before being shown to be flawed. Typically, the flawed proofs contained a "circular argument": in one way or another, they assumed that what they were trying to prove (the fifth postulate) was true in order to prove it!
In fact, the fifth postulate is not derivable from the other postulates and notions, and nor is it universally true. Mathematicians continued to be fascinated by the fifth postulate throughout the centuries, but it was not until the nineteenth and twentieth centuries (through the efforts of a number of famous mathematicians including Legendre, Gauss, Bolyai, Lobachevsky, Riemann, Beltrami and Klein), that we came to know about geometries (called non-Euclidian geometries) where the fifth postulate is not true.
The fifth postulate can be shown to be true in a plane (or Euclidian) geometry. However, there are many other geometries where it is not true. Surprisingly enough, this is easy to illustrate! Consider the simple case of a sphere's surface.
It is impossible to draw a true straight line on a sphere without leaving the surface, so in spherical geometry the Euclidean idea of a line becomes a great circle. Thinking of the Earth, any line of longitude is a great circle - as is the equator. In fact, the shortest path between any two points on a sphere is a great circle. (More generally, a minimal path on any surface is known as a geodesic.)
One of the consequences of Euclid's first four postulates is that if two different lines cross, they meet at a single point. This presents a small problem on the sphere, since distinct great circles always cross at two antipodal points! Two lines of longitude always cross at both the North and the South Pole!
But remember, we haven't yet said what the spherical analogue of a Euclidean point is! All we have to do is define a point in spherical geometry to be a pair of antipodal points and the problem promptly disappears.
According to Euclid's definition number 23, "Parallel straight lines are straight lines which, being in the same plane and being produced indefinitely in both directions, do not meet one another in either direction".
Given these definitions, it is easy to see that Euclid's first four postulates still make good sense. The fifth postulate, however, fails because it is impossible to draw two different lines that do not meet. In spherical geometry there are no parallel lines!
One of the consequences of the failure of the 5th postulate is that it is no longer true that the sum of the angles of a triangle is always 180 degrees.
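To see this numerically, here is a short sketch (my own illustration using numpy; angle_at is a helper defined here, not a library function) that measures the angles of the triangle formed by the North Pole and two equator points a quarter-turn of longitude apart:

```python
import numpy as np

def angle_at(a, b, c):
    # Angle of the spherical triangle at vertex a, between the great-circle
    # arcs a-b and a-c, measured in the tangent plane at a.
    def tangent_toward(u):
        t = u - np.dot(u, a) * a          # remove the component along a
        return t / np.linalg.norm(t)
    cos_angle = np.clip(np.dot(tangent_toward(b), tangent_toward(c)), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

north = np.array([0.0, 0.0, 1.0])          # North Pole
eq1   = np.array([1.0, 0.0, 0.0])          # a point on the equator
eq2   = np.array([0.0, 1.0, 0.0])          # 90 degrees of longitude away

corners = [(north, eq1, eq2), (eq1, eq2, north), (eq2, north, eq1)]
print(sum(angle_at(a, b, c) for a, b, c in corners))  # 270.0 degrees, not 180
```

Each of the three angles is a right angle, so the angles of this spherical triangle sum to 270 degrees.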
In fact, there's a famous lateral-thinking puzzle that depends implicitly on this non-Euclidian geometry:
A hunter leaves his house one morning and walks one mile due south. He then walks one mile due west and shoots a bear, before walking a mile due north back to his house. What colour is the bear?
Euclid and deductive reasoning
The story of Euclidian geometry, and the subsequent discovery of non-Euclidian geometries, shows the benefits and disadvantages of using deductive reasoning from axioms as a proof system.
Using his definitions, common notions and postulates as an axiomatic system, Euclid was able to produce deductive proofs of a number of important geometric propositions. His axioms and proofs have been a useful set of tools for many subsequent generations of mathematicians, and show how powerful and beneficial deductive reasoning can be!
However, the long and painful process of discovering non-Euclidian geometries has shown one of the limitations of deductive reasoning in an axiomatic system: that any proof is only as good as the axioms it starts off with! In the Euclidian plane, Euclid's fifth postulate is true, and his valid proofs are sound. In non-Euclidian geometries such as the surface of a sphere, however, the fifth postulate is not true, and Euclid's proofs are therefore unsound.
- There are plenty of good introductory texts about logic and reasoning. "Thinking Things Through" by Clark Glymour (MIT Press, 1997) is a good place to start, and Critical Thinking by Francis W. Dauer (OUP, 1989) is a more comprehensive guide.
- The University of Toronto Mathematics Network has some nicely-annotated Classic Fallacies, including a detailed look at our initial example.
- There is an interactive html version of Euclid's Elements at Clark University.
Appendix - Euclid's Definitions
- 1. A point is that which has no part.
- 2. A line is breadthless length.
- 3. The extremities of a line are points.
- 4. A straight line is a line which lies evenly with the points on itself.
- 5. A surface is that which has length and breadth only.
- 6. The extremities of a surface are lines.
- 7. A plane surface is a surface which lies evenly with the straight lines on itself.
- 8. A plane angle is the inclination to one another of two lines in a plane which meet one another and do not lie in a straight line.
- 9. And when the lines containing the angle are straight, the angle is called rectilineal.
- 10. When a straight line set up on a straight line makes the adjacent angles equal to one another, each of the equal angles is right, and the straight line standing on the other is called a perpendicular to that on which it stands.
- 11. An obtuse angle is an angle greater than a right angle.
- 12. An acute angle is an angle less than a right angle.
- 13. A boundary is that which is an extremity of anything.
- 14. A figure is that which is contained by any boundary or boundaries.
- 15. A circle is a plane figure contained by one line such that all the straight lines falling upon it from one point among those lying within the figure are equal to one another;
- 16. And the point is called the centre of the circle.
- 17. A diameter of the circle is any straight line drawn through the centre and terminated in both directions by the circumference of the circle, and such a straight line also bisects the circle.
- 18. A semicircle is the figure contained by the diameter and the circumference cut off by it. And the centre of the semicircle is the same as that of the circle.
- 19. Rectilineal figures are those which are contained by straight lines, trilateral figures being those contained by three, quadrilateral those contained by four, and multilateral those contained by more than four straight lines.
- 20. Of trilateral figures, an equilateral triangle is that which has its three sides equal, an isosceles triangle that which has two of its sides alone equal, and a scalene triangle that which has its three sides unequal.
- 21. Further, of trilateral figures, a right-angled triangle is that which has a right angle, an obtuse-angled triangle that which has an obtuse angle, and an acute-angled triangle that which has its three angles acute.
- 22. Of quadrilateral figures, a square is that which is both equilateral and right-angled; an oblong that which is right-angled but not equilateral; a rhombus that which is equilateral but not right-angled; and a rhomboid that which has its opposite sides and angles equal to one another but is neither equilateral nor right-angled. And let quadrilaterals other than these be called trapezia.
- 23. Parallel straight lines are straight lines which, being in the same plane and being produced indefinitely in both directions, do not meet one another in either direction.
Levels of Programming Language:
Programs are sets of instructions or commands needed to perform a specific task on a programmable device such as a microprocessor. The programs needed for a programmable device can be developed at three different levels, as follows:
1. Machine-level programming
2. Assembly-level programming
3. High-level programming
Machine Level Programming:
In machine-level programming, instructions are written using binary codes which use only two symbols ‘0’ and ‘1’. The manufacturer of microprocessors will give a set of instructions for each microprocessor in binary codes, i.e., one binary code will represent one operation performed by the microprocessor. The language in which the instructions are represented by binary codes is called machine language. A microprocessor can understand and execute machine language programs directly.
The binary instructions of one microprocessor will not be the same as that of another microprocessor. Therefore, the machine language programs developed for one microprocessor cannot be used for another microprocessor i.e., the machine-level programs are machine-dependent. Moreover, it is highly tedious for a programmer to write programs in machine language.
Assembly Level Programming:
In assembly-level programming, instructions are written using mnemonics. A mnemonic comprises a few letters of the English language which represent the operation performed by the instruction. For example, the mnemonic for the instruction which performs addition operation is ADD. The manufacturer of the microprocessors will provide a set of instructions in the form of a mnemonic for each microprocessor. Also, for each mnemonic, a binary code will be specified by the manufacturer. If the program is developed using binary codes then it is called machine-level programming and if the program is developed using mnemonics then it is called Assembly-level programming.
The language in which the instructions are represented by mnemonics is called assembly language. Microprocessors cannot execute the assembly language programs directly. The assembly language programs have to be converted to machine language for execution. This conversion is performed using a software tool called an assembler.
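As a minimal sketch of what an assembler does, consider the following Python fragment (the mnemonics and opcodes shown are the real Intel 8085 encodings; the assemble helper itself is illustrative only and ignores operands, labels, and addressing modes):

```python
# Map a few 8085 mnemonics to their one-byte machine codes.
OPCODES = {
    "MOV A,B": 0x78,   # copy register B into the accumulator
    "ADD B":   0x80,   # add register B to the accumulator
    "HLT":     0x76,   # halt the processor
}

def assemble(source_lines):
    """Translate mnemonics into the binary codes the processor executes."""
    return bytes(OPCODES[line.strip()] for line in source_lines)

program = ["MOV A,B", "ADD B", "HLT"]
print(assemble(program).hex())  # '788076' -- the machine-language form
```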
High-Level Programming:
In high-level programming, the instructions will be in the form of statements written using symbols, English words, and phrases. Each high-level language will have its own vocabulary of words, symbols, phrases, and sentences. Examples of high-level languages are BASIC, C, C++, etc.
The programs written in high-level languages are easy to understand and machine-independent, so they are known as portable programs. A high-level language program has to be converted into a machine language program in order to be executed by the microprocessor. This conversion is performed by a software tool called a compiler.
y = mx + c in Geometry: An Explanation for Students
If you're taking a geometry class, you've probably seen the equation y = mx + c on a chalkboard or whiteboard at some point. But what does this equation actually mean? In this blog post, we'll break down the meaning of each term in the equation so that you can better understand how to use it.
Y: Y is the y-coordinate of a point on the line; it is the value the equation gives you for any given x.
M: M is the slope of the line. Slope is defined as the ratio of the rise (the vertical distance between two points on a line) to the run (the horizontal distance between two points on a line). You can calculate slope using the following formula:
Slope = (y2 - y1)/(x2 - x1)
C: C is the y-intercept, which is the point where the line intersects with the y-axis.
Now that we know what each term in the equation means, let's take a look at how to use it. Suppose we have a line with the following coordinates: (2,4), (4,8), and (6,12). We can plug these coordinates into the equation as follows:
y = mx + c
4 = m(2) + c
8 = m(4) + c
12 = m(6) + c
From here, we can solve for m and c. First, we'll solve for c. We can do this by plugging in one of our coordinate pairs and solving for c. Let's use (2,4). We know that 4 = m(2) + c, so we can rearrange the equation to solve for c like this:
4 - m(2) = c
c = 4 - 2m
Now we'll plug our value for c back into one of our equations and solve for m. We'll use 8 = m(4) + c. We know that c = 4 - 2m, so we can substitute that into our equation and solve for m like this:
8 = m(4) + (4 - 2m)
8 = m(4) + 4 - 2m
8 = 4m + 4 - 2m
8 = 2m + 4
4 = 2m
m = 2
Now that we know m = 2, we can find c by plugging back in: c = 4 - 2m = 4 - 2(2) = 0. So our original equation comes out in slope-intercept form as y = 2x (you can check that (4,8) and (6,12) both satisfy it). And there you have it! That's all there is to understanding and using the y = mx + c equation in geometry.
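If you would rather let the computer do the algebra, here is a small sketch (my own example, in Python) that recovers m and c from any two points on a line:

```python
def fit_line(p1, p2):
    # Solve y = m*x + c from two points: m is rise over run,
    # then c follows from rearranging y1 = m*x1 + c.
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    c = y1 - m * x1
    return m, c

m, c = fit_line((2, 4), (4, 8))
print(m, c)  # 2.0 0.0 -> the line is y = 2x
```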
What is the y = mx + c equation called?
The y = mx + c equation is also known as the slope-intercept form of a linear equation.
What does mx mean in geometry?
M is the slope of the line and x is the x-coordinate of a point on that line; mx is their product.
What is the slope of y = mx + c?
The slope is m, the coefficient of x. In our worked example above, the slope is 2.
What is slope and intercept in y = mx + c?
The slope is the number that is multiplied by the x-coordinate, and the intercept is the number that is added to the product of the slope and the x-coordinate. In the equation y = 3x + 4, 3 is the slope and 4 is the intercept.
In dipterous insects, halteres (singular: halter or haltere) are minute dumbbell-shaped organs that have been modified from hindwings to provide a means of encoding body rotations during flight. Halteres are rapidly oscillated simultaneously with the wings, allowing them to experience forces resulting from body rotations. If the body of the insect rotates about one of its three axes (yaw, pitch, or roll), the rotation exerts a force on the vibrating halteres – this is known as the Coriolis effect (see below). The insect detects this force with sensory organs called campaniform sensilla and chordotonal organs located at the base of the halteres and uses this information to interpret and correct its position in space. Halteres act as a balance and guidance system by providing rapid feedback to the wing-steering muscles, as well as those responsible for stabilizing the head. This is what allows flies to perform their fast acrobatic maneuvers.
Background
The majority of insects have two pairs of wings. What makes flies unique is that they possess only one set of lift-generating wings and yet are still regarded as some of the most skillful fliers. The order name for flies, "Diptera", literally means "two wings", but there is another order of insects which has evolved flight with only two wings: strepsipterans, more commonly referred to as twisted-wing parasites; they are the only other organisms that possess two wings and two halteres. The difference is that strepsipterans have adapted their forewings into halteres, whereas dipterans have adapted their hindwings into halteres. This kind of structure, which detects rotations and perturbations during flight, has never been described elsewhere in nature.
Halteres are able to sense small deviations in body position using the gyroscopic properties of moving mass. What this means is that halteres beat up and down in time with the flapping of the wings along a linear path, but when the fly's body begins to rotate, the path of the beating halteres also changes: instead of following a linear path, they begin to follow a curved one. The larger the perturbation they experience, the farther the halteres move from their original linear path. During these periods, the haltere is no longer moving in only two directions (up and down), but four (up, down, left, and right). The force exerted on the halteres in response to this left-right movement is known as the Coriolis force and is produced whenever a moving object is rotated about any of the three axes of rotation: yaw, pitch, or roll (see figure). When this occurs, tiny bell-shaped structures at the base of the haltere experience strain as the haltere stalk bends in their direction. The nervous system can then transform the bending of these structures into electrical signals, which the fly interprets as body rotation information. The fly uses this information to make corrections to its position and thereby restabilizes itself during flight. Further details explaining the dynamics and physiology of halteres are described below.
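To put rough numbers on this, here is a short sketch (my own illustration; the knob mass, rotation rate, beat frequency, and stroke amplitude are assumed, order-of-magnitude values, not measurements) of the standard Coriolis force F = −2m(ω × v) on a haltere end-knob:

```python
import numpy as np

m_knob = 1e-8                          # end-knob mass in kg (assumed)
omega  = np.array([0.0, 0.0, 10.0])    # body rotation (yaw) in rad/s (assumed)

# Knob velocity mid-stroke for a sinusoidal beat along the y-axis (assumed values).
beat_hz, stroke_m, t = 150.0, 1e-3, 0.0
v = np.array([0.0, 2 * np.pi * beat_hz * stroke_m * np.cos(2 * np.pi * beat_hz * t), 0.0])

F = -2 * m_knob * np.cross(omega, v)   # Coriolis force on the knob
print(F)  # points along x: orthogonal to the beat direction, the "left-right" deflection
```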
Halteres are typically only associated with flight stabilization, but their ability to detect body rotations can elicit compensatory reactions not only from the wing-steering muscles, but also from the neck muscles, which are responsible for head position and gaze. Halteres may also be useful for other behaviors. Certain species of flies have been observed to oscillate their halteres while walking in addition to oscillating them during flight. In these individuals, halteres could thus be detecting sensory information during walking behavior as well. When the halteres are removed, these insects perform more poorly at certain walking challenges. However, how haltere information is processed and used during walking remains, with few exceptions, unclear. Specific examples of what has been found are described below.
History
Halteres were first documented by William Derham in 1714. He discovered that flies were unable to remain airborne when their halteres were surgically removed, but otherwise behaved normally. This result was initially attributed to the haltere's ability to sense and maintain equilibrium. In 1917 v. Buddenbrock asserted that something else was responsible for the flies' loss of flight ability. He claimed that the halteres should instead be considered "stimulation organs": in other words, that the activity of the halteres energized the wing muscular system, so that they acted as an on/off switch for flight. V. Buddenbrock attempted to show that activation of the halteres would stimulate the central nervous system into a state of activity which allowed the wings to produce flight behavior. It has since been concluded that this is not in fact true, and that the original assertion that halteres act as balance organs is the correct one. V. Buddenbrock was able to show that immediately after haltere removal flies were unable to produce normal wing movements. This was later explained by the fact that allowing flies a few minutes of recovery time post-surgery resulted in total recovery of normal flight muscle control. Further, in an interesting side experiment performed by Pringle (1938), when a thread was attached to the abdomen of haltereless flies, relatively stable flight was again achieved. The thread in these experiments presumably aided in keeping the fly from rotating (similar to the way a heavy basket below a hot air balloon prevents the balloon from tipping), which supported the hypothesis that halteres are responsible for sensing body rotations.
The original balancer theory, which was postulated by Pringle (1948), only accounted for forces produced in two directions. Pringle claimed that yaw was the only direction of rotation that flies used their halteres to detect. Using high speed video analysis, Faust (1952) demonstrated that this was not the case and that halteres are capable of detecting all three directions of rotation. In response to this new discovery, Pringle reexamined his previous assumption and came to the conclusion that flies were capable of detecting all three directions of rotation simply by comparing inputs from the left and right sides of the body. Of course, this is not the actual mechanism by which flies detect rotation. Different fields of sensory organs located in different regions at the base of each haltere detect the different directions of rotation, which also explains why flies with one haltere are still able to fly without issue.
Evolution
It is generally accepted that the halteres evolved from the hindwings of insects. Their movement, structure and development all support this hypothesis. Characterizations of the arrangement of sensory organs known as campaniform sensilla, found at the base of the haltere, show many similarities to those found at the base of the hindwings in other insects. The sensilla are arranged in a way so similar to that of hindwings, that were the halteres to be replaced with wings, the forces produced would still be sufficient to activate the same sensory organs. Genetic studies have also brought to light many similarities between halteres and hindwings. In fact, haltere development has been traced back to a single gene (Ubx), which when deactivated results in the formation of a hindwing instead. Because just a single gene is responsible for this change, it is easy to imagine a small mutation here leading to the formation of the first halteres.
Though no other structure with entirely the same function and morphology as halteres has been observed in nature, they have evolved at least twice in the class Insecta: once in the order Diptera and again in Strepsiptera. There is also another structure in the class Insecta whose primary function is not the same as that of halteres, but which additionally serves a similar balancing function: the antennae of moths and butterflies in the order Lepidoptera.
Strepsipterans, more commonly referred to as twisted wing parasites, are a unique group of insects with major sexual dimorphism. The females spend their entire lives in a grub-like state, parasitizing larger insects. The only time they ever come out of their host insect is to extend their fused heads and thoraces for males to notice. The males are also parasites, but they eventually will leave their host to seek their female counterparts. Because of this, they still retain the ability to fly. Male strepsipterans uniquely possess two hindwings, while their forewings have taken on the club-like form of halteres. Though strepsipterans are very difficult to locate and are additionally rather short-lived, Pix et al. (1993) confirmed that the specialized forewings that male Strepsiptera possess perform the same function as dipteran halteres. Rotational movements of the body combined with the oscillating halteres produce Coriolis forces that can be detected by fields of mechanosensors (campaniform sensilla) located at the base of the halteres. Using functional morphology and behavior studies, Pix et al. showed that these sensors then transmit body position information to the head and abdomen to produce compensatory movements. For simplicity, the remainder of this article will refer only to dipteran halteres.
Certain lepidopterans (moths and butterflies) exhibit small amplitude oscillation of their antennae at constant angles during flight. Antennal movements in lepidopterans were originally hypothesized to aid in wind or gravity perception. A study performed using the hawk moth, Manduca sexta, confirmed that these tiny, antennal oscillations were actually contributing to body rotation sensation.
Sane et al. (2007) determined that antennae were responsible for flight stabilization in hawk moths by removing the long part of the antenna (the flagellum), then reattaching it to determine its influence on flight performance. When the flagella were removed, the moths were no longer able to maintain stable flight. After reattachment of the flagella, flight performance was restored. The source of this difference was determined to be mechanosensory. There are two sets of mechanosensory organs located at the base of the lepidopteran antenna, Böhm's bristles and the Johnston organ. These fields of receptors respond to different directions of antennal movements. Antennae are also capable of sensing odor, humidity, and temperature. Sane et al. (2007) was able to demonstrate that it was the mechanosensors that were responsible for flight stability as opposed to the other sensory organs, because when the flagella were removed and then reattached, all antennal nerves were severed excluding those at the base (Böhm's bristles and the Johnston organ).
Genetics
In segmented organisms there are genes called Hox genes, which determine the development of serial homologs, or repeating structures within an organism (e.g. the jointed appendages of arthropods or the vertebrae in mammals). In insects, the thorax is separated into different segments. One of the things that the Hox gene Ultrabithorax (Ubx) is responsible for is specifying the identity of the third thoracic segment of the body. Proper hindwing development in a number of insect species is dependent on Ubx, including butterflies, beetles, and flies. In fruit flies, Ubx is responsible for the formation of the halteres during metamorphosis. If this gene is experimentally deactivated, the haltere will develop into a fully formed wing. This single homeotic gene change results in a radically different phenotype, but also begins to give us some insight into how the ancestors of flies' hindwings may have originally evolved into halteres.
Though it is clear that Ubx is the primary gene responsible for hindwing formation, Ubx also regulates other genes once expressed. Weatherbee (1998) postulated that differences in Ubx expression patterns or levels may not be responsible for the observed physiological changes. Instead, he suggested that Ubx-regulated target gene sets were the direct source of the observed changes. Several Ubx-regulated target genes have been identified, including two direct targets, spalt and knot, which are expressed in the wing and repressed in the haltere. Other genes which are expressed in wings and repressed in halteres have also been identified, but whether or not they act as direct targets of Ubx regulation is still unknown.
Dynamics
Dipteran insects, along with the majority of other insect orders, use what are known as indirect flight muscles to accomplish flight. Indirect insect flight muscles are composed of two sets of perpendicular muscles (see left figure) that are attached to the thorax (instead of directly to the wing base as is the case for direct flight muscles). When the first set of muscles contracts, they deform the body of the insect and compress its thorax vertically, which lifts the wings. When the first set of muscles relaxes and the second set contracts, the thorax is squeezed in the opposite direction, which extends the body vertically and moves the wings downward. The below figure demonstrates this movement with only the first set of muscles.
The movement of the wings and the halteres is mechanically coupled. Sane et al. (2015) demonstrated that in freshly killed flies, without any neural input, the movement of the wings was still coupled with the movement of the halteres. When forceps were used to manually move a wing up and down, not only did the opposite wing move in synchrony, but the halteres also beat in antiphase with both wings. The source of this coupling, however, was not between the muscles which control the halteres and those that control the wings. Instead, two small ridges of cuticle known as the subepimeral ridges were found to be responsible. These ridges connect the right wing to the right haltere and the left wing to the left haltere.
Each side of the body must be synchronized and the two sides are also coupled. That is, the left and right wings and thus the left and right halteres always beat at the same frequency. However, the amplitude of the wingbeat does not always have to be the same on the left and right side. This is what allows the flies to turn and is accomplished using a gearbox, much like what you would find in an automobile. This gearbox can change the maximum amplitude of the wing movement and determine its speed of motion. The wings of flies even have a clutch structure at their base. The clutch moves between grooves in the gearbox, to engage and disengage the wing muscles and also modulate the wingbeat amplitude. When the amplitude of the left wing is less than the right, the fly will make a left turn. Even though haltere movement is controlled by separate muscles than the wings, because the wings are mechanically coupled with the halteres, changes in wingbeat frequency extend to the haltere-beat frequency as well, but haltere beat amplitude does not change.
Interestingly, though halteres are coupled with the wings and with each other during flight, some flies oscillate their halteres while walking (without oscillating their wings). Because the haltere muscles are tiny in comparison to the flight muscles, flight muscle activity completely overshadows that of the haltere muscles during flight. It is unknown how haltere muscle activity during flight differs from walking. The left and right halteres show much more variable phase relationships while walking compared with flying, which may indicate decoupling of the left and right haltere muscles.
Differences between species
Although halteres are always synchronized with the movements of the wings, the phase at which they oscillate differs between species. Brachyceran flies (short-antennaed) oscillate their halteres almost exactly opposite their wings (180 degrees). More ancient suborders such as the Nematocerans (long-antennaed flies), which include, for example, crane flies and mosquitoes, exhibit a variety of wing-haltere phasings. These observed differences in wing-haltere coordination suggest that differences in sensory neuron output also exist between species. This means that the decoding mechanisms used by the central nervous system to interpret such movements and produce adequate motor output probably also vary depending on phylogeny.
Morphology
The general structure of halteres is well recognized, but much variability exists between species. The more ancient families, such as Tipulidae (crane flies), possess halteres with rather long stalks. This causes the haltere bulbs to be much further from the body and easily visible to the naked eye. More derived families, such as Calliphoridae (blow flies), have developed specialized structures called "calyptrae" or "squama", which are tiny flaps of wing that cover the haltere. Pringle (1948) hypothesized that they prevent wind turbulence from affecting haltere movements, allowing more precise detection of body position, but this was never tested. The stalk of the haltere is also not always straight. Instead, the stalk's shape in more derived families tends to be reflective of the body shape of the individual. This minimizes the amount of air space between the end-knobs and the sides of the abdomen and thorax. In these families, the halteres beat so close to the body that the distance between haltere and thorax is a fraction of the diameter of the haltere bulb. An extreme example of this trait is in the family Syrphidae (hoverflies), where the bulb of the haltere is positioned nearly perpendicular to the stalk.
Flies typically hold their halteres at a 90 degree offset. To visualize this, if you were to imagine a person holding their arms out sideways, this would be a 180 degree offset. If that person then moved their arms backward so that the angle created between their fingertips and spine was 90 degrees, this would be a 90 degree offset. The halteres of flies work the same way. They are positioned behind their bodies, forming a 90 degree angle between the haltere bulbs and the center of their thorax. It is necessary for the halteres to be positioned like this in order to detect the three axes of motion. Those axes are yaw, pitch, and roll, as illustrated in the above figure (directions of rotation). The mechanoreceptors at the base of the halteres are only able to measure force in two directions (horizontal and vertical), so a single haltere is only able to measure rotations along two of the three axes. Because the halteres are set at different angles (90 degree offset), they also beat along two separate horizontal and vertical axes. This gives them the ability to acquire information from two non-parallel planes and allows sensation of rotation in all three directions. However, flies are most sensitive to pitch.
When halteres are experimentally induced to flap, volleys of action potentials within the haltere nerve occur in synchrony with the haltere-beat frequency. When flies are then rotated, these volleys break down, likely in response to different groups of sensilla being activated to inform the fly of its recently changed body position. Haltere afferents have also been shown to terminate in the mesothoracic neuropil where flight muscle neurons are located. Haltere afferent activity responding to rotations and wing steering behavior converge in this processing region.
The haltere nerve
Sensory inputs from five sensory fields located at the base of the haltere all converge onto one nerve, the haltere nerve. How these sensory fields are organized at the level of the central nervous system is currently unknown. It has been determined that those five sensory fields all project to the thorax in a "region-specific" way, and afferents originating from the forewing were also shown to converge in the same regions. Not every specific target for the haltere afferents has been determined, but a few connections between motor neurons known to be involved in wing-steering control and particular haltere sensory fields have been identified, particularly one synapse between the haltere nerve and a wing-steering motor neuron known as mnb1.
Flies use indirect flight muscles to accomplish wing movement, and the beating haltere movements are driven by the same group of muscles (see dynamics section). In addition to the indirect flight muscles which are responsible for the flapping motion, there are also steering muscles which control the rotation/angle of the wings. Because halteres evolved from hindwings, the same complement of steering muscles exists for the other directions of movement as well. Chan et al. (1998) identified 10 direct control muscles in the haltere similar to those found in the forewing. In 1998, Chan and Dickinson proposed that planned haltere movements (without external forces acting on them) were what initiated planned turns. To explain this, imagine a fly that wishes to turn to the right. Unfortunately, as soon as it does, the halteres sense a body rotation and reflexively correct the turn, preventing the fly from changing direction. Chan and Dickinson (1998) suggested that what the fly does to prevent this from occurring is to first move its halteres as if it were being pushed in the opposite direction that it wants to go. The fly has not moved, but the halteres have sensed a perturbation. This would allow the haltere-initiated reflex to occur, correcting the imagined perturbation. The fly would then be able to execute its turn in the desired direction. However, this is not how flies actually turn: Mureli and Fox (2015) showed that flies are still capable of performing planned turns even when their halteres have been removed entirely.
Rotation sensation is accomplished by five distinct sensory fields located at the base of the haltere. These fields, which contain the majority of the campaniform sensilla found on the exoskeleton of blowflies (more than 400 campaniform sensilla per haltere), are activated in response to strain created by movements at the haltere base in different directions (due to Coriolis forces acting on the end knobs). Campaniform sensilla are cap-shaped protrusions located on the surface of the exoskeleton (cuticle) of insects. Attached inside the cap is the tip of a dendritic projection (or sensory nerve fiber). The outer segment of the dendritic projection is attached to the inside surface of the cap. When the haltere is pushed to one side, the cuticle of the insect bends and the surface of the cap is distorted. The inner dendrite is able to detect this distortion and convert it to an electrical signal which is sent to the central nervous system to be interpreted.
Chordotonal organs detect and transmit distortions in their position/shape in the same way that campaniform sensilla do. They differ slightly at their site of detection. Chordotonal organs, unlike campaniform sensilla, exist beneath the cuticle and typically respond to stretch as opposed to distortion or bending. Their sensory nerve endings attach between two internal points and when those points are stretched, the difference in length is what is detected and transformed into electrical signaling. There are far fewer chordotonal organs at the base of the haltere than campaniform sensilla (on the order of hundreds), so it is assumed that they are far less important for detecting and transmitting rotational information from haltere movements.
Role in Visual processing
Insect eyes are unable to move independently of the head. In order for flies to stabilize their visual fields, they must adjust the position of their entire head. Sensory inputs detected by halteres not only determine the position of the body, but also, the position of the head, which can move independently from the body. Halteres are particularly useful for detecting fast perturbations during flight and only respond to angular velocities (speeds of rotation) above a certain threshold. When flies are focused on an object in front of them and their body is rotated, they are able to maintain their head position so that the object remains focused and upright. Hengstenberg (1988) found that in the roll direction of rotation, the flies' ability to maintain their head position in response to body motion was only observed at speeds above 50 degrees per second and their ability peaked at 1500 degrees per second. When halteres were removed at the bulb (to retain intact sensory organs at the base) the fly's ability to perceive roll movements at high angular velocities disappeared.
Halteres and vision both play a role in stabilizing the head. Flies are also able to perform compensatory head movements to stabilize their vision without the use of their halteres. When the visual field is artificially rotated around a fly at slower angular velocities, head stabilization still occurs. Head stabilization outputs due to optical inputs alone are slower to respond, but also last longer than those due to haltere inputs. From this result it can be concluded that although halteres are required for detecting fast rotations, the visual system is adept by itself at sensing and correcting for slower body movements. Thus, the visual and mechanosensory (halteres) systems work together to stabilize the visual field of the animal: first, by quickly responding to fast changes (halteres), and second, by maintaining that response until it is corrected (vision).
Flies rely on both visual information from their compound eyes and mechanical input from their halteres. Sherman and Dickinson (2002) discovered that the responsiveness of the halteres and eyes is tuned to complementary speeds of rotation. Responses to body rotations detected via the visual system are greatest at slow speeds and decrease with increased angular velocity. In contrast, body rotations detected by the halteres elicit the greatest responses at higher angular velocities and degrade as the speed of rotation decreases. The integration of these two separately tuned sensors allows the flies to detect a wide range of angular velocities in all three directions of rotation.
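The division of labor described here can be sketched as a simple sensor-fusion rule (my own illustration, not a model from the literature; the crossover value is arbitrary): weight the haltere estimate more heavily as rotation gets faster, and the visual estimate more heavily as it gets slower.

```python
def fused_estimate(haltere_rate, vision_rate, crossover=100.0):
    # Weight the haltere reading more as the rotation gets faster, and the
    # visual reading more as it gets slower (crossover in deg/s, assumed).
    w = abs(haltere_rate) / (abs(haltere_rate) + crossover)
    return w * haltere_rate + (1 - w) * vision_rate

print(fused_estimate(1200.0, 1000.0))  # fast spin: haltere-dominated estimate
print(fused_estimate(20.0, 25.0))      # slow drift: vision-dominated estimate
```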
Two main aspects of the visual field have been used to study fly vision, figure and background. Figures are the objects that the fly is focused on and background represents everything else. When haltere bulbs are removed from tethered flying flies, they are still able to track moving figures, but they struggle to stabilize moving backgrounds. If a static figure is placed in the field of view of a fly, its ability to stabilize a moving background is restored. This indicates that although halteres are not required for motion vision processing, they do contribute to it in a context-dependent manner, even when the behavior is separated from body rotations. Context determines whether the fly will use its halteres or vision as the primary source of body/head position information.
Non-flying haltere activity
The necessity of halteres in flight has been well documented, yet little is known about their use in other behaviors such as walking. Certain flies in the families Muscidae, Anthomyiidae, Calliphoridae, Sarcophagidae, Tachinidae, and Micropezidae have been documented to oscillate their halteres while walking in addition to during flight. The oscillation of the haltere is similar in amplitude and frequency during walking and flight for these flies, and the halteres always oscillate when walking or when flying. All other families of Diptera never oscillate their halteres while walking, but always do so while flying. Flesh flies are among those that oscillate their halteres while walking, and also perform more poorly at certain walking tasks when their halteres are removed. In contrast, fruit flies, which do not oscillate their halteres when walking, do not exhibit any differences in ability when their halteres are removed. This indicates that haltere inputs are behaviorally relevant to those species which oscillate them while walking and that they aid those individuals in walking behavior.
References
- Dickinson, MH (29 May 1999). "Haltere-mediated equilibrium reflexes of the fruit fly, Drosophila melanogaster". Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences. 354 (1385): 903–16. doi:10.1098/rstb.1999.0442. PMID 10382224.
- Pringle, J. W. S. (2 November 1948). "The Gyroscopic Mechanism of the Halteres of Diptera". Philosophical Transactions of the Royal Society B: Biological Sciences. 233 (602): 347–384. doi:10.1098/rstb.1948.0007.
- Fox, JL; Fairhall, AL; Daniel, TL (23 February 2010). "Encoding properties of haltere neurons enable motion feature detection in a biological gyroscope". Proceedings of the National Academy of Sciences of the United States of America. 107 (8): 3840–45. doi:10.1073/pnas.0912548107. PMID 20133721.
- Hengstenberg, Roland (1988). "Mechanosensory control of compensatory head roll during flight in the blowfly Calliphora erythrocephala Meig.". Journal of Comparative Physiology A. 163 (2): 151–165. doi:10.1007/BF00612425.
- Pix, W; Nalbach, G; Zeil, J (1993). "Strepsipteran Forewings Are Haltere-Like Organs of Equilibrium". Naturwissenschaften (80): 371–374.
- FRAENKEL, G.; PRINGLE, J. W. S. (21 May 1938). "Biological Sciences: Halteres of Flies as Gyroscopic Organs of Equilibrium". Nature. 141 (3577): 919–920. doi:10.1038/141919a0.
- Nalbach, G. (1993). "The halteres of the blowfly Calliphora". Journal of Comparative Physiology A (173): 293–300.
- Hengstenberg, R.; Sandeman, D. C.; Hengstenberg, B. (22 May 1986). "Compensatory Head Roll in the Blowfly Calliphora during Flight". Proceedings of the Royal Society B: Biological Sciences. 227 (1249): 455–482. doi:10.1098/rspb.1986.0034.
- Hall, JM; McLoughlin, DP; Kathman, ND; Yarger, AM; Mureli, S; Fox, JL (3 November 2015). "Kinematic diversity suggests expanded roles for fly halteres". Biology Letters. 11: 20150845.
- Derham, William (1714). Physico-Theology. London.
- v. Buddenbrock, W (1919). "Die vermutliche Lösung der Halterenfrage". Pflug. Arch. ges. Physiol. 175: 125.
- Faust, R. (1952). "Untersuchungen zum halterenproblem". Zool Jahrb Physiol. 63: 352–366.
- Pringle, J. W. S. (1957). Insect flight. London: Cambridge University Press.
- Hersh, Bradley M.; Nelson, Craig E.; Stoll, Samantha J.; Norton, Jason E.; Albert, Thomas J.; Carroll, Sean B. (February 2007). "The UBX-regulated network in the haltere imaginal disc of D. melanogaster". Developmental Biology. 302 (2): 717–727. doi:10.1016/j.ydbio.2006.11.011.
- Sane, SP; Dieudonné, A; Willis, MA; Daniel, TL (9 February 2007). "Antennal mechanosensors mediate flight control in moths.". Science. 315 (5813): 863–6. doi:10.1126/science.1133598. PMID 17290001.
- Proffitt, F. (21 January 2005). "PARASITOLOGY: Twisted Parasites From". Science. 307 (5708): 343. doi:10.1126/science.307.5708.343.
- Niehaus, Monika (1981). "Flight and flight control by the antennae in the Small Tortoiseshell (Aglais urticae L., Lepidoptera)". Journal of Comparative Physiology A. 145 (2): 257–264. doi:10.1007/BF00605038.
- "Serial homology". http://www.britannica.com/. Encyclopædia Britannica, inc. Retrieved 16 November 2015. External link in
- Weatherbee, SD; Halder, G; Kim, J; Hudson, A; Carroll, S (15 May 1998). "Ultrabithorax regulates genes at several levels of the wing-patterning hierarchy to shape the development of the Drosophila haltere". Genes & Development. 12 (10): 1474–82. doi:10.1101/gad.12.10.1474. PMID 9585507.
- Weatherbee, SD; Nijhout, HF; Grunert, LW; Halder, G; Galant, R; Selegue, J; Carroll, S (11 February 1999). "Ultrabithorax function in butterfly wings and the evolution of insect wing patterns.". Current Biology. 9 (3): 109–15. doi:10.1016/s0960-9822(99)80064-5. PMID 10021383.
- Tomoyasu, Y; Wheeler, SR; Denell, RE (10 February 2005). "Ultrabithorax is required for membranous wing identity in the beetle Tribolium castaneum.". Nature. 433 (7026): 643–7. doi:10.1038/nature03272. PMID 15703749.
- Hersh, BM; Carroll, SB (April 2005). "Direct regulation of knot gene expression by Ultrabithorax and the evolution of cis-regulatory elements in Drosophila.". Development (Cambridge, England). 132 (7): 1567–77. doi:10.1242/dev.01737. PMID 15753212.
- Galant, R; Walsh, CM; Carroll, SB (July 2002). "Hox repression of a target gene: extradenticle-independent, additive action through multiple monomer binding sites.". Development (Cambridge, England). 129 (13): 3115–26. PMID 12070087.
- Crickmore, MA; Mann, RS (7 July 2006). "Hox control of organ size by regulation of morphogen production and mobility". Science. 313 (5783): 63–8. doi:10.1126/science.1128650. PMID 16741075.
- Mohit, P; Makhijani, K; Madhavi, MB; Bharathi, V; Lal, A; Sirdesai, G; Reddy, VR; Ramesh, P; Kannan, R; Dhawan, J; Shashidhara, LS (15 March 2006). "Modulation of AP and DV signaling pathways by the homeotic gene Ultrabithorax during haltere development in Drosophila.". Developmental Biology. 291 (2): 356–67. doi:10.1016/j.ydbio.2005.12.022. PMID 16414040.
- Hedenström, Anders (2014-03-25). "How Insect Flight Steering Muscles Work". PLoS Biol. 12 (3): e1001822. doi:10.1371/journal.pbio.1001822. PMC . PMID 24667632.
- Deora, Tanvi; Singh, Amit Kumar; Sane, Sanjay P. (3 February 2015). "Biomechanical basis of wing and haltere coordination in flies". Proceedings of the National Academy of Sciences. 112 (5): 1481–1486. doi:10.1073/pnas.1412279112.
- "Recognising hoverflies". National Biodiversity Data Centre. Biodiversity Ireland. Retrieved 2 December 2015.
- Neal, Jonathan (27 February 2015). "Living With Halteres III". Living with insects blog. The Twenty Ten Theme. Blog at WordPress.com. Retrieved 17 November 2015.
- Chan, WP; Prete, F; Dickinson, MH (10 April 1998). "Visual input to the efferent control system of a fly's "gyroscope".". Science. 280 (5361): 289–92. doi:10.1126/science.280.5361.289. PMID 9535659.
- Fayyazuddin, A; Dickinson, MH (15 August 1996). "Haltere afferents provide direct, electrotonic input to a steering motor neuron in the blowfly, Calliphora.". The Journal of neuroscience: the official journal of the Society for Neuroscience. 16 (16): 5225–32. PMID 8756451.
- Mureli, S.; Fox, J. L. (25 June 2015). "Haltere mechanosensory influence on tethered flight behavior in Drosophila". Journal of Experimental Biology. 218 (16): 2528–2537. doi:10.1242/jeb.121863.
- Gnatzy, Werner; Grunert, Ulrike; Bender, Manfred (March 1987). "Campaniform sensilla of Calliphora vicina (Insecta, Diptera)". Zoomorphology. 106 (5): 312–319. doi:10.1007/BF00312005.
- Keil, TA (15 December 1997). "Functional morphology of insect mechanoreceptors.". Microscopy Research and Technique. 39 (6): 506–31. doi:10.1002/(sici)1097-0029(19971215)39:6<506::aid-jemt5>3.0.co;2-b. PMID 9438251.
- Hengstenberg, Roland (February 1991). "Gaze control in the blowfly Calliphora: a multisensory, two-stage integration process". Seminars in Neuroscience. 3 (1): 19–29. doi:10.1016/1044-5765(91)90063-T.
- Fuller, Sawyer Buckminster; Straw, Andrew D.; Peek, Martin Y.; Murray, Richard M.; Dickinson, Michael H. (1 April 2014). "Flying stabilize their vision-based velocity controller by sensing wind with their antennae". Proceedings of the National Academy of Sciences. 111 (13): E1182–E1191. doi:10.1073/pnas.1323529111.
- Sherman, A; Dickinson, MH (January 2003). "A comparison of visual and haltere-mediated equilibrium reflexes in the fruit fly Drosophila melanogaster.". The Journal of Experimental Biology. 206 (Pt 2): 295–302. doi:10.1242/jeb.00075. PMID 12477899.
- Yarger, A. M., and J. L. Fox. (2016) Dipteran Halteres: Perspectives on Function and Integration for a Unique Sensory Organ. ICB. DOI: 10.1093/icb/icw086
- Pringle, J. W. S. (1948) The Gyroscopic Mechanism of the Halteres of Diptera. Phil. Trans. R. Soc. B. vol. 233 (602) p. 347-384. DOI: 10.1098/rstb.1948.0007
- Fraenkel, G., and J. W. S. Pringle. (1938) Biological sciences: halteres of flies as gyroscopic organs of equilibrium. Nature. vol. 141 p. 919-920. DOI: 10.1038/141919a0
- Dickinson, M. H. (1999). "Haltere–mediated equilibrium reflexes of the fruit fly, Drosophila melanogaster". Phil. Trans. R. Soc. B. 354 (1385): 903–916. doi:10.1098/rstb.1999.0442. PMC . PMID 10382224.
- Frye, M. A. (2009). "Neurobiology: fly gyro-vision". Curr. biol. 19 (24): 1119–1121. doi:10.1016/j.cub.2009.11.009. PMID 20064422.
- Frye, M (2015). "Elementary motion detectors". Curr. biol. 25 (6): 215–217. doi:10.1016/j.cub.2015.01.013. PMID 25784034.
- Graham, T. K.; Krapp, H. G. (2007). "Sensory Systems and Flight Stability: What do Insects Measure and Why?.". Adv. Insect Physiol. 34: 231–316. doi:10.1016/S0065-2806(07)34005-8.
- Methods in insect sensory neuroscience. Christensen, T. A., ed. (2004) CRC Press. p. 115-125. Google books
- Insect Mechanics and Control: Advances in Insect Physiology. Casas, J., Simpson, S. (2007) Academic Press. vol. 34 p. 283-294 Google books
- Dipteran Halteres: Perspectives on Function and Integration for a Unique Sensory Organ at Oxford Journals
- Insect wings might serve gyroscopic function, new research suggests at Science daily
- Staying the course: Fruit flies employ stabilizer reflex to recover from midflight stumbles at Science daily
- Flying by the Seat of Their Halteres at Science
- How flies fly at Wired
- Flies that do calculus with their wings at The New York Times |
Module: Sentential logic
In this tutorial we study how to make use of the full truth-table method to check the validity of a sequent in SL. Consider this valid sequent:
P, (P→Q) ⊧ Q
To prove that it is valid, we draw a table where the top row contains all the different sentence letters in the argument, followed by the premises, and then the conclusion. Then, using the same method as in drawing complex truth-tables, we list all the possible assignments of truth-values to the sentence letters on the left. In our particular example, since there are only two sentence letters, there should be four assignments:

    P  Q
    T  T
    T  F
    F  T
    F  F
The next step is to draw the truth-table for all the premises and also the conclusion:

    P  Q  |  P  (P→Q)  Q
    T  T  |  T    T    T
    T  F  |  T    F    F
    F  T  |  F    T    T
    F  F  |  F    T    F
In the completed truth-table, the first two cells in each row give us the assignment of truth-values, and the next three cells tell us the truth-values of the premises and the conclusion under each of the assignments. If an argument is valid, then every assignment where the premises are all true is also an assignment where the conclusion is true. It so happens that there is only one assignment (the first row) where both premises are true. We can see from the last cell of that row that the conclusion is also true under this assignment. So the argument has been shown to be valid.
In general, to determine validity, go through every row of the truth-table to find a row where ALL the premises are true AND the conclusion is false. Can you find such a row? If not, the argument is valid. If there are one or more such rows, then the argument is not valid.
Note that in the table above the conclusion is false in the second and the fourth rows. Why don't they show that the argument is invalid?
Remember that (P→Q), ~P, therefore ~Q is invalid. Look at the truth-table and determine which row is supposed to show that.
To show that a sequent is invalid, we find one or more assignments where all the premises are true and the conclusion is false. Such an assignment is known as an invalidating assignment (a counterexample) for the sequent.
Let's look at a slightly more complex sequent and draw the truth-table:
(~P∨Q), ~(Q→P) ⊧ (Q↔~P)
Again we draw a truth-table for the premises and the conclusion:

    P  Q  |  (~P∨Q)  ~(Q→P)  (Q↔~P)
    T  T  |    T       F       F
    T  F  |    F       F       T
    F  T  |    T       T       T
    F  F  |    T       F       F
To help us calculate the truth-values of the WFFs under each assignment, we use the full truth-table method to write down the truth-values of the sentence letters first, and then work out the truth-values of the whole WFFs step by step. The truth-value of each complete WFF under an assignment is written beneath the main operator of the WFF. As you can see, the critical one to check is the third assignment. Since there is no assignment where the premises are all true and the conclusion is false, the sequent is valid.
Use the full truth-table method to determine the validity of these sequents:
See this page for the answers.
Confirm for yourself that the WFFs in each pair of WFFs below are logically equivalent to each other:
One thing you might notice about the full truth-table method is that it can help us determine the validity of any sequent in SL. A program can be written which, given any finite sequent in SL as input, produces after a finite number of processing steps an output "Yes" if the sequent is valid, or an output "No" if it is not. Of course, the computer would need to have a lot of memory if the sequent is a long one, but in principle it can be done. This is roughly what logicians mean when they say that validity in SL is decidable: there is an algorithm or computer program that can determine, for any given sequent, either its validity or its invalidity.
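To make the decidability claim concrete, here is a minimal sketch of such a program in Python. It is not part of the original tutorial; the encoding of WFFs as functions from assignments to truth-values is simply one convenient illustrative choice.

    from itertools import product

    def is_valid(letters, premises, conclusion):
        # Enumerate all 2^n assignments of truth-values to the sentence letters.
        for values in product([True, False], repeat=len(letters)):
            v = dict(zip(letters, values))
            # An invalidating assignment: all premises true, conclusion false.
            if all(p(v) for p in premises) and not conclusion(v):
                return False
        return True

    # P, (P→Q) ⊧ Q -- the material conditional P→Q is equivalent to (not P) or Q.
    print(is_valid(["P", "Q"],
                   [lambda v: v["P"], lambda v: (not v["P"]) or v["Q"]],
                   lambda v: v["Q"]))          # True: the sequent is valid

    # (P→Q), ~P ⊧ ~Q -- the invalid sequent discussed above.
    print(is_valid(["P", "Q"],
                   [lambda v: (not v["P"]) or v["Q"], lambda v: not v["P"]],
                   lambda v: not v["Q"]))      # False: P false, Q true invalidates it

Running it prints True for the valid sequent and False for the invalid one, exactly mirroring the row-by-row search described above.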
What is interesting and perhaps surprising is that decidability no longer obtains when we are dealing with some more powerful systems of logic. One of the most important discoveries in modern logic is that mathematics is undecidable. In particular, Gödel's first incompleteness theorem says that any consistent formal system rich enough to express basic arithmetic must contain statements that can neither be proved nor disproved within the system. It follows that there cannot be a computer program that correctly decides, for every such statement, whether it is provable.
From the late 17th century, Indian cotton textile imports – such as white calicoes, muslins, printed and striped cotton goods – flowed into Europe in ever larger volumes.
Local manufacturers, seeing the lightness and superior nature of cotton, started to create new cotton textile industries.
But they faced a serious problem: they could not compete with the Indian imports in terms of price or even quality (Parthasarathi 2011: 89).
The centre of the world’s cotton textile production was in India in the 18th century; by the mid-19th century, it had shifted to Europe (Parthasarathi 2011: 89).
How did this happen?
Parthasarathi (2011) examines this question, and the answer he provides (which as we will see below is incomplete) is as follows:
(1) technical knowledge on how to dye and print on cotton was obtained from the Middle East and India by Europeans: that is to say, Europe imitated and borrowed the technical knowledge (Parthasarathi 2011: 90–93).
(2) however, despite the technical knowledge of (1), domestically-made cotton textiles in Europe still could not compete in price or quality with Indian goods, either at home or in export markets (Parthasarathi 2011: 89, 96).
(3) if we take the case of Britain, whose cotton textile industry became dominant by the mid-19th century, we find that Britain industrialised in this sector by imposing massive protectionism and tariff barriers to Indian cotton goods, as follows:
1685 – 10% import tariff on Indian goods;
(Some of these early acts of protection were imposed to protect the woollen, silk and linen textile producers of Britain, but their consequence was also to protect the cotton manufacturers, who in the 18th century mainly concentrated on the production of a new hybrid fustian cloth, a mixture of cotton and linen: Parthasarathi 2011: 93.)
1690 – tariff doubled to 20%;
1701 – First Calico Act, legislation banning imports of dyed, painted or printed fabric;
1707 – British textiles manufacturers obtained further tariffs on Indian textiles;
1721 – Second Calico Act, which further banned imports of Indian textiles.
(4) With a protected home market, British manufacturers were able to develop and apply the following technologies to production:
(1) Hargreaves’s spinning jenny (invented c. 1764; patented 1770), which was made obsolete by mules by 1800;
(Hargreaves’s spinning jenny greatly increased the quality of cotton goods and lowered costs, but it was Arkwright’s water frame that allowed the production of higher-quality all-cotton cloth: Parthasarathi 2011: 98.)
(2) Arkwright’s spinning frame, which was later developed into the water frame (patented 1769);
(3) Crompton’s mule (1779).
Crompton’s mule allowed the spinning of all-cotton muslins as fine as those of India (Parthasarathi 2011: 98).
The idea that shortages in yarn were the main driver of these innovations is not supported by the evidence (Parthasarathi 2011: 98), which rather shows that the desire to match the quality of Indian cloths was the major factor (Parthasarathi 2011: 109).
(5) However, despite the 18th century technological developments by Kay, Hargreaves, and Arkwright, British cotton textiles could still not compete in price with Indian goods. In the 1780s, there was vehement demand for protection by cotton manufacturers (Parthasarathi 2011: 112), which the government readily supported.
As far as I can see, all these points are true, but Parthasarathi seems to have missed further very important points about British protectionism.
As Parthasarathi notes, even with the invention and gradual use of Crompton’s mule in the 1780s, British textiles still could not compete with Indian calicoes (Alavi 1982: 56).
The British producers were protected with more tariffs, and by 1813 the import duty on Indian cotton goods stood at 85% (Alavi 1982: 56). As Alavi argues:
“It was the wall of protection that made possible the survival and growth of the British cotton textile industry in the face of Indian competition and facilitated large capital investments in the industry. Without it, the English industry would have found it impossible to get a foothold in the home market, let alone abroad.” (Alavi 1982: 56).
From 1797 to 1819 British cotton textile manufacturers were still unable to compete. In 1815, the value of all Indian cotton goods coming into England was 1.3 million pounds (from 1741 to 1750, it had stood at 1.2 million pounds annually, at a time when domestic cotton textile competition was still largely non-existent). British producers asked for and obtained tariff increases on Indian cottons on seven separate occasions between 1797 and 1819.
Even with the technological innovations, by the beginning of the 19th century, Indian silk and cotton goods
“… could be sold in the British market at a price between 50% and 60% lower than those fabricated in England. It consequently became necessary to protect the latter by duties of 70% to 80% on their value.” (Das 1946: 313, quoting Mukerjee 1967).
It was only the application of steam power in the period between 1815 to 1830 that allowed English textile goods to be competitive globally (Marks 2002: 100). The power loom, for instance, was initially limited by relying on water power, but by the beginning of the 19th century was able to use steam power (Moe 2007: 34).
The cost of British-made cotton cloth fell by 85%, but only from 1780 to 1850, and it was only in 1835 that steam power fuelled 75% of the British cotton industry (Moe 2007: 35).
British textile goods probably became internationally competitive by the mid-1820s (when tariffs were still in place). It was the British protectionism lasting until the 1820s that allowed British goods to become competitive.
It is estimated that by 1820, about 53% of Britain’s exports were cotton textile goods (Bairoch 1993: 85). These exports displaced India’s textile exports in world markets. Thus Britain itself had an “export-led” model of economic growth even in the early stages of the industrial revolution, by taking away the market share of India through technological innovation allowed by protectionism and tariffs.
Yet, according to classical free trade theory, India had the comparative advantage in production of cotton textiles even around 1810 when the British textile industry was developing. If real free trade had been implemented, the protective tariff would have been abolished and the market for British-made textiles at home would have collapsed.
Yet nobody can seriously deny that having a large productive textile industry was the foundation of Britain’s industrial revolution and in the long run good for the economy.
After the successful decades of tariff protection and shelter from competition, British goods succeeded in global markets at the expense of India’s exports. Bengal and the textile manufacturers were ruined and the resultant de-industrialization impoverished the previously prosperous towns.
Contemporary 19th-century British advocates of free trade actually noticed this state of affairs and criticised it. Robert Montgomery Martin (1801–1868) was a historian of Irish descent who wrote about twenty-six books on history and the British empire (including a History of the British Colonies). In 1844 he was Treasurer of Hong Kong. He appears to have been a free trader, though admittedly a member of the unconventional, proto-Keynesian “Birmingham School” of economists.
Robert Montgomery Martin was called upon to give evidence in 1840 during a British parliamentary inquiry about India:
“[Before a British Parliamentary Committee in 1840] Montgomery Martin stated that he . . . was convinced that an outrage had been committed ‘by reason of the outcry for free trade on the part of England without permitting India a free trade herself.’ After supplying statistical data of Indian textile exports to Great Britain, he pointed out that between 1815–1832 prohibitive duties ranging from 10 to 20, 30, 50, 100 and 1,000 per cent were levied on articles from India. ... ‘Had this not been the case,’ wrote Horace Wilson in his 1826 History of British India, ‘the mills of Paisley and Manchester would have been stopped in their outset, and could scarcely have been again set in motion, even by the power of steam. They were created by the sacrifice of Indian manufacture. Had India been independent, she could have retaliated, would have imposed prohibitive duties on British goods and thus have preserved her own productive industry from annihilation. This act of self-defence was not permitted her.’” (Clairmonte 1960: 86-87).
Thus some British apostles of free trade noticed this double standard. They were appalled at the hypocrisy of British protectionism and the destruction of India’s prosperous cities built on textile exports.
But they of course failed to notice that the protectionism had been a major cause of Britain’s industrial revolution and that, without it, the UK would have been much poorer. In other words, the success of the cotton textile industry in the early industrial revolution in Britain was an example of infant industry protectionism, or modern import substitution industrialization (ISI).
The great Ricardian lie spun by modern free-trading cultists that free trade was always good for Britain and that British industrialisation was the result of free trade by comparative advantage stands exposed by historical reality.
Alavi, H. 1982. “India: The Transition to Colonial Capitalism,” in H. Alavi et al. (eds), Capitalism and Colonial Production, Croom Helm, London.
Bairoch, Paul. 1993. Economics and World History: Myths and Paradoxes, Harvester Wheatsheaf, New York and London.
Clairmonte, F. 1960. Economic Liberalism and Underdevelopment: Studies in the Disintegration of an Idea, Asia Publishing House, New York.
Das, T. 1946. Review of The Economic History of India: 1600–1800, American Historical Review 51.2 (January): 312–314.
Marks, R. 2002. The Origins of the Modern World: A Global and Ecological Narrative, Rowman & Littlefield, Lanham, MD.
Moe, E. 2007. Governance, Growth and Global Leadership: The Role of the State in Technological Progress, 1750–2000, Ashgate Publishing, Aldershot, UK.
Mukerjee, R. 1967. The Economic History of India: 1600–1800, Kitab Mahal, Allahabad.
Parthasarathi, Prasannan. 2011. Why Europe Grew Rich and Asia Did Not: Global Economic Divergence, 1600–1850, Cambridge University Press, Cambridge.
Wright, John. 1785. An Address to the Members of Both Houses of Parliament on the Late Tax laid on Fustian and Other Goods, W. Eyres, Warrington, UK.
Defining the Institutions. The institutions are the basic building blocks of a government: the executive, the legislature, and the judiciary. Each institution has a unique role in society and contributes to the governance of a country.
The executive may be defined as that branch of the state which formulates policy and is responsible for its execution. In formal terms, the sovereign is the head of the executive. The Prime Minister, Cabinet, and other ministers, for the most part, are elected Members of Parliament. In addition, the Civil Service, local authorities, police, and armed forces constitute the executive in practical terms.
The Queen in Parliament is the sovereign law-making body within the United Kingdom. Formally expressed, parliament comprises the Queen, House of Lords, and House of Commons. All Bills must be passed by each House and receive royal assent.
Parliament is bicameral, that is to say, there are two chambers, each exercising a legislative role although not having equal powers and each playing a part in ensuring the accountability of the government. By way of introduction, it should be noted that membership of the House of Lords is not secured by election and is accordingly not accountable in any direct sense to the electorate.
The House of Commons is directly elected, and a parliamentary term is limited under the Parliament Act 1911 to a maximum of five years. In practice, the average life of a parliament is between three and four years. The House is made up of the majority party (the political party which secures the highest number of seats at the election), which will form the government.
The head of that party will be invited by the Queen to take office as Prime Minister. In turn, it is for the Prime Minister to select his or her Cabinet. The opposition parties comprise the remainder of the now 659 Members of Parliament.
The official Opposition is the second largest party in terms of elected members. In principle, the role of the official Opposition is to act as a government in waiting, ready at any time to take office should the government seek a dissolution of parliament.
The judiciary is that branch of the state which adjudicates upon conflicts between state institutions, between state and individual, and between individuals. The judiciary is independent of both parliament and the executive. It is the feature of judicial independence which is of prime importance, both in relation to government according to law and in the protection of the liberty of the citizen against the executive. As Blackstone observed in his Commentaries:
… in this distinct and separate existence of the judicial power in a particular body of men, nominated indeed, but not removable at pleasure by the Crown, consists one main preservative of the public liberty which cannot subsist long in any state unless the administration of common justice is in some degree separated both from the legislative and from the executive power.
It is apparent, however, that, whilst a high degree of judicial independence is secured under the constitution, there are several aspects of the judicial function which reveal an overlap between the judiciary, parliament, and the executive.
The Lord Chief Justice, Master of the Rolls, President of the Family Division, Vice Chancellor, Lords of Appeal in Ordinary, and Lord Justices of Appeal are appointed by the Queen. For appointments to the High Court, the candidate must be a barrister of ten years standing, a solicitor with rights of audience in the High Court, or a circuit judge of two years standing. For appointment to the Court of Appeal, the candidate must either be a barrister of ten years standing, a solicitor with rights of audience in the High Court, or a current member of the High Court Bench.
The Lord Chief Justice assumes the Lord Chancellor’s former functions as head of the Judiciary assuming the additional title of President of the Courts of England and Wales and Head of the Judiciary of England and Wales. The Lord Chief Justice of England and Wales, the Lord Chief Justice of Northern Ireland, and the Lord President of the Court of Session in Scotland may make written representations to Parliament on matters relating to the judiciary or the administration of justice. Where the functions of the Lord Chancellor have been modified or transferred to the Lord Chief Justice, those functions will generally be exercised either with the concurrence of or after consultation with the Lord Chancellor.
The socio-economic and educational background of the judiciary has been subjected to much research. In brief, the picture presented is one of a middle and upper-class, middle-aged, White, predominantly male, judiciary dominated by public schools and Oxford or Cambridge University education. The process of selection has traditionally been shrouded in secrecy, with records of eligible candidates, who in practice will be successful practitioners, being maintained by the Lord Chancellor’s Department.
The criteria for selection are ability, experience, standing, integrity, and physical health. It had long been argued that the appointment of judges should be made by an independent Judicial Appointments Commission rather than on the recommendation of the Lord Chancellor alone. The Constitutional Reform Act 2005 has finally brought about reform. The 2005 Act establishes a Judicial Appointments Commission which has responsibility for the recruitment and selection of judges for the courts in England and Wales.
The Act of Settlement 1700 secured a senior judge’s tenure of office during good behaviour. More modern expression is given to this protection under the Supreme Court Act 1981, which provides that a person appointed shall hold office during good behaviour, removable only by Her Majesty on an Address presented to her by both Houses of Parliament. Senior judges cannot be dismissed for political reasons. They can, however, be removed by compulsory retirement if they are incapacitated and unable to resign through incapacity.
Judges can be dismissed for misbehaviour under an Address to the Crown made by the two Houses of Parliament. ‘Misbehaviour’ relates to the performance of a judge’s official duties or the commission of a criminal offence.
Not every judge convicted of an offense will be dismissed: six judges have been convicted for driving with an excess of alcohol in their blood but have continued in office. In 1830, Sir Jonah Barrington was removed from office in Ireland under the Address procedure for the embezzlement of monies paid into court.
Theoretically, a judge can also be removed by ‘impeachment’ for ‘high crimes and misdemeanours’, although this procedure has not been used since 1805 and is thought to be obsolete. In Scotland, judges can only be removed on the grounds of misconduct.
The Constitutional Reform Act 2005 established the Office for Judicial Complaints. The Lord Chief Justice and the Lord Chancellor can refer a matter to the Office for investigation and report. Any decision relating to further action lies with the Lord Chief Justice.
The Judicial Pensions and Retirement Act 1993 introduced the retirement age of 70, which may be extended to 75 if in the public interest. From 1959, the retirement ages were set at 75 for a High Court judge and 72 for a circuit judge, although judges appointed before this date were permitted to remain in office.
In order further to protect the judiciary from political debate, judicial salaries are charged to the Consolidated Fund. Judicial salaries are relatively high, on the basis that it is in the national interest to ensure an adequate supply of candidates of sufficient caliber for appointment to judicial office.
Holders of full-time judicial appointments are barred from legal practice, and may not hold paid appointments as directors or undertake any professional or business work. Judges are also disqualified from membership in the House of Commons. Membership of the House of Commons does not, however, disqualify that person from appointment to the Bench.
Immunity From Suit:-
All judges have immunity from legal action in the performance of their judicial functions. Provided that a judge acts within his jurisdiction, or honestly believes that he is acting within his jurisdiction, no action for damages will lie. A judge is immune from the law of defamation and, even if ‘actuated by envy, hatred and malice and all uncharitableness’, he is protected.
In Sirros v Moore (1975), Lord Denning MR and Ormrod LJ ruled that every judge, irrespective of rank and including the lay magistracy, is protected from liability in respect of his judicial function, provided that he honestly believed that the action taken was within his jurisdiction. The Crown Proceedings Act 1947 also provides protection for the Crown from liability for the conduct of any person discharging ‘responsibilities of a judicial nature vested in him’ or executing the judicial process.
Bias or Personal Interest:-
A judge is under a duty not to adjudicate on cases in which he has either an interest — whether personal or financial — or where he may be influenced by bias. A fundamental doctrine of natural justice is that ‘no man should be a judge in his own cause’: nemo judex in sua causa.
In Dr. Bonham’s Case (1609) Lord Coke held that members of a board that determined physicians’ fines could not both impose and receive the fines, thus giving early judicial expression to the requirement of freedom from bias. Rather more recently, in Dimes v Grand Junction Canal Proprietors (1852), the propriety of Lord Cottenham LC adjudicating was challenged on the basis that the Lord Chancellor held shares in the canal company involved in the litigation. The House of Lords set aside the decision of the court despite the fact that:
No one can suppose that Lord Cottenham could be in the remotest degree influenced by the interest . . . It is of the last importance that the maxim that no man is to be a judge in his own cause should be held sacred.
Thus, the mere existence of a financial interest, even where it does not in fact result in actual bias but merely presents the appearance of bias, will be sufficient to disqualify a judge from adjudication. The same position prevails in the United States of America, where the issue of the financial interests of federal judges is expressly covered by law. The Ethics in Government Act 1978 requires that Supreme Court and federal judges make a public declaration of ‘income, gifts, shares, liabilities, and transactions in securities and real estate’, a protection which is conspicuously absent in the United Kingdom.
A financial interest in a case that does not go beyond the financial interest of any other citizen does not disqualify judges from sitting. Thus, in Bromley London Borough Council v Greater London Council (1983), for example, the fact that all the judges in the Court of Appeal were taxpayers and users of public transport in London did not disqualify them from hearing the case.
Judges, like everyone else, may be biased by virtue of race, sex, politics, background, association, and opinions. When adjudicating they must, however, be demonstrably impartial. This impartiality involves:
. .. the judge listening to each side with equal attention, and coming to a decision on the argument, irrespective of his personal view about the litigants . . .
Whatever his personal beliefs, the judge should seek to give effect to the common values of the community, rather than any sectional system of values to which he may adhere.
Where a judge himself feels that he has a bias against one of the parties to litigation he may disqualify himself from sitting on the case, as did Lord Denning MR in Ex parte Church of Scientology of California (1978). There, counsel for the Church requested that he disqualify himself as a result of eight previous cases involving the Church on which he had adjudicated, and in which, in the eyes of the Church, he displayed bias against them.
1998 witnessed the start of a high-profile case in which the doctrine of judicial impartiality was re-stated by the House of Lords. Earlier in the year, the former President of Chile, Senator Pinochet, had arrived in Britain on a private visit for medical tests.
The Spanish government sought the arrest and extradition of Pinochet on charges involving the murder, torture, and the hostage-taking of Spanish citizens in Chile between 1973 and 1979. Two provisional warrants for Pinochet’s arrest had been granted following the Spanish proceedings.
Pinochet then sought judicial review of the decision to grant an arrest warrant and an order of certiorari to quash the decision. On appeal to the House of Lords, the court ruled by a majority of three to two judges that former Heads of State enjoyed immunity from arrest and extradition proceedings in the United Kingdom only in respect of official acts performed in the exercise of their functions as a Head of State.
Torture and hostage-taking could not be regarded as part of Pinochet’s official functions and therefore were excluded from immunity. In the course of the hearing before the House of Lords, several organizations including Amnesty International had been granted leave to intervene and submit evidence to the court.
Following the decision, Senator Pinochet’s lawyers complained to the Home Secretary that one of the judges, Lord Hoffmann, was a director of the Amnesty International Charitable Trust and, as a result, was disqualified from sitting, on the basis that his participation raised the question of bias: of a judge ‘sitting on his own cause’. Senator Pinochet accordingly applied for the decision to be set aside. Lord Hoffmann had been one of the three majority judges.
In an unprecedented move, Lord Browne-Wilkinson convened a differently constituted panel of judges to reconsider the case, reiterating the principle that it was of fundamental importance that justice should not only be done but should manifestly and undoubtedly be seen to be done. The mere fact of Lord Hoffmann’s interest was sufficient to disqualify him unless he had made sufficient disclosure.
The Pinochet case spawned further challenges against judges, alleging bias of one form or another. It also led to calls for a register of judges’ interests, in which any interests which might raise the question of bias could be recorded and made public, a proposal rejected by Lord Browne-Wilkinson as unworkable: whereas it is generally clear when a Member of Parliament’s interests conflict with his professional duties, as he put it, a judge (unlike a Member of Parliament) ‘may be dealing with anything, any local club or society . . . there’s no end to it’.
An area of mathematics concerned with geometric figures on a sphere, in the same way as planimetry is concerned with geometric figures in a plane.
Every plane that intersects a sphere gives a certain circle as section; if the intersecting plane passes through the centre $O$ of the sphere, then a so-called great circle is obtained as the intersection. A unique great circle can be drawn through any two points $A$ and $B$ on the sphere (Fig. a), except when they are diametrically opposite.
The great circles of a sphere are its geodesics (cf. Geodesic line), and for this reason their role in spherical geometry is the same as the role of straight lines in planimetry. However, whereas any segment of a straight line is the shortest curve between its ends, an arc of a great circle on a sphere is only the shortest curve when it is shorter than the complementary arc. Spherical geometry differs from planimetry in many other senses; for example, there are no parallel geodesic lines: two great circles always intersect, and, moreover, they intersect in two points.
The length of a segment on a sphere, i.e. the length of the arc $AB$ (Fig. a) of a great circle, is measured by its corresponding central angle $\angle AOB$. The angle (Fig. b) formed on the sphere by the arcs of two great circles is measured by the angle between the tangents to the corresponding arcs at the point of intersection, or by the dihedral angle formed by the planes of the two great circles.
When two great circles intersect on a sphere, four spherical digons, or lunes, are formed (Fig. c). A lune is defined by specifying its angle. The area of a lune is determined by the formula $S = 2R^2\alpha$, where $R$ is the radius of the sphere and $\alpha$ is the angle of the lune expressed in radians.
Three great circles that do not intersect in one pair of diametrically-opposite points form eight spherical triangles on the sphere (Fig. d); if the elements (angles and sides) of one of these is known, it is easy to determine the elements of all the others. It is therefore usual to consider only triangles whose sides and angles are less than $\pi$ (such triangles are called Euler triangles). The sides of a spherical triangle are measured by the planar angles of the corresponding trihedral angle at the centre $O$ (Fig. e); the angles of the triangle are measured by the dihedral angles of that same trihedral angle. The properties of spherical triangles vary greatly from the properties of triangles on a plane (rectilinear triangles). Thus, a fourth case of equality for triangles on a sphere can be added to the three already known for rectilinear triangles: two triangles are equal if their corresponding angles are equal (on a sphere, similar triangles do not exist).
Triangles that can be matched up by a movement around the sphere are said to be directly congruent. Such triangles have equal elements and the same orientation. Triangles that have equal elements and a different orientation are called oppositely symmetric; the triangles $ABC$ and $A'B'C'$ in Fig. f form an example.
In every spherical (Euler) triangle, each side is less than the sum of, and more than the difference between, the other two; the sum of all the sides is always less than $2\pi$. The sum of the angles of a spherical triangle is always less than $3\pi$ and more than $\pi$. The difference $\epsilon = \alpha + \beta + \gamma - \pi$, where $\alpha + \beta + \gamma$ is the sum of the angles of the spherical triangle, is called the spherical excess. The area of a spherical triangle is defined by the formula $S = R^2\epsilon$, where $R$ is the radius of the sphere. For the relationship between the angles and sides of a spherical triangle, see Spherical trigonometry.
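As a small numerical illustration of the area formula $S = R^2\epsilon$ (a sketch added here for concreteness, not part of the original article), consider the following Python function:

    import math

    def spherical_triangle_area(alpha, beta, gamma, R=1.0):
        # Spherical excess: the angle sum minus pi must lie strictly
        # between 0 and 2*pi for an Euler triangle.
        excess = alpha + beta + gamma - math.pi
        if not 0 < excess < 2 * math.pi:
            raise ValueError("angle sum must lie strictly between pi and 3*pi")
        return R ** 2 * excess

    # A triangle with three right angles covers one eighth of the sphere:
    print(spherical_triangle_area(math.pi / 2, math.pi / 2, math.pi / 2))  # pi/2

For the unit sphere this returns $\pi/2$, one eighth of the total area $4\pi$, as expected.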
The position of each point on a sphere is completely defined by the specification of two numbers; these two numbers (coordinates) can be defined in the following way (Fig. g). A great circle (the equator) is fixed, along with one of the two points, say $P$ (the pole), in which the diameter of the sphere perpendicular to the plane of the equator meets the surface of the sphere, as well as one of the great semi-circles that emanate from the pole (the zero meridian). The great semi-circles of the sphere that emanate from $P$ are called meridians, while its small circles, which are parallel to the equator, are called parallels. One of the coordinates of a point $M$ on the sphere is the angle $\theta = \angle POM$ (the polar distance), while the other is the angle $\phi$ between the zero meridian and the meridian which passes through $M$ (the longitude), which is counted anti-clockwise.
The length of an arc (Fig. h) of a curve $\phi = f(\theta)$, $\theta_1 \le \theta \le \theta_2$, is calculated according to the formula

$$L = R \int_{\theta_1}^{\theta_2} \sqrt{1 + \sin^2\theta \, [f'(\theta)]^2} \, d\theta.$$
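The formula can be checked numerically; the following sketch (our addition, using an arbitrary midpoint-rule discretization) integrates it for a curve given by $\phi = f(\theta)$ on the unit sphere:

    import math

    def arc_length(f_prime, theta1, theta2, R=1.0, n=10000):
        # Midpoint-rule approximation of
        # L = R * integral of sqrt(1 + sin(theta)^2 * f'(theta)^2) d(theta).
        h = (theta2 - theta1) / n
        total = 0.0
        for i in range(n):
            theta = theta1 + (i + 0.5) * h
            total += math.sqrt(1.0 + (math.sin(theta) * f_prime(theta)) ** 2)
        return R * h * total

    # Along a meridian (phi constant, so f' = 0) from the pole to the equator:
    print(arc_length(lambda t: 0.0, 0.0, math.pi / 2))  # pi/2, a quarter circle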
Let $S^2$ be the unit sphere in $\mathbb{R}^3$. The points of $S^2$, i.e. unit length vectors, can be identified with half-lines emanating from the origin in $\mathbb{R}^3$. A notion of distance, a metric, on $S^2$ is defined by $d(x, y) = \arccos\langle x, y\rangle$, where $\langle x, y\rangle$ is the inner product of the unit length vectors $x, y$. Let $\gamma_1$ and $\gamma_2$ be two arcs of great circles in $S^2$ intersecting in a point $x$. Let $t_1$ be the unit length tangent vector to $\gamma_1$ at $x$ and let $t_2$ be analogously defined. Then the angle between $\gamma_1$ and $\gamma_2$ at $x$ is $\arccos\langle t_1, t_2\rangle$, which is also the angle between the planes through the origin cutting out $\gamma_1$ and $\gamma_2$.
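A direct transcription of this metric into Python (an illustrative sketch; the clamping guards against floating-point round-off) might read:

    import math

    def sphere_metric(x, y):
        # d(x, y) = arccos <x, y> for unit length vectors x and y.
        dot = sum(a * b for a, b in zip(x, y))
        return math.acos(max(-1.0, min(1.0, dot)))

    north_pole = (0.0, 0.0, 1.0)
    equator_point = (1.0, 0.0, 0.0)
    print(sphere_metric(north_pole, equator_point))  # pi/2, a quarter great circle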
As mentioned above, a spherical triangle with sides $a, b, c$ always satisfies $|b - c| < a < b + c$ and $a + b + c < 2\pi$; conversely, if the numbers $a, b, c$ satisfy these inequalities, then there exists a spherical triangle with these sides.
A pole of a great circle is one of the two points of the sphere on the diameter perpendicular to the plane cutting out that great circle; i.e. if the great circle is regarded as the equator, the two poles are the North and South Poles.
A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions.
Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms.
Reactions may proceed in the forward or reverse direction until they go to completion or reach equilibrium.
Reactions that proceed in the forward direction to approach equilibrium are often described as spontaneous, requiring no input of free energy to go forward. Non-spontaneous reactions require input of free energy to go forward (examples include charging a battery by applying an external electrical power source, or photosynthesis driven by absorption of electromagnetic radiation in the form of sunlight).
Different chemical reactions are used in combinations during chemical synthesis in order to obtain a desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperatures and concentrations present within a cell.
The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays, and reactions between elementary particles, as described by quantum field theory.
Antoine Lavoisier developed the theory of combustion as a chemical reaction with oxygen.
Chemical reactions such as combustion in fire, fermentation and the reduction of ores to metals were known since antiquity. Initial theories of transformation of materials were developed by Greek philosophers, such as the Four-Element Theory of Empedocles stating that any substance is composed of the four basic elements – fire, water, air and earth. In the Middle Ages, chemical transformations were studied by Alchemists. They attempted, in particular, to convert lead into gold, for which purpose they used reactions of lead and lead-copper alloys with sulfur.
The production of chemical substances that do not normally occur in nature has long been tried, such as the synthesis of sulfuric and nitric acids attributed to the controversial alchemist Jābir ibn Hayyān. The process involved heating of sulfate and nitrate minerals such as copper sulfate, alum and saltpeter. In the 17th century, Johann Rudolph Glauber produced hydrochloric acid and sodium sulfate by reacting sulfuric acid and sodium chloride. With the development of the lead chamber process in 1746 and the Leblanc process, allowing large-scale production of sulfuric acid and sodium carbonate, respectively, chemical reactions became implemented into the industry. Further optimization of sulfuric acid technology resulted in the contact process in the 1880s, and the Haber process was developed in 1909–1910 for ammonia synthesis.
From the 16th century, researchers including Jan Baptist van Helmont, Robert Boyle, and Isaac Newton tried to establish theories of the experimentally observed chemical transformations. The phlogiston theory was proposed in 1667 by Johann Joachim Becher. It postulated the existence of a fire-like element called "phlogiston", which was contained within combustible bodies and released during combustion. The theory was proved false in 1785 by Antoine Lavoisier, who found the correct explanation of combustion as a reaction with oxygen from the air.
Joseph Louis Gay-Lussac recognized in 1808 that gases always react with each other in definite proportions. This observation, Joseph Proust's earlier law of definite proportions, and the atomic theory of John Dalton together resulted in the concepts of stoichiometry and chemical equations.
Regarding organic chemistry, it was long believed that compounds obtained from living organisms were too complex to be obtained synthetically. According to the concept of vitalism, organic matter was endowed with a "vital force" and distinguished from inorganic materials. This separation was ended, however, by the synthesis of urea from inorganic precursors by Friedrich Wöhler in 1828. Other chemists who brought major contributions to organic chemistry include Alexander William Williamson with his synthesis of ethers and Christopher Kelk Ingold, who, among many discoveries, established the mechanisms of substitution reactions.
More elaborate reactions are represented by reaction schemes, which in addition to starting materials and products show important intermediates or transition states. Also, some relatively minor additions to the reaction can be indicated above the reaction arrow; examples of such additions are water, heat, illumination, a catalyst, etc. Similarly, some minor products can be placed below the arrow, often with a minus sign.
Retrosynthetic analysis can be applied to design a complex synthesis reaction. Here the analysis starts from the products, for example by splitting selected chemical bonds, to arrive at plausible initial reagents. A special arrow (⇒) is used in retro reactions.
Isomerization of azobenzene, induced by light (hν) or heat (Δ)
The elementary reaction is the smallest division into which a chemical reaction can be decomposed; it has no intermediate products. Most experimentally observed reactions are built up from many elementary reactions that occur in parallel or sequentially. The actual sequence of the individual elementary reactions is known as the reaction mechanism. An elementary reaction involves a few molecules, usually one or two, because of the low probability for several molecules to meet at a certain time.
The most important elementary reactions are unimolecular and bimolecular reactions.
Only one molecule is involved in a unimolecular reaction; it is transformed by an isomerization or a dissociation into one or more other molecules. Such reactions require the addition of energy in the form of heat or light. A typical example of a unimolecular reaction is the cis–trans isomerization, in which the cis-form of a compound converts to the trans-form or vice versa.
In a typical dissociation reaction, a bond in a molecule splits (ruptures) resulting in two molecular fragments. The splitting can be homolytic or heterolytic. In the first case, the bond is divided so that each product retains an electron and becomes a neutral radical. In the second case, both electrons of the chemical bond remain with one of the products, resulting in charged ions. Dissociation plays an important role in triggering chain reactions, such as hydrogen–oxygen or polymerization reactions.
- AB → A + B (dissociation of a molecule AB into fragments A and B)
For bimolecular reactions, two molecules collide and react with each other.
Their merger is called chemical synthesis or an addition reaction.
Another possibility is that only a portion of one molecule is transferred to the other molecule.
This type of reaction occurs, for example, in redox and acid-base reactions.
In redox reactions, the transferred particle is an electron, whereas in acid-base reactions it is a proton.
This type of reaction is also called metathesis.
Most chemical reactions are reversible; that is, they can and do run in both directions.
The forward and reverse reactions compete with each other and differ in reaction rates. These rates depend on the concentrations and therefore change with the time of the reaction: the reverse rate gradually increases until it becomes equal to the rate of the forward reaction, establishing the so-called chemical equilibrium. The time to reach equilibrium depends on such parameters as temperature, pressure and the materials involved, while the position of the equilibrium is determined by the minimum of the free energy: at equilibrium, the Gibbs free energy change of the reaction must be zero. The pressure dependence can be explained with Le Chatelier's principle. For example, an increase in pressure due to decreasing volume causes the reaction to shift to the side with the fewer moles of gas.
The reaction yield stabilizes at equilibrium, but can be increased by removing the product from the reaction mixture or changed by increasing the temperature or pressure.
A change in the concentrations of the reactants does not affect the equilibrium constant, but does affect the equilibrium position.
Reactions can be exothermic, where ΔH is negative and energy is released. Typical examples of exothermic reactions are precipitation and crystallization, in which ordered solids are formed from disordered gaseous or liquid phases. In contrast, in endothermic reactions, heat is consumed from the environment. This can occur by increasing the entropy of the system, often through the formation of gaseous reaction products, which have high entropy. Since the entropy increases with temperature, many endothermic reactions preferably take place at high temperatures. On the contrary, many exothermic reactions such as crystallization occur at low temperatures. Changes in temperature can sometimes reverse the sign of the enthalpy of a reaction, as for the carbon monoxide reduction of molybdenum dioxide:

    2 CO(g) + MoO2(s) → 2 CO2(g) + Mo(s)
Changes in temperature can also reverse the direction tendency of a reaction.
For example, the water gas shift reaction

    CO(g) + H2O(g) ⇌ CO2(g) + H2(g)

is favored by low temperatures, but its reverse is favored by high temperature. The shift in reaction direction tendency occurs at 1100 K.
Reactions can also be characterized by the internal energy which takes into account changes in the entropy, volume and chemical potential. The latter depends, among other things, on the activities of the involved substances.
The speed at which reactions takes place is studied by reaction kinetics. The rate depends on various parameters, such as:
Reactant concentrations, which usually make the reaction happen at a faster rate if raised, through increased collisions per unit time. Some reactions, however, have rates that are independent of reactant concentrations; these are called zero order reactions.
Surface area available for contact between the reactants, in particular solid ones in heterogeneous systems. Larger surface areas lead to higher reaction rates.
Pressure – increasing the pressure decreases the volume between molecules and therefore increases the frequency of collisions between the molecules.
Activation energy, which is defined as the amount of energy required to make the reaction start and carry on spontaneously. Higher activation energy implies that the reactants need more energy to start than a reaction with a lower activation energy.
Temperature, which hastens reactions if raised, since higher temperature increases the energy of the molecules, creating more collisions per unit time.
The presence or absence of a catalyst. Catalysts are substances which change the pathway (mechanism) of a reaction which in turn increases the speed of a reaction by lowering the activation energy needed for the reaction to take place. A catalyst is not destroyed or changed during a reaction, so it can be used again.
For some reactions, the presence of electromagnetic radiation, most notably ultraviolet light, is needed to promote the breaking of bonds to start the reaction. This is particularly true for reactions involving radicals.
Several theories allow calculating the reaction rates at the molecular level.
This field is referred to as reaction dynamics.
The rate v of a first-order reaction, which could be the disintegration of a substance A, is given by:

    v = −d[A]/dt = k[A]

Its integration yields:

    [A] = [A]0 exp(−k t)

Here k is the first-order rate constant, having dimension 1/time, [A] is the concentration at time t and [A]0 is the initial concentration. The rate of a first-order reaction depends only on the concentration and the properties of the involved substance, and the reaction itself can be described with its characteristic half-life, t1/2 = ln 2 / k. More than one time constant is needed when describing reactions of higher order. The temperature dependence of the rate constant usually follows the Arrhenius equation:

    k = k0 exp(−Ea / (kB T))

where Ea is the activation energy and kB is the Boltzmann constant. One of the simplest models of reaction rate is the collision theory. More realistic models are tailored to a specific problem and include the transition state theory, the calculation of the potential energy surface, the Marcus theory and the Rice–Ramsperger–Kassel–Marcus (RRKM) theory.
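As a small illustration of these two formulas (a sketch; the rate constant, prefactor, activation energy and temperatures below are arbitrary assumed values, not data for any particular reaction), consider:

    import math

    def concentration(A0, k, t):
        # First-order decay: [A] = [A]0 * exp(-k*t).
        return A0 * math.exp(-k * t)

    def arrhenius(k0, Ea, T, kB=1.380649e-23):
        # Arrhenius equation: k = k0 * exp(-Ea / (kB*T)); Ea in joules per molecule.
        return k0 * math.exp(-Ea / (kB * T))

    k = 0.05                                   # assumed rate constant, 1/s
    half_life = math.log(2) / k                # ~13.86 s
    print(concentration(1.0, k, half_life))    # 0.5: half of the initial concentration
    # Raising the temperature increases the Arrhenius rate constant sharply:
    print(arrhenius(1e13, 8e-20, 300) < arrhenius(1e13, 8e-20, 600))  # True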
Four basic types
In a synthesis reaction, two or more simple substances combine to form a more complex substance.
These reactions are in the general form:

    A + B → AB
In a single replacement reaction, a single uncombined element replaces another in a compound; in other words, one element trades places with another element in a compound. These reactions come in the general form:

    A + BC → AC + B
One example of a single displacement reaction is when magnesium replaces hydrogen in water to make magnesium hydroxide and hydrogen gas:

    Mg + 2 H2O → Mg(OH)2 + H2
In a double replacement reaction, the anions and cations of two compounds switch places and form two entirely different compounds, in the general form AB + CD → AD + CB. For example, when barium chloride (BaCl2) and magnesium sulfate (MgSO4) react, the SO42− anion switches places with the two Cl− anions, giving the compounds BaSO4 and MgCl2:

    BaCl2 + MgSO4 → BaSO4 + MgCl2
Another example of a double displacement reaction is the reaction of lead(II) nitrate with potassium iodide to form lead(II) iodide and potassium nitrate:

    Pb(NO3)2 + 2 KI → PbI2 + 2 KNO3
Oxidation and reduction
Illustration of a redox reaction
Sodium chloride is formed through the redox reaction of sodium metal and chlorine gas: 2 Na + Cl2 → 2 NaCl
Redox reactions can be understood in terms of transfer of electrons from one involved species (reducing agent) to another (oxidizing agent). In this process, the former species is oxidized and the latter is reduced. Though sufficient for many purposes, these descriptions are not precisely correct. Oxidation is better defined as an increase in oxidation state, and reduction as a decrease in oxidation state. In practice, the transfer of electrons will always change the oxidation state, but there are many reactions that are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds).
In the reaction, sodium metal goes from an oxidation state of 0 (as it is a pure element) to +1: in other words, the sodium lost one electron and is said to have been oxidized.
On the other hand, the chlorine gas goes from an oxidation of 0 (it is also a pure element) to −1: the chlorine gains one electron and is said to have been reduced.
Because the chlorine is the one reduced, it is considered the electron acceptor, or in other words, induces oxidation in the sodium – thus the chlorine gas is considered the oxidizing agent.
Conversely, the sodium is oxidized or is the electron donor, and thus induces reduction in the other species and is considered the reducing agent.
Which of the involved reactants would be a reducing or oxidizing agent can be predicted from the electronegativity of their elements. Elements with low electronegativity, such as most metals, easily donate electrons and oxidize – they are reducing agents. On the contrary, many ions with high oxidation numbers, such as H2O2, MnO4−, CrO3, Cr2O72− and OsO4, can gain one or two extra electrons and are strong oxidizing agents.
The number of electrons donated or accepted in a redox reaction can be predicted from the electron configuration of the reactant element. Elements try to reach the low-energy noble gas configuration, and therefore alkali metals and halogens will donate and accept one electron respectively. Noble gases themselves are chemically inactive.
An important class of redox reactions is the electrochemical reactions, where electrons from a power supply are used as the reducing agent. These reactions are particularly important for the production of chemical elements, such as chlorine or aluminium. The reverse process, in which electrons released in redox reactions are harvested as electrical energy, is possible and is used in batteries.
Ferrocene – an iron atom sandwiched between two C5H5 ligands
In complexation reactions, several ligands react with a metal atom to form a coordination complex. This is achieved by the ligands donating lone pairs into empty orbitals of the metal atom, forming dipolar bonds. The ligands are Lewis bases; they can be both ions and neutral molecules, such as carbon monoxide, ammonia or water. The number of ligands that react with a central metal atom can be found using the 18-electron rule, which states that the valence shells of a transition metal will collectively accommodate 18 electrons, whereas the symmetry of the resulting complex can be predicted with the crystal field theory and ligand field theory. Complexation reactions also include ligand exchange, in which one or more ligands are replaced by others, and redox processes which change the oxidation state of the central metal atom.
In the Brønsted–Lowry acid–base theory, an acid-base reaction involves a transfer of protons (H+) from one species (the acid) to another (the base). When a proton is removed from an acid, the resulting species is termed that acid's conjugate base. When the proton is accepted by a base, the resulting species is termed that base's conjugate acid. In other words, acids act as proton donors and bases act as proton acceptors according to the following equation: HA + B ⇌ A− + HB+
The reverse reaction is possible, and thus the acid/base pair and the conjugate base/acid pair are always in equilibrium.
The equilibrium is determined by the acid and base dissociation constants (Ka and Kb) of the involved substances. A special case of the acid-base reaction is neutralization, where an acid and a base, taken in exactly equal amounts, form a neutral salt.
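As an illustration of how a dissociation constant fixes the equilibrium, the following sketch (Python; the Ka and concentration are textbook-style example values, not taken from this text) solves the Ka expression for a weak acid and reports the pH:

```python
import math

def weak_acid_ph(ka, c0):
    """Solve Ka = x^2 / (c0 - x) for x = [H+], then return pH = -log10(x).

    HA <=> H+ + A-;  x is the equilibrium H+ concentration in mol/L.
    """
    # Rearranged to x^2 + Ka*x - Ka*c0 = 0; take the positive root.
    x = (-ka + math.sqrt(ka * ka + 4.0 * ka * c0)) / 2.0
    return -math.log10(x)

# Hypothetical example: a weak acid with Ka = 1.8e-5 (the textbook value for
# acetic acid) at an initial concentration of 0.10 mol/L.
print(f"pH ≈ {weak_acid_ph(1.8e-5, 0.10):.2f}")  # ≈ 2.88
```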
Acid-base reactions can have different definitions depending on the acid-base concept employed.
Some of the most common are:
Arrhenius definition: Acids dissociate in water releasing H3O+ ions; bases dissociate in water releasing OH− ions.
Brønsted-Lowry definition: Acids are proton (H+) donors, bases are proton acceptors; this includes the Arrhenius definition.
Lewis definition: Acids are electron-pair acceptors, bases are electron-pair donors; this includes the Brønsted-Lowry definition.
Precipitation is the formation of a solid in a solution or inside another solid during a chemical reaction. It usually takes place when the concentration of dissolved ions exceeds the solubility limit and forms an insoluble salt. This process can be assisted by adding a precipitating agent or by removal of the solvent. Rapid precipitation results in an amorphous or microcrystalline residue, while a slow process can yield single crystals. The latter can also be obtained by recrystallization from microcrystalline salts.
Reactions can take place between two solids.
However, because of the relatively small diffusion rates in solids, the corresponding chemical reactions are very slow in comparison to liquid and gas phase reactions. They are accelerated by increasing the reaction temperature and by finely dividing the reactants to increase the contact surface area.
Reactions at the solid–gas interface
Reactions can take place at the solid–gas interface, on surfaces at very low pressure such as ultra-high vacuum. Via scanning tunneling microscopy, it is possible to observe reactions at the solid–gas interface in real space, if the time scale of the reaction is in the correct range. Reactions at the solid–gas interface are in some cases related to catalysis.
In this Paterno–Büchi reaction, a photoexcited carbonyl group is added to an unexcited olefin, yielding an oxetane.
In photochemical reactions, atoms and molecules absorb the energy of photons from the illuminating light and are promoted to an excited state. They can then release this energy by breaking chemical bonds, thereby producing radicals. Photochemical reactions include hydrogen–oxygen reactions, radical polymerization, chain reactions and rearrangement reactions.
Many important processes involve photochemistry.
The premier example is photosynthesis, in which most plants use solar energy to convert carbon dioxide and water into glucose, disposing of oxygen as a side-product. Humans rely on photochemistry for the formation of vitamin D, and vision is initiated by a photochemical reaction of rhodopsin. In fireflies, an enzyme in the abdomen catalyzes a reaction that results in bioluminescence. Many significant photochemical reactions, such as ozone formation, occur in the Earth's atmosphere and constitute atmospheric chemistry.
Schematic potential energy diagram showing the effect of a catalyst in an endothermic chemical reaction.
Solid heterogeneous catalysts are plated on meshes in ceramic catalytic converters in order to maximize their surface area. This exhaust converter is from a Peugeot 106 S2 1100
In catalysis, the reaction does not proceed directly, but through reaction with a third substance known as a catalyst. Although the catalyst takes part in the reaction, it is returned to its original state by the end of the reaction and so is not consumed. However, it can be inhibited, deactivated or destroyed by secondary processes. Catalysts can be used in a different phase (heterogeneous) or in the same phase (homogeneous) as the reactants. In heterogeneous catalysis, typical secondary processes include coking, where the catalyst becomes covered by polymeric side products. Additionally, heterogeneous catalysts can dissolve into the solution in a solid–liquid system or evaporate in a solid–gas system. Catalysts can only speed up the reaction – chemicals that slow down the reaction are called inhibitors. Substances that increase the activity of catalysts are called promoters, and substances that deactivate catalysts are called catalytic poisons. With a catalyst, a reaction which is kinetically inhibited by a high activation energy can take place by circumventing this activation energy barrier.
Heterogeneous catalysts are usually solids, powdered in order to maximize their surface area.
Of particular importance in heterogeneous catalysis are the platinum group metals and other transition metals, which are used in hydrogenations, catalytic reforming and in the synthesis of commodity chemicals such as nitric acid and ammonia. Acids are an example of a homogeneous catalyst: they increase the nucleophilicity of carbonyls, allowing a reaction that would not otherwise proceed with electrophiles. The advantage of homogeneous catalysts is the ease of mixing them with the reactants, but they may also be difficult to separate from the products. Therefore, heterogeneous catalysts are preferred in many industrial processes.
Reactions in organic chemistry
In organic chemistry, in addition to oxidation, reduction or acid-base reactions, a number of other reactions can take place which involve covalent bonds between carbon atoms or carbon and heteroatoms (such as oxygen, nitrogen, halogens, etc.). Many specific reactions in organic chemistry are name reactions designated after their discoverers.
SN2 reaction causes stereo inversion (Walden inversion)
The three steps of an SN2 reaction. The nucleophile is green and the leaving group is red
In a substitution reaction, a functional group in a particular chemical compound is replaced by another group. These reactions can be distinguished by the type of substituting species into a nucleophilic, electrophilic or radical substitution.
In the first type, a nucleophile, an atom or molecule with an excess of electrons and thus a negative charge or partial charge, replaces another atom or part of the "substrate" molecule. The electron pair from the nucleophile attacks the substrate, forming a new bond, while the leaving group departs with an electron pair. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. Examples of nucleophiles are the hydroxide ion, alkoxides, amines and halides. This type of reaction is found mainly in aliphatic hydrocarbons, and rarely in aromatic hydrocarbons. The latter have high electron density and undergo nucleophilic aromatic substitution only with very strong electron-withdrawing groups. Nucleophilic substitution can take place by two different mechanisms, SN1 and SN2. In their names, S stands for substitution, N for nucleophilic, and the number represents the kinetic order of the reaction, unimolecular or bimolecular.
The SN1 reaction proceeds in two steps.
First, the leaving group is eliminated creating a carbocation. This is followed by a rapid reaction with the nucleophile.
In the SN2 mechanism, the nucleophile forms a transition state with the attacked molecule, and only then the leaving group is cleaved.
These two mechanisms differ in the stereochemistry of the products. SN1 is not stereospecific: because the intermediate carbocation is planar, the nucleophile can attack from either face, so any pre-existing stereochemistry at that carbon is scrambled. In contrast, a reversal (Walden inversion) of the previously existing stereochemistry is observed in the SN2 mechanism.
Electrophilic substitution is the counterpart of the nucleophilic substitution in that the attacking atom or molecule, an electrophile, has low electron density and thus a positive charge. Typical electrophiles are the carbon atom of carbonyl groups, carbocations, or sulfur or nitronium cations. This reaction takes place almost exclusively in aromatic hydrocarbons, where it is called electrophilic aromatic substitution. The electrophile attack results in the so-called σ-complex, a transition state in which the aromatic system is abolished. Then, the leaving group, usually a proton, is split off and the aromaticity is restored. An alternative to aromatic substitution is electrophilic aliphatic substitution. It is similar to the nucleophilic aliphatic substitution and also has two major types, SE1 and SE2.
In the third type of substitution reaction, radical substitution, the attacking particle is a radical. This process usually takes the form of a chain reaction, for example in the reaction of alkanes with halogens. In the first step, light or heat disintegrates the halogen-containing molecules producing the radicals. Then the reaction proceeds as an avalanche until two radicals meet and recombine.
Addition and elimination
Electrophilic addition of hydrogen bromide
The addition and its counterpart, the elimination, are reactions which change the number of substituents on the carbon atom, and form or cleave multiple bonds. Double and triple bonds can be produced by eliminating a suitable leaving group. Similar to nucleophilic substitution, there are several possible reaction mechanisms, which are named after the respective reaction order. In the E1 mechanism, the leaving group is ejected first, forming a carbocation. The next step, formation of the double bond, takes place with elimination of a proton (deprotonation). The leaving order is reversed in the E1cb mechanism, that is, the proton is split off first. This mechanism requires participation of a base. Because of the similar conditions, both the E1 and E1cb eliminations always compete with the SN1 substitution.
The E2 mechanism also requires a base, but there the attack of the base and the elimination of the leaving group proceed simultaneously and produce no ionic intermediate.
In contrast to the E1 eliminations, different stereochemical configurations are possible for the reaction product in the E2 mechanism, because the attack of the base preferentially occurs in the anti-position with respect to the leaving group.
Because of the similar conditions and reagents, the E2 elimination is always in competition with the SN2-substitution.
The counterpart of elimination is the addition where double or triple bonds are converted into single bonds.
Similar to the substitution reactions, there are several types of additions distinguished by the type of the attacking particle.
For example, in the electrophilic addition of hydrogen bromide, an electrophile (proton) attacks the double bond forming a carbocation, which then reacts with the nucleophile (bromine). The carbocation can be formed on either side of the double bond depending on the groups attached to its ends, and the preferred configuration can be predicted with the Markovnikov's rule. This rule states that "In the heterolytic addition of a polar molecule to an alkene or alkyne, the more electronegative (nucleophilic) atom (or part) of the polar molecule becomes attached to the carbon atom bearing the smaller number of hydrogen atoms."
If the addition of a functional group is desired at the less substituted carbon atom of the double bond, electrophilic addition with acids is not possible.
In this case, one has to use the hydroboration–oxidation reaction, in which, in the first step, the boron atom acts as the electrophile and adds to the less substituted carbon atom. In the second step, the nucleophilic hydroperoxide or halogen anion attacks the boron atom.
While the addition to the electron-rich alkenes and alkynes is mainly electrophilic, nucleophilic addition plays an important role for carbon-heteroatom multiple bonds, and especially its most important representative, the carbonyl group. This process is often associated with an elimination, so that after the reaction the carbonyl group is present again. It is therefore called an addition-elimination reaction and may occur in carboxylic acid derivatives such as chlorides, esters or anhydrides. This reaction is often catalyzed by acids or bases, where the acids increase the electrophilicity of the carbonyl group by binding to the oxygen atom, whereas the bases enhance the nucleophilicity of the attacking nucleophile.
Nucleophilic addition of a carbanion or another nucleophile to the double bond of an alpha, beta unsaturated carbonyl compound can proceed via the Michael reaction, which belongs to the larger class of conjugate additions. This is one of the most useful methods for the mild formation of C–C bonds.
Some additions which cannot be accomplished with nucleophiles or electrophiles can be carried out with free radicals.
As with the free-radical substitution, the radical addition proceeds as a chain reaction, and such reactions are the basis of the free-radical polymerization.
Other organic reaction mechanisms
The Cope rearrangement of 3-methyl-1,5-hexadiene
Orbital overlap in a Diels-Alder reaction
Mechanism of a Diels-Alder reaction
In a rearrangement reaction, the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. These include hydride shift reactions such as the Wagner-Meerwein rearrangement, where a hydrogen, alkyl or aryl group migrates from one carbon to a neighboring carbon. Most rearrangements are associated with the breaking and formation of new carbon-carbon bonds. Other examples are sigmatropic reactions such as the Cope rearrangement.
Cyclic rearrangements include cycloadditions and, more generally, pericyclic reactions, wherein two or more double bond-containing molecules form a cyclic molecule. An important example of cycloaddition reaction is the Diels–Alder reaction (the so-called [4+2] cycloaddition) between a conjugated diene and a substituted alkene to form a substituted cyclohexene system.
Whether a certain cycloaddition would proceed depends on the electronic orbitals of the participating species, as only orbitals with the same sign of wave function will overlap and interact constructively to form new bonds. Cycloaddition is usually assisted by light or heat. These perturbations result in different arrangement of electrons in the excited state of the involved molecules and therefore in different effects. For example, the [4+2] Diels-Alder reactions can be assisted by heat whereas the [2+2] cycloaddition is selectively induced by light. Because of the orbital character, the potential for developing stereoisomeric products upon cycloaddition is limited, as described by the Woodward–Hoffmann rules.
Illustration of the induced fit model of enzyme activity
Biochemical reactions are mainly controlled by enzymes. These proteins can specifically catalyze a single reaction, so that reactions can be controlled very precisely. The reaction takes place in the active site, a small part of the enzyme which is usually found in a cleft or pocket lined by amino acid residues, and the rest of the enzyme is used mainly for stabilization. The catalytic action of enzymes relies on several mechanisms including the molecular shape ("induced fit"), bond strain, proximity and orientation of molecules relative to the enzyme, proton donation or withdrawal (acid/base catalysis), electrostatic interactions and many others.
The biochemical reactions that occur in living organisms are collectively known as metabolism. Among the most important of its mechanisms is anabolism, in which different DNA- and enzyme-controlled processes result in the production of large molecules such as proteins and carbohydrates from smaller units. Bioenergetics studies the sources of energy for such reactions. An important energy source is glucose, which can be produced by plants via photosynthesis or assimilated from food. All organisms use this energy to produce adenosine triphosphate (ATP), which can then be used to energize other reactions.
Thermite reaction proceeding in railway welding.
Chemical reactions are central to chemical engineering where they are used for the synthesis of new compounds from natural raw materials such as petroleum and mineral ores. It is essential to make the reaction as efficient as possible, maximizing the yield and minimizing the amount of reagents, energy inputs and waste. Catalysts are especially helpful for reducing the energy required for the reaction and increasing its reaction rate.
Some specific reactions have their niche applications.
For example, the thermite reaction is used to generate light and heat in pyrotechnics and welding. Although it is less controllable than the more conventional oxy-fuel welding, arc welding and flash welding, it requires much less equipment and is still used to mend rails, especially in remote areas.
Mechanisms of monitoring chemical reactions depend strongly on the reaction rate.
Relatively slow processes can be analyzed in situ for the concentrations and identities of the individual ingredients.
Important tools of real-time analysis are the measurement of pH and the analysis of optical absorption (color) and emission spectra. A less accessible but rather efficient method is the introduction of a radioactive isotope into the reaction and monitoring how it changes over time and where it moves to; this method is often used to analyze the redistribution of substances in the human body. Faster reactions are usually studied with ultrafast laser spectroscopy, where the use of femtosecond lasers allows short-lived transition states to be monitored on time scales down to a few femtoseconds.
Chemical reaction model
List of organic reactions
Reaction progress kinetic analysis
Based on the Pythagorean theorem, the vector from the origin to the point (3, 4) in the 2D Euclidean plane has length √(3² + 4²) = 5. In general, the length of a vector with two elements is the square root of the sum of the squares of its elements.
The magnitude of a vector is sometimes called the length of the vector, or the norm of the vector. Basically, the norm of a vector is a measure of distance, symbolized by double vertical bars, ‖a‖.
The magnitude of a vector can be extended to n dimensions: a vector a with n elements has length ‖a‖ = √(a1² + a2² + … + an²).
This vector length is called the Euclidean length or Euclidean norm. Mathematicians often use the term norm instead of length. A vector norm is defined as any function that associates a scalar with a vector and obeys the three rules below:
- The norm of a vector is always positive or zero. The norm of a vector is zero if and only if the vector is the zero vector.
- The norm of a scalar multiple of a vector is equal to the product of the absolute value of the scalar and the norm of the vector: ‖ka‖ = |k| ‖a‖.
- The norm of a vector obeys the triangle inequality: the norm of a sum of two vectors is less than or equal to the sum of the norms, ‖a + b‖ ≤ ‖a‖ + ‖b‖.
There are many common norms (a short computational sketch follows this list):
- The 1-norm is the sum of the absolute values of the vector elements: ‖a‖₁ = |a1| + |a2| + … + |an|.
- The 2-norm is the most often used vector norm, sometimes called the Euclidean norm: ‖a‖₂ = √(a1² + … + an²). When the subscript index of the vector norm is not specified, you may assume that it is the Euclidean norm.
- The p-norm, sometimes called the Minkowski norm, is defined as ‖a‖p = (|a1|^p + … + |an|^p)^(1/p). The p-norm is a generalized norm with a parameter p ≥ 1.
- The max-norm, also called the Chebyshev norm, is the largest absolute element in the vector: ‖a‖∞ = max |ai|.
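The following minimal Python sketch, which assumes nothing beyond the definitions above, computes each of these norms for a sample vector:

```python
import math

def norm(a, p=2):
    """p-norm of a vector; p=math.inf gives the max (Chebyshev) norm."""
    if p == math.inf:
        return max(abs(x) for x in a)
    return sum(abs(x) ** p for x in a) ** (1.0 / p)

a = [3.0, -4.0]
print(norm(a, 1))         # 1-norm: |3| + |-4| = 7
print(norm(a, 2))         # 2-norm (Euclidean): sqrt(9 + 16) = 5
print(norm(a, 3))         # 3-norm: (27 + 64)**(1/3) ≈ 4.498
print(norm(a, math.inf))  # max-norm: 4
```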
Some important properties of vector norms are listed below (a numeric check of two of them follows the list):
- The square of the Euclidean norm is equal to the sum of squares of the elements: ‖a‖² = a1² + a2² + … + an².
- The square of the Euclidean norm of a vector is equal to the inner product of the vector with itself: ‖a‖² = ⟨a, a⟩ = a · a.
- The Pythagorean theorem, ‖a + b‖² = ‖a‖² + ‖b‖², holds if and only if the two vectors are orthogonal.
- The dot product is related to the norms by the law of cosines: a · b = ‖a‖ ‖b‖ cos θ, where θ is the angle between the two vectors.
- The norm of an addition or subtraction follows the law of cosines: ‖a ± b‖² = ‖a‖² + ‖b‖² ± 2 ‖a‖ ‖b‖ cos θ.
- The sums of the squared norms of vector sums and differences follow the parallelogram law: ‖a + b‖² + ‖a − b‖² = 2‖a‖² + 2‖b‖².
- The p-norm is never smaller than the max-norm, but never larger than n^(1/p) times the max-norm: ‖a‖∞ ≤ ‖a‖p ≤ n^(1/p) ‖a‖∞. As p tends to infinity, n^(1/p) approaches 1 and the p-norm approaches the max-norm.
- The Cauchy–Schwarz inequality states that the absolute value of the vector dot product is always less than or equal to the product of the norms: |a · b| ≤ ‖a‖ ‖b‖. Equality holds if and only if the vectors are linearly dependent.
- The norms of the cross product and dot product are related by Lagrange's identity: ‖a × b‖² + (a · b)² = ‖a‖² ‖b‖².
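As a sanity check, this small sketch (same conventions as above) verifies the parallelogram law and the Cauchy–Schwarz inequality on an arbitrary pair of vectors:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm2(a):
    return math.sqrt(dot(a, a))

a, b = [1.0, 2.0, 3.0], [4.0, -1.0, 0.5]

# Parallelogram law: ||a+b||^2 + ||a-b||^2 == 2||a||^2 + 2||b||^2
apb = [x + y for x, y in zip(a, b)]
amb = [x - y for x, y in zip(a, b)]
lhs = norm2(apb) ** 2 + norm2(amb) ** 2
rhs = 2 * norm2(a) ** 2 + 2 * norm2(b) ** 2
print(math.isclose(lhs, rhs))  # True

# Cauchy–Schwarz: |a . b| <= ||a|| * ||b||
print(abs(dot(a, b)) <= norm2(a) * norm2(b))  # True
```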
The preferred reference for this tutorial is:
Teknomo, Kardi (2011) Linear Algebra tutorial. http://people.revoledu.com/kardi/tutorial/LinearAlgebra/
Layers of the Atmosphere
Scientists divide the atmosphere up into 4 or 5 distinct layers, as follows:
- Troposphere - from ground level up to somewhere between 8 and 16 km (5 and 10 miles, or 26,000 to 53,000 feet), depending on latitude and season. Most of the mass (~80%) of the atmosphere is here and essentially all weather occurs in the troposphere. Temperature decreases with increasing altitude. The tropopause is the name given to the boundary between the top of the troposphere and the bottom of the stratosphere above.
- Stratosphere - extends from the tropopause to about 50 km (31 miles) up. Temperature rises with altitude. Contains the ozone layer, which shields Earth's surface from most solar ultraviolet radiation. Top boundary is called the stratopause.
- Mesosphere - extends from the stratopause to about 85 km (53 miles). Many meteors burn up here. Temperature decreases with altitude. The coldest temperatures in Earth's atmosphere, about -85° C (-120° F), are found near the top of this layer. The top boundary is called the mesopause. Part of the ionosphere, a series of sub-layers containing higher levels of ionized and thus electrically charged atoms and molecules, is in the mesosphere.
- Thermosphere - from the mesopause to between 500 and 1,000 km (311 to 621 miles) up. Air is very, very thin here. Variations in solar heating due to the Sun's 11-year sunspot cycle and to short-term space weather storms cause the air in this layer to expand and contract; thus the large variation in altitude of the top of this layer (the thermopause). Most of the ionosphere is within the thermosphere. Temperatures increase with altitude, but also vary dramatically over time in response to solar activity. The aurora (Southern and Northern Lights) periodically light up the thermosphere. The top boundary is called the thermopause. Many spacecraft actually orbit within the thermosphere.
- Exosphere - from the thermopause on upward. Not universally recognized as a layer of the atmosphere. The exosphere is essentially the sparse scattering of atmospheric gases as they gradually thin to the near-vacuum of space.
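For a quick way to connect altitudes to layers, here is a minimal Python sketch; the boundary altitudes are nominal mid-range values (the tropopause and thermopause in particular vary, as the list above notes), so treat them as illustrative assumptions:

```python
# Nominal layer tops in km; real boundaries vary with latitude, season,
# and solar activity, as described in the list above.
LAYER_TOPS_KM = [
    (12.0,  "troposphere"),   # tropopause: ~8-16 km
    (50.0,  "stratosphere"),  # stratopause
    (85.0,  "mesosphere"),    # mesopause
    (700.0, "thermosphere"),  # thermopause: ~500-1,000 km
]

def layer_at(altitude_km):
    """Return the atmospheric layer for a given altitude (nominal boundaries)."""
    for top, name in LAYER_TOPS_KM:
        if altitude_km < top:
            return name
    return "exosphere"

print(layer_at(9))    # troposphere
print(layer_at(30))   # stratosphere (typical weather-balloon ceiling)
print(layer_at(400))  # thermosphere (many spacecraft orbit here)
```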
Concepts Embedded in this Activity
There are several interrelated concepts relevant to Earth's atmosphere and the process of scientific investigation embedded within this activity. You may wish to emphasize certain aspects that best match your curriculum. Topics this activity touches upon include:
- layers of Earth's atmosphere (and especially temperature variations within those layers)
- air pressure & density throughout the atmosphere (including the concept of lapse rate for more advanced students)
- ozone, the ozone layer, and the ozone hole - including the creation of ozone, where it is found, and its role in heating the stratosphere and protecting us from excessive UV radiation
- greenhouse gases and the greenhouse effect - and their role in warming Earth and the troposphere from the ground upward
- electromagnetic radiation and the electromagnetic spectrum - especially visible light, ultraviolet radiation, and infrared "light"
- electromagnetic radiation and the atmosphere - at which wavelengths is the atmosphere transparent or opaque, which gases absorb which frequencies of UV (ozone) or IR (water vapor, carbon dioxide, methane, etc.), how absorption of EM radiation can cause heating of certain regions of the atmosphere
- effects of human activities on the atmosphere - global warming due to increases in anthropogenic greenhouse gases in the troposphere, increased UV exposure due to ozone depletion in the stratosphere
The greenhouse effect and greenhouse gases: Solar energy of various wavelengths across the electromagnetic spectrum arrives at Earth at the "top" of our planet's atmosphere. Most of that solar energy is in the form of visible light. There is also quite a bit of ultraviolet (UV) and infrared (IR) radiation, and lesser amounts of X-rays and radio waves. Our atmosphere is mostly transparent at visible wavelengths. Although some sunlight is scattered by air molecules or reflected by clouds, most of it passes straight through the atmosphere and impinges upon the land or sea beneath. The atmosphere is not as transparent at UV and IR wavelengths; most of the UV is absorbed by ozone in the stratosphere, while much of the incoming IR is absorbed by various greenhouse gases (water vapor, carbon dioxide, methane, and others). Sunlight that strikes the Earth's surface (including oceans) warms the land or water. The warm ground or ocean emits infrared radiation, which carries energy back upward into the atmosphere. However, greenhouse gases quickly absorb much of that outbound IR energy, heating the lower atmosphere.
The troposphere (lowest layer of the atmosphere, which extends down to ground level) is warmest at low altitudes and cools as one goes higher. The troposphere is mainly heated by IR energy rising from the surface; therefore, the troposphere is warmest near ground level where the heating source is nearby, and cooler at higher altitudes as one gets further and further from the warm ground.
The ozone layer, UV radiation, and the stratosphere: Normal oxygen molecules (O2) have two oxygen atoms. Ozone (O3), a special type of oxygen molecule, has three atoms instead of two. UV photons from the Sun hit normal oxygen molecules in the stratosphere. The high-energy photons break the molecular bonds holding the oxygen atoms together, splitting the O2 molecule apart into two separate oxygen atoms (the process is called photodissociation). Some of those individual atoms combine with other oxygen molecules to form ozone (O3) molecules. Over time, ozone accumulates in the stratosphere, forming the ozone layer - a region in the stratosphere with elevated concentrations of ozone.
Ozone is almost opaque at UV wavelengths. The ozone layer absorbs most of the incoming solar UV radiation. Ozone molecules shed the energy they absorbed from UV photons as heat, warming the stratosphere. The intensity of UV radiation is greatest at the top of the stratosphere, where energy from the incoming sunlight hasn't yet been "diluted" by atmospheric absorption. The temperature trend in the stratosphere is, therefore, exactly opposite of that in the troposphere below - the warmest area is at the highest altitudes and temperatures grow cooler as one goes lower and moves away from the main source of heating.
Let's consider an analogy to help us understand the temperature trends in the troposphere and stratosphere. Imagine two giant hot plates as heat sources. One is on the ground, facing upwards. It represents the heating of the atmosphere by IR radiation emitted by the warm ground (which was warmed by incoming sunlight). The second hot plate is at the top of the stratosphere, facing downward. It represents the heat given off by ozone molecules after they had absorbed energy from incoming UV radiation. So where are the warmest and coolest areas in the lower atmosphere? The air nearest the lower "hot plate" (Earth's surface) figures to be warm, with temperature decreasing as one moves upward away from the heat source. Likewise, air near the upper "hot plate" should be warm, with temperature dropping off as one moves downward away from the heat source. It also makes sense that the coolest temperatures in the lower atmosphere should be roughly midway between the two "hot plates". The tropopause, the boundary between the troposphere and the stratosphere, corresponds to this relatively cool spot between the two heating sources. If you look at a graph of temperature versus altitude in the lower atmosphere you can see how this "pair of hot plates" analogy plays out in the real atmosphere.
Mesosphere: Above the stratopause (the boundary between the top of the stratosphere and the bottom of the mesosphere) temperatures once again decrease with altitude, as was the case in the troposphere. The air is so thin above the stratosphere that relatively few photons of incoming solar radiation (whether visible light, IR, or UV) collide with air molecules. Temperatures in the mesosphere are therefore quite cold, dropping to -85° C (-120° F) near the top of the layer. The "hot plate" (from the previous analogy) near the top of the stratosphere provides some warmth to the mesosphere above, but temperatures quickly cool as one moves higher and away from that heat source as one climbs through the mesosphere.
How high do balloons fly? We've taken a bit of "artistic license" in this activity by allowing balloons to climb into the mesosphere 60 km above Earth's surface. Typical weather balloons have an operational ceiling somewhere around 30 km. Special-purpose high-altitude research balloons sometimes reach as high as 35 km or even 45 km. The altitude record for a balloon carrying people is just shy of 35 km. The altitude record for unmanned balloons is 51.8 km.
The minimum altitude for spacecraft is about 100 km; below that level atmospheric drag is strong enough to quickly pluck a satellite from orbit. Regions of the atmosphere between 40 and 100 km are therefore difficult to study: too high for balloons, too low for satellites. Researchers use sub-orbital sounding rocket flights to probe the mesosphere directly, but such flights last just minutes and thus supply relatively limited data. Because of these difficulties, less is known about the mesosphere than about other layers of the atmosphere. Some scientists jokingly refer to the mesosphere as the "ignorosphere".
Air pressure: Air pressure variation with altitude is much simpler than temperature variation. Standard pressure at sea level is defined as 1 atmosphere ( = 1013 millibars = 14.7 lb/in2 = 101.3 kilopascals). Pressure drops steadily with altitude; at roughly 5,500 meters it is down to 1/2 of the sea level value. Rise up another 5,500 meters and the pressure drops by half again, so that pressure at 11 km altitude is roughly a quarter of the sea level value. In fact, the decrease of air pressure with altitude approximately follows an exponential decay curve. Atmospheric scientists use a concept called "scale height" (H) to express the rate of this decay. In Earth's troposphere, the scale height is about 8.4 km. The equation that expresses this trend is:
P = P0 e^(−z / H)
- P = the air pressure at a given altitude
- P0 = air pressure at sea level
- z = altitude in kilometers
- H = scale height in km
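Plugging in the sea-level pressure and scale height given above, a short Python sketch reproduces the halving behavior described earlier:

```python
import math

P0 = 1013.0  # sea-level pressure in millibars
H = 8.4      # scale height in km (tropospheric value from the text)

def pressure(z_km):
    """Approximate air pressure at altitude z using P = P0 * exp(-z / H)."""
    return P0 * math.exp(-z_km / H)

for z in (0, 5.5, 11, 30):
    print(f"{z:5.1f} km: {pressure(z):7.1f} mb ({100 * pressure(z) / P0:.0f}% of sea level)")
# ~52% at 5.5 km and ~27% at 11 km, close to the "half" and "quarter" quoted above
```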
If you have advanced students or like to mix some math into your science lessons, you might want to have your students try to determine the scale height from the data they gather from their balloon flights. This could be a trial-and-error iterative approach, where the students first guess at the scale height, plug it into the equation above, and see how well it matches their data, then iteratively adjust their hypothesized scale height until it fits their data pretty well. You could also have the students graph their pressure vs. altitude data on semi-log paper; the data should form a more-or-less straight line whose slope is −1/H, from which the scale height can be read off.
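For teachers who want to automate that fit, here is a minimal sketch (Python; the sample flight data are synthetic values generated for illustration, not real measurements) that recovers H from the slope of ln P versus altitude:

```python
import math

# Synthetic (altitude km, pressure mb) samples for illustration only -- in class
# these would come from the students' balloon flights.
samples = [(0.0, 1013.0), (5.0, 558.0), (10.0, 308.0), (15.0, 170.0)]

zs = [z for z, _ in samples]
ys = [math.log(p) for _, p in samples]  # ln P is linear in z: ln P = ln P0 - z/H

# Least-squares slope of ln P vs z; scale height H = -1 / slope.
z_mean = sum(zs) / len(zs)
y_mean = sum(ys) / len(ys)
slope = sum((z - z_mean) * (y - y_mean) for z, y in zip(zs, ys)) / \
        sum((z - z_mean) ** 2 for z in zs)
print(f"Estimated scale height: {-1.0 / slope:.2f} km")  # ≈ 8.4 for these samples
```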
Please note that the equation above is an approximation, though a pretty good one. The actual behavior of the atmosphere is a bit more complex than portrayed by this simple equation. As mentioned above, the scale height in the troposphere is about 8.4 km. If we extend our area of interest higher into the atmosphere, it turns out that the average scale height from sea level to 70 km is about 7.6 km; so scale height does vary with altitude. Any value between 7.5 and 8.5 km that your students determine for the scale height would be a pretty good result. Here are some values for pressure at various altitudes, to help you get a feel for this:
- Denver's altitude ("mile high", about 1.6 km): roughly 83% of sea level pressure
- Half of sea level pressure: about 5,500 meters
- Top of Mt. Everest (about 8.85 km): roughly a third of sea level pressure
- 10% of sea level pressure: roughly 16 km
Data Table for the "Standard Atmosphere"
Scientists sometimes use a concept called the "standard atmosphere" to represent the conditions (such as temperature and pressure) in the "typical" atmosphere. The standard atmosphere removes variations from day to night, across the seasons, at different latitudes, and as weather systems move across regions. The standard atmosphere is more-or-less Earth's average atmosphere, with variations across space and time removed. The table below provides temperature and pressure data versus altitude for the standard atmosphere to a height of 60 kilometers (about 37 miles or 196,850 feet). These are the values used "behind the scenes" in the virtual ballooning software simulation.
|Altitude (km)|Temperature (°C)|Pressure (mb)|
|8|−37.0|356.5|
|9|−43.5|308|
|14|−56.5|141.7|
|35|−36.7|5.746|
|40|−22.8|2.871|
Teaching Tips for this Activity
We recommend allowing students four balloon flights to collect data and permitting them to collect data at four altitudes on each flight. Increasing either or both of these allotments would make this activity easier, which you might do if you want the activity to be less challenging. Limits on the amount of sampling in a scientific investigation reflect constraints that scientists are often under, usually as a result of limited funding for research flights (or perhaps limited battery power for transmitting data, in the case of data points per flight). We recommend that you keep this activity pretty challenging, so students have to think a bit and plan their flights to get the data they need. We also recommend giving them a surprise - "funding" for a second set of flights - after they've completed their initial four flights. This will allow them to fine-tune the results obtained in their first trials.
You can have students conduct this activity by themselves or in teams. You could have small groups (3-4 students) each conduct a series of balloon flights, with students consulting one another before each flight to choose settings for that flight. Alternately, you could have individual students (or even groups) each choose the settings for one flight out of the four flights in a given "research campaign". Students should examine data from previous flight(s) to see where the remaining gaps in their data are, adjusting forthcoming flight settings to fill in those holes.
You may want to have each team report on their results. One team might do a better job filling in data about temperature in the stratosphere, while another group may have collected better data about pressure in the troposphere. Each group would learn from the reports of others. This approach also models the way in which real science is often conducted, with groups sharing limited data sets to build up a more complete picture.
This activity and the accompanying software were developed at the UCAR Center for Science Education by staff member Randy Russell.
Tides are due to the gravitational attraction of one massive body on another. We commonly think of the tides as being a phenomenon that we see in the sea. There are other instances of the effects of tidal forces such as the drastic effect that a Black Hole has on matter in its close vicinity.
The effect of the tidal forces of a white dwarf star on its close companion is sufficient to drag matter away from the companion onto the surface of the white dwarf, where it can cause a sudden, drastic increase in brightness seen as a Nova explosion. Other binary stars also show the effects of tidal forces, and these are also exhibited by close pairs of galaxies, where the effects of the gravitational pull are sufficient to distort the shapes of the galaxies into weird and wonderful shapes.
The Law of Gravity
Isaac Newton showed that the pull of gravity depends on three things: the masses of the two bodies and their distance apart. He showed that the force is inversely proportional to the square of the distance. This means that if we consider the gravitational pull of the Earth on a satellite, the force will be only a quarter as strong if we double the distance from the centre of the Earth. The Sun is far more massive than the Moon yet, because it is much further away, its tide-raising effect on the Earth is less than half the Moon's; the tidal force depends on the difference in pull across the Earth, and so falls off with the cube of the distance, even faster than the pull itself.
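A quick back-of-the-envelope comparison (Python, using standard values for the masses and mean distances) shows why the Sun, despite pulling on the Earth far harder than the Moon does, raises a smaller tide:

```python
M_SUN, M_MOON = 1.989e30, 7.342e22  # masses in kg
D_SUN, D_MOON = 1.496e11, 3.844e8   # mean distances from Earth in m

# Direct gravitational pull scales as M / d^2 ...
pull_ratio = (M_SUN / D_SUN**2) / (M_MOON / D_MOON**2)
# ... but the tide-raising (differential) force scales as M / d^3.
tide_ratio = (M_SUN / D_SUN**3) / (M_MOON / D_MOON**3)

print(f"Sun's pull / Moon's pull:  {pull_ratio:.0f}")  # ≈ 179
print(f"Sun's tide / Moon's tide:  {tide_ratio:.2f}")  # ≈ 0.46, less than half
```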
The tides which we see in the oceans are due to the pull of the Moon and the Sun. The simplest explanation is that the water on the side of the Earth closest to the Moon is pulled, by the Moon's gravitational force, more strongly than is the bulk of the Earth; whereas the water on the side furthest from the Moon is pulled less strongly than the Earth. The effect is to make bulges in the water on opposite sides of the Earth. The effect of the Sun's pull is similar and the tides that we see are the net effect of both pulls.
When the pull from the Sun adds to that of the Moon the tides are large and we call them Spring tides whereas when the pulls are at 90 degrees the tides are small and we call them Neap tides. The heights of spring tides are governed by the distance of the Moon from the Earth, being largest at Perigee (when the Moon is closest to the Earth) and smallest at Apogee (when the Moon is at its furthest).
Because the Sun's pull is aligned with that of the Moon at New Moon and Full Moon, these are the times when Spring Tides occur. The tide-raising pull of the Sun is less than half that of the Moon, and so the frequency of the tides is determined by the apparent passage of the Moon around the Earth, which takes just over a day. We, therefore, in most places on the Earth have two tides a day, with the time of each becoming later from one day to the next by just under an hour a day. (The actual period is, of course, determined by the rotation of the Earth and the orbit of the Moon.)
The height of the tide at any one place is determined by the shape of the coastline and of the nearby continental shelf. The presence of shelving land masses and bays gives much greater range to the tides than is seen in mid-ocean. A phenomenon which is generally not realised is that the air and solid landmasses also move up and down due to the tidal forces. Although the movement is much less in the land than that in the sea it can amount to a metre of vertical shift. It might be expected that the time of high tide would be when the Moon is on the meridian. This is not so. The reason is that, because of the Earth's rotation and friction, the tidal bulge gets left behind a little. The effects near complex coastlines such as in Britain are very difficult to compute.
The Earth-Moon system
The long term effect of the tides is that energy is dissipated by friction in the oceans and the land and in the distortion of the Moon by the tidal pull of the Earth. This slows down the rotation speed of the Earth and moves the Moon further away from the Earth. The Earth loses rotational energy which is given to the Moon. Eventually the Earth's rotation rate will be slowed so that it is the same as that of the orbital period of the Moon. The Earth will then always keep the same face towards the Moon in the same way that the Moon already keeps the same face towards the Earth. After that the system will slowly lose energy so that the Moon will come closer to the Earth again.
This is, of course, a very slow effect. The present rate of change is that the Earth's rotation rate is slowing by about 16 seconds every million years, and the distance of the Moon is increasing by about 3.8 cm each year.
Satellites of other planets
In the same way that the tidal forces of the Earth on the Moon have caused it to rotate in synchronism with its orbital period (it keeps the same face towards the Earth as it goes around), almost all of the satellites of the planets do the same. The exceptions are believed to be satellites which are ex-asteroids captured by the planet, where the tidal forces have not yet had time to equalise the two periods. Even the planet Mercury has suffered from such tidal forces, and its rotational period is two-thirds of its orbital period due to the tidal force of the Sun.
Jupiter's satellite Io has an eccentric orbit. Tidal forces from Jupiter are trying to remove this eccentricity and force the orbit to be circular, but the eccentricity is maintained by Io's orbital resonance with the neighbouring satellite Europa. This means that Io suffers considerable distorting forces. These generate heat inside Io which is sufficient to power the active volcanoes that were seen by the Voyager spacecraft.
Close binary stars
It is believed that at least half the stars which look to us to be single are in fact two, or more, stars in binary or multiple systems. It is clear, from analogy with the Earth-Moon system, that such pairs of stars will exert tidal pulls on one another. These tidal pulls become very important when we consider pairs of stars which are close together.
If one star is much bigger than the other it is possible to think of situations where the gravitational pull of the smaller star on the closest part of the big star is greater than the pull of the big star. In these circumstances the big star will lose matter towards the small star. We see this happening in many binary systems where the big star has reached the point in its evolution where it increases markedly in size. This leads to many interesting objects, the most notable being when the smaller star is a compact object.
Novae are stars that suddenly appear where there was apparently no star, or only a very faint star, before. We know that what has happened is that the tidal forces have stripped material from the larger star of a pair and deposited it onto a smaller white-dwarf companion. This material, when it reaches the surface of the white-dwarf, is 'burnt up' in a very rapid and explosive thermonuclear reaction. This raises the brightness of the white-dwarf to be one of the brightest stars in the whole galaxy, whereas before the explosion it was one of the faintest.
Another example of this phenomenon is where the small companion is a neutron star or a black hole. Then the matter transferred from the larger star gives up so much energy that it emits intense X-radiation, which can be seen by X-ray satellites as an X-ray transient. Such sources are the best way in which we can 'see' evidence for black holes.
Earth's volcanoes occur because its crust is broken into 17 major, rigid tectonic plates that float on a hotter, softer layer in its mantle. Therefore, on Earth, volcanoes are generally found where tectonic plates are diverging or converging. For example, a mid-oceanic ridge, such as the Mid-Atlantic Ridge, has volcanoes caused by divergent tectonic plates pulling apart; the Pacific Ring of Fire has volcanoes caused by convergent tectonic plates coming together. Volcanoes can also form where there is stretching and thinning of the crust, e.g., in the East African Rift and the Wells Gray-Clearwater volcanic field and Rio Grande Rift in North America. This type of volcanism falls under the umbrella of "plate hypothesis" volcanism. Volcanism away from plate boundaries has also been explained as mantle plumes. These so-called "hotspots", for example Hawaii, are postulated to arise from upwelling diapirs with magma from the core–mantle boundary, 3,000 km deep in the Earth. Volcanoes are usually not created where two tectonic plates slide past one another.
Erupting volcanoes can pose many hazards, not only in the immediate vicinity of the eruption. One such hazard is that volcanic ash can be a threat to aircraft, in particular those with jet engines where ash particles can be melted by the high operating temperature; the melted particles then adhere to the turbine blades and alter their shape, disrupting the operation of the turbine. Large eruptions can affect temperature as ash and droplets of sulfuric acid obscure the sun and cool the Earth's lower atmosphere (or troposphere); however, they also absorb heat radiated up from the Earth, thereby warming the upper atmosphere (or stratosphere). Historically, so-called volcanic winters have caused catastrophic famines.
The word volcano is derived from the name of Vulcano, a volcanic island in the Aeolian Islands of Italy whose name in turn comes from Vulcan, the god of fire in Roman mythology. The study of volcanoes is called volcanology, sometimes spelled vulcanology.
Divergent plate boundaries
At the mid-oceanic ridges, two tectonic plates diverge from one another as new oceanic crust is formed by the cooling and solidifying of hot molten rock. Because the crust is very thin at these ridges due to the pull of the tectonic plates, the release of pressure leads to adiabatic expansion and the partial melting of the mantle, causing volcanism and creating new oceanic crust. Most divergent plate boundaries are at the bottom of the oceans; therefore, most volcanic activity is submarine, forming new seafloor. Black smokers (also known as deep sea vents) are evidence of this kind of volcanic activity. Where the mid-oceanic ridge is above sea-level, volcanic islands are formed, for example, Iceland.
Convergent plate boundaries
Subduction zones are places where two plates, usually an oceanic plate and a continental plate, collide. In this case, the oceanic plate subducts, or submerges under the continental plate forming a deep ocean trench just offshore. In a process called flux melting, water released from the subducting plate lowers the melting temperature of the overlying mantle wedge, creating magma. This magma tends to be very viscous due to its high silica content, so it often does not reach the surface but cools at depth. When it does reach the surface, a volcano is formed. Typical examples of this kind of volcano are Mount Etna and the volcanoes in the Pacific Ring of Fire.
"Hotspots" is the name given to volcanic areas believed to be formed by mantle plumes, which are hypothesized to be columns of hot material rising from the core-mantle boundary in a fixed space that causes large-volume melting. Because tectonic plates move across them, each volcano becomes dormant and is eventually reformed as the plate advances over the postulated plume. The Hawaiian Islands have been suggested to have been formed in such a manner, as well as the Snake River Plain, with the Yellowstone Caldera being the part of the North American plate currently above the hot spot. This theory is currently under criticism, however.
The most common perception of a volcano is of a conical mountain, spewing lava and poisonous gases from a crater at its summit; however, this describes just one of the many types of volcano. The features of volcanoes are much more complicated and their structure and behavior depends on a number of factors. Some volcanoes have rugged peaks formed by lava domes rather than a summit crater while others have landscape features such as massive plateaus. Vents that issue volcanic material (including lava and ash) and gases (mainly steam and magmatic gases) can develop anywhere on the landform and may give rise to smaller cones such as Puʻu ʻŌʻō on a flank of Hawaii's Kīlauea. Other types of volcano include cryovolcanoes (or ice volcanoes), particularly on some moons of Jupiter, Saturn, and Neptune; and mud volcanoes, which are formations often not associated with known magmatic activity. Active mud volcanoes tend to involve temperatures much lower than those of igneous volcanoes except when the mud volcano is actually a vent of an igneous volcano.
Volcanic fissure vents are flat, linear fractures through which lava emerges.
Shield volcanoes, so named for their broad, shield-like profiles, are formed by the eruption of low-viscosity lava that can flow a great distance from a vent. They generally do not explode catastrophically. Since low-viscosity magma is typically low in silica, shield volcanoes are more common in oceanic than continental settings. The Hawaiian volcanic chain is a series of shield cones, and they are common in Iceland, as well.
Lava domes are built by slow eruptions of highly viscous lava. They are sometimes formed within the crater of a previous volcanic eruption, as in the case of Mount Saint Helens, but can also form independently, as in the case of Lassen Peak. Like stratovolcanoes, they can produce violent, explosive eruptions, but their lava generally does not flow far from the originating vent.
Cryptodomes are formed when viscous lava is forced upward causing the surface to bulge. The 1980 eruption of Mount St. Helens was an example; lava beneath the surface of the mountain created an upward bulge which slid down the north side of the mountain.
Volcanic cones (cinder cones)
Volcanic cones or cinder cones result from eruptions of mostly small pieces of scoria and pyroclastics (both resemble cinders, hence the name of this volcano type) that build up around the vent. These can be relatively short-lived eruptions that produce a cone-shaped hill perhaps 30 to 400 meters high. Most cinder cones erupt only once. Cinder cones may form as flank vents on larger volcanoes, or occur on their own. Parícutin in Mexico and Sunset Crater in Arizona are examples of cinder cones. In New Mexico, Caja del Rio is a volcanic field of over 60 cinder cones.
Stratovolcanoes (composite volcanoes)
Stratovolcanoes or composite volcanoes are tall conical mountains composed of lava flows and other ejecta in alternate layers, the strata that give rise to the name. Stratovolcanoes are also known as composite volcanoes because they are created from multiple structures during different kinds of eruptions. Strato/composite volcanoes are made of cinders, ash, and lava. Cinders and ash pile on top of each other, lava flows on top of the ash, where it cools and hardens, and then the process repeats. Classic examples include Mount Fuji in Japan, Mayon Volcano in the Philippines, and Mount Vesuvius and Stromboli in Italy.
Throughout recorded history, ash produced by the explosive eruption of stratovolcanoes has posed the greatest volcanic hazard to civilizations. Not only do stratovolcanoes have greater pressure build-up from the underlying lava flow than shield volcanoes, but their fissure vents and monogenetic volcanic fields (volcanic cones) have more powerful eruptions, as they are many times under extension. They are also steeper than shield volcanoes, with slopes of 30–35° compared to slopes of generally 5–10°, and their loose tephra are material for dangerous lahars. Large pieces of tephra are called volcanic bombs. Big bombs can measure more than 4 feet (1.2 meters) across and weigh several tons.
A supervolcano usually has a large caldera and can produce devastation on an enormous, sometimes continental, scale. Such volcanoes are able to severely cool global temperatures for many years after the eruption due to the huge volumes of sulfur and ash released into the atmosphere. They are the most dangerous type of volcano. Examples include: Yellowstone Caldera in Yellowstone National Park and Valles Caldera in New Mexico (both western United States); Lake Taupo in New Zealand; Lake Toba in Sumatra, Indonesia; and Ngorongoro Crater in Tanzania. Because of the enormous area they may cover, supervolcanoes are hard to identify centuries after an eruption. Similarly, large igneous provinces are also considered supervolcanoes because of the vast amount of basalt lava erupted (even though the lava flow is non-explosive).
Submarine volcanoes are common features of the ocean floor. In shallow water, active volcanoes disclose their presence by blasting steam and rocky debris high above the ocean's surface. In the ocean's deep, the tremendous weight of the water above prevents the explosive release of steam and gases; however, they can be detected by hydrophones and discoloration of water because of volcanic gases. Pillow lava is a common eruptive product of submarine volcanoes and is characterized by thick sequences of discontinuous pillow-shaped masses which form under water. Even large submarine eruptions may not disturb the ocean surface due to the rapid cooling effect and increased buoyancy of water (as compared to air) which often causes volcanic vents to form steep pillars on the ocean floor. Hydrothermal vents are common near these volcanoes, and some support peculiar ecosystems based on dissolved minerals. Over time, the formations created by submarine volcanoes may become so large that they break the ocean surface as new islands or floating pumice rafts.
Subglacial volcanoes develop underneath icecaps. They are made up of flat lava which flows at the top of extensive pillow lavas and palagonite. When the icecap melts, the lava on top collapses, leaving a flat-topped mountain. These volcanoes are also called table mountains, tuyas, or (uncommonly) mobergs. Very good examples of this type of volcano can be seen in Iceland; however, there are also tuyas in British Columbia. The origin of the term comes from Tuya Butte, which is one of the several tuyas in the area of the Tuya River and Tuya Range in northern British Columbia. Tuya Butte was the first such landform analyzed, and so its name has entered the geological literature for this kind of volcanic formation. The Tuya Mountains Provincial Park was recently established to protect this unusual landscape, which lies north of Tuya Lake and south of the Jennings River near the boundary with the Yukon Territory.
Mud volcanoes or mud domes are formations created by geo-excreted liquids and gases, although there are several processes which may cause such activity. The largest structures are 10 kilometers in diameter and reach 700 meters high.
Another way of classifying volcanoes is by the composition of material erupted (lava), since this affects the shape of the volcano. Lava can be broadly classified into four different compositions (Cas & Wright, 1987), summarized in the list below and restated in the short code sketch after it:
- If the erupted magma contains a high percentage (>63%) of silica, the lava is called felsic.
- Felsic lavas (dacites or rhyolites) tend to be highly viscous (not very fluid) and are erupted as domes or short, stubby flows. Viscous lavas tend to form stratovolcanoes or lava domes. Lassen Peak in California is an example of a volcano formed from felsic lava and is actually a large lava dome.
- Because siliceous magmas are so viscous, they tend to trap volatiles (gases) that are present, causing the magma to erupt catastrophically and eventually form stratovolcanoes. Pyroclastic flows (ignimbrites) are highly hazardous products of such volcanoes: composed of molten volcanic ash too heavy to rise into the atmosphere, they hug the volcano's slopes and travel far from their vents during large eruptions. Temperatures as high as 1,200 °C can occur in pyroclastic flows, which incinerate everything flammable in their path, and thick layers of hot pyroclastic flow deposits can be laid down, often many meters thick. Alaska's Valley of Ten Thousand Smokes, formed by the eruption of Novarupta near Katmai in 1912, is an example of a thick pyroclastic flow or ignimbrite deposit. Volcanic ash that is light enough to be erupted high into the Earth's atmosphere may travel many kilometres before it falls back to ground as a tuff.
- If the erupted magma contains 52–63% silica, the lava is of intermediate composition.
- These "andesitic" volcanoes generally only occur above subduction zones (e.g. Mount Merapi in Indonesia).
- Andesitic lava is typically formed at convergent boundary margins of tectonic plates, by several processes.
- If the erupted magma contains <52% and >45% silica, the lava is called mafic (because it contains higher percentages of magnesium (Mg) and iron (Fe)) or basaltic. These lavas are usually much less viscous than rhyolitic lavas, depending on their eruption temperature; they also tend to be hotter than felsic lavas. Mafic lavas occur in a wide range of settings.
- Some erupted magmas contain <=45% silica and produce ultramafic lava. Ultramafic flows, also known as komatiites, are very rare; indeed, very few have been erupted at the Earth's surface since the Proterozoic, when the planet's heat flow was higher. They are (or were) the hottest lavas, and probably more fluid than common mafic lavas.
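These silica thresholds amount to a simple decision rule; the sketch below encodes them directly (the function name is illustrative, not standard nomenclature):

```python
# A minimal sketch of the silica-based lava classification described above.
# Thresholds follow the text (Cas & Wright, 1987).

def classify_lava(silica_percent: float) -> str:
    """Return the broad compositional class for a given silica content (wt%)."""
    if silica_percent > 63:
        return "felsic"          # dacites, rhyolites; highly viscous
    if silica_percent >= 52:
        return "intermediate"    # andesitic
    if silica_percent > 45:
        return "mafic"           # basaltic; much less viscous
    return "ultramafic"          # komatiites; rare since the Proterozoic

print(classify_lava(70))  # felsic
print(classify_lava(50))  # mafic
```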
Two types of lava are named according to the surface texture: ʻAʻa (pronounced [ˈʔaʔa]) and pāhoehoe ([paːˈho.eˈho.e]), both Hawaiian words. ʻAʻa is characterized by a rough, clinkery surface and is the typical texture of viscous lava flows. However, even basaltic or mafic flows can be erupted as ʻaʻa flows, particularly if the eruption rate is high and the slope is steep.
Pāhoehoe is characterized by its smooth and often ropey or wrinkly surface and is generally formed from more fluid lava flows. Usually, only mafic flows will erupt as pāhoehoe, since they often erupt at higher temperatures or have the proper chemical make-up to allow them to flow with greater fluidity.
Popular classification of volcanoes
A popular way of classifying magmatic volcanoes is by their frequency of eruption: those that erupt regularly are called active, those that have erupted in historical times but are now quiet are called dormant or inactive, and those that have not erupted in historical times are called extinct. However, these popular classifications, extinct in particular, are practically meaningless to scientists. They use classifications which refer to a particular volcano's formative and eruptive processes and resulting shapes, as explained above.
There is no consensus among volcanologists on how to define an "active" volcano. The lifespan of a volcano can vary from months to several million years, making such a distinction sometimes meaningless when compared to the lifespans of humans or even civilizations. For example, many of Earth's volcanoes have erupted dozens of times in the past few thousand years but are not currently showing signs of eruption. Given the long lifespan of such volcanoes, they are very active. By human lifespans, however, they are not.
Scientists usually consider a volcano to be erupting or likely to erupt if it is currently erupting, or showing signs of unrest such as unusual earthquake activity or significant new gas emissions. Most scientists consider a volcano active if it has erupted in the last 10,000 years (Holocene times) – the Smithsonian Global Volcanism Program uses this definition of active. Most volcanoes are situated on the Pacific Ring of Fire. An estimated 500 million people live near active volcanoes.
Historical time (or recorded history) is another timeframe for active. The Catalogue of the Active Volcanoes of the World, published by the International Association of Volcanology, uses this definition, by which there are more than 500 active volcanoes. However, the span of recorded history differs from region to region. In China and the Mediterranean, it reaches back nearly 3,000 years, but in the Pacific Northwest of the United States and Canada, it reaches back less than 300 years, and in Hawaii and New Zealand, only around 200 years.
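The two definitions of "active" can be contrasted in a short sketch; the regional figures follow the paragraph above, and all names here are illustrative:

```python
# Hedged sketch contrasting the Smithsonian's Holocene criterion
# (eruption within ~10,000 years) with the "recorded history" criterion,
# whose depth varies by region, per the text above.

HOLOCENE_YEARS = 10_000

RECORDED_HISTORY_YEARS = {
    "Mediterranean": 3_000,
    "China": 3_000,
    "Pacific Northwest": 300,
    "Hawaii": 200,
    "New Zealand": 200,
}

def is_active(years_since_eruption: float, region: str) -> dict:
    """Apply both definitions of 'active' to a single volcano."""
    holocene = years_since_eruption <= HOLOCENE_YEARS
    historical = years_since_eruption <= RECORDED_HISTORY_YEARS.get(region, 0)
    return {"holocene_active": holocene, "historically_active": historical}

print(is_active(5_000, "Hawaii"))
# {'holocene_active': True, 'historically_active': False}
```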
As of 2013, the following are considered Earth's most active volcanoes:
- Kīlauea, the famous Hawaiian volcano, has been in continuous, effusive eruption since 1983, and has the longest-observed lava lake.
- Mount Etna and nearby Stromboli, two Mediterranean volcanoes in "almost continuous eruption" since antiquity.
- Mount Yasur, in Vanuatu, has been erupting "nearly continuously" for over 800 years.
The longest currently ongoing (but not necessarily continuous) volcanic eruptive phases are:
- Mount Yasur, 111 years
- Mount Etna, 109 years
- Stromboli, 108 years
- Santa María, 101 years
- Sangay, 94 years
Other very active volcanoes include:
- Mount Nyiragongo and its neighbor, Nyamuragira, are Africa's most active volcanoes
- Piton de la Fournaise, in Réunion, erupts frequently enough to be a tourist attraction.
- Erta Ale, in the Afar Triangle, has maintained a lava lake since at least 1906.
- Mount Erebus, in Antarctica, has maintained a lava lake since at least 1972.
- Mount Merapi
- Whakaari / White Island has been in a continuous state of smoking since its discovery in 1769.
- Ol Doinyo Lengai
- Arenal Volcano
- Klyuchevskaya Sopka
Extinct volcanoes are those that scientists consider unlikely to erupt again because the volcano no longer has a magma supply. Examples of extinct volcanoes include many volcanoes on the Hawaiian–Emperor seamount chain in the Pacific Ocean, Hohentwiel in Germany, Shiprock in New Mexico, and the Zuidwal volcano in the Netherlands. Edinburgh Castle in Scotland is famously located atop an extinct volcano. Otherwise, whether a volcano is truly extinct is often difficult to determine. Since "supervolcano" calderas can have eruptive lifespans sometimes measured in millions of years, a caldera that has not produced an eruption in tens of thousands of years is likely to be considered dormant instead of extinct. Some volcanologists refer to extinct volcanoes as inactive, though the term is now more commonly used for dormant volcanoes once thought to be extinct.
It is difficult to distinguish an extinct volcano from a dormant (inactive) one. Volcanoes are often considered to be extinct if there are no written records of their activity. Nevertheless, volcanoes may remain dormant for a long period of time. For example, Yellowstone has a repose/recharge period of around 700,000 years, and Toba of around 380,000 years. Vesuvius was described by Roman writers as having been covered with gardens and vineyards before its eruption of AD 79, which destroyed the towns of Herculaneum and Pompeii. Before its catastrophic eruption of 1991, Pinatubo was an inconspicuous volcano, unknown to most people in the surrounding areas. Two other examples are the long-dormant Soufrière Hills volcano on the island of Montserrat, thought to be extinct before activity resumed in 1995, and Fourpeaked Mountain in Alaska, which, before its September 2006 eruption, had not erupted since before 8000 BC and had long been thought to be extinct.
Technical classification of volcanoes
The three common popular classifications of volcanoes can be subjective, and some volcanoes thought to have been extinct have erupted again. To help prevent people from falsely believing they are not at risk when living on or near a volcano, countries have adopted new classifications to describe the various levels and stages of volcanic activity. Some alert systems use different numbers or colors to designate the different stages; others use colors and words; some use a combination of both.
Volcano warning schemes of the United States
The United States Geological Survey (USGS) has adopted a common system nationwide for characterizing the level of unrest and eruptive activity at volcanoes. The new volcano alert-level system now classifies volcanoes as being in a normal, advisory, watch or warning stage. Additionally, colors are used to denote the amount of ash produced. Details of the U.S. system can be found at Volcano warning schemes of the United States.
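The four stages map naturally onto an ordered enumeration; a minimal sketch follows (the level descriptions and helper are illustrative, not USGS code):

```python
# Illustrative model of the four-stage alert-level system named above.
from enum import IntEnum

class AlertLevel(IntEnum):
    NORMAL = 0    # typical background activity
    ADVISORY = 1  # elevated unrest above background
    WATCH = 2     # heightened unrest; eruption possible
    WARNING = 3   # hazardous eruption imminent or underway

def escalated(previous: AlertLevel, current: AlertLevel) -> bool:
    """True when the alert level has been raised."""
    return current > previous

print(escalated(AlertLevel.ADVISORY, AlertLevel.WATCH))  # True
```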
The Decade Volcanoes are 17 volcanoes identified by the International Association of Volcanology and Chemistry of the Earth's Interior (IAVCEI) as being worthy of particular study in light of their history of large, destructive eruptions and proximity to populated areas. They are named Decade Volcanoes because the project was initiated as part of the United Nations-sponsored International Decade for Natural Disaster Reduction.
The Deep Earth Carbon Degassing Project, an initiative of the Deep Carbon Observatory, monitors nine volcanoes, two of which are Decade Volcanoes. The focus of the project is to use Multi-Component Gas Analyzer System instruments to measure CO2/SO2 ratios in real time and at high resolution, allowing detection of the pre-eruptive degassing of rising magmas and improving prediction of volcanic activity.
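As a rough illustration of ratio-based monitoring, the sketch below flags samples whose CO2/SO2 ratio rises well above a baseline; the instrument interface, units, and alert threshold are hypothetical:

```python
# Hypothetical sketch of CO2/SO2 ratio monitoring; rising ratios are
# treated as a possible pre-eruptive degassing signal, as described above.

def co2_so2_ratio(co2_ppm: float, so2_ppm: float) -> float:
    if so2_ppm <= 0:
        raise ValueError("SO2 reading must be positive")
    return co2_ppm / so2_ppm

def flag_degassing(readings, baseline_ratio, factor=2.0):
    """Yield readings whose CO2/SO2 ratio exceeds factor x baseline."""
    for co2, so2 in readings:
        ratio = co2_so2_ratio(co2, so2)
        if ratio > factor * baseline_ratio:
            yield co2, so2, ratio

samples = [(400.0, 20.0), (900.0, 15.0)]
for co2, so2, r in flag_degassing(samples, baseline_ratio=20.0):
    print(f"elevated CO2/SO2 ratio: {r:.1f}")  # flags the second sample
```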
Effects of volcanoes
There are many different types of volcanic eruptions and associated activity: phreatic eruptions (steam-generated eruptions), explosive eruption of high-silica lava (e.g., rhyolite), effusive eruption of low-silica lava (e.g., basalt), pyroclastic flows, lahars (debris flow) and carbon dioxide emission. All of these activities can pose a hazard to humans. Earthquakes, hot springs, fumaroles, mud pots and geysers often accompany volcanic activity.
The concentrations of different volcanic gases can vary considerably from one volcano to the next. Water vapor is typically the most abundant volcanic gas, followed by carbon dioxide and sulfur dioxide. Other principal volcanic gases include hydrogen sulfide, hydrogen chloride, and hydrogen fluoride. A large number of minor and trace gases are also found in volcanic emissions, for example hydrogen, carbon monoxide, halocarbons, organic compounds, and volatile metal chlorides.
Large, explosive volcanic eruptions inject water vapor (H2O), carbon dioxide (CO2), sulfur dioxide (SO2), hydrogen chloride (HCl), hydrogen fluoride (HF) and ash (pulverized rock and pumice) into the stratosphere to heights of 16–32 kilometres (10–20 mi) above the Earth's surface. The most significant impacts from these injections come from the conversion of sulfur dioxide to sulfuric acid (H2SO4), which condenses rapidly in the stratosphere to form fine sulfate aerosols; the SO2 emissions alone are sufficient to compare the potential climatic impact of two different eruptions. The aerosols increase the Earth's albedo (its reflection of radiation from the Sun back into space) and thus cool the Earth's lower atmosphere, or troposphere; however, they also absorb heat radiated up from the Earth, thereby warming the stratosphere. Several eruptions during the past century have caused a decline in the average temperature at the Earth's surface of up to half a degree Fahrenheit for periods of one to three years; sulfur dioxide from the eruption of Huaynaputina probably caused the Russian famine of 1601–1603.
One proposed volcanic winter happened c. 70,000 years ago following the supereruption of Lake Toba on Sumatra island in Indonesia. According to the Toba catastrophe theory to which some anthropologists and archeologists subscribe, it had global consequences, killing most humans then alive and creating a population bottleneck that affected the genetic inheritance of all humans today. The 1815 eruption of Mount Tambora created global climate anomalies that became known as the "Year Without a Summer" because of the effect on North American and European weather. Agricultural crops failed and livestock died in much of the Northern Hemisphere, resulting in one of the worst famines of the 19th century. The freezing winter of 1740–41, which led to widespread famine in northern Europe, may also owe its origins to a volcanic eruption.
It has been suggested that volcanic activity caused or contributed to the End-Ordovician, Permian-Triassic, and Late Devonian mass extinctions, and possibly others. The massive eruptive event which formed the Siberian Traps, one of the largest known volcanic events of the last 500 million years of Earth's geological history, continued for a million years and is considered the likely cause of the "Great Dying" about 250 million years ago, which is estimated to have killed 90% of species existing at the time.
The sulfate aerosols also promote complex chemical reactions on their surfaces that alter chlorine and nitrogen chemical species in the stratosphere. This effect, together with increased stratospheric chlorine levels from chlorofluorocarbon pollution, generates chlorine monoxide (ClO), which destroys ozone (O3). As the aerosols grow and coagulate, they settle down into the upper troposphere where they serve as nuclei for cirrus clouds and further modify the Earth's radiation balance. Most of the hydrogen chloride (HCl) and hydrogen fluoride (HF) are dissolved in water droplets in the eruption cloud and quickly fall to the ground as acid rain. The injected ash also falls rapidly from the stratosphere; most of it is removed within several days to a few weeks. Finally, explosive volcanic eruptions release the greenhouse gas carbon dioxide and thus provide a deep source of carbon for biogeochemical cycles.
Gas emissions from volcanoes are a natural contributor to acid rain. Volcanic activity releases about 130 to 230 teragrams (145 million to 255 million short tons) of carbon dioxide each year. Volcanic eruptions may inject aerosols into the Earth's atmosphere. Large injections may cause visual effects such as unusually colorful sunsets and affect global climate mainly by cooling it. Volcanic eruptions also provide the benefit of adding nutrients to soil through the weathering process of volcanic rocks. These fertile soils assist the growth of plants and various crops. Volcanic eruptions can also create new islands, as the magma cools and solidifies upon contact with the water.
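A quick arithmetic check of the unit conversion quoted above (1 teragram = 10^9 kg; 1 short ton is about 907.185 kg):

```python
# Verifying the quoted figures: 130-230 Tg of CO2 per year, expressed in
# millions of short tons (the text quotes 145-255 million, lightly rounded).

TG_IN_KG = 1e9
SHORT_TON_IN_KG = 907.185

def tg_to_million_short_tons(tg: float) -> float:
    return tg * TG_IN_KG / SHORT_TON_IN_KG / 1e6

print(round(tg_to_million_short_tons(130)))  # ~143 million short tons
print(round(tg_to_million_short_tons(230)))  # ~254 million short tons
```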
Ash thrown into the air by eruptions can present a hazard to aircraft, especially jet aircraft, where the particles can be melted by the high operating temperature; the melted particles then adhere to the turbine blades and alter their shape, disrupting the operation of the turbine. Dangerous encounters in 1982, after the eruption of Galunggung in Indonesia, and in 1989, after the eruption of Mount Redoubt in Alaska, raised awareness of this phenomenon. Nine Volcanic Ash Advisory Centers were established by the International Civil Aviation Organization to monitor ash clouds and advise pilots accordingly. The 2010 eruptions of Eyjafjallajökull caused major disruptions to air travel in Europe.
Volcanoes on other celestial bodies
The Earth's Moon has no large volcanoes and no current volcanic activity, although recent evidence suggests it may still possess a partially molten core. However, the Moon does have many volcanic features such as maria (the darker patches seen on the moon), rilles and domes.
The planet Venus has a surface that is 90% basalt, indicating that volcanism played a major role in shaping its surface. The planet may have had a major global resurfacing event about 500 million years ago, judging from the density of impact craters on the surface. Lava flows are widespread, and forms of volcanism not present on Earth occur as well. Changes in the planet's atmosphere and observations of lightning have been attributed to ongoing volcanic eruptions, although there is no confirmation of whether or not Venus is still volcanically active. However, radar sounding by the Magellan probe revealed evidence for comparatively recent volcanic activity at Venus's highest volcano, Maat Mons, in the form of ash flows near the summit and on the northern flank.
There are several extinct volcanoes on Mars, four of which (Arsia Mons, Ascraeus Mons, Olympus Mons, and Pavonis Mons) are vast shield volcanoes far bigger than any on Earth; Hecates Tholus is another. These volcanoes have been extinct for many millions of years, but the European Mars Express spacecraft has found evidence that volcanic activity may have occurred on Mars in the recent past as well.
Jupiter's moon Io is the most volcanically active object in the solar system because of tidal interaction with Jupiter. It is covered with volcanoes that erupt sulfur, sulfur dioxide and silicate rock, and as a result, Io is constantly being resurfaced. Its lavas are the hottest known anywhere in the solar system, with temperatures exceeding 1,800 K (1,500 °C). In February 2001, the largest recorded volcanic eruptions in the solar system occurred on Io. Europa, the smallest of Jupiter's Galilean moons, also appears to have an active volcanic system, except that its volcanic activity is entirely in the form of water, which freezes into ice on the frigid surface. This process is known as cryovolcanism, and is apparently most common on the moons of the outer planets of the solar system.
In 1989 the Voyager 2 spacecraft observed cryovolcanoes (ice volcanoes) on Triton, a moon of Neptune, and in 2005 the Cassini–Huygens probe photographed fountains of frozen particles erupting from Enceladus, a moon of Saturn. The ejecta may be composed of water, liquid nitrogen, dust, or methane compounds. Cassini–Huygens also found evidence of a methane-spewing cryovolcano on the Saturnian moon Titan, which is believed to be a significant source of the methane found in its atmosphere. It is theorized that cryovolcanism may also be present on the Kuiper Belt Object Quaoar.
A 2010 study of the exoplanet COROT-7b, which was detected by transit in 2009, suggested that tidal heating from the host star very close to the planet and from neighboring planets could generate intense volcanic activity similar to that of Io.
Traditional beliefs about volcanoes
Many ancient accounts ascribe volcanic eruptions to supernatural causes, such as the actions of gods or demigods. To the ancient Greeks, volcanoes' capricious power could only be explained as acts of the gods, while 16th/17th-century German astronomer Johannes Kepler believed they were ducts for the Earth's tears. One early idea counter to this was proposed by Jesuit Athanasius Kircher (1602–1680), who witnessed eruptions of Mount Etna and Stromboli, then visited the crater of Vesuvius and published his view of an Earth with a central fire connected to numerous others caused by the burning of sulfur, bitumen and coal.
Various explanations were proposed for volcano behavior before the modern understanding of the Earth's mantle structure as a semisolid material was developed. For decades after awareness that compression and radioactive materials may be heat sources, their contributions were specifically discounted. Volcanic action was often attributed to chemical reactions and a thin layer of molten rock near the surface.
- NSTA Press / Archive.Org (2007). "Earthquakes, Volcanoes, and Tsunamis" (PDF). Resources for Environmental Literacy. Archived from the original (PDF) on July 14, 2014. Retrieved April 22, 2014.
- Foulger, G.R. (2010). Plates vs. Plumes: A Geological Controversy. Wiley-Blackwell. ISBN 978-1-4051-6148-0.
- Davis A. Young (January 2016). "Volcano". Mind over Magma: The Story of Igneous Petrology. Retrieved January 11, 2016.
- Wood, C. A., 1979b. Cinder cones on Earth, Moon and Mars. Lunar Planet. Sci. X, 1370–1372.
- Meresse, S.; Costard, F. O.; Mangold, N.; Masson, P.; Neukum, G. (2008). "Formation and evolution of the chaotic terrains by subsidence and magmatism: Hydraotes Chaos, Mars". Icarus. 194 (2): 487. doi:10.1016/j.icarus.2007.10.023.
- Brož, P.; Hauber, E. (2012). "A unique volcanic field in Tharsis, Mars: Pyroclastic cones as evidence for explosive eruptions". Icarus. 218: 88. doi:10.1016/j.icarus.2011.11.030.
- Lawrence, S. J.; Stopar, J. D.; Hawke, B. R.; Greenhagen, B. T.; Cahill, J. T. S.; Bandfield, J. L.; Jolliff, B. L.; Denevi, B. W.; Robinson, M. S.; Glotch, T. D.; Bussey, D. B. J.; Spudis, P. D.; Giguere, T. A.; Garry, W. B. (2013). "LRO observations of morphology and surface roughness of volcanic cones and lobate lava flows in the Marius Hills". Journal of Geophysical Research: Planets. 118 (4): 615. doi:10.1002/jgre.20060.
- Lockwood, John P.; Hazlett, Richard W. (2010). Volcanoes: Global Perspectives. p. 552. ISBN 978-1-4051-6250-0.
- Berger, Melvin; Berger, Gilda; Bond, Higgins. "Volcanoes: Why and How." Why Do Volcanoes Blow Their Tops?: Questions and Answers about Volcanoes and Earthquakes. New York: Scholastic, 1999. p. 7. Print.
- "Volcanoes". European Space Agency. 2009. Retrieved August 16, 2012.
- Decker, Robert Wayne; Decker, Barbara (1991). Mountains of Fire: The Nature of Volcanoes. Cambridge University Press. p. 7. ISBN 0-521-31290-6. Retrieved August 16, 2012.
- Tilling, Robert I. (1997). "Volcano environments". Volcanoes. Denver, Colorado: U.S. Department of the Interior, U.S. Geological Survey. Retrieved August 16, 2012.
- "The most active volcanoes in the world". VolcanoDiscovery.com. Retrieved 3 August 2013.
- "The World's Five Most Active Volcanoes". livescience.com. Retrieved 4 August 2013.
- Chesner, C.A.; Rose, W.I.; Deino, A.; Drake, R.; Westgate, J.A. (March 1991). "Eruptive History of Earth's Largest Quaternary Caldera (Toba, Indonesia) Clarified" (PDF). Geology. 19 (3): 200–203. Bibcode:1991Geo....19..200C. doi:10.1130/0091-7613(1991)019<0200:EHOESL>2.3.CO;2. Retrieved January 20, 2010.
- "Volcanic Alert Levels of Various Countries". Volcanolive.com. Retrieved August 22, 2011.
- "Forecasting Etna eruptions by real-time observation of volcanic gas composition".
- Pedone, M.; Aiuppa, A.; Giudice, G.; Grassa, F.; Francofonte, V.; Bergsson, B.; Ilyinskaya, E. (2014). "Tunable diode laser measurements of hydrothermal/volcanic CO2 and implications for the global CO2 budget." (PDF). Solid Earth. 5: 1209–1221. doi:10.5194/se-5-1209-2014.
- Miles, M. G.; Grainger, R. G.; Highwood, E. J. (2004). "The significance of volcanic eruption strength and frequency for climate" (PDF). Quarterly Journal of the Royal Meteorological Society. 130: 2361–2376. doi:10.1256/qj.30.60.
- University of California – Davis (April 25, 2008). "Volcanic Eruption Of 1600 Caused Global Disruption". ScienceDaily.
- "Supervolcano Eruption – In Sumatra – Deforested India 73,000 Years Ago". ScienceDaily. November 24, 2009.
- "The new batch – 150,000 years ago". BBC – Science & Nature – The evolution of man.
- "When humans faced extinction". BBC. June 9, 2003. Retrieved January 5, 2007.
- de Boer, Jelle Zeilinga; Sanders, Donald Theodore (2002). Volcanoes in Human History: The Far-Reaching Effects of Major Eruptions. Princeton University Press. p. 155. ISBN 0-691-05081-3.
- Oppenheimer, Clive (2003). "Climatic, environmental and human consequences of the largest known historic eruption: Tambora volcano (Indonesia) 1815". Progress in Physical Geography. 27 (2): 230–259. doi:10.1191/0309133303pp379ra.
- "Ó Gráda, C.: Famine: A Short History". Princeton University Press.
- "Yellowstone's Super Sister". Discovery Channel.
- Benton M J (2005). When Life Nearly Died: The Greatest Mass Extinction of All Time. Thames & Hudson. ISBN 978-0-500-28573-2.
- McGee, Kenneth A.; Doukas, Michael P.; Kessler, Richard; Gerlach, Terrence M. (May 1997). "Impacts of Volcanic Gases on Climate, the Environment, and People". United States Geological Survey. Retrieved 9 August 2014. This article incorporates text from this source, which is in the public domain.
- "Volcanic Gases and Their Effects". U.S. Geological Survey. Retrieved June 16, 2007.
- M. A. Wieczorek, B. L. Jolliff, A. Khan, M. E. Pritchard, B. P. Weiss, J. G. Williams, L. L. Hood, K. Righter, C. R. Neal, C. K. Shearer, I. S. McCallum, S. Tompkins, B. R. Hawke, C. Peterson, J. J. Gillis, B. Bussey (2006). "The Constitution and Structure of the Lunar Interior". Reviews in Mineralogy and Geochemistry. 60 (1): 221–364. doi:10.2138/rmg.2006.60.3.
- Bindschadler, D. L. (1995). "Magellan: A new view of Venus' geology and geophysics". Reviews of Geophysics. 33: 459. doi:10.1029/95RG00281. Retrieved 28 September 2015.
- "Glacial, volcanic and fluvial activity on Mars: latest images". European Space Agency. February 25, 2005. Retrieved August 17, 2006.
- "Exceptionally bright eruption on Io rivals largest in Solar System", November 13, 2002.
- "Cassini Finds an Atmosphere on Saturn's Moon Enceladus". PPARC. 16 March 2005. Archived from the original on 2007-03-10. Retrieved 4 July 2014.
- Smith, Yvette (March 15, 2012). "Enceladus, Saturn's Moon". Image of the Day Gallery. NASA. Retrieved 4 July 2014.
- "Hydrocarbon volcano discovered on Titan". Newscientist.com. June 8, 2005. Retrieved October 24, 2010.
- Jaggard, Victoria (February 5, 2010). ""Super Earth" May Really Be New Planet Type: Super-Io". National Geographic web site daily news. National Geographic Society. Retrieved March 11, 2010.
- Williams, Michael (November 2007). "Hearts of fire". Morning Calm. Korean Air Lines (11–2007): 6.
- Cas, R.A.F. and J.V. Wright, 1987. Volcanic Successions. Unwin Hyman Inc. 528p. ISBN 0-04-552022-4
- Macdonald, Gordon and Agatin T. Abbott. (1970). Volcanoes in the Sea. University of Hawaii Press, Honolulu. 441 p.
- Marti, Joan & Ernst, Gerald. (2005). Volcanoes and the Environment. Cambridge University Press. ISBN 0-521-59254-2.
- Ollier, Cliff. (1988). Volcanoes. Basil Blackwell, Oxford, UK, ISBN 0-631-15664-X (hardback), ISBN 0-631-15977-0 (paperback).
- Sigurðsson, Haraldur, ed. (1999). Encyclopedia of Volcanoes. Academic Press. ISBN 0-12-643140-X. This is a reference aimed at geologists, but many articles are accessible to non-professionals.
What are the Instructional Materials for Different Disabilities?
Accommodations for Students with Special Needs
Teacher Checklist to Maximize Accommodations

Rarely are there specific lesson plans for special education. Teachers take existing lesson plans and provide either accommodations or modifications to enable the student with special needs to have optimum success. This tip sheet focuses on four areas where one can make special accommodations to support students with special needs in the inclusive classroom:

1.) Instructional Materials
2.) Vocabulary
3.) Lesson Content
4.) Assessment

Instructional Materials: Are the materials you select for instruction conducive to meeting the needs of children with special needs? Can they see, hear or touch the materials to maximize learning? Are the instructional materials selected with all of the students in mind? What are your visuals, and are they appropriate for all? What will you use to demonstrate or simulate the learning concept? What other hands-on materials can you use to ensure that the students with needs will understand learning concepts? If you are using overheads, are there extra copies for students who need to see it closer or have it repeated? Does the student have a peer who will help?

Vocabulary: Do the students understand the vocabulary necessary for the specific concept you are going to teach? Is there a need to focus first on the vocabulary prior to starting the lesson? How will you introduce the new vocabulary to the students?
What will your overview look like? How will your overview engage the students?

Lesson Content: Does your lesson focus completely on the content? Does what the students do extend or lead them to new learning? (Word-search activities rarely lead to any learning.) What will ensure that the students are engaged? What type of review will be necessary? How will you ensure that students are understanding? Have you built in time for a breakout or change in activity? Many children have difficulty sustaining attention for lengthy periods of time. Have you maximized assistive technology where appropriate for specific students? Do the students have an element of choice in the learning activities? Have you addressed the multiple learning styles? Do you need to teach the student specific learning skills for the lesson (how to stay on task, how to keep organized, how to get help when stuck, etc.)? What strategies are in place to help re-focus the child, continue to build self-esteem and prevent the child from being overwhelmed?

Assessment: Do you have alternate means of assessment for students with special needs (word processors, oral or taped feedback)? Do they have longer timelines? Have you provided checklists, graphic organizers, and/or outlines? Does the child have reduced quantities?

In Summary
Overall, this may seem like a lot of questions to ask yourself to ensure that all students have maximized learning opportunities. However, once you get into the habit of this type of reflection as you plan each learning experience, you will soon be a pro at ensuring the inclusive classroom works as best as it can to meet the diverse group of students found in most classrooms today. Always remember that no two students learn the same; be patient and continue to differentiate both instruction and assessment as much as possible.

About Spelling: What to Look For in a Spelling Program. The Dos and Don'ts of Spelling.

First of all, please note that very little research is available regarding the teaching and acquisition of spelling skills. However, evidence of good practice is. Many teachers have developed tried and true strategies to help their students become better spellers. Here is what they say and do:

Do have a word wall. Don't forget to change the words. Word walls provide a great strategy for young learners to see and write the words they need, when they need them. Change the words as needed throughout the year to ensure maximum learning. Use it all year, refer to it often and make sure the words are relevant to their learning throughout the year. Word walls will benefit students in kindergarten to the 3rd grade; however, they can be used in the inclusive classroom at any grade. Word wall words should be alphabetized to help children locate the word they need quickly.

Do provide spelling lists that meet the weekly/monthly needs. Don't use those traditional spelling texts. Students need to be able to spell the words they need to write. Therefore, their spelling lists need to be connected to other things that are currently being taught. For instance, if you are teaching transportation, the spelling words should be those that they need to know, like: fast, slow, air, ground, fly, train, etc. Have your students brainstorm the list of words they need to learn on a regular basis. Everyday words should be included in their word walls. Words that have certain patterns are good to learn as well. These would be the word families and words with similar patterns like through, enough, etc. I can't find any research to indicate that spelling texts lead to improved spelling ability or new learning. Also, note that word searches, alphabetizing words, and writing words out rarely lead to new learning or improved spelling ability. Applying words in authentic situations is much more worthwhile.

Do focus on the 44 sounds throughout the year. Don't just focus on the long and short vowels and beginning and ending consonants. When you think about ape and apple, long and short come to mind. However, what about the a sound in star and in jaw? Is it long or short? If you're teaching about some of the spelling patterns, be aware of the 44 different sounds.
Do provide strategies to help them spell. Don't bother with weekly spelling tests. Help students recognize spelling patterns, generalizations and some of the basic rules. When students write, have them circle the words they're uncertain about. This will help them learn them. Spelling tests only support short-term memory and don't tend to lead to permanent learning. Help them to notice the patterns and help them to make connections. (If funny has 2 consonants, how do you think bunny and runny would be spelled? Prompt children to identify the patterns.)

Do use spelling patterns, everyday words and theme-based words focused on your specific curricular area. Although some children enjoy the weekly spelling tests, others spend far too much time memorizing words and all too often forget them. The weekly spelling test tends to only be a test of short-term memory.

Don't over-emphasize spelling rules. Remember that thinking is more important than memory and leads to more permanent learning. There are also many exceptions to the spelling rules, so choose the rules you teach carefully.

The 44 Sounds in Spelling

When considering a spelling program and how best to help children learn the sounds of the English language, remember to choose words that help them understand all of the 44 sounds (19 vowel sounds, including 5 long vowels, 5 short vowels, 3 diphthongs, 2 "oo" sounds and 4 r-controlled vowel sounds, plus 25 consonant sounds). The following lists provide you with sample words to teach the sounds in the English language.

The 5 Short Vowel Sounds
- short -a- in and, as, after
- short -e- in pen, hen, lend
- short -i- in it, in
- short -o- in top, hop
- short -u- in under, cup

The 6 Long Vowel Sounds
- long -a- in make, take
- long -e- in beet, feet
- long -i- in tie, lie
- long -o- in coat, toe
- long -u- (yoo) in rule
- long -oo- in few, blue

The R-Controlled Vowel Sounds
- -ur- in her, bird, and hurt
- -ar- in bark, dark
- -or- in fork, pork, stork

The 18 Consonant Sounds
C, q and x are missing as they are found in other sounds. (The c sound is found in the k sound, and in the s sound in words like cereal, city and cent. The q sound is found in kw words like backwards and Kwanzaa. The x sound is also found in ks words like kicks.)
- -b- in bed, bad
- -k- in cat and kick
- -d- in dog
- -f- in fat
- -g- in got
- -h- in has
- -j- in job
- -l- in lid
- -m- in mop
- -n- in not
- -p- in pan
- -r- in ran
- -s- in sit
- -t- in to
- -v- in van
- -w- in went
- -y- in yellow
- -z- in zipper

The Blends
Blends are 2 or 3 letters combined to form a distinct spelling sound. The blend sounds:
- -bl- in blue and black
- -cl- in clap and close
- -fl- in fly and flip
- -gl- in glue and glove
- -pl- in play and please
- -br- in brown and break
- -cr- in cry and crust
- -dr- in dry and drag
- -fr- in fry and freeze
- -gr- in great and grand
- -pr- in prize and prank
- -tr- in tree and try
- -sk- in skate and sky
- -sl- in slip and slap
- -sp- in spot and speed
- -st- in street and stop
- -sw- in sweet and sweater
- -spr- in spray and spring
- -str- in stripe and strap
The 7 Digraph Sounds
- -ch- in chin and ouch
- -sh- in ship and push
- -th- in thing
- -th- in this
- -wh- in when
- -ng- in ring
- -nk- in rink

The Other Special Sounds, Including Diphthongs
- -oi- in foil and toy
- -ow- in owl and ouch
- short -oo- in took and pull
- -aw- in raw and haul
- -zh- in vision

Spelling Digraphs

Word Walls: How To Use a Word Wall
Effective Use of Word Walls and Word Cards
From Sue Watson, former About.com Guide
Find out how to use word walls or word flash cards. Learning to read is key to a child's future success, and when we discover reluctant readers or non-readers, we are usually quick to assess the methods that will provide success. Although a good early reading program consists of phonics, listening/thinking, letter formation, letter sounds, real reading, and sight words, this article will focus on the importance of phonics using word walls and/or word cards.

Phonics is mainly concerned with sounds, learning letter formation, blending sounds and the ability to identify sounds in words. Learning the sounds of letters leads children to the next step: applying the sounds, including the blends, to hear the words. When main letter sounds are known, the child applies this knowledge to words. (For instance, if the popular sounds are learned first (s, t, m, r, c, f, etc.), the knowledge is then in place for a child to recognize cat, fat, mat, sat, rat, etc.)

Word walls can be used from kindergarten to the eighth grade. A child needs a set of word cards, or word walls should be in place. Begin with the Dolch words at the appropriate level. Also use the word family cards to extend word knowledge, again beginning with the easiest level first.

Activities for the Use of Word Cards/Walls
- Put the words in alpha order as each is said aloud.
- Print a rhyming word for 10 of the word cards or word wall words.
- Use the cards in a flash game with a partner.
- Put the cards in piles: those you can add an s to and those you can't.
- Write a word wall story; see how many of the words you can use.
- Use a timer to see how fast the words can be read.
- Change 1 or 2 of the letters to see if new words can be made.
- Write in a journal and underline the word wall/card words.
- How many different ways can you add or take away a letter to make new words? i.e., ten - tent - then.
- Children must state 5 facts or ask 5 questions beginning with their chosen word cards/wall words.

The goals for word wall or word card activities are: being able to read common and word-family words accurately and quickly; being able to spell the word card/wall words; and self-assessing the spelling and reading of the words.

Parent connections are extremely valuable in the reading process. Give parents a list of high-frequency (Dolch) words and the word families with a few strategies to support reading at home.

Printable Word Wall Words: Activities for Flash Word Cards and Word Walls

List 1: a, and, away, big, blue, can, come, down, find, for, funny, go, help, here, in, is, it, jump, little, look, make, me, one, play,
my, not, red, run, said, see, the, three, to, two, up, we.

Phonics, Letters and the Alphabet: Phonics Worksheets; Print the Letter A.
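For teachers comfortable with a little scripting, the timed reading activity above ("use a timer to see how fast the words can be read") can be mocked up in a few lines. The word list below is drawn from Dolch List 1 above; everything else is illustrative:

```python
# A small sketch of a timed flash-card drill; any word list can be substituted.
import random
import time

LIST_1 = ["a", "and", "away", "big", "blue", "can", "come", "down",
          "find", "for", "funny", "go", "help", "here", "in", "is"]

def flash_drill(words):
    """Show each word, wait for Enter, and report total reading time."""
    deck = words[:]
    random.shuffle(deck)
    start = time.monotonic()
    for word in deck:
        input(f"Read aloud: {word}  (press Enter for the next card)")
    elapsed = time.monotonic() - start
    print(f"{len(deck)} words in {elapsed:.1f} seconds")

# flash_drill(LIST_1)  # uncomment to run interactively
```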
Dolch Cloze Worksheets

Name ____________________________

1. I have a __ __ __ hat. (red, the, fun)
2. My __ __ __ has a tail. (arm, let, dog)
3. I can __ __ __ fast! (tan, run, let)
4. Do you like __ __? (do, hi, me)
5. How are __ __ __? (fun, hat, you)
6. __ __ __ __ with me. (play, door, look)
7. Where is my __ __ __ __? (tall, door, book)
8. Can I __ __ __ __? (some, fall, come)
9. Close the __ __ __ __. (took, door, ball)
10. Can you __ __ __ me? (and, fan, see)
11. Let me __ __! (hi, go, it)
12. __ __ __ __ at the dog. (find, snow, look)

Rhyming Words: All Word Families
-ack: back, black, crack, pack, quack, rack, sack, snack, stack, tack, track, whack
-ad: ad, dad, fad, glad, grad, had, lad, mad, pad, rad, sad, tad
-ail: fail, hail, jail, mail, nail, pail, rail, sail, snail, tail
-ain: brain, chain, drain, gain, grain, main, pain, plain, rain, stain, strain, train
-ake: bake, cake, flake, make, rake, take
-ale: bale, male, pale, scale, tale, whale
-all: ball, call, fall, hall, mall, small, tall, wall
-am: am, ham, jam, slam, spam, yam
-ame: blame, came, flame, frame, game, lame, name, same, tame
-an: an, ban, can, fan, man, pan, plan, ran, tan, van
-ank: bank, blank, crank, drank, plank, sank, spank, tank, thank, yank
-ap: cap, clap, flap, gap, lap, map, nap, rap, sap, slap, scrap, tap
-ar: are, bar, char, car, far, jar, par, scar, cigar, guitar
-ash: ash, bash, cash, crash, dash, flash, gash, hash, mash, rash, sash, slash, smash, splash, trash
-at: at, bat, brat, cat, fat, hat, mat, pat, rat, sat, spat, tat, that, vat
-aw: claw, draw, flaw, jaw, law, paw, straw, thaw
-ay: away, bay, clay, day, gay, gray, hay, lay, may, okay, pay, play, way, spray, stay, tray

Letter Sound and Alphabet Worksheets

Checklist for Readers Ages 3-5 Years: Reading Strategies
From Sue Watson, former About.com Guide

Reading comprehension and reading strategies are key to success. Early diagnosis of learning disabilities is crucial to helping with the skills of reading. Here's a checklist to determine if your child/student is at an expected level of development.
1. ____ The child enjoys being read to and has expressed an interest in favorite books.
2. ____ The child is able to sit and listen to stories being read to him/her and takes an interest in the illustrations.
3. ____ The child pretends to read by holding the book correctly, turns the pages and makes reference to the story from memory and from the pictures.
4. ____ The child recognizes his/her own name and knows some of the letters of the alphabet.
5. ____ When prompted, the child recalls events in the story.
6. ____ The child enjoys participating in songs, chimes, chants, poems and storybook times.
7. ____ The child chimes in on familiar or predictable stories.
8. ____ When prompted, the child can distinguish the beginning, middle and end of the story.
9. ____ Some children will have sound-symbol correspondence; they'll know that B is what the word ball begins with.
10. ____ Is beginning to recognize similarities and differences between stories or characters.

If you've checked most of the boxes, there's nothing to worry about. However, if the child isn't displaying many of the readiness-for-reading characteristics, the child may be showing signs of having language delays or a learning disability. Refer to some of the helpful Suggested Reading on this page to guide you.

Reading Disability Checklist 4-6 Years: Reading Comprehension
From Sue Watson, former About.com Guide
Reading comprehension and effective reading strategies are critical to the reading process. Early diagnosis of learning disabilities is crucial to helping with the skills of reading. Here's a checklist to determine if your child/student is at an expected level of development.

1. ____ The child enjoys being read to and has expressed an interest in favorite books.
2. ____ The child is able to read some environmental print that he/she is exposed to: stop signs, McDonald's signs, etc.
3. ____ The child pretends to read and uses the illustrations to guide reading.
4. ____ The child recognizes letters and sounds of the alphabet. When prompted with "what is the beginning sound of bat?", the child knows b; or, with "what is the ending sound of bat?", the child knows t.
5. ____ The child has memorized familiar books and reads these from memory. (Note: memory reading is an early stage of reading; at this stage it's important to write some of the words on cards and get the child to start identifying words from the story in isolation.)
6. ____ The child enjoys participating in songs, chimes, chants, poems and storybook times.
7. ____ The child chimes in on familiar or predictable stories.
8. ____ The child is able to make predictions about what might happen in the story based on what has happened; making connections is part of comprehension.
9. ____ The child will have fun with words and provide rhymes, both real and nonsense types. For instance: right rhymes with tight, fight and grite. The child selects rhyming words and makes up rhyming words. Seuss books are helpful at this stage.
10. ____ Is beginning to recognize similarities and differences between stories or characters and provides a rationale regarding the similarities and the differences.
If you've checked most of the boxes, there's nothing to worry about. However, if the child isn't displaying many of the readiness-for-reading characteristics, the child may be showing signs of having language delays or a learning disability. Refer to some of the helpful Suggested Reading links on this page to guide you.

Reading Comprehension Rubric: How to Assess Reading Comprehension
Comprehension Rubric

In order to determine if a struggling reader is becoming proficient, you'll need to watch carefully to see if they exhibit characteristics of competent readers. These characteristics will include: making effective use of cueing systems, bringing in background information, and moving from a word-by-word system to a fluent reading-for-meaning system. The rubric below should be used with each student to help ensure reading proficiency.

Capitalization Rules
From Sue Watson, former About.com Guide

Sentences: Capital letters should always be used for the beginning of sentences and questions.
Titles: Capital letters always need to be used for titles and proper names.
Countries, Cities, Towns, Lakes, Rivers, etc.: All kinds of places require capital letters. Notice how all maps contain capitals on cities, streets and towns?
Calendar: The names of the days and months also must be capitalized.
Books and Poems: Titles of books and poems have capitals. Book covers also have capital names.
Brands: Religious titles, brand names and companies also need capital letters.
Mr., Mrs., Ms., Miss: Always use capitals when addressing people by Mr., Miss, etc.

Summary: Free Capital Letter Worksheets. Students with learning disabilities often need intervention and practice with grammar. Be sure to provide opportunity for direct teaching and self-correction when working with the many grammar rules.

Notice all of the capital letters I need to use in the following paragraph:

One day during the month of June, Mrs. Jones took my 3 brothers Jake, Andy and James shopping. She drove them to the shopping plaza on John Street in Chicago. The Glen Echo shopping plaza is just past the Green River Bridge. My brothers bought lots of books at Chapters Book Store. They couldn't wait to get home and begin reading their books.

Capital Letters Worksheet #1 of 4
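For self-correction practice, the first rule above (sentences and questions begin with a capital letter) can even be checked automatically. The sketch below is deliberately naive; real capitalization checking (titles, proper names, brands) needs far more context than this:

```python
# A rough checker for sentence-initial capitalization only.
import re

def uncapitalized_sentences(text: str) -> list[str]:
    """Return sentences whose first letter is not capitalized."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if s and s[0].isalpha() and s[0].islower()]

sample = "she drove them to Chicago. They bought books. my brothers loved them!"
print(uncapitalized_sentences(sample))
# ['she drove them to Chicago.', 'my brothers loved them!']
```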
Deafness and Hearing Loss
From Sue Watson, former About.com Guide

A student/child with deafness or hard-of-hearing disabilities has deficits in language and speech development due to a diminished or absent auditory response to sound. Students will demonstrate varying degrees of hearing loss, which often results in difficulty acquiring spoken language. When you have a child with hearing loss/deafness in your classroom, you need to be careful not to assume that this student has other developmental or intellectual delays. Typically, many of these students have average or better-than-average intelligence.

Characteristics Found in the Classroom:
- Difficulty following verbal directions
- Difficulty with oral expression
- Some difficulties with social/emotional or interpersonal skills
- Will often have a degree of language delay
- Often follows and rarely leads
- Will usually exhibit some form of articulation difficulty
- Can become easily frustrated if their needs are not met, which may lead to some behavioral difficulties
- Sometimes the use of hearing aids leads to embarrassment and fear of rejection from peers

What Can You Do?
- Many students with hearing disabilities will have some form of specialized equipment recommended by the audiologist; help the child to feel comfortable with his/her device and promote understanding and acceptance among the other children in the class
- Remember that devices DO NOT return the child's hearing to normal
- Noisy environments will cause grief to the child with a hearing device, and noise around the child should be kept to a minimum
- Check the device often to ensure it is working
- When using videos, make sure you get the closed-captioned type
- Shut classroom doors/windows to help eliminate noise
- Cushion chair bottoms
- Use visual approaches whenever possible
- Establish predictable routines for this child
- Provide older students with visual outlines/graphic organizers and clarification
- Use a home/school communication book
- Enunciate words clearly, using lip movement to assist the child to lip-read
- Keep close proximity to the student
- Provide small-group work when possible
- Make assessment accommodations to enable a clear picture of demonstrated academic growth
- Provide visual materials and demos whenever possible
Language will be the priority area for students who are deaf or hard of hearing. It is the basic requirement for success in all subject areas and will influence the student's comprehension in your classroom. Language development and its impact on the learning of students who are deaf or hard of hearing can be complex and difficult to attain. You may find that students will need interpreters, note-takers, or educational assistants to facilitate communication. This process will usually require the involvement of external personnel.

Teaching Writing
From Sue Watson, former About.com Guide

Sometimes children with language and/or learning disabilities struggle with writing activities. Often, this is due to a lack of previous oral experience. Children need lots of experience orally before putting their thoughts and ideas to print. Play lots of games orally first and keep these oral activities enjoyable.

Types of Oral Activities that Will Support Writing:

1. Expand my sentence. For this activity, you start with a basic sentence and take turns expanding the sentence.
For instance:
Person 1: "I have a dog."
Person 2: "I have a big dog."
Person 1: "I have a big black dog."
Person 2: "I have a big, black dog named Dodger."
Person 1: "I have a big, black dog named Dodger who loves people."

2. Another activity that can be done orally is to take any object or item and tell as much about it as possible. For instance: Dogs are friendly. Dogs are furry. Dogs like to eat bones. Dogs can really hear well. (When the child exhausts everything they know, you move to a different object/item or topic.)

3. To help children understand the 4 types of sentences, you will want to help them understand what they are:
- Declarative, which makes a statement: "The door is closed."
- Imperative, which expresses a command: "Finish eating your dinner."
- Interrogative, which asks a question: "Would you like to go to the park?"
- Exclamatory, which makes an exclamation: "That roller coaster ride was really scary!"

Take turns orally making sentences while the other states what type of sentence it is, or give the type of sentence and get the child to come up with that type of sentence. Keep the oral language fun, and as the child progresses, written language is the next logical step.
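A small practice helper for activity 3, sketched under the honest limitation that punctuation alone cannot separate declarative from imperative sentences; the function name is illustrative:

```python
# Guess a sentence's type from surface cues only.
def sentence_type(sentence: str) -> str:
    s = sentence.strip()
    if s.endswith("?"):
        return "interrogative"
    if s.endswith("!"):
        return "exclamatory"
    return "declarative or imperative (context needed)"

for s in ["Would you like to go to the park?",
          "That roller coaster ride was really scary!",
          "The door is closed."]:
    print(f"{s} -> {sentence_type(s)}")
```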
Low Vision Products

Hi there! Thanks for dropping into our low vision clinic. Whether you're just beginning to lose your vision, or expect to be totally blind in a matter of months, we have a variety of helpful tools to make the most of every bit of sight you do have. Come on, give us a few minutes of your time, and we'll show you what we have to offer.

Pocket Magnifiers
Arguably the most popular type of magnifying glasses out there, pocket magnifiers afford the user quick, discreet visual help whenever it's needed. With everything from illuminated credit card magnifiers, to powerful pocket readers, to a handy-dandy magnifier that'll fit right onto your keychain, this section is a must-visit for helpful tools that won't break the bank.

Reading Magnifiers
If you've always been a real bookworm, or have to do a lot of studying, you'll be blown away by what a couple of these magnifiers can do for you! We have lighted portable readers, full-page magnifiers, and hand-held magnifiers of all shapes and strengths, lit or not. Go ahead ... grab one or two of these, and bring the joy back to devouring the printed word, free from all that squinting and eyestrain!

Stand Magnifiers
A practical and useful addition to the desk of any visually impaired person is one kind or another of stand magnifier. Once you find a model that's right for you, you can comfortably work for long periods of time, hands-free, while your magnifier sits faithfully in place. Most of these magnifiers are fully adjustable, meaning you can position the lens and/or light just so, then turn page after page (or work on whatever else you typically do at home, work or play) while benefiting from premium magnification.

Loupe Magnifiers
A loupe is simply a special type of magnifier, typically designed for high-powered magnification. It has a small lens designed to be put right up to the eye, and can sometimes be attached to a pair of glasses. Loupes are commonly used by jewelers to examine expensive gems, electricians to repair complex circuitry, and so on. But the less powerful ones are also helpful for reading and other regular activities. Our pocket-sized loupes in particular are so small that it's practical to stick one into your pocket or purse, and pull it out for quick sneak-peeks now and again.
Writing Supplies
Writing out letters, checks, or quick notes gets a lot easier with these simple yet effective handwriting aids. We've got stationery with raised lines, easy-see pens, and writing templates for all common uses.

Departments

Sub-Departments in Toys and Games
(14 products on this shelf.)
Metal Harmonica
Play Songs by Sliding Side-to-Side as You Blow

Plastic Kazoo
Mini Megaphone for Kids of All Ages

Plastic Flute
Play Simple Tunes with This Six-Note Instrument

Jingle Band
Cloth Bracelet with Four Large Bells Attached
Available Colors: Assorted, or Christmas

Plastic Jambourine
Small Drum with Miniature Jingling Discs Inside
Item Number: 3426

Wooden Tambourine
Six-Inch Hollow Drum with Jingle Discs All Around

Plastic Maraca
The Favorite Mexican Rattle, Shaped Like a Gourd

Gripper Shaker
Quality Egg-Shaped Wood Rattle for Baby or Toddler

Plastic Whistle
Blow for Fun or to Get Help, Has Clip for Keychain

Water Bird Call
Blow Whistle to Hear Calls of Favorite Birds

Duck Call
Blow Into the Back - and Make the Duck Quack
Temporarily out of stock. Usually ships in late January.

Balloon Whistle
Blow Up Balloons with this Tiny Whistling Rattle

Whistle Magnifier
Two-in-One Toy Great for Visually Impaired Children

Squeeze Toys
Textured Bath Toys that Make Noise, Too

Sub-Departments in Braille Workshop

Grade 1 Braille ... Grade 2 Braille ... What on earth are you talking about?

"Once you learn to read, you will forever be free." - Frederick Douglass
ABCs of Braille
(10 products on this shelf.)

Braille Flash Cards
A Fun and Easy Way for Everyone to Learn Braille

Braille Alphabet Magic
Pop-Up Board to Help Sighted Folks Learn Braille

Braille for the Sighted
Includes Print Book and Raised Braille Exercises

Braille Alphabet Chart
Large Poster for Learning or Showing Braille

Braille Alphabet Tray
Plastic Plate with Raised Letters and Numbers

Grade 2 Braille Flash Cards
Practice Contractions with this Big Box of Cards

Sign Language Flash Cards (Brailled)
26 Cards Feature Print, ASL, and Braille Letters
Sign Language Reference Cards, Twelve-Pack
Detailed ASL Reference for the Basics - and Beyond

Lesson concept and days covered:
1. Identifying and writing numbers to 99: 1-3, depending on mastery
2. Identifying more or less with objects: 1-3, depending on mastery
3. Sequencing numbers: 3
4. Using <, >, = symbols: 1-3, depending on mastery
5. Skip counting by 10s, 5s, 2s: 1-3, depending on mastery
6. Introduction to place value: 3; cannot skip any
7. Identifying operations: 1-3, depending on mastery
8. Place value 0-50: 1-3, depending on mastery
9. Writing number sentences: 1 and 2 must be completed; 3 depends on mastery at day 2
10. Place value 0-99: 3
11. Addition facts to sums of 18: 1-4 must be completed; 5-6 depend on mastery at previous day
12. Subtraction facts to minuends of 18: 1-4 must be completed; 5-6 depend on mastery at a previous day
13. Review of addition and subtraction facts: 1-3, depending on mastery
14. Missing addends: 1-3, depending on mastery
15. Place value: 1-3, depending on mastery
16. Two-digit addition with no regrouping: 1-3 must be completed; 4-6 depend on mastery at previous day
17. Two-digit subtraction with no regrouping: 1-3 must be completed; 4-6 depend on mastery at previous day

WHY TEACH TOUCHMATH TO STUDENTS K–3 WITH SPECIAL NEEDS?

What if your students with special educational needs could compete with their peers in general education classrooms? Suppose you could help them develop a positive attitude about math and raise math test scores across the board at your school?
Sound impossible? Not according to Elizabeth De Fazio, a special education teacher in the Los Angeles Unified School District (LAUSD) and one of 2,000 special education professionals trained in TouchMath as part of a district-wide initiative.

Leveling the playing field. "Typically, what I hear back from teachers in the general education classrooms is yes, your child did really well on this area of the test, and yes, they were using TouchMath," says De Fazio.
Braille By Maggie
Read a real life story about a person who is blind.
Braille is a way for people who are blind to be able to read. It uses a system of six dots in different patterns. Each pattern of dots represents a different letter of the alphabet. The dots are raised up on paper, like little bumps, and the person who is blind "reads" these bumps with their fingertips. The bumps stand for the same letters no matter which language you are reading. Braille was invented by a boy named Louis Braille when he was only fifteen years old.
Louis Braille was born in 1809 in a small town called Coupvray, near Paris, France. When Louis was only three years old, he had an unexpected accident and became blind. It all started when Louis was in his father's workshop, and he was trying to be like his father and use his father's tools. Louis picked up a tool called an awl (which is a sharp tool used for making holes).
Question: How do you think Louis Braille became blind?
A. Louis accidentally let the awl tool slip from his hand, which landed in his eye. It became infected.
B. The awl chipped out a large piece of wood which flew into his eyes.
C. Louis dropped the awl on his cheek, and the infection was bad and spread to his eyes.
Now that Louis was blind, he needed to go to a new school. Children who were blind were not allowed in regular school back then. Learning the regular way was now impossible for Louis. It was a hard life. But that all changed when Louis was ten years old and got a scholarship to the Royal Institution for Blind Youth. But even there, Louis had a hard time. There were only fourteen books at the school library, all of which had large, raised letters, and they were extremely hard to read. In 1821, a former soldier named Charles Barbier visited the school. He brought an invention called night writing. Night writing was a code of 12 raised dots. This code was usually used for communication in wartime, and it let the soldiers get a message across a battlefield without speaking. The code turned out to be too hard for the soldiers to use, but it wasn't too hard for twelve-year-old Louis! Louis decided that this would be a great way for people who are blind to be able to read.
Soon, Louis changed the complicated twelve-dot code into a simpler six-dot code. Sixteen years later, in 1837, Louis published the first Braille book ever. But Louis didn't stop with just letters. He also made Braille symbols for math and music.
Each Braille letter is kind of like a six-sided die: one, two, three, four, five, or six dots. There are two columns with three rows. You can mix them up in many different ways to make a letter. Most of the time at least one place where a bump could be is empty. You could have only one bump on the top left, and that would be the letter A.
It took a while for the Braille method to catch on, but soon it was used in many places. It wasn't until after Louis died, though, that his old school, The Royal Institution for Blind Youth, began teaching it to all their students.
Braille became common worldwide in 1886 after a group of British men, working for a place now known as the Institute for the Blind, took up the cause. Today, Braille is used in practically every country. Braille books now have double-sided pages, which saves a lot of space and paper and helps them to be smaller and easier to carry. Braille is used so that people who are blind can read, but it is also often used on signs, which help people who are blind get around better. The next time you're in an elevator, notice the Braille numbers underneath the regular ones. Most important of all, it helps people who are blind communicate on their own.
Just as regular technology has come a long way over the years, Braille technology has really changed too. We now have many new tools that help people who are blind read things on their computers, and machines that produce books in Braille. Some of the devices are very simple and others are really complicated. But all these new machines help people who are blind to do their schoolwork, work at their jobs, and communicate better with everyone.
The slate and stylus are portable and easy to use, just like a pencil and paper. They are used so that people can write in Braille. The slate holds a piece of paper between two plastic pieces with little holes in them. The holes are the same size as Braille dots would be. The stylus is an object that looks like a needle with a wooden or plastic grip. You use the stylus to punch holes in the slate. The plastic on the slate keeps the stylus from punching too far and making holes in the paper. Instead, if you do it right, you get raised dots on the other side of the paper. This is Braille writing. The Braille Writer is basically like a regular typewriter. It has six keys, one for each dot in a Braille cell, a spacebar, and a backspace key. When you press a key, the key is connected to a little metal bar with a stamp with the dots on it. The key then presses on the paper and indents the paper with the correct amount and position of Braille bumps, or dots. Then if you turn over the paper, you have raised bumps. But what if someone who is blind needs to print something off his or her computer? They would use a
Braille printer. Like a Braille typewriter, it doesn't use any ink. Instead of printing flat words, a Braille printer prints raised Braille bumps. People who are blind use something called a Braille Display to help them use a computer. A Braille Display translates the words on the computer screen into Braille on a special keyboard so that the person can "read" the screen as they use the computer.
If you would like to make real Braille, get a pillow or soft object, a pencil, and paper. Put the piece of paper on the pillow, and poke the pencil into the paper until you know a tiny bump has emerged on the other side of your paper. You should have texture on the other side of the paper. Don't poke too hard or you'll just make a hole. Try making real sentences.
I hope you learned a lot about Braille!
Braille Alphabet and Numbers
Braille Alphabet: a b c d e f g h i j / k l m n o p q r s t / u v w x y z / ! , - . ? Capital
Numbers: # 0 1 2 3 4 5 6 7 8 9
American Sign Language (ASL)
Numbers
Riddles
1. I have a face, yet no senses. Time is of the essence, but I don't really care. Answer: A clock
2. Voiceless it cries, Wingless it flutters, Toothless bites, Mouthless mutters. Answer: Wind
3. What has roots as nobody sees, Is taller than trees, Up, up it goes, And yet never grows? Answer: A mountain
4. Little Nanny Etticoat, In a white petticoat, And a red nose. The longer she stands, The shorter she grows. What is she? Answer: A candle
5. Thirty white horses upon a red hill, Now they tramp, now they champ, now they stand still. Answer: Your teeth
6. Lives in winter, Dies in summer, And grows with its root upwards. Answer: An icicle
7. I run, but I can't walk. What am I? Answer: Water
Where do cows go on Saturdays? Answer: To the... moovies!
What is a snake's favorite school subject? Answer: Hisssstory
What goes up when the rain comes down? Answer: An umbrella!
What does a lazy dog do for fun? Answer: Chases... parked cars!
Why did the dinosaur cross the road? Answer: To get to the...
museum!
How do you keep a rhinoceros from charging? Answer: Take away its... credit cards!
What did the dog say when he sat on the sandpaper? Answer: Ruff ruff!
What time is it when the clock strikes 13? Answer: Time to... fix the clock!
Why are Teddy Bears never hungry? Answer: Because they're always... stuffed!
What is a monkey's favorite month? Answer: Ape-ril!
Truth or Lie?
After you read each of the following statements, select Truth or Lie. When you're finished, click "Am I Right?" at the bottom to find out whether you were correct!
1. People with learning disabilities aren't smart. Truth / Lie
2. People who can't hear can use the telephone. Truth / Lie
3. You can catch a disability. Truth / Lie
4. People with cerebral palsy always have mental retardation. Truth / Lie
5. People who use wheelchairs can't play basketball. Truth / Lie
6. People who are blind can read. Truth / Lie
7. People with mental retardation can get jobs. Truth / Lie
8. People with disabilities can't live by themselves. Truth / Lie
9. People who can't hear don't watch TV. Truth / Lie
10. People with disabilities can vote. Truth / Lie
Am I Right?
Truth or Lie?
After you read each of the following statements, select Truth or Lie. When you're finished, click "Am I Right?" at the bottom to find out whether you were correct!
1. People with learning disabilities aren't smart. Truth / Lie
Congratulations! This is the correct answer. You probably already knew that people with learning disabilities have normal or above normal intelligence. Did you know that George Bush, Tom Cruise, and Greg Louganis have learning disabilities?
2. People who can't hear can use the telephone. Truth / Lie
Good job! This is the correct answer. Using a Text Telephone (TT), people with hearing impairments can communicate with just about everyone through telephone lines.
3. You can catch a disability. Truth / Lie
Superb! This is the correct answer. Disabilities are not illnesses, so you can't catch them.
4. People with cerebral palsy always have mental retardation. Truth / Lie
Sorry! Your answer is not correct. Although people with cerebral palsy may have limited control of their arms and legs, most have full intellectual capabilities.
5. People who use wheelchairs can't play basketball. Truth / Lie
Sorry! Your answer is incorrect. Many people with physical disabilities participate in organized basketball programs.
6. People who are blind can read. Truth / Lie
Terrific! This is the correct answer. People who are blind often read materials in Braille or use talking books.
7. People with mental retardation can get jobs. Truth / Lie
Terrific! This is the correct answer. Did you see Chris Burke, an actor who has Down Syndrome, when he appeared on the TV show Touched by an Angel?
8. People with disabilities can't live by themselves. Truth / Lie
Awesome! This is the correct answer. With support from different adapted devices (such as door bells that light up for people who are deaf), most people with disabilities can live by themselves. Some people may need support from friends or family members, but they could still live in their own home.
9. People who can't hear don't watch TV. Truth / Lie
Magnificent! This is the correct answer. The words you sometimes see at the bottom of the television screen are the closed captioning that helps people who can't hear know what is said during TV programs.
10. People with disabilities can vote. Truth / Lie
Extraordinary! This is the correct answer. In fact, many people with disabilities have been elected or appointed to political office. President Franklin Roosevelt used a wheelchair because his legs didn't work well after he had polio. Senator Robert Dole did not have the use of one hand after an injury during World War II. Robert Williams, former Commissioner of the Administration on Developmental Disabilities within the U.S. Department of Health and Human Services, uses an electronic communication device to give speeches because he doesn't have good control of the muscles needed to talk.
Disability Awareness Crossword
How much do you know about different disabilities? Print out the puzzle below and complete it with your friends or family to find out!
Ask your parents if you can access a printer-friendly pdf version here!
Across
1. a developmental disability that affects communication and social interaction
2. ____ Syndrome causes chronic vocal and motor tics
3. Legally ____: "visual acuity of 20/200 or higher"
4. people with _____ have trouble reading, even if they are very smart
5. ____ Syndrome; a condition caused by the presence of an extra chromosome #21
6. totally or partially unable to hear
7. ____ Syndrome; a type of autism where people have great memories
8. Cerebral ____; a disorder caused by damage to the brain during pregnancy or birth
Down
1. a person who has ____ sometimes has seizures
2. people who are deaf can talk using ____ Language
3. people who are blind can read using ____
4. when people lose their memory, language, or motor skills
5. Cystic ____; an inherited disease that causes the lungs and pancreas to secrete thick mucus
Disability Awareness Crossword: Answers
Across
1. a developmental disability that affects communication and social interaction Answer: Autism
2. ____ Syndrome causes chronic vocal and motor tics Answer: Tourette
3. Legally ____: "visual acuity of 20/200 or higher" Answer: Blind
4. people with _____ have trouble reading, even if they are very smart Answer: Dyslexia
5. ____ Syndrome; a condition caused by the presence of an extra chromosome #21 Answer: Down
6. totally or partially unable to hear Answer: Deaf
7. ____ Syndrome; a type of autism where people have great memories Answer: Asperger
8. Cerebral ____; a disorder caused by damage to the brain during pregnancy or birth Answer: Palsy
Down
1. a person who has ____ sometimes has seizures Answer: Epilepsy
2. people who are deaf can talk using ____ Language Answer: Sign
3. people who are blind can read using ____ Answer: Braille
4. when people lose their memory, language, or motor skills Answer: Dementia
5. Cystic ____; an inherited disease that causes the lungs and pancreas to secrete thick mucus Answer: Fibrosis
Alzheimer's Disease
View this puzzle as a Microsoft Word or pdf file.
Find the following words: ALOIS, ALZHEIMERS, BEHAVIOR, BRAIN, COMMON, DELAY, DEMENTIA, DISEASE, DISORDER, INTELLECTUAL, LIVING, MEDICATIONS, MEMORY, NO CURE, PROGRESS, PROGRESSIVE, REMEMBERING, RESEARCH, SUPPORT, TREATMENT
Autism Spectrum Disorders
View this puzzle as a Microsoft Word or pdf file.
Find the following words: ABILITIES, ASPERGER, AUTISM, BEHAVIOR, COMMUNICATION, CREATIVE, DIAGNOSIS, DISORDER, EDUCATION, PDDNOS, SENSORY, SOCIAL, SPECTRUM, SUPPORT, TECHNOLOGY, TEMPLE GRANDIN
Down Syndrome
View this puzzle as a Microsoft Word or pdf file.
Find the following words: CARDIAC, CHROMOSOME, COGNITIVE, COMMON, DEVELOPMENTAL, DOWN, EDUCATION, EMPLOYMENT, FAMILIES, GENETIC, GIFTS, HAPPY, HEALTHY, INCLUSION, RESEARCH, SPEECH, SUPPORT, SYNDROME, TRISOMY, UNDERSTANDING
Tourette's Syndrome
View this puzzle as a Microsoft Word or pdf file.
Find the following words: ANXIOUS, BLINKING, COMMON, DISORDER, GENETIC, IMPULSIVE, INVOLUNTARY, MILD, MODERATE, MOVEMENTS, NEUROLOGICAL, OBSESSIONS, SENSITIVE, SEVERE, SHRUGGING, SUPPORT, SYNDROME, TICS, TOURETTES, VOCALIZE
Dyslexia
View this puzzle as a Microsoft Word or pdf file.
Find the following words: ASSISTANCE, BRAIN, DEVELOPMENTAL, DIAGNOSIS, DIFFICULTY, DYSLEXIA, EDUCATION, FAMILIES, HIDDEN, LANGUAGE, LEARNING, NEUROLOGICAL, PHONICS, READING, SPELLING, UNDERSTANDING, WORDS, WRITING
Graphic Organizers for Use With Special Education Students
Posted by Denis Soukhanov on Fri, Apr 16, 2010
Graphic organizers are a popular educational tool. They help students to visually display, interpret, and understand complex topics. They also assist in reading comprehension by allowing students to track main ideas, facts, plot, setting, and characters. The most popular graphic organizers are Venn Diagrams, Concept Maps, KWL Charts, checklists, and story maps. For special education students, these tools can help them to express and show an understanding of concepts that may be difficult for them to show with traditional written or essay assessments.
Graphic organizers are easy to find, modify, and print via the Internet. They can easily be adapted to assist all types of learners, topics, and desired learning outcomes. Many sites now also allow students to create their own graphic organizers that they can edit, print, and share via the Internet.
Printable Graphic Organizers
The Education World site offers a variety of free printable graphic organizers, including Venn Diagrams, Comparison Charts, Concept Maps, Fishbone Diagrams, Family Trees, KWL Charts, Life Cycle Charts, Spider Maps, Story Maps, and T-charts. The files you load from this site are available in Word format. When you select the style of graphic organizer that you would like to print, you can edit the titles, headings, and subheadings, or add or delete information as needed. The files you create can also be saved for later use.
On the Project Based Learning Checklists for Teachers site, teachers can create their own project-based learning checklists. These checklists can be used by the students as guidelines to teacher expectations and learning outcomes for their projects. This site is really great because you can create checklists for writing, science, oral presentations, and multimedia for a variety of different grade levels. To create a checklist, you include the teacher name, the title for the project, category selections, and then additional details. The additional details can be added from a drop-down list or typed directly in. When completed, you just have to print and photocopy the checklist for your students to follow.
Worksheet Works is a beta website that has free printable organizers, including clocks, fishbones, t-charts, y-charts, YWLs, Venn Diagrams, pies, stars, cycles, PMIs, and decision-making charts. When you select the type of chart you would like to print, you are taken to a page of options where you can add titles and headers that are appropriate for your lesson. You can also choose the size of paper that you would like to print on. You then create your worksheet, and it is available to download, print, and save as a PDF file.
Online Graphic Organizers
Bubbl.us is a free online brainstorming application. Students can create concept maps (webs) or flow charts using this program. There are options available to save and to print your maps. The program is kid friendly with fun colors and transitions. The program allows students to create as many bubbles as they need to complete their project. They can connect and move the bubbles in various ways. Bubbles can be connected using either arrows or lines, and can be moved above, below, or at the same level as other bubbles in the maps.
Read Write Think has a section of their website that includes student "interactives." These are interactive online applications where students make and complete their own graphic organizers. The teacher should provide the link for the interactive application the students should be using based on the lesson they are to complete. Then the work is up to the student! There are interactive activities including creating Venn Diagrams, writing aids, comparison and contrast tables, plot development charts, timelines, and story maps. Many of their "interactives" involve either reading or writing and would be great for Language Arts and Social Studies courses.
While Class Tools does not have the fancy and easy-to-read format of some of the other sites I have mentioned, they have some of the most fun and interactive graphic organizers. Along the right-hand column of the site you will find a list of the different organizers and activities. Students can choose the graphic organizer style, add the required information for the assignment, and then either save the file, embed it into a webpage, or print. There are also many other fun review games, activities, and classroom management tools on this site you should definitely check out.
Additional research-based data regarding the successful use of graphic organizers with special education students can be found here.
Printable worksheet templates follow: a 2-Circle Venn Diagram, a 3-Circle Venn Diagram, and a Bone Diagram (topic/title plus driving forces), each with NAME and TOPIC fields. |
The scratchy dust clung to everything it touched, causing scientific instruments to overheat and, for Apollo 17 astronaut Harrison Schmitt, a sort of lunar dust hay fever. The annoying particles even prompted a scientific experiment to figure out how fast they collect, but NASA’s data got lost.
The Lunar Dust Detector, attached to the leftmost corner of this experiment package left by the Apollo 12 astronauts, made the first measurement of lunar dust accumulation. As the matchbox-sized device’s three solar panels became covered by dust, the voltage they produced dropped. Credit: NASA
Or, so NASA thought. Now, more than 40 years later, scientists have used the rediscovered data to make the first determination of how fast lunar dust accumulates. It builds up unbelievably slowly by the standards of any Earth-bound housekeeper, their calculations show – just fast enough to form a layer about a millimeter (0.04 inches) thick every 1,000 years. Yet, that rate is 10 times previous estimates. It’s also more than speedy enough to pose a serious problem for the solar cells that serve as critical power sources for space exploration missions.
“You wouldn’t see it; it’s very thin indeed,” said University of Western Australia Professor Brian O’Brien, a physicist who developed the experiment while working on the Apollo missions in the 1960s and now has led the new analysis. “But, as the Apollo astronauts learned, you can have a devil of a time overcoming even a small amount of dust.”
That faster-than-expected pile-up also implies that lunar dust could have more ways to move around than previously thought, O’Brien added.
In his experiment, dust collected on small solar cells attached to a matchbox-sized case over the course of six years, throughout three Apollo missions. As the granules blocked light from coming in, the voltage the solar cells produced dropped. The electrical measurements indicated that each year 100 micrograms of lunar dust collected per square centimeter. At that rate, a basketball court on the Moon would collect roughly 450 grams (1 pound) of lunar dust annually.
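Those figures are easy to cross-check with a few lines of arithmetic. A minimal Python sketch; the bulk density of settled dust (about 1 g/cm^3) and the court dimensions (roughly 28 m by 15 m) are illustrative assumptions, not values from the paper:

```python
# Cross-check the quoted dust figures.  Assumed (not from the paper):
# settled-dust bulk density ~1 g/cm^3; basketball court ~28 m x 15 m.
rate = 100e-6            # measured: 100 micrograms per cm^2 per year, in grams
density = 1.0            # assumed bulk density of the settled layer, g/cm^3

# Thickness after 1,000 years: (mass per area) / density, converted to mm.
thickness_mm = rate * 1000 / density * 10
print(f"layer per 1,000 years: {thickness_mm:.1f} mm")    # ~1 mm, as quoted

# Mass landing on a basketball court in one year.
court_cm2 = (28 * 100) * (15 * 100)                       # court area in cm^2
print(f"per court per year: {rate * court_cm2:.0f} g")    # ~420 g, in line with "roughly 450 g (1 lb)"
```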
Comparing the effects on cells from dust and from damaging high-energy radiation from the sun, O’Brien found that long-term dust accretion could diminish the output from shielded power supplies of a lunar outpost more than even the most intense solar outbursts.
Because the threat posed by radiation damage was recognized early on, solar-cell makers fortified their devices against that sort of harm. Yet, “while solar cells have become hardier to radiation, nothing really has been done to make them more resistant to dust,” said O’Brien’s colleague on the project Monique Hollick, who is also a researcher at the University of Western Australia, in Crawley. “That’s going to be a problem for future lunar missions.”
The work is detailed this week in Space Weather, a publication of the American Geophysical Union.
Answers from Apollo
Before Apollo 11 blasted off to the Moon in 1969, NASA scientists realized the Lunar Module would likely kick up a large amount of lunar soil on takeoff, potentially coating nearby science experiments with dust. Detachable covers would require either a small explosive or a physical mechanism to remove after the astronauts left, creating more engineering challenges and room for failure.
“Then I asked what I thought was a pretty common sense question,” recalled O’Brien. “If we’ve got to guard ourselves against damage from the lunar module taking off, who’s measuring whether any damage actually took place; who’s measuring the dust?”
O’Brien proceeded to quickly invent the Lunar Dust Detector experiment as a small add-on device to the larger experiments. Requiring little power and weighing only 270 grams (0.6 pounds), the dust detector reported back to Earth alongside the non-scientific housekeeping data.
“It really got a free ride,” O’Brien said.
The detectors flown on Apollo 12, 14 and 15 operated until NASA shut them off in September 1977 due to budgetary concerns. While the detectors worked properly, NASA did not preserve the archival tapes of the data they collected. For three decades NASA assumed the dust detector data had been lost forever, until 2006 when O’Brien heard about NASA’s mistake and told them he still had a set of backup copies.
Each detector in the experiment had three solar cells, each covered with a different amount of shielding against incoming radiation. By comparing damage to the unshielded and shielded solar cells, O’Brien made his determination that dust, rather than radiation, caused the most degradation to the protected cells.
Previous model-based estimates of lunar dust accumulation assumed the dust came entirely from meteor impacts and falling cosmic dust. “But that’s not enough to account for what we measured,” O’Brien said.
With no atmosphere for wind, the Moon's soil should be stagnant. However, O'Brien said a popular idea of a "dust atmosphere" on the Moon could explain the difference. The concept goes that, during each lunar day, solar radiation is strong enough to knock a few electrons out of atoms in dust particles, building up a slight positive charge. On the nighttime side of the Moon, electrons from the solar wind – the flow of energetic particles that comes off the Sun – strike dust particles and give them a small negative charge. Where the illuminated and dark regions of the moon meet, electric forces could levitate this charged dust, potentially lofting grains high into the lunar sky.
“Something similar was reported by Apollo astronauts orbiting the Moon who looked out and saw dust glowing on the horizon,” said Hollick.
The idea of levitating lunar dust could soon be confirmed by NASA’s Lunar Atmosphere and Dust Environment Explorer (LADEE), launched in September. The spacecraft orbits 250 kilometers (155 miles) above the surface of the Moon, searching for dust in the lunar atmosphere.
While LADEE scours the Moon’s atmosphere, O’Brien looks back on a decades-long science experiment that finally has a result.
“It’s been a long haul,” said O’Brien. “I invented [the detector] in 1966, long before Monique was even born. At the age of 79, I’m working with a 23-year-old working on 46-year-old data and we discovered something exciting—it’s delightful.”
Notes for Journalists
Or, you may order a copy of the final paper by emailing your request to Thomas Sumner at email@example.com. Please provide your name, the name of your publication, and your phone number.
Neither the paper nor this press release is under embargo.
Monique Hollick: Phone: +011 (+61) 08-9387-3827, Email: firstname.lastname@example.org
Note: Both authors are located in Western Australia (UTC+8:00)
AGU Contact:
Thomas Sumner | American Geophysical Union
|
By the end of this section, you will be able to:
- Explain the origin of Ohm's law.
- Calculate voltages, currents, and resistances with Ohm's law.
- Explain the difference between ohmic and non-ohmic materials.
- Describe a simple circuit.
The information presented in this section supports the following AP® learning objectives and science practices:
- 4.E.4.1 The student is able to make predictions about the properties of resistors and/or capacitors when placed in a simple circuit based on the geometry of the circuit element and supported by scientific theories and mathematical relationships. (S.P. 2.2, 6.4)
What drives current? We can think of various devices—such as batteries, generators, wall outlets, and so on—which are necessary to maintain a current. All such devices create a potential difference and are loosely referred to as voltage sources. When a voltage source is connected to a conductor, it applies a potential difference that creates an electric field. The electric field in turn exerts force on charges, causing current.
The current that flows through most substances is directly proportional to the voltage applied to it. The German physicist Georg Simon Ohm (1787–1854) was the first to demonstrate experimentally that the current in a metal wire is directly proportional to the voltage applied:
I ∝ V.
This important relationship is known as Ohm's law. It can be viewed as a cause-and-effect relationship, with voltage the cause and current the effect. This is an empirical law like that for friction—an experimentally observed phenomenon. Such a linear relationship doesn't always occur.
Resistance and Simple Circuits
If voltage drives current, what impedes it? The electric property that impedes current (crudely similar to friction and air resistance) is called resistance. Collisions of moving charges with atoms and molecules in a substance transfer energy to the substance and limit current. Resistance is defined as inversely proportional to current, or
I ∝ 1/R.
Thus, for example, current is cut in half if resistance doubles. Combining the relationships of current to voltage and current to resistance gives
I = V/R.
This relationship is also called Ohm's law. Ohm's law in this form really defines resistance for certain materials. Ohm's law (like Hooke's law) is not universally valid. The many substances for which Ohm's law holds are called ohmic. These include good conductors like copper and aluminum, and some poor conductors under certain circumstances. Ohmic materials have a resistance R that is independent of voltage V and current I. An object that has simple resistance is called a resistor, even if its resistance is small. The unit for resistance is an ohm and is given the symbol Ω (upper case Greek omega). Rearranging I = V/R gives R = V/I, and so the units of resistance are 1 ohm = 1 volt per ampere:
1 Ω = 1 V/A.
Figure 20.8 shows the schematic for a simple circuit. A simple circuit has a single voltage source and a single resistor. The wires connecting the voltage source to the resistor can be assumed to have negligible resistance, or their resistance can be included in R.
Ohm's law (I = V/R) is a fundamental relationship that can be represented by a linear function, with the slope of the line being the resistance. The resistance represents the voltage that needs to be applied to the resistor to create a current of 1 A through the circuit. The graph (in the figure below) shows this representation for two simple circuits with resistors that have different resistances and thus different slopes.
The materials which follow Ohm's law by having a linear relationship between voltage and current are known as ohmic materials. On the other hand, some materials exhibit a nonlinear voltage-current relationship and hence are known as non-ohmic materials. The figure below shows current-voltage relationships for the two types of materials.
Clearly the resistance of an ohmic material (shown in (a)) remains constant and can be calculated by finding the slope of the graph, but that is not true for a non-ohmic material (shown in (b)).
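Since an ohmic material's graph is a straight line through the origin, a measured resistance can be estimated as that slope. A minimal Python sketch; the voltage and current readings are invented for illustration:

```python
# Estimate R as the slope of the V-versus-I line for an ohmic resistor.
# The readings below are invented example data (true R near 5 ohms).
currents = [0.5, 1.0, 1.5, 2.0, 2.5]      # amperes
voltages = [2.4, 5.1, 7.4, 10.2, 12.4]    # volts, with measurement noise

# Least-squares slope of a line through the origin: R = sum(V*I) / sum(I*I).
R = sum(v * i for v, i in zip(voltages, currents)) / sum(i * i for i in currents)
print(f"estimated resistance: {R:.2f} ohms")   # about 5.00 ohms
```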
What is the resistance of an automobile headlight through which 2.50 A flows when 12.0 V is applied to it?
We can rearrange Ohm's law as stated by I = V/R and use it to find the resistance.
Rearranging I = V/R and substituting known values gives
R = V/I = 12.0 V / 2.50 A = 4.80 Ω.
This is a relatively small resistance, but it is larger than the cold resistance of the headlight. As we shall see in Resistance and Resistivity, resistance usually increases with temperature, and so the bulb has a lower resistance when it is first switched on and will draw considerably more current during its brief warm-up period.
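The same calculation takes a couple of lines of Python (values from the example above):

```python
V = 12.0   # volts applied across the headlight
I = 2.50   # amperes flowing through it
print(f"R = V/I = {V / I:.2f} ohms")   # 4.80 ohms, the hot resistance of the bulb
```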
Resistances range over many orders of magnitude. Some ceramic insulators, such as those used to support power lines, have resistances of 10^12 Ω or more. A dry person may have a hand-to-foot resistance of 10^5 Ω, whereas the resistance of the human heart is about 10^3 Ω. A meter-long piece of large-diameter copper wire may have a resistance of 10^-5 Ω, and superconductors have no resistance at all (they are non-ohmic). Resistance is related to the shape of an object and the material of which it is composed, as will be seen in Resistance and Resistivity.
Additional insight is gained by solving I = V/R for V, yielding
V = IR.
This expression for V can be interpreted as the voltage drop across a resistor produced by the current I. The phrase IR drop is often used for this voltage. For instance, the headlight in Example 20.4 has an IR drop of 12.0 V. If voltage is measured at various points in a circuit, it will be seen to increase at the voltage source and decrease at the resistor. Voltage is similar to fluid pressure. The voltage source is like a pump, creating a pressure difference, causing current—the flow of charge. The resistor is like a pipe that reduces pressure and limits flow because of its resistance. Conservation of energy has important consequences here. The voltage source supplies energy (causing an electric field and a current), and the resistor converts it to another form (such as thermal energy). In a simple circuit (one with a single simple resistor), the voltage supplied by the source equals the voltage drop across the resistor, since V = IR, and the same I flows through each. Thus the energy supplied by the voltage source and the energy converted by the resistor are equal. (See Figure 20.11.)
In a simple electrical circuit, the sole resistor converts energy supplied by the source into another form. Conservation of energy is evidenced here by the fact that all of the energy supplied by the source is converted to another form by the resistor alone. We will find that conservation of energy has other important applications in circuits and is a powerful tool in circuit analysis.
See how the equation form of Ohm's law relates to a simple circuit. Adjust the voltage and resistance, and see the current change according to Ohm's law. The sizes of the symbols in the equation change to match the circuit diagram. |
CIRCLE OF FIFTHS
4.7. The Circle of Fifths*
The circle of fifths is a way to arrange keys to show how closely they are related to each other.
Figure 4.58. Circle of Fifths
The major key for each key signature is shown as a capital letter; the minor key as a small letter. In theory, one could continue around the circle adding flats or sharps (so that B major is also C flat major, with seven flats, E major is also F flat major, with 6 flats and a double flat, and so on), but in practice such key signatures are very rare.
Keys are not considered closely related to each other if they are near each other in the chromatic scale (or on a keyboard). What makes two keys “closely related” is having similar key signatures. So the most closely related key to C major, for example, is A minor, since they have the same key signature (no sharps and no flats). This puts them in the same “slice” of the circle. The next most closely related keys to C major would be G major (or E minor), with one sharp, and F major (or D minor), with only one flat. The keys that are most distant from C major, with six sharps or six flats, are on the opposite side of the circle.
The circle of fifths gets its name from the fact that as you go from one section of the circle to the next, you are going up or down by an interval of a perfect fifth. If you go up a perfect fifth (clockwise in the circle), you get the key that has one more sharp or one less flat; if you go down a perfect fifth (counterclockwise), you get the key that has one more flat or one less sharp. Since going down by a perfect fifth is the same as going up by a perfect fourth, the counterclockwise direction is sometimes referred to as a “circle of fourths”. (Please review inverted intervals if this is confusing.)
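Because the rule is so mechanical, the whole circle can be generated by stepping along the chain of fifths. A minimal Python sketch; the chain-of-letters encoding is one convenient convention, not anything from the text:

```python
# Generate major key names around the circle of fifths.  The chain of
# fifths repeats the letters F C G D A E B; each full wrap upward adds
# a sharp to the key name, and each wrap downward adds a flat.
LETTERS = ["F", "C", "G", "D", "A", "E", "B"]

def key_on_chain(steps_from_c):
    """Name of the major key `steps_from_c` perfect fifths above C
    (negative values step downward, into the flat keys)."""
    pos = 1 + steps_from_c            # C sits at position 1 on the chain
    letter = LETTERS[pos % 7]
    accidentals = pos // 7            # wraps past B gain sharps; below F gain flats
    return letter + ("#" * accidentals if accidentals >= 0 else "b" * -accidentals)

for n in range(-6, 7):                # from 6 flats up to 6 sharps
    sig = f"{n} sharps" if n >= 0 else f"{-n} flats"
    print(f"{key_on_chain(n):>2} major: {sig}")
```

Running it reproduces the circle: C major with no accidentals, G major with one sharp up through F# major with six, and F major with one flat down through Gb major with six.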
The key of D major has two sharps. Using the circle of fifths, we find that the most closely related major keys (one in each direction) are G major, with only one sharp, and A major, with three sharps. The relative minors of all of these keys (B minor, E minor, and F sharp minor) are also closely related to D major.
What are the keys most closely related to E flat major? To A minor?
Name the major and minor keys for each key signature.
If you do not know the order of the sharps and flats, you can also use the circle of fifths to find these. The first sharp in a key signature is always F sharp; the second sharp in a key signature is always (a perfect fifth away) C sharp; the third is always G sharp, and so on, all the way to B sharp.
The first flat in a key signature is always B flat (the same as the last sharp); the second is always E flat, and so on, all the way to F flat. Notice that, just as with the key signatures, you add sharps or subtract flats as you go clockwise around the circle, and add flats or subtract sharps as you go counterclockwise.
Figure 4.60. Adding Sharps and Flats to the Key Signature
Each sharp and flat that is added to a key signature is also a perfect fifth away from the last sharp or flat that was added.
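The order of sharps and flats follows the same fifth-stepping, so it can be written down directly. A small Python sketch of that observation:

```python
# The order of sharps runs by perfect fifths starting from F#; the
# order of flats is the same letter sequence reversed.
CHAIN = ["F", "C", "G", "D", "A", "E", "B"]

order_of_sharps = [note + "#" for note in CHAIN]           # F# C# G# D# A# E# B#
order_of_flats = [note + "b" for note in reversed(CHAIN)]  # Bb Eb Ab Db Gb Cb Fb

print("sharps:", " ".join(order_of_sharps))
print("flats: ", " ".join(order_of_flats))
```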
Figure 4.58 shows that D major has 2 sharps; Figure 4.60 shows that they are F sharp and C sharp. After D major, name the next four sharp keys, and name the sharp that is added with each key.
E minor is the first sharp minor key; the first sharp added in both major and minor keys is always F sharp. Name the next three sharp minor keys, and the sharp that is added in each key.
After B flat major, name the next four flat keys, and name the flat that is added with each key.
Solutions to Exercises
Solution to Exercise 4.7.1.
E flat major (3 flats):
- B flat major (2 flats)
- A flat major (4 flats)
- C minor (3 flats)
- G minor (2 flats)
- F minor (4 flats)
A minor (no sharps or flats):
- E minor (1 sharp)
- D minor (1 flat)
- C major (no sharps or flats)
- G major (1 sharp)
- F major (1 flat)
Solution to Exercise 4.7.2.
Solution to Exercise 4.7.3.
- A major adds G sharp
- E major adds D sharp
- B major adds A sharp
- F sharp major adds E sharp
Solution to Exercise 4.7.4.
- B minor adds C sharp
- F sharp minor adds G sharp
- C sharp minor adds D sharp
Solution to Exercise 4.7.5.
- E flat major adds A flat
- A flat major adds D flat
- D flat major adds G flat
- G flat major adds C flat
4.8. Scales that aren’t Major or Minor*
Sounds – ordinary, everyday “noises” – come in every conceivable pitch and groups of pitches. In fact, the essence of noise, “white noise”, is basically every pitch at once, so that no particular pitch is heard.
One of the things that makes music pleasant to hear and easy to “understand” is that only a few of all the possible pitches are used. But not all pieces of music use the same set of pitches. In order to be familiar with the particular notes that a piece of music is likely to use, musicians study scales.
The set of expected pitches for a piece of music can be arranged into a scale. In a scale, the pitches are usually arranged from lowest to highest (or highest to lowest), in a pattern that usually repeats within every octave.
Note
In some kinds of music, the notes of a particular scale are the only notes allowed in a given piece of music. In other music traditions, notes from outside the scale (accidentals) are allowed, but are usually much less common than the scale notes.
The set of pitches, or notes, that are used, and their relationships to each other, makes a big impact on how the music sounds. For example, for centuries, most Western music has been based on major and minor scales. That is one of the things that makes it instantly recognizable as Western music. Much (though not all) of the music of eastern Asia, on the other hand, was for many centuries based on pentatonic scales, giving it a much different flavor that is also easy to recognize.
Some of the more commonly used scales that are not major or minor are introduced here. Pentatonic scales are often associated with eastern Asia, but many other music traditions also use them. Blues scales, used in blues, jazz, and other African-American traditions, grew out of a compromise between European and African scales. Some of the scales that sound “exotic” to the Western ear are taken from the musical traditions of eastern Europe, the Middle East, and western Asia. Microtones can be found in some traditional musics (for example, Indian classical music) and in some modern art music.
Note
Some music traditions, such as Indian and medieval European, use modes or ragas, which are not quite the same as scales. Please see Modes and Ragas.
Scales and Western Music
The Western musical tradition that developed in Europe after the middle ages is based on major and minor scales, but there are other scales that are a part of this tradition.
In the chromatic scale, every interval is a half step. This scale gives all the sharp, flat, and natural notes commonly used in all Western music. It is also the twelve-tone scale used by twentieth-century composers to create their atonal music. Young instrumentalists are encouraged to practice playing the chromatic scale in order to ensure that they know the fingerings for all the notes. Listen to a chromatic scale.
Figure 4.65. Chromatic Scale
The chromatic scale includes all the pitches normally found in Western music. Note that, because of enharmonic spelling, many of these pitches could be written in a different way (for example, using flats instead of sharps).
In a whole tone scale, every interval is a whole step. In both the chromatic and the whole tone scales, all the intervals are the same. This results in scales that have no tonal center; no note feels more or less important than the others. Because of this, most traditional and popular Western music uses major or minor scales rather than the chromatic or whole tone scales. But composers who don’t want their music to have a tonal center (for example, many composers of “modern classical” music) often use these scales. Listen to a whole tone scale.
Figure 4.66. A Whole Tone Scale
Because all the intervals are the same, it doesn’t matter much where you begin a chromatic or whole tone scale. For example, this scale would contain the same notes whether you start it on C or E.
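That claim, and the "two possible whole tone scales" mentioned in the exercise below, can be verified by brute force over pitch classes (0 = C, 1 = C sharp, and so on). A minimal Python sketch:

```python
# Build a whole tone scale (six notes, two semitones apart) from each
# of the 12 starting pitch classes and count the distinct pitch sets.
scales = {frozenset((start + 2 * i) % 12 for i in range(6)) for start in range(12)}
print(len(scales))   # 2: every whole tone scale is one of two pitch sets
```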
There is basically only one chromatic scale; you can start it on any note, but the pitches will end up being the same as the pitches in any other chromatic scale. There are basically two possible whole tone scales. Beginning on a b, write a whole tone scale that uses different pitches than the one in Figure 4.66. If you need staff paper, you can download this PDF file.
Now write a whole tone scale beginning on an a flat. Is this scale essentially the same as the one in Figure 4.75 or the one in Figure 4.66?
In Western music, there are twelve pitches within each octave. (The thirteenth note starts the next octave.) But in a tonal piece of music only seven of these notes, the seven notes of a major or minor scale, are used often.
In a pentatonic scale, only five of the possible pitches within an octave are used. (So the scale will repeat starting at the sixth tone.) The most familiar pentatonic scales are used in much of the music of eastern Asia. You may be familiar with the scale in Figure 4.67 as the scale that is produced when you play all the “black keys” on a piano keyboard.
Figure 4.67. A Familiar Pentatonic Scale
This is the pentatonic scale you get when you play the “black keys” on a piano.
Listen to the black key pentatonic scale. Like other scales, this pentatonic scale is transposable; you can move the entire scale up or down by a half step or a major third or any interval you like. The scale will sound higher or lower, but other than that it will sound the same, because the pattern of intervals between the notes (half steps, whole steps, and minor thirds) is the same. (For more on intervals, see Half Steps and Whole Steps and Interval. For more on patterns of intervals within scales, see Major Scales and Minor Scales.) Now listen to a transposed pentatonic scale.
Figure 4.68. Transposed Pentatonic Scale
This is simply a transposition of the scale in Figure 4.67
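Transposition amounts to adding the same number of semitones to every note (modulo the octave), which is why the interval pattern survives. A small Python sketch over pitch classes; spelling the black-key scale from F sharp is an assumption about Figure 4.67:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def transpose(pitch_classes, semitones):
    """Shift every note by the same interval; the pattern of steps
    between the notes (the scale's character) is unchanged."""
    return [(p + semitones) % 12 for p in pitch_classes]

black_keys = [6, 8, 10, 1, 3]                  # F# G# A# C# D#, one spelling
up_half_step = transpose(black_keys, 1)
print([NOTE_NAMES[p] for p in up_half_step])   # ['G', 'A', 'B', 'D', 'E']
```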
But this is not the only possible type of pentatonic scale. Any scale that uses only five notes within one octave is a pentatonic scale. The following pentatonic scale, for example, is not simply another transposition of the “black key” pentatonic scale; the pattern of intervals between the notes is different. Listen to this different pentatonic scale.
Figure 4.69. Different Pentatonic Scale
This pentatonic scale is not a transposed version of Figure 4.67. It has a different set of intervals.
The point here is that music based on the pentatonic scale in Figure 4.67 will sound very different from music based on the pentatonic scale in Figure 4.69, because the relationships between the notes are different, much as music in a minor key is noticeably different from music in a major key. So there are quite a few different possible pentatonic scales that will produce a recognizably “unique sound”, and many of these possible five-note scales have been named and used in various music traditions around the world.
To get a feeling for the concepts in this section, try composing some short pieces using the pentatonic scales given in Figure 4.67 and in Figure 4.69. You may use more than one octave of each scale, but use only one scale for each piece. As you are composing, listen for how the constraints of using only those five notes, with those pitch relationships, affect your music. See if you can play your Figure 4.67 composition in a different key, for example, using the scale in Figure 4.68.
Dividing the Octave, More or Less
Any scale will list a certain number of notes within an octave. For major and minor scales, there are seven notes; for pentatonic, five; for a chromatic scale, twelve. Although some divisions are more common than others, any division can be imagined, and many are used in different musical traditions around the world. For example, the classical music of India recognizes twenty-two different possible pitches within an octave; each raga uses five, six, or seven of these possible pitches. (Please see Indian Classical Music: Tuning and Ragas for more on this.) And there are some traditions in Africa that use six or eight notes within an octave. Listen to one possible eight-tone, or octatonic scale.
Figure 4.70. An Octatonic Scale
Many Non-Western traditions, besides using different scales, also use different tuning systems; the intervals in the scales may involve quarter tones (a half of a half step), for example, or other intervals we don’t use. Even trying to write them in common notation can be a bit misleading.
Microtones are intervals smaller than a half step. Besides being necessary to describe the scales and tuning systems of many Non-Western traditions, they have also been used in modern Western classical music, and are also used in African-American traditions such as jazz and blues. As of this writing, the Huygens-Fokker Foundation was a good place to start looking for information on microtonal music.
Constructing a Blues Scale
Blues scales are closely related to pentatonic scales. (Some versions are pentatonic.) Rearrange the pentatonic scale in Figure 4.68 above so that it begins on the C, and add an F sharp in between the F and G, and you have a commonly used version of the blues scale. Listen to this blues scale.
Blues scales are closely related to pentatonic scales.
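Reading the rearranged pentatonic as C, E flat, F, G, B flat (an assumption, since Figure 4.68 is not reproduced here), the construction looks like this in Python:

```python
# One common C blues scale: a minor pentatonic with an added F#
# ("blue note") inserted between F and G.  The pentatonic spelling
# is an assumption; the referenced figure is not reproduced here.
pentatonic = ["C", "Eb", "F", "G", "Bb"]
blues = pentatonic[:3] + ["F#"] + pentatonic[3:]
print(blues)   # ['C', 'Eb', 'F', 'F#', 'G', 'Bb']
```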
Modes and Ragas
Many music traditions do not use scales. The most familiar of these to the Western listener are medieval chant and the classical music of India. In these and other modal traditions, the rules for constructing a piece of music are quite different than the rules for music that is based on a scale. Please see Modes and Ragas for more information.
There are many, many other possible scales that are not part of the major-minor system. Some, like pentatonic and octatonic scales, have fewer or more notes per octave, but many have seven tones, just as a major scale does. A scale may be chosen or constructed by a composer for certain intriguing characteristics, for the types of melodies or harmonies that the scale enables, or just for the interesting or pleasant sound of music created using the scale.
For example, one class of scales that intrigues some composers is symmetrical scales. The chromatic scale and whole tone scales fall into this category, but other symmetrical scales can also be constructed. A diminished scale, for example, not only has the “symmetrical” quality; it is also a very useful scale if, for example, you are improvising a jazz solo over diminished chords.
Figure 4.72. A Diminished Scale
Like chromatic and whole tone scales, a diminished scale is “symmetrical”.
Some scales are loosely based on the music of other cultures, and are used when the composer wants to evoke the music of another place or time. These scales are often borrowed from Non-western traditions, but are then used in ways typical of Western music. Since they usually ignore the tuning, melodic forms, and other aesthetic principles of the traditions that they are borrowed from, such uses of “exotic” scales should not be considered accurate representations of those traditions. There are examples in world music, however, in which the Non-western scale or mode is used in an authentic way. Although there is general agreement about the names of some commonly used “exotic” scales, they are not at all standardized. Often the name of a scale simply reflects what it sounds like to the person using it, and the same name may be applied to different scales, or different names to the same scale.
Figure 4.73. Some “Exotic” Scales
You may want to experiment with some of the many scales possible. Listen to one version each of: “diminished” scale, “enigmatic” scale, “Romanian” Scale, “Persian” scale and “Hungarian Major” Scale. For even more possibilities, try a web search for “exotic scales”; or try inventing your own scales and using them in compositions and improvisations.
Figure 4.74. An “Enigmatic” Scale
Solutions to Exercises
Solution to Exercise 4.8.1.
This whole tone scale contains the notes that are not in the whole tone scale in Figure 4.66.
Solution to Exercise 4.8.2.
The flats in one scale are the enharmonic equivalents of the sharps in the other scale.
Assuming that octaves don’t matter – as they usually don’t in Western music theory – this scale shares all of its possible pitches with the scale in Figure 4.66.
Solution to Exercise 4.8.3.
If you can, have your teacher listen to your compositions. |
Computable Problems –
You are familiar with many problems (or functions) that are computable (or decidable), meaning there exists some algorithm that computes an answer (or output) to any instance of the problem (or for any input to the function) in a finite number of simple steps. A simple example is the integer increment operation:
f(x) = x + 1
It should be intuitive that given any integer x, we can compute x + 1 in a finite number of steps. Since x is finite, it may be represented by a finite string of digits. Using the addition method (or algorithm) we all learned in school, we can clearly compute another string of digits representing the integer equivalent to x + 1.
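That schoolbook addition method is itself an algorithm, and writing it out makes the finite-number-of-steps claim concrete. A minimal Python sketch for non-negative integers represented as digit strings:

```python
def increment(digits: str) -> str:
    """Compute x + 1 on a decimal digit string, right to left with a
    carry -- at most one pass, so the step count is finite in len(x)."""
    result = list(digits)
    i = len(result) - 1
    while i >= 0:
        if result[i] == "9":
            result[i] = "0"        # carry propagates one place left
            i -= 1
        else:
            result[i] = str(int(result[i]) + 1)
            return "".join(result)
    return "1" + "".join(result)   # all nines: e.g. 999 + 1 = 1000

print(increment("1299"))   # 1300
```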
Yet there are also problems and functions that are non-computable (or undecidable or uncomputable), meaning that there exists no algorithm that can compute an answer or output for all inputs in a finite number of simple steps. (Undecidable simply means non-computable in the context of a decision problem, whose answer (or output) is either “true” or “false”).
Non-Computable Problems –
A non-computable problem is a problem for which there is no algorithm that can be used to solve it. The most famous example of non-computability (or undecidability) is the Halting Problem: given a description of a Turing machine and its initial input, determine whether the program, when executed on this input, ever halts (completes).
The alternative is that it runs forever without halting. In other words, the halting problem asks whether a machine, given a certain input, will ever come to a halt or finish running. The input itself can be a program that keeps calling itself forever, in which case the program will run forever.
Another example of an uncomputable problem is determining whether a computer program loops forever on some input. (You can replace “computer program” with “Turing machine” or “algorithm” if you are familiar with Turing machines.)
Proving Computability or Non-Computability –
We can show that a problem is computable by describing a procedure and proving that the procedure always terminates and always produces the correct answer. It is enough to provide a convincing argument that such a procedure exists; finding the actual procedure is not necessary (but often helps to make the argument more convincing).
To show that a problem is not computable, we need to show that no algorithm exists that solves the problem. Since there are an infinite number of possible procedures, we cannot just list all possible procedures and show why each one does not solve the problem. Instead, we need to construct an argument showing that if there were such an algorithm it would lead to a contradiction.
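For the Halting Problem itself, the contradiction can be sketched directly. In the Python sketch below, `halts` is a hypothetical decider assumed only so the paradox can be exhibited; no such function can actually be written:

```python
def halts(program, argument):
    """Hypothetical decider: returns True iff program(argument) halts.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    return               # predicted to loop -> halt immediately

# Does paradox(paradox) halt?  If halts says yes, paradox loops; if
# halts says no, paradox halts.  Either answer is wrong, so no
# correct, always-terminating `halts` can exist.
```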
The core of our argument is based on knowing the Halting Problem is non-computable. If a solution to some new problem P could be used to solve the Halting Problem, then we know that P is also non-computable. That is, no algorithm exists that can solve P, since if such an algorithm existed it could be used to also solve the Halting Problem, which we already know is impossible. The proof technique where we show that a solution for some problem P can be used to solve a different problem Q is known as a reduction. A problem Q is reducible to a problem P if a solution to P could be used to solve Q. This means that problem Q is no harder than problem P, since a solution to problem P leads to a solution to problem Q.
Some Examples On Computable Problems –
These are four simple examples of computable problem:
- Computing the greatest common divisor of a pair of integers (see the sketch after this list).
- Computing the least common multiple of a pair of integers.
- Finding the shortest path between a pair of nodes in a finite graph.
- Determining whether a propositional formula is a tautology.
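The first two are worth making concrete. In the Python sketch below, Euclid's algorithm terminates because the remainder strictly shrinks on every pass, which is exactly the finite-number-of-steps requirement:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the remainder strictly decreases each
    iteration, so the loop always finishes in finitely many steps."""
    while b:
        a, b = b, a % b
    return a

def lcm(a: int, b: int) -> int:
    """Least common multiple, via the identity a*b = gcd(a,b)*lcm(a,b)."""
    return a * b // gcd(a, b)

print(gcd(48, 18), lcm(48, 18))   # 6 144
```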
An Example of a Non-Computable Problem – The State Entry Problem.
Consider the problem of determining whether, when a string w is given to some Turing machine M, the machine will ever enter some state q (where q belongs to the set of all states of M, and w is not the empty string). Is this computable or non-computable?
We show the State Entry Problem is non-computable by showing that it is as hard as the Halting Problem, which we already know is non-computable.
The State Entry Problem asks whether, starting from the initial state of Turing machine M on input string w, the machine will ever reach the state q. The Halting Problem can be converted into this problem: modify M so that every halting move instead enters the designated state q; then M halts on w exactly when the modified machine enters q on w. If we could decide state entry, we could therefore also decide halting. Since the Halting Problem is already known to be non-computable, the State Entry Problem is non-computable as well. This is how a reduction proves non-computability.
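To make the reduction concrete, here is a minimal Python sketch. Both `enters_state` and `add_unique_halt_state` are hypothetical names invented for illustration; only the shape of the argument matters:

```python
def enters_state(machine, string, state):
    """Hypothetical State Entry decider: True iff `machine`, run on
    `string`, ever enters `state`.  Shown below to be impossible."""
    raise NotImplementedError

def add_unique_halt_state(machine):
    """Hypothetical transformation: returns (m_prime, q_halt), where
    m_prime behaves like `machine` but routes every halting move into
    the single new state q_halt."""
    raise NotImplementedError

def halts(machine, string):
    # Reduction: deciding whether m_prime ever enters q_halt is
    # exactly deciding whether `machine` halts on `string`.
    m_prime, q_halt = add_unique_halt_state(machine)
    return enters_state(m_prime, string, q_halt)
```

Any decider for state entry would thus yield a decider for halting, completing the reduction. |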
About This Chapter
Logic in Mathematics - Chapter Summary
Go through the chapter's lessons to develop a deeper understanding of mathematical induction. You will get to look at the uses of this type of induction and review some of its proofs.
Moving forward in the chapter, you'll examine the axiomatic system. Explore the properties of this system and see what it looks like by analyzing some examples. Once you have finished reviewing topics from this chapter, you should be able to:
- Define mathematical induction
- Describe the axiomatic system
- Identify the developments and postulates of Euclid's axiomatic geometry
The lessons in this chapter make learning about logic math principles a fun experience. All of the videos are taught by professional instructors who can answer any questions you may have. In order to see what type of math skills and knowledge you possess after finishing this chapter, use the self-assessment quizzes.
1. Mathematical Induction: Uses & Proofs
Watch this video lesson to learn about mathematical induction and how you can use it to prove mathematical statements. See how it is similar to falling dominoes.
2. The Axiomatic System: Definition & Properties
Learn what kinds of things are included in an axiomatic system in this video lesson. Also learn why consistency, independence, and completeness are important in axiomatic systems.
3. Euclid's Axiomatic Geometry: Developments & Postulates
Learn how the way we do proofs in geometry had its start with Euclid in this video lesson. Learn about his contributions to the geometry we know today. Also learn about the five basic truths that he used as a basis for all other teachings.
Other chapters within the MTTC Professional Readiness Examination (096): Practice & Study Guide course
- Basic Number Sense
- Fundamentals of Calculation
- Mathematical Reasoning & Problem-Solving
- Expressions, Functions & Operations
- Expressions & Equations in Algebra
- Understanding Algebraic Functions
- Using Algebraic Functions
- Polynomials, Rational Equations & Trigonometric Equations
- Introduction to Basic Geometry
- Measuring Geometric Figures
- Relationships Between Figures in Geometry
- Basics of Trigonometry
- Fundamentals of Statistics
- Bivariate Relationships in Statistics
- Using Tables & Graphs
- Interpreting Statistical Data
- Probability Overview
- Introduction to Conditional Probability & Diagrams
- Structure, Analysis & Word Meanings in English
- Literary Themes & Main Ideas
- Strategies for Effective Writing
- Reading Strategies & Literary Analysis
- Critical Reasoning for Test-Taking
- Conventions in Writing: Effective Usage
- Constructing Sentences & Paragraphs
- Argumentative Writing Overview
- Structuring Informational & Explanatory Texts
- MTTC Professional Readiness Flashcards |
Unit Plan Title: We Are All Time Travelers Length: Ten Class Periods
||Grade Level Expectations
|1. Observe and Learn to Comprehend
|Artists make choices that communicate their intent and ideas in works of art
Characteristics and expressive features of art and design are used to identify and discuss works of art and artistic intent
|2. Envision and Critique to Reflect
|Visual arts use various literacies to convey intended meaning
Artists, viewers, and patrons use the language of art to respond to their own art and the art of others
Artists, viewers, and patrons make connections among the characteristics, expressive features, and purposes of art and design
|3. Invent and Discover to Create
|Use familiar symbols to identify and demonstrate characteristics and expressive features of art and design
Use basic media to express ideas through the art-making process
Demonstrate basic studio skills
|4. Relate and Connect to Transfer
|Visual arts respond to human experience by relating art to the community
Works of art connect individual ideas to make meaning
Historical and cultural ideas are evident in works of art
Colorado 21st Century Skills
Critical Thinking and Reasoning: Think Deep, Think Different
Information Literacy: Untangling the Web
Collaboration: Working Together, Learning Together
Self-Direction: Owning Your Learning
Invention: Creating Solutions
Creative Process in Visual Art
Develop Craft: Learning to use materials, tools and techniques
Engage and Persist: Learning to embrace problems and not give up
Envision: Imagine the possible next steps; see what is not there
Express: Convey an idea, feeling, personal meaning
Observe: Seeing things that otherwise might not be seen
Reflect: think, talk and evaluate your work and the work of others
Stretch and Explore: Reach beyond one’s perceived capacities
Understand Art World: Learn about contemporary and past art(ist)
|Lesson Titles and Description
|Exploring the Past: The Mad Archaeologist: students will explore the relationship between time, culture, and storytelling. Exploration will involve examining ancient art (such as cave paintings) as well as Cantimpré-Medieval Bestiary and Enrique Gomez Molina to display the relationship between culture and storytelling. The emphasis will be on creating a 2D creature that could have been documented by an imaginary civilization of the past. Students will begin by reading an article about a newly discovered place that is home to many previously undiscovered animals and will act as archaeologists putting together clues to create a new creature. Students will replicate this creature using pastels and watercolors, scratch art, and printmaking
|Discovering the Present: Mask Making: students will explore the relationship between culture, identity, and expression of ideas. Exploration will involve examination of masks from many different cultures and how their structure relates to purpose. The emphasis is on creating a mixed media mask that expresses an aspect of one’s identity. They will create the mask using plaster gauze and utilize painting techniques. They will learn color mixing including primary colors, secondary colors, intermediate colors, and wet and dry brush techniques. Students will look at Wendy Sollod, Estella Loretto, and Frank Smith. Students will be motivated by a video about identity and masks exhibited at the FCMOA.
|Imagining the Future: Dream Homes: students will explore the relationship between systems/structures, intent/purpose and expression of ideas. Exploration will involve examination of buildings from different cultures and with different purposes. The students will construct their dream home models from clay, construction paper, cardboard, paper towel rolls, tissue and newspaper, disposable plates, as well as acrylic and water color paints. The emphasis is on creating a 3D dream home that displays intent and expresses personal interests. Students will look at art by Frank Gehry, Frank Lloyd Wright, and Gaudi.
|Unit: Focusing Lens/Lenses: Timeless, Transferrable and Universal (I.E. Beliefs/Values, Identity, Relationships, Tension/Conflict, Freedom, Design, Aesthetic, Patterns, Origins, Transformation, Change, Influence, Collaboration, Intention, Play/Exploration, Synergy/Flow, Choices, Balance, Inspiration, System, Structure/Function, Reform)
||Unit: Prepared Graduate
|COMPREHEND: Analyze, interpret, and make meaning of art and design critically using oral and written discourse
COMPREHEND: Explain, demonstrate, and interpret a range of purposes of art and design, recognizing that the making and study of art and design can be approached from a variety of viewpoints, intelligences, and perspectives
REFLECT: Identify, compare, and interpret works of art derived from historical and cultural settings, time periods, and cultural contexts
REFLECT: Identify, compare and justify that the visual arts are a way to acknowledge, exhibit and learn about the diversity of peoples, cultures and ideas
REFLECT: Explain, compare and justify that the visual arts are connected to other disciplines, the other art forms, social activities, mass media, and careers in art and non-art related arenas
CREATE: Recognize, interpret, and validate that the creative process builds on the development of ideas through a process of inquiry, discovery, and research
CREATE: Develop and build appropriate mastery in art-making skills, using traditional and new technologies and an understanding of the characteristics and expressive features of art and design
CREATE: Recognize, compare, and affirm that the making and study of art and design can be approached from a variety of viewpoints, intelligences, and perspectives
TRANSFER: Critique personal work and the work of others with informed criteria
TRANSFER: Recognize, articulate, and implement critical thinking in the visual arts by synthesizing, evaluating, and analyzing visual information
|Unit: Standards and Grade Level Expectations
(Unit must have all standards; NOT all GLEs.)
|(Visual Arts Standard # – Name; GLE #, # and #)
1. Artists make choices that communicate their intent and ideas in works of art.
2. Characteristics and expressive features of art and design are used to identify and discuss works of art
1. Visual arts use literacies to convey intended meaning.
2. Artists, viewers, and patrons make connections among the characteristics, expressive features, and purposes of art and design
1. Use familiar symbols to identify and demonstrate characteristics and expressive features of art and design.
2. Use basic media to express ideas through the art-making process
3. Demonstrate basic studio skills
1. Visual arts respond to human experience by relating art to the community.
2. Works of art connect individual ideas to make meaning
|Unit: Inquiry Questions
(Engaging-Debatable: In art, what does it mean when something is beautiful? How can something be so ugly it is beautiful?)
|◼ Why do we tell stories? How can art tell stories?
◼ Why do artists work together?
◼ What kinds of things change over time? How do they change?
◼ How can we use our tools to show our ideas?
◼ What makes you who you are? Did you think of other people (like your family or friends)? Can you answer with
◼ How can art show who you are?
◼ How does something look relate to what it does?
◼ Can you always tell what a building is for simply by looking at it?
◼ How do buildings convey function?
◼ How do the materials change or affect the appearance or meaning?
◼ What does a mask say about the person who wears it or made it?
◼ How would you use art to tell others about you?
|Unit: Concepts: Timeless, Transferrable and Universal (I.E. Composition, Patterns, Technique, Rhythm, Paradox, Influence, Style, Force, Culture, Space/Time/Energy, Line, Law/Rules, Value, Expressions, Emotions, Tradition, Symbol, Movement, Shape, Improvisation, Observation)
strategic tool use
communication/expression of ideas
expressive features and characteristics of art
culture & community
visual & spatial thinking
systems & structures
|For each statement you create below align with Standard(s), Prepared Graduate Competencies, and Grade Level Expectations. Refer to Standards: Inquiry Questions, Relevance and Application and Nature of Statement when writing understandings.
|Enduring Understandings: My students will UNDERSTAND…
(Timeless, Transferrable and Universal. Shows a relationship between two or more concepts.)
|Conceptual Guiding Questions
||Factual Guiding Questions
|Systems/structures represent relationships by showing how artistic representations can tell stories and display purpose
||Why might we tell stories through art making?
Why do we study old art like cave paintings, and things from other cultures?
Why would we use art to express our own identity?
What kind of building forms have you seen where form and function are related?
|How did you come to construct the creature you did using the objects/items you found?
How would you explain where this creature came from?
What is the difference between 2-D & 3-D?
Can you always tell what a building is for simply by looking at it?
How do buildings convey function?
How do we learn about culture(s) through their artifacts (objects)?
|Culture and community inform identity and influence individuals’ art
|How can you use your knowledge of connecting details to make a big picture outside of school?
Why is identity important in art?
How does our identity emerge through our art?
Which do you think is more important: form or function?
How can different masks communicate different identities?
|How did you decide how your creature looked?
What did you learn about the creature from the objects you found?
How does your mask show your identity?
What are your favorite parts of your dream house? Do they fit more with the function or the form of the house?
|Purpose informs the expression of ideas by fitting form to function.
||From the items in lesson 1, how big do you think your creature was? Why?
How might other materials affect how you built your mask? Your house?
Do different materials change what a building might be used for?
|How did you decide what your creature was going to look like?
How did you decide what to put where on your mask?
What kinds of objects represent an identity?
Does your house have any special functions or purposes?
What kinds of tools did you use to make your project?
|Expressive features and characteristics of art convey visual narrative/story in art.
|How does art communicate with people?
What is your art saying or communicating?
Is it better to collaborate to find answers, or find them alone?
How do cultures communicate?
|What do you want to communicate to people with your art?
|Critical Content: My students will KNOW…
(NOT Timeless, Transferrable and Universal. Factual information in the unit [topics] that students must know.)
|Key Skills: What my students will be able to DO…
(Timeless, Transferrable and Universal. What students will do AND be able to transfer to new learning experiences as a result of learning the unit.)
|Artists: Cantimpré-Medieval Bestiary and Enrique Gomez Molina; Wendy Sollod, Estella Loretto, and Frank Smith; Frank Gehry, Frank Lloyd Wright, and Gaudi
How to use and benefits/disadvantages of scratch art, watercolors, pastels, watercolor pencils, monoprints, plate etchings, collage, acrylic paint, clay, assemblage, sculpture, plaster gauze
What masks are used for and how they communicate identity
How form influences function
|Analyze objects to formulate ideas of where they came from and/or why they were used (formulate intended meaning).
Synthesize smaller pieces of information to create a larger sense of understanding.
Evaluate their own identity and traits.
Use personal identity to envision the preferred outcome of a project.
Examine personality traits and how they relate to one's overall identity.
Reflect on artistic choices
Analyze others’ artistic choices
2-D versus 3-D
Slip and Score
||(1st Lesson) Students will formulate stories or ideas by writing about the artifacts they see: what each did, what it looked like, etc.
We can begin the lesson by reading a “news” article about a place with a bunch of undiscovered creatures, and the young archaeologists that would help in the discovery. Each student can have their own copy to help read from.
Students read clues to find parts of a “creature”
Read A Color of His Own
(2nd Lesson) Students will list or write about their own identity in preparation for making their mask (brainstorming essentially)
(3rd Lesson) Students can write about their ideal home.
Students can talk about their dream home using a video camera
A reading and ideation writing will be incorporated into each lesson.
In-process and end-of-project critiques will be utilized throughout the whole unit.
||(3rd Lesson) Students will have to measure and plan the logistics of their home. |
Students face so many problems when it comes to remembering a theorem.
Because a theorem has a complicated statement, many students don't even try to memorize it.
You have to memorize the statement if you want to stand out from the crowd.
You might be thinking I am going to lie to you; well, I am not, because I scored 75/75 marks in Mathematics in 9th class and 74/75 in 10th class. Here I am going to share three tips.
I will tell you everything that played a role to get such excellent marks in theorems as well as all over the Mathematics subject.
Tip 1: Understand the Fundamental of the Theorem
Many students don't understand the basis of the theorem statement and jump directly to memorizing it, which creates enormous problems; memorized this way, the theorem is forgotten sooner or later.
This rule applies everywhere: if you don't know the basics, you're more likely to face problems understanding the concepts built on them.
Let’s come to the point.
In most theorem statements, the first part, before the comma (,), is the GIVEN, and the second part, after the comma, is the TO PROVE.
If the theorem doesn't have a comma, the word THEN acts as the divider.
Sometimes it may be vice versa, which means the TO PROVE is written first and the GIVEN comes after.
I know you will not understand without an example. Look at the below image.
The first sentence, before the comma, is the GIVEN, whereas the part after the word THEN is the TO PROVE.
Pretty simple, right?
I am telling you again: memorize the statement, otherwise many related theorems may confuse you.
Here’s another trick for you, guys!
S.A.S. stands for Side, Angle and Side. It means two sides and the included angle have been shown to be equal.
What does this have to do with memorizing easily?
It matters because many theorems look the same: many statements use the same words, but in different orders, with different GIVEN and TO PROVE parts.
This thing will help you to identify which theorem it is.
If you know which theorem it is, then you are more likely to draw its geometrical diagram and write its GIVEN and TO PROVE correctly, which hold half of the marks.
Pretty great, right?
Tip 2: Revise 30 Minutes a Day To Keep Your Neurons Connected
Everyone knows our brain is in a constant state of forgetting, which is good for us.
Good for us?
Yes, otherwise, we will be occupied with unlimited thoughts and anxiety that will make our life more miserable.
It’s important to tell our brain which thing is more important and which is not.
Because we generally have two types of memory:
- Short-Term Memory
- Long-Term Memory
Short-term memory lasts from a few seconds to a few minutes. It holds what we hear and see whenever we talk, and it doesn't stay in our brain unless we pay firm attention to it.
The other type is long-term memory, which stays in our mind for a more extended period of time, and that only happens if we keep revising things. You may know your phone number by heart because you have revised it many times.
So, if you want a theorem to stay longer in your mind, you need to set aside 30 minutes a day to revise.
When I was in 10th class, my bus used to come 20 minutes after school closed, so I used to revise theorems during that time.
It may seem strange, because all the other students are wasting that time on gossip while you spend quality time with your other friend: the book. 😉
Tip 3: Memorize by Writing on a Rough Copy to Activate More of Your Senses
Have you observed that the more senses you involve in something, the more memorable it becomes?
Okay! You need an example.
A written story doesn't stay in your mind the way a movie does; a movie has sounds, visual scenes, and feelings, so you remember everything. Right?
The same goes for theorems. If you write out what you've memorized, you're more likely to keep it in your mind, because your hands write and feel while your eyes see.
Did you know you can make it even more memorable by attaching some memories and feelings to the theorem?
How is it possible?
It is possible when some imaginary characters are used.
For example, when one theorem says the Angle A is congruent to Angle B, you can imagine that there are two dinosaurs which are equal in size.
You can attach sounds like dinosaur A and dinosaur B have the sound similar to your best friends. The same goes for other angles and sides of a triangle.
On the weekend, you should test yourself on all the congruence theorems you memorized during the regular days. And keep in mind: never forget the daily 30 minutes of revision.
The field of view (FOV) is the angular extent of the observable world that is seen at any given moment. In the case of optical instruments or sensors, it is a solid angle through which a detector is sensitive to electromagnetic radiation. It is further relevant in photography.
In the context of human and primate vision, the term "field of view" is typically only used in the sense of a restriction to what is visible by external apparatus, like when wearing spectacles or virtual reality goggles. Note that eye movements are allowed in the definition but do not change the field of view when understood this way.
If the analogy of the eye's retina working as a sensor is drawn upon, the corresponding concept in human (and much of animal vision) is the visual field. It is defined as "the number of degrees of visual angle during stable fixation of the eyes". Note that eye movements are excluded in the visual field's definition. Humans have a slightly over 210-degree forward-facing horizontal arc of their visual field (i.e. without eye movements), (with eye movements included it is slightly larger, as you can try for yourself by wiggling a finger on the side), while some birds have a complete or nearly complete 360-degree visual field. The vertical range of the visual field in humans is around 150 degrees.
The range of visual abilities is not uniform across the visual field, and by implication the FoV, and varies between species. For example, binocular vision, which is the basis for stereopsis and is important for depth perception, covers 114 degrees (horizontally) of the visual field in humans; the remaining peripheral 40 degrees on each side have no binocular vision (because only one eye can see those parts of the visual field). Some birds have a scant 10 to 20 degrees of binocular vision.
Similarly, color vision and the ability to perceive shape and motion vary across the visual field; in humans color vision and form perception are concentrated in the center of the visual field, while motion perception is only slightly reduced in the periphery and thus has a relative advantage there. The physiological basis for that is the much higher concentration of color-sensitive cone cells and color-sensitive parvocellular retinal ganglion cells in the fovea – the central region of the retina, together with a larger representation in the visual cortex – in comparison to the higher concentration of color-insensitive rod cells and motion-sensitive magnocellular retinal ganglion cells in the visual periphery, and smaller cortical representation. Since rod cells require considerably less light to be activated, the result of this distribution is further that peripheral vision is much more sensitive at night relative to foveal vision (sensitivity is highest at around 20 deg eccentricity).
Many optical instruments, particularly binoculars or spotting scopes, are advertised with their field of view specified in one of two ways: angular field of view, and linear field of view. Angular field of view is typically specified in degrees, while linear field of view is a ratio of lengths. For example, binoculars with a 5.8 degree (angular) field of view might be advertised as having a (linear) field of view of 102 mm per meter. As long as the FOV is less than about 10 degrees or so, the following approximation formulas allow one to convert between linear and angular field of view. Let A be the angular field of view in degrees. Let M be the linear field of view in millimeters per meter. Then, using the small-angle approximation:
M ≈ 17.45 × A, or equivalently, A ≈ 0.0573 × M
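As a quick consistency check against the binocular example above:
M ≈ 17.45 × 5.8 ≈ 101 mm per meter
which agrees with the advertised 102 mm per meter to within rounding.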
In machine vision the lens focal length and image sensor size sets up the fixed relationship between the field of view and the working distance. Field of view is the area of the inspection captured on the camera’s imager. The size of the field of view and the size of the camera’s imager directly affect the image resolution (one determining factor in accuracy). Working distance is the distance between the back of the lens and the target object.
In tomography, the field of view is the area of each tomogram. In computed tomography, for example, a volume of voxels can be created from such tomograms by merging multiple slices along the scan range.
In remote sensing, the solid angle through which a detector element (a pixel sensor) is sensitive to electromagnetic radiation at any one time is called the instantaneous field of view or IFOV. A measure of the spatial resolution of a remote sensing imaging system, it is often expressed as dimensions of visible ground area, for some known sensor altitude. Single-pixel IFOV is closely related to the concepts of resolved pixel size, ground resolved distance, ground sample distance and modulation transfer function.
In astronomy, the field of view is usually expressed as an angular area viewed by the instrument, in square degrees, or for higher magnification instruments, in square arc-minutes. For reference, the Wide Field Channel on the Advanced Camera for Surveys on the Hubble Space Telescope has a field of view of 10 sq. arc-minutes, and the High Resolution Channel of the same instrument has a field of view of 0.15 sq. arc-minutes. Ground-based survey telescopes have much wider fields of view. The photographic plates used by the UK Schmidt Telescope had a field of view of 30 sq. degrees. The 1.8 m (71 in) Pan-STARRS telescope, with the most advanced digital camera to date, has a field of view of 7 sq. degrees. In the near infra-red, WFCAM on UKIRT has a field of view of 0.2 sq. degrees and the VISTA telescope has a field of view of 0.6 sq. degrees. Until recently digital cameras could only cover a small field of view compared to photographic plates, although they beat photographic plates in quantum efficiency, linearity and dynamic range, as well as being much easier to process.
Main article: Angle of view
In photography, the field of view is that part of the world that is visible through the camera at a particular position and orientation in space; objects outside the FOV when the picture is taken are not recorded in the photograph. It is most often expressed as the angular size of the view cone, as an angle of view. For a normal lens, the diagonal (or horizontal or vertical) field of view can be calculated as:
FOV = 2 × arctan(d / (2 × f))
where f is the focal length and d is the sensor size in the corresponding direction (diagonal, horizontal, or vertical); d and f must be in the same unit of length, and the resulting FOV is in radians.
In microscopy, the field of view in high power (usually a 400-fold magnification when referenced in scientific papers) is called a high-power field, and is used as a reference point for various classification schemes.
For an objective with magnification m, the FOV is related to the Field Number (FN) by
FOV = FN / m
If other magnifying lenses are used in the system (in addition to the objective), the total magnification of the projection is used in place of m.
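For illustration (the numbers here are hypothetical, not taken from any particular instrument): an eyepiece with a field number of 22 mm used with a 40× objective gives
FOV = FN / m = 22 mm / 40 = 0.55 mm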
Main article: Field of view in video games
The field of view in video games refers to the field of view of the camera looking at the game world, which is dependent on the scaling method used.
Triangles are plane figures (or polygons) that have only three sides (edges) and three corners (vertices). Along with circles and quadrilaterals (four-sided polygons), they are the most basic of all plane figures. Also, their properties make them one of the most applicable geometric shapes in science and engineering.
There are several basic facts about them you should be aware of. The first fact is that the sum of the measures of all internal angles inside any triangle is exactly 180 degrees. This means that if you know the measure of two angles inside a triangle, you can easily determine the measure of the third one by subtracting the combined size of the two known angles from 180 degrees.
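For example, if two of the angles measure 50 degrees and 60 degrees, the third must measure:
180 - (50 + 60) = 70 degrees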
Secondly, a triangle also has something called exterior angles. Exterior angles are supplementary to their corresponding interior angles and the sum of their measures is always 360 degrees. Also, if you know the measure of an exterior angle, you can calculate the measure of its corresponding interior angle by subtracting the measure of the known angle from 180 degrees.
The third fact to keep in mind is that the sum of the lengths of any two sides of a triangle is larger than the length of the third side and that is a rule with no exceptions. This fact is also known as the principle of triangle inequality.
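For example, if two sides of a triangle measure 3 cm and 4 cm, the triangle inequality forces the third side s to satisfy:
4 - 3 < s < 3 + 4
1 cm < s < 7 cm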
The fourth fact to remember (and it is a very important one) is that two triangles can be similar. They are similar if at least two angles in one triangle are the same as the corresponding angles in the other triangle. They are also similar if all three pairs of corresponding sides are proportional, or if two pairs of corresponding sides are proportional and the angles between them are equal. If the two triangles are the same in size and shape, they are considered congruent.
Types of triangles
We can classify a triangle according to two criteria: by its relative lengths of sides and by its internal angles.
By the relative lengths of its sides, a triangle can be:
- An equilateral triangle – all of its sides are the same length and the measure of its interior angles is the same (60 degrees each).
- An isosceles triangle – two of its sides are equal, as well as their corresponding angles.
- A scalene triangle – all of its sides are unequal. Its angles are also different in measure.
By internal angles, a triangle can be:
- A right triangle – also called a right-angled triangle – is one that has an interior angle measuring 90 degrees. Right triangles obey the Pythagorean theorem and have special names for their sides. The side opposite the right angle is called the hypotenuse, and it is the longest side of the triangle. The other two sides are the legs, or catheti, of the triangle.
- An oblique triangle – the one in which no angles measure 90 degrees.
- An acute triangle – a triangle in which the measures of all angles are smaller than 90 degrees.
- An obtuse triangle – the triangle in which one of the angles is larger than 90 degrees.
Area of a triangle
There are a couple of ways to calculate the area of a triangle (T). The simplest formula is:
T = (b * h)/2
The letter b is the symbol for the length of the base of the triangle, and the letter h marks the height (altitude). This is the formula we will be using in our examples for now. The base of the triangle is the side perpendicular to the height. The height (or altitude) of a triangle is a straight line that passes through a vertex (angle) and is perpendicular to the opposite side (called the base). We will now solve a couple of examples to show how you can use this formula to calculate the area of any triangle whose height and base length are known.
Calculate the area of this scalene triangle. The length of its base is 9.2 cm and its height is 4.3 cm.
This one is very easy. All you have to do is to place the appropriate values inside our formula and finish the calculation.
T = (9.2 * 4.3)/2
T = 39.56 / 2
T = 19.78 cm2
We can now see that the size of the area is 19.78 cm2. It is extremely important to keep your units in check when calculating areas. If the size of the sides is in meters, the result will be in meters squared (m2).
This assignment was pretty straightforward. Let us try to solve one that is a bit more complicated.
Find the area of this triangle. Its height is 6.7 cm and the length of the opposite side is 4 cm.
Now, this one is a bit trickier. You can see that the height does not fall inside the triangle; instead, it meets an extension of what should be the base, forming a larger right triangle. Do not let that confuse you. You can still use the formula we used before, and the same procedure applies.
T = (6.7 * 4) / 2
T = 26.8 / 2
T = 13.4 cm2
You may be asking yourself: “How come?” Well, we will solve it using a more complicated way to show you why this approach still works. First, we will form the equation for calculating the area of the larger right triangle. The height is one leg of the triangle and the extended base is the other. And since two right triangles can form a rectangle, its area is half the area of a rectangle with the legs as its sides.
T1 = (h * (b + x)) / 2
T1 = (6.7 * (4 + x)) / 2
T1 = (26.8 + 6.7 * x) / 2
This is almost as far as we can go with that equation. So now we have to find the area of the smaller right triangle whose legs are the height of the triangle and x (the extension of the base). If we use the same principles as before, the equation for its area (T2) is:
T2 = (6.7 * x) / 2
The only thing left to do now is to subtract the area of the smaller right triangle from the area of the larger right triangle. That should give us the area we are interested in as a result.
T = ((26.8 + 6.7 * x) / 2) – ((6.7 * x) / 2)
T = (26.8 + 6.7 * x – 6.7 * x) / 2
T = 26.8 / 2
T = 13.4 cm2
As you can see, the result is the same as before. So do not let an unusual drawing confuse you. If you know the height of a triangle and the length of its base, you can calculate its area using this formula.
This is the basic information you need to have about a triangle. As we progress and put more lessons on this site, we will expand this article with everything you need to know in order to grow your mathematical skills and knowledge.
If you wish to practice everything you learned about triangles here, please feel free to use the math worksheets below.
Triangles exams for teachers
|Exam Name||File Size||Downloads||Upload date|
Classification – Sides
|Triangles – Classification – Sides – easy||624.3 kB||2244||October 13, 2012|
|Triangles – Classification – Sides – medium||595.8 kB||1678||October 13, 2012|
|Triangles – Classification – Sides – hard||454.1 kB||1465||October 13, 2012|
Classification – Angles
|Triangles – Classification – Angles – easy||621.2 kB||1830||October 13, 2012|
|Triangles – Classification – Angles – medium||610.9 kB||1359||October 13, 2012|
|Triangles – Classification – Angles – hard||454.6 kB||1216||October 13, 2012|
Classification – Sides and angles
|Triangles – Classification – Sides and angles – easy||613.8 kB||1877||October 13, 2012|
|Triangles – Classification – Sides and angles – medium||610.2 kB||1536||October 13, 2012|
|Triangles – Classification – Sides and angles – hard||454.2 kB||1313||October 13, 2012|
|Triangles – Finding angles||528.5 kB||2684||October 13, 2012|
|Triangles – Equations – Angles||541.1 kB||1488||October 13, 2012|
Triangles – Area
|Triangles – Area – very easy||659.3 kB||1127||October 13, 2012|
|Triangles – Area – easy||638.7 kB||1031||October 13, 2012|
|Triangles – Area – medium||653.3 kB||1276||October 13, 2012|
|Triangles – Area – hard||647.2 kB||1103||October 13, 2012|
Triangles worksheets for students
|Worksheet Name||File Size||Downloads||Upload date|
|Triangles – Classification||6.8 MB||1825||October 14, 2012|
|Triangles – Find the missing angle||6.9 MB||3854||October 14, 2012|
|Triangles – Area||6 MB||1349||October 14, 2012| |
How Your Kidneys Work
Why Are the Kidneys So Important?
Most people know that a major function of the kidneys is to remove waste products and excess fluid from the body. These waste products and excess fluid are removed through the urine. The production of urine involves highly complex steps of excretion and re-absorption. This process is necessary to maintain a stable balance of body chemicals.
The critical regulation of the body's salt, potassium and acid content is performed by the kidneys. The kidneys also produce hormones that affect the function of other organs. For example, a hormone produced by the kidneys stimulates red blood cell production. Other hormones produced by the kidneys help regulate blood pressure and control calcium metabolism.
The kidneys are powerful chemical factories that perform the following functions:
- remove waste products from the body
- remove drugs from the body
- balance the body's fluids
- release hormones that regulate blood pressure
- produce an active form of vitamin D that promotes strong, healthy bones
- control the production of red blood cells
Below you will find more information about the kidneys and the vital role they play in keeping your body functioning.
- Where are the kidneys and how do they function?
- Kidney disease causes
- Kidney disease diagnosis
- Kidney disease treatment
- Kidney failure treatment
- What are the warning signs of kidney disease?
Where Are the Kidneys and How Do They Function?
There are two kidneys, each about the size of a fist, located on either side of the spine at the lowest level of the rib cage. Each kidney contains up to a million functioning units called nephrons. A nephron consists of a filtering unit of tiny blood vessels called a glomerulus attached to a tubule. When blood enters the glomerulus, it is filtered and the remaining fluid then passes along the tubule. In the tubule, chemicals and water are either added to or removed from this filtered fluid according to the body's needs, the final product being the urine we excrete.
The kidneys perform their life-sustaining job of filtering and returning to the bloodstream about 200 quarts of fluid every 24 hours. About two quarts are removed from the body in the form of urine, and about 198 quarts are recovered. The urine we excrete has been stored in the bladder for anywhere from 1 to 8 hours.
What Are Some of the Causes of Chronic Kidney Disease?
Chronic kidney disease is defined as having some type of kidney abnormality, or "marker", such as protein in the urine, or having decreased kidney function, for three months or longer.
There are many causes of chronic kidney disease. The kidneys may be affected by diseases such as diabetes and high blood pressure. Some kidney conditions are inherited (run in families).
Others are congenital; that is, individuals may be born with an abnormality that can affect their kidneys. The following are some of the most common types and causes of kidney damage.
Diabetes is a disease in which your body does not make enough insulin or cannot use normal amounts of insulin properly. This results in a high blood sugar level, which can cause problems in many parts of your body. Diabetes is the leading cause of kidney disease.
High blood pressure (also known as hypertension) is another common cause of kidney disease and other complications such as heart attacks and strokes. High blood pressure occurs when the force of blood against your artery walls increases. When high blood pressure is controlled, the risk of complications such as chronic kidney disease is decreased.
Glomerulonephritis is a disease that causes inflammation of the kidney's tiny filtering units called the glomeruli. Glomerulonephritis may happen suddenly, for example, after a strep throat, and the individual may get well again. However, the disease may develop slowly over several years and it may cause progressive loss of kidney function.
Polycystic kidney disease is the most common inherited kidney disease. It is characterized by the formation of kidney cysts that enlarge over time and may cause serious kidney damage and even kidney failure. Other inherited diseases that affect the kidneys include Alport's Syndrome, primary hyperoxaluria and cystinuria.
Kidney stones are very common, and when they pass, they may cause severe pain in your back and side. There are many possible causes of kidney stones, including an inherited disorder that causes too much calcium to be absorbed from foods and urinary tract infections or obstructions. Sometimes, medications and diet can help to prevent recurrent stone formation. In cases where stones are too large to pass, treatments may be done to remove the stones or break them down into small pieces that can pass out of the body.
Urinary tract infections occur when germs enter the urinary tract and cause symptoms such as pain and/or burning during urination and more frequent need to urinate. These infections most often affect the bladder, but they sometimes spread to the kidneys, and they may cause fever and pain in your back.
Congenital diseases may also affect the kidneys. These usually involve some problem that occurs in the urinary tract when a baby is developing in its mother's womb. One of the most common occurs when a valve-like mechanism between the bladder and ureter (urine tube) fails to work properly and allows urine to back up (reflux) to the kidneys, causing infections and possible kidney damage.
Drugs and toxins can also cause kidney problems. Using large amounts of over-the-counter pain relievers for a long time may be harmful to the kidneys. Certain other medications, toxins, pesticides and "street" drugs such as heroin and crack can also cause kidney damage.
How is Chronic Kidney Disease Detected?
Early detection and treatment of chronic kidney disease are the keys to keeping kidney disease from progressing to kidney failure. Some simple tests can be done to detect early kidney disease. They are:
- A test for protein in the urine. The albumin-to-creatinine ratio (ACR) estimates the amount of albumin in your urine. An excess amount of protein in your urine may mean your kidney's filtering units have been damaged by disease. One positive result could be due to fever or heavy exercise, so your doctor will want to confirm your test over several weeks.
- A test for blood creatinine. Your doctor should use your results, along with your age, race, gender and other factors, to calculate your glomerular filtration rate (GFR). Your GFR tells how much kidney function you have. To access the GFR calculator, click here.
It is especially important that people who have an increased risk for chronic kidney disease have these tests. You may have an increased risk for kidney disease if you:
- are older
- have diabetes
- have high blood pressure
- have a family member who has chronic kidney disease
- are African American, Hispanic American, Asian, Pacific Islander, or American Indian.
If you are in one of these groups or think you may have an increased risk for kidney disease, ask your doctor about getting tested.
Can Kidney Disease Be Successfully Treated?
Many kidney diseases can be treated successfully. Careful control of diseases like diabetes and high blood pressure can help prevent kidney disease or keep it from getting worse. Kidney stones and urinary tract infections can usually be treated successfully. Unfortunately, the exact causes of some kidney diseases are still unknown, and specific treatments are not yet available for them. Sometimes, chronic kidney disease may progress to kidney failure, requiring dialysis or kidney transplantation. Treating high blood pressure with special medications called angiotensin converting enzyme (ACE) inhibitors often helps to slow the progression of chronic kidney disease. A great deal of research is being done to find more effective treatment for all conditions that can cause chronic kidney disease.
How is Kidney Failure Treated?
Kidney failure may be treated with hemodialysis, peritoneal dialysis or kidney transplantation. Treatment with hemodialysis (the artificial kidney) may be performed at a dialysis unit or at home. Hemodialysis treatments are usually performed three times a week. Peritoneal dialysis is generally done daily at home. Continuous Cycling Peritoneal Dialysis requires the use of a machine while Continuous Ambulatory Peritoneal Dialysis does not. A kidney specialist can explain the different approaches and help individual patients make the best treatment choices for themselves and their families.
Kidney transplants have high success rates. The kidney may come from someone who died or from a living donor who may be a relative, friend or possibly a stranger, who donates a kidney to anyone in need of a transplant.
What Are the Warning Signs of Kidney Disease?
Kidney disease usually affects both kidneys. If the kidneys' ability to filter the blood is seriously damaged by disease, wastes and excess fluid may build up in the body. Although many forms of kidney disease do not produce symptoms until late in the course of the disease, there are six warning signs of kidney disease:
- High blood pressure.
- Blood and/or protein in the urine.
- Creatinine and blood urea nitrogen (BUN) blood test results outside the normal range. BUN and creatinine are waste products that build up in your blood when your kidney function is reduced.
- A glomerular filtration rate (GFR) less than 60. GFR is a measure of kidney function.
- More frequent urination, particularly at night; difficult or painful urination.
- Puffiness around eyes, swelling of hands and feet. |
These webpages are for educators who want to be able to load or download data on to a spreadsheet, and use it for their own analysis.
They may want to go beyond the data reports from the digital assessment tools available, or from their student management systems, or analyse data from assessments which produce only raw data.
The topics in the section have advice on manipulating data on a spreadsheet to prepare it for analysis. There are basic instructions on how to clean, sort and move data, and how to make and read simple graphs.
Information on data reports from some assessment tools is available here.
Information on data reports from student management systems is available here.
Working with data concepts
Standards-based assessment allows us to make judgments about the level of an individual's learning with respect to shared benchmarks of expected performance, supported by exemplars.
The reliability of an assessment tool is the extent to which it measures learning consistently. The validity of an assessment tool is the extent to which it measures what it was designed to measure.
An important part of a well-designed analysis is to be aware of the types of data that are available, so that the appropriate analytic techniques are employed, and inappropriate ones avoided.
The mean and the median are both measures of central tendency. Standard deviation (SD) is a widely used measurement of variability used in statistics.
In order to understand and analyse data from an assessment tool, you need to know the differences between the ways that different tools measure student achievement, and what that might mean for your analysis.
Norms are statistical representations of a population, for example PAT maths scores for year 6 males, or e-asTTle reading scores for year 9 Māori females.
A good way of presenting differences between groups or changes over time in test scores or other measures is by ‘effect sizes’, which allow us to compare things happening in different classes, schools or subjects regardless of how they are measured.
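A common way to compute an effect size (this sketch uses Cohen's d; other definitions exist) is:
d = (mean of group 1 - mean of group 2) / SD
where SD is the pooled standard deviation of the two groups, so the difference is expressed in standard-deviation units rather than in raw score points.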
Working with data topics
There are several ways by which quantitative data in the form of scores can be entered into a spreadsheet. Data can be downloaded from a digital assessment tool or student management system.
When working with data to analyse results and draw conclusions, it is essential that the data with which you are working is ‘clean’. This means that it is consistent, accurate and complete.
Graphs (also called charts) play an important role in data analysis. A graphic representation can make the relationship between sets of data much easier to understand.
Student achievement data is often reported for whole populations (for example: cohorts, year levels, whole class). This is called aggregate data.
Introduction to Loops in PL/SQL
Procedural Language/Structured Query Language or PL/SQL is Oracle Corporation’s procedural extension for the Oracle RDBMS. PL/SQL extended SQL by adding constructs used in procedural languages to enable more complex programming than SQL provides. Examples of these structures are IF…THEN…ELSE, basic loops, FOR loops, and WHILE loops.
Explain Different Types of Loops in PL/SQL
This article will explain the iterative control structures of PL/SQL, that is, its loops, which let you run the same code repeatedly. PL/SQL provides three different loop types:
- The simple or infinite loop
- The FOR loop
- The WHILE loop
Each loop type is designed for a specific purpose and has its own rules for use and guidelines for writing it well.
Examples of Different Loops
Consider the following three examples to understand the different loops and how each solves problems in its own way.
1. The Simple Loop
This loop is as simple as its name. It starts with the LOOP keyword and ends with the end statement “END LOOP”.
LOOP
   sequence_of_statements;
END LOOP;
Here, as per the above syntax, the keyword LOOP marks the start of the loop and END LOOP marks its end. The sequence-of-statements part can contain any statements to be executed.
Example of Simple loop
Let’s write a program to print the multiplication table of 18.
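The program listing itself is not reproduced here; the sketch below is a minimal reconstruction consistent with the explanation further down (the variable names v_counter and v_result are taken from that explanation):

DECLARE
   v_counter NUMBER := 0;
   v_result  NUMBER;
BEGIN
   LOOP
      v_counter := v_counter + 1;   -- increment the counter
      v_result  := 18 * v_counter;  -- one row of the multiplication table
      DBMS_OUTPUT.PUT_LINE('18 x ' || v_counter || ' = ' || v_result);
   END LOOP;  -- no EXIT statement, so the loop never ends on its own
END;
/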
In the loop above there is no EXIT statement, which means execution will go on infinitely until we close the program manually.
Refer to the program below, which adds an EXIT statement:
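Again, the original listing is not shown in this text; here is a sketch that matches the three statements described below, with an EXIT WHEN clause added:

DECLARE
   v_counter NUMBER := 0;
   v_result  NUMBER;
BEGIN
   LOOP
      v_counter := v_counter + 1;   -- update statement: increment the counter by 1
      v_result  := 18 * v_counter;  -- arithmetic expression: multiply and store
      DBMS_OUTPUT.PUT_LINE('18 x ' || v_counter || ' = ' || v_result);  -- formatted output
      EXIT WHEN v_counter >= 10;    -- leave the loop after the 10th iteration
   END LOOP;
END;
/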
Explanation of the above program
In the declaration section, we have declared two variables; variable v_counter will serve as a counter and v_result will hold the result of the multiplication.
In the execution section, we have our simple loop, which contains three statements.
- The first statement will work as our update statement; this will update our counter and increment it by 1.
- The second statement is an arithmetic expression, which will perform the multiplication of our table and will store the result in v_result variable.
- The third statement is an output statement, which will print the result of the multiplication in a formatted manner.
Use of Exit Statement
As per the EXIT statement, the loop exits when v_counter >= 10, which means the loop will execute exactly 10 times.
2. The FOR Loop
FOR loop allows you to execute the block of statements repeatedly for a fixed number of times.
FOR loop_counter IN [REVERSE] lower_limit .. upper_limit LOOP
   sequence_of_statements;
END LOOP;
- The first line of the syntax is the loop statement, where the keyword FOR marks the beginning of the loop, followed by the loop counter, which is an implicitly declared integer index variable.
- This means you do not need to define this variable in the declaration section; it also increments itself by 1 implicitly on each iteration of the loop, unlike the other loop types, where we have to define and update the loop counter ourselves.
- The keyword IN is mandatory in a FOR loop.
- The keyword REVERSE is not mandatory, but when used it always appears together with the keyword IN.
- If the keyword REVERSE is used, the loop iterates in reverse order.
- lower_limit and upper_limit are two integer values. They define the number of iterations of the loop.
- Two dots between these two variables serve as the range operator.
- Then we have the body of the loop, which can be a statement or group of statements.
- In the end, we have the phrase END LOOP that indicates the ending of the loop.
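The example program referred to next is missing from the source; a minimal sketch matching its description (printing v_counter from 11 to 20) might be:

BEGIN
   FOR v_counter IN 11 .. 20 LOOP
      DBMS_OUTPUT.PUT_LINE(v_counter);  -- prints 11 through 20, one value per line
   END LOOP;
END;
/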
Here as per the above program, we have our FOR loop which will print the value of the v_counter variable from 11 to 20.
Example #2: Now let’s print the same in reverse order using FOR loop.
Just add the keyword REVERSE after IN and before 11; this will produce the same output, but in reverse order.
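A matching sketch of the reversed loop:

BEGIN
   FOR v_counter IN REVERSE 11 .. 20 LOOP
      DBMS_OUTPUT.PUT_LINE(v_counter);  -- prints 20 down to 11
   END LOOP;
END;
/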
3. The WHILE Loop
A WHILE loop executes the statements of a program multiple times; it is best used when the number of iterations is not known in advance.
WHILE condition LOOP
   sequence_of_statements;
END LOOP;
- The WHILE loop's syntax is very easy to understand. As per the above syntax, WHILE, together with its condition, marks the beginning of the loop, and END LOOP marks its end.
- The sequence of statements forms the executable body of the loop, and the closing END LOOP indicates the end of the WHILE loop.
- In order to run statements inside the body of the While loop, the condition needs to be true.
Example: Print multiplication table of 17 using while loop.
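The listing is omitted in the source; the sketch below matches the explanation that follows (the exact loop condition is an assumption based on that explanation):

DECLARE
   v_counter NUMBER := 1;
   v_result  NUMBER;
BEGIN
   WHILE v_counter <= 10 LOOP
      v_result := 17 * v_counter;   -- arithmetic expression: one row of the table
      DBMS_OUTPUT.PUT_LINE('17 x ' || v_counter || ' = ' || v_result);  -- print the result
      v_counter := v_counter + 1;   -- update the counter on each iteration
   END LOOP;
END;
/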
- In this example, the first variable, v_counter, serves as the counter, and the second variable, v_result, holds the result of the multiplication.
- The first statement inside the WHILE loop is an arithmetic expression that performs the table multiplication; the result is stored in v_result.
- The second statement is the print statement, which prints the multiplication result. The third statement updates the counter on each iteration.
- This WHILE loop keeps running as long as the counter has not passed 10; once the counter value exceeds 10, the condition becomes false and the loop terminates.
Advantages of Loops in PL/SQL
- Code reusability is the biggest advantage of loops: we do not need to write the same code out repeatedly for each iteration; the loop re-uses it on every iteration.
- Loops also help reduce the code or program size. We write one simple block of code and place it inside a loop, and it produces all the required outputs without separate code for each one.
- Reduced complexity is another advantage of loops.
Conclusion – Loops in PL/SQL
SQL is the only interface to a relational database, and PL/SQL is a procedural extension to SQL. It is important to understand how SQL works and to design databases and business logic correctly to get the right result set. PL/SQL can be used inside the database, and it has many powerful features. There are many improvements to PL/SQL in Oracle Database 12.1. Use SQL whenever possible, but if your query gets too complicated or procedural features are needed, it is best to use PL/SQL instead.
This has been a guide to loops in PL/SQL, covering the advantages and the different types of loops, with examples.
A revolution in metals has arrived. NASA, the California Institute of Technology (Caltech) and the U.S. Department of Energy united to help develop a new building material. “Liquidmetal” is a type of alloy, a mix of three or more metals, that can be molded like plastic, cools quickly, and has more than twice the strength of titanium.
It has long been thought that plastic and steel were the best materials to use in building large products that might be used for aerospace and space exploration applications. These new “shapeless alloys” combine the strength of steel with the molding capability of plastic.
Dr. Bill Johnson of Caltech, Pasadena, Calif., has studied metals with liquid atomic structures for over 30 years. He eventually teamed up with Dr. Atakan Peker of Liquidmetal Technologies, Lake Forest, Calif. Peker further helped Johnson develop the idea of creating thick liquid metals that form glass without the need for rapid cooling.
Johnson began working in the field in the early 1980s with colleagues at NASA’s Jet Propulsion Laboratory. NASA and Liquidmetal Technologies cooperated on research using the microgravity conditions available flying on the space shuttle. Extensive experiments on liquid metals were conducted onboard the International Microgravity Laboratory flight in 1994 and again in 1997 on the Microgravity Science Laboratory mission. The work was sponsored by NASA, CalTech and the U.S. Department of Energy to create new materials for aerospace.
Johnson has continued this research on the ground using electrostatic levitation and laser heating. In this process small spheres are held up in a vacuum and melted by a laser beam. NASA sponsors two high-vacuum electrostatic levitator facilities for this research at NASA Marshall Space Flight Center, Huntsville, Ala., and at Caltech.
Johnson and Peker were able to create a new form of mixed metals that went from a liquid to a solid at room temperature. The liquid included a mix of elements: zirconium, titanium, nickel, copper, and beryllium.
Instead of having to be cooled quickly to become solid, the liquid metal cooled and hardened by itself at room temperature, avoiding crystallization and becoming a glass. They named this liquid metal “Vitreloy.” This metal showed massive strength: a one-inch-wide bar could lift 300,000 pounds, compared to a titanium bar of the same size that could only lift 175,000 pounds. Although the material had super strength, it lacked the attributes that make metals tough. Vitreloy was more robust than window-pane glass, but it still cracked.
The successful method used to toughen Vitreloy and create Liquidmetal is the same method used to process plastics. In 2000, Johnson and graduate student Paul Kim improved Vitreloy’s toughness while giving it the flexibility to allow it to be made into many different shapes. Now, the new line of Liquidmetal alloys is on the rise.
It has been proven that Liquidmetal can handle lots of stress without losing its shape and is three times more elastic than other alloys. To test these characteristics, an experiment was set up. In the experiment, three marble-sized balls made of steel were dropped from the same height into their own glass tubes. Each tube had a different type of metal plate at the bottom: steel, titanium, Liquidmetal. Once each ball was dropped they were left to bounce. The balls hitting the steel and titanium plates bounced for 20 to 25 seconds. The ball hitting the Liquidmetal plate bounced for 1 minute and 21 seconds. During the experiment, this was the only ball that bounced outside its tube.
Liquidmetal Technologies Inc. has an exclusive license for this product and is finding more uses for it. Plates for golf equipment were one of the early products in 1996. Now it is being considered by the U.S. Department of Defense as an armor and anti-armor material.
HEAD Racquet Sports showed its interest in the material in 2003 and used it for a new tennis racquet line that ultimately became the world’s top-selling new-technology racquet that same year. Now the Liquidmetal alloy is finding its way into any number of consumer goods, including cell phone cases and parts, a Rawlings baseball bat, HEAD skis and more. The technology is also being considered for several upcoming aerospace applications.
Solar sailing technology has been a dream for many decades. The simple elegance of sailing on the waves of sunlight has a dreamy side that has captured the imaginations of engineers as well as writers. However, the practical aspects of the amount of energy received compared to that needed to transport useful payloads brought those dreams back to reality. Now, a team led by Amber Dubill of the Johns Hopkins University Applied Physics Laboratory and supported by the NASA Innovative Advanced Concepts (NIAC) program is developing a new solar sail architecture that may already have found its killer application – heliophysics.
The technique they use is known as diffractive lightsailing. It has significant advantages over current solar sail technology, which loses much of its effectiveness when the sail is not facing the sun directly. Diffraction causes light to bend as it passes through an aperture or grating. Exploiting this property in the sail material would allow the craft to move away from the sun while the light still pushes it in a chosen direction.
To create such diffractive pressure, the team created a material with very small gratings embedded in it to diffract light onto a surface that can still take advantage of the force generated when that light is absorbed. This would allow any spacecraft using such a sail as a propulsion system to move slightly away from the Sun while taking advantage of the thrust from the photons of light.
To demonstrate this technology, the NIAC is supporting it with a Phase III grant after the successful completion of Phases I and II over the past few years. The third phase comes with $2 million in two-year funding to further develop the materials used in the solar sail, culminating in ground-based testing that could herald a move to use in deep space.
Deep space is the most likely place to apply such diffractive sails. In particular, the researchers believe they will be effective for heliophysics. Orbits over the sun’s poles are difficult to reach with conventional propulsion, and conventional solar sails will not work well there either, because the light falling on them at those orientations may push them away from the sun or provide little thrust at all.
Using a diffractive solar sail, a spacecraft can orient itself in the right direction while still using the force of the light to maneuver effectively. This would allow a vehicle equipped with one to observe the sun from angles never seen before. But there is still a long way to go before any vehicle carries such a sail. The funding path after NIAC Phase III is ambiguous at best, and more development work will remain even after another two years of effort. With luck, though, a new type of solar sail may fly with the next generation of heliophysics missions, and it may eventually be used in many other programs as well.
NASA – A NASA-powered solar sail can take science to new heights
UT – Forget about interstellar flights. Small light sails can be used to explore the solar system today
UT – LightSail 2 sends new images of Earth
UT – What is a solar sail?
Artist’s depiction of diffractive solar sails. Credit: Mackenzie Martin
Combine a variety of built-in functions with the pipe operator to do powerful data analysis
The pipe operator %>% is used to pass the output of a function to another function, thereby enabling functions to be chained together. The end result is a block of very readable code with separate functions chained together.
Let’s look at an example. Consider the following four functions (simple definitions are sketched after the list):
- Square of a number — square
- Double of a number — double
- Inverse of a number — inverse
- Rounding off a number to 1 decimal place — round
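Note that square, double, and inverse are not base R functions (round is). A minimal sketch of how the first three might be defined, as an assumption on our part rather than the article’s original code:

square <- function(x) x^2
double <- function(x) 2 * x
inverse <- function(x) 1 / x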
Let’s say we want to apply these functions to an array of numbers. One way to do that is to nest the function calls:
# An array of numbers [0.1, 0.2, 0.3, ..., 1]
x <- seq(0.1, 1, by = 0.1)
# Nesting the calls together
round(inverse(double(square(x))), digits = 1)
For longer chains this becomes harder to read, and it is difficult to keep track of the parentheses. Consider the alternative using pipes:
# An array of numbers [0.1, 0.2, 0.3, ..., 1]
x <- seq(0.1, 1, by = 0.1)
# Pipeline of functions
x %>% square %>% double %>% inverse %>% round(digits = 1)
The output is the same for both
Note that the piping reads from left to right. The array x is first squared, then doubled, then inverted, and finally rounded off.
To use the pipe operator, just load the tidyverse library. Tidyverse is a collection of the most widely used packages in R. Once you successfully install and load it, you can use the pipe operator in your code.
To include it in your code, just write:
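library(tidyverse)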
Once executed, this statement attaches various packages to your environment. Note that it includes several string-manipulation packages as well as the plotting library ggplot2.
Now let’s use the pipe operator in a real-world situation. In the following example we will extract COVID-19 data from the Johns Hopkins repository described below.
Covid Dataset Analysis
The aim of the analysis is to find the countries where the number of cases reported today was less than 10. With equal ease, we could instead look at the countries where the cases exceeded some threshold.
We will use the COVID-19 data set compiled by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE) from various sources, including the World Health Organization (WHO), BNO News, etc. JHU CSSE maintains the 2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository on GitHub.
Fetching the data
We fetch the data using the read_csv function. For the purposes of our analysis we will analyse only the confirmed cases.
confirmed_raw <- read_csv(link_to_data)
Running this command gives us a dataframe of 264 rows and 86 columns. Note that the variable link_to_data is a very long hyperlink and can be copied from the gist attached at the end of the article.
The head of the data summarises the dataset for us. It shows us the following information
- The first 6 rows of data
- Column Names
- Column Count
- Column Types
We can see that the first four columns describe the details of the country for which the data is collected. Every column after that holds a single day, with the latest data at the end. Notice that after 29th January the head command clips the data set, because it is too large to view, and displays the remaining column names in a commented-out form.
Piping the data
Now let’s start piping the data.
We will pipe the data through following methods
- Selecting a subset of dataframe
- Renaming of columns
- Filtering out the rows that report 10 or more cases today
- Grouping the rows by Country and displaying the sum of all cases across provinces. (A single country may have multiple rows, with each row for a different state/province)
The dataset looks like this
Selecting a subset
The select function works like this
# To keep only columns M through N
select(c(M:N))
# To remove columns M through N
select(-c(M:N))
# e.g. keep columns 2 to 10
select(c(2:10))
In the raw dataframe we see that the data starts on 22nd January. We do not want all the data. We just want the first 4 columns and the confirmed cases for today and yesterday. So we select columns 1 to 4 and columns 85 to 86. We assign this data to conf_subset variable.
conf_subset <- confirmed_raw %>% select(c(1:4),c(85,86))
Notice how we piped the bigger dataframe into the select function. The conf_subset looks like this.
Notice that we are now left with only two columns of daily data: 4/12/20 and 4/11/20.
Renaming the data
We do not, however, want the names of our columns to be dates. So we will rename them to today and yesterday. Also, we do not want names like Province/State or Country/Region. So we will rename them to Province and Country respectively.
conf_subset <- confirmed_raw %>%
  select(c(1:4), c(85, 86)) %>%
  rename(Country = "Country/Region",
         Province = "Province/State",
         today = colnames(confirmed_raw)[86],
         yesterday = colnames(confirmed_raw)[85])
Notice how we piped the data to rename method. This is how the dataframe looks now.
Now we have the column names we want. The problem is that we still have 264 rows of data. Let’s see how that looks on a plot.
That’s too many countries. The highest bars correspond to countries like USA and Italy.
Filtering by count
Now let’s remove all those countries and provinces that have a count of 10 or more today.
conf_subset <- confirmed_raw %>%
  select(c(1:4), c(85, 86)) %>%
  rename(Country = "Country/Region",
         Province = "Province/State",
         today = colnames(confirmed_raw)[86],
         yesterday = colnames(confirmed_raw)[85]) %>%
  filter(today < 10)
We are left with just 28 rows
Notice United Kingdom and Canada. Both occupy multiple rows, each row holding the count for one of their provinces. Let’s see how that looks on a plot.
Look at the fourth bar that corresponds to Canada. There is 1 province of Canada that reports a negative number so the bar goes below 0. A negative number might mean a falsely reported confirmed case. Similarly, pay attention to the third-last bar corresponding to the United Kingdom.
Grouping by countries
We don’t want to see the province data separately. We want to see all of that added together.
conf_subset <- confirmed_raw %>%
  select(c(1:4), c(85, 86)) %>%
  rename(Country = "Country/Region",
         Province = "Province/State",
         today = colnames(confirmed_raw)[86],
         yesterday = colnames(confirmed_raw)[85]) %>%
  filter(today < 10) %>%
  group_by(Country) %>%
  summarise(Yesterday = sum(yesterday), Today = sum(today))
Note that the group_by method goes along with the summarise method: once the rows are grouped, we can apply functions to each group. Here, for example, we apply sum() to each group.
So that gives us the following dataframe
Note that Canada’s data is summed and drops by 1 from yesterday to today. When plotted the data looks like this.
The whole pipeline of methods looks like this
confirmed_raw %>% select %>% rename %>% filter %>% group_by %>% summarise
You can add many more methods to this chain, but after 10 or more methods the piping becomes too unwieldy.
When to avoid pipes
Hadley Wickham, one of the pipe’s most prominent advocates, describes the situations in which piping may not be useful.
When not to use the pipe
The pipe is a powerful tool, but it’s not the only tool at your disposal, and it doesn’t solve every problem! Pipes are most useful for rewriting a fairly short linear sequence of operations.
- Longer than 10 steps — In that case, create intermediate objects with meaningful names. That will make debugging easier, because you can more easily check the intermediate results
- Multiple inputs or outputs — If there isn’t one primary object being transformed, but two or more objects being combined together, don’t use the pipe.
- Complex Dependency Structure — Pipes are fundamentally linear and expressing complex relationships with them will typically yield confusing code.
Having said that, do try to use pipes in your next data-analysis effort and reap the benefits of this tool. The gist with the complete code, along with the plotting steps, is below.
Rahul has created a beautiful open-source dashboard, built entirely in R, that shows informative visualizations of how COVID-19 is progressing.
Dashboard - India against COVID19
To describe the Doppler effect and its application to the measurement of flow velocity.
To understand the differences between continuous-wave and pulsed-wave Doppler and the clinical rationale for each.
To identify the components of the Doppler spectral waveform.
To recognize the aliasing artifact and understand the methods of reducing or eliminating it.
To understand the principles of the two major types of color Doppler imaging.
Color flow imaging
Color velocity scale
Color wall filter
Combined Doppler mode
Doppler shift frequency
Doppler spectral waveform
Maximum velocity waveform
Packet size (ensemble length)
Power Doppler imaging
The ability to identify flow patterns and measure flow velocities is one of the most important functions of diagnostic ultrasound. The sonographer must understand the factors that contribute to the Doppler information displayed on the monitor. Most of what we discuss in this chapter applies to both spectral Doppler and color flow imaging, since each mode is governed by the Doppler equation and is ultimately subject to the same factors.
THE DOPPLER EFFECT
The Doppler effect is the observed change in frequency of a transmitted wave due to the relative motion between the source of the sound and the receiver or observer. Doppler ultrasound is a valuable tool because this methodology detects the presence, direction, velocity, and time variation of blood flow within blood vessels and in the heart. Several types of Doppler devices are available. Although each relies on the Doppler effect to detect motion, the manner in which flow information is acquired, processed, and displayed distinguishes one type of instrument from another. Some scanners offer several Doppler modes, which are selectable by the user. The most basic (inexpensive) systems offer only a single option for the Doppler mode (velocity analysis or two-dimensional Doppler imaging, i.e., color flow).
The apparent frequency change produced by the Doppler effect is based on the relative motion between the source of sound and the observer, regardless of which is moving and which is stationary. When a police car with siren blaring passes a pedestrian, the audible sound is heard as a change in frequency or pitch as the vehicle approaches (the frequency appears to be higher) while the frequency of the retreating vehicle after it passes is observed to be lower. In the above illustration, the sound source is the moving vehicle, while the receiver or observer is the stationary pedestrian.
Imagine a situation in which an observer is standing in a boat in the middle of a lake. If the wind is blowing at a constant rate from the north and the waves all have the same distance between peaks (same wavelength), the stationary boat will encounter the same number of wave crests each second (constant frequency) as are produced by the wind. If the boat begins traveling in a northerly direction, into the wind, the wave crests are encountered more frequently. The observer standing in the boat sees an increase in the wave frequency, although, in actuality, the frequency of the cresting waves has not changed. If the boat turns around and begins heading south, this time with the wind (away from the source of the waves), fewer crests are seen, and to the observer the frequency appears to decrease. As the boat moves faster in either direction, the difference between the actual and observed frequencies becomes greater. The only circumstance in which these “transmitted” and “observed” frequencies are the same is when the boat is stationary.
A stationary observer views the same number of pressure waves as are emitted by the stationary source (Figure 5-1). However, the relative motion between the sound source and the receiver distorts the pattern of symmetric wavefronts and the observed frequency increases or decreases, depending upon the direction of movement. The change or difference in frequency between the transmitted frequency and the received frequency, caused by the motion, is the Doppler shift frequency (often abbreviated as “Doppler shift” or “Doppler frequency”). In the example of the police siren above, the frequency appears higher to the stationary observer as the car approaches. In this case, the relative motion of the source and the receiver is toward one another. As the police car passes and travels away from the observer, the frequency appears to decrease, since the relative movement between the source and the observer is away from one another.
FIGURE 5-1. The Doppler effect. (A) Stationary sound source and receiver, the observed frequency is the same as the frequency emitted by the sound source. (B) Sound source moving toward the receiver, the observed frequency is higher than the actual frequency emitted by the sound source. (C) Sound source moving away from the receiver, the observed frequency is lower than the actual frequency emitted by the sound source.
When considering a sound wave produced by a piezoelectric transducer, the sound source remains stationary while the moving “receiver” could be blood cells or another moving structure, such as a heart valve. The echo from the moving reflector is then observed with a Doppler shift frequency by the stationary transducer, which is now the receiver.
The magnitude of the Doppler shift frequency depends on how rapidly the sound source, the receiver, or both are moving in relation to one another. An increase in the relative velocity between the source and the receiver causes a greater deviation from the transmitted frequency. Indeed, this is the rationale behind why we perform the Doppler examination. The Doppler shift frequency (fD) produced by a moving reflector is calculated from the equation:
fD = (2 f v cos θ) / c

where c is the acoustic velocity of tissue, f is the transmitted frequency, v is the velocity of the interface, and θ is the angle between the path of reflector movement and the direction of beam propagation (called the Doppler angle or angle to flow) as illustrated in Figure 5-2. Note that the letter “c” in the Doppler equation represents the velocity of sound in tissue instead of the usual “v” for velocity. In this mathematical symbolism, the character “v” is reserved for the velocity of the flowing blood. The number 2 in the equation represents two separate (and equal) Doppler shifts that occur in Doppler ultrasound. The first Doppler shift occurs between the stationary sound source, the transducer, and the “observer,” the moving blood cells. The second Doppler shift takes place as the moving blood cells (now the sound source) reflect the sound wave back to the stationary transducer, which now becomes the receiver.
FIGURE 5-2. Doppler ultrasound detection of reflector velocity. The Doppler angle θ is defined by the reflector path with respect to the transmitted beam.
As a reflector moves directly toward a 5-MHz transducer at a velocity of 50 cm/s, the angle to flow is 0 degrees and the observed frequency is 5,003,247 Hz, corresponding to a Doppler shift frequency of 3247 Hz above the original transmitted frequency (Figure 5-3). If the flow is away from that transducer at 50 cm/s, the observed frequency is 4,996,753 Hz or 3247 Hz below the original transmitted frequency. The Doppler angle gives the component of the velocity along the direction of propagation for the ultrasound beam. If the Doppler angle is increased from 0 to 30 degrees, the Doppler shift frequency is 2.8 kHz instead of the 3.2 kHz obtained for parallel incidence. For a given reflector velocity, the Doppler shift frequency decreases as the Doppler angle is increased (Figure 5-4).
FIGURE 5-3. Received echo frequency is 3247 Hz above the transmitted frequency when the flow velocity is 50 cm/s toward the transducer. The transmitted frequency is 5 MHz and the Doppler angle θ is 0 degrees.
FIGURE 5-4. Doppler shift frequency from reflectors moving at a velocity of 50 cm/s versus Doppler angle. The transmit frequency is 5 MHz. The Doppler shift frequency is 3.2 kHz at 0 degrees, 2.8 kHz at 30 degrees, and 1.6 kHz at 60 degrees. No Doppler shift frequency is observed at 90 degrees.
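These worked values are easy to verify numerically. A minimal R sketch (the function name is ours; c is assumed to be 1540 m/s for soft tissue):

# Doppler shift frequency (Hz) for transmit frequency f0 (Hz),
# reflector speed v (m/s), and Doppler angle theta (degrees)
doppler_shift <- function(f0, v, theta_deg, c = 1540) {
  2 * f0 * v * cos(theta_deg * pi / 180) / c
}
doppler_shift(5e6, 0.50, 0)   # ~3247 Hz
doppler_shift(5e6, 0.50, 30)  # ~2812 Hz (about 2.8 kHz)
doppler_shift(5e6, 0.50, 60)  # ~1623 Hz (about 1.6 kHz)
doppler_shift(5e6, 0.50, 90)  # effectively 0 Hz (cos 90 degrees = 0)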
No Doppler shift frequency occurs at a 90-degree angle of incidence (cosine theta in the Doppler equation is equal to zero for an incident angle of 90 degrees). In practice, the signal never disappears completely. Because the beam has a finite width, some portion of the beam impinges at an angle that is not perpendicular to the motion. Beam divergence tends to amplify this effect, especially in the region beyond the beam’s focal point.
The acoustic velocity is assumed to remain constant at a value of 1540 m/s for soft tissue. The observed change in frequency occurs because the sound beam encounters a moving structure between the source and the detector. The Doppler equation predicts that an increase in reflector velocity results in a greater Doppler shift frequency. If the Doppler shift frequency and angle to flow are known, the velocity of the moving reflectors can be calculated. In practice, the transmitted and received frequencies are first measured, and then processed to find the resultant Doppler shift frequency. The instrument accomplishes these steps autonomously without operator intervention. However, the Doppler angle to flow must be determined by the sonographer with manual input to the scanner for the correct display of flow velocity.
Uncertainty in the measurement of the Doppler angle, particularly at large angles, introduces error in the velocity computation. The exact angle to flow is much more of a consideration when evaluating blood vessels than in the heart due to differences in acoustic access. In vascular applications, the process of angle correction (angle measurement) must be performed by the sonographer in order to achieve an accurate estimation of flow velocity. At a Doppler angle to flow of 60 degrees, the resultant Doppler shift is only half that with a Doppler angle of 0 degrees. The angle to flow must be measured as accurately as possible, because a 5-degree deviation for a 60-degree angle to flow (frequently used when examining blood vessels) introduces an 18% error in the measurement of flow velocity.
Conversely, when the beam is near parallel to flow (as is frequently the case in the heart), the Doppler angle to flow is assumed to be 0 degrees and no angle correction is performed. At a Doppler angle to flow near 0 degrees, a 5-degree inaccuracy in the angle results in only a 1% error in the calculation of flow velocity. A 10-degree error in the estimation of Doppler angle to flow results in a velocity error of less than 10%. In practice, the angle of insonation is assumed to be 0 degrees in cardiac applications and no “angle correction” is performed.
Doppler signals from superficial blood vessels (e.g., the carotids) should generally not be acquired at angles greater than 60 degrees, due to the increased potential of error as the Doppler angle approaches 90 degrees. Regardless of the angle, care should be taken in vascular applications to measure the angle to flow as accurately as possible.
Scattering from Blood
For Doppler measurements of blood flow, red blood cells (RBCs) act as Rayleigh scatterers. The RBC has a diameter of about 7 μm (much smaller than the wavelength of the sound wave, usually 0.2–0.5 mm) and thus meets the condition for Rayleigh scattering. Rayleigh scattering exhibits very strong frequency dependence (proportional to the fourth power of the frequency). Therefore, the intensity of the scattered ultrasound energy increases dramatically as the transmitted frequency increases.
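As a rough numerical illustration of that fourth-power dependence (the 2-MHz reference frequency is an arbitrary choice of ours):

f_mhz <- c(2, 5, 10)            # transmit frequencies in MHz
rel_intensity <- (f_mhz / 2)^4  # scattered intensity relative to 2 MHz
rel_intensity                   # 1, ~39, 625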
The intensity of the scattered sound also depends on the number of RBCs and thus the quantity of blood in the sample volume. Because the scattering from blood is small compared with echoes produced by soft tissue interfaces, blood-filled vessels appear to be echo-free on the B-mode image. To enhance scattering and, therefore, increase the sensitivity to weak echoes generated from blood cells, a high-frequency transducer is often advantageous. However, at higher frequencies, the rate of attenuation of the sound beam by the intervening tissues becomes greater. Therefore, as with B-mode imaging, two opposing frequency-dependent effects (in the case of Doppler, penetration and scattered echo intensity) must be balanced by matching the transducer transmit frequency with the depth of the region of interest.
Doppler transducers usually operate in the frequency range of 2–10 MHz, because other constraints are placed on the system: a single transducer with dual imaging and Doppler functions, a desired frequency range for Doppler shift frequency, and the problem of aliasing (discussed later in this chapter). High transmit frequencies, typically 5–7 MHz, are employed for peripheral vascular Doppler examinations, whereas examinations of deep-seated vessels are performed at frequencies near 2 MHz. Most often, the transmitted Doppler frequency is somewhat lower than the nominal imaging frequency of the transducer. For example, a transducer labeled as “5 MHz” refers specifically to the B-mode imaging frequency. The transmitted frequency of sound used for Doppler evaluation in that same transducer will likely be in the range of 2–3 MHz. Some ultrasound instruments display the actual transmitted frequency used for Doppler, while others display only descriptors such as “resolution/penetration” while in the Doppler mode.
Doppler units are designed to extract the Doppler shift frequencies from received signals. The Doppler shift frequency is in the audible range (typically between 200 and 15,000 Hz). Therefore, loudspeakers are used as output devices in addition to any other type of available display. Nearly all commercially available systems provide an audio display of the Doppler signal, as the human ear is extremely sensitive to Doppler signals. For visual display, the preferred format is to convert the measured Doppler shift frequency to velocity, which is independent of instrument parameters. Doppler displays utilizing frequency expressed in kilohertz, without velocity information, are not readily comparable when multiple examinations are performed by different sonographers on different instruments.
A continuous-wave (CW) Doppler transducer contains two piezoelectric elements: one to transmit the sound waves of constant frequency continuously and one to receive the echoes continuously (Figure 5-5). A single-element transducer cannot send and receive at the same time. Since the transmitted sound wave is not pulsed, broad bandwidth transducers are not practical or even appropriate (wide frequency range yields multiple Doppler shifts for a reflector moving at constant velocity).
FIGURE 5-5. Continuous-wave Doppler transducer. Pencil-type probe has two piezoelectric crystals: one transmits continuously, the other receives continuously.
The sampling volume is restricted by the transmitted ultrasonic field (dependent on the frequency and focal properties of the sound beam) and the geometric arrangement of the elements. For the detection of a moving reflector located along the path of the transmitted beam, the resulting echo must strike the receiving crystal. The sensitive volume, or zone of sensitivity, is defined by the intersection of the transmitted ultrasound field and the reception zone. In essence then, each two-element transducer is focused to a particular depth (Figure 5-6). The two elements are tilted slightly to allow overlap between their respective fields of view (transmission and reception). A multiple-element array transducer creates a similar zone of sensitivity in CW Doppler mode by dedicating one group of elements as the transmitter and another group as the receiver (Figure 5-7).
FIGURE 5-6. Zone of sensitivity for CW Doppler transducer.
FIGURE 5-7. Multiple-element array transducer operating in CW Doppler mode. One group of elements (black) is designated for transmission and another group (gray) is assigned for reception. A zone of sensitivity is created where the wave patterns overlap.
Depending on the clinical application, the sonographer selects a CW transducer with the appropriate operating frequency and sensitive region. In a multiple-element array transducer, the operating frequency and depth of the sensitivity zone in CW Doppler mode may be adjustable, depending upon the instrument.
The transmitted sound wave interacts with various reflectors, some of which are stationary and others moving. A fraction of the incident sound intensity is reflected at each interface. If the reflector is stationary, the frequency of the reflected sound wave is the same as the transmitted frequency, and consequently no change in frequency is observed. A moving interface causes the frequency of the echo to shift up or down depending on whether the movement is toward or away from the sound source.
Measurement of the Doppler shift frequency is based on the principle of wave interference. The Doppler effect causes the reflected wave received from a moving interface to vary slightly in frequency from the original transmitted wave. When waves with different frequencies are algebraically added together, they yield a slowly oscillating broad pattern of peaks and valleys, called the beat frequency (Figure 5-8). The beat frequency equals the difference in frequency between the two waves (transmitted and received) and thus corresponds to the Doppler shift frequency.
FIGURE 5-8. Doppler signal processing. (A) Continuous reference transmitted signal of constant frequency (25 cycles). (B) Continuous echo-induced signal of constant frequency (20 cycles). (C) Addition of the transmitted and received signals in A and B forms a complex waveform. The beat frequency of five cycles composes the outer envelope (dotted line) of this complex waveform.
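The beat pattern of Figure 5-8 can be reproduced with a minimal R sketch, using the same 25-cycle and 20-cycle waves (the variable names are ours):

t <- seq(0, 1, length.out = 1000)  # one unit of observation time
f1 <- 25                           # cycles of the reference wave (Figure 5-8A)
f2 <- 20                           # cycles of the echo-induced wave (Figure 5-8B)
combined <- sin(2 * pi * f1 * t) + sin(2 * pi * f2 * t)
# The envelope of 'combined' oscillates f1 - f2 = 5 times over the interval:
# the beat frequency, which corresponds to the Doppler shift frequency.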
Figure 5-9 illustrates the steps required to generate the Doppler signal. The oscillator regulates the transmitter to emit a continuous sound wave of a single frequency. Alternating pressure on the receiving element by the returning echo is converted to an RF (radiofrequency) signal. The amplifier increases the echo-induced signal level. The reference waveform from the oscillator, which mimics the transmitted wave, is then combined with the received signal at the demodulator, generating a complex resultant wave by means of wave interference. This resultant wave is then processed to remove the rapidly oscillating components; however, the slowly varying envelope corresponding to the beat frequency (dotted line in Figure 5-8C) is retained. Isolation of the beat frequency yields the Doppler signal, which has a frequency equal to the Doppler shift frequency.
FIGURE 5-9. Schematic showing components of a continuous-wave Doppler unit.
The signal processing illustrated by Figure 5-8 yielded a single beat frequency, which denoted reflectors moving at a single, constant velocity. In a Doppler ultrasound examination of blood flow, RBCs within a vessel have a range of velocities that vary throughout the heart cycle and, therefore, a range of Doppler shift frequencies will be present. The velocity of each moving reflector corresponds to a characteristic beat frequency upon echo detection and processing. Many beat frequencies representing all detected motion within the sampling volume comprise the Doppler signal. A complex Doppler signal is then formed by the summation of all the Doppler shift frequencies present after demodulation.
The complex Doppler signal is amplified, filtered to remove unwanted low-frequency components caused by slow-moving structures such as vessel walls, and then routed to a loudspeaker for audible “display.” The pitch of the audio output corresponds to the frequency shift between the transmitted and received sound waves and indicates the flow velocity within the vessel. As flow velocity becomes greater, a higher pitch is heard. A typical audio Doppler display for an artery exhibits a rhythmic rise and fall in the audible frequency due to the acceleration and deceleration of blood with systole and diastole.
Large, slow-moving specular reflectors in the body (e.g., vessel walls or heart valves) generate strong echoes with relatively low Doppler shift frequencies. These low frequencies produce a distracting thumping sound often referred to as “wall thump.” Filtering removes these low frequencies, which are normally not of major interest and could mask other signals. The operator control, wall filter, rejects all frequencies below the threshold value, known as the cutoff frequency (Figure 5-10).
FIGURE 5-10. Wall filter control.
The cutoff frequency is usually set by default to remove Doppler shift frequencies below 100 Hz. Depending on the manufacturer and model, the cutoff frequency can be adjusted to values as low as 40 Hz and as high as 1000 Hz (1 kHz). Most units automatically set the threshold value based on the study type selected in the preset menu. Because the wall filter control removes all frequencies below the cutoff value, care must be taken so that slow-moving flow is not excluded from the display. Thus, the wall filter should be set at the lowest possible value to remove wall thump while not eliminating any important blood-flow components of the Doppler signal. This is particularly true for slow venous flow as well as for the slight flow reversal that occurs in a normal triphasic arterial waveform (discussed in the following chapter).
CW Doppler has high sensitivity to detect slow flow with low Doppler shift frequencies and, further, can discriminate small differences in flow velocity (Figure 5-11). The long sampling time of CW Doppler enables this modality to identify small changes in frequency corresponding to slow flow. At the other extreme, high-velocity flow is accurately measured with no limitation in velocity range. However, extensive flow volumes, such as those encountered within the left ventricle, cannot be accurately assessed with CW Doppler because precise depth information is not possible. The observed Doppler signal can be extremely complex, because the sum of Doppler shifts generated by all the moving interfaces within the sensitive volume is portrayed. If the sampling volume includes multiple vessels, the superposition of resulting Doppler shifts becomes especially problematic. Therefore, CW Doppler is limited to those clinical applications in which sensitivity volume can be associated with a single vessel, such as the brachial or femoral artery. CW Doppler is commonly employed to evaluate flow patterns in heart valves. In this case, even though the large sampling area of CW Doppler records flow from other portions of the atria and ventricles, the easily recognizable flow pattern of the aortic and mitral valves is readily identified. Coupled with the fact that CW Doppler has essentially no practical limit to the velocity that can be measured, this modality is ideal to assess stenosis in valves, such as the aortic valve. (Aortic stenosis often produces velocities in the range of 500–600 cm/s, which would be impossible to determine accurately with a pulsed-wave (PW) Doppler system.)
FIGURE 5-11. Sensitivity of continuous-wave Doppler to slow flow. The echo-induced signal from a slow-moving reflector (dotted line) requires several cycles to be differentiated from the reference transmitted signal (solid line).
PW Doppler provides quantitative depth information of the moving reflectors. Depth of echo formation is obtained via the echo-ranging principle in similar fashion to B-mode imaging. The transducer is electrically stimulated to produce a short burst of ultrasound and then is silent to listen for echoes before another pulsed wave is generated. Because of the requirement to assign depth, there is a physical limit to the number of Doppler pulses that can be transmitted in a given amount of time. Also, Doppler shift frequency determination entails longer pulse duration than in B-mode imaging. The necessity for increased pulse duration lies in the desire to detect received frequencies associated with slow flow that are almost the same as the transmitted frequency. Imagine that the pulse duration was confined to three cycles as in a typical B-mode acquisition (Figure 5-12). Certainly, the ability to distinguish small changes compared with the transmitted frequency becomes more difficult as pulse duration is shortened.
FIGURE 5-12. Pulsed-wave transmission of few cycles is unable to detect low-velocity reflector. Reference transmitted signal (solid line) and echo-induced signal (dotted line) are nearly identical.
The received signals are electronically gated for processing so only the echoes that are detected in a narrow time interval after transmission, corresponding to a specific depth, contribute to the Doppler signal. The delay time before the gate is turned on determines the depth of the sample volume; the amount of time the gate is activated establishes the axial length of the sample volume (Figure 5-13). Gate parameters are selected by the operator; thus, the axial size of the sensitive volume and the depth of the sample can be adjusted. The axial sample length can be as small as 1 mm. The remaining dimensions of the sampling volume are dictated by the beam width in the in-plane direction and in the elevation direction. Figure 5-14 illustrates the designation of the sampling region along the Doppler scan line in a B-mode image. Transducer frequency and focusing characteristics influence the dimensions of the ultrasonic field.
FIGURE 5-13. In pulsed-wave Doppler, the timing gate determines the depth and axial length of the sampling volume.
FIGURE 5-14. Operator-defined sampling area for pulsed-wave Doppler. The dotted line indicates the direction of sampling, and parallel horizontal lines mark the axial extent of the sensitive region.
Multiple echoes from a moving reflector separated in time must be accrued to detect the motion. In order to achieve this, transmitted pulses are repeatedly directed along the same scan line to interrogate the sampling volume. Suppose a photographer took a single stop-action photograph (with an extremely short shutter time) of a car traveling west at 60 miles per hour. If you were shown that photograph, you would be unable to tell if the car was moving or not. And certainly, the direction of travel and speed would be indiscernible. However, if a series of stop-action photographs were acquired over a specific time period and then shown rapidly one after the other, the motion of the car would be clearly depicted, and the speed could be computed if the rate of sampling were known.
In PW Doppler, the basic CW design is modified to accommodate range gating and to collect successive processed echoes for analysis. Accurate time registration is critical for proper depth assignment of the Doppler signals. Gating is based on elapsed time following each transmitted pulse, and the time between consecutive echoes from a reflector is set by the pulse repetition period (PRP). The PRP is the time interval from the beginning of one transmit pulse to the beginning of the next transmit pulse. A single gate limits the interrogation to one depth along the scan line. The direction of sampling is indicated on the display by the Doppler cursor. Echoes formed along the Doppler scan line, but outside the sampling volume, are rejected. Only those echoes generated from within the sampled volume contribute to the Doppler display.
For reflectors moving at uniform and constant velocity within the sampling volume, a series of echoes from successive transmitted pulses are acquired over time. The depth-specific echo from each transmitted pulse, when processed, provides a single instantaneous value of the Doppler signal (beat frequency). The measured values obtained from multiple transmitted pulses are combined to form the time-varying contour of the Doppler shift frequency (Figure 5-15). In essence, the transmitted pulse rate (Doppler pulse repetition frequency or PRF) indicates how often the Doppler signal is sampled. Typically, a sequence of 64–128 pulses is transmitted along the line of sight to interrogate flow within the sample volume. The total observation time is usually 10 ms or less.
FIGURE 5-15. Pulsed-wave Doppler signal processing. A series of transmitted pulses are directed along the Doppler line of sight. (A and B) For each transmitted pulse, the echo-induced signal from moving reflectors within the sampling volume is combined with the reference signal to yield the net signal. The net signal from successive transmit pulses varies due to reflector movement. Note the change in position of RBCs between transmit pulses in (A) and (B). (C) The net signals from multiple transmitted pulses when placed on a time axis compose the beat frequency. The first two points of sampling from A and B are shown as solid lines. Subsequent measurements are indicated by the dotted lines. Connecting all the data points yields the projected beat frequency.
In PW Doppler, the beat frequency is not as well defined as with CW Doppler, because the pulsed echoes are equivalent to sampling the Doppler signal at discrete intervals. The oscillatory pattern can be more accurately delineated if the sampling occurs repeatedly at short intervals. This requires a high Doppler PRF.
Blood flows with a range of velocities within the sample volume and gives rise to multiple Doppler shift frequencies. These combine via interference to yield a complex Doppler signal, which represents all flow velocities present in the sampled volume. Fortunately, methods have been developed to isolate the individual velocity components and then display this information in an easy to understand format.
Velocity Detection Limit
PW Doppler has a limit with respect to the maximum beat frequency that can be detected accurately. This upper frequency boundary is called the Nyquist limit, which is caused by discrete (noncontinuous) sampling. The maximum Doppler shift frequency equals one-half the sampling rate, given by the Doppler PRF. Noncontinuous sampling creates a very important impediment in PW Doppler. To accurately measure a fast moving reflector producing a high Doppler shift frequency, a rapid sampling rate is necessary; however, a high PRF restricts the depth that can be interrogated, because a specific time is required to receive the echoes arising from that depth before the next transmitted pulse. Thus, as the depth to the vessel or structure is increased, more time is required between transmit pulses, and the maximum Doppler shift frequency that can be measured becomes lower. The problem becomes more complex because the Doppler shift frequency is also proportional to the transmitted frequency. The most problematic situation for PW Doppler occurs for deep-lying structures with high-velocity flow in which the Doppler angle to flow is near 0 degrees. This combination of factors arises frequently in the Doppler evaluation of heart valves, particularly with disorders such as aortic stenosis in which the velocities can be very high.
Table 5-1 illustrates the effect of the depth of interest and transmitted frequency on the maximum velocity limit when angle to flow is unchanged. As the depth of interest is increased, the maximum reflector velocity that can be measured is decreased. Importantly, a low-frequency transducer allows higher velocities to be detected. A larger Doppler angle extends the maximum velocity limit. At a depth of 10 cm with 5 MHz transmitted frequency, the maximum velocity limit increases from 84 to 119 cm/s when the Doppler angle is changed from 45 to 60 degrees. This velocity constraint occurs because the motion of the reflector is sampled at discrete intervals and not continuously, as with CW Doppler ultrasound. In contrast with PW Doppler, CW Doppler has no maximum velocity limit. (Since the CW transducer is continuously transmitting, there is essentially no “pulse repetition frequency” and therefore no Nyquist limit).
TABLE 5-1 • Maximum Velocity Limit in Pulsed-Wave Doppler with Different Transmit Frequencies
The following is a real-world example which illustrates a practical application of the maximum velocity limit in a Doppler examination. The maximum PRF for a 10 cm depth is approximately 7700 pulses per second. Using a 3.5-MHz transducer with a Doppler angle of 30 degrees, the maximum Doppler shift frequency, which can be accurately measured, is 3850 Hz, or a velocity of 98 cm/s. If the transmit frequency were lowered to 2 MHz, the maximum detectable velocity would increase to 171 cm/s. Changing the depth of interest to 15 cm while maintaining the transducer frequency at 3.5 MHz reduces the detectable maximum velocity to 65 cm/s. Fortunately, these conditions are such that the physiologic velocities of normal velocity blood flow (except within the heart) usually occur within the detectable range of PW Doppler units.
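These figures follow directly from the Nyquist relationship. Here is a minimal R sketch of the calculation (the function name is ours; it assumes c = 1540 m/s and that the PRF is set to the maximum the depth allows):

# Maximum detectable velocity (cm/s) before aliasing
max_velocity <- function(depth_cm, f0, theta_deg, c_cm = 154000) {
  prf <- c_cm / (2 * depth_cm)  # maximum PRF allowed by the round-trip time
  nyquist <- prf / 2            # maximum measurable Doppler shift (Hz)
  nyquist * c_cm / (2 * f0 * cos(theta_deg * pi / 180))
}
max_velocity(10, 3.5e6, 30)  # ~98 cm/s
max_velocity(10, 2.0e6, 30)  # ~171 cm/s
max_velocity(15, 3.5e6, 30)  # ~65 cm/s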
At a minimum, two measurements are required per beat cycle to define the Doppler shift frequency unambiguously. This is the reason the Nyquist limit (upper limit for detection of the Doppler shift frequency) is equal to one-half the Doppler PRF. Because the beat frequency is sampled intermittently in PW Doppler, limited data are available for calculation of the Doppler shift (each transmit pulse ultimately contributes one point on the waveform of the Doppler shift frequency). If the Doppler PRF is not adequate to generate at least two points per beat cycle of the Doppler shift frequency, the Doppler shift frequencies above the Nyquist limit will be misinterpreted as lower than their actual value (Figure 5-16). This error in the measurement of the Doppler shift frequency caused by a low sampling rate is called aliasing.
FIGURE 5-16. Intermittent sampling of the beat frequency (solid line). (A) Multiple measurements per cycle allow accurate assessment of the beat frequency. (B) As few as two measurements per cycle also provide accurate interpretation of the beat frequency. (C) If the sampling rate is less than two times per cycle, then the true beat frequency (solid line) is misinterpreted as a lower frequency (dotted line).
Imagine that a race car is traveling around an oval racetrack at constant speed. A series of photographs closely spaced in time would accurately depict the motion as the car advances around the track. Indeed, as long as at least two photographs are taken for each lap, the interpretation of the movement would be correct. Now suppose the car accelerates to a higher speed, while the frequency of the photographs remains unchanged. At this faster speed, there may be 1.5 photographs taken for each lap of the car (three photos every two laps). This series of photographs will now appear to show the car moving backward around the track at slower than the actual speed. Thus, there is a minimum sampling rate (2 photos per lap) that accurately portrays the motion of the car around the track, analogous to the minimum sampling rate, or PRF, in Doppler applications.
Because velocity information is almost always displayed in velocity units as opposed to frequency units, the Nyquist limit is typically given in velocity. The Nyquist limit may be displayed separately from the velocity scale; however, it is important for the sonographer to know that the Nyquist limit is equal to the maximum velocity shown on the scale. There are both a positive and a negative value displayed for the Nyquist limit. If the baseline is moved up or down from the middle of the spectral display, the maximum velocity limit for forward and reverse flow is no longer the same (Figure 5-17).
FIGURE 5-17. Baseline is placed off center. The maximum velocity in the forward and reverse directions is not the same (arrows). The + and – maximum velocity values are indicated by arrows.
If the baseline is moved all the way to the top or bottom of the spectral display, the Nyquist limit is extended to the greatest possible value in a single direction for the given sampling rate. However, any flow that is present in the opposite direction is unknown (Figure 5-18).
FIGURE 5-18. Baseline is moved to the bottom of the display. Measurements of velocity are restricted to one direction only, but the maximum velocity in the forward direction that can be displayed without aliasing is extended.
All matter possesses certain characteristics by which it is identified. A pen we use to write can be described by its color, the color of its ink, its length, diameter, mass, transparency, etc. These characteristics are known as properties of the pen. Some of them are measurable: length, diameter, and mass are properties that can be measured. Others, like color and transparency, can only be described. The properties of matter that can be measured are physical quantities.
Physical Quantity: A Characteristic/ property of a system that can be measured to determine its amount is called a Physical Quantity.
Example: Mass, length, area, volume, density, speed, acceleration, force, pressure etc.
Based on their origin (dependency) physical quantities can be classified as: (i) basic or fundamental and (ii) derived physical quantities.
Fundamental Physical Quantity:
A physical quantity that has got its own origin and cannot be derived from any other physical quantity is called fundamental or basic physical quantity.
There are seven fundamental physical quantities. They are mass, length, time, temperature, electric current, amount of substance and luminous intensity. Along with these there are two supplementary fundamental quantities: plane angle and solid angle. Let us learn each of them in detail.
Length: The separation between two points is termed as length. It is generally used to denote the distance between two points or depict the size of an object.
Mass: If we are asked to push a big rock and a small stone, the stone is easily displaced with little effort, while the big rock is tough to push and needs much more effort to displace. Mass is a measure of a body’s resistance to motion. The property of a body to oppose motion is known as inertia; we will learn about inertia in detail in later chapters of the subject. The inertia of a body is a measure of its mass.
Time: The measure of the sequence of events is termed time. It is measured using a clock, generally in the form of a watch or, these days, an electronic gadget.
Temperature: On summer days we feel hot, while in winter we feel cold. It is often necessary to measure the degree of hotness or coldness of a body or region, and this is done using the physical quantity temperature. Temperature is the measure of the degree of hotness or coldness of a body or system.
Electric Current: Matter is composed of atoms. Atoms contain positively charged protons, negatively charged electrons, and neutral neutrons. When these charged particles move, they constitute an electric current. The rate of flow of charge is called electric current. It is measured using a device named an ammeter.
Amount of Substance: Representing matter in terms of the number of its constituent particles, such as atoms and molecules, is the measure of the amount of substance.
Luminous Intensity: It is advisable not to look at the sun with bare eyes. If we do look at it during the day, our vision blanks out for some time. This is due to the high brightness of the light emitted by the sun. This brightness is represented using luminous intensity.
Taking a look at the two supplementary physical quantities:
Plane Angle: The angle formed by two intersecting lines in a plane is called a plane angle. In the figure below, two lines intersecting at a point form four angles, as shown.
Solid Angle: The field of view of an object or surface obtained from a point is called a solid angle. In the figure below, an object viewed from a point forms a solid angle, indicated by the hatched region.
Derived Physical Quantities:
These physical quantities are obtained from combination of fundamental physical quantities.
They do not have an origin of their own. Most physical quantities in nature are derived quantities, which obtain their existence from the seven fundamental physical quantities.
Consider a rectangular region with known length and breadth. These two adjacent lengths are fundamental physical quantities. Area, being a physical quantity too, is derived by multiplying the lengths of two adjacent sides of the rectangle. This simple example shows that area is a derived physical quantity.
In another example, speed is obtained by dividing the distance covered by the time taken. This shows that speed is also a derived physical quantity, obtained from the two fundamental quantities length and time.
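As a small numerical sketch (the values here are made up for illustration), derived quantities are computed directly from fundamental ones:

length_m <- 4                    # metres (fundamental)
breadth_m <- 3                   # metres (fundamental)
area_m2 <- length_m * breadth_m  # 12 square metres (derived)

distance_m <- 100                # metres (fundamental)
time_s <- 20                     # seconds (fundamental)
speed_ms <- distance_m / time_s  # 5 metres per second (derived)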
Representing a physical quantity requires mentioning its magnitude (amount) in every case, while some quantities additionally require the direction along which they act. Classified by method of representation, physical quantities are of two kinds: (i) scalars and (ii) vectors.
Scalar Quantity: A physical quantity that requires mentioning its magnitude only is termed as a scalar.
The depiction of magnitude involves a number and an associated unit. For example, a bag of rice is quantified by specifying its mass, say 25 kg, in which 25 is the numeral and kg is the unit; this is enough to specify its mass. A few more examples include volume, temperature, energy, time, and electric current.
Vector Quantity: Unlike a scalar, a vector quantity requires both magnitude and direction for representation of the physical quantity.
Consider moving an object on a flat surface from one location (point A) to another (point B). A certain minimum force is required to change its position. If a force of, say, 10 newtons is applied to it, the object will certainly move. But if the 10 N force is applied in the direction of some other point C, it will not serve our purpose, as the object will not reach the desired destination B. Only if the force is directed toward B will the object reach point B. This indicates the necessity of mentioning the direction of a force along with its magnitude. And it is not only force: there are other physical quantities too that need a direction along with a magnitude to give a clear picture of them.
Other quantities, like displacement, velocity, acceleration, and momentum, also need a direction along with a magnitude to be represented. There are different approaches to representing a vector quantity, which we will learn in the topic of motion in a plane.
Area is a physical quantity that can be considered a scalar as well as a vector, based on its purpose. As a scalar quantity, only its magnitude, i.e., the region occupied, is represented. When denoting it as a vector quantity, the direction of the area must be mentioned along with its magnitude; the direction perpendicular to the plane of the area is taken as its direction.
A certain category of physical quantities, called tensors, which are quite different from both scalars and vectors, will be introduced later in the topic of mechanical properties of solids.
The disk and washer methods are useful for finding volumes of solids of revolution. In this article, we’ll review the methods and work out a number of example problems. By the end, you’ll be prepared for any disk and washer methods problems you encounter on the AP Calculus AB/BC exam!
Solids of Revolution
The disk and washer methods are specialized tools for finding volumes of certain kinds of solids — solids of revolution. So what is a solid of revolution?
Starting with a flat region of the plane, generate the solid that would be “swept out” as that region revolves around a fixed axis.
For example, if you start with a right triangle, and then revolve it around a vertical axis through its upright leg, then you get a cone.
Here’s another cool example of a solid of revolution that you might have seen hanging up as a decoration! Tissue paper decorations that unfold from flat to round are examples of solids of revolution: a flat paper profile sweeps out the full three-dimensional shape as it opens around its central axis.
The Disk and Washer Methods: Formulas
So now that you know a bit more about solids of revolution, let’s talk about their volumes.
Suppose S is a solid of revolution generated by a region R in the plane. There are two related formulas, depending on how complicated the region R is.
The simplest case is when R is the area under a curve y = f(x) between x = a and x = b, revolved around the x-axis.
Now imagine cutting the solid into thin slices perpendicular to the x-axis. Each slice looks like a disk or cylinder, except that the outer surface of the disk may have a curve or slant. Let’s approximate each slice by a cylinder of height dx, where dx is very small.
In fact, I like to think of each disk as being generated by revolving a thin rectangle around the x-axis. Then you can see that the height of the rectangle, y, is the same as the radius of the disk.
Now let’s compute the volume of a typical disk located at position x. The radius is y, which itself is just the function value at x. That is, r = y = f(x). The height of the disk is equal to dx (think of the disk as a cylinder standing on edge).
Therefore, the volume of a single cylindrical disk is: V = πr²h = π[f(x)]² dx.
This calculation gives the approximate volume of a thin slice of S. Next, to approximate the volume of the entirety of S, we have to add up all of the disk volumes throughout the solid. For simplicity, assume that the thickness of each slice is constant (dx). Also, for technical reasons, we have to keep track of the various x-values along the interval from a to b using the notation xₖ for a “generic” sample point.
Finally, by letting the number of slices go to infinity (by taking a limit as n → ∞), we develop a useful formula for volume as an integral:
V = π ∫ₐᵇ [f(x)]² dx
Example 1: Disk Method
Let R be the region under the curve y = 2x3/2 between x = 0 and x = 4. Find the volume of the solid of revolution generated by revolving R around the x-axis.
Let’s set up the disk method for this problem:
V = π ∫₀⁴ (2x^(3/2))² dx = π ∫₀⁴ 4x³ dx = π [x⁴]₀⁴ = 256π
The volume of the solid is 256π (roughly 804.25) cubic units.
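As a quick check of that computation, here is a short sketch using sympy (an assumption of this write-up, not part of the original article) that evaluates the disk-method integral symbolically.

```python
import sympy as sp

x = sp.symbols('x')
f = 2 * x**sp.Rational(3, 2)               # the curve y = 2x^(3/2)
V = sp.pi * sp.integrate(f**2, (x, 0, 4))  # disk method: pi * integral of f(x)^2

print(V)         # 256*pi
print(float(V))  # ~804.25
```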
Now suppose the generating region R is bounded by two functions, y = f(x) on the top and y = g(x) on the bottom.
This time, when you revolve R around an axis, the slices perpendicular to that axis will look like washers.
A washer is like a disk but with a center hole cut out. The formula for the volume of a washer requires both an inner radius r₁ and an outer radius r₂.
We’ll need to know the volume formula for a single washer.
V = π (r₂² − r₁²) h = π ([f(x)]² − [g(x)]²) dx
As before, the exact volume formula arises from taking the limit as the number of slices becomes infinite:
V = π ∫ₐᵇ ([f(x)]² − [g(x)]²) dx
Example 2: Washer Method
Determine the volume of the solid. Here, the bounding curves for the generating region are outlined in red: the top curve is y = x and the bottom one is y = x².
This is definitely a solid of revolution. We’ll set up the formula with f(x) = x (top) and g(x) = x2 (bottom). But what should we use as a and b?
Well, just as in some area problems, you may have to solve for the bounds. Clearly the region is bounded by the two curves between their common intersection points. Set f(x) equal to g(x) and solve to locate these points of intersection.
x = x² → x − x² = 0 → x(1 − x) = 0.
We find two such points: x = 0 and 1. So set a = 0 and b = 1 in the formula.
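Plugging these bounds into the washer formula gives V = π ∫₀¹ (x² − x⁴) dx = 2π/15. A short sympy sketch (sympy being an assumption of this write-up) confirms the evaluation.

```python
import sympy as sp

x = sp.symbols('x')
top, bottom = x, x**2
V = sp.pi * sp.integrate(top**2 - bottom**2, (x, 0, 1))  # washer method

print(V)  # 2*pi/15
```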
Example 3: Different Axes
Set up an integral that computes the volume of the solid generated by revolving the region bounded by the curves y = x² and x = y³ around the line x = −1.
Be careful not to blindly apply the formula without analyzing the situation first!
This time, the axis of rotation is a vertical line x = −1 (rather than the horizontal x-axis). The radii will be horizontal segments, so think in terms of x₁ and x₂ (rather than y-values).
Furthermore, because everything is turned on its side compared to previous problems, we have to make sure both boundary functions are solved for x. The thickness of the washer is now dy (instead of dx).
Finally, because the axis of revolution is one unit to the left of the y-axis, that adds another unit to each radius. (The further away the axis, the longer the radius must be to reach the figure, right?) Take a look at the graph below to help visualize what’s going on.
- Inner Radius: x = y³ + 1
- Outer Radius: x = y^(1/2) + 1
As before, set the functions equal and solve for points of intersection. The curves cross at (0, 0) and (1, 1), so the bounds are y = 0 and y = 1.
Using the Washer Method formula for volume, we obtain:
V = π ∫₀¹ [(y^(1/2) + 1)² − (y³ + 1)²] dy
The problem only asks for setup, so we are done at this point.
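Although only the setup is required, it’s easy to check the value with a computer algebra system. A minimal sympy sketch (an assumption, not part of the exam solution) evaluates the integral to 25π/21.

```python
import sympy as sp

y = sp.symbols('y')
outer = sp.sqrt(y) + 1   # x = y^(1/2), shifted one unit for the axis x = -1
inner = y**3 + 1         # x = y^3, shifted the same way
V = sp.pi * sp.integrate(outer**2 - inner**2, (y, 0, 1))

print(V)  # 25*pi/21
```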
About Shaun Ault
Shaun earned his Ph. D. in mathematics from The Ohio State University in 2008 (Go Bucks!!). He received his BA in Mathematics with a minor in computer science from Oberlin College in 2002. In addition, Shaun earned a B. Mus. from the Oberlin Conservatory in the same year, with a major in music composition. Shaun still loves music -- almost as much as math! -- and he (thinks he) can play piano, guitar, and bass. Shaun has taught and tutored students in mathematics for about a decade, and hopes his experience can help you to succeed!
Clines and Continuous Variation
Frank Livingstone, a specialist in genetic anthropology, has written that “there are no races, only clines” (Livingstone 1962, p. 279). For centuries, both everyday folk beliefs and the sciences presumed that “races” were separated by genetic boundaries, with a high degree of biological similarity among the members of each group. This was based on thinking in terms of a discrete distribution of traits. It was believed, for example, that all sub-Saharan Africans had black skin, all Europeans were white, and all Asians were yellow. Thinking in terms of homogeneous populations with discrete traits and boundary lines was supported by the selective perception that certain external physical traits fit stereotypical traditions. In the twentieth century, however, thinking in terms of continuous variation, also called clines, came to provide a more useful and precise way to analyze human variation, making the concept of “race” obsolete. Traits that were assumed to be unique to each race are in fact distributed continuously. For example, skin color, based mostly on the frequency of pigment (melanin), is darker near the equator and becomes lighter as one moves in a northern direction, reaching its lowest frequency in northern latitudes among populations that have resided in those areas for thousands of years.
The concept of the cline was first proposed by the British biologist Julian Huxley in 1938. He derived the name from the Greek word klinein, meaning “to lean,” and defined a cline as a “gradation in measurable characters” (Huxley 1938, p. 219). A cline can be based on directly observable external biological traits, also called phenotypes (e.g., hair color, skin pigmentation, stature), or it can be derived from genes (e.g., ABO blood type, sickle-cell hemoglobin) and referred to as a genotype. Clines may be continuous and vary gradually over a region, or they may vary abruptly. There may be steep clines or gradual clines, as well as sudden midcline reversals. Clinal maps of England show areas where 15 percent of the population have red hair adjacent to areas where less than 5 percent have red hair. Variation may not be due to absolute barriers, but may instead be influenced by partly passable mountain ranges, deserts, and bodies of water. Even before the time of Columbus, clines were created or disrupted by the movement of peoples, a trend that intensified after 1492 with the enslavement and forced emigration of millions of Africans and the migration of Europeans into North and South America. The result resembles a weather map on which lines separate temperature variations. On a biological cline map, the lines separating phenotypical traits are called isophenes; lines referring to genotype frequencies are isogenes. Similar illustrations of gradients are seen in maps of elevations of land contours above sea level; in this sense, the word cline is related to incline and decline in altitude.
Together with his coauthor A. C. Haddon, Huxley presented the evidence for clines in 1936 in a pioneering map (see Figure 1) that showed the decrease of the gene for blood type B across Europe and its increase into western Asia. Haddon and Huxley concluded that the evidence of clines invalidated the race concept’s assertions of racial homogeneity and of boundary lines making for discrete races. Later, computers would make possible the analysis of more complete data into interval maps showing other clinal patterns. The exact numerical values vary, but any cline can be represented by a set of intervals. In this sense, a cline refers both to the concept of continuous variation and to a method of measuring and depicting variation in the frequency of any physical feature or gene over a geographic area.
The pioneering efforts of Huxley and Haddon did not receive immediate acceptance. The idea of “race” was too strongly established in Western folk beliefs and scientific tradition. But newer research studies would provide a catalyst for change. Among the first was Livingstone’s 1958 study of sickle-cell anemia, which showed that it was more frequent in malarial areas. Prior to this it was believed by some that genes for sickle-cell anemia were a discrete racial trait of black Africans. Livingstone was able to show that the alleles for sickling (Hbs) are most frequent in populations in West Africa but decline in frequency in areas to the north and east, and are still less frequent around the Mediterranean and throughout South Asia. This is because another mutation, for hemoglobin E, also resists malaria in areas where the intensity of agriculture affects the frequency of mosquitoes.
Malaria continues to kill millions of people, mostly children, each year. Inheriting an allele for sickling from each parent leads to extreme anemia, severely reducing the number of offspring and the percent of sickling alleles in the population. Those inheriting normal hemoglobin—that is, without any sickling blood—contract malaria and have a significant death rate and a reduced number of offspring. Yet inheriting one such allele confers a resistance to the symptoms of malaria. Frequency of survival and reproduction with one sickle-cell allele is relatively greatest in areas where there is more agriculture being practiced, for the clearing of the land produces standing water where mosquitoes can breed. Therefore, the continuous variation over geographic regions is not due to biological race but is produced by human cultural practices in malarial climates.
Livingstone’s data was reported in a list, but a map developed later depicts a graphic clinal pattern (Johnston 1982, Figure 2). It is clearly a clinal pattern distributed through the malarial regions of Africa, Europe, and Asia. Livingstone’s data demonstrated that continuous clinal variation occurs within populations and across their boundaries, in clear disproof of the validity of the idea of race.
Another influence on the cline concept was presented by C. Loring Brace in “A Nonracial Approach Towards the Understanding of Human Diversity” in The Concept of Race (1964). Brace’s nonracial approach was the use of clines, and he illustrated it with four clinal maps (derived from Biasutti 1941), covering skin color, hair form, facial form based on relative tooth size, and nose form. All of these are traditional observable physical features (phenotypes) that had been used to construct racial stereotypes. Each clinal pattern can be studied, and Brace showed that evolutionary hypotheses could be developed and tested regarding their origin and distribution. When the four clinal patterns are overlaid on each other, it clearly demonstrates that racial boundaries do not exist, because the clinal patterns are not congruent and do not covary. Instead, they are discordant; that is, their distribution does not correspond with racial boundary lines. Brace declared that it was “extremely difficult to say where one population ends and another begins” (Brace 1964, p. 104). Thinking in terms of clines in this way clarified that racial boundaries are arbitrary cultural errors. The discordance of clines was further presented to biologists by Paul Ehrlich and Richard W. Holm (1964). The biologists Edward O. Wilson and W. L. Brown (1953) used clinal data as a basis for rejecting the concept of “subspecies,” in the sense of race.
Beginning in 1938–1939, and again in 1952–1954, the genetic anthropologist Joseph B. Birdsell measured Australian Aborigines for a number of traits. Using this data, Birdsell constructed numerous clinal maps. He viewed the data in the context of the concept of race up to the early 1970s, but in 1975 he wrote, in Human Evolution, that “The use of the term race has been discontinued because it is scientifically undefinable and carries social implications that are harmful and disruptive” (p. 505). In 1993 he published Microevolutionary Patterns in Aboriginal Australia: A Gradient Analysis of Clines. It contains a large number of clinal maps showing lack of covariation, contrary to the Western image of there being one stereotypical image of Australian Aborigines. In 1994, the geneticists L. Luca Cavalli-Sforza, Paolo Menozzi, and Alberto Piazza published a worldwide analysis using a database of 76,676 gene frequencies from aboriginal ethnic groups that were believed to be in the same location at the time of the study as they were at the end of the fifteenth century, although the gene pool and ethnic identity of each group had likely altered. They published more than 500 clinal maps, which were condensed into worldwide summary maps using 128 gene variants (alleles). The results did not correspond to racial boundary lines, and the coauthors rejected the race concept as a scientific failure and race classification as a futile exercise.
Acceptance of clines as a basis for rejecting the race concept was resisted by some anthropologists, especially by forensic anthropologists who asserted that they could identify an individual’s race by examining his or her skull. In doing so they ignored the fact that while crania might have some feature attributed to a person of one race, a particular skull could be that of a very light-skinned person who could be identified either as black or white. In addition, cranial features vary clinally within populations and change over time. Outspoken in defense of race was the forensic anthropologist Alice Brues in People and Races (1977). Brues wrote that clines were sometimes the appropriate concept to use, while at other times race was both a necessary and valid concept. Brues pointed out the apparent differences between races with a scenario of flying from a Scandinavian city and landing in Nairobi, Kenya. Brace replied that walking or bicycling between these two areas and progressing southward along the Nile, one would view a gradual change in physical features.
Acceptance of the new clinal concept and data on continuous variation became widespread beginning in the 1970s in anthropology, although the concept was less often explicitly stated than was the underlying and crucial fact of continuous variation. There continues to be reluctance among some scientists to relinquish race as the traditional and convenient way of extending to human populations the classification system of the Swedish botanist Carolus Linnaeus (1707–1778).
Thinking that uses the race concept assumes a high degree of uniformity of each trait, as well as the association of these traits within a population. Brace pointed out that this association “obscures the factors influencing the occurrence and distribution of any single trait. The most important thing for the analyses of human variation is the appreciation of the selective pressures which have operated to influence the expression of each trait separately” (Brace 1964, p. 107; italics in original). One example, as described above, is Livingstone’s explanation of the cline for the sickle-cell allele in relation to the frequency of malaria, which in turn is affected by the intensity of agriculture. Brace proposed explanations for the clinal distribution of nose form, hair form, skin color, and relative tooth size affecting face profile. Skin pigment is a protective response to ultraviolet radiation, which causes skin cancer. However, there is some uncertainty about the frequency of skin cancer as an influence on natural selection (through differential fertility), because the cancer develops after the years when reproduction is most likely. A stronger explanation for increased melanin is found in the effect of ultraviolet rays in reducing folic acid (folate) in the body. Low levels of folic acid result in a defect in the neural tube (spina bifida) of the developing fetus, and they may also affect the production of sperm (Jablonski 2004). The clinal pattern in melanin arises as the intensity of ultraviolet exposure decreases away from the equator. The presence of populations with lesser amounts of melanin as one proceeds north occurs because the reduced degree of ultraviolet intensity allows for the persistence of adequate folic acid, coupled with the need to generate more vitamin D for normal bone growth and the possible resistance of lighter skin to frostbite.
The covariation of hair form and skin color is an exception to the pattern of clinal discordance. Hair on the head varies for a biological reason: spiral and woolly hair insulates the head from ultraviolet radiation. Clinal patterns tend toward smaller teeth in areas with longer histories of food production from agriculture, while larger teeth occur in areas of hunting and gathering. Dental reduction began in the northern latitudes when cooking and the use of pottery for more liquid foods began, reaching equatorial areas later. As food became more tender, natural selection did not require large teeth, and mutations for smaller teeth could accumulate. Stature, meanwhile, varied in response to climate. In cold climates, body temperature is conserved by stocky bodies and short arms and legs. In hot, dry areas, a more linear body with long arms and legs dissipates heat more efficiently. The small stature of pygmies is an exception to the linear pattern, but they live in a hot, moist rainforest, along with other species that are smaller than closely related species living in the open savannas.
Particular genetic conditions, such as Tay-Sachs disease or sickle-cell anemia, have mistakenly been viewed as identifying particular races. Tay-Sachs is a condition in which inheriting two recessive genes is lethal. It has been attributed to Jews and explained by the possibility that the presence of one gene conferred a resistance to tuberculosis among the Ashkenazic Jews of eastern Europe who lived in crowded ghettos. The condition is also found in other populations but at a lower frequency, and a slightly different mutation also causes Tay-Sachs among French Canadians of Quebec. Racial stereotypes attribute other features to one or another particular race, such as uniform epicanthic folds over the eyes, prominent cheekbones, or thick lips. However, these vary by degree in a clinal pattern. Explanations for them as advantageous adaptations have not been established. They may have originated in one small population of related families and dispersed with population expansion, becoming more varied due to mating with members of other populations. Clinal variations in physical features are most commonly explained as advantageous for survival in different and sometimes extreme geographic locations. These biological features, mislabeled in the past as racial markers, did not necessarily make migration into those areas possible, but they may have evolved in gradations after movement into those areas. The spread of humans throughout the globe occurred because humans had the potential to live in many different areas, from the Arctic Circle to the semi-arid, near deserts of southwestern Africa. It has been suggested that races varied in their achievements because of their hereditary intelligence, but no proven method of measurement free of the cultural variation in IQ tests has been devised. Genes relating to intelligence have not been found, although many different negative mutations may reduce the functioning intelligence of an individual. The kind of achievements of various populations is best viewed not as the result of biological differences, but rather as a result of human flexibility for problem solving expressed in diverse cultures.
The availability of clinal data was necessary to bring about thinking without the idea of biological races, and an awareness of continuous variation has made racist stereotypes more difficult to use. Clinal thinking has become standard among anthropologists, and it is increasing among biologists.
Biasutti, Renato. 1941. Le razze e i popoli della terra, 3rd ed. Vol. 1. Turin, Italy: Unione Tipografico-Editrice.
Birdsell, Joseph B. 1975. Human Evolution: An Introduction to the New Physical Anthropology, 2nd ed. Chicago: Rand McNally.
Brace, C. Loring. 1964. “A Nonracial Approach Towards the Understanding of Human Diversity.” In The Concept of Race, edited by Ashley Montagu. New York: Free Press.
Brues, Alice. 1977. People and Races. New York: Macmillan.
Cavalli-Sforza, L. Luca, Paolo Menozzi, and Alberto Piazza. 1994. The History and Geography of Human Genes. Princeton, NJ: Princeton University Press.
Ehrlich, Paul, and Richard Holm. 1964. “A Biological View of Races.” In The Concept of Race, edited by Ashley Montagu, 153–179. New York: Free Press.
Huxley, Julian, and A. C. Haddon. 1936. We Europeans: A Survey of “Racial” Problems. New York: Harper.
Huxley, Julian. 1938. “Clines: An Auxiliary Taxonomic Principle.” Nature 142: 219–220.
Jablonski, Nina G. 2004. “The Evolution of Human Skin and Skin Color.” Annual Review of Anthropology 33: 585–623.
Johnston, Francis E. 1982. Physical Anthropology. Dubuque, IA: Wm. C. Brown.
Livingstone, Frank B. 1958. “Anthropological Implications of Sickle Cell Gene Distribution in West Africa.” American Anthropologist 60 (3): 533–562.
———. 1962. “On the Non-Existence of Human Races.” Current Anthropology 3: 279–281.
Moore, John. 1994. “Ethnogenetic Theory.” National Geographic Research and Exploration 10 (1): 10–37.
———. 1995. “Putting Anthropology Back Together Again: The Ethnogenetic Critique of Cladistic Theory.” American Anthropologist 96 (4): 925–948.
Wilson, Edward O., and William Brown, Jr. 1953. “The Subspecies Concept and Its Taxonomic Application.” Systematic Zoology 2 (3): 97–111.
"Clines and Continuous Variation." Encyclopedia of Race and Racism. . Encyclopedia.com. (August 19, 2019). https://www.encyclopedia.com/social-sciences/encyclopedias-almanacs-transcripts-and-maps/clines-and-continuous-variation
"Clines and Continuous Variation." Encyclopedia of Race and Racism. . Retrieved August 19, 2019 from Encyclopedia.com: https://www.encyclopedia.com/social-sciences/encyclopedias-almanacs-transcripts-and-maps/clines-and-continuous-variation
Encyclopedia.com gives you the ability to cite reference entries and articles according to common styles from the Modern Language Association (MLA), The Chicago Manual of Style, and the American Psychological Association (APA).
Within the “Cite this article” tool, pick a style to see how all available information looks when formatted according to that style. Then, copy and paste the text into your bibliography or works cited list.
Because each style has its own formatting nuances that evolve over time and not all information is available for every reference entry or article, Encyclopedia.com cannot guarantee each citation it generates. Therefore, it’s best to use Encyclopedia.com citations as a starting point before checking the style against your school or publication’s requirements and the most-recent information available at these sites:
Modern Language Association
The Chicago Manual of Style
American Psychological Association
- Most online reference entries and articles do not have page numbers. Therefore, that information is unavailable for most Encyclopedia.com content. However, the date of retrieval is often important. Refer to each style’s convention regarding the best way to format page numbers and retrieval dates.
- In addition to the MLA, Chicago, and APA styles, your school, university, publication, or institution may have its own requirements for citations. Therefore, be sure to refer to those guidelines when editing your bibliography or works cited list. |
Consider a function describing the change in velocity of a vehicle moving from one point to another. The change in velocity depends on the speed and direction in which the vehicle is travelling. If the acceleration is to be calculated, the limit of the function is essential, because the theory of the derivative is built on limits. This article deals with the concept of derivatives along with a few solved derivative examples.
A function which gives the rate of change of another function is called the derivative of that function, and the method of finding a derivative is called differentiation. A function is represented as a dependent variable in terms of an independent variable through an equation, and its derivative is a value which changes with respect to the input. The derivative is denoted as f′(x).
The derivative measures the slope of a function at each point and is represented as follows:
f′(x) = lim(h→0) [f(x + h) − f(x)] / h
Below are some important rules used to solve the derivative problems.
Constant multiple rule: (d/dx)[c f(x)] = c f′(x)
Sum and difference rule: (d/dx)[f(x) ± g(x)] = f′(x) ± g′(x)
Some basic derivatives: (d/dx)xⁿ = n xⁿ⁻¹ (power rule); (d/dx) sin x = cos x; (d/dx) cos x = −sin x; (d/dx) eˣ = eˣ; (d/dx) aˣ = aˣ logₑ a; (d/dx) log x = 1/x
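These rules can be checked with a computer algebra system. Below is a minimal sketch (assuming sympy is available; it is not part of the original article) that differentiates a few of the expressions used in this article.

```python
import sympy as sp

x = sp.symbols('x')

print(sp.diff(5 * x**3, x))         # 15*x**2  (constant multiple + power rule)
print(sp.diff(x**6 + x**3 + 2, x))  # 6*x**5 + 3*x**2  (sum and difference rule)
print(sp.diff(sp.sin(x), x))        # cos(x)
print(sp.diff(sp.log(x), x))        # 1/x
```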
Examples on Derivative for IIT JEE
Example 1: Find the derivative of x⁶ + x³ + 2.
Using the power rule, (d/dx)x⁶ = 6x⁵
(d/dx)x³ = 3x²
(d/dx)2 = 0
Hence the derivative of x⁶ + x³ + 2 is 6x⁵ + 3x².
Example 2: Solve
Example 3: What is the differential coefficient of aˣ + log x · sin x?
Let y = aˣ + log x · sin x.
Differentiating w.r.t. x, we get
dy/dx = aˣ logₑ(a) + (1/x) sin x + log x · cos x
Example 4: If f(x) = logₓ(log x), then f′(x) at x = e is _____.
f(x) = logₓ(log x)
= log(log x) / log x
f′(x) = [1/x − (1/x) log(log x)] / (log x)²
⇒ f′(e) = [1/e − 0] / 1
= 1/e
Example 5: What is the local maximum value of the function (log x) / x?
Let f(x) = (log x) / x.
⇒ f′(x) = 1/x² − (log x)/x² = (1 − logₑ x) / x²
For a maximum or minimum value of f(x), set f′(x) = 0:
(1 − logₑ x) / x² = 0
∴ logₑ x = 1
or x = e, which lies in (0, ∞).
For x = e,
d²y/dx² = −1/e³, which is negative.
Hence y is maximum at x = e and its maximum value is (logₑ e) / e = 1/e.
Example 6: If f(2) = 4 and f′(2) = 1, then what is the value of lim(x→2) [x f(2) − 2 f(x)] / (x − 2)?
Given f(2) = 4, f′(2) = 1.
lim(x→2) [x f(2) − 2 f(x)] / (x − 2) = lim(x→2) [x f(2) − 2 f(2) + 2 f(2) − 2 f(x)] / (x − 2)
= lim(x→2) [(x − 2) f(2)] / (x − 2) − lim(x→2) [2 f(x) − 2 f(2)] / (x − 2)
= f(2) − 2 lim(x→2) [f(x) − f(2)] / (x − 2)
= f(2) − 2 f′(2)
= 4 − 2(1) = 2
Example 7: In the mean value theorem, [f(b) − f(a)] / (b − a) = f′(c). If a = 0, b = 1/2 and f(x) = x(x − 1)(x − 2), then what is the value of c?
From the mean value theorem, f′(c) = [f(b) − f(a)] / (b − a).
Given that a = 0, f(a) = 0 and b = 1/2, f(b) = 3/8.
f′(x) = (x − 1)(x − 2) + x(x − 2) + x(x − 1)
f′(c) = (c − 1)(c − 2) + c(c − 2) + c(c − 1)
= c² − 3c + 2 + c² − 2c + c² − c
f′(c) = 3c² − 6c + 2
According to the mean value theorem,
f′(c) = [f(b) − f(a)] / (b − a)
3c² − 6c + 2 = [(3/8) − 0] / [(1/2) − 0] = 3/4
3c² − 6c + 5/4 = 0
On solving the above equation, the value of c obtained is
c = [6 ± √(36 − 15)] / (2 × 3)
= [6 ± √21] / 6
= 1 ± √21/6
Of these, only c = 1 − √21/6 ≈ 0.24 lies in the interval (0, 1/2).
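As a check, a short sympy sketch (an assumption of this write-up, not part of the original solution) solves the same quadratic and shows which root lies in (0, 1/2).

```python
import sympy as sp

c = sp.symbols('c')
# f(x) = x(x-1)(x-2) on [0, 1/2]; MVT requires f'(c) = (f(b) - f(a))/(b - a) = 3/4
roots = sp.solve(sp.Eq(3*c**2 - 6*c + 2, sp.Rational(3, 4)), c)

print(roots)                      # [1 - sqrt(21)/6, 1 + sqrt(21)/6]
print([float(r) for r in roots])  # only ~0.236 lies in (0, 1/2)
```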
Remote sensing is the art or science of acquiring information about objects (targets) on the Earth's surface using sensors mounted on platforms located at a distance from the targets. Measurements are made in different wavelength regions of the interactions between the targets and electromagnetic radiation (EMR).
The EM spectrum covers the various wavelengths/frequencies of electromagnetic radiation, from the lowest frequencies (longest wavelengths) of the radio spectrum to the highest frequencies, gamma rays. With respect to wavelength region, remote sensing is divided into two types: optical and radar remote sensing. The optical wavelength region (0.30-15.0 µm) is further subdivided into visible (0.38-0.72 µm), near IR (0.72-1.30 µm), middle IR (1.30-3.00 µm) and far IR (7.00-15.0 µm). Multi-spectral scanners operated in the visible and infrared regions of the spectrum are used extensively as remote sensing tools for a wide variety of applications.
Figure (b) illustrates the microwave portion of the spectrum. The microwave region extends from 0.3 to 300 GHz (1 m down to 1 mm in wavelength). The different microwave bands are designated by letters, as indicated in figure (b); the frequency and wavelength ranges of each band are given in the table below:
Types of Remote Sensing: With respect to the energy source used, there are two types of remote sensing: active and passive. Active remote sensing detects reflected responses from objects that are irradiated by artificially generated energy sources. Active sensors provide their own illumination and hence comprise both a transmitter and a receiver (radar imaging systems (RADAR: RAdio Detection And Ranging), scatterometers, altimeters), while passive remote sensing detects the reflected or emitted electromagnetic radiation from natural sources. Passive sensors are receivers that measure the radiation backscattered from the scene under observation (microwave radiometers). Radar systems are commonly based on the measurement of signal time delays.
The basic principle of radar is transmission and reception of pulses. Short high-energy pulses are emitted by the transmitter and the returning echoes are recorded by the receiver. It provides information on magnitude, phase, time interval between pulse emission and return from the object, polarization and Doppler frequency.
A short pulse is transmitted from the radar, and when the pulse strikes a target, a signal returns to the antenna. The time delay between the transmitted and received signals gives the distance between target and sensor. As the speed of light at which the pulse propagates is much faster than the platform velocity, the echo of the pulse from the ground is assumed to be received at the same platform position at which the pulse was transmitted.
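As a minimal sketch of this timing relation (with a hypothetical delay value and the approximate speed of light), the two-way delay converts to range as follows:

```python
C = 3.0e8  # approximate speed of light, m/s

def slant_range(two_way_delay_s: float) -> float:
    """Target range in metres; the pulse travels out and back, hence the 1/2."""
    return C * two_way_delay_s / 2.0

print(slant_range(66.7e-6))  # ~10,000 m for a 66.7-microsecond echo delay
```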
Imaging radars used for remote sensing are side-looking airborne radars (SLARs). A platform carries a side looking antenna perpendicular to the flight direction of the platform and transmits radar pulses in a direction different from the flight path. As the platform moves one beam width forward, the return signals come from a different strip on the ground. These signals intensity-modulate the line on the cathode-ray tube and produce a different image on a line on the film adjacent to the original line. As the platform moves forward, a series of these lines is imaged onto the film, and the result is a two-dimensional picture of the radar return from the surface.
The area continuously imaged from the radar beam is called the swath and can be divided into near range and far range. Each transmitted wave front hits the target surface at near range and sweeps across the swath to far range.
Slant range and Ground range:
The distance from the radar to the scatterer is called range. As the radar is located at some altitude above the ground, this is not the same as the distance along the ground; the distance dimension in the image is therefore called slant range.
The figure shows two types of radar data display. In a slant range image, distances are measured between the antenna and the target; slant range data is the natural result of radar range measurements. A slant range coordinate is defined in a direction normal to the flight path, and an azimuth coordinate is defined in the direction along the flight path. In a ground range image, distances are measured between the platform ground track and the target, and placed in the correct position on the chosen reference plane. Transformation to ground range requires correction at each data point for local terrain slope and elevation. Ground range resolution (Rr) is the horizontal expression of the slant range resolution and is expressed mathematically as:
Rr = c t / (2 cos θD)
where c is the speed of light, θD is the depression angle, and t is the pulse duration.
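A small sketch of this relation, using hypothetical values for the pulse duration and depression angle:

```python
import math

C = 3.0e8  # approximate speed of light, m/s

def ground_range_resolution(pulse_duration_s: float, depression_deg: float) -> float:
    """Rr = c*t / (2*cos(theta_D)); the inputs here are purely illustrative."""
    return C * pulse_duration_s / (2.0 * math.cos(math.radians(depression_deg)))

print(ground_range_resolution(0.1e-6, 45.0))  # ~21.2 m
```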
The differences between the many imaging radars used in remote sensing are primarily due to the antenna which determines the spatial resolution in the azimuth direction (Raney 1998). Imaging radars can be divided in two main categories, depending on the imaging technique used: Real Aperture Radar (RAR) also called Side Looking Airborne Radar (SLAR) and the Synthetic Aperture Radar (SAR).
Both SLAR and SAR are side-looking systems with an illumination direction usually perpendicular to the flight line. The difference lies in the resolution in the along-track, or azimuth, direction. SLAR has an azimuth resolution determined by the antenna beamwidth, so that it is proportional to the distance between the radar and the target (slant range).
SAR improves on the natural radar resolution by focusing the image through a process known as synthetic aperture processing, which synthesizes a very long antenna by combining signals (echoes) received by the radar as it moves along its flight track. A synthetic aperture is constructed by moving a real aperture or antenna through a series of positions along the flight track. These systems have an azimuth (along-track) resolution that is independent of the distance between the antenna and the target. SAR takes advantage of the Doppler history of the radar echoes generated by the forward motion of the platform to synthesize a large antenna, enabling high azimuth resolution in the resulting image despite a physically small antenna.
SAR works on the principle of the Doppler effect, a property of waves reflected (or emitted) by moving objects. If a wave is reflected or emitted by an object approaching a receiver, its frequency as observed by the receiver is increased; if the object is receding, its observed frequency is decreased. A narrow radar beam is projected at right angles to the forward motion of the platform. Distant objects move across this side-looking beam as the aircraft moves in a straight line. As an object first enters the beam, its relative motion has a component toward the platform, which Doppler-shifts its RADAR reflection to higher frequencies. As the object passes through the centerline of the beam, it ceases to get closer to the aircraft; at that instant, its reflection ceases to be Doppler-shifted. Next, as the object passes through the trailing half of the beam, it begins to move away from the aircraft, which Doppler-shifts its reflection to lower frequencies. Thus, although reflections from all objects at a given distance from the RADAR return to its antenna at the same moment, reflections from objects ahead of the aircraft are Doppler-shifted to higher frequencies, and those from objects trailing the aircraft are shifted to lower frequencies. This effect can be used to distinguish objects inside the beam, achieving an angular resolution that is higher than the beam's physical width.
Comparing the Doppler-shifted frequencies to a reference frequency allows many returned signals to be "focused" on a single point, effectively increasing the length of the antenna that is imaging at that particular point. The result is a very narrow effective antenna beam width, even at far ranges without requiring long antenna or short operating wavelength. Within the wide antenna beam, returns from features in the area ahead of the aircraft will have up shifted frequencies resulting from Doppler Effect. Returns from features in the area behind the aircraft will have down shifted frequencies. Returns from features near the centre line of beam width have less or no frequency shift.
This aperture synthesis is achieved by coherently integrating the returned signal pulse-to-pulse as the radar moves along its path. The azimuth resolution attained in this manner is half a wavelength divided by the change in viewing angle during the aperture formation process.
Interpreting radar data depends on an understanding of the interaction between system parameters and target characteristics. Both RAR and SAR systems have specific operational parameters which will influence the interaction between the pulses transmitted and the targets on the Earth's surface.
Wavelength: As discussed above, the electromagnetic spectrum (refer to the first figure) covers a wide range of microwave wavelengths/frequencies. The interaction of microwaves with targets on the Earth's land surface depends on the wavelength used. Penetration depth increases with wavelength (Elachi 1988). The apparent roughness of a surface in a SAR image is also influenced by the wavelength used.
Polarization: The temporal and geometric behaviour of the electric field vector of an electromagnetic wave transmitted or received by a radar system is its polarization. It refers to the geometry traced by the tip of the electric vector (E) as it evolves with time. Remote sensing radars are usually designed to transmit either vertically polarized or horizontally polarized radiation, and the radar can receive either vertically or horizontally polarized radiation, or both. The letters H and V designate the planes of transmitted and received polarization: horizontal and vertical. Therefore, the polarization of a radar image is HH for horizontal transmit, horizontal receive; VV for vertical transmit, vertical receive; HV for horizontal transmit, vertical receive; and VH for vertical transmit, horizontal receive. When the polarization of the received radiation is the same as the transmitted radiation, the image is said to be like-polarized or co-polarized. When the polarization of the received radiation is orthogonal to the transmitted radiation, the image is said to be cross-polarized. Cross-polarized signals are usually a result of multiple scattering by the target or terrain, and they are weaker than the co-polarized signals. The backscatter of microwaves from an object depends on the polarization of the incident wave and also on the geometric structure of the object.
A radar system which can record two different (orthogonal) polarizations is a dual-polarization radar. Radar capable of acquiring more than one independent polarization measurement, either simultaneously or separately, is multi-polarization radar. A multi-polarization radar system can have two to four possible polarizations, and the channels are not phase coherent; it may have only one channel, which switches between the different polarizations. Radar systems designed to collect image data of a scene using two orthogonal transmit polarizations and the same two polarizations on receive are quadrature-polarization, or polarimetric, radar. Transmit and receive channels are orthogonal, and four channels are required to make the measurements (typically HH, HV, VV and VH). A detailed explanation of polarimetry is given in Chapter 8.
Targets on the Earth's surface scatter microwave radiation differently depending on the polarization of the wave transmitted. If the plane of polarization of the transmitted wave is parallel to the main line of polarization of the target being sensed the like polarized backscatter is stronger. The cross-polarization or depolarization of the transmitted wave is also a function of the amount of multiple volumetric scattering taking place at the targets. SAR systems with cross-polarised receiving capabilities can provide additional information for the image interpretation and understanding the target/wave interaction (Lewis and Anderson 1998).
Incident angle: The incident angle (θ) is a major factor influencing the radar backscatter and the appearance of targets in the images. This angle is defined between the radar pulse and a line perpendicular to the Earth's land surface. Figure 2.2 illustrates the system and local incident angles. For a flat surface, θ is the complement of the depression angle (γ) (Jensen 2000). A smaller incidence angle results in a larger backscatter value.
In general, images acquired at small incident angles (less than 30°) emphasize variations in surface slope, and geometric distortions due to layover and foreshortening in mountainous regions can be severe. Images with large incident angles have reduced geometric distortion and emphasize variations in surface roughness, although radar shadows increase (Lillesand and Kiefer 2000).
RADAR EQUATION PRINCIPLE:
The fundamental relation between the characteristics of the radar, the target, and the received signal is given by the radar equation. It predicts performance in terms of signal-to-interference ratio based upon the radar hardware, the distance to the target, the target's radar cross section, and the total system noise.
Five primary factors that determine signal strength are given in the radar equation: the density of radiated power at the range of the target; the radar reflectivity of the target and the spreading of radiation along the return path to the radar; the effective receiving area or aperture of the antenna; the time over which the target is illuminated; and signal losses caused by physical phenomena, such as conversion to heat, and processing losses.
The geometry of scattering from an isolated radar target (scatterer) is shown in the figure, with the parameters that are involved in the radar equation.
When a power Pt is transmitted by an antenna with gain Gt, the power per unit solid angle in the direction of the scatterer is PtGt, where the value of Gt in that direction is used. At the scatterer, the power density is

Ss = PtGt / (4πRt²)    (1)

where Ss is the power density at the scatterer. The spreading loss 1/(4πRt²) is the reduction in power density associated with spreading of the power over a sphere of radius Rt surrounding the antenna. The total power intercepted by the scatterer is the product of the power density and the effective receiving area Ars of the scatterer:

Prs = Ss Ars = PtGtArs / (4πRt²)

Ars depends on the effectiveness of the scatterer as a receiving antenna.

As scatterers are neither perfect conductors nor perfect isolators, some of the power received by the scatterer is absorbed in losses and the rest is reradiated in various directions. The fraction absorbed is fa, so the fraction reradiated is (1 − fa), and the total reradiated power is

Pts = (1 − fa) Prs

The conduction and displacement currents that flow in the scatterer result in reradiation that has a pattern. The effective receiving area of the scatterer is a function of its orientation relative to the incoming beam, so Ars in the equation is given for the direction of the incoming beam.

The reradiation pattern may not be the same as the pattern of Ars, and the gain in the direction of the receiver is the relevant value in the reradiation pattern. Thus, the power density arriving back at the receiver is

Sr = PtsGts / (4πRr²)

where Pts is the total reradiated power, Gts is the gain of the scatterer in the direction of the receiver, and 1/(4πRr²) is the spreading factor for the reradiation. Radar therefore has two spreading factors; if Rr = Rt, the total path is 2Rt and the combined spreading factor is (1/4π)²(1/Rt)⁴.

The power entering the receiver is

Pr = Sr Ar = [PtGt / (4πRt²)] [(1 − fa) Ars Gts] [1 / (4πRr²)] Ar

where the area Ar is the effective aperture of the receiving antenna, not its actual area.

The factors associated with the scatterer are combined in the middle square bracket.

These factors are difficult to measure individually, and hence they are normally combined into one factor, the radar scattering cross section:

σ = (1 − fa) Ars Gts

The cross section σ is a function of the directions of the incident wave and the wave toward the receiver, as well as of the scatterer's shape and dielectric properties. The final form of the radar equation is obtained as

Pr = PtGtArσ / [(4π)² Rt² Rr²]

If the receiving and transmitting locations are the same, the transmitter and receiver distances are the same. The same antenna is used for transmitting and receiving, so the gains and effective apertures are the same, that is:

Rt = Rr = R; Gt = Gr = G; At = Ar = A.

Since the effective area of an antenna is related to its gain by G = 4πA/λ², we may rewrite the radar equation as

Pr = PtG²λ²σ / [(4π)³R⁴] = PtA²σ / (4πλ²R⁴)

where two forms are given, one in terms of the antenna gain and the other in terms of the antenna area.
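The gain form of the equation translates directly into code. The sketch below (with purely hypothetical parameter values) evaluates the received power for a monostatic radar:

```python
import math

def received_power(pt_w: float, gain: float, wavelength_m: float,
                   sigma_m2: float, range_m: float) -> float:
    """Monostatic radar equation: Pr = Pt*G^2*lambda^2*sigma / ((4*pi)^3 * R^4)."""
    return (pt_w * gain**2 * wavelength_m**2 * sigma_m2
            / ((4 * math.pi)**3 * range_m**4))

# Hypothetical values purely for illustration (a C-band-like wavelength):
print(received_power(pt_w=1.0e3, gain=10**(30 / 10), wavelength_m=0.056,
                     sigma_m2=1.0, range_m=800e3))
```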
The targets scatter the energy transmitted by the radar in all directions. Radar records the energy scattered in the backward direction and is called backscatter. The intensity of each pixel in a radar image is proportional to the ratio between the density of energy scattered and the density of energy transmitted from the targets in the Earth's land surface (Waring et al. 1995).
The backscatter is measured as a complex number, which contains information about the amplitude (easily converted to σ° by specific equations) and the phase of the backscatter (Baltzer 2001). For SAR applications other than interferometry (a detailed explanation is given in Chapter 8) and polarimetry, however, the phase carries no useful information and can be discarded (Oliver and Quegan 1998). The information that remains when the phase is discarded is related to the amplitude of the backscatter. After linear detection and processing, amplitude SAR data are converted to an amplitude (or magnitude) image. After square-law detection and processing, amplitude SAR data are converted to an intensity (or power) image (Kingsley and Quegan 1992).
The energy backscattered is related to a variable referred to as the radar cross section (σ), the amount of transmitted power absorbed and reflected by the target. The backscatter coefficient (σ°) is the radar cross section per unit area (A) on the ground, σ° = σ/A (Jensen 2000).
σ° is a characteristic of the scattering behaviour of all targets within a pixel; it varies over several orders of magnitude and is therefore expressed as a logarithm, in decibel units (Waring et al. 1995).
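Converting a linear σ° value to decibels is a one-line calculation; a minimal sketch:

```python
import math

def to_db(sigma0_linear: float) -> float:
    """Express a linear backscatter coefficient in decibels: 10*log10(sigma0)."""
    return 10.0 * math.log10(sigma0_linear)

print(to_db(0.01))  # -20.0 dB
```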
Backscatter coefficient is a function of wavelength, polarization and incidence angle, as well as target characteristics such as roughness, geometry and dielectric properties. The targets will be distinguishable in radar images if their backscatter components are different and the radar spatial resolution is adequate to discriminate between targets (Trevett, 1986).
RADAR Reflector Surfaces
RADAR reflectors describe the geometric orientation of the target as it interacts with the radar pulse (its angle, orientation and size). The reflectors can be described in three groups: specular, diffuse and corner (double-bounce). In general, shrub and forest cover types and some crops are diffuse reflectors: the RADAR pulse is scattered at different angles and some of the energy is directed back to the receiver, so the signal received is neither high nor low. A specular reflector is a mostly flat, non-rough surface (calm water, grass field, bare soil, beach, etc.): the pulse hits the flat surface and most of the energy is directed out at a right angle away from the receiver, so little energy is recorded. A corner reflector usually involves two adjacent surfaces (double bounce) and is a combination of a specular surface and a vertical object (e.g. trees) or a surface with strong angles such as a building. The RADAR pulse hits the specular surface first, and the signal going out at a right angle (first bounce) then interacts with a vertical or angular surface (second bounce), directing most or nearly all of the energy directly back to the receiver. This produces the highest signal, so the object (a tree or building in this case) appears bright on a SAR image.
The electrical characteristics of targets also determine the intensity of backscatter. The complex dielectric constant is a measure of the electrical characteristics of objects, indicating the reflectivity and conductivity of various materials (Lillesand and Kiefer 2000). The moisture content within materials has a direct influence on the dielectric constant and reflectivity: the more liquid water within a material, the more reflectivity/backscatter is produced (Waring et al. 1995). Most materials have a dielectric constant ranging from 3 to 8 when dry, while water has a dielectric constant of around 80. Forest canopies are excellent reflectors and appear bright in the image because of the leaves' high moisture content, while dry soils absorb the radar signal and produce very low (or no) backscatter (Jensen 2000).
RADAR IMAGE CHARACTERISTICS
Speckle is due to the variation in backscatter within non-homogeneous cells and gives radar images a grainy appearance. It is caused by the high coherence of the illumination source, which produces phase interference from random scattering points; the narrow bandwidth, combined with surface roughness at the wavelength scale, produces the grainy pattern. Speckle is an unwanted, often dominating noise that degrades SAR image products. It arises from random constructive and destructive interference of the multiple scattering returns that occur within each resolution cell, and in itself it is an undesirable feature containing little information.
The salt-and-pepper texture of speckle is related to radar system parameters and the nature of the surface being imaged. The classical speckle model assumes the presence of a large number of independent point reflectors with similar scattering characteristics within the resolution cell. When illuminated by the SAR, each target contributes backscatter energy, which along with phase and power changes, is then coherently summed for all scatterers. This summation can be either high or low, depending on constructive or destructive interference. This statistical fluctuation (variance), or uncertainty, is associated with the brightness of each pixel in SAR imagery.
Although speckle carries some information about the imaging system and can be useful in describing image texture, identifying terrain features, and examining reflectivity and system transformation processes, it is essentially a form of noise which degrades the quality of an image and makes interpretation more difficult. Thus, it is generally desirable to reduce speckle prior to interpretation and analysis. Smoothing and filtering are commonly used to reduce speckle. The speckle effect is reduced by using multi-look images and also by averaging a number of samples, increasing the time-bandwidth product. Multi-look processing reduces speckle at the cost of spatial resolution. Filtering can further reduce the speckle inherent in the actual SAR image data, which tends to reduce statistical variance in conventional image classification schemes.
Speckle reduction can be achieved in two ways: multi-look processing and spatial filtering.
Multi-look processing refers to the division of the radar beam into several narrower sub-beams. Each sub-beam provides an independent "look" at the illuminated scene, as the name suggests. Each of these looks is also subject to speckle, but summing and averaging them together to form the final output image reduces the amount of speckle. Multi-looking is done during data acquisition.
Speckle reduction by spatial filtering is performed on the output image in a digital image analysis environment. Speckle-reduction filtering consists of moving a small window of a few pixels in dimension (e.g. 3x3 or 5x5) over each pixel in the image, applying a mathematical calculation to the pixel values within that window, and replacing the central pixel with the new value. The window is moved along both the row and column dimensions one pixel at a time, until the entire image has been covered. By calculating the average of a small window around each pixel, a smoothing effect is achieved and the visual appearance of the speckle is reduced, as sketched below.
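Here is a minimal sketch of such a moving-window mean filter, assuming numpy and scipy are available and using synthetic data in place of a real SAR intensity image:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
image = rng.exponential(scale=1.0, size=(100, 100))  # toy speckle-like intensities

smoothed = uniform_filter(image, size=3)  # 3x3 moving-window average

# Averaging suppresses the statistical variance associated with speckle:
print(image.var(), smoothed.var())
```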
Both multi-look processing and spatial filtering reduce speckle at the expense of resolution, since both smooth the image. Therefore, the amount of speckle reduction should be chosen based on the user's application and the information required. If fine detail and high resolution are required, then little or no multi-looking/spatial filtering should be done. If broad-scale interpretation and mapping are the application, then speckle reduction techniques may be more appropriate and acceptable. The speckle suppression techniques used in the present study are explained in Chapter 4.
Radar Image Distortions:
The Radar image obtained has distortions due to slant range geometry, topographic variations etc. These should be addressed and discussed for better understanding of SAR image processing.
Distortions in the radar image are due to the side-looking viewing geometry and to the fact that radar is a distance-measuring system. Some of these distortions are scale distortions and relief distortions.
Slant-range scale distortion occurs because the radar measures the distance to features in slant range rather than the true horizontal distance along the ground. This results in a varying image scale, moving from near to far range. For the same distance on the ground, the radar sees targets A1 and B1 as A2 and B2: the scale is shortened in the near range compared to the far range. Although targets A1 and B1 are the same size on the ground, their apparent dimensions in slant range (A2 and B2) are different, causing targets in the near range to appear compressed relative to the far range.
Relief Displacement: Radar images are subject to relief displacement. The displacement is one-dimensional, occurs perpendicular to the flight path, and is reversed, with targets being displaced towards, instead of away from, the sensor. Radar foreshortening and layover are two consequences that result from relief displacement; taller objects may appear closer than shorter objects at the same horizontal locations.
Tall objects and very steep terrain can also prevent the RADAR pulse from reaching the back side of the object; the geometric effects involved are called foreshortening, layover, and shadow.
Foreshortening: Foreshortening is a special case of elevation displacement. When the radar beam reaches the base of a tall feature tilted towards the radar (e.g. a mountain) before it reaches the top, foreshortening occurs. Because the radar measures distance in slant range, the slope from A to B appears compressed, and the length of the slope is represented incorrectly as A' to B'. Foreshortening depends on the angle of the mountaintop relative to the incidence angle of the radar beam, and is maximum when the beam is perpendicular to the slope, i.e., when the slope, base, and top are imaged simultaneously.
Layover: Layover occurs when the radar beam reaches the top of a tall feature (B) before it reaches the base (A). The return signal from the top of the feature is then received earlier than the signal from the bottom. As a result, the top of the feature is displaced towards the radar from its true position on the ground, and "lays over" the base of the feature (B' to A'). Layover effects on a radar image look very similar to those due to foreshortening. As with foreshortening, layover is most severe for small incidence angles, at the near range of a swath, and in mountainous terrain.
Radar Shadow: Radar shadow occurs when the radar beam is not able to illuminate the ground surface. Both foreshortening and layover can result in radar shadow. Shadows occur in the down-range dimension (i.e. towards the far range), behind vertical features or slopes with steep sides. Since the radar beam does not illuminate the surface, shadowed regions appear dark on an image, as no energy is available to be backscattered. As the incidence angle increases from near to far range, shadow effects grow as the radar beam looks more and more obliquely at the surface. Shadow effects appear on the side of hillsides facing away from the illumination (e.g. on the right side of hillsides illuminated from the left). The null area is called a RADAR shadow: there is no return signal, so the area and any lower-lying objects in this zone appear black on an image.
With this introduction to radar image characteristics, we now turn to radar remote sensing of forests and its applications.
Radar remote sensing of forests:
Microwave remote sensing is very useful in forestry because microwaves can penetrate the forest canopy, supporting forest monitoring and the study of ecosystem processes.
Microwave radiation penetrates significant distances into a vegetation canopy and interacts most strongly with structures (leaves, stems, etc.) on scales comparable with the radiation's wavelength. Depending on wavelength and polarization, radar can penetrate the canopy to different depths (figure 2.2) and can sense plant parts of different sizes, shapes, and water content. This ability of radar to probe the canopy, and the expectation of retrieving biophysical forest descriptions, underlie much of the international impetus for forest radar research (Sun et al., 1998).
Microwave interaction depends on the incidence angle and wavelength of the radar. Relationships have been demonstrated between C-, L- and P-band radar backscatter and forest biomass and growing stock volume (Tansey et al., 2004); hence, these bands are used to retrieve canopy biophysical parameters.
Radar remote sensing is also used to derive biomass estimates and support carbon accounting. Radar data additionally provide information about the terrain surface and vegetation canopies (Heri et al., 1999). Synthetic Aperture Radar (SAR) captures important characteristics of soil and vegetation cover, for instance inundation below closed canopies, fresh woody biomass of forested areas, freeze/thaw conditions of soil and vegetation, soil moisture and surface roughness in areas of low vegetation, and information on the orientation and structure of objects on the ground that reflect the incoming microwave radiation (Kasischke et al., 1997; Morrissey et al., 1996).
Trees and other vegetation are usually moderately rough on the scale of the longer radar wavelengths, so they appear as moderately bright features in the image. Tree structure affects the backscattering coefficient (Touzi et al., 2004), and backscattering and penetration vary within a forest canopy: at longer wavelengths the trunk contribution is very large, while at shorter wavelengths leaves play the major role in backscatter, depending on forest composition, tree density, and canopy thickness. The scattering properties are governed by the size, shape, and orientation of scatterers within the forest canopy (Floyd et al., 1998). The parameters important in forest inventory are tree density, stand age, and timber volume; these parameters are interrelated and depend on tree growth and stand development. By visual interpretation, different land cover classes can be discriminated using backscatter intensity and texture, and different forest types can be discriminated using polarimetric SAR data and image fusion techniques.
Changes in tone and texture are related to crown closure or foliage density; backscatter depends more on crown closure than on height (Floyd et al., 1998). Backscatter is also sensitive to the target's electrical properties, including water content.
Fig 2.2: Penetration of different wavelengths in the canopy
The magnitude of the scattering mechanisms and the importance of the different components depend on geometric factors (e.g., structural attributes of trees, canopy and soil surface roughness) and on the dielectric properties of the vegetation and underlying surface (e.g., moisture content of vegetation and soil) (Dobson et al., 1995). Wavelength, polarization and incidence angle of the radiation control these scattering mechanisms (Leckie and Ranson, 1998) and the final backscatter, which results from surface and/or volume scattering.
In X band, which is a short-wavelength band, the backscatter results mainly from the upper part of the canopy (Le Toan et al., 1992), i.e., the leaves, twigs and small branches (Leckie and Ranson, 1998). There is little penetration of the radiation into the canopy; therefore, volumetric scattering and the soil contribution to the final backscatter are weak. At C band, an intermediate wavelength, greater penetration of the radiation into the canopy enables further sources of scattering to be active, so there is some volume scattering; the main sources of scattering at C band are secondary branches and leaves (Ranson and Sun, 1994; Leckie and Ranson, 1998). At the longer L and P band wavelengths, the penetration of the radiation into the canopy is deeper and components from the lower parts of the canopy are included in the scattering (Le Toan et al., 1992), as well as the major woody biomass components (trunks and branches) (Dobson et al., 1992). Trunk-ground and crown-ground interactions are important at these wavelengths (Leckie and Ranson, 1998) and depend mainly on the canopy structure and openness; foliage and small branches act as attenuators of the radiation (Kasischke et al., 1997). The main components and scattering mechanisms of the total backscatter from forests comprise backscatter from (1) the crown surface and volume, (2) trunks, (3) direct return from the ground, (4) crown-ground scattering and (5) double-bounce scattering from trunk and ground (Leckie and Ranson, 1998). In simplified form, three contributions are often distinguished (see the sketch after this list):
- Direct backscatter from the canopy top.
- Multiple scattering and volume scattering within the vegetation.
- Direct backscatter from the land surface.
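One common way to combine these components quantitatively is the semi-empirical water-cloud model of Attema and Ulaby (1978), which is not described in the text but illustrates the idea: canopy and ground contributions are summed, with the ground term attenuated by the two-way canopy transmissivity. All coefficients and inputs below are hypothetical, not calibrated values.

```python
import math

def water_cloud_backscatter(theta_deg: float, canopy_water: float,
                            sigma0_soil: float,
                            A: float = 0.2, B: float = 0.4) -> float:
    """Total backscatter (linear power units) from the water-cloud model.

    sigma0 = A*cos(theta)*(1 - t2) + t2*sigma0_soil, where
    t2 = exp(-2*B*canopy_water/cos(theta)) is the two-way canopy
    transmissivity. A, B and all inputs here are illustrative only.
    """
    cos_t = math.cos(math.radians(theta_deg))
    t2 = math.exp(-2.0 * B * canopy_water / cos_t)
    return A * cos_t * (1.0 - t2) + t2 * sigma0_soil

# As canopy water content grows, volume scattering saturates while the
# ground contribution is progressively attenuated.
for w in (0.5, 2.0, 5.0):
    print(f"canopy water {w}: sigma0 = {water_cloud_backscatter(35.0, w, 0.05):.4f}")
```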
The combination of multiple channels and polarizations provides a greater advantage for estimating total biomass, owing to SAR's unique capability to distinguish woody from herbaceous biomass and to penetrate the vegetative canopy to detect underlying surface conditions (Harry Stern, 1998). Cross-polarization SAR gives accurate results in estimating aboveground biomass. Measurement of species diversity and biomass on both a spatial and temporal level may be possible through appropriately chosen remote sensing data types (Alex et al., 2003).
The intensity of radar backscatter is sensitive to forest parameters such as diameter at breast height (dbh) and mean tree height. Saturation is also a common problem in radar data. Saturation levels depend on the wavelength (i.e. band, such as C, L, P), the polarization (HH, HV, VV, VH), and the characteristics of vegetation stand structure and ground conditions. Variation in the allocation of biomass to different structures (stems, branches, leaves), and in their sizes, numbers and orientations, influences backscatter. Backscatter saturates at a biomass level related to the radar wavelength: C-band can measure forest biomass up to approximately 50 tons/ha, L-band up to 100 tons/ha, and P-band up to 200 tons/ha (Floyd et al., 1998).
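A trivial sketch using the approximate saturation levels quoted above (Floyd et al., 1998) to flag which bands still carry biomass information at a given level:

```python
# Approximate biomass saturation levels from the text (Floyd et al., 1998), in tons/ha.
SATURATION_T_PER_HA = {"C": 50, "L": 100, "P": 200}

def unsaturated_bands(biomass_t_per_ha: float) -> list[str]:
    """Bands whose backscatter has not yet saturated at this biomass level."""
    return [band for band, limit in SATURATION_T_PER_HA.items()
            if biomass_t_per_ha < limit]

print(unsaturated_bands(80))   # ['L', 'P']: C-band has already saturated
print(unsaturated_bands(250))  # []: all three bands are saturated
```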
Biomass estimation from SAR data has been under investigation for the last two decades. The main conclusions are that retrieval accuracy increases with increasing radar wavelength, and that interferometric SAR (InSAR) coherence has the potential to provide stem volume estimates in some cases comparable to in situ data. |
Students measure the perimeter of a square and create squares with specific perimeters in this lesson for the early-elementary classroom. The lesson includes templates for AppleWorks, Inspiration, and Kidspiration educational software.
Backyard Building - Area and Perimeter
Turn young mathematicians into landscape architects with this four-lesson series on area and perimeter. Beginning with a basic introduction to calculating perimeter and area using non-standard units of measurement, this instructional...
2nd - 4th Math CCSS: Adaptable
Perimeters of Squares and Rectangles
If you're looking for some perimeter practice problems for beginners, you've found them! Learners reference an example that takes them through the steps of finding the perimeter of a square or rectangle. Then, they complete nine on their...
2nd - 4th Math CCSS: Designed
Global Atmospheric Change: The Math Link
Change up the classroom atmosphere with this interdisciplinary resource. Following along with the children's book Mr. Slaptail's Curious Contraption, these math worksheets provide practice with a wide range of topics including simple...
K - 5th Math CCSS: Adaptable
Find the Perimeter of a Polygon with More Than 4 Sides
Perimeter moves beyond squares and rectangles in a lesson on irregular-shaped polygons. The lesson focuses on the strategy of counting and then adding side lengths. Several examples are slowly walked through, giving learners plenty of...
5 mins 2nd - 4th Math CCSS: Designed
Find Perimeter with Missing Side Lengths
In the fourth video of this 7-part series, learners find out how to measure perimeter of a square or rectangle with the aid of square tiles. A review of the difference between a square and rectangle begins the lesson. Then, examples walk...
3 mins 2nd - 4th Math CCSS: Designed |
A disc brake is a type of brake that uses calipers to squeeze pairs of pads against a disc or "rotor" to create friction. This action retards the rotation of a shaft, such as a vehicle axle, either to reduce its rotational speed or to hold it stationary. The energy of motion is converted into waste heat which must be dispersed.
- 1 Design
- 2 History
- 3 Brake disc
- 4 Calipers
- 5 Brake pads
- 6 Common problems
- 7 Patents
- 8 See also
- 9 References
- 10 External links
Development of disc-type brakes began in England in the 1890s. In 1902, the Lanchester Motor Company designed brakes that looked and operated in a similar way to a modern disc-brake system even though the disc was thin and a cable activated the brake pad. Other designs were not practical or widely available in cars for another 60 years. Successful application began in airplanes before World War II, and even the German Tiger tank was fitted with discs in 1942. After the war, technological progress began to arrive in the 1950s, leading to a critical demonstration of superiority at the 1953 24 Hours of Le Mans race, which required braking from high speeds several times per lap. The Jaguar racing team won, using disc brake equipped cars, with much of the credit being given to the brakes' superior performance over rivals equipped with drum brakes. Mass production began with the 1955 Citroën DS.
Compared to drum brakes, disc brakes offer better stopping performance because the disc is more readily cooled. As a consequence discs are less prone to the brake fade caused when brake components overheat. Disc brakes also recover more quickly from immersion (wet brakes are less effective than dry ones).
Most drum brake designs have at least one leading shoe, which gives a servo-effect. By contrast, a disc brake has no self-servo effect and its braking force is always proportional to the pressure placed on the brake pad by the braking system via any brake servo, braking pedal, or lever. This tends to give the driver better "feel" and helps to avoid impending lockup. Drums are also prone to "bell mouthing" and trap worn lining material within the assembly, both causes of various braking problems.
The disc is usually made of cast iron, but may in some cases be made of composites such as reinforced carbon–carbon or ceramic matrix composites. This is connected to the wheel and/or the axle. To retard the wheel, friction material in the form of brake pads, mounted on the brake caliper, is forced mechanically, hydraulically, pneumatically, or electromagnetically against both sides of the disc. Friction causes the disc and attached wheel to slow or stop.
History
The first caliper-type automobile disc brake was patented by Frederick William Lanchester in his Birmingham factory in 1902 and used successfully on Lanchester cars. However, the limited choice of metals in this period meant that he had to use copper as the braking medium acting on the disc. The poor state of the roads at this time, no more than dusty, rough tracks, meant that the copper wore quickly, making the system impractical.
Successful application began in airplanes and tanks before and during World War II. In Britain, the Daimler Company used disc brakes on its Daimler Armoured Car of 1939. The disc brakes, made by the Girling company, were necessary because in that four-wheel-drive (4x4) vehicle the epicyclic final drive was in the wheel hubs, leaving no room for conventional hub-mounted drum brakes.
At Germany's Argus Motoren, Hermann Klaue (1912-2001) patented disc brakes in 1940. Argus supplied wheels fitted with disc brakes, e.g. for the Arado Ar 96. The German Tiger I heavy tank was introduced in 1942 with a 55 cm Argus-Werke disc on each drive shaft.
The American Crosley Hot Shot is often given credit for the first production disc brakes. For six months in 1950, Crosley built a car with these brakes, then returned to drum brakes. Lack of sufficient research caused reliability problems, such as sticking and corrosion, especially in regions using salt on winter roads. Drum brake conversions for Hot Shots were quite popular. The Crosley disc was a Goodyear development, a caliper type with ventilated disc, originally designed for aircraft applications.
Chrysler developed a unique braking system, offered from 1949 to 1953. Instead of the disc with caliper squeezing on it, this system used twin expanding discs that rubbed against the inner surface of a cast-iron brake drum, which doubled as the brake housing. The discs spread apart to create friction against the inner drum surface through the action of standard wheel cylinders. Because of the expense, the brakes were only standard on the Chrysler Crown and the Town and Country Newport in 1950. They were optional, however, on other Chryslers, priced around $400, at a time when an entire Crosley Hot Shot retailed for $935. This four-wheel disc brake system was built by Auto Specialties Manufacturing Company (Ausco) of St. Joseph, Michigan, under patents of inventor H.L. Lambert, and was first tested on a 1939 Plymouth. Chrysler discs were "self energizing," in that some of the braking energy itself contributed to the braking effort. This was accomplished by small balls set into oval holes leading to the brake surface. When the disc made initial contact with the friction surface, the balls would be forced up the holes forcing the discs further apart and augmenting the braking energy. This made for lighter braking pressure than with calipers, avoided brake fade, promoted cooler running, and provided one-third more friction surface than standard Chrysler twelve-inch drums. Today's owners consider the Ausco-Lambert very reliable and powerful, but admit its grabbiness and sensitivity.
First use in racing
The first use of disc brakes in racing was in 1951, on one of the BRM Type 15s fitted with a Girling-produced set, a first for a Formula One car. Reliable caliper-type disc brakes, developed in the UK by Dunlop, later appeared in 1953 on the Jaguar C-Type racing car and helped the company to win the 1953 24 Hours of Le Mans. That same year, the aluminum-bodied Austin-Healey 100S, of which 50 were made, was the first car sold to the public to have disc brakes, fitted to all four wheels.
The first mass production use of the modern disc brake was in 1955, on the Citroën DS, which featured caliper-type front disc brakes among its many innovations. These discs were mounted inboard near the transmission, and were powered by the vehicle's central hydraulic system. This model went on to sell 1.5 million units over 20 years with the same brake setup.
The Jensen 541, with four-wheel disc brakes, followed in 1956. Triumph exhibited a 1956 TR3 with disc brakes to the public, but the first production cars with Girling front-disc brakes were made in September 1956.
Disc brakes were most popular on sports cars when they were first introduced, since these vehicles are more demanding about brake performance. Discs have now become the more common form in most passenger vehicles, although many (particularly light weight vehicles) use drum brakes on the rear wheels to keep costs and weight down as well as to simplify the provisions for a parking brake. As the front brakes perform most of the braking effort, this can be a reasonable compromise.
Many early implementations for automobiles located the brakes on the inboard side of the driveshaft, near the differential, while most brakes today are located inside the wheels. An inboard location reduces the unsprung weight and eliminates a source of heat transfer to the tires.
Historically, brake discs were manufactured throughout the world with a strong concentration in Europe and America. Between 1989 and 2005, manufacturing of brake discs migrated predominantly to China.
In the U.S.
After a 10-year hiatus, America built another production automobile equipped with disc brakes - the 1963 Studebaker Avanti (the Bendix system was optional on some of the other Studebaker models). Front disc brakes became standard equipment in 1965 on the Rambler Marlin (the Bendix units were optional on all American Motors' Rambler Classic and Ambassador models), as well as on the Ford Thunderbird, and the Lincoln Continental. A four-wheel disc brake system was also introduced in 1965 on the Chevrolet Corvette Stingray.
The first motorcycles to use disc brakes were racing vehicles. MV Agusta was the first to offer a front disc brake motorcycle to the public on a small scale in 1965, on their relatively expensive 600 touring motorcycle, using a mechanical brake linkage. In 1969 Honda introduced the more affordable CB750, which had a single hydraulically-actuated front disc brake (and a rear drum brake), and which sold in huge numbers. Disc brakes are now common on motorcycles, mopeds and even mountain bikes.
The brake disc (or rotor) is the rotating part of a wheel's disc brake assembly, against which the brake pads are applied. The material is typically gray iron, a form of cast iron. The design of the discs varies somewhat. Some are simply solid, but others are hollowed out with fins or vanes joining together the disc's two contact surfaces (usually included as part of a casting process). The weight and power of the vehicle determines the need for ventilated discs. The "ventilated" disc design helps to dissipate the generated heat and is commonly used on the more-heavily loaded front discs.
Discs for motorcycles, bicycles, and many cars often have holes or slots cut through the disc. This is done for better heat dissipation, to aid surface-water dispersal, to reduce noise, to reduce mass, or for marketing cosmetics.
Slotted discs have shallow channels machined into the disc to aid in removing dust and gas. Slotting is the preferred method in most racing environments to remove gas and water and to deglaze brake pads. Some discs are both drilled and slotted. Slotted discs are generally not used on standard vehicles because they quickly wear down brake pads; however, this removal of material is beneficial to race vehicles since it keeps the pads soft and avoids vitrification of their surfaces. On the road, drilled or slotted discs still have a positive effect in wet conditions because the holes or slots prevent a film of water building up between the disc and the pads.
A floating disc is splined, rather than rigidly fixed, to the hub as a way of avoiding thermal stress, cracking and warping. This allows the disc to expand in a controlled symmetrical way and with less unwanted heat transfer to the hub.
Motorcycles and scooters
Lambretta introduced the first high-volume production use of a single, floating, front disc brake, enclosed in a ventilated cast alloy hub and actuated by cable, on the 1962 TV175, followed by the range-topping GT200 in 1964. The 1969 Honda CB750 introduced hydraulic disc brakes on a large scale to the wide motorcycle public, following the lesser known 1965 MV Agusta 600, which had cable-operated mechanical actuation.
Unlike car disc brakes that are buried within the wheel, bike disc brakes are in the airstream and have optimum cooling. Although cast iron discs have a porous surface which gives superior braking performance, such discs rust in the rain and become unsightly. Accordingly, motorcycle discs are usually stainless steel, drilled, slotted or wavy to disperse rain water. Modern motorcycle discs tend to have a floating design whereby the disc "floats" on bobbins and can move slightly, allowing better disc centering with a fixed caliper. A floating disc also avoids disc warping and reduces heat transfer to the wheel hub. Calipers have evolved from simple single-piston units to two-, four- and even six-piston items. Compared to cars, motorcycles have a higher center of mass:wheelbase ratio, so they experience more weight transfer when braking. Front brakes absorb most of the braking forces, while the rear brake serves mainly to balance the motorcycle during braking. Modern sport bikes typically have twin large front discs, with a much smaller single rear disc. Bikes that are particularly fast or heavy may have vented discs.
Early disc brakes (such as on the early Honda fours and the Norton Commando) sited the calipers on top of the disc, ahead of the fork slider. Although this gave the brake pads better cooling, it is now almost universal practice to site the caliper behind the slider (to reduce the angular momentum of the fork assembly). Rear disc calipers may be mounted above (e.g. BMW R1100S) or below (e.g. Yamaha TRX850) the swinging arm: a low mount is marginally better for CG purposes, while an upper siting keeps the caliper cleaner and better-protected from road obstacles.
One problem with motorcycle disc brakes is that when a bike gets into a violent tank-slapper (high-speed oscillation of the front wheel), the brake pads in the calipers are forced away from the discs, so when the rider applies the brake lever the caliper pistons push the pads towards the discs without actually making contact. The rider immediately brakes harder, which pushes the pads onto the discs much more aggressively than during normal braking. An example is the Michele Pirro incident at Mugello, Italy, on 1 June 2018. At least one manufacturer has developed a system to counter the pads being forced away.
A modern development, particularly on inverted ("upside down", or "USD") forks is the radially mounted caliper. Although these are fashionable, there is no evidence that they improve braking performance, nor do they add to the stiffness of the fork. (Lacking the option of a fork brace, USD forks may be best stiffened by an oversize front axle).
Mountain bike disc brakes may range from simple, mechanical (cable) systems to expensive and powerful multi-piston hydraulic disc systems, commonly used on downhill racing bikes. Improved technology has seen the creation of vented discs for use on mountain bikes, similar to those on cars, introduced to help avoid heat fade on fast alpine descents. Although less common, discs are also used on road bicycles for all-weather cycling with predictable braking, although drums are sometimes preferred because they are harder to damage in crowded parking, where discs can be bent. Most bicycle brake discs are made of steel; stainless steel is preferred due to its anti-rust properties. Discs are thin, often about 2 mm. Some use a two-piece floating disc style, others use a floating caliper, others use pads that float in the caliper, and some use one moving pad that makes the caliper slide on its mounts, pulling the other pad into contact with the disc. Because the "motor" is small, an uncommon feature of bicycle brakes is that the pads retract to eliminate residual drag when the brake is released; in contrast, most other brakes drag the pads lightly when released so as to minimise initial operational travel.
Disc brakes are increasingly used on very large and heavy road vehicles, where previously large drum brakes were nearly universal. One reason is that the disc's lack of self-assist makes brake force much more predictable, so peak brake force can be raised without more risk of braking-induced steering or jackknife on articulated vehicles. Another is that disc brakes fade less when hot, and in a heavy vehicle air drag, rolling drag and engine braking are small parts of total braking force, so the brakes are used harder than on lighter vehicles, and drum brake fade can occur in a single stop. For these reasons, a heavy truck with disc brakes can stop in about 120% of the distance of a passenger car, but with drums stopping takes about 150% of that distance. In Europe, stopping distance regulations essentially require disc brakes for heavy vehicles. In the U.S., drums are allowed and are typically preferred for their lower purchase price, despite higher total lifetime cost and more frequent service intervals.
Rail and aircraft
Still-larger discs are used for railroad cars, trams and some airplanes. Passenger rail cars and light rail vehicles often use disc brakes outboard of the wheels, which helps ensure a free flow of cooling air. Some modern passenger rail cars, such as the Amfleet II cars, use inboard disc brakes. This reduces wear from debris and provides protection from rain and snow, which would make the discs slippery and unreliable. However, there is still plenty of cooling for reliable operation. Some airplanes have the brake mounted with very little cooling, and the brake gets quite hot in a stop. This is acceptable because the maximum braking energy is very predictable and there is sufficient time for cooling afterwards. Should the braking energy exceed the maximum, for example during an emergency during take-off, aircraft wheels can be fitted with a fusible plug to prevent the tyre bursting; this is a milestone test in aircraft development.
For automotive use, disc brake discs are commonly made of grey iron. The SAE maintains a specification for the manufacture of grey iron for various applications. For normal car and light-truck applications, SAE specification J431 G3000 (superseded to G10) dictates the correct range of hardness, chemical composition, tensile strength, and other properties necessary for the intended use. Some racing cars and airplanes use brakes with carbon fiber discs and carbon fiber pads to reduce weight. Wear rates tend to be high, and braking may be poor or grabby until the brake is hot. For this reason, many performance-oriented vehicles or trucks used for heavy towing are equipped with the slotted or vented rotors. Such upgrades eliminate excessive heat and remove contaminants that may interfere with gripping power. Usually performance rotors are installed as an aftermarket upgrade or come as a part of a performance package from the factory. The main drawback of vented and slotted rotors is high wear.
In racing and very-high-performance road cars, other disc materials have been employed. Reinforced carbon discs and pads inspired by aircraft braking systems such as those used on Concorde were introduced in Formula One by Brabham in conjunction with Dunlop in 1976. Carbon–carbon braking is now used in most top-level motorsport worldwide, reducing unsprung weight, giving better frictional performance and improved structural properties at high temperatures, compared to cast iron. Carbon brakes have occasionally been applied to road cars, by the French Venturi sports car manufacturer in the mid 1990s for example, but need to reach a very high operating temperature before becoming truly effective and so are not well suited to road use. The extreme heat generated in these systems is visible during night racing, especially at shorter tracks. It is not uncommon to see the brake discs glowing red during use.
Ceramic discs are used in some high-performance cars and heavy vehicles.
The first development of the modern ceramic brake was made by British engineers for TGV applications in 1988. The objective was to reduce weight, the number of brakes per axle, as well as provide stable friction from high speeds and all temperatures. The result was a carbon-fibre-reinforced ceramic process which is now used in various forms for automotive, railway, and aircraft brake applications.
Due to the high heat tolerance and mechanical strength of ceramic composite discs, they are often used on exotic vehicles where the cost is not prohibitive. They are also found in industrial applications where the ceramic disc's light weight and low-maintenance properties justify the cost. Composite brakes can withstand temperatures that would damage steel discs.
Porsche's Composite Ceramic Brakes (PCCB) are siliconized carbon fiber, with high temperature capability, a 50% weight reduction over iron discs (hence reducing the vehicle's unsprung weight), a significant reduction in dust generation, substantially extended maintenance intervals, and enhanced durability in corrosive environments. Found on some of their more expensive models, it is also an optional brake for all street Porsches at added expense. They can be recognized by the bright yellow paintwork on the aluminum six-piston calipers. The discs are internally vented much like cast-iron ones, and cross-drilled.
In automotive applications, the piston seal has a square cross section, also known as a square-cut seal.
As the piston moves in and out, the seal drags and stretches on the piston, causing the seal to twist. The seal distorts approximately 1/10 of a millimeter. The piston is allowed to move out freely, but the slight amount of drag caused by the seal stops the piston from fully retracting to its previous position when the brakes are released, and so takes up the slack caused by the wear of the brake pads, eliminating the need for return springs.
In some rear disc calipers, the parking brake activates a mechanism inside the caliper that performs some of the same function.
Disc damage modes
Discs are usually damaged in one of four ways: scarring, cracking, warping or excessive rusting. Service shops will sometimes respond to any disc problem by changing the discs entirely, mainly where the cost of a new disc may actually be lower than the cost of the labour to resurface the old one. Mechanically this is unnecessary unless the discs have worn below the manufacturer's minimum recommended thickness, which would make them unsafe to use, or vane rusting is severe (ventilated discs only). Most leading vehicle manufacturers recommend brake disc skimming (US: turning) as a solution for lateral run-out, vibration issues and brake noises. The machining process is performed on a brake lathe, which removes a very thin layer from the disc surface to clean off minor damage and restore uniform thickness. Machining the disc as necessary maximises the mileage from the discs currently on the vehicle.
Run-out is measured using a dial indicator on a fixed rigid base, with the tip perpendicular to the brake disc's face. It is typically measured about 1⁄2 in (12.7 mm) from the outside diameter of the disc. The disc is spun. The difference between minimum and maximum value on the dial is called lateral run-out. Typical hub/disc assembly run-out specifications for passenger vehicles are around 0.002 in (0.0508 mm). Runout can be caused either by deformation of the disc itself or by runout in the underlying wheel hub face or by contamination between the disc surface and the underlying hub mounting surface. Determining the root cause of the indicator displacement (lateral runout) requires disassembly of the disc from the hub. Disc face runout due to hub face runout or contamination will typically have a period of 1 minimum and 1 maximum per revolution of the brake disc.
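As a worked example of the procedure just described, the sketch below (with hypothetical readings) reduces one revolution of dial-indicator values to a lateral run-out figure and compares it against the typical passenger-vehicle specification quoted above:

```python
# Hypothetical dial-indicator readings (inches) taken at intervals around
# one full revolution of the disc, about 1/2 in from the outer edge.
readings = [0.0003, 0.0011, 0.0018, 0.0024, 0.0017, 0.0009, 0.0004, 0.0001]

lateral_runout = max(readings) - min(readings)
SPEC_IN = 0.002  # typical hub/disc assembly limit for passenger vehicles

print(f"lateral run-out: {lateral_runout:.4f} in")
print("within spec" if lateral_runout <= SPEC_IN
      else "out of spec: find root cause (disc, hub face, or contamination)")
```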
Discs can be machined to eliminate thickness variation and lateral run-out. Machining can be done in situ (on-car) or off-car (bench lathe). Both methods will eliminate thickness variation. Machining on-car with proper equipment can also eliminate lateral run-out due to hub-face non-perpendicularity.
Incorrect fitting can distort (warp) discs. The disc's retaining bolts (or the wheel/lug nuts, if the disc is sandwiched in place by the wheel) must be tightened progressively and evenly. The use of air tools to fasten lug nuts can be bad practice, unless a torque wrench is used for final tightening. The vehicle manual will indicate the proper pattern for tightening as well as a torque rating for the bolts. Lug nuts should never be tightened in a circle. Some vehicles are sensitive to the force the bolts apply and tightening should be done with a torque wrench.
Uneven pad transfer is often confused with disc warping. The majority of brake discs diagnosed as "warped" are actually the result of uneven transfer of pad material, which can lead to thickness variation of the disc. When the thicker section of the disc passes between the pads, the pads move apart and the brake pedal rises slightly; this is pedal pulsation. The thickness variation can be felt by the driver when it is approximately 0.17 mm (0.0067 in) or greater (on automobile discs).
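In the same spirit, a minimal check of disc thickness variation against the roughly 0.17 mm perception threshold mentioned above (the measurements are hypothetical):

```python
# Hypothetical micrometer readings (mm) of disc thickness at eight points
# spaced around the friction surface.
thickness_mm = [22.02, 22.05, 22.11, 22.23, 22.19, 22.10, 22.04, 22.01]

dtv = max(thickness_mm) - min(thickness_mm)
FELT_THRESHOLD_MM = 0.17  # pulsation typically felt above this (per the text)

print(f"thickness variation: {dtv:.2f} mm")
print("driver will likely feel pedal pulsation" if dtv >= FELT_THRESHOLD_MM
      else "below the typical perception threshold")
```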
Thickness variation has many causes, but there are three primary mechanisms which contribute to the propagation of disc thickness variations. The first is improper selection of brake pads. Pads which are effective at low temperatures, such as when braking for the first time in cold weather, are often made of materials which decompose unevenly at higher temperatures. This uneven decomposition results in uneven deposition of material onto the brake disc. Another cause of uneven material transfer is improper break-in of a pad/disc combination. For proper break-in, the disc surface should be refreshed (either by machining the contact surface or by replacing the disc) every time the pads are changed. Once this is done, the brakes are heavily applied multiple times in succession. This creates a smooth, even interface between the pad and the disc. When this is not done properly the brake pads will see an uneven distribution of stress and heat, resulting in an uneven, seemingly random, deposition of pad material. The third primary mechanism of uneven pad material transfer is "pad imprinting." This occurs when the brake pads are heated to the point that the material begins to break down and transfer to the disc. In a properly broken-in brake system (with properly selected pads), this transfer is natural and actually is a major contributor to the braking force generated by the brake pads. However, if the vehicle comes to a stop and the driver continues to apply the brakes, the pads will deposit a layer of material in the shape of the brake pad. This small thickness variation can begin the cycle of uneven pad transfer.
Once the disc has some level of variation in thickness, uneven pad deposition can accelerate, sometimes resulting in changes to the crystal structure of the metal that composes the disc. As the brakes are applied, the pads slide over the varying disc surface. As the pads pass by the thicker section of the disc, they are forced outwards. The driver's foot on the brake pedal naturally resists this change, and thus more force is applied to the pads. The result is that the thicker sections see higher levels of stress. This causes uneven heating of the surface of the disc, which causes two major issues. As the brake disc heats unevenly it also expands unevenly. The thicker sections of the disc expand more than the thinner sections because they absorb more heat, and thus the difference in thickness is magnified. Also, the uneven distribution of heat results in further uneven transfer of pad material. The result is that the thicker-hotter sections receive even more pad material than the thinner-cooler sections, contributing to a further increase in the variation in the disc's thickness. In extreme situations, this uneven heating can cause the crystal structure of the disc material to change. When the hotter sections of the discs reach extremely high temperatures (1,200–1,300 °F or 649–704 °C), the metal can undergo a phase transformation and the carbon which is dissolved in the steel can precipitate out to form carbon-heavy carbide regions known as cementite. This iron carbide is very different from the cast iron the rest of the disc is composed of. It is extremely hard, brittle, and does not absorb heat well. After cementite is formed, the integrity of the disc is compromised. Even if the disc surface is machined, the cementite within the disc will not wear or absorb heat at the same rate as the cast iron surrounding it, causing the uneven thickness and uneven heating characteristics of the disc to return.
Scarring (US: Scoring) can occur if brake pads are not changed promptly when they reach the end of their service life and are considered worn out. Once enough of the friction material has worn away, the pad's steel backing plate (for glued pads) or the pad retainer rivets (for riveted pads) will bear upon the disc's wear surface, reducing braking power and making scratches on the disc. Generally a moderately scarred / scored disc, which operated satisfactorily with existing brake pads, will be equally usable with new pads. If the scarring is deeper but not excessive, it can be repaired by machining off a layer of the disc's surface. This can only be done a limited number of times as the disc has a minimum rated safe thickness. The minimum thickness value is typically cast into the disc during manufacturing on the hub or the edge of the disc. In Pennsylvania, which has one of the most rigorous auto safety inspection programs in North America, an automotive disc cannot pass safety inspection if any scoring is deeper than .015 inches (0.38 mm), and must be replaced if machining will reduce the disc below its minimum safe thickness.
To prevent scarring, it is prudent to periodically inspect the brake pads for wear. A tire rotation is a logical time for inspection, since rotation must be performed regularly based on vehicle operation time and all wheels must be removed, allowing ready visual access to the brake pads. Some types of alloy wheels and brake arrangements will provide enough open space to view the pads without removing the wheel. When practical, pads that are near the wear-out point should be replaced immediately, as complete wear out leads to scarring damage and unsafe braking. Many disc brake pads will include some sort of soft steel spring or drag tab as part of the pad assembly, which drags on the disc when the pad is nearly worn out. This produces a moderately loud squealing noise, alerting the driver that service is required. This will not normally scar the disc if the brakes are serviced promptly. A set of pads can be considered for replacement if the thickness of the pad material is the same or less than the thickness of the backing steel. In Pennsylvania, the standard is 1/32".
Cracking is limited mostly to drilled discs, which may develop small cracks around edges of holes drilled near the edge of the disc due to the disc's uneven rate of expansion in severe duty environments. Manufacturers that use drilled discs as OEM typically do so for two reasons: appearance, if they determine that the average owner of the vehicle model will prefer the look while not overly stressing the hardware; or as a function of reducing the unsprung weight of the brake assembly, with the engineering assumption that enough brake disc mass remains to absorb racing temperatures and stresses. A brake disc is a heat sink, but the loss of heat sink mass may be balanced by increased surface area to radiate away heat. Small hairline cracks may appear in any cross drilled metal disc as a normal wear mechanism, but in the severe case the disc will fail catastrophically. No repair is possible for the cracks, and if cracking becomes severe, the disc must be replaced. These cracks occur due to the phenomenon of low cycle fatigue as a result of repeated hard braking.
The discs are commonly made from cast iron and a certain amount of surface rust is normal. The disc contact area for the brake pads will be kept clean by regular use, but a vehicle that is stored for an extended period can develop significant rust in the contact area that may reduce braking power for a time until the rusted layer is worn off again. Rusting can also lead to disc warping when brakes are re-activated after storage because of differential heating between unrusted areas left covered by pads and rust around the majority of the disc area surface. Over time, vented brake discs may develop severe rust corrosion inside the ventilation slots, compromising the strength of the structure and needing replacement.
Calipers are of two types, floating or fixed. A fixed caliper does not move relative to the disc and is thus less tolerant of disc imperfections. It uses one or more pairs of opposing pistons to clamp from each side of the disc, and is more complex and expensive than a floating caliper.
A floating caliper (also called a "sliding caliper") moves with respect to the disc, along a line parallel to the axis of rotation of the disc; a piston on one side of the disc pushes the inner brake pad until it makes contact with the braking surface, then pulls the caliper body with the outer brake pad so pressure is applied to both sides of the disc. Floating caliper (single piston) designs are subject to sticking failure, caused by dirt or corrosion entering at least one mounting mechanism and stopping its normal movement. This can lead to the caliper's pads rubbing on the disc when the brake is not engaged or engaging it at an angle. Sticking can result from infrequent vehicle use, failure of a seal or rubber protection boot allowing debris entry, dry-out of the grease in the mounting mechanism and subsequent moisture incursion leading to corrosion, or some combination of these factors. Consequences may include reduced fuel efficiency, extreme heating of the disc or excessive wear on the affected pad. A sticking front caliper may also cause steering vibration.
Another type of floating caliper is a swinging caliper. Instead of a pair of horizontal bolts that allow the caliper to move straight in and out relative to the car body, a swinging caliper utilizes a single, vertical pivot bolt located somewhere behind the axle centerline. When the driver presses the brakes, the piston pushes on the inner brake pad and rotates the whole caliper inward, when viewed from the top. Because the swinging caliper's piston angle changes relative to the disc, this design uses wedge-shaped pads that are narrower in the rear on the outside and narrower in the front on the inside.
Pistons and cylinders
The most common caliper design uses a single hydraulically actuated piston within a cylinder, although high performance brakes use as many as twelve. Modern cars use different hydraulic circuits to actuate the brakes on each set of wheels as a safety measure. The hydraulic design also helps multiply braking force. The number of pistons in a caliper is often referred to as the number of 'pots', so if a vehicle has 'six pot' calipers it means that each caliper houses six pistons.
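The force multiplication follows from Pascal's law: the line pressure set at the master cylinder acts on the larger total piston area in the caliper. A back-of-the-envelope sketch with entirely hypothetical dimensions:

```python
import math

def clamp_force_n(pedal_force_n: float, pedal_ratio: float,
                  master_bore_mm: float, piston_bore_mm: float,
                  pistons_per_side: int) -> float:
    """One-side clamp force on the disc via Pascal's law (all values hypothetical)."""
    master_area = math.pi * (master_bore_mm / 2.0) ** 2   # mm^2
    piston_area = math.pi * (piston_bore_mm / 2.0) ** 2   # mm^2
    line_pressure = pedal_force_n * pedal_ratio / master_area  # N/mm^2 (MPa)
    return line_pressure * piston_area * pistons_per_side

# 300 N at the pedal, 4:1 pedal ratio, 20 mm master bore, a 'four pot'
# caliper with two 40 mm pistons per side: ~9,600 N of clamp force.
print(f"{clamp_force_n(300, 4.0, 20, 40, 2):,.0f} N")
```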
Brake failure can result from failure of the piston to retract, which is usually a consequence of not operating the vehicle during prolonged storage outdoors in adverse conditions. On high-mileage vehicles, the piston seals may leak, which must be promptly corrected.
Brake pads are designed for high friction, with pad material transferring to and embedding in the disc during the bedding process while wearing evenly. Friction can be divided into two components: adhesive and abrasive.
Depending on the properties of the material of both the pad and the disc and the configuration and the usage, pad and disc wear rates will vary considerably. The properties that determine material wear involve trade-offs between performance and longevity.
The brake pads must usually be replaced regularly (depending on pad material and driving style), and some are equipped with a mechanism that alerts drivers that replacement is needed, such as a thin piece of soft metal that rubs against the disc when the pads are too thin, causing the brakes to squeal; a soft metal tab embedded in the pad material that closes an electric circuit and lights a warning light when the brake pad gets thin; or an electronic sensor.
Generally road-going vehicles have two brake pads per caliper, while up to six are installed on each racing caliper, with varying frictional properties in a staggered pattern for optimum performance.
Early brake pads (and linings) contained asbestos, producing dust which should not be inhaled. Although newer pads can be made of ceramics, Kevlar, and other plastics, inhalation of brake dust should still be avoided regardless of material.
Sometimes a loud noise or high-pitched squeal occurs when the brakes are applied. Most brake squeal is produced by vibration (resonance instability) of the brake components, especially the pads and discs (known as force-coupled excitation). This type of squeal should not negatively affect brake stopping performance. Countermeasures include adding chamfers to the pads at the contact points between the caliper pistons and the pads, bonding insulators (damping material) to the pad backplate, fitting brake shims between the brake pad and pistons, etc. All should be coated with an extremely high-temperature, high-solids lubricant to help reduce squeal. This allows the metal-to-metal parts to move independently of each other, eliminating the buildup of energy that can create a frequency heard as brake squeal, groan, or growl. Some pads will inherently squeal more, depending on the type of pad and its usage case: pads rated to withstand very high temperatures for extended periods tend to produce high amounts of friction, leading to more noise during brake application.
Cold weather combined with high early-morning humidity (dew) often worsens brake squeal, although the squeal generally stops when the lining reaches regular operating temperatures. This more strongly affects pads meant to be used at higher temperatures. Dust on the brakes may also cause squeal, and commercial brake-cleaning products are designed to remove dirt and other contaminants. Pads without a proper amount of transfer material could also squeal; this can be remedied by bedding or re-bedding the brake pads to the brake discs.
Some lining wear indicators, located either as a semi-metallic layer within the brake pad material or with an external "sensor", are also designed to squeal when the lining is due for replacement. The typical external sensor is fundamentally different from the noises described above (when the brakes are applied) because the wear sensor noise typically occurs when the brakes are not used. The wear sensor may only create squeal under braking when it first begins to indicate wear but is still a fundamentally different sound and pitch.
Judder or shimmy
The judder phenomenon can be classified into two distinct subgroups: hot (or thermal), or cold judder.
Hot judder is usually produced as a result of longer, more moderate braking from high speed where the vehicle does not come to a complete stop. It commonly occurs when a motorist decelerates from speeds of around 120 km/h (74.6 mph) to about 60 km/h (37.3 mph), which results in severe vibrations being transmitted to the driver. These vibrations are the result of uneven thermal distributions, or hot spots. Hot spots are classified as concentrated thermal regions that alternate between both sides of a disc that distort it in such a way that produces a sinusoidal waviness around its edges. Once the brake pads (friction material/brake lining) come in contact with the sinusoidal surface during braking, severe vibrations are induced, and can produce hazardous conditions for the person driving the vehicle.
Cold judder, on the other hand, is the result of uneven disc wear patterns or disc thickness variation (DTV). These variations in the disc surface are usually the result of extensive vehicle road usage. DTV is usually attributed to the following causes: waviness and roughness of the disc surface, misalignment of axis (runout), elastic deflection, wear, and friction material transfer. Either type could potentially be fixed by ensuring a clean mounting surface on either side of the brake disc between the wheel hub and brake disc hub before fitting, and by guarding against pad imprinting after extended heavy usage (i.e. not leaving the brake pedal heavily depressed at the end of a hard run). Sometimes a bed-in procedure can clean and minimize DTV and lay down a new, even transfer layer between the pad and brake disc. However, it will not eliminate hot spots or excessive run-out.
When braking force is applied, the act of abrasive friction between the brake pad and the disc wears both the disc and pad away. The brake dust that is seen deposited on wheels, calipers and other braking system components consists mostly of disc material. Brake dust can damage the finish of most wheels if not washed off. Generally, a brake pad that aggressively abrades more disc material away, such as metallic pads, will create more brake dust. Some higher performing pads for track use or towing use may wear away much quicker than a typical pad causing additional dust from heightened brake disc wear and brake pad wear.
- GB 190226407 Lanchester Frederick William Improvements in the Brake Mechanism of Power-propelled Road Vehicles 1903-10-15
- US 1721370 Boughton Edward Bishop Brake for use on vehicles 1929-07-16
- GB 365069 Rubury John Meredith Improvements in control gear for hydraulically operated devices and particularly brakes for vehicles 1932-01-06
- GB 377478 Hall Frederick Harold Improvements in wheel cylinders for hydraulic brakes 1932-07-28
- US 1954534 Norton Raymond J. Brake 1934-04-10
- US 1959049 Buus Niels Peter Valdemar Friction Brake 1934-05-15
- US 2028488 Avery William Leicester Brake 1936-02-21
- US 2084216 Poage Robert A. and Poage Marlin Z. V-type brake for motor vehicles 1937-06-15
- US 2140752 La Brie Brake 1938-12-20
- DE 695921 Borgward Carl Friedrich Wilhelm Antriebsvorrichtung mit hydraulischem Gestaenge... 1940-09-06
- US 2366093 Forbes Joseph A. Brake 1944-12-26
- US 2375855 Lambert Homer T. Multiple disk brake 1945-05-15
- US 2405219 Lambert Homer T. Disk brake 1946-08-06
- US 2416091 Fitch Fluid pressure control mechanism 1947-02-12
- US 2466990 Johnson Wade C, Trishman Harry A, Stratton Edgar H. Single disk brake 1949-04-12
- US 2485032 Bryant Brake apparatus 1949-10-08
- US 2544849 Martin Hydraulic brake automatic adjuster 1951-03-13
- US 2591793 Dubois Device for adjusting the return travel of fluid actuated means 1952-04-08
- US 2746575 Kinchin Disc brakes for road and other vehicles 1956-05-22
- ES 195467Y Sanglas Freno de disco para motociclos 1975-07-16
- Deaton, Jamie Page (11 November 2008). "How Brake Rotors Work". HowStuffWorks. Retrieved 26 November 2017.
- "disc brake". Merriam-Webster Dictionary. 16 November 2017. Retrieved 26 November 2017.
- Lentinello, Richard (April 2011). "The first car with disc brakes really was . ." Hemmings Sports & Exotic Car. Retrieved 26 November 2017.
- https://www.google.gg/patents/US2323052 Disk brake for use in motor cars, airplanes, and the like US 2323052 A
- "Lexikon der Wehrmacht - Ar 96". www.lexikon-der-wehrmacht.de. Retrieved 15 April 2018.
- "Tiger I Information Center - Transmission and Steering". www.alanhamby.com. Retrieved 15 April 2018.
- Langworth, Richard M. (1994). Chrysler and Imperial: The Postwar Years. Motorbooks International. ISBN 0-87938-034-9.
- Fearnley, Paul (13 June 2013). "Le Mans 1953: Jaguar's gigantic leap - History, Le Mans". Motor Sport Magazine. Retrieved 14 December 2015.
- Lawrence, Mike (1991). A to Z of Sports Cars 1945–1990. Bay View Books. ISBN 978-1-870979-81-8.
- The Motor, 17 October 1956.
- Lentinello, Richard (April 2011). "The first car with disc brakes really was . ." Hemmings Sports & Exotic Car. Retrieved 5 May 2018.
- "Why do we use disc brakes in front and drum brakes in rear?". Quora. August 2013. Retrieved 14 December 2015.
- "The Avanti — Born in Palm Springs". Point Happy Interactive. Retrieved 14 December 2015.
- Auto Editors of Consumer Guide (17 December 2007). "1963–1964 Studebaker Avanti". auto.howstuffworks.com. Retrieved 14 December 2015.
- Auto Editors of Consumer Guide (26 October 2007). "Introduction to the 1965–1967 AMC Marlin". auto.howstuffworks.com. Retrieved 14 December 2015.
- "What's new at American Motors". Popular Science. 185 (4): 90–91. October 1964. Retrieved 14 December 2015.
- Long, Brian (2007). The Book of the Ford Thunderbird from 1954. Veloce Publishing. p. 104. ISBN 978-1-904788-47-8. Retrieved 11 November 2010.
- Auto Editors of Consumer Guide (27 November 2007). "1964–1965 Lincoln Continental". auto.howstuffworks.com. Retrieved 14 December 2015.
- Auto Editors of Consumer Guide (14 December 2015). "1965 Corvette". auto.howstuffworks.com. Retrieved 14 December 2015.
- Frank, Aaron (2003). Honda Motorcycles. MotorBooks/MBI. p. 80. ISBN 0-7603-1077-7.
- Ihm, Mark. "Introduction to Gray Cast Iron Brake Rotor Metallurgy" (PDF). SAE. Retrieved 14 December 2015.
- Motorcycle History: Brakes, Ride Apart, 8 December 2013. Retrieved 2 June 2016
- Motor Cycle, 10 September 1964, p.7 Lambretta centrespread advert. "G.T. 200 The sportsman's choice. Top speed nearly 70 mph. Fast yet one of the safest scooters ever - front disc and rear drum brakes make the GT a real smooth stopper". Accessed and added 2015-02-19
- Motor Cycle, 25 November 1965, pp.748-751. Lambretta servicing hints. "Disc Brake Adjustment. Adjustment of the disc brake on the GT models is quite simple...Remove one of the plastic grilles from the openings let into the left side of the hub. Slacken the locknut and, with a hexagon key, turn the adjuster clockwise until the wheel will not revolve. Back off for three-quarters of a turn and tighten the locknut". Accessed and added 2015-02-23
- Motorcycle Mechanics, April 1969, UK Lambretta Concessionaires advert, p.19. "...Lambretta have been fitting disc brakes on their most powerful models for over five years". Accessed and added 2015-02-20
- Motorcycle Mechanics, October 1969, pp.45-47. Slowdown Lowdown by John Robinson "...the Lambretta disc brake has only one pad operated by the cable, the other being fixed. The first pad pushes the disc on to the second pad". Accessed and added 2015-02-21
- Glimmerveen, John. "Disc Brakes". About.com Autos. Retrieved 15 February 2015.
- Kresnicka, Michael. "Disc Brake Tech". motorcycle.com. Retrieved 15 February 2015.
- Sutherland, Howard (2004). Sutherland's Handbook for Bicycle Mechanics Chapter 11 - Brakes (PDF) (7th ed.). Sutherland's Bicycle Shop Aids. p. 13. Archived from the original (PDF) on 14 October 2013. Retrieved 15 February 2015.
- Ganaway, Gary (2002-01-28). "Air Disc Brake Production, Use & Performance" (PDF). NDIA Tactical Wheeled Vehicles Conference, Monterey California. Retrieved 2010-11-11.
- Zahl, Timothy (March 2017). "How To Select And Install Performance Brake Rotors". CARiD.com.
- Henry, Alan (1985). Brabham, the Grand Prix Cars. Osprey. p. 163. ISBN 978-0-905138-36-7.
- Mavrigian, Mike; Carley, Larry (1998). Brake Systems: OEM & Racing Brake Technology. HP Books. p. 81. ISBN 9781557882813.
- Puhn, Fred (1987). Brake Handbook. HP Trade. p. 31. ISBN 9780895862327.
- Smith, Carroll. "Warped- Brake Disc and Other Myths". Stoptech.com. Retrieved 18 January 2014.
- Rashid, A., and Strömberg, N. (2013), "Sequential simulation of thermal stresses in disc brakes for repeated braking", Proceedings of the Institution of Mechanical Engineers, Part J: Journal of Engineering Tribology, v. 227(8), pp. 919–929.
- Erjavec, Jack (2003), Automotive Brakes, Cengage Learning, ISBN 9781401835262
- "HP Plus - Autocross & Track Brake Compound". Hawk Performance. Missing or empty
- "FAQ's". Centric Parts. 2010. Missing or empty
- Abdelahamid, M.K. (1997), "Brake judder analysis: Case studies", SAE, Technical Paper Series, no. 972027.
- de Vries, A. et al. (1992), "The brake judder phenomenon", SAE Technical Paper Series, no. 920554.
- Engel, G.H. et al. (1994), "System approach to brake judder", SAE Technical Paper Series, no. 945041.
- Gassmann, S. et al. (1993), "Excitation and transfer mechanism of brake judder", SAE Technical Paper Series, no. 931880.
- Jacobsson, H. (1996), "High speed disc brake judder – the influence of passing through critical speed", In EuroMech – 2nd European Nonlinear Oscillations Conference, Prague, no. 2, pp. 75–78.
- Jacobsson, H. (1997), "Wheel suspension related disc brake judder", ASME, no. DETC97/VIB-4165, pp. 1–10.
- Jacobsson, H. (1998), "Frequency Sweep Approach to Brake Judder, Licentiate of engineering", Chalmers University of Technology Sweden.
- Jacobsson, H. (1999), SAE Technical Paper Series, no. 1999-01-1779, pp. 1–14.
- Stringham, W. et al. (1993), "Brake roughness – disc brake torque variation", disc distortion and vehicle response, SAE Technical Paper Series, no. 930803.
- Thoms, E. (1988), "Disc brakes for heavy vehicles", IMechE, pp. 133–137.
- Anderson, E., et al. (1990), "Hot spotting in automotive friction systems", Wear, v. 135, pp. 319–337.
- Barber, R., J. et al. (1985), "Implications of thermoelastic instabilities for the design of brakes", J. Tribology, v. 107, pp. 206–210.
- Inoue, H. (1986), Analysis of brake judder caused by thermal deformation of brake discs, SAE Technical Paper Series, no. 865131.
- Rhee, K.S. et al. (1989), "Friction–induced noise and vibration of disc brakes", Wear, v. 133, pp. 39–45.
- J. Slavič, M.D. Bryant and M. Boltežar (2007), "A new approach to roughness-induced vibrations on a slider.", J. Sound and Vibration, Vol. 306, Issues 3–5, 9 October 2007, pp. 732–750.
- Kim, M.-G. et al. (1996), "Sensitivity analysis of chassis system to improve shimmy and brake judder vibration on the steering wheel", SAE Technical Paper Series, no. 960734.
- "Brake dust". EBC Brakes. Retrieved 18 January 2014.
- "Brake dust". EBC Brakes. Retrieved 18 January 2014.
- Hawk Performance. "HP Plus - Autocross & Track Brake Compound." Hawk Performance. Hawk Performance, n.d. Web. 11 Apr. 2017.
|Wikimedia Commons has media related to Disk brakes.|
- Using Ceramics, Brakes Are Light but Cost Is Heavy
- Disc brake pads, free video content from CDX eTextbook
- A new approach to roughness-induced vibrations on a slider
- Evaluation/explanation of the disc brake system, pad selection, and disc "warp"
- Common Brake Facts to calculate Pedal Ratio, Disc/Drum or Disc/Disc configurations, and calculations to determine if you need residual valves in your Disc Brake system |
Law of Supply and Demand: What's That?
The law of supply and demand is the backbone of a market economy. The concept refers to the relationship between the sellers and buyers of a particular resource, where a change in one parameter causes a change in the other. According to this principle, when there is higher demand for a commodity, the incentive to supply it rises, and vice versa: the desire for a product and its fulfilment are interdependent.

Law of Supply and Demand: How Is It Used?
The law rests on two separate components, demand and supply, which work together to explain:
- How demand and supply are related to each other
- How they interact in the market economy to determine the market price
- How the equilibrium value is maintained
Usually, the price of goods falls when the supply of products is higher than the demand, and vice versa. That is how the law of supply and demand works. Many other factors also influence the quantities and prices in the market.

Understanding the Law of Supply and Demand
The law of demand and supply is an economic law that acts as a support system for most economic principles. It determines people's interest in a particular good or service.
If a good has a limited supply, its value will be higher. Likewise, if the price of a good is high, fewer people will purchase it. Many factors can affect supply and demand, shifting the intersection point of the two curves up or down.

Let's explain it in simpler words:
Suppose a fruit vendor sells apples at a specific price (say, Rs. 150 per kg). At that price point, suppose ten people buy apples from that vendor.
Now, suppose a rise in apple production in the valley increases the supply of apples significantly. This changes the demand-supply equilibrium, leading to an oversupply of apples in the market. The price of apples falls, and the fruit vendor sells his apples at a lower price (say, Rs. 80 per kg). Once the price drops, people who were earlier unable to buy apples rush to the vendor, increasing the number of buyers in the market, say to 20.
Once the number of apple buyers increases from 10 to 20 after the fall in price, the excess supply in the market goes away and equilibrium is attained. Once equilibrium returns, the price rises again.

What Is Demand?
The law of demand states that the higher the price of a product, the lower the demand for it (fewer people will buy it), provided all other factors remain constant. Demand measures the number of consumers willing to buy a product at different price points within a specific time period. Demand for a commodity shows that the consumer is interested in purchasing the goods and can pay for them.
A buyer purchases only minimal amounts of a good at a high price, because the buyer's ability to purchase the good decreases as the price increases. Plotted on a graph, the law of demand produces a downward slope.
What Is Supply?
The law of supply is another essential fundamental economic concept. It shows the quantities sold at a specific price. Supply is the total amount of a particular good or service currently available to consumers; you can also equate the supply of a product with the number of goods available within a specific price range.
Unlike demand, the supply graph has an upward slope. A higher quantity of a product is supplied when the price is higher. When the value of a particular service is higher, a vendor is willing to provide more of that service to the customer.
What Are Supply And Demand Curves?
The laws of demand and supply are plotted as curves on a graph. The point where the curves meet is the market equilibrium, which fixes the market price and quantity. The market tends, in general, to move towards this equilibrium. When total demand or supply shifts, the point of intersection moves accordingly.
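To make the idea concrete, here is a minimal Python sketch; the linear curves and their coefficients are invented for illustration, not taken from the article:

```python
# Minimal sketch: equilibrium where a linear demand curve meets a linear
# supply curve. All coefficients below are illustrative only.

def demand(price):        # quantity demanded falls as price rises
    return 100 - 2 * price

def supply(price):        # quantity supplied rises with price
    return -20 + 4 * price

# Solve demand(p) = supply(p):  100 - 2p = -20 + 4p  ->  6p = 120  ->  p = 20
equilibrium_price = 120 / 6
print(equilibrium_price)            # 20.0
print(demand(equilibrium_price))    # 60 units demanded ...
print(supply(equilibrium_price))    # ... equals 60 units supplied
```

A shift in either function (say, greater apple production raising supply) moves the intersection, and with it the market price, just as in the apple example above.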
The supply curve is an upward-sloping line, and the demand curve is a downward-sloping one; the downward slope of demand follows from the law of diminishing marginal utility. Based on the price sellers can charge in the market, they increase or decrease the quantity they sell, which is why the supply curve can shift over time.

What Are the Factors that Affect Supply?
Certain factors determine whether supply is higher or lower. The factors that affect supply include:
- The number of sellers
- Their capacity to produce goods in a given time
- Presence of competitors in the market
- Institutional costs
- Other regulations
- Technology to combine inputs, and many more

What Are the Factors Affecting Demand?

Consumer preference is one of the main factors that affect the demand for a product or service. Substitutes or complementary products with varied prices can also change the demand. Factors affecting demand include:
- Cost and price of the product at the time of its purchase
- Quality of the product
- Price of complementary products
- The presence of alternative products or services in the market

Why Is the Law of Supply and Demand Essential?
The laws of supply and demand act in unison. Their interacting relationship helps investors and economists determine the prices, qualities, and quantities of goods and services in a market economy. It is an efficient method for making better decisions and discovering an appropriate value for products, and it also helps in the fulfilment of consumer demand.
“The diffraction grating is a useful device for analyzing light sources. It consists of a large number of equally spaced parallel slits.” Its working principle is based on the phenomenon of diffraction: the spaces between the lines act as slits, and these slits diffract the light waves, producing a large number of beams that interfere in such a way as to produce spectra.
A transmission grating can be made by cutting parallel lines on a glass plate with a precision ruling machine; the spaces between the lines are transparent to light and act as separate slits. A reflection grating can be made by cutting parallel lines on the surface of a reflective material. Gratings with many lines very close together have very small slit spacing. For example, a grating ruled with 5000 lines/cm has a slit spacing d = 1/5000 cm = 2.00×10⁻⁴ cm.
A section of a diffraction grating is illustrated in the figure. A plane wave is incident from the left, normal to the plane of the grating, and a converging lens brings the rays together at point P. The pattern observed on the screen is the result of the combined effects of interference and diffraction: each slit produces diffraction, and the diffracted beams interfere with one another to produce the final pattern.
The waves from all slits are in phase as they leave the slits. However, for some arbitrary direction θ measured from the horizontal, the waves must travel different path lengths before reaching point P. From the figure, we note that the path difference δ between rays from any two adjacent slits is equal to d sin θ. If this path difference equals one wavelength or some integral multiple of a wavelength, then waves from all slits are in phase at point P and a bright fringe is observed. Therefore, the condition for maxima in the interference pattern at the angle θ is:
Diffraction grating formula
d sin θ = mλ
We can use this expression to calculate the wavelength if we know the grating spacing d and the angle θ. If the incident radiation contains several wavelengths, the mth-order maximum for each wavelength occurs at a specific angle. All wavelengths are seen at θ = 0, corresponding to m = 0, the zeroth-order maximum; the first-order maximum (m = 1) is observed at the angle satisfying sin θ = λ/d; the second-order maximum (m = 2) is observed at a larger angle θ; and so on.
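As a concrete illustration, the following Python sketch lists the observable orders for the 5000 lines/cm grating mentioned earlier; the helium-neon laser wavelength is chosen only as a familiar example:

```python
import math

lines_per_cm = 5000
d = 1e-2 / lines_per_cm          # slit spacing in metres (2.0e-6 m)
wavelength = 632.8e-9            # e.g. a helium-neon laser line, in metres

# d sin(theta) = m * wavelength  ->  theta = asin(m * wavelength / d)
m = 1
while m * wavelength / d <= 1:   # orders exist only while sin(theta) <= 1
    theta = math.degrees(math.asin(m * wavelength / d))
    print(f"order m={m}: theta = {theta:.2f} degrees")
    m += 1
# Prints roughly 18.4, 39.3 and 71.7 degrees; no fourth order exists,
# since sin(theta) would have to exceed 1.
```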
The intensity distribution for a diffraction grating, obtained with a monochromatic source, shows sharp principal maxima and broad dark areas. This is in contrast to the broad bright fringes characteristic of the double-slit interference pattern. Because the principal maxima are so sharp, they are very much brighter than double-slit interference maxima.
Grating element definition
Distance between two consecutive slits (lines) of the grating is called a grating element. Grating element ‘d’ is calculated as:
Grating element = Length of grating / Number of lines
Dispersion and resolving power
The ability of a grating to produce spectra that permit precise measurement of wavelengths is determined by two intrinsic properties of grating.
- The separation Δθ between spectral lines that differ in wavelength by a small amount Δλ.
- The width or sharpness of the lines.
The dispersion D of the grating is defined as:
“The angular separation Δθ per unit wavelength Δλ is called the dispersion D of the grating.”
D = Δθ/Δλ
For lines of nearly equal wavelengths to appear as widely separated as possible, we would like our grating to have the largest possible dispersion.
Since the grating equation is:

d sin θ = mλ

differentiating both sides gives:

d cos θ dθ = m dλ

In terms of small differences, this can be written as:

d cos θ Δθ = m Δλ

so that:

Δθ/Δλ = m/(d cos θ)

and therefore:

D = m/(d cos θ)
From the above relation, we see that the dispersion D increases as the spacing between the slits d decreases. We can also increase the dispersion by working at higher order (large m). Note that the dispersion does not depend on the number of rulings N.
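A quick numerical illustration, reusing the 5000 lines/cm grating and a wavelength near the sodium D lines (the values are chosen only for the example):

```python
import math

d = 2.0e-6                        # slit spacing in metres (5000 lines/cm)
wavelength = 589.3e-9             # mean sodium D-line wavelength, in metres
m = 2                             # work in second order

theta = math.asin(m * wavelength / d)
D = m / (d * math.cos(theta))     # dispersion in radians per metre of wavelength
print(D * 1e-9)                   # ~1.2e-3 rad/nm of angular spread per nanometre
```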
Resolving power definition
“The resolving power of an instrument is its ability to reveal minor details of the object under examination.”
Resolving Power of grating
“The resolving power of grating is a measure of how effectively it can separate or resolve two wavelengths in a given order of their spectrum”.
The diffraction grating is most useful for measuring wavelengths accurately. Like the prism, the diffraction grating can be used to disperse a spectrum into its wavelength components, but the grating is the more precise device if we want to distinguish two closely spaced wavelengths.
According to Rayleigh's criterion, for two nearly equal wavelengths λ1 and λ2 between which a diffraction grating can just barely distinguish, the resolving power R of the grating is defined as:
R = λ/Δλ
where λ = (λ1 + λ2)/2 and Δλ = λ1 − λ2. Thus, a grating that has a high resolving power can distinguish small differences in wavelength.
Thus, resolving power increases with increasing order number and with an increasing number of illuminated slits. If the lines are to be narrow, the angular separation Δθ must be small, so the corresponding wavelength interval Δλ must be small, and by the definition R = λ/Δλ the resolving power must be large. Working out the physical property of the grating that determines the resolving power R gives:

R = Nm

Thus the resolving power increases with the order number m and the number of lines N, and is independent of the slit separation d.
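For example, resolving the well-known sodium doublet at 589.0 nm and 589.6 nm requires:

```python
lam1, lam2 = 589.0, 589.6        # sodium doublet wavelengths, in nm
lam = (lam1 + lam2) / 2          # mean wavelength
dlam = lam2 - lam1               # wavelength difference to resolve

R = lam / dlam                   # required resolving power, R = lambda / dlam
print(R)                         # ~982

# Since R = N * m, in first order (m = 1) the grating must have at least
# about 982 illuminated lines to just resolve the doublet.
print(round(R / 1))
```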
From big data to self-driving vehicles, artificial intelligence (AI) has undeniably transformed the way many industries operate today. But despite playing a more significant role in our everyday lives, many people are still unaware of what AI and machine learning (ML) actually do. This post aims to break down some of the basic concepts surrounding AI in digestible pieces.
What is AI, and How does It Work?
AI is a term that refers to a computer or machine’s ability to accomplish tasks or make decisions, just like humans. AI designers aim to reproduce human attributes such as creativity, logical reasoning, and knowledge acquisition in systems to varying levels. Virtual assistants and chatbots in travel booking sites are a clear demonstration of how AI can automate specific tasks that only humans could perform in the past.
What are the Fundamental AI Concepts?
To fully understand how AI works, you need to learn about the following basic concepts first:
1. Machine Learning
In the simplest terms, machine learning (ML) is a subset of AI. Its core lies in the idea that computer systems can learn on their own from data obtained from performing previous tasks and past experiences. That means that you don’t have to pre-program an AI device every time you need it to work on a job.
ML has three subcategories—supervised, unsupervised, and reinforcement.
- Supervised learning occurs when an AI system learns from labeled examples, so that it can predict the correct output for new inputs (see the sketch after this list).
- Unsupervised learning, meanwhile, takes place when the system receives only unlabeled data and must discover structure, such as clusters, on its own, without being pre-trained on known answers.
- Reinforcement learning (also known as “goal-oriented programming”) deals with training the AI algorithm to recognize rewards and punishments so that it can converge on the best solution to a problem.
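A minimal sketch of the supervised/unsupervised contrast, using synthetic data and the scikit-learn library; the dataset and model choices are illustrative only:

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic two-group data: X are the points, y the "true" group labels.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: the model is trained on labeled examples (X together with y).
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))          # predicted labels for some points

# Unsupervised: the model sees only X and must find structure by itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:5])              # cluster assignments discovered from the data
```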
2. Deep Learning
This takes ML up a notch. This subset of AI refers to a system's ability to take unstructured data from multiple sources, analyze it, and apply what it learns to new problems, typically by means of many-layered neural networks. Deep learning is sometimes also linked to the idea of “differentiable programming.”
3. Artificial neural network (ANN)
An artificial neural network is a system or algorithm used in deep learning that mimics how the human brain's neural circuits function, for example when making sense of things and events.
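A toy NumPy sketch of the idea: signals flow through layers of "neurons", each computing a weighted sum of its inputs followed by a nonlinearity. The layer sizes and random weights below are arbitrary, chosen only to show the structure of a forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # three input "signals"
W1 = rng.normal(size=(4, 3))      # weights of 4 hidden neurons (untrained)
W2 = rng.normal(size=(1, 4))      # weights of 1 output neuron (untrained)

hidden = np.tanh(W1 @ x)          # each neuron: weighted sum + nonlinearity
output = W2 @ hidden              # output layer combines the hidden neurons
print(output)
```

In a real network, training (for example by gradient descent) adjusts W1 and W2 so the output becomes useful; this sketch only illustrates how signals propagate.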
What are Other Relevant Concepts in AI?
Listed below are more AI-related terms that can deepen your understanding.
- Categorization: Building a successful AI system requires creating a type of category or benchmark for a specific field. These criteria or metrics are used by the machine to diagnose a problem. After further analysis, its diagnosis could eventually lead to a fitting solution.
- Classification: This is a property of an AI model that points to its ability to “classify” what type of problem it encounters, what causes it, and what solution can best remedy it. In medical diagnostics, for example, an AI-powered tool identifies an illness based on its unique qualities.
- Collaborative filtering: This refers to the capability of an AI system to make decisions or give recommendations on its own based on what it learned from a user’s past preferences and actions. An example of its results is a recommendation that you receive via ads or media platforms.
- Natural language processing (NLP): This is a characteristic of advanced neural networks that describes their ability to interpret tasks and produce outputs that humans can read. The term also pertains to a field of computer science that focuses on developing computers that can understand natural language through reading and listening. Conversational AI platforms, such as messaging apps and chatbots, use NLP.
- Data mining: This involves extracting unstructured data from various databases and websites to enrich predictive AI algorithms. AI systems use statistical methods to analyze aggregated data for trends and associations, which, in turn, allow them to generate new information. |
If you want to know what the climate was like in the Holocene, simply put on some outerwear, go out into nature, and look around: the Holocene is the name for the present epoch, which began at the end of the Weichsel glaciation.
The interglacial Holocene has supported the development and growth of human civilizations; it has been the cradle of civilization, not to say its womb. It started around 11,700 years before present with a sudden warming from the cold period called the Younger Dryas. In only ten years the temperature in Greenland rose by an impressive 8 degrees, corresponding to North Europe's climate being replaced by a Mediterranean climate. It is not known what caused this rapid rise in temperature.
The Cenozoic is the period of the mammals, which followed the Mesozoic, the period of the dinosaurs. The Tertiary is the part of the Cenozoic in which no humans existed, and the Quaternary is the part in which humans exist. The Quaternary is composed of the Pleistocene and the Holocene. The Pleistocene is the period that we in common language call the Ice Age, and the Holocene represents the present, which basically is a Pleistocene interglacial period. In the figure, the Holocene is represented by the thin red line on the far left. The climate of the Holocene is the subject of this article.
During the following one thousand years, the temperature increased, and the climate became several degrees warmer than today. About 8,000 years before present, in the Hunter Stone Age, occurred the hottest period of the entire Holocene. This initiated the warm period called the Holocene Optimum, which lasted until about 4,500 years before present, whereafter the temperature continued to drop through the Bronze Age, the Iron Age and historical time until it reached a low point in the Little Ice Age of the years 1600-1700. Within the last few hundred years, the temperature has again increased, but not to such heights as in the Hunter Stone Age.
This graph is taken from Wikipedia. It shows eight different reconstructions of Holocene temperature; the thick black line is their average. Time progresses from left to right.
On this graph, the Stone Age is shown as only about one degree warmer than the present day, but most sources state that the Scandinavian Stone Age was about 2-3 degrees warmer than the present. These need not be mutually exclusive statements, because the curve reconstructs the entire Earth's temperature, and at higher latitudes the temperature variations were greater than near the equator.
Some reconstructions show a dramatic vertical increase in temperature around the year 2000, but this does not seem plausible to the author: graphs of this kind cannot show the temperature of specific years, as they must necessarily be smoothed by some kind of mathematical rolling average, perhaps with periods of a hundred years, and then a high temperature in a single year, for example 2004, would be much less visible.
The trend seems to be that the Holocene's highest temperature was reached in the Hunter Stone Age about 8,000 years before present; thereafter the temperature has generally been falling steadily, though overlaid by many cold and warm periods, including the modern warm period.
However, generally speaking, the Holocene represents an amazingly stable climate, where the cooling through the period has been limited to a few degrees.
The general decline in temperatures since 8,000 years before present was overlaid by several cold and warm periods. Thus we speak of five to seven cold periods in the Holocene, including the Little Ice Age, and several warm periods, including the Minoan, Roman, Medieval and Modern Warm Periods.
Temperature variations in the Holocene compared with the previous Weichsel ice age, based on analysis of Greenland ice cores. The vertical scale on the left shows the temperature on the surface of the ice, and the horizontal scale is years before present. Time progresses from left to right. It appears that the climate of the Holocene really has been very stable, and the temperature has varied only a few degrees; the most dramatic event so far has been the 8,200 cold period and the ensuing Holocene maximum in the Stone Age. By contrast, the climate in the previous ice age was not nearly as stable: the temperature often varied by more than 20 degrees within a few hundred years or less.
The Holocene cold and warm periods, however, represent only small temperature changes compared to both the glaciation periods and the other interglacials.
This unique climatic stability made the development of agriculture possible; it created the basis for the development of civilizations and eventually enabled the industrial revolution and consequently the modern world with its technology and myriads of people. Had we not had a window of about 10,000 years of stable climate with only small temperature variations, civilization would not have been nearly as developed, if it existed at all, and Earth's population would have been only a fraction of the current one.
Temperature variations in the Holocene compared with the preceding interglacials. The vertical scale shows the temperature on the ice surface, and the horizontal time scale is in thousands of years before present. It can be seen that the Holocene temperature graph has a different shape than the previous interglacial periods: the Holocene has a nearly flat top, representing a fairly stable climate through ten thousand years, while the preceding interglacials are generally pointed, that is, the temperature rose to a maximum and then declined again, maybe after only a few hundred years. Only the Holocene could offer a stable climate for a long time, during which agriculture and civilization could develop.
Note, moreover, the characteristic shape of almost all warming periods: the heat comes suddenly, perhaps within a few decades, and then decreases slowly. This is also the case for the Holocene, except that the temperature has dropped much more slowly than in the other interglacials and warming periods.
Many believe that the declining Milankovitch insolation is the cause of the general cooling trend during the Holocene. The Milankovitch insolation is the theoretical insolation (energy received from the Sun) at 65 degrees northern latitude in June. Its variation in the Holocene is mainly due to changes in the axial tilt: in the beginning the northern hemisphere received a large June insolation, as it turned more directly towards the sun during summer, while in the present the axis is more upright, so the northern hemisphere receives less solar radiation in summertime.
Temperature and Milankovitch insolation during the Holocene.
The upper reddish graph represents the temperature in Celsius on the ice surface in Greenland. The curve falls generally through the Holocene but is overlaid by many cold and warm periods. Many researchers identify six cold spells, of which the best known are the 8,200 cold period and the Little Ice Age. The most famous warm periods are the Minoan, the Roman and the Medieval warm periods.
The Norwegian Axel Blytt and the Swede Rutger Sernander developed in the 1800s the Blytt-Sernander division of the Holocene climate, based on studies of Danish peat bogs. It includes the periods Preboreal, Boreal, Atlantic, Subboreal and Subatlantic, which are shown at the bottom of the figure. The Preboreal is also known as the Birch-Pine period. There are many different opinions about when the various Blytt-Sernander periods begin and end.
Today, some consider this classification outdated and prefer other divisions, which include the Holocene Climatic Optimum, Postglacial and Neoglacial, all shown at the top of the figure in colors.
The yellow-green graph below the temperature curve represents the theoretical Milankovitch insolation in the Holocene in watts per m². The Milankovitch insolation is the solar radiation at 65° northern latitude in the month of June. The insolation maximum occurred about 10,000 years before present at about 470 W/m², and since then insolation has declined steadily to today's low value of slightly less than 430 W/m².
In the very early Holocene, northern Europe became vegetated with an open, light birch forest mixed with aspen, willow, mountain ash and pine. The ever milder climate caused average summer temperatures to rise to 18-20 degrees, while winter temperatures stood at just below freezing. The composition of the forest trees changed: pine pushed back birch, while hazel, elm, oak, ash, alder, fir and linden immigrated.
About 8,200 years ago, there was a sharp cooling in the Northern Hemisphere. It has been attributed to an excessive supply of cold glacial meltwater from glaciers in the Hudson Bay area. Data from Disko Bay show that here, too, there was a large production of meltwater. Samples taken from the ocean floor at Spitsbergen indicate that there the Arctic waters had pushed further south already 8,800 years ago.
From HOCLAT, a web-based Holocene Climate Atlas. Reconstructed summer air temperature from pollen analysis of sediments from the bottom of a small Swedish lake between Vanern and Vattern, at 58.55° northern latitude and 13.67° eastern longitude.
The original data is the very thin line in the diagram at the top. Smoothed with a form of mathematical rolling average over 500 years, it gives the blue line; smoothed over 3,000 years, it gives the red line. The figure at the bottom shows how much the blue line deviates from the red line, which is a measure of climate change. The blue areas thus show how much the temperature of a cold period deviates from the more average temperature of the age, and the red areas show how much the temperature of a warm period exceeds it.
It is seen that the 8,200 cold spell represents a very severe climate change; it must have been a sudden change for the Stone Age hunters. Moreover, it looks as if the cold periods come at regular intervals. The cold period 5,900 years before present took place at the transition from the Hunter Stone Age to the Peasant Stone Age. The cold period 3,500 years before present marked the beginning of the Bronze Age. The small cold period around 1,800 years before present occurred a few hundred years after the birth of Christ; perhaps it was at that time that the Goths left the island of Skanza (Scandinavia). The Little Ice Age seems to come somewhat early.
Many explain the 8,200 cold period as a result of a large discharge of cold meltwater into the Atlantic from Lake Agassiz at the edge of the Laurentide ice sheet in North America.
It may seem paradoxical that warmer weather in the Arctic, which caused the melting of ice caps and sea ice and thus the production of cold fresh water, caused a colder climate in Northern Europe and probably also in North America. The explanation is that the large amounts of cold fresh water, which is lighter than salt water, disturbed the ocean currents, and a weakened Gulf Stream caused colder weather along the North Atlantic coasts. Many believe that such a meltwater mechanism also caused the Younger Dryas cold period.
Analysis of oxygen isotopes in stalagmites from Costa Rica shows a dry period around 8,200 years before present. From "Tropical response to the 8200 yr B.P. cold event?" by Matthew S. Lachniet and others.
Even harder to comprehend is that a team of American geologists from the University of Buffalo and other universities has found that glaciers on Baffin Island grew during the cold period. One can only conclude that if meltwater was produced at the same time, precipitation in the region must have been very great indeed.
Analyses of oxygen isotopes in stalagmites from caves in Costa Rica have shown that there was a dry period about 8,200 years before present, caused by a weaker monsoon and reduced precipitation in Central America. This calls the meltwater Gulf Stream theory into question, as Costa Rica's climate does not depend on a warm Gulf Stream; this region belongs, indeed, to the area near the equator that supplies the heat to the Gulf Stream.
The cold period cannot, however, be detected in the Southern Hemisphere, neither in drill cores from the Antarctic ice sheet, nor in glaciers in Bolivia, nor in samples taken from the seabed off the mouth of the Murray River in Australia. This indicates that the cold period may have been a truly North Atlantic phenomenon, perhaps caused by variations in the sea currents.
But after a while, the sea currents in the North Atlantic found their way back to their old routes, if they had indeed been the cause, and about 8,000 years before present began the warmest period of the entire Holocene.
In sediments from the bottom of the lakes Huelmo and Mascardi, in the Andes Mountains in Chile and Argentina respectively, scientists have found evidence of a cold period in the Southern Hemisphere, which lasted 800 years and occurred between 11,400 and 10,200 years before present.
The parasitic plant mistletoe on a willow tree. During the Holocene Optimum the parasitic mistletoe was widely found in southern Scandinavia; today it grows further south, in southern England, Central and Southern Europe.
The hottest time in the Holocene occurred in the Stone Age about 8,000 years before present; it is called the Holocene Maximum. This warm climate continued largely unbroken for 3,500 years until 4,500 years before present, when the Neolithic period prevailed in Northern Europe.
It is assumed that the average temperature was 2-3 degrees higher than today. This is supported by the fact that plants such as mistletoe and the subtropical aquatic plant Trapa natans were widespread in southern Scandinavia. Linden, elm, spruce and oak were the most common trees in northern Europe's dense forests, which closed the continent's interior into one great impenetrable forest.
In Denmark, scientists have studied Stone Age settlements from the period of the Holocene Climatic Optimum and found bones of various terrestrial and marine animals, including swordfish, sturgeon, sardine and tuna, Dalmatian pelican and pond turtle, all of which are species that today live in warmer climes.
A pine stump in the Cairngorm Mountains that is 4,000 to 4,500 years old.
In the Cairngorm Mountains of central Scotland, you can find stumps of 4,000- to 4,500-year-old pine trees that grew 650 meters above sea level. This altitude is slightly above the present-day limit for dwarf and stunted trees.
Another testimony of a warmer climate in the past can be found in Dartmoor in Southern England, though from slightly later than the Holocene Optimum. Here Bronze Age farmers cultivated the land at 450 meters above sea level, which should be compared with the absolute limit for agriculture today, an altitude of 300 meters.
A team of scientists from the University of Copenhagen has analyzed driftwood and beach ridges along the coast of north-eastern Greenland and thereby uncovered the extent of sea ice during the Holocene Optimum.
Driftwood that ends up on the coast of northeastern Greenland comes from North America and Siberia. It takes several years to complete its journey, and it reaches the coast of Greenland only if it is encased in ice, since free-floating driftwood will sink to the bottom during such a long journey.
Driftwood on the beach of Spitsbergen.
By collecting and dating driftwood with the carbon-14 method, researchers could calculate the amount of sea ice in different time periods.
Svend Funder and his colleagues also examined beach ridges along the coast. Today beach ridges are not formed along the coast of Northern Greenland, as sea ice shields the coast year round. By the carbon-14 method, the beach ridges have been determined to originate from the Holocene Optimum, during which period the sea must have been ice-free, at least in the summertime.
It was concluded that sea ice reached a minimum between 8,500 and 6,000 years ago, when the limit for year-round sea ice lay 1,000 km further north than at present, and in summertime the ice covered an area only half as large as the sea-ice area in the summer of 2007, when sea ice in recent times had its minimum.
Some studies indicate that the sea surface temperature of the world's oceans was up to 5 degrees higher than today's surface temperature (Darby, 2001).
Painting by Ivan Shishkin and Konstantin Savitsky. Throughout most of the first part of the Holocene, most of Europe, Asia and North America was covered by forest, and a large part of the biosphere's carbon was tied up in the wood of the trees. As agriculture was introduced and the trees rotted away or were burned, the atmospheric concentration of carbon in the form of CO2 increased.
The ice cap in Peary Land in northern Greenland was drilled in 1977. The ice core contained distinct refrozen meltwater layers all the way down to the bedrock, which indicates that it did not contain ice from the Weichsel glaciation. That is to say, the world's northernmost ice sheet melted completely away during the Holocene Optimum and was only restored when the climate became colder about 4,500 years ago.
Since less water was bound at the poles as inland ice than nowadays, the surface level of the World Sea at that time was 3 meters above today's.
As the kilometers-thick Scandinavian ice sheet began to melt, it formed a freshwater lake, the Baltic Ice Lake. It was a cold sea with drifting icebergs, and its surface was higher than the sea surface of the world's oceans. Some believe that the ice lake was emptied by a major flood disaster around the year 9,600 BC, but most believe that it happened gradually.
The landscape of northern Europe was dominated by icy cold steppes and regular tundra roamed by a small number of reindeer hunters.
After the lake became connected with the World Sea, it became a brackish sea called the Yoldia Sea, named after the mussel Yoldia arctica. The Yoldia Sea was connected with the world's oceans through a strait located where the Swedish great lakes and the Gota River are today.
In the early Hunter Stone Age the tundra became vegetated with birch forest mixed with aspen, willow, mountain ash and pine.
When Scandinavia was freed from the weight of the huge masses of ice, the land lifted, and the uplift cut off the Yoldia Sea's connection with the world's oceans, so it became once again a freshwater lake, called the Ancylus Lake after the freshwater snail Ancylus fluviatilis. The Ancylus Lake may have drained through central Sweden at the great lakes.
As the climate became milder, summer average temperatures rose to 18-20 degrees and winter temperatures rarely fell below freezing; the composition of the forest trees also changed, as pine replaced birch, and hazel, elm, oak, ash, alder, fir and linden became common.
Around 7,000 years before present, the climate of northern Europe was a so-called Atlantic climate, a mild and humid coastal climate with summer temperatures 2-3 degrees higher than today. After some time, the rising water level in the world's oceans made salty sea water enter the Ancylus Lake, and the water in the Baltic Sea basin again became salt. The new sea is called the Littorina Sea after the saltwater snail Littorina littorea; it took several hundred years before the salt content reached its maximum.
Because of land uplift, the Littorina Sea's connection to the World Ocean has over the past 2,000 years become increasingly narrow and shallow, making it the brackish sea that we know today as the Baltic Sea.
In Australia, scientists have analyzed sediments in the seabed off the mouth of the river Murray and found that from 17,000 to 13,500 years before present the Australian climate was wetter than it is at present. No indications at all have been found of dry periods either in the Younger Dryas or 8,200 years before present, indicating that these cold periods were phenomena limited to the northern hemisphere.
The partially dried up Black Sea around 5,500 before present.
Samples of bottom sediments in the Australian Lake Frome and Lake Woods show that the climate in the early Holocene, between 9,500 and 8,000 years ago, and again 7,000 to 4,200 years ago, was considerably wetter than at present. The beginning of modern climatic conditions in Australia, with periodic rainy seasons, took place about 4,000 years ago.
Analyses of sediments from the Cariaco Basin in Venezuela indicate that the amount of water discharged into the basin during the Holocene Optimum was much greater than today, meaning that precipitation in the area must have been much larger in the first half of the Holocene than it is today (Uriarte, Haug).
One of the geographical events in Europe that most brings to mind the Biblical account of the Flood is perhaps the sudden flooding of the partially dried-up Black Sea, which took place 5,500 years before present.
For reasons we can only guess at, the inland sea had lost its connection to the world's oceans and had partially dried out; its surface lay 150 m below the surface of the World Sea. The Black Sea is fed by many large and water-rich rivers - think of the Danube, Dniester, Dnieper and Don. It is difficult to understand how it could have lost more water by evaporation than it received from the rivers; this must be evidence that it really was very hot during the Holocene Optimum, when the temperature is assumed to have been 2-3 degrees higher than at present.
Detail of the motif The Flood by Michelangelo from the Sistine Chapel.
A marginal rise in the sea surface of the world's oceans 5,500 years before present created a small crack in the barrier of the Bosphorus, and a negligible trickle of seawater into the Black Sea basin quickly evolved into a huge waterfall of salt water 200 times greater than Niagara. It is assumed that the sea water gushed into the half-dried-up Black Sea and made its level rise by 15 cm a day, raising the water the 150 meters up to the level of the World Ocean in about three years.
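The figures quoted are roughly self-consistent, as a quick arithmetic check shows:

```python
rise_per_day_m = 0.15        # 15 cm a day, in metres
total_rise_m = 150           # height difference to close, in metres

days = total_rise_m / rise_per_day_m
print(days, days / 365)      # 1000.0 days, i.e. about 2.7 years
```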
When the flood occurred, it was the Neolithic period in Northern Europe, and it had surely been so for a long time in the area around the Black Sea. The oceanographer Robert Ballard has examined the Black Sea bottom using an underwater robot and found evidence of human habitation.
Many peoples have the story of an initial flood among their old myths. In the Genesis of the Bible, God separated the waters and created Heaven and Earth, and the Bible tells the story of the flood and how Noah and his family survived. In the Egyptian creation myth there was in the beginning also a chaos of water, and the god Ra separated the waters and created the world. In Scandinavian mythology, the gods Odin, Vili and Ve killed the primordial giant Ymir; in the flood created by Ymir's blood all his children, the rime-thurses, drowned except for Bergelmir and his wife, and from these came the Jotuns. Even the Australian Aborigines have an initial flood among their old myths.
Left: Rock Painting with giraffes from Tassili in southern Algeria.
Right: Rock Painting with an elephant from Tadrart Acacus in Libya.
During a longer period of time that roughly corresponds to the Holocene Optimum, North Africa experienced a considerably more wet and rainy climate than the one that now prevails in the region. Many state the period to be 8,500 to 3,500 BC (10,500 to 5,500 years before present), but the dating seems uncertain.
Where there is now barren and scorched desert, there was then savannah with widespread grassland and some trees. There lived lions, elephants, giraffes and other animals that are now characteristic of southern Africa.
The former professor of African history at London University, Roland Oliver, described the landscape as follows: "The major mountain ranges Tibesti and Hoggar, which today are bare rocks, were then covered with forests of oak and walnut, linden, alder and elm. The lower slopes, along with the smaller mountains - Tassili and Acacus to the north, Ennedi and Air to the south - were covered with olive, juniper and Aleppo pine. Through the grasslands of the valleys flowed rivers teeming with fish."
Rock painting depicting men in boats, from Tassili-n-Ajjer in southern Algeria.
Rock art all over the Sahara recalls a time when the country was greener and home to lions, elephants, giraffes, antelopes, hippos and crocodiles. A picture from Tassili, which today is a scorched desert, shows men standing in boats sailing on water. This shows that there existed lakes and rivers in places where today not a straw of grass can be found.
Most rock paintings in the Sahara are found in Algeria, Libya, Morocco and Niger, and to a lesser extent in Egypt, Sudan, Tunisia and some Sahel countries. The Air Mountains in Niger, the Tassili-n-Ajjer plateau in southeastern Algeria and the Fezzan region in southwestern Libya are particularly rich in old rock paintings.
Lake Chad reached a maximum extent of about 400,000 square kilometers, which is larger than the modern Caspian Sea, with a surface level about 30 meters higher than in modern times.
Climate-related settlements in the eastern Sahara through the major phases of the Holocene. Red dots indicate main resettlement areas; white dots indicate more isolated settlements in climatic refuge locations and cyclical shifts of pastures. Precipitation zones are indicated by green nuances, based on best estimates from geological and archaeological data.
(A) During the last Glacial Maximum and the late Pleistocene, that is 20,000 to 8,500 BC (22,000 to 10,500 years before present), the Sahara was devoid of any settlement outside the Nile Valley, and the desert stretched 400 km farther south than it does today.
(B) With the sudden onset of monsoon rain around 8,500 BC, the hyper-arid desert was replaced by savannah-like landscapes, which quickly became inhabited by prehistoric people. In the early Holocene Optimum, the southern Sahara and the Nile Valley were apparently too humid and dangerous for appreciable human settlement.
(C) Around 7,000 BC, human settlements were well established throughout the eastern Sahara, where a cattle-nomadic culture was created.
(D) Decreasing monsoon rain caused a gradual drying out of the Egyptian part of the Sahara around 5,300 BC. The prehistoric people were forced to retreat into the Nile Valley, settle in oases, or emigrate to the Sudanese Sahara, where rainfall and surface water were still sufficient. The Sahara's return to true desert conditions about 3,500 BC coincided with the initial stages of Egyptian civilization in the Nile Valley. - Kuper and Kropelin (2006).
Around 3,500 BC the desert again spread across North Africa, and the scattered cattle nomads moved to the Nile Valley, where they began tilling the soil, created the first dynasty and thus founded the famous Egyptian culture.
Left: Sphinx from Luxor - A sphinx is a lion with a human head.
Right: Lions that represent the god Aker
In pharaonic times, there were still lions in Egypt. They lived on the border of the desert, where they were known as the keepers of the eastern and western horizon or guardians of the eastern and western descent to the underworld. Sphinxes may depict a pharaoh as a lion figure with a human head.
Elephants lived in North Africa long after the desert had returned to the central Sahara.
The North African forest elephant was somewhat smaller than both the Indian elephant and the African steppe elephant. Its Latin name is Loxodonta africana pharaoensis, and it was exterminated in the second century; reportedly, many were killed in the Roman arenas.
It puzzled Cicero that when twenty elephants, an unprecedented number, were attacked by spearmen in the arena, their trumpeting of distress so harrowed the spectators that everyone in the theater began to weep. The show was given by the great man Pompey.
The expulsion from The Garden of Eden painted by Natoire in the year 1740.
The Arabian desert in the Middle East and the Rajasthan desert between India and Pakistan also experienced a wet period in the first part of the Holocene. In the dried-out lakes of these deserts, spores have been found from plants characteristic of savannah vegetation.
Other studies indicate that Central Asia in the early Holocene experienced a wetter climate than today, with summer temperatures 2 to 3.5 degrees higher than at present. In China, rice could be planted almost a full month earlier than is usually the case today, and bamboo groves could be found three latitudes farther north than in modern times. (Uriarte, Chu Ko-chen)
Many peoples have old myths about an original homeland that they left in the distant past. The ancient Doric Greeks immigrated from the north; the Scandinavian peoples remember Asgaard and Midgaard; according to their ancient myths, the Romans originally came from Troy; and probably the best-known myth of this kind is the Biblical story of the expulsion from the Garden of Eden. It is quite likely that the factors that forced these peoples to emigrate were associated with the climate changes that took place at the end of the Holocene Optimum.
Around 5,500 to 5,000 years before present occurred the Piora cold period, named after the Val Piora valley in Switzerland, where it was first identified by pollen analysis. The more heat-loving trees such as elm and linden became rarer and never regained their dominant position in the woods. Indications of this cold period have been found in Alaska, in the Andes in Colombia and in the mountains of Kenya (Lamb).
Left: Precipitation in the Rajasthan desert. It is seen that in the period before the Holocene it was a fairly dry desert; precipitation peaked around 6,000 years before present, and while the Harappan culture existed the rainfall was 600 to 800 mm per year. This can be compared with the average annual precipitation in Denmark, which is 745 mm. (From H. H. Lamb: Climate, History and the Modern World.)
Right: The cities of Harappa and Mohenjodaro existed in the Indus valley 4,000 years ago.
In the Indus Valley, where today Rajasthan's arid Thar desert is spreading, the cities of Harappa and Mohenjodaro flourished between 4,600 and 3,900 years before present. When their civilization was at its peak, it covered an area larger than the Nile Valley and Mesopotamia combined. The inhabitants cultivated wheat, barley, melons, dates and perhaps cotton. On the savannah and along the now dry river lived elephants, rhinos and water buffaloes. The annual rainfall is estimated to have been between 400 and 800 mm.
In the Arabian desert, evidence of human habitation from about 5,000 years before present has also been found.
In the Minoan warm period millet was grown in southern Scandinavia.
Not much is known about the Minoan warm period beyond what can be gauged from ice-sheet borehole cores. That the climate really was warmer then may be inferred from the fact that millet was grown in southern Scandinavia during the Minoan warm period, which occurred in the Bronze Age. Today millet is grown in tropical and subtropical regions; it is an important crop in Asia, Africa and the southern U.S. The average annual temperature in Mississippi and Alabama is about 10 degrees, which should be compared with today's average annual temperature in Denmark of 8 degrees. So perhaps the climate of the Minoan warm period was about 2 degrees warmer than the present in southern Scandinavia.
As you may know, Rome is said to have been founded by Romulus and Remus in 753 BC. The Roman historian Livy tells us that in the city's early history there were a few severe winters, when there was ice on the Tiber and the snow stayed for many days. Before the Roman warm period, beech trees are said to have been growing in the mountains around Rome.
Climate changes have always taken place, as documented in the Bible. Jeremiah 18:14 in the Old Testament says: "Does the snow of Lebanon leave the crags of Sirion? Do the mountain waters run dry, the cold flowing streams?", indicating that it was relatively cool around the Mediterranean when Jeremiah lived, around 600 BC. In our days there is no eternal snow on the mountains of Lebanon.
Left: Sea ice in the Arctic Ocean. The white area represents the extent of sea ice on 31 August 2007; the red line marks the average distribution of sea ice in August between the years 1979 and 2000. It is seen that in modern times there is quite a long way from Iceland to the sea ice at the Greenland coast north of Scoresbysund. Whether Pytheas' Thule was the Faroe Islands, Iceland or Western Norway, a frozen sea only a day's sailing away means that the summer sea ice had a very significantly greater extent than at present.
Right: Pytheas' travels - From Histoire des Mares: Pytheas le massaliote.
Around 310-300 BC, the Greek explorer Pytheas traveled from Massalia (Marseille) along the shores of Western Europe. He came to Scotland and the Hebrides, where he saw waves that were "80 cubits high" (a cubit is an ancient unit of length of 45.72 cm). He sailed to the island of Thule, which was located 6 days' and 6 nights' sailing north of Berrice, which is assumed to be Shetland. There is uncertainty about whether Pytheas' Thule was the Faroe Islands, Iceland or western Norway.
The distance between Shetland and the Faroe Islands is 150 nautical miles, and the distance between Shetland and Iceland is about 380 nautical miles. On a journey to the Faroe Islands he would therefore have kept a speed of about 1 knot, which sounds quite manageable, also for his time. If Thule was Iceland, he would have kept a speed of about 2.6 knots, which does not sound impossible with a good wind.
He describes Thule as an island located six days' sailing north of Shetland, near the frozen sea. There is no night at midsummer, he says, indicating that the location must be on the Arctic Circle and that he visited the island in the summer. The frozen sea lies one day's sailing north of the island, he says, which also indicates that the island must be Iceland rather than the Faroe Islands.
Left: A statue of Pytheas in front of the stock exchange in Marseille.
Right: A reconstruction of Pytheas' ship. A vessel of this type cannot have navigated the North Atlantic in winter, which emphasizes that he visited Thule in the summertime.
However, in modern times the summer sea ice nearest Iceland is found north of Scoresbysund on Greenland's east coast, and the distance from Iceland to north of Scoresbysund is more than 350 nautical miles. Pytheas sailed at perhaps 2.6 knots, so it would have taken him almost 6 days and nights to reach the frozen sea - with the extent of sea ice in modern times.
But as he wrote that the sea ice was only a day's sailing north of Thule, we can conclude that the sea ice in the Arctic Ocean in summer had a much greater extent in his time, 300 BC, than it has today.
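The sailing arithmetic above is easy to verify (a knot is one nautical mile per hour; the 350-nautical-mile figure is the modern distance quoted in the text):

```python
hours = 6 * 24                      # six days and six nights of sailing

print(150 / hours)                  # Shetland to the Faroes: ~1.0 knot
print(380 / hours)                  # Shetland to Iceland: ~2.6 knots

# At 2.6 knots, the modern-day distance from Iceland to the sea ice
# north of Scoresbysund (~350 nautical miles) would take:
print(350 / 2.6 / 24)               # ~5.6 days, close to the 6 days cited
```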
Pytheas mentioned that the island was inhabited. People lived on millet and other herbs and on fruits and roots, and where there was grain and honey, they made their drink from it. The country was rainy and lacked sunshine, he wrote. This leads many to think that he in fact landed in Norway. However, if the frozen sea was only one day's sailing away, that indicates an even greater extent of sea ice.
Fresco in Pompeii depicting a Roman orgy, Casa dei Casti Amanti. As seen from the light dress, it must have been quite hot.
The Roman warm period started quite suddenly around 250 BC and ended about 400 AD. The ancient Greeks and Romans lived in a fairly pleasant climate, which can also be seen from the airy robes in which antique statues are often dressed.
Some studies in a bog in Penido Vello in Spain have shown that in Roman times it was around 2-2.5 degrees warmer than in the present.
The Roman warm period is amply documented by numerous analyzes of sediments, tree rings, ice cores and pollen - especially from the northern hemisphere. Studies from China, North America, Venezuela, South Africa, Iceland, Greenland and the Sargasso Sea have all demonstrated the Roman Warm Period. Additionally, it has been documented by ancient authors and historical events.
The Roman Columella wrote in the first century after Christ in "De Re Rustica" (Book 1), citing the "reliable author Saserna": "Areas (in Italy), which previously, due to the regular severity of the weather, could not provide any protection for vine plants or olive trees planted there, now that the former cold has subsided, produce olives and wine in the greatest abundance."
Coins from Carthage with elephant motifs. Note the first coin from the left: the man is quite large relative to the elephant, which indicates that the elephants really were relatively small.
Hannibal brought a whole army, equipped with 37 war elephants, over the Alps in 218 BC - in winter.
The ancient writer Pausanias wrote in the second century on the use of war elephants: "For although the use of ivory in arts and crafts apparently has been known from ancient times by all men, no one had seen the actual animals before the Macedonians invaded Asia, except the Indians themselves, the Libyans and their neighbors." It sounds as if he thinks that elephants naturally belong in both India and Libya.
Roman bridges in Syria, Jordan and Iraq. The rivers they once spanned dried up long ago.
Top left: The ruins of a Roman bridge between the villages of Ayyash and Ain Abu Jima in the northwestern part of the Jebel Bishri by the river Euphrates. - Photo: Minna Lonnqvist.
Top right: Roman bridge in Uthma in Syria - photo: arminhermann.
Middle left: Roman bridge in Maharda in Syria.
Middle right: The Roman bridge Djemarin in Syria.
Bottom left: The Roman bridge Ain Diwar by Malikiyeh in Syria near the Turkish border.
Bottom right: Roman bridge that spans the Sabun Suyu, which was a tributary of Afrin in Syria.
It seems unlikely that Hannibal should have imported his 37 war elephants all the way from India to his native Spain. Therefore, most assume that they were African elephants.
The ordinary African steppe elephant is difficult to domesticate, and therefore the animals most likely belonged to a now-extinct species called the North African forest elephant, which was slightly smaller than both the Indian elephant and the common African steppe elephant.
Around the year 400 AD the Roman Symmachus complained in his letters about the duties, which he had to pay for the bears, which he imported from North Africa to use in the circus game that his son was obliged to give in connection with his entry into the senatorial order. The crocodiles, that he had managed to find, refused to eat, and he was worried that the poor animals should die from starvation before they could play their part in his son's circus game.
Locating vineyards and olive trees is also a good indicator of climate. During the culmination of the Roman warm period, olive trees grew in the Rhine Valley in Germany. Citrus trees and grapes were cultivated in England as far north as near Hadrian's Wall near Newcastle. Scientists have found olive presses in Sagalassos in the Anatolian highlands of present-day Turkey, which is an area, where it today is too cold to cultivate olives.
The continued spread of vineyards to the north can be deduced from a decree of Emperor Domitian which prohibits the cultivation of wine in the Empire's western and northern provinces beyond the Alps. The decree was 280 AD revoked by Probus, who allowed the Romans to introduce vineyard in Germany and England.
The ruined city of Petra in the Jordanian desert. It was supplied with water through a constructed channel. We must assume that when the city was built, there was a supply of water on site; as the climate became drier, the inhabitants built a channel for water, and finally people gave up and abandoned the city. It is known that Petra was still inhabited in the Crusader period around 1100 AD, when King Baldwin I of Jerusalem stayed for a while in the city.
Strabo wrote that around the years 120 to 114 BC a series of storms in the North Sea caused the so-called Cimbrian Flood, which covered large areas along the coasts of Denmark and northern Germany with water and thereby triggered the migration of the Cimbri and the Teutones.
North Africa was Rome's granary, which can be difficult to imagine today; it was much greener and more fertile than at present. The city of Petra in Jordan thrived between 300 BC and 100 AD; today it is abandoned and lies far out in the Jordanian desert.
During the Roman warm period, a more humid climate than today's prevailed in North Africa and the Middle East. In Alexandria, Claudius Ptolemy kept a weather diary around 120 AD. It shows a remarkable difference from today's climate at that place: it rained every month except August, there was thunder in all the summer months and in some others, and very hot days were most common in July and August.
Ptolemy of Alexandria also wrote about four rivers in Arabia and about trade routes that had previously been used but were already impassable in his time. In the Middle East, ruins of many Roman bridges still exist, built over rivers that are now dry.
The Roman Warm Period ended around 350-400 AD.
Top: Vandals, Svebes and Alans crossed the frozen Rhine in 406 AD - painting by an unknown artist.
Bottom: Density of growth rings in larch trees at Zermatt in the Alps. Time progresses from left to right. The vertical red line marks the year 400 AD - from "Climate, History and the Modern World" by H. H. Lamb.
The Vandals crossed the frozen Rhine on New Year's Eve 406 AD, thus commencing the Migration Period and heralding the downfall of the Western Roman Empire. The fact that the Rhine was frozen demonstrates a completely different climate from the one that prevailed when olive trees were growing in the Rhine Valley. I do not recall the Rhine having frozen over in modern times.
Many believe that widespread drought in central Eurasia triggered the migrations towards both China and the Roman Empire from about 300 AD to 500 AD.
Top: The Scandinavian legend about the Fimbul winter.
Bottom: Changes in the upper tree-line in two areas, the White Mountains of California and the Alps of Switzerland and Austria. It shows that the tree-line, and thus the temperature, has largely been declining for at least 3,000 years. The vertical red line marks 400 AD - from "Climate, History and the Modern World" by H. H. Lamb.
H. H. Lamb wrote in "Climate, History and the Modern World": "For centuries in Roman times, from about 150 BC to 300 AD or some few decades later, camel caravans used the Great Silk Road through Asia for trading in luxury goods from China. But from the fourth century AD, as we know from changes in the water level of the Caspian Sea and from studies of irregularities in rivers, lakes and abandoned cities in Sinkiang and Central Asia, drought developed to such an extent that it stopped the traffic on this route. Other severe stages of this drought occurred between 300 AD and 800 AD, and especially around these dates, as can be seen from old shorelines and old port structures that indicate a very low water level in the Caspian Sea around these times." (page 159).
Chinese cave painting from the Mogao Caves at Dunhuang, from the Northern Wei period (386-535 AD) - some tough men - were they kings?
The drought in Eurasia thus appears to have had two maxima, one around 300 AD and one around 800 AD.
Already around 300 AD, China had problems with refugees from the steppe. The "Five Hu" peoples from the north - Xiong Nu, Xianbei, Di, Qiang and Jie - took refuge in the empire behind the great wall. When the mandarins ordered them to travel back to their homelands, they answered with force and created their own migration states. This began the period in Chinese history called the "Sixteen Kingdoms".
The drought on the eastern steppe also drove many peoples towards the Roman Empire. From around the year 400 AD, Visigoths, Ostrogoths, Vandals, Alans, Svebes, Huns, Gepides, Angles, Saxons, Franks, Jutes, Alemanns, Burgunds and Langobards invaded the empire. Later came attacks by Avars, Magyars, Arabs, Vikings and Wends. The political divisions of modern Europe are mainly a result of the showdown among all these peoples.
Ragnarok - Scandinavian mythology tells of the Fimbul Winter that will herald the Ragnarok battle, the end of the world.
In the first part of Snorri's Edda, Gylfaginning (55), it is told: "Then said Ganglere: What tidings are to be told of Ragnarok? Of this, I have never heard before. Har answered: Great things are to be said thereof. First, there is a winter called the Fimbul-winter, when snow drives from all quarters, the frosts are so severe, the winds so sharp and piercing, that there is no joy in the sun. There are three such winters in succession, without any intervening summer. But before these, there are three other winters, during which great wars rage all over the world. Brothers slay each other for the sake of gain, and no one spares his father or mother in that manslaughter and adultery."
The Byzantine historian Procopius wrote of the year 536 AD in his report on the Vandal war: "During this year a most dread portent took place. For the sun gave forth its light without brightness - and it seemed exceedingly like the sun in eclipse, for the beams it shed were not clear. From the moment the phenomenon showed up, men were continually afflicted by war, famine and other deadly things." His fellow Byzantine Lydus wrote: "The sun became weak - almost a full year - so that the fruits died without harvest."
Michael, the Patriarch of Antioch in Syria (1126-1199 AD), wrote about the year 536 AD: "The sun became dark and the eclipse lasted for 18 months."
The Irish Annals of Ulster record: "A shortage of bread in the year 536 AD." The Annals of Inisfallen note: "A shortage of bread in the years 536-539 AD." From China it was reported in these years that snow fell in August.
Gregory of Tours (539-594 AD) wrote in "History of the Franks" (Book 3:37): "In this year the winter was terrible and more rigorous than usual, so that the rivers were held in the iron grip of the frost and made into a road for the people, as if they were dry land. Birds, too, were affected by cold and hunger and were captured by hand, without using the snare, when the snow was deep."
Extent of sea ice in the Arctic Ocean in winter - The yellow line shows the average spread of the ice in January from 1979 to 2000. The white area represents the distribution in 2011.
This hard winter in the Loire Valley has been dated by Gregory to the year when Theodobert died. That happened 37 years after the death of Clovis, king of the Franks, who died in 511 AD; the year of the harsh winter is then 548 AD. This suggests that the severe winters in Europe from the year 536 AD may have stretched at least to 548 AD. (Most of the material about the year 536 AD is from Flemming Rickfors - see the link below.)
In Jaeren in Norway, large areas of farmland were abandoned around the year 500 AD, which indicates a colder and harsher climate. Studies of peat bogs in Jutland show evidence of drifting sand from around the same time (Lamb).
The Irish monk and geographer Dicuil wrote the book "De Mensura Orbis Terrae", which became known at the Carolingian court in the year 825 AD. He described islands in the ocean that had previously been inhabited by hermits who had since been displaced by Vikings. He also passed on a description from monks who had lived in "Thule" until the year 765 AD. They had experienced the frozen sea, which lay one day's sailing to the north. They told of Thule that "there was no darkness to prevent any from doing what they wanted to do". Their description of the sun's path, as well as of the temperature, fits Iceland perfectly.
Reconstruction of an Irish curragh from Tim Severin's book "The Brendan Voyage" - To prove that St. Brendan really could have sailed from Ireland via the Faroe Islands, Iceland and Greenland to Labrador, Tim Severin built a curragh and carried out the journey. It is obvious that such a vessel could not cope with winter in the North Atlantic. The legend of St. Brendan's voyage contains no information on climate.
We must assume that Dicuil means "one day's sailing" in summer. It must have been altogether impossible for the Irish monks to navigate the North Atlantic in winter in their small vessels, which may have resembled the traditional leather-clad Irish curragh. This indicates that the extent of the sea ice was considerably greater around 700-800 AD than it is today.
Monastery annals tell of increasingly severe winters. The winter of 763-64 AD was described in many places in Europe as one of huge snowfall, with heavy losses of olive and fig trees in southern Europe.
The winter of 858-60 AD was also clearly unusually severe. There was ice on the Strait of the Dardanelles, and the ice on the Adriatic Sea near Venice was thick enough to support fully loaded wagons.
Also around 860 AD, the Norwegian Floki Vilgerdason became the first Scandinavian to navigate the waters around Iceland. When he visited the northern Arnarfjord, he found it packed with ice, which indicates that the climate was considerably colder than nowadays, even at these northern latitudes.
Warm and cold periods through 2,000 years, compiled from the density of the growth rings of pine trees in northern Scandinavia for the period 138 BC to 2006 AD - The blue line shows the actual measurements. The red graph is the result of mathematical smoothing with a 100-year rolling average. The dotted lines above and below the red temperature graph represent the uncertainty. The red dotted line at the top shows the general trend, namely that the Middle Ages were warmer than today, and that the Roman era was warmer than the Middle Ages. The vertical gray fields represent selected 30-year periods. The temperature scale to the left shows the deviation from the mean temperature of the period 1951-1980. JJA means June, July, August. Based on data from Jan Esper, Ulf Buntgen, Mauri Timonen and David C. Frank, "Variability and extremes of northern Scandinavian summer temperatures over the past two millennia", 2012.
Advanced and accurate measurements of the density of growth rings in northern Scandinavian pine trees have formed the basis for a highly accurate modern reconstruction of temperatures over the past 2,000 years. It shows that today's warm period is colder than the medieval warming, which in turn was colder than the Roman era. In modern times there have been some particularly warm years, such as 2004, but they become much less visible after mathematical smoothing. There have probably always been a few years of exceptional heat or cold, as the historical accounts of particularly severe winters above also suggest.
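To make the smoothing step concrete, here is a minimal sketch of a centered rolling average in Python. The window length and the synthetic series are assumptions for illustration only; this is not the processing pipeline Esper and his co-authors actually used.

```python
import numpy as np

def rolling_mean(series, window=100):
    """Centered rolling average; the window shrinks near the edges."""
    series = np.asarray(series, dtype=float)
    half = window // 2
    smoothed = np.empty_like(series)
    for i in range(len(series)):
        lo = max(0, i - half)
        hi = min(len(series), i + half + 1)
        smoothed[i] = series[lo:hi].mean()
    return smoothed

# Toy example: a slight cooling trend plus year-to-year noise,
# mimicking a 2,000-year summer-temperature series.
years = np.arange(-138, 2007)
rng = np.random.default_rng(0)
temps = -0.0003 * (years - years[0]) + rng.normal(0.0, 0.5, years.size)
print(rolling_mean(temps)[:5])
```

Single warm outliers like 2004 barely move a 100-year mean, which is why they fade from the smoothed red curve.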
Sea surface temperature in the East China Sea (between Japan, Taiwan and China). It can be seen that changes in temperature did not happen simultaneously over the whole Earth. The Roman Warm Period also took place in China; the cold spell of the Migration Period was significant but not very long-lasting, and it was replaced by the Sui-Tang warm period. The Medieval warm period was not particularly significant in East Asia, and nor was the Little Ice Age. But the steadily falling temperature trend has been the same in China as around the Atlantic.
Jan Esper and his co-authors of "Variability and extremes of northern Scandinavian summer temperatures" conclude that their results "provide evidence of considerable warming during the Roman and Medieval warm periods on a larger scale and of longer duration than the twentieth-century warming". More specifically, they place the Medieval Warm Period at around 700 to 1300 AD and identify the warmest 30-year interval of this period, 918 to 947 AD, in which the June, July and August temperatures were about 0.3 degrees warmer than in the warmest 30-year interval of the current warm period. Their findings differ from those of other researchers, who hold that the Medieval warming began around 950 AD.
Norse settlements in Greenland: the Eastern Settlement, the Middle Settlement (M) and the Western Settlement (V). When the Norse settlements were at their peak, it is estimated from the size and number of farms that there were between 3,000 and 5,000 Scandinavians in Greenland, roughly corresponding to the population of Copenhagen at the same time.
In North America there seems to have been a relatively warm and humid period between 700 AD and 1200 AD, during which maize cultivation spread along the Mississippi up to Minnesota. In old refuse heaps in Iowa, archaeologists have found bones of elk and deer, which are woodland animals; after 1200 AD these were rather abruptly replaced by bones of bison, a steppe animal, indicating a change to a drier climate.
Pollen analyses from Lake Chad in Africa, conducted by J. Maley of the Languedoc University in France, show a maximum of pollen from water-demanding plants in the period 700-1200 AD, and that these plants gradually disappeared during the period 1300 to 1500 AD (Lamb).
The Medieval warming did not occur simultaneously across the Earth. In East Asia it was partially replaced by the Sui-Tang warm period, which occurred between about 500 and 800 AD. The medieval warmth was most noticeable around the North Atlantic, but it can be traced even in Antarctica.
The story of the Scandinavian settlements in Greenland is a good illustration of the Medieval warm period.
In 986 Erik the Red sailed for Greenland with 25 ships. Only 14 reached the destination; some sank, and others returned to Iceland. Most of the 14 surviving ships sailed into the fjords in the south around Julianehaab and founded the Eastern Settlement; others sailed a little further north and founded the small Middle Settlement around present-day Ivigtut; and some sailed all the way up to the Godthaab fjord and founded the Western Settlement.
The Greenlanders built farms, houses and churches. There was both a monastery and a nunnery.
In the Norse farmers' manure heaps, archaeologists have found large quantities of fish bones from cod. This shows that it was generally warmer than at present, because through most of the 1900s it was not possible to fish for cod in Greenland waters; it has been too cold.
The Greenlander Thorkell Farserk was a cousin of Erik the Red. Once, when he was expecting a visit from Erik, he wanted to fetch a sheep that grazed on the island of Hvalsey in Hvalseyjarfjord. As he happened to have no boat available, he swam out to the island, got hold of a sheep and swam back again, so that he could entertain his cousin.
The distance to the island is slightly more than 3.2 kilometers. Dr. Pugh of the Medical Research Laboratories gave H. H. Lamb his assessment of this achievement. From studies of Channel swimmers' endurance, we know that 10 degrees is the absolute lowest water temperature at which an experienced swimmer could cover this distance. In modern times, the water temperature in Hvalseyjarfjord is usually in the range of 3-6 degrees in summer. The water at that time must therefore have been at least 4 degrees warmer than today.
The Norwegian medieval document Kongespejlet (the King's Mirror), from about 1250 AD, tells of the sailing route to Greenland: "As soon as the great ocean has been crossed, there is such an abundance of ice that nothing like it is known from any other place in the whole world, and it lies so far from land that there are no less than four or more days of travel over the ice to reach it; but this ice lies more to the north-east and north of the land than to the south-west and west."
This must mean that when you sailed due west from Iceland in summer, which was the original route, you would meet sea ice along the coast of Greenland. Nowadays there is usually no sea ice that far south in summer; but perhaps they sailed very early in the year.
The sailing routes of the Vikings to Iceland, Greenland and America. - From McGovern and Perdikaris, 2000.
Kongespejlet further narrates about Greenland: "But since you asked whether the country is free of ice or not, or whether it is covered with ice like the ocean, you should know for certain that it is only a small part of the country that is free of ice, while all the rest is covered with it; and people do not know whether the country is large or small, because all the mountain ranges and all the valleys are hidden by the ice, so that nowhere is there an opening in it." - "Few in number are the people in this country, because there are few places so free of ice as to be habitable." - "But since you ask what the people live on in that country, since they sow no grain, you must know that there are many other countries where the people sow nothing, and yet people live in them, for man does not live by bread alone. Of Greenland it is said that there are good pastures and both good and large farms, for the people have many cattle and many sheep and produce much butter and cheese. On this the inhabitants live, and for the most part also on meat and all kinds of hunting prey: reindeer, whale, white bear and seal meat." - "But since you asked whether the sun shines in Greenland, or whether the weather could ever be beautiful as in other countries, you must know for certain that there can be beautiful sunshine, and that the country most of the time in summer can be said to have good weather."
Reconstructed Viking house at L'Anse aux Meadows in Newfoundland - After the excavation of 2,400 Viking objects, there is no doubt that the Vikings discovered America long before Columbus.
"But there is a big difference in the movements of the sun because as soon as it becomes winter, it is almost always night, but as soon as it is summer, it is almost all the time day. And when the sun goes highest, it has sufficient force for shine and brightness, but only little to warming and heat; however, it has so much force that where the ground is free of ice, it heats so much that it can provide good and fragrant grass; therefore, people can quite well live in the country, where it is thawed, but that is indeed very little." - "When it is stormy weather it happens with greater rigor there than in most other places, both in terms of the power of the storms and the violence of frost and snow."
But it seems that the old document Kongespejlet was not completely well informed. Danish researchers from the National Museum have found small pieces of charred barley in Norse Greenland middens. The find proves that the Norse Greenlanders actually cultivated barley; they were able to produce the essential main ingredient needed to brew beer.
"If the grain had been imported, it would have been threshed, so when we find unthreshed parts, it is a very strong indication that the first Norsemen in Greenland cultivated their own grain," project leader Peter Steen Henriksen said. "One must assume that if the grain had arrived in Greenland by ship, it would have been threshed first; otherwise it would take up too much space."
Visible ditches and furrows from medieval fields at Redesdale in Northumberland in England, 300-320 meters above sea level - From H. H. Lamb, "Climate, History and the Modern World".
Another good indicator of climate is the spread of vineyards. When William the Conqueror had the Domesday Book prepared in 1086 AD, vineyards were recorded in 46 locations in southern England, from East Anglia to modern-day Somerset. Today there are 400 English wine producers, but that must be seen in the light of the fact that modern wine-growers have developed vines that are more cold-tolerant than the historical varieties. H. H. Lamb concludes that the medieval average summer temperature was probably 0.7 to 1.0 degrees higher than today's, and the climate must have been less prone to frost in May.
At the abandoned village of Houndtor, 400 meters above sea level in the highlands of Dartmoor in the county of Devon in England, and at Redesdale in Northumberland near the Scottish border, 300-320 meters above sea level, there are still visible traces of cultivated fields from the Middle Ages (H. H. Lamb). At these altitudes, grain cannot be cultivated in modern times.
In medieval York, archaeologists have found the presence of the bug "Heterogaster urticae", which today lives only on nettles in sunny places in southern England; this also indicates that the Middle Ages were warmer than the present (Lamb).
In the Middle Ages two of the Sicilian rivers, the Erminio and the San Leonardo, were described as navigable, which today is quite impossible even with small vessels. This shows that precipitation was greater in the warm medieval climate.
The Rhone Glacier in Switzerland on a postcard from 1870, compared with reality in 2006. The development of this glacier has been very visible, as it can be followed from the nearby small town. During the last 120 years the glacier has retreated about 1,300 meters, leaving a trail of bare rock.
There is no consensus on when the Little Ice Age started and ended, but let us stick to the view that the Medieval warm period ended and the Little Ice Age began around the year 1300 AD, as proposed by Jan Esper and his co-authors above. During the Little Ice Age, winters were alternately mild and very cold, just as today, but it was generally colder - and it became really cold around 1690 AD, which may be designated the culmination of the Little Ice Age. Winters around 1660 and 1770 AD were also extraordinarily cold. The winter of 1850 AD was very cold too, but then the temperature rose, and one can say that the Little Ice Age ended and the modern warm period began.
The cold came earlier at high latitudes than in southern Europe. In Greenland and in the northern and northeastern parts of Iceland, the growing of barley was abandoned as early as about 1300 AD, while wine-growing in southern England and northern France, and the cultivation of oranges in Provence in southern France, were only abandoned around 1500 AD.
Eight time series of glacier advances and retreats through the last 6,000 years of the Holocene. Most are not continuous, because they are based on morphological evidence such as moraines, U-shaped glacier valleys etc.; the Scandinavian series are continuous. To make comparison easier, they are all presented as graphs. The graphs for the Alps are best supported by tree rings and other documentation for the past 3,500 years. Except for Scandinavia and the Alps, the exact times of the glacial retreats are not known, and they are drawn somewhat arbitrarily. The brown areas show where the graphs have been inferred from indirect evidence such as remains of trees above the modern tree-line, buried topsoil etc.
When a graph lies above its horizontal line, it indicates warming and glaciers smaller than today's; when it lies below the line, it indicates cooling and glaciers bigger than today's. Note that the names of the mountains are placed at the start of each graph, not at the corresponding horizontal line. All graphs end on their horizontal line, which represents the present.
Sources of information:
Franz Josef Land (an archipelago in the Arctic Ocean): Lubinsky et al. (1999).
Spitsbergen (an island in the Arctic Ocean, also known as Svalbard): Svendsen and Mangerud (1997) and Humlum et al. (2005).
Northern Scandinavia: Nesje et al. (2005), Bakke et al. (2005), IPCC (2007).
Southern Scandinavia: Matthews et al. (2000,2005), Lie et al. (2004), IPCC (2007).
Alps: Holzhauser et al. (2005), Jörin et al. (2006).
Brooks Range (in northern Alaska): Ellis and Calkin (1984).
Western Cordillera-North America: Koch and Clague (2006).
Western Cordillera-South America: Koch and Clague (2006).
Glaciers of the Northern Hemisphere have typically been smaller in the past; they have grown through the Holocene to a maximum during the Little Ice Age, after which they have retreated a little.
From: "Mid- to Late Holocene climate change: an overview" by Heinz Wanner, Jurg Beer, Jonathan Butikofer and others.
The Little Ice Age seems to have been most noticeable in Europe and North America, but it could also be felt in China, Alaska, the Caucasus and the Himalayas; everywhere the glaciers grew. Only South America seems to have been largely unaffected.
Cultivation of heat-demanding fruit trees such as mandarins and oranges had to be abandoned in China's southern Jiangxi province, where these varieties had previously been cultivated for hundreds of years. In the first half of the 1600s, China was plagued by droughts and floods. Unable to pay their taxes to the Ming emperors, the peasants revolted and thus paved the way for the Manchu conquest of China and the subsequent Qing dynasty.
The Western Settlement in the Godthaab fjord in Greenland was abandoned as early as around 1350 AD, at the beginning of the cooling period.
Icelandic sagas tell that in 1350 AD the bishop of the Eastern Settlement received reports that the Western Settlement needed assistance to drive away the aggressive Inuit, whom the Norsemen called Skraelings. The bishop sent a church envoy, Ivar Bardsson, to the rescue. But when he arrived at the Western Settlement, he found the country deserted except for a few stray livestock. Taking the Icelandic sagas literally, it was the Skraelings more than the worsening climate that caused the end of the Western Settlement. Apparently, encounters between Inuit and Norsemen did not always proceed peacefully.
The Western Settlement covered very nearly what today is Nuuk (Godthaab) municipality. The red dots indicate farms.
In 1723 the Norwegian explorer and missionary Hans Egede visited the Godthaab district, and he asked the Inuit at Ujaragssuit, near the ruins of the Western Settlement's church, whether they had destroyed it. They replied no, and told him that the Qavdlunak (Norsemen) had done it themselves before they departed.
Ivar Bardsson lived in Greenland from 1341 to 1364. He wrote on the navigation from Iceland to Greenland: "From Snefelsness in Iceland to Greenland by the shortest route: two days and three nights, to be sailed due west. - In the sea there is a reef called Gunbjornsskaer. That was the old route, but now the ice comes from the north so close to the reefs that no one can sail the old route without risking his life."
The last certain report we have of the Eastern Settlement is from some Icelandic travelers' account of a wedding in Hvalsey church: "One thousand four hundred and eight years after our Lord Jesus Christ's birth we were present, saw and heard that in Hvalsoy in Greenland Sigrid Bjornsdatter was married to Thorstein Olafson." Thus it is written in the laconic Icelandic sources that tell of Norse life in Greenland. The wedding took place on the first Sunday after Cross Mass, 14 September 1408 AD.
Excavation of a Norse grave at Vatnahverfi in the Julianehaab district.
In 1492, however, Pope Alexander VI expressed his anxiety about the situation in Christendom's northern outpost: "The Church in Garda lies at the end of the world in Greenland, and the people who dwell there are accustomed to live on dried fish and milk for lack of bread, wine and oil - shipping to that country is very irregular because of the widespread ice on the water - no ship has called at their shores for eighty years, it is believed - or if travel takes place, it is thought, only in August - and it is also said that no bishop or priest has held office there for eighty years or so." The people of Greenland had been abandoned by the church for so long that they had returned to "pagan practice", wrote the Pope, as he offered the Benedictine monk Matthias Knutson the position of bishop of Gardar if he would be willing to travel there and lead the people back to Christianity.
Carbon-14 analyses of bones from Norse cemeteries in Greenland suggest that the Eastern Settlement existed until about the year 1500 AD.
During the "Little Ice Age" in Europe flourished life on frozen canals in the Netherlands. People were skating on the ice and shopped in market stalls, which was established on the ice. It was a popular motif for several painters - Painting by Francis G. Maye.
Many Eskimo legends tell of battles between Norsemen and Inuit: of Norsemen who kill many Inuit, and of Inuit who kill Norsemen. Hans Egede made many trips along the coast in search of the missing Norsemen. When he came to the Julianehaab district in 1723, the Inuit there seemed to him "quite beautiful and white", as opposed to those he had previously met. His son Niels Egede was the first to learn the Inuit language. The Inuit told him that the Norsemen had been attacked by pirates, and their women and children had fled to the Inuit. When they returned, all the Norse houses had been burned down and the men killed. The Inuit then went deeper into the fjord and married the Norse women.
A ship commanded by John "Greenlander" was traveling from Hamburg to Iceland in 1540 but was blown off course by a storm. The crew went ashore in Greenland at the Eastern Settlement. They found a settlement that looked like those in Iceland, but the buildings were empty, apart from the body of an old man, dressed in leather with a cap of cloth, lying on the floor of a house with a worn knife in his hand.
Temperature and humidity near Bern and Zurich in Switzerland, averaged for each decade from around 1520 to 1820 AD - The solid line represents temperature and the dotted line humidity. It can be seen that the Little Ice Age peaked around 1690. It can also be seen that there was widespread frost in the spring months of March, April and May; even in the summer months there was freezing weather. - Prepared by Dr. Christian Pfister of the Geographical Institute at the University of Bern - From "Climate, History and the Modern World" by H. H. Lamb.
When Hans Egede, after his stay in Greenland, lived in Copenhagen in 1741, he wrote about a monk of Greenlandic blood: "In a German author named Dithmar Blefken I found an account of a monk who was supposed to have been born in Greenland and, with the bishop of that same place, Anno 1645 (presumably it must be 1545) to have travelled to Norway, and afterwards to have lived in Iceland in 1546, where the author, according to his report, personally spoke with him. This same monk told strange and curious things about a Dominican monastery in Greenland called St. Thomas Monastery, into which he had been placed by his parents in his childhood, with the intention that he should become a monk there."
Another source, which Hans Egede also mentions, speaks of this monk: a Danish sea captain named Jacob Hall, who had also met the Greenlandic monk and described him as follows: "He had a wide face, and his color was brown."
Around 1623-25 AD, Bjorn Jonsson of Skardsa in Iceland reported that he had found pieces of wreckage on the beach of a kind typical of ships built in Greenland.
In 1529 a huge Turkish army under Sultan Suleyman had to withdraw from the siege of Vienna due to poor military results, cold rain and heavy snow as early as October.
The Turkish siege of Vienna in 1529 - Because of cold rain and heavy snow already in October, the Turks had to abandon the siege in the middle of that month. They lost many soldiers and much equipment during the retreat to Constantinople through snow and mud.
According to a weather diary from Zurich covering the period 1546-1576 AD, the frequency of snowfall increased by 44% in the first part of the period, up to 1563, and by a further 63% in the last part, up to 1576 AD (Lamb).
Tycho Brahe's observations in Denmark from 1582 to 1597 indicate a winter temperature 1.5 degrees below the average for the period 1880 to 1930 AD. His observations also show that winds from the east were dominant; in his notes, southeast was the most frequent wind direction (Lamb).
A priest in eastern Iceland named Olafur Einarsson wrote a poem in the early 1600s that illustrates the Icelanders' problems:
Formerly the earth produced all sorts
of fruit, plants and roots.
But now almost nothing grows -
Then the floods, the lakes and the blue waves
Brought abundant fish.
But now hardly one can be seen.
The misery increases more.
The same applies to other goods -
Frost and cold torment people
The good years are rare.
If everything were to be put into verse,
only a few would care for the wretched -
The Battle of Tybrind Vig on 30 January 1658. The Swedish king Karl X Gustav marched over the ice of the Lille Belt strait with 10,000 men at night and caught the Danes completely by surprise. Painting by Johan Philip Lemke.
In the winter of 1657-1658, war broke out between Denmark-Norway and Sweden. At that time the Swedish king was waging war in Poland. The winter proved to be unusually cold, and all the Danish waters froze over completely. On the announcement of the Danish declaration of war, the Swedish king Karl Gustav lost all interest in Poland and turned immediately against Denmark. He led his ten thousand men, with horses and a few cannons, over the ice from island to island, and soon they appeared before the walls of Copenhagen. At the same time, the Danes were unable to use their fleet because of the ice. Denmark-Norway was completely unprepared for this development, and King Frederik 3 asked for negotiations. At the Peace of Roskilde, Denmark-Norway had to cede Scania, Blekinge, Bornholm and the Norwegian provinces of Bohuslen and Trondheim.
Around 1580 AD, the Denmark Strait between Greenland and Iceland was completely blocked by pack ice in several summers. In the winter of 1695 AD, Iceland was completely surrounded by sea ice.
As early as about 1615 AD, the Faroese cod fishery began to fail, and through the thirty years of the Little Ice Age climax, 1675-1704 AD, there were no cod at all in Faroese waters. A recent Danish study has shown that cod can thrive at many different temperatures, but they seem to prefer temperatures between 1 and 8 degrees when they breed; perhaps the water temperatures in the North Atlantic were simply too low. Through most of modern times it has not been possible to fish for cod in the waters around Greenland, probably for the same reason.
The Medieval warming and the Little Ice Age. - From the BBC documentary: "The Great Global Warming Swindle".
In Norway, new small glaciers formed in the mountains of Hardanger during the maximum of the Little Ice Age. Between 1690 and 1710 AD there were numerous cases in Norway of farms destroyed by advancing glaciers. The Nigard glacier, for example, advanced 3 km between 1710 and 1743 AD, destroying a farm named Nigard; the owner sent a letter to King Frederik 5 asking for compensation for the destruction.
In North America, too, the winters were very cold and long. The inhabitants of the small English colony of Jamestown, founded in 1607 on the coast of Virginia in the present-day U.S., complained about unusually long and cold winters. Quebec's founder Samuel Champlain noted in June of the year 1608 AD ice thick enough to bear a man's weight along the shores of Lake Superior.
There is no consensus on when the Little Ice Age ended and the Modern Warm Period began, but I will stick to around 1850 AD, as proposed by Jan Esper, Ulf Buntgen and others above.
The global temperature in the modern warm period - The dotted red line shows the results from GISS, the Goddard Institute for Space Studies at NASA. The dotted green line shows the results from the UK Met Office Hadley Centre, compiled by the Climatic Research Unit of the University of East Anglia. The solid red and green lines show the same two series mathematically smoothed with a 10-year rolling average. The horizontal 0.0 line represents the average temperature of the period 1850-1899 (UK Met Office Hadley Centre) and 1880-1899 (NASA GISS). - The graph is from the European Environment Agency. The upward trend of the solid lines since 1998 must be caused by some mathematical subtlety, as the temperature has not increased since 1998, which the dotted lines also show.
Since 1850, the Earth's average global temperature has increased by around 0.8 degrees compared to the average of the period between 1850 and 1899. Europe has warmed 1.2 degrees, which is more than the global average.
It started cold. In January and February 1864 AD, when the Danish soldiers behind the ancient defensive dike Dannevirke awaited the Prussian and Austrian armies, both the wide marshes of western Schleswig, which should have protected their right flank, and the fjord Slien, which should have protected their left flank, were completely frozen; precisely for that reason, the Danes made an organized retreat to the Dybboel redoubts to avoid encirclement. It also seems to have been quite cold in the trenches of Flanders during the First World War.
Top: Painting by Nils Simonsen, "Episode from the retreat from Dannevirke on 5-6 February 1864", painted in 1864.
Bottom: Two young women from Malmo in front of the frozen Oresund in 1924.
There was occasionally ice on the Thames. In 1924 the inner Danish waters were completely frozen, and one could walk from Scania to Copenhagen.
Some of the older generations of Danes can probably remember that their family in the countryside had a sleigh standing in a dusty corner of the barn. At the beginning of the twentieth century it was taken for granted that there would be snow in winter, and when you wanted to go somewhere, you simply harnessed the horses to the sleigh. Perhaps the sleighs were used for the last time during the severe winters of the 1940s.
Boise City in Oklahoma, USA, was hit on 15 April 1935 by a giant dust storm that blew away the upper topsoil of the fields. This great disaster, which struck the American Midwest, is called "the Dust Bowl".
At the end of the 1800s, the western prairie was the last land in North America to be brought under cultivation. The settlers had a few good years, and then the problems began, with locusts and drought. The "Dust Bowl" of the 1930s was caused by degraded soil and years without rain; entire fields took to the air in great black dust storms that could blow for days. "There was dust everywhere. It came into the houses, into the food and between the teeth," a settler recalled. Hundreds of thousands of people loaded up the few possessions they had and fled to California - as in Steinbeck's "The Grapes of Wrath".
Also in China, very large areas of newly reclaimed land in the provinces of Shaanxi and Inner Mongolia were reduced to desert in the first half of the 1900s.
From 1915 and throughout the interwar period the temperature increased, and by the end of the Second World War the average temperature had risen about one and a half degrees since 1850. This warming period started before automobiles, airplanes and other CO2-emitting vehicles had been invented at all, while man-made CO2 emissions were still negligible.
Top: Ice winter in 1956 at Dalum near Odense - Denmark.
Bottom: Emissions of CO2 from fossil sources from the year 1800 to 2000 AD. - from Wikipedia.
During the boom of 1950-1980, industry flourished as never before, supplying cars, refrigerators, airplanes and all kinds of consumer goods. Most anthropogenic CO2 has been emitted precisely in this period. Supporters of the theory of anthropogenic global warming believe that CO2 emissions have driven the global temperature up. Yet in the very period when the industrial boom took place, the temperature dropped over four decades, and Northern Europe and North America again experienced winters with lots of snow.
Icebreaker in the Store Belt of Denmark in the winter of 1981-82 - Photo: Ove Hesstrup Hansen.
The ice winter of 1981-82 was the coldest ever measured in Denmark. The winter started as early as 7 December, and on 17 December minus 25.6 degrees Celsius was measured in Jutland - during the daytime.
Only during the economic crisis of the late eighties did the temperature begin to rise again.
One-third of all human CO2 emissions have taken place since 1998. However, the global temperatures have not increased in that same period.
The theory that anthropogenic greenhouse gases cause global warming assumes that the long-wave heat radiation from the heated Earth is blocked by CO2, which acts like the glass of a greenhouse and thereby prevents heat from radiating back into space. In a greenhouse, the long-wave outgoing heat radiation from the interior is stopped by the glass, which converts the energy of the radiation into heat in the glass.
Top: A locomotive is dug free of snow on the Roskilde railway - Denmark - in winter 1942.
Bottom: Ice packs on the Oeresund at Charlottenlund near Copenhagen in the winter of 1987.
Therefore, scientists reason that if the theory of anthropogenic global warming through the greenhouse effect is true, we should be able to measure a warming in the atmosphere. Just as the long-wave outgoing heat radiation delivers its energy as heat to the glass of a greenhouse, the long-wave outgoing heat radiation from the Earth's surface and the clouds should deliver some of its energy as heat to the atmosphere, since that is where, it is said, it is stopped by CO2. Scientists have calculated that most of this heat should be found at an altitude of about 10 km. But whether they measure with weather balloons or satellites, no warming can be detected at this altitude - on the contrary. It is clear that the heating takes place at the surface of the Earth and not in the atmosphere, suggesting that the idea of man-made global warming due to CO2 emissions is not true.
It is true that both atmospheric CO2 and the global temperature have increased in the modern warming period; but the warming does not match the theory - it has occurred in the wrong place at the wrong times.
The Sun as it looked on 8 June 2013 - from spaceweather.com - For a very long time no sunspots had appeared, and experts had begun to fear a new Little Ice Age. But now, finally, some small spots showed up.
On the Sun one can see dark spots called sunspots. They are typically of the order of the size of Earth, between 4,000 and 50,000 kilometers in diameter. The spots are about 1,000 degrees Celsius cooler than the rest of the Sun's surface, which is approximately 5,750 degrees. Contrary to what one might believe, the Sun radiates most energy when there are many sunspots, because warmer areas appear around the spots at the same time, more than compensating for the spots' lower temperature.
Sunspots were first described by the Greek philosopher Theophrastus of Lesbos around 300 BC; he saw strange black spots on the Sun's surface. There are also earlier reports from China of sunspots seen with the naked eye, which is sometimes possible - with caution - when the Sun is low in the sky and dimmed by haze. Galileo directed his telescope towards the Sun in 1613, observed and described sunspots systematically, and published his "Letters on Sunspots".
Some sunspots grow very large and last several months; others reach only a few hundred square kilometers and disappear within a few days. Since the mid-1800s we have known that the number of sunspots varies with a period of 11 years, so that a sunspot maximum occurs every 11 years.
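As a quick illustration of how such a cycle can be extracted from count data, here is a hedged sketch in Python: it generates a synthetic monthly series with an 11-year oscillation plus noise and recovers the period from the Fourier spectrum. The series is invented for illustration; it is not real SIDC data.

```python
import numpy as np

# Toy check of the 11-year cycle: synthetic monthly "sunspot counts"
# with an 11-year period plus noise, then the dominant period is read
# off the Fourier spectrum.
months = np.arange(12 * 300)          # 300 years of monthly values
period_months = 11.0 * 12
rng = np.random.default_rng(1)
counts = 80 + 60 * np.sin(2 * np.pi * months / period_months)
counts += rng.normal(0, 15, months.size)

spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(months.size, d=1.0)   # cycles per month
dominant = freqs[np.argmax(spectrum)]
print(f"dominant period: {1 / dominant / 12:.1f} years")  # about 11
```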
The daily number of sunspots since 1900, from the Solar Influences Data Analysis Centre (SIDC). Note the 11-year cycle. We are just now, in 2013, near the sunspot maximum of cycle 24, which runs from around 2009 to around 2020. Maximum sunspot activity should occur around 2014-15, but the number of daily sunspots is nevertheless still quite low.
Systematic counts of sunspots have been routine since the telescope came into use in Galileo's time. During the Little Ice Age, the astronomer Cassini in Paris reported in 1671 that he had found a sunspot - the first he had seen in many years. The Englishman Edward Maunder studied old records of sunspots and came to the conclusion that during the Little Ice Age there were virtually no sunspots. The period was named the Maunder Minimum.
When there are many sunspots, the solar magnetic field is strong and deflects much of the cosmic radiation directed toward the Earth; when there are few or no sunspots, the Sun's magnetic field is weak, allowing more cosmic radiation to hit the Earth.
Top: Carbon-14 and Beryllium-10 are created when cosmic rays enter the atmosphere.
Bottom: Solar activity through 1,000 years as revealed by carbon-14 analysis. The Oort minimum refers to a minor cold period within the Medieval warm period; note also that the Wolf and Sporer minima occurred during the Little Ice Age. The graph ends around the all-time maximum in 1950; activity has since decreased significantly.
When cosmic rays hit Earth's atmosphere, new isotopes are generated, especially carbon-14 and beryllium-10. When the cosmic radiation is strong, many of these isotopes are formed; when it is weak, fewer are formed. Both carbon-14 and beryllium-10 are unstable isotopes that decay over very long times; the half-life of carbon-14, for instance, is about 5,730 years.
By analyzing historical records, the carbon-14 content of tree growth rings and the beryllium-10 content of ice cores from the ice caps, scientists have been able to reconstruct past levels of cosmic radiation and identify other periods when the Sun's activity and its magnetic field were weak, such as the Oort Minimum, the Wolf Minimum, the Sporer Minimum and of course the Maunder Minimum.
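As a small illustration of the decay step in such reconstructions, here is a hedged sketch: given a measured carbon-14 fraction in a sample of known age, the fraction present at formation can be back-calculated from the exponential decay law N(t) = N0 · 2^(−t/T½), with T½ ≈ 5,730 years for carbon-14. The numbers in the example are illustrative assumptions, not values from the studies cited above.

```python
import math  # not strictly needed here, but handy for log-based variants

C14_HALF_LIFE_YEARS = 5730.0  # well-established half-life of carbon-14

def decay_factor(age_years, half_life=C14_HALF_LIFE_YEARS):
    """Fraction of an unstable isotope remaining after age_years."""
    return 2.0 ** (-age_years / half_life)

def initial_fraction(measured_fraction, age_years):
    """Back-calculate the isotope fraction at formation time.

    Correcting for decay like this is what lets a tree-ring's
    carbon-14 content serve as a proxy for the cosmic-ray
    intensity in the year the ring grew.
    """
    return measured_fraction / decay_factor(age_years)

# Illustrative numbers only: a ring grown 1,000 years ago whose
# measured C-14 fraction is 0.885 of the modern reference level.
print(initial_fraction(0.885, 1000.0))  # about 0.999
```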
The radiation from the sun has an intensity of 1,370 W/m2 on an imaginary surface perpendicular to the line between the Sun and Earth located above the atmosphere at the equator.
There is a correlation between the number of sunspots and the intensity of the solar radiation that reaches Earth: when there are many sunspots, the Sun's radiation is stronger. The solar irradiance on this surface oscillates with an amplitude of 1.2 W/m2 between the maximum and the minimum number of sunspots. That is only 0.09% of the total radiation - by itself it would not be noticeable at all!
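Spelled out, the 0.09% figure follows directly from the two numbers just given:

\[
\frac{\Delta S}{S} = \frac{1.2\ \mathrm{W/m^2}}{1370\ \mathrm{W/m^2}} \approx 0.00088 \approx 0.09\,\%
\]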
However, there are very powerful amplification mechanisms.
The Sun has a magnetic field several thousand times stronger than Earth's. Currently the Sun's field is about 2,000 gauss, compared with Earth's field of about 1 gauss. It stretches far into space, well beyond Pluto's orbit. Since 1990 the solar magnetic field has decreased from 2,700 gauss to the current roughly 2,000 gauss. Sunspots are regions on the Sun with intense magnetic activity; many sunspots are a sign that the solar magnetic field is strong, and few sunspots mean that the field is weaker.
Eigil Friis-Christensen and Henrik Svensmark found a very close correlation between the Sun's magnetic activity and Earth's temperature - From the documentary "The Cloud Mystery".
The Sun is a star in the Milky Way galaxy, which contains at least 100 billion other stars. Some stars explode as supernovae, emitting particles - electrons, protons, neutrons or ionized atomic nuclei - which can enter Earth's atmosphere, sometimes at nearly the speed of light. Earth and the solar system are thus constantly exposed to cosmic radiation.
But only some of the cosmic rays hit the Earth; a large part is deflected by the Sun's strong magnetic field. When there are many sunspots and the Sun's magnetic field is strong, it deflects much of the radiation, and the cosmic radiation entering the atmosphere is weak. A lazy sun with few or no sunspots has a weaker magnetic field and deflects a smaller part of the cosmic radiation; the radiation entering the atmosphere is then more intense.
The Sun has a very strong magnetic field, which extends all the way to the orbit of Pluto and beyond. Note that the northern lights are created by particles emitted from the Sun meeting Earth's magnetic field; they are not created directly by the solar magnetic field.
The Danish scientists Eigil Friis-Christensen and Henrik Svensmark have demonstrated that clouds are seeded by cosmic radiation. This means that when the cosmic radiation entering the atmosphere is strong, Earth's cloud cover will be extensive, and when the radiation is weak, the cloud cover will be less extensive.
We generally imagine that clouds are composed of water vapor, but that is not the case, since water vapor is a transparent gas. Clouds consist of aerosols, which are clumps of molecules of different kinds, mainly water molecules; aerosols form around a particle or an ion.
When a cosmic particle enters Earth's atmosphere at tremendous speed, it knocks electrons loose from the molecules it hits on its way, creating a trail of ions. In an atmosphere containing water vapor, these ions quickly cluster into aerosols, and the aerosols then build clouds.
When cosmic radiation particles enter Earth's atmosphere with great energy, they create ions and thus aerosols, which accumulate into clouds. (Photo from NASA)
During the last 100 years, up to the beginning of the new millennium, the Sun's magnetic field doubled. Precisely for this reason, the cosmic radiation that hits Earth dropped by about 15%, so that there are now fewer low clouds over the Earth. Low clouds have a cooling effect, and as there have been fewer of them, we probably have here the explanation of the Modern Warm Period. Today Earth's average cloud cover is about 60-70%; small changes in cloud cover bring about changes in climate.
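To get a feel for how sensitive the climate is to how much sunlight clouds reflect, here is a minimal sketch using the standard zero-dimensional energy balance, T = (S(1−α)/(4σ))^(1/4). The albedo values are illustrative assumptions, and this toy model is not Friis-Christensen and Svensmark's actual calculation.

```python
# Toy zero-dimensional energy balance: effective temperature of a
# planet as a function of its albedo (the fraction of sunlight
# reflected, e.g. by low clouds). Illustrative values only.
SOLAR_CONSTANT = 1370.0   # W/m^2, solar irradiance at Earth
SIGMA = 5.67e-8           # W/m^2/K^4, Stefan-Boltzmann constant

def effective_temperature(albedo):
    """Equilibrium temperature where absorbed = emitted radiation."""
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25

# Raising the planetary albedo by one percentage point (standing in
# here for slightly more low cloud) cools the planet by about 1 K.
print(effective_temperature(0.30))  # about 255 K
print(effective_temperature(0.31))  # about 254 K
```

Even in this crude sketch, a one-percentage-point change in reflected sunlight shifts the equilibrium temperature by roughly a degree, which is why small changes in cloud cover matter.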
We have often felt on our own bodies that cloud cover has a marked cooling effect: we lie on the sand at the beach after a swim, bathed in sunshine; then a cloud passes in front of the sun, and immediately we feel the heat disappear.
Around 1700 AD, during the Little Ice Age, there was a period with practically no sunspots at all. We can therefore assume that the solar magnetic field was weak and allowed a great deal of the cosmic rays to enter Earth's atmosphere. The cosmic rays caused clouds to form to a fairly large extent. This extensive cloud cover reflected the Sun's rays from its white upper side, preventing the Sun from heating the Earth, and that is why the Little Ice Age was such a cold period.
Average annual number of sunspots since 1600 - from "Climate, History and the Modern World" by H. H. Lamb.
Eigil Friis-Christensen and Henrik Svensmark's theory that variations in the solar magnetic field are the cause of climate change challenges the prevailing theory that man-made CO2 is causing global warming.
Friis-Christensen and Svensmark's theory is based on simple assumptions and simple experiments that can be confirmed by everyday experience, such as feeling the air grow colder when a cloud passes in front of the sun. The cloud chamber is a simple device, known since C.T.R. Wilson won the Nobel Prize for its invention in 1927.
The theory of anthropogenic CO2 as the cause of global warming is far more subtle and speculative, and it requires to a much greater extent that ordinary people blindly trust experts.
Top: Earth's temperature since 1880. It can be seen that during the industrial expansion of the postwar period, when most CO2 was emitted, the temperature dropped, down to the ice winters of the 1980s. Only with the economic crisis of the eighties did it begin to rise again - From the documentary "The Great Global Warming Swindle".
Bottom: Traces of the trajectories of elementary particles in a Wilson cloud chamber.
The CO2 content of the atmosphere today is 385 ppm (parts per million), which is 0.0385%, that is, close to four parts in ten thousand. It is not readily obvious that a marginal change in such a small fraction can change the climate significantly.
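Spelled out, the ppm conversion is simply:

\[
385\ \mathrm{ppm} = \frac{385}{10^6} = 0.000385 = 0.0385\,\% \approx \frac{4}{10\,000}
\]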
The enormous popular support for the theory of anthropogenic CO2 emissions as the cause of climate change is by all accounts due to the theory being in tune with our Judeo-Christian culture's idea that we are all sinners. Furthermore, the accusation that the industrial companies are the main culprits of the emissions fits well with feminist attacks on white men and with the socialist concept of the evil capitalists.
Links, some Danish - some English:
Drivtoemmer og strandvolde afsloerer 10.000 aars variationer i havisen Jens Ramskov in Ingenioeren.
Historic variations in Sea Levels Part 1- from the Holocene to Romans (pdf).
Late Pleistocene and Holocene climate of SE Australia reconstructed from dust and river loads deposited offshore the River Murray Mouth Franz Gingele, Patrick De Deckker og Marc Norman (pdf).
HOCLAT - A web-based Holocene Climate Atlas Heinz Wanner og Stefan Ritz - University of Bern, Bern, Switzerland (pdf).
Mid- to Late Holocene climate change: an overview Heinz Wanner, Jurg Beer, Jonathan Butikofer med flere (pdf).
Drivtoemmer fundet paa Nordgroenland - Youtube Interview med Svend Funder lektor ved Grundforskningscenteret for Geogenetik.
Roman Warming (was it global?) JoNova Tackling tribal groupthink.
Klimaskifte - Fimbulvetr - den store vinter aar 536 e. Kr. Verasir - Som altid behandler Flemming Rickfors emnet meget grundigt.
Variability and extremes of northern Scandinavian summer temperatures over the past two millennia. by Jan Esper, Ulf Buntgen, Mauri Timonen og David C. Frank (pdf).
Not So Hot in East China. World Climate Report.
Om Groenland, dets natur og klimatiske forhold - et uddrag fra kongespejlet. Oversat fra Islandsk og med forord af Chr. Dorph (pdf).
Nordboernes livsgrundlag i Sydvest Groenland. af Naja Mikkelsen og Antoon Kuijpers - Specialartikel fra GEUS' Aarsberetning for 2000.
Vikingerne dyrkede korn paa Groenland. af Sybille Hildebrandt - Videnskab.dk.
Evidence of a medieval warm period in Antarctica. SPPI & CO2SCIENCE ORIGINAL PAPER.
History of Medieval Greenland and associated places, like Iceland and Vinland. Marc Carlson has prepared a very useful list of all known intelligence from Norse Greenland inc. an inventory of sources and links.
What Water Temperatures Can Cod Handle? The Fish Site.
Global and European temperature (CSI 012/CLIM 001) - Assessment published May 2011 by European Environment Agency.
Svensmark: The Cloud Mystery - YouTube. Documentary by Lars Oxfeldt Mortensen, 52 minutes.
Sunspots will disappear within a few years, American researchers predict - Jens Ramskov, Ingenioeren (Danish).
Solarmonitor.org Here you can follow the development of sun spots.
Holocene climatic and environmental changes in the arid and semi-arid areas of China: a review - Z. D. Feng, C. B. An and H. B. Wang.
To The Horror Of Global Warming Alarmists, Global Cooling Is Here - by Peter Ferrara in Forbes.
The Great Global Warming Swindle - YouTube, Channel 4 documentary.
I never tire of recommending Flemming Rickfors' section on Vinland and Greenland (Danish): Fra Groenland til Nyaland - Asernes Aet.
Earth's Climate History (Kindle Edition) by Anton Uriarte.
Climate, History and the Modern World (Kindle Edition) by H. H. Lamb.
Syun-Ichi Akasofu's model of interglacials as result of heat pulses.
Everything is relative, Syun-Ichi Akasofu of the International Arctic Research Center, University of Alaska Fairbanks, seems to think. Most people believe that the current average global temperature of 14-15 degrees is normal, and that ice ages are abnormal. Akasofu believes that the normal average temperature of our planet is a glacial temperature of about 5 degrees, and that the 15 degrees occurs only in short periodic warm pulses, which arrive roughly every 100,000 years. The heat always comes quickly and then fades away, returning the climate to the normal 5 degrees. Akasofu does not say so, but with such a view one can hardly avoid concluding that the periodic heat pulses come from the Sun.
The Big Ice Age or The Big Steamy Age? Syun-Ichi Akasofu International Arctic Research Center, University of Alaska Fairbanks.
Muller and MacDonald suggest that the recurring glaciations and interglacials are due to the Earth at regular intervals moving through regions of space with cosmic dust, which is assumed to reduce the solar radiation that Earth receives.
One can visualize the alternative astronomical cycle that Muller and MacDonald have found, which fits the climatic records. Imagine a flat disc with the Sun in the middle and the nine planets orbiting close to the disc. In fact, all the planets do orbit close to such an imaginary disc, and this disc of space is believed to contain more cosmic dust than the rest of space. At regular intervals Earth's orbit tilts slowly out of the plane of the imaginary disc, the planet warms up because of the cleaner space there, and then the orbit returns again. When Muller first calculated the cycle of Earth's orbital deviation from the plane of the Solar System's imaginary disc in 1993, he found that it repeated every 100,000 years.
Astronomical Theory Offers New Explanation For Ice Age Berkeley Lab - Research News by Jeffery Kahn. June 11, 1997 - explanation of Muller and MacDonald's theory. |
Published on Monday, 04 November 2019. Worksheet. By Odile Mallet.
These place value worksheets are great for testing children on writing numbers out in expanded form that include decimals. You may select 2 and 3 digit numbers with tenths, hundredths, or thousandths decimals. These place value worksheets are appropriate for Kindergarten, 1st Grade, and 2nd Grade.
So, if your child looks at math word problems more like a waking nightmare than a fun challenge, you may want to check out the signs it’s time to hire an online math tutor and help them get back on track.
This section contains all of the graphic previews for the Triangle Worksheets. We have a triangle fact sheet, identifying triangles, area and perimeters, the triangle inequality theorem, triangle inequalities of sides and angles, triangle angle sum, the exterior angle theorem, angle bisectors, medians of triangles, and finding a centroid from a graph and a set of vertices for your use. These geometry worksheets are a good resource for children in the 5th Grade through the 10th Grade.
Understanding multiplication after counting, addition, and subtraction comes naturally. Children understand arithmetic through a normal progression: counting, addition, subtraction, multiplication, and finally division. This raises the question of why we learn arithmetic in this sequence. More importantly, why learn multiplication after counting, addition, and subtraction but before division?
The following facts answer these questions:
- Children learn counting first by associating visible objects with their fingers. A concrete example: how many apples are in the basket? A more abstract example: how old are you?
- From counting numbers, the next logical step is addition, followed by subtraction. Addition and subtraction tables can be very helpful teaching aids for children, since they are visual tools that make the transition from counting easier.
- What should be learned next, multiplication or division? Multiplication is shorthand for addition. At this point, children have a firm grasp of addition. Therefore, multiplication is the next logical form of arithmetic to learn.
Review the essentials of multiplication, and review the fundamentals of how to use a multiplication table.
Let us review a multiplication example. Using a multiplication table, multiply four times three and get an answer of twelve: 4 x 3 = 12. The intersection of row three and column four of a multiplication table is twelve; twelve is the answer. For children beginning to learn multiplication, this is straightforward. They can use addition to solve the problem, thereby confirming that multiplication is shorthand for addition. Example: 4 x 3 = 4 + 4 + 4 = 12. It is an excellent introduction to the multiplication table. As an added benefit, the multiplication table is visual and mirrors back to learning addition.
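To make the repeated-addition idea concrete, here is a small sketch (the code and names are ours, not part of the original lesson) that builds a 12 x 12 table and checks 4 x 3 against it:

```python
# A minimal sketch (ours): multiplication as repeated addition,
# checked against a generated 12 x 12 multiplication table.

def multiply_by_addition(a, b):
    """Add a to itself b times, e.g. 4 x 3 = 4 + 4 + 4 = 12."""
    total = 0
    for _ in range(b):
        total += a
    return total

# table[row][col] holds row * col, like the printed chart.
table = {row: {col: row * col for col in range(1, 13)} for row in range(1, 13)}

# The intersection of row three and column four is twelve.
assert multiply_by_addition(4, 3) == table[3][4] == 12
```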
Where do we begin learning multiplication using the multiplication table?
[Image: Multiplication charts 1-12 and 1-100, free download]
- First, get familiar with the table.
- Begin with multiplying by one. Start at row number one. Move to column number one. The intersection of row one and column one is the answer: 1.
- Repeat these steps for multiplying by one: multiply row one by columns one through twelve. The answers are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 respectively.
- Repeat these steps for multiplying by two. Multiply row two by columns one through five. The answers are 2, 4, 6, 8, and 10 respectively.
- Let us jump ahead. Repeat these steps for multiplying by five. Multiply row five by columns one through twelve. The answers are 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, and 60 respectively.
- Now let us increase the level of difficulty. Repeat these steps for multiplying by three. Multiply row three by columns one through twelve. The answers are 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, and 36 respectively.
- If you are comfortable with multiplication so far, try a test. Solve the following multiplication problems in your head and then compare your answers to the multiplication table: multiply six and two, multiply nine and three, multiply one and eleven, multiply four and four, and multiply seven and two. The answers are 12, 27, 11, 16, and 14 respectively.
If you got four out of five problems correct, create your own multiplication tests. Calculate the answers in your head, and check them against the multiplication table.
"In an @esa [...] and scientists on the ground will work together with new technologies to guide the rover. This approach could be used for future exploration of the solar system. Details: https://t.co/xVDyj4JeqX pic.twitter.com/kOr887r0tV" — NASA (@NASA) October 22, 2019
ESA astronaut Luca Parmitano will control this rover remotely in November to simulate remote control of future lunar rovers. In the experiment, known as ANALOG-1, he will use the rover and its arm to move rocks instead of cones.
An astronaut on the space station will practice remotely driving a robot on the moon this November.
In the future, astronauts might remotely control rovers on the moon, or even on Mars, from nearby orbiting stations. To see how well this might work, astronauts on the International Space Station will soon conduct ANALOG-1, a European Space Agency (ESA) experiment designed to test how well an orbiting crew can control a rover on the moon in collaboration with a ground team on Earth.
"Space is such a harsh place for humans and machines. Future exploration of the solar system may involve sending robotic explorers to test the waters on uncharted planets before sending humans," William Carey, ESA scientist and principal investigator for the ANALOG-1 experiment, said in a NASA statement. "The approach could greatly increase the scientific return on those missions, as well as offer a way to avoid potential contamination from humans landing on the surface before we can answer questions about existing or previous life on Mars."
The experiment, which is scheduled to take place this November, will last about two hours, during...
However, for most people, the equivalence between abstract vectors and real circuit quantities is not an easy one to grasp. Earlier in this chapter, we saw how AC voltage sources are given voltage figures in complex form (magnitude and phase angle), as well as polarity markings.
Because alternating current has no fixed “polarity” the way direct current does, these polarity markings and their relationship to phase angle tend to be confusing. This section is written in an attempt to clarify some of these issues.
Voltage is an inherently relative quantity. When we measure a voltage, we have a choice in how we connect a voltmeter or other voltage-measuring instrument to the source of voltage, as there are two points between which the voltage exists, and two test leads on the instrument with which to make the connection.
In DC circuits, we denote the polarity of voltage sources and voltage drops explicitly, using “+” and “-” symbols, and use color-coded meter test leads (red and black). If a digital voltmeter indicates a negative DC voltage, we know that its test leads are connected “backward” to the voltage (red lead connected to the “-” and black lead to the “+”).
Batteries have their polarity designated by way of intrinsic symbology: the short-line side of a battery is always the negative (-) side and the long-line side always the positive (+): (Figure below)
Conventional battery polarity.
Although it would be mathematically correct to represent a battery’s voltage as a negative figure with reversed polarity markings, it would be decidedly unconventional: (Figure below)
Decidedly unconventional polarity marking.
Interpreting such notation might be easier if the “+” and “-” polarity markings were viewed as reference points for voltmeter test leads, the “+” meaning “red” and the “-” meaning “black.” A voltmeter connected to the above battery with red lead to the bottom terminal and black lead to the top terminal would indeed indicate a negative voltage (-6 volts).
Actually, this form of notation and interpretation is not as unusual as you might think: it is commonly encountered in problems of DC network analysis where “+” and “-” polarity marks are initially drawn according to educated guess, and later interpreted as correct or “backward” according to the mathematical sign of the figure calculated.
In AC circuits, though, we don’t deal with “negative” quantities of voltage. Instead, we describe to what degree one voltage aids or opposes another by phase: the time-shift between two waveforms. We never describe an AC voltage as being negative in sign, because the facility of polar notation allows for vectors pointing in an opposite direction.
If one AC voltage directly opposes another AC voltage, we simply say that one is 180° out of phase with the other.
Still, voltage is relative between two points, and we have a choice in how we might connect a voltage-measuring instrument between those two points. The mathematical sign of a DC voltmeter’s reading has meaning only in the context of its test lead connections: which terminal the red lead is touching, and which terminal the black lead is touching.
Likewise, the phase angle of an AC voltage has meaning only in the context of knowing which of the two points is considered the “reference” point. Because of this fact, “+” and “-” polarity marks are often placed by the terminals of an AC voltage in schematic diagrams to give the stated phase angle a frame of reference.
Let’s review these principles with some graphical aids. First, the principle of relating test lead connections to the mathematical sign of a DC voltmeter indication: (Figure below)
Test lead colors provide a frame of reference for interpreting the sign (+ or -) of the meter’s indication.
The mathematical sign of a digital DC voltmeter’s display has meaning only in the context of its test lead connections. Consider the use of a DC voltmeter in determining whether or not two DC voltage sources are aiding or opposing each other, assuming that both sources are unlabeled as to their polarities.
Using the voltmeter to measure across the first source: (Figure below)
(+) The reading indicates black is (-), red is (+).
This first measurement of +24 across the left-hand voltage source tells us that the black lead of the meter really is touching the negative side of voltage source #1, and the red lead of the meter really is touching the positive. Thus, we know source #1 is a battery facing in this orientation: (Figure below).
The 24V source is polarized (-) to (+).
Measuring the other unknown voltage source: (Figure below)
(-) Reading indicates black is (+), red is (-).
This second voltmeter reading, however, is a negative (-) 17 volts, which tells us that the black test lead is actually touching the positive side of voltage source #2, while the red test lead is actually touching the negative. Thus, we know that source #2 is a battery facing in the opposite direction: (Figure below)
17V source is polarized (+) to (-)
It should be obvious to any experienced student of DC electricity that these two batteries are opposing one another. By definition, opposing voltages subtract from one another, so we subtract 17 volts from 24 volts to obtain the total voltage across the two: 7 volts.
We could, however, draw the two sources as nondescript boxes, labeled with the exact voltage figures obtained by the voltmeter, the polarity marks indicating voltmeter test lead placement: (Figure below)
Voltmeter readings as read from meters.
According to this diagram, the polarity marks (which indicate meter test lead placement) indicate the sources aiding each other. By definition, aiding voltage sources add with one another to form the total voltage, so we add 24 volts to -17 volts to obtain 7 volts: still the correct answer.
If we let the polarity markings guide our decision to either add or subtract voltage figures—whether those polarity markings represent the true polarity or just the meter test lead orientation—and include the mathematical signs of those voltage figures in our calculations, the result will always be correct.
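As a quick numeric check (a sketch, ours, using the two readings from this example), keeping each figure's sign as given by its polarity marks and simply summing yields the same 7-volt result under either interpretation:

```python
# A quick check with the readings from this example: keep each figure's sign
# (as given by the polarity marks) and sum -- the total is correct either way.

readings = [24, -17]     # "aiding" interpretation: +24 and -17
print(sum(readings))      # 7, the same as the "opposing" arithmetic 24 - 17
```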
Again, the polarity markings serve as frames of reference to place the voltage figures’ mathematical signs in the proper context.
The same is true for AC voltages, except that phase angle substitutes for the mathematical sign. In order to relate multiple AC voltages at different phase angles to each other, we need polarity markings to provide frames of reference for those voltages’ phase angles. (Figure below)
Take for example the following circuit:
Phase angle substitutes for ± sign.
The polarity markings show these two voltage sources aiding each other, so to determine the total voltage across the resistor we must add the voltage figures of 10 V ∠ 0° and 6 V ∠ 45° together to obtain 14.861 V ∠ 16.59°.
However, it would be perfectly acceptable to represent the 6-volt source as 6 V ∠ 225°, with a reversed set of polarity markings, and still arrive at the same total voltage: (Figure below)
Reversing the voltmeter leads on the 6V source changes the phase angle by 180°.
6 V ∠ 45° with negative on the left and positive on the right is exactly the same as 6 V ∠ 225° with positive on the left and negative on the right: the reversal of polarity markings perfectly complements the addition of 180° to the phase angle designation: (Figure below)
Reversing polarity adds 180° to phase angle
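A short sketch (ours, using Python's standard cmath module; the magnitudes and angles come from the example above) confirms both calculations: adding 10 V ∠ 0° and 6 V ∠ 45° with aiding marks, and subtracting 6 V ∠ 225° with the marks reversed, give the same total.

```python
import cmath
import math

def phasor(magnitude, degrees):
    """Complex number from a magnitude and a phase angle in degrees."""
    return cmath.rect(magnitude, math.radians(degrees))

def as_polar(v):
    magnitude, radians = cmath.polar(v)
    return f"{magnitude:.3f} V at {math.degrees(radians):.2f} degrees"

total_aiding   = phasor(10, 0) + phasor(6, 45)   # marks show the sources aiding
total_reversed = phasor(10, 0) - phasor(6, 225)  # marks reversed, angle + 180 degrees

print(as_polar(total_aiding))    # 14.861 V at 16.59 degrees
print(as_polar(total_reversed))  # identical result
```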
Unlike DC voltage sources, whose symbols intrinsically define polarity by means of short and long lines, AC voltage symbols have no intrinsic polarity marking. Therefore, any polarity marks must be included as additional symbols on the diagram, and there is no one “correct” way in which to place them.
They must, however, correlate with the given phase angle to represent the true phase relationship of that voltage with other voltages in the circuit.
A contour line (also isoline, isopleth, or isarithm) of a function of two variables is a curve along which the function has a constant value. It is a cross-section of the three-dimensional graph of the function f(x, y) parallel to the x, y plane. In cartography, a contour line (often just called a "contour") joins points of equal elevation (height) above a given level, such as mean sea level. A contour map is a map illustrated with contour lines, for example a topographic map, which thus shows valleys and hills, and the steepness of slopes. The contour interval of a contour map is the difference in elevation between successive contour lines.
More generally, a contour line for a function of two variables is a curve connecting points where the function has the same particular value. The gradient of the function is always perpendicular to the contour lines. When the lines are close together the magnitude of the gradient is large: the variation is steep. A level set is a generalization of a contour line for functions of any number of variables.
Contour lines are curved, straight or a mixture of both lines on a map describing the intersection of a real or hypothetical surface with one or more horizontal planes. The configuration of these contours allows map readers to infer relative gradient of a parameter and estimate that parameter at specific places. Contour lines may be either traced on a visible three-dimensional model of the surface, as when a photogrammetrist viewing a stereo-model plots elevation contours, or interpolated from estimated surface elevations, as when a computer program threads contours through a network of observation points of area centroids. In the latter case, the method of interpolation affects the reliability of individual isolines and their portrayal of slope, pits and peaks.
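As a minimal illustration of the interpolation step (our sketch, not from any particular contouring program), the point where a contour level crosses the segment between two observations can be estimated linearly:

```python
# A minimal sketch (ours): estimate where a contour level crosses the segment
# between two surveyed points, using linear interpolation.

def contour_crossing(x1, z1, x2, z2, level):
    """Position along the segment where elevation equals level, or None."""
    if z1 == z2:
        return None                       # flat segment: no single crossing
    if (z1 - level) * (z2 - level) > 0:
        return None                       # both ends on the same side
    t = (level - z1) / (z2 - z1)          # fraction of the way from point 1
    return x1 + t * (x2 - x1)

# Elevations 430 m and 470 m measured 100 m apart: the 450 m contour crosses midway.
print(contour_crossing(0.0, 430.0, 100.0, 470.0, 450.0))  # 50.0
```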
Contour lines are often given specific names beginning "iso-" (Ancient Greek: ἴσος isos "equal") according to the nature of the variable being mapped, although in many usages the phrase "contour line" is most commonly used. Specific names are most common in meteorology, where multiple maps with different variables may be viewed simultaneously. The prefix "iso-" can be replaced with "isallo-" to specify a contour line connecting points where a variable changes at the same rate during a given time period.
The words isoline and isarithm (ἀριθμός arithmos "number") are general terms covering all types of contour line. The word isogram (γράμμα gramma "writing or drawing") was proposed by Francis Galton in 1889 as a convenient generic designation for lines indicating equality of some physical condition or quantity; but it commonly refers to a word without a repeated letter.
An isogon (from γωνία or gonia, meaning 'angle') is a contour line for a variable which measures direction. In meteorology and in geomagnetics, the term isogon has specific meanings which are described below. An isocline (from κλίνειν or klinein, meaning 'to lean or slope') is a line joining points with equal slope. In population dynamics and in geomagnetics, the terms isocline and isoclinic line have specific meanings which are described below.
An equidistant is a line of equal distance from a given point, line, or polyline.
In geography, the word isopleth (from πλῆθος or plethos, meaning 'quantity') is used for contour lines that depict a variable which cannot be measured at a point, but which instead must be calculated from data collected over an area. An example is population density, which can be calculated by dividing the population of a census district by the surface area of that district. Each calculated value is presumed to be the value of the variable at the centre of the area, and isopleths can then be drawn by a process of interpolation. The idea of an isopleth map can be compared with that of a choropleth map.
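A toy sketch of that calculation (district names and figures invented): each district's density is attached to its centre point, ready for interpolation.

```python
# A toy sketch (district names and figures invented): the isopleth value for each
# census district is its population density, assigned to the district's centre.

districts = [
    {"name": "A", "population": 12000, "area_km2": 30.0, "centre": (2.0, 3.0)},
    {"name": "B", "population": 4500,  "area_km2": 90.0, "centre": (7.0, 1.0)},
]

for d in districts:
    density = d["population"] / d["area_km2"]         # people per square kilometre
    print(d["name"], d["centre"], round(density, 1))  # A: 400.0, B: 50.0
```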
In meteorology, the word isopleth is used for any type of contour line.
Meteorological contour lines are based on generalization from the point data received from weather stations. Weather stations are seldom exactly positioned at a contour line (when they are, this indicates a measurement precisely equal to the value of the contour). Instead, lines are drawn to best approximate the locations of exact values, based on the scattered information points available.
Meteorological contour maps may present collected data such as actual air pressure at a given time, or generalized data such as average pressure over a period of time, or forecast data such as predicted air pressure at some point in the future.
Thermodynamic diagrams use multiple overlapping contour sets (including isobars and isotherms) to present a picture of the major thermodynamic factors in a weather system.
An isobar (from βάρος or baros, meaning 'weight') is a line of equal or constant pressure on a graph, plot, or map; an isopleth or contour line of pressure. More accurately, isobars are lines drawn on a map joining places of equal average atmospheric pressure reduced to sea level for a specified period of time. In meteorology, the barometric pressures shown are reduced to sea level, not the surface pressures at the map locations. The distribution of isobars is closely related to the magnitude and direction of the wind field, and can be used to predict future weather patterns. Isobars are commonly used in television weather reporting.
Isallobars are lines joining points of equal pressure change during a specific time interval. These can be divided into anallobars, lines joining points of equal pressure increase during a specific time interval, and katallobars, lines joining points of equal pressure decrease. In general, weather systems move along an axis joining high and low isallobaric centers. Isallobaric gradients are important components of the wind as they increase or decrease the geostrophic wind.
An isopycnal is a line of constant density. An isoheight or isohypse is a line of constant geopotential height on a constant pressure surface chart.
An isotherm (from θέρμη or thermē, meaning 'heat') is a line that connects points on a map that have the same temperature. Therefore, all points through which an isotherm passes have the same or equal temperatures at the time indicated. An isotherm at 0 °C is called the freezing level.
An isogeotherm is a line of equal mean annual temperature. An isocheim is a line of equal mean winter temperature, and an isothere is a line of equal mean summer temperature.
Rainfall and air moisture
An isoneph is a line indicating equal cloud cover.
An isochalaz is a line of constant frequency of hail storms, and an isobront is a line drawn through geographical points at which a given phase of thunderstorm activity occurred simultaneously.
Snow cover is frequently shown as a contour-line map.
Freeze and thaw
An isopectic line denotes equal dates of ice formation each winter, and an isotac denotes equal dates of thawing.
Physical geography and oceanography
Elevation and depth
Contours are one of several common methods used to denote elevation or altitude and depth on maps. From these contours, a sense of the general terrain can be determined. They are used at a variety of scales, from large-scale engineering drawings and architectural plans, through topographic maps and bathymetric charts, up to continental-scale maps.
In cartography, the contour interval is the elevation difference between adjacent contour lines. The contour interval should be the same over a single map. When calculated as a ratio against the map scale, a sense of the hilliness of the terrain can be derived.
There are several rules to note when interpreting terrain contour lines:
- The rule of V's: sharp-pointed vees usually are in stream valleys, with the drainage channel passing through the point of the vee, with the vee pointing upstream. This is a consequence of erosion.
- The rule of O's: closed loops are normally uphill on the inside and downhill on the outside, and the innermost loop is the highest area. If a loop instead represents a depression, some maps note this by short lines radiating from the inside of the loop, called "hachures".
- Spacing of contours: close contours indicate a steep slope; distant contours a shallow slope. Two or more contour lines merging indicates a cliff. By counting the number of contours that cross a segment of a stream, you can approximate the stream gradient.
Of course, to determine differences in elevation between two points, the contour interval, or distance in altitude between two adjacent contour lines, must be known, and this is given at the bottom of the map. Usually contour intervals are consistent throughout a map, but there are exceptions. Sometimes intermediate contours are present in flatter areas; these can be dashed or dotted lines at half the noted contour interval. When contours are used with hypsometric tints on a small-scale map that includes mountains and flatter low-lying areas, it is common to have smaller intervals at lower elevations so that detail is shown in all areas. Conversely, for an island which consists of a plateau surrounded by steep cliffs, it is possible to use smaller intervals as the height increases.
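As a small worked example (values invented, ours), the average gradient between two adjacent contours follows directly from the contour interval and the ground distance between the lines:

```python
import math

# A back-of-envelope sketch (values invented): average slope between two
# adjacent contour lines.

contour_interval_m = 20.0     # elevation difference between adjacent contours
ground_distance_m = 160.0     # horizontal distance between the lines on the ground

gradient = contour_interval_m / ground_distance_m       # rise over run = 0.125
slope_degrees = math.degrees(math.atan(gradient))        # about 7.1 degrees
print(gradient, round(slope_degrees, 1))
```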
An isopotential map is a measure of electrostatic potential in space, often depicted in two dimensions with the electrostatic charges inducing that electric potential. The term equipotential line or isopotential line refers to a curve of constant electric potential. Whether crossing an equipotential line represents ascending or descending the potential is inferred from the labels on the charges. In three dimensions, equipotential surfaces may be depicted with a two-dimensional cross-section, showing equipotential lines at the intersection of the surfaces and the cross-section.
The general mathematical term level set is often used to describe the full collection of points having a particular potential, especially in higher dimensional space.
In the study of the Earth's magnetic field, the term isogon or isogonic line refers to a line of constant magnetic declination, the variation of magnetic north from geographic north. An agonic line is drawn through points of zero magnetic declination. An isoporic line refers to a line of constant annual variation of magnetic declination.
An isoclinic line connects points of equal magnetic dip, and an aclinic line is the isoclinic line of magnetic dip zero.
An isodynamic line (from δύναμις or dynamis meaning 'power') connects points with the same intensity of magnetic force.
Besides ocean depth, oceanographers use contours to describe diffuse variable phenomena much as meteorologists do with atmospheric phenomena. In particular, isobathytherms are lines showing depths of water with equal temperature, isohalines show lines of equal ocean salinity, and isopycnals are surfaces of equal water density.
Various geological data are rendered as contour maps in structural geology, sedimentology, stratigraphy and economic geology. Contour maps are used to show the below ground surface of geologic strata, fault surfaces (especially low angle thrust faults) and unconformities. Isopach maps use isopachs (lines of equal thickness) to illustrate variations in thickness of geologic units.
In discussing pollution, density maps can be very useful in indicating sources and areas of greatest contamination. Contour maps are especially useful for diffuse forms or scales of pollution. Acid precipitation is indicated on maps with isoplats. Some of the most widespread applications of environmental science contour maps involve mapping of environmental noise (where lines of equal sound pressure level are denoted isobels), air pollution, soil contamination, thermal pollution and groundwater contamination. By contour planting and contour ploughing, the rate of water runoff and thus soil erosion can be substantially reduced; this is especially important in riparian zones.
An isoflor is an isopleth contour connecting areas of comparable biological diversity. Usually, the variable is the number of species of a given genus or family that occurs in a region. Isoflor maps are thus used to show distribution patterns and trends such as centres of diversity.
In economics, contour lines can be used to describe features which vary quantitatively over space. An isochrone shows lines of equivalent drive time or travel time to a given location and is used in the generation of isochrone maps. An isotim shows equivalent transport costs from the source of a raw material, and an isodapane shows equivalent cost of travel time.
Contour lines are also used to display non-geographic information in economics. Indifference curves (as shown at left) are used to show bundles of goods to which a person would assign equal utility. An isoquant (in the image at right) is a curve of equal production quantity for alternative combinations of input usages, and an isocost curve (also in the image at right) shows alternative combinations of input usages having equal production costs.
In statistics, isodensity lines or isodensanes are lines that join points with the same probability density. Isodensanes are used to display bivariate distributions.
Thermodynamics, engineering, and other sciences
Various types of graphs in thermodynamics, engineering, and other sciences use isobars (constant pressure), isotherms (constant temperature), isochors (constant specific volume), or other types of isolines, even though these graphs are usually not related to maps. Such isolines are useful for representing more than two dimensions (or quantities) on two-dimensional graphs. Common examples in thermodynamics are some types of phase diagrams.
Other phenomena
- isochasm: equal frequency of aurora occurrence
- isochor: volume
- isodose: Absorbed dose of radiation
- isophene: biological events occurring with coincidence such as plants flowering
- isophote: illuminance
History
The idea of lines that join points of equal value was rediscovered several times. In 1701, Edmond Halley used such lines (isogons) on a chart of magnetic variation. The Dutch engineer Nicholas Cruquius drew the bed of the river Merwede with lines of equal depth (isobaths) at intervals of 1 fathom in 1727, and Philippe Buache used them at 10-fathom intervals on a chart of the English Channel that was prepared in 1737 and published in 1752. Such lines were used to describe a land surface (contour lines) in a map of the Duchy of Modena and Reggio by Domenico Vandelli in 1746, they were studied theoretically by Ducarla in 1771, and Charles Hutton used them in the Schiehallion experiment. In 1791, a map of France by J. L. Dupain-Triel used contour lines at 20-metre intervals, hachures, spot-heights and a vertical section. In 1801, the chief of the Corps of Engineers, Haxo, used contour lines at the larger scale of 1:500 on a plan of his projects for Rocca d'Aufo.
By around 1843, when the Ordnance Survey started to regularly record contour lines in Great Britain and Ireland, they were already in general use in European countries. Isobaths were not routinely used on nautical charts until those of Russia from 1834, and those of Britain from 1838.
When maps with contour lines became common, the idea spread to other applications. Perhaps the latest to develop are air quality and noise pollution contour maps, which first appeared in the US, in approximately 1970, largely as a result of national legislation requiring spatial delineation of these parameters. In 2007, Pictometry International was the first to allow users to dynamically generate elevation contour lines to be laid over oblique images.
Technical construction factors
To maximize readability of contour maps, there are several design choices available to the map creator, principally line weight, line color, line type and method of numerical marking.
Line weight is simply the darkness or thickness of the line used. This choice is made based upon the least intrusive form of contours that enable the reader to decipher the background information in the map itself. If there is little or no content on the base map, the contour lines may be drawn with relatively heavy thickness. Also, for many forms of contours such as topographic maps, it is common to vary the line weight and/or color, so that a different line characteristic occurs for certain numerical values. For example, in the topographic map above, the even hundred foot elevations are shown in a different weight from the twenty foot intervals.
Line color is the choice of any number of pigments that suit the display. Sometimes a sheen or gloss is used as well as color to set the contour lines apart from the base map. Line colour can be varied to show other information.
Line type refers to whether the basic contour line is solid, dashed, dotted or broken in some other pattern to create the desired effect. Dotted or dashed lines are often used when the underlying base map conveys very important (or difficult to read) information. Broken line types are used when the location of the contour line is inferred.
Numerical marking is the manner of denoting the arithmetical values of contour lines. This can be done by placing numbers along some of the contour lines, typically using interpolation for intervening lines. Alternatively a map key can be produced associating the contours with their values.
If the contour lines are not numerically labeled and adjacent lines have the same style (with the same weight, color and type), then the direction of the gradient cannot be determined from the contour lines alone. However, if the contour lines cycle through three or more styles, then the direction of the gradient can be determined from the lines. The orientation of the numerical text labels is often used to indicate the direction of the slope.
Plan view versus profile view
Most commonly contour lines are drawn in plan view, or as an observer in space would view the Earth's surface: ordinary map form. However, some parameters can often be displayed in profile view showing a vertical profile of the parameter mapped. Some of the most common parameters mapped in profile are air pollutant concentrations and sound levels. In each of those cases it may be important to analyze (air pollutant concentrations or sound levels) at varying heights so as to determine the air quality or noise health effects on people at different elevations, for example, living on different floor levels of an urban apartment. In actuality, both plan and profile view contour maps are used in air pollution and noise pollution studies.
Labeling contour maps
Labels are a critical component of elevation maps. A properly labeled contour map helps the reader to quickly interpret the shape of the terrain. If numbers are placed close to each other, it means that the terrain is steep. Labels should be placed along a slightly curved line "pointing" to the summit or nadir, from several directions if possible, making the visual identification of the summit or nadir easy. Contour labels can be oriented so a reader is facing uphill when reading the label.
Manual labeling of contour maps is a time-consuming process; however, there are a few software systems that can do the job automatically and in accordance with cartographic conventions. This is known as automatic label placement.
This example demonstrates why we ask for the leading coefficient of x to be "non-negative" instead of asking for it to be "positive".
Let us look at the typical parallel line problem. There are a number of reasons to use standard form.
For standard form equations, just remember that A, B, and C must be integers and A should not be negative. There is one other rule that we must abide by when writing equations in standard form. We need to find the least common multiple (LCM) for the two fractions and then multiply all terms by that number!
Whatever you do to one side of the equation, you must do to the other side! First, standard form allows us to write the equations for vertical lines, which is not possible in slope-intercept form.
Writing Equations in Standard Form
We know that equations can be written in slope-intercept form or standard form.
For horizontal lines, that coefficient of x must be zero. When we move terms around, we do so exactly as we do when we solve equations!
This topic will not be covered until later in the course, so we do not need standard form at this point. Of course, the only values affecting the slope are A and B from the original standard form.
Solution: That was a pretty easy example. Remember, standard form is written Ax + By = C. Remember that vertical lines have an undefined slope, which is why we cannot write them in slope-intercept form. But why should we want to do this? We now know that standard form equations should not contain fractions.
However, you must be able to rewrite equations in both forms. We have seen that we can transform slope-intercept form equations into standard form equations. Our first step is to eliminate the fractions, but this becomes a little more difficult when the fractions have different denominators!
If you find that you need more examples or more practice problems, check out the Algebra Class E-course. Standard form may not seem necessary now, but it will become quite useful later.
Solution: Slope-intercept form is the more popular of the two forms for writing equations. A third reason to use standard form is that it simplifies finding parallel and perpendicular lines. The usual approach to this problem is to find the slope of the given line and then to use that slope along with the given point in the point-slope form for a linear equation.
Any line parallel to the given line must have that same slope. We can move the x term to the left side by adding 2x to both sides. To summarize how to write a linear equation using the slope-intercept form: identify the slope, m. This can be done by calculating the slope between two known points of the line using the slope formula.
Equation of a Line from 2 Points
First, let's see it in action.
Here are two points and the equation of the line through them. Check for yourself that those points are part of the line above!
Different Forms
There are many ways of writing linear equations, but they usually have constants (like "2" or "c") and must have simple variables (like "x" or "y").
Two point form calculator
This online calculator can find and plot the equation of a straight line passing through two points. The calculator will generate a step-by-step explanation of how to obtain the result. Enter any number (even decimals and fractions) and our calculator will calculate the slope-intercept form (y = mx + b), the point-slope form (y - y1) = m(x - x1), and the standard form (Ax + By = C).
Just type the two points, and we'll take it from there.
Point Slope Form and Standard Form of Linear Equations
Here’s the graph of a generic line with two points plotted on it. The slope of the line is “rise over run.” When we write the equation, we’ll let x be the time in months, and y be the amount of money saved. After 1 month, Andre has $…
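Putting the pieces together, here is a minimal sketch (function and variable names are ours) of what such a two-point calculator computes: the slope, the slope-intercept form, and an integer standard form with a non-negative leading coefficient.

```python
from fractions import Fraction

# A sketch (ours) of what such a calculator computes from two points with
# integer coordinates.

def line_from_points(x1, y1, x2, y2):
    m = Fraction(y2 - y1, x2 - x1)   # slope (raises if x1 == x2: vertical line)
    b = y1 - m * x1                  # intercept, from y = mx + b
    # Standard form Ax + By = C, derived from y - y1 = m(x - x1):
    A, B = m.numerator, -m.denominator
    C = m.numerator * x1 - m.denominator * y1
    if A < 0:                        # keep the leading coefficient non-negative
        A, B, C = -A, -B, -C
    return m, b, (A, B, C)

m, b, (A, B, C) = line_from_points(1, 2, 3, 6)
print(f"slope-intercept: y = {m}x + {b}")     # y = 2x + 0
print(f"standard form: {A}x + ({B})y = {C}")  # 2x + (-1)y = 0, i.e. 2x - y = 0
```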
Acoustic theory is the field relating to mathematical description of sound waves. It is derived from fluid dynamics. See acoustics for the engineering approach.
The propagation of sound waves in a fluid (such as water) can be modeled by an equation of continuity (conservation of mass) and an equation of motion (conservation of momentum). With some simplifications, in particular constant density, they can be given as follows:

$$\frac{\partial p}{\partial t} + \kappa\,\nabla\cdot\mathbf{v} = 0 \qquad \text{(mass balance)}$$

$$\rho_0\,\frac{\partial \mathbf{v}}{\partial t} + \nabla p = 0 \qquad \text{(momentum balance)}$$

where $p(\mathbf{x},t)$ is the acoustic pressure and $\mathbf{v}(\mathbf{x},t)$ is the flow velocity vector, $\mathbf{x}$ is the vector of spatial coordinates $(x,y,z)$, $t$ is the time, $\rho_0$ is the static mass density of the medium and $\kappa$ is the bulk modulus of the medium. The bulk modulus can be expressed in terms of the density and the speed of sound in the medium ($c_0$) as

$$\kappa = \rho_0\,c_0^2.$$
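As a quick numerical sanity check (ours, with approximate textbook values for water), the bulk modulus follows directly from the density and sound speed:

```python
# A quick numeric check (ours, with approximate textbook values for water).

rho_0 = 1000.0   # kg/m^3, static density of water
c_0 = 1480.0     # m/s, speed of sound in water

kappa = rho_0 * c_0**2    # bulk modulus = density times sound speed squared
print(f"{kappa:.2e} Pa")  # about 2.19e9 Pa, i.e. roughly 2.2 GPa
```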
If the flow velocity field is irrotational, $\nabla\times\mathbf{v}=\mathbf{0}$, then the acoustic wave equation is a combination of these two sets of balance equations and can be expressed as

$$\frac{\partial^2\mathbf{v}}{\partial t^2} - c_0^2\,\nabla^2\mathbf{v} = \mathbf{0},$$

where we have used the vector Laplacian, $\nabla^2\mathbf{v} = \nabla(\nabla\cdot\mathbf{v}) - \nabla\times(\nabla\times\mathbf{v})$.

The acoustic wave equation (and the mass and momentum balance equations) are often expressed in terms of a scalar potential $\varphi$ where $\mathbf{v}=\nabla\varphi$. In that case the acoustic wave equation is written as

$$\frac{\partial^2\varphi}{\partial t^2} - c_0^2\,\nabla^2\varphi = 0$$

and the momentum balance and mass balance are expressed as

$$p + \rho_0\,\frac{\partial\varphi}{\partial t} = 0, \qquad \frac{\partial p}{\partial t} + \kappa\,\nabla^2\varphi = 0.$$
Derivation of the governing equations
The derivations of the above equations for waves in an acoustic medium are given below.
Conservation of momentum
The equations for the conservation of linear momentum for a fluid medium are

$$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + \nabla\cdot\mathbf{s} + \rho\,\mathbf{b}$$

where $\mathbf{b}$ is the body force per unit mass, $p$ is the pressure, and $\mathbf{s}$ is the deviatoric stress. If $\boldsymbol{\sigma}$ is the Cauchy stress, then

$$\boldsymbol{\sigma} = -p\,\mathbf{I} + \mathbf{s}$$

where $\mathbf{I}$ is the rank-2 identity tensor.
We make several assumptions to derive the momentum balance equation for an acoustic medium. These assumptions and the resulting forms of the momentum equations are outlined below.
Assumption 1: Newtonian fluid
In acoustics, the fluid medium is assumed to be Newtonian. For a Newtonian fluid, the deviatoric stress tensor is related to the flow velocity by

$$\mathbf{s} = \mu\left[\nabla\mathbf{v} + (\nabla\mathbf{v})^{T}\right] + \lambda\,(\nabla\cdot\mathbf{v})\,\mathbf{I}$$

where $\mu$ is the shear viscosity and $\lambda$ is the bulk viscosity.

Therefore, the divergence of $\mathbf{s}$ is given by

$$\nabla\cdot\mathbf{s} = \mu\,\nabla^{2}\mathbf{v} + (\mu+\lambda)\,\nabla(\nabla\cdot\mathbf{v}).$$

Using the identity $\nabla^{2}\mathbf{v} = \nabla(\nabla\cdot\mathbf{v}) - \nabla\times(\nabla\times\mathbf{v})$, we have

$$\nabla\cdot\mathbf{s} = (2\mu+\lambda)\,\nabla(\nabla\cdot\mathbf{v}) - \mu\,\nabla\times(\nabla\times\mathbf{v}).$$

The equations for the conservation of momentum may then be written as

$$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + (2\mu+\lambda)\,\nabla(\nabla\cdot\mathbf{v}) - \mu\,\nabla\times(\nabla\times\mathbf{v}) + \rho\,\mathbf{b}$$
Assumption 2: Irrotational flow
For most acoustics problems we assume that the flow is irrotational, that is, the vorticity is zero. In that case

$$\nabla\times\mathbf{v} = \mathbf{0}$$

and the momentum equation reduces to

$$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + (2\mu+\lambda)\,\nabla(\nabla\cdot\mathbf{v}) + \rho\,\mathbf{b}$$
Assumption 3: No body forces
Another frequently made assumption is that the effect of body forces on the fluid medium is negligible ($\mathbf{b}=\mathbf{0}$). The momentum equation then further simplifies to

$$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + (2\mu+\lambda)\,\nabla(\nabla\cdot\mathbf{v})$$
Assumption 4: No viscous forces
Additionally, if we assume that there are no viscous forces in the medium (the bulk and shear viscosities are zero, $\mu=\lambda=0$), the momentum equation takes the form

$$\rho\left(\frac{\partial\mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) + \nabla p = 0$$
Assumption 5: Small disturbances
An important simplifying assumption for acoustic waves is that the amplitude of the disturbance of the field quantities is small. This assumption leads to the linear or small signal acoustic wave equation. Then we can express the variables as the sum of the (time averaged) mean field ($\langle\cdot\rangle$) that varies in space and a small fluctuating field ($\tilde{\cdot}$) that varies in space and time. That is

$$p = \langle p\rangle + \tilde{p}, \qquad \rho = \langle\rho\rangle + \tilde{\rho}, \qquad \mathbf{v} = \langle\mathbf{v}\rangle + \tilde{\mathbf{v}}$$

Then the momentum equation can be expressed as

$$\left(\langle\rho\rangle+\tilde{\rho}\right)\left[\frac{\partial\tilde{\mathbf{v}}}{\partial t} + \left(\langle\mathbf{v}\rangle+\tilde{\mathbf{v}}\right)\cdot\nabla\left(\langle\mathbf{v}\rangle+\tilde{\mathbf{v}}\right)\right] + \nabla\left(\langle p\rangle+\tilde{p}\right) = 0$$

Since the fluctuations are assumed to be small, products of the fluctuation terms can be neglected (to first order) and we have

$$\langle\rho\rangle\left[\frac{\partial\tilde{\mathbf{v}}}{\partial t} + \langle\mathbf{v}\rangle\cdot\nabla\langle\mathbf{v}\rangle + \langle\mathbf{v}\rangle\cdot\nabla\tilde{\mathbf{v}} + \tilde{\mathbf{v}}\cdot\nabla\langle\mathbf{v}\rangle\right] + \tilde{\rho}\,\langle\mathbf{v}\rangle\cdot\nabla\langle\mathbf{v}\rangle + \nabla\left(\langle p\rangle+\tilde{p}\right) = 0$$
Assumption 6: Homogeneous medium
Next we assume that the medium is homogeneous, in the sense that the time averaged variables $\langle p\rangle$ and $\langle\rho\rangle$ have zero gradients, i.e.,

$$\nabla\langle p\rangle = \mathbf{0}, \qquad \nabla\langle\rho\rangle = \mathbf{0}$$

The momentum equation then becomes

$$\langle\rho\rangle\left[\frac{\partial\tilde{\mathbf{v}}}{\partial t} + \langle\mathbf{v}\rangle\cdot\nabla\langle\mathbf{v}\rangle + \langle\mathbf{v}\rangle\cdot\nabla\tilde{\mathbf{v}} + \tilde{\mathbf{v}}\cdot\nabla\langle\mathbf{v}\rangle\right] + \tilde{\rho}\,\langle\mathbf{v}\rangle\cdot\nabla\langle\mathbf{v}\rangle + \nabla\tilde{p} = 0$$
Assumption 7: Medium at rest
At this stage we assume that the medium is at rest, which implies that the mean flow velocity is zero, i.e., $\langle\mathbf{v}\rangle = \mathbf{0}$. Then the balance of momentum reduces to

$$\langle\rho\rangle\,\frac{\partial\tilde{\mathbf{v}}}{\partial t} + \nabla\tilde{p} = 0$$

Dropping the tildes and using $\rho_0 := \langle\rho\rangle$, we get the commonly used form of the acoustic momentum equation

$$\rho_0\,\frac{\partial\mathbf{v}}{\partial t} + \nabla p = 0.$$
Conservation of mass
The equation for the conservation of mass in a fluid volume (without any mass sources or sinks) is given by

$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0$$

where $\rho(\mathbf{x},t)$ is the mass density of the fluid and $\mathbf{v}(\mathbf{x},t)$ is the flow velocity.
The equation for the conservation of mass for an acoustic medium can also be derived in a manner similar to that used for the conservation of momentum.
Assumption 1: Small disturbances
From the assumption of small disturbances we have

$$p = \langle p\rangle + \tilde{p}, \qquad \rho = \langle\rho\rangle + \tilde{\rho}, \qquad \mathbf{v} = \langle\mathbf{v}\rangle + \tilde{\mathbf{v}}$$

Then the mass balance equation can be written as

$$\frac{\partial\tilde{\rho}}{\partial t} + \nabla\cdot\left[\left(\langle\rho\rangle+\tilde{\rho}\right)\left(\langle\mathbf{v}\rangle+\tilde{\mathbf{v}}\right)\right] = 0$$

If we neglect higher than first order terms in the fluctuations, the mass balance equation becomes

$$\frac{\partial\tilde{\rho}}{\partial t} + \nabla\cdot\left[\langle\rho\rangle\langle\mathbf{v}\rangle + \langle\rho\rangle\tilde{\mathbf{v}} + \tilde{\rho}\,\langle\mathbf{v}\rangle\right] = 0$$
Assumption 2: Homogeneous medium
Next we assume that the medium is homogeneous, i.e.,

$$\nabla\langle\rho\rangle = \mathbf{0}$$

Then the mass balance equation takes the form

$$\frac{\partial\tilde{\rho}}{\partial t} + \langle\rho\rangle\,\nabla\cdot\left(\langle\mathbf{v}\rangle+\tilde{\mathbf{v}}\right) + \tilde{\rho}\,\nabla\cdot\langle\mathbf{v}\rangle + \langle\mathbf{v}\rangle\cdot\nabla\tilde{\rho} = 0$$
Assumption 3: Medium at rest
At this stage we assume that the medium is at rest, i.e., $\langle\mathbf{v}\rangle = \mathbf{0}$. Then the mass balance equation can be expressed as

$$\frac{\partial\tilde{\rho}}{\partial t} + \langle\rho\rangle\,\nabla\cdot\tilde{\mathbf{v}} = 0$$
Assumption 4: Ideal gas, adiabatic, reversible
In order to close the system of equations we need an equation of state for the pressure. To do that we assume that the medium is an ideal gas and all acoustic waves compress the medium in an adiabatic and reversible manner. The equation of state can then be expressed in the form of the differential equation

$$\frac{dp}{d\rho} = \frac{\gamma\,p}{\rho} = c^2, \qquad \gamma = \frac{c_p}{c_v},$$

where $c_p$ is the specific heat at constant pressure, $c_v$ is the specific heat at constant volume, and $c$ is the wave speed. The value of $\gamma$ is 1.4 if the acoustic medium is air.

For small disturbances

$$\frac{\tilde{p}}{\tilde{\rho}} \approx \frac{dp}{d\rho} = c_0^2, \quad\text{i.e.}\quad \tilde{p} = c_0^2\,\tilde{\rho},$$

where $c_0$ is the speed of sound in the medium.

The balance of mass can then be written as

$$\frac{1}{c_0^2}\,\frac{\partial\tilde{p}}{\partial t} + \langle\rho\rangle\,\nabla\cdot\tilde{\mathbf{v}} = 0$$

Dropping the tildes and defining $\kappa := \langle\rho\rangle\,c_0^2$ gives us the commonly used expression for the balance of mass in an acoustic medium:

$$\frac{\partial p}{\partial t} + \kappa\,\nabla\cdot\mathbf{v} = 0.$$
Governing equations in cylindrical coordinates
If we use a cylindrical coordinate system $(r,\theta,z)$ with basis vectors $\mathbf{e}_r, \mathbf{e}_\theta, \mathbf{e}_z$, then the gradient of $p$ and the divergence of $\mathbf{v}$ are given by

$$\nabla p = \frac{\partial p}{\partial r}\,\mathbf{e}_r + \frac{1}{r}\,\frac{\partial p}{\partial\theta}\,\mathbf{e}_\theta + \frac{\partial p}{\partial z}\,\mathbf{e}_z$$

$$\nabla\cdot\mathbf{v} = \frac{\partial v_r}{\partial r} + \frac{v_r}{r} + \frac{1}{r}\,\frac{\partial v_\theta}{\partial\theta} + \frac{\partial v_z}{\partial z}$$

where the flow velocity has been expressed as $\mathbf{v} = v_r\,\mathbf{e}_r + v_\theta\,\mathbf{e}_\theta + v_z\,\mathbf{e}_z$.

The equations for the conservation of momentum may then be written as

$$\rho_0\,\frac{\partial\mathbf{v}}{\partial t} + \nabla p = 0$$

In terms of components, these three equations for the conservation of momentum in cylindrical coordinates are

$$\rho_0\,\frac{\partial v_r}{\partial t} + \frac{\partial p}{\partial r} = 0, \qquad \rho_0\,\frac{\partial v_\theta}{\partial t} + \frac{1}{r}\,\frac{\partial p}{\partial\theta} = 0, \qquad \rho_0\,\frac{\partial v_z}{\partial t} + \frac{\partial p}{\partial z} = 0.$$

The equation for the conservation of mass can similarly be written in cylindrical coordinates as

$$\frac{\partial p}{\partial t} + \kappa\left(\frac{\partial v_r}{\partial r} + \frac{v_r}{r} + \frac{1}{r}\,\frac{\partial v_\theta}{\partial\theta} + \frac{\partial v_z}{\partial z}\right) = 0.$$
Time harmonic acoustic equations in cylindrical coordinates
The acoustic equations for the conservation of momentum and the conservation of mass are often expressed in time harmonic form (at fixed frequency). In that case, the pressure and the flow velocity are assumed to be time harmonic functions of the form

$$p(\mathbf{x},t) = \hat{p}(\mathbf{x})\,e^{i\omega t}, \qquad \mathbf{v}(\mathbf{x},t) = \hat{\mathbf{v}}(\mathbf{x})\,e^{i\omega t}$$

where $\omega$ is the frequency. Substitution of these expressions into the governing equations in cylindrical coordinates gives us the fixed frequency form of the conservation of momentum

$$i\omega\,\rho_0\,\hat{v}_r + \frac{\partial\hat{p}}{\partial r} = 0, \qquad i\omega\,\rho_0\,\hat{v}_\theta + \frac{1}{r}\,\frac{\partial\hat{p}}{\partial\theta} = 0, \qquad i\omega\,\rho_0\,\hat{v}_z + \frac{\partial\hat{p}}{\partial z} = 0$$

and the fixed frequency form of the conservation of mass

$$i\omega\,\hat{p} + \kappa\left(\frac{\partial\hat{v}_r}{\partial r} + \frac{\hat{v}_r}{r} + \frac{1}{r}\,\frac{\partial\hat{v}_\theta}{\partial\theta} + \frac{\partial\hat{v}_z}{\partial z}\right) = 0.$$
Special case: No z-dependence
In the special case where the field quantities are independent of the z-coordinate we can eliminate $\hat{v}_r$ and $\hat{v}_\theta$ to get the Helmholtz equation

$$\frac{\partial^2\hat{p}}{\partial r^2} + \frac{1}{r}\,\frac{\partial\hat{p}}{\partial r} + \frac{1}{r^2}\,\frac{\partial^2\hat{p}}{\partial\theta^2} + \frac{\omega^2}{c_0^2}\,\hat{p} = 0$$

Assuming that the solution of this equation can be written as

$$\hat{p}(r,\theta) = R(r)\,Q(\theta)$$

we can write the partial differential equation as

$$\frac{r^2}{R}\,\frac{d^2R}{dr^2} + \frac{r}{R}\,\frac{dR}{dr} + \frac{r^2\,\omega^2}{c_0^2} = -\frac{1}{Q}\,\frac{d^2Q}{d\theta^2}$$

The left-hand side is not a function of $\theta$ while the right-hand side is not a function of $r$. Hence,

$$\frac{r^2}{R}\,\frac{d^2R}{dr^2} + \frac{r}{R}\,\frac{dR}{dr} + \frac{r^2\,\omega^2}{c_0^2} = \alpha^2, \qquad -\frac{1}{Q}\,\frac{d^2Q}{d\theta^2} = \alpha^2$$

where $\alpha^2$ is a constant. Using the substitution

$$\tilde{r} \leftarrow \left(\frac{\omega}{c_0}\right) r$$

these become

$$\tilde{r}^2\,\frac{d^2R}{d\tilde{r}^2} + \tilde{r}\,\frac{dR}{d\tilde{r}} + \left(\tilde{r}^2 - \alpha^2\right)R = 0, \qquad \frac{d^2Q}{d\theta^2} + \alpha^2\,Q = 0$$

The equation on the left is the Bessel equation, which has the general solution

$$R(r) = A_\alpha\,J_\alpha\!\left(\frac{\omega\,r}{c_0}\right) + B_\alpha\,J_{-\alpha}\!\left(\frac{\omega\,r}{c_0}\right)$$

where $J_\alpha$ is the cylindrical Bessel function of the first kind and $A_\alpha, B_\alpha$ are undetermined constants. The equation on the right has the general solution

$$Q(\theta) = C_\alpha\,e^{i\alpha\theta} + D_\alpha\,e^{-i\alpha\theta}$$

where $C_\alpha, D_\alpha$ are undetermined constants. Then the solution of the acoustic wave equation is

$$\hat{p}(r,\theta) = \left[A_\alpha\,J_\alpha\!\left(\frac{\omega\,r}{c_0}\right) + B_\alpha\,J_{-\alpha}\!\left(\frac{\omega\,r}{c_0}\right)\right]\left(C_\alpha\,e^{i\alpha\theta} + D_\alpha\,e^{-i\alpha\theta}\right)$$

Boundary conditions are needed at this stage to determine $\alpha$ and the other undetermined constants.
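For readers who want to experiment, here is a hedged sketch (constants chosen arbitrarily by us; it assumes SciPy's Bessel function scipy.special.jv) that evaluates the separated solution for a non-integer order:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_alpha

# A sketch (constants arbitrary, ours) evaluating the separated solution above.
# Note: for integer alpha, J_{-n} = (-1)^n J_n, so the two radial terms are not
# independent and a Bessel function of the second kind would be needed instead.

alpha = 0.5                        # non-integer order
omega = 2 * np.pi * 1000.0         # angular frequency, rad/s
c0 = 343.0                         # speed of sound in air, m/s
A, B, C, D = 1.0, 0.0, 1.0, 0.0    # the undetermined constants, chosen arbitrarily

def p_hat(r, theta):
    radial = A * jv(alpha, omega * r / c0) + B * jv(-alpha, omega * r / c0)
    angular = C * np.exp(1j * alpha * theta) + D * np.exp(-1j * alpha * theta)
    return radial * angular

print(p_hat(0.1, np.pi / 4))
```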
Instruction pipelining is a technique that implements a form of parallelism called instruction-level parallelism within a single processor. It therefore allows faster CPU throughput (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given clock rate. The basic instruction cycle is broken up into a series called a pipeline. Rather than processing each instruction sequentially (finishing one instruction before starting the next), each instruction is split up into a sequence of dependent steps so different steps can be executed in parallel and instructions can be processed concurrently (starting one instruction before finishing the previous one).
The first step is always to fetch the instruction from memory; the final step is usually writing the results of the instruction to processor registers or to memory. Pipelining seeks to let the processor work on as many instructions as there are dependent steps, just as an assembly line builds many vehicles at once, rather than waiting until one vehicle has passed through the line before admitting the next one. Just as the goal of the assembly line is to keep each assembler productive at all times, pipelining seeks to keep every portion of the processor busy with some instruction. Pipelining lets the computer's cycle time be the time of the slowest step, and ideally lets one instruction complete in every cycle.
Pipelining increases instruction throughput by performing multiple operations at the same time, but does not reduce latency, the time needed to complete a single instruction. Indeed, pipelining may increase latency due to additional overhead from breaking the computation into separate steps, and depending on how often the pipeline stalls or needs to be flushed.
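A toy cycle count (a sketch under the ideal assumptions above: no stalls or flushes) makes the throughput-versus-latency distinction concrete:

```python
# A toy model (ours, assuming an ideal pipeline with no stalls or flushes):
# a k-stage pipeline finishes n instructions in n + k - 1 cycles, while a purely
# sequential processor needs n * k cycles. Per-instruction latency stays k cycles.

def pipelined_cycles(n, k):
    return n + k - 1

def sequential_cycles(n, k):
    return n * k

n, k = 1000, 5
print(pipelined_cycles(n, k))    # 1004
print(sequential_cycles(n, k))   # 5000
```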
The term pipeline is an analogy to the fact that there is fluid in each link of a pipeline, as each part of the processor is occupied with work.
Central processing units (CPUs) are driven by a clock. Each clock pulse need not do the same thing; rather, logic in the CPU directs successive pulses to different places to perform a useful sequence. There are many reasons that the entire execution of a machine instruction cannot happen at once; in pipelining, effects that cannot happen at the same time are made into dependent steps of the instruction.
For example, if one clock pulse latches a value into a register or begins a calculation, it will take some time for the value to be stable at the outputs of the register or for the calculation to complete. As another example, reading an instruction out of a memory unit cannot be done at the same time that an instruction writes a result to the same memory unit.
Number of steps
The number of dependent steps varies with the machine architecture. For example:
- The IBM Stretch project proposed the terms Fetch, Decode, and Execute that have become common.
- The classic RISC pipeline comprises:
- Instruction fetch
- Instruction decode and register fetch
- Execute
- Memory access
- Register write back
- The Atmel AVR and the PIC microcontroller each have a two-stage pipeline.
- Many designs include pipelines as long as 7, 10 and even 20 stages (as in the Intel Pentium 4).
- The later "Prescott" and "Cedar Mill" Netburst cores from Intel, used in the latest Pentium 4 models and their Pentium D and Xeon derivatives, have a long 31-stage pipeline.
- The Xelerated X10q Network Processor has a pipeline more than a thousand stages long.
As the pipeline is made "deeper" (with a greater number of dependent steps), a given step can be implemented with simpler circuitry, which may let the processor clock run faster. Such pipelines may be called superpipelines.
A processor is said to be fully pipelined if it can fetch an instruction on every cycle. Thus, if some instructions or conditions require delays that inhibit fetching new instructions, the processor is not fully pipelined.
The model of sequential execution assumes that each instruction completes before the next one begins; this assumption is not true on a pipelined processor. A situation where the expected result is problematic is known as a hazard. Imagine the following two register instructions to a hypothetical RISC processor:
1: add 1 to R5
2: copy R5 to R6
If the processor has the 5 steps listed in the initial illustration, instruction 1 would be fetched at time t1 and its execution would be complete at t5. Instruction 2 would be fetched at t2 and would be complete at t6. The first instruction might deposit the incremented number into R5 as its fifth step (register write back) at t5. But the second instruction might get the number from R5 (to copy to R6) in its second step (instruction decode and register fetch) at time t3. It seems that the first instruction would not have incremented the value by then. The above code invokes a hazard.
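A small timeline sketch (stage names and issue times taken from the example; the helper function is ours) shows the collision explicitly:

```python
# A timeline sketch (ours) of the hazard just described: instruction 2 reads R5
# at t3, two cycles before instruction 1 writes it back at t5.

STAGES = ["fetch", "decode/register fetch", "execute", "memory", "write back"]

def stage_times(issue_cycle):
    """Cycle in which each stage runs for an instruction issued at issue_cycle."""
    return {stage: issue_cycle + offset for offset, stage in enumerate(STAGES)}

i1 = stage_times(1)   # instruction 1 fetched at t1
i2 = stage_times(2)   # instruction 2 fetched at t2

print(i1["write back"])             # 5: R5 is updated here
print(i2["decode/register fetch"])  # 3: R5 is read here -- before the update
```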
Writing computer programs in a compiled language might not raise these concerns, as the compiler could be designed to generate machine code that avoids hazards.
In some early DSP and RISC processors, the documentation advises programmers to avoid such dependencies in adjacent and nearly adjacent instructions (called delay slots), or declares that the second instruction uses an old value rather than the desired value (in the example above, the processor might counter-intuitively copy the unincremented value), or declares that the value it uses is undefined. The programmer may have unrelated work that the processor can do in the meantime; or, to ensure correct results, the programmer may insert NOPs into the code, partly negating the advantages of pipelining.
Pipelined processors commonly use three techniques to work as expected when the programmer assumes that each instruction completes before the next one begins:
- Processors that can compute the presence of a hazard may stall, delaying processing of the second instruction (and subsequent instructions) until the values it requires as input are ready. This creates a bubble in the pipeline (see below), also partly negating the advantages of pipelining.
- Some processors can not only compute the presence of a hazard but can compensate by having additional data paths that provide needed inputs to a computation step before a subsequent instruction would otherwise compute them, an attribute called operand forwarding.
- Some processors can determine that instructions other than the next sequential one are not dependent on the current ones and can be executed without hazards. Such processors may perform out-of-order execution.
A branch out of the normal instruction sequence often involves a hazard. Unless the processor can give effect to the branch in a single time cycle, the pipeline will continue fetching instructions sequentially. Such instructions cannot be allowed to take effect because the programmer has diverted control to another part of the program.
A conditional branch is even more problematic. The processor may or may not branch, depending on a calculation that has not yet occurred. Various processors may stall, may attempt branch prediction, and may be able to begin to execute two different program sequences (eager execution), both assuming the branch is and is not taken, discarding all work that pertains to the incorrect guess.[a]
A processor with an implementation of branch prediction that usually makes correct predictions can minimize the performance penalty from branching. However, if branches are predicted poorly, it may create more work for the processor, such as flushing from the pipeline the incorrect code path that has begun execution before resuming execution at the correct location.
Programs written for a pipelined processor deliberately avoid branching to minimize possible loss of speed. For example, the programmer can handle the usual case with sequential execution and branch only on detecting unusual cases. Using programs such as gcov to analyze code coverage lets the programmer measure how often particular branches are actually executed and gain insight with which to optimize the code.
- Self-modifying programs
- The technique of self-modifying code can be problematic on a pipelined processor. In this technique, one of the effects of a program is to modify its own upcoming instructions. If the processor has an instruction cache, the original instruction may already have been copied into a prefetch input queue and the modification will not take effect.
- Uninterruptible instructions
- An instruction may be uninterruptible to ensure its atomicity, such as when it swaps two items. A sequential processor permits interrupts between instructions, but a pipelined processor overlaps instructions, so executing an uninterruptible instruction renders portions of ordinary instructions uninterruptible too. The Cyrix coma bug would hang a single-core system using an infinite loop in which an uninterruptible instruction was always in the pipeline.
- Pipelining keeps all portions of the processor occupied and increases the amount of useful work the processor can do in a given time. Pipelining typically reduces the processor's cycle time and increases the throughput of instructions. The speed advantage is diminished to the extent that execution encounters hazards that require execution to slow below its ideal rate. A non-pipelined processor executes only a single instruction at a time; the start of the next instruction is delayed unconditionally, not just when a hazard arises.
- A pipelined processor's need to organize all its work into modular steps may require the duplication of registers that increases the latency of some instructions.
- By making each dependent step simpler, pipelining can enable complex operations more economically than adding complex circuitry, such as for numerical calculations. However, a processor that declines to pursue increased speed with pipelining may be simpler and cheaper to manufacture.
- Compared to environments where the programmer needs to avoid or work around hazards, use of a non-pipelined processor may make it easier to program and to train programmers. The non-pipelined processor also makes it easier to predict the exact timing of a given sequence of instructions.
To the right is a generic pipeline with four stages: fetch, decode, execute and write-back. The top gray box is the list of instructions waiting to be executed, the bottom gray box is the list of instructions that have had their execution completed, and the middle white box is the pipeline.
The execution is as follows:
At cycle 0, the four instructions are waiting to be executed; by cycle 9, the execution of all four instructions is completed.
A pipelined processor may deal with hazards by stalling and creating a bubble in the pipeline, resulting in one or more cycles in which nothing useful happens.
In the illustration at right, in cycle 3, the processor cannot decode the purple instruction, perhaps because the processor determines that decoding depends on results produced by the execution of the green instruction. The green instruction can proceed to the Execute stage and then to the Write-back stage as scheduled, but the purple instruction is stalled for one cycle at the Fetch stage. The blue instruction, which was due to be fetched during cycle 3, is stalled for one cycle, as is the red instruction after it.
Because of the bubble (the blue ovals in the illustration), the processor's Decode circuitry is idle during cycle 3. Its Execute circuitry is idle during cycle 4 and its Write-back circuitry is idle during cycle 5.
When the bubble moves out of the pipeline (at cycle 6), normal execution resumes. But everything now is one cycle late. It will take 8 cycles (cycle 1 through 8) rather than 7 to completely execute the four instructions shown in colors.
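The cycle counts above follow a simple formula: with S stages and N instructions, an unstalled pipeline finishes in N + S − 1 cycles, and each one-cycle bubble adds one cycle. A quick sketch in Python:

def total_cycles(n_instructions, n_stages, n_bubbles=0):
    # An ideal in-order pipeline finishes in N + S - 1 cycles; each one-cycle bubble adds one more.
    return n_instructions + n_stages - 1 + n_bubbles

print(total_cycles(4, 4))               # 7 cycles: four instructions, four stages, no stall
print(total_cycles(4, 4, n_bubbles=1))  # 8 cycles with the single bubble described above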
Pipelining began in earnest in the late 1970s in supercomputers such as vector processors and array processors. One of the early supercomputers was the Cyber series built by Control Data Corporation. Its main architect, Seymour Cray, later headed Cray Research. Cray developed the XMP line of supercomputers, using pipelining for both multiply and add/subtract functions. Later, Star Technologies added parallelism (several pipelined functions working in parallel), developed by Roger Chen. In 1984, Star Technologies added the pipelined divide circuit developed by James Bradley. By the mid 1980s, supercomputing was used by many different companies around the world.
- Early pipelined processors without any of these heuristics, such as the PA-RISC processor of Hewlett-Packard, dealt with hazards by simply warning the programmer; in this case, that one or more instructions following the branch would be executed whether or not the branch was taken. This could be useful; for instance, after computing a number in a register, a conditional branch could be followed by loading into the register a value more useful to subsequent computations in both the branch and the non-branch case.
What Is Decimal Data Type in SQL?
In SQL, the Decimal data type is used to store numeric values with a fixed precision and scale. It is commonly used to represent monetary or financial data, where accuracy is of utmost importance. The precision determines the total number of digits that can be stored, while the scale controls the number of decimal places.
Precision and Scale
The precision refers to the total number of digits that can be stored in a decimal value. This includes both the digits before and after the decimal point.
For example, a decimal column with a precision of 5 can store values like 123.45 or -9876. A higher precision allows for more accurate representation of numbers.
The scale, on the other hand, specifies the maximum number of decimal places that can be stored in a decimal value. For instance, if a column has a scale of 2, it can store numbers like 10.25 or -3.99.
The combination of precision and scale helps ensure that numeric values are stored accurately without any loss of information.
Declaring Decimal Columns
In SQL, when creating a table, you can specify a column as having the Decimal data type by using the DECIMAL(precision, scale) or NUMERIC(precision, scale) syntax. Here’s an example:
CREATE TABLE Products (
    Price DECIMAL(8, 2),
    Quantity DECIMAL(5)
);
In this example, we have two columns: Price and Quantity. The Price column has a precision of 8 with 2 decimal places (e.g., $1234.56), while the Quantity column has a precision of 5 with no decimal places (e.g., 1000).
Performing Arithmetic Operations
Decimal data types allow for precise arithmetic calculations, ensuring accurate results. When performing calculations on decimal values, the precision and scale of the result are determined by the database engine based on the input values and the operation performed.
For example, let’s say we have two decimal columns: Price and Quantity. We can calculate the total value of a product by multiplying these two columns:
SELECT Price * Quantity AS TotalValue FROM Products;
In this query, the database engine will automatically determine the precision and scale of the TotalValue column based on the inputs (Price and Quantity). The result will be a decimal value with an appropriate precision and scale to maintain accuracy.
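As a concrete illustration, consider the Products table declared earlier. The row below is hypothetical, and the result-type rule noted in the comments is SQL Server's; other engines derive the result's precision and scale slightly differently.

-- Hypothetical row for the Products table defined above.
INSERT INTO Products (Price, Quantity) VALUES (1234.56, 1000);

-- Under SQL Server's rules, multiplication yields precision p1 + p2 + 1 and
-- scale s1 + s2, so DECIMAL(8, 2) * DECIMAL(5, 0) produces DECIMAL(14, 2).
SELECT Price * Quantity AS TotalValue FROM Products;
-- TotalValue = 1234560.00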
- The Decimal data type in SQL is used to store numeric values with fixed precision and scale.
- Precision determines the total number of digits that can be stored, while scale controls the number of decimal places.
- Decimal columns are declared using DECIMAL(precision, scale) or NUMERIC(precision, scale) syntax.
- Arithmetic operations on decimal values maintain accuracy by adjusting precision and scale as needed.
The Decimal data type is essential for handling monetary or financial data accurately in SQL databases. By understanding its properties and usage, you can ensure accuracy when storing and manipulating numeric values in your database tables. |
3.1 Coordinate Systems
As is typical in computer graphics, pbrt represents three-dimensional points, vectors, and normal vectors with three coordinate values: x, y, and z. These values are meaningless without a coordinate system that defines the origin of the space and gives three linearly independent vectors that define the x, y, and z axes of the space. Together, the origin and three vectors are called the frame that defines the coordinate system. Given an arbitrary point or direction in 3D, its coordinate values depend on its relationship to the frame. Figure 3.1 shows an example that illustrates this idea in 2D.
In the general n-dimensional case, a frame's origin pₒ and its n linearly independent basis vectors v₁, …, vₙ define an n-dimensional affine space. All vectors v in the space can be expressed as a linear combination of the basis vectors. Given a vector v and the basis vectors vᵢ, there is a unique set of scalar values s₁, …, sₙ such that

v = s₁v₁ + … + sₙvₙ

The scalars sᵢ are the representation of v with respect to the basis and are the coordinate values that we store with the vector. Similarly, for all points p, there are unique scalars sᵢ such that the point can be expressed in terms of the origin pₒ and the basis vectors:

p = pₒ + s₁v₁ + … + sₙvₙ
Thus, although points and vectors are both represented by x, y, and z coordinates in 3D, they are distinct mathematical entities and are not freely interchangeable.
This definition of points and vectors in terms of coordinate systems reveals a paradox: to define a frame we need a point and a set of vectors, but we can only meaningfully talk about points and vectors with respect to a particular frame. Therefore, in three dimensions we need a standard frame with origin (0, 0, 0) and basis vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1). All other frames will be defined with respect to this canonical coordinate system, which we call world space.
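As a minimal numeric sketch of these definitions (Python with NumPy for illustration; pbrt itself is written in C++, and nothing below is pbrt code), the coordinates of a point with respect to a frame can be recovered by solving p = pₒ + s₁v₁ + s₂v₂ + s₃v₃ for (s₁, s₂, s₃):

import numpy as np

p_o = np.array([1.0, 0.0, 0.0])                             # frame origin
basis = np.column_stack([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # v1, v2, v3 as columns
p = np.array([2.0, 3.0, 4.0])                               # point, in world-space coordinates

s = np.linalg.solve(basis, p - p_o)  # coordinates of p with respect to the frame
print(s)                             # [3. 1. 4.]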
3.1.1 Coordinate System Handedness
There are two different ways that the three coordinate axes can be arranged, as shown in Figure 3.2. Given perpendicular x and y coordinate axes, the z axis can point in one of two directions. These two choices are called left-handed and right-handed. The choice between the two is arbitrary but has a number of implications for how some of the geometric operations throughout the system are implemented. pbrt uses a left-handed coordinate system.
2. Graphs of Linear Functions
It is very important for many math topics to know how to quickly sketch straight lines. When we use math to model real-world problems, it is worthwhile to have a sense of how straight lines "work" and what they look like.
We met this topic before in The Straight Line. The following section serves as a reminder for you.
a. Slope-Intercept Form of a Straight Line: `y = mx + c`
If the slope (also known as gradient) of a line is m, and the y-intercept is c, then the equation of the line is written:
`y = mx + c`
The line `y = 2x + 6` has slope `m = 2` and `y`-intercept `c = 6`.
Graph of the linear equation `y = 2x+6`.
b. Intercept Form of a Straight Line: `ax + by = c`
Often a straight line is written in the form ax + by = c. One way we can sketch this is by finding the x- and y-intercepts and then joining those intercepts.
Sketch the line 3x + 2y = 6.
The x-intercept (that is, when `y = 0`) is:
3x = 6
x = 2.
The y-intercept (that is, when `x = 0`) is:
2y = 6
y = 3.
Joining the intercepts `(2, 0)` and `(0, 3)` gives the graph of the straight line 3x + 2y = 6:
Graph of the linear equation `3x + 2y = 6`.
Slope of a Line
The slope (or gradient) of a straight line is given by:
`m=text(vertical rise)/text(horizontal run)`
We can also write the slope of the straight line passing through the points `(x_1, y_1)` and `(x_2, y_2)` as:

`m = (y_2 − y_1)/(x_2 − x_1)`

Using this expression for slope, we can derive the following.
c. Point-slope Form of a Straight Line: `y − y_1= m(x − x_1)`
If a line passes through the point `(x_1, y_1)` and has slope `m`, then the equation of the line is given by:
`y − y_1= m(x − x_1)`
Find the equation of the line with slope `−3`, and which passes through `(2, −4)`. |
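Using the point-slope form above, a quick worked solution: `y − (−4) = −3(x − 2)`, that is, `y + 4 = −3x + 6`, which simplifies to `y = −3x + 2`.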
The Card System
Students participate in a system for learning names, dates and places in history classes. They apply the knowledge in discussions, on tests and in written work.
The US Financial System
Here is a unique activity in which learners simulate the operations of a fractional reserve banking system, ultimately gaining a better understanding of how banks work and how money is created through lending. It includes a Story of...
9th - 12th Social Studies & History CCSS: Adaptable
The Federal Reserve System: Unit
Have your class investigate the functions of the Federal Reserve Banks in this 29 page unit. They participate in a banking activity that explores the fractional reserve banking system. They identify the three basic functions of the...
10th - Higher Ed Social Studies & History
Sources of Law
From where do United States citizens derive their laws? This resource offers an overview of the various sources of law, such as the Constitution, statutes passed by Congress, and judicial precedents established through court cases. It...
6th - 12th Social Studies & History CCSS: Designed
Credit Reports—and You Thought Your Report Card Was Important
Get the facts about credit and take a close look at what factors into a consumer credit report with this fantastic lesson. Your pupils will read informational texts, read sample financial documents, and discuss the advantages and...
9th - 12th Math CCSS: Designed
The Fed Explains: Payments System
How is the Federal Reserve like your body's circulatory system? A short video explains the ways the Fed keeps money moving through the economy much like blood passes through veins and arteries. Focusing on retail payments, the video...
4 mins 9th - 12th Math CCSS: Adaptable
What Gives a Dollar Bill Its Value?
What makes a one hundred-dollar bill valuable? Here is an excellent, straightforward animated video to help your learners grasp the concept of inflation and the purpose and policies of the Federal Reserve System.
4 mins 10th - 12th Social Studies & History CCSS: Adaptable |
The legacy of the transatlantic slave trade still lives on. It began in the 15th century and only ended in the 19th. Even today, the descendants of slaves deal with horrific racism. This led to the rise of the Black Lives Matter movement in the US. Nothing in human history compares with the slave trade's magnitude, cruelty or sustained brutality.
Slavery was not a new institution in the 15th century. It existed even before the Middle Ages. In ancient times, the losing side in war was often enslaved and made to pay for its misfortune with servitude. Slavery was common in the Roman world. In the 10th century, the Vikings captured men and women in their raids and then sold them off in the slave markets along the Volga River and the Caspian Sea.
Black Lives Matter Shines the Spotlight on the Shadow of Slavery
Later, the Ottomans acquired European slaves, trained them in the art of war and made them a part of their standing armies. The Ottoman janissaries are a classic example. Some slaves rose to become kings and ruled as Mamluks in Egypt and India. In the 9th century, a large number of slaves were imported into Iraq from Zanzibar to clear the local swamps. These were called the Zanj, which derives from zanjir, a Farsi word that means an iron shackle or chain.
Yet it is important to remember that slavery in ancient and medieval societies was appreciably different from the type that Europeans introduced into the Americas. In earlier societies, the slave was held in servitude for a limited time for specific reasons, and the color of the skin was not as big a factor in whether a person did or did not become a slave. In most cases, the slave had some rights that the master had to respect. In ancient Egypt, Greece and Rome, there were clearly defined codes of conduct governing the relationship between slaves and their masters. The main point is simple: slaves of the ancient world were treated with some humanity.
The distinguishing feature of the modern slave trade is its total dehumanization of the slave. This continued for four centuries and paved the way for the colonization of Africa. Slaves were considered no better than cattle or chattel who could be bought and sold in the market. Many Christian churches played a grim role in creating this narrative of a sub-human sub-Saharan Africa. Many European thinkers created an elaborate ideology of racial superiority in the 17th and 18th centuries. They forged the myth of a people with no history and culture.
What Kicked Off Modern Slavery?
Early in the 15th century, Europe began to recover from the wounds of the Middle Ages. European skill in ship-building had improved and the continent was searching for sources of food supply. Europe wanted to feed its hungry population, earn fortunes through trade and conquer new worlds. Unsurprisingly, Europeans began to venture beyond their shores. Before the 15th century, Europeans had not embarked upon worldwide exploration. Their ships were small, unsuitable and unsafe for long sea journeys. They were powered by oars and crude sails. Cartography was primitive and so were navigating instruments, making sailing on the wide-open ocean suicidal. Most Europeans also thought the world was flat and did not want to fall off the edge.
Once Europeans acquired the knowledge and the skill to sail the high seas, they began a process of world domination that arguably still continues to this day. Modern slavery, however, began before Europe took to the high seas. The Portuguese and the Spaniards started closer to home. They targeted the Moors, some of whom were of the same race though not the same religion. It began with piracy on the Moroccan coast in 1441, the year Pope Eugene IV declared that not only pagans but also Jews, heretics and schismatics would “go into the eternal fire.” In 1442, the pope further decreed that “capturing the Moors as slaves was a part of the [crusade] and whoever sailed south in this pursuit would receive ablution of his sins.” Now, the slave traders had god on their side.
In 1444, the Portuguese Lagos Company was chartered under the patronage of Prince Henry. By 1465, Portugal “was transporting more than a thousand slaves a year from southern Morocco, Mauritania and Sene-Gambia,” writes Nazeer Ahmed. The Portuguese had tasted blood. Now, they advanced relentlessly along the African coast. They traded Andalusian silk, crude arms and horses for gold, ivory and slaves. They built ships along the coast to further their domination and trade. By 1490, the Portuguese were shipping more than 3,000 slaves a year from Africa.
Until then, there was nothing terribly unusual about this slave trade. It fit an old pattern. Slaves from Europe had been sold in Egypt, Central Asia and even faraway India, while slaves from sub-Saharan Africa had been sold in North Africa, Spain and India. In fact, the slave trade was declining by 1500. The European market was saturated. By the early 16th century, Lisbon alone had 10,000 African slaves, 10% of its population. There was no space for any more slaves.
Something fundamental had changed though. Generations of Americans have been taught that in “1492, Columbus sailed the ocean blue.” Indeed, thanks to Columbus, Europeans had discovered a brave new world. A small trade in ivory, gold and slaves was about to be transformed into an intricate global web of trade, piracy and politics. In 1497, Europeans landed on the island of Hispaniola in modern-day Haiti, and by 1517, they were on the American mainland. They came not only with guns, cannons, lances, swords and horses, but also measles, smallpox, influenza and the bubonic plague.
Recent studies indicate as many as 56 million people died by the beginning of the 1600s. To put this in perspective, this number comprised 90% of the pre-Columbian indigenous population and 10% of the then-global population. This phenomenon has been termed the “Great Dying” and is the largest human mortality event in terms of percentage in all of history.
Initially, the Spanish looted Mayan gold. Then, they enslaved indigenous people and made them work in silver mines. Soon, they discovered there were greater fortunes to be made through sugar. The discovery of the New World had led to the greatest agricultural exchange in history. Crops like potatoes, tomatoes, corn and chilies traveled east, while sugar, cotton and wheat traveled west. Soon, the Spanish and the Portuguese discovered that sugar, a crop first grown in India, grew wonderfully well in Cuban and Brazilian climes.
Until the 16th century, sugar was a luxury good. It made its way from India through Muslim and Venetian merchants to the tables of the rich and mighty. Now, there was a killing to be made by growing it in the New World. The indigenous population was unsuited for this sort of labor and was dying off anyway. So, plantations needed labor. Indentured labor from Europe proved insufficient. African slave labor was ideal because plantation owners did not have to pay them wages and Africans were immune to Old World diseases.
In 1515, the first shipload of sugar arrived in Spain from Cuba. Since Native Americans did not quite survive European diseases, plantations needed to import labor. So, in 1518, the first shipload of slaves arrived in Cuba from West Africa. On the way to Africa, European ships carried goods like brandy, rum and guns. The infamous triangular slave trade was born. In the late 18th and early 19th century, this trade really took off. The Oxford Encyclopedia of Economic History states that the population of slaves in Cuba shot up from 39,000 in 1773 to over 400,000 in the 1840s. As industrialization took off, “the Cuban sugar industry became the most technologically advanced in the world, utilizing narrow-gauged railroads, steam-driven mills, and centrifuges to produce approximately one-third of the world’s sugar.”
The Portuguese emulated the Spanish in Brazil. In North America, tobacco and cotton plantations appeared. All of these were worked by slaves. The Europeans pursued a policy of ethnic cleansing in the Americas and of enslavement in Africa to make their fortunes and give birth to the modern international transatlantic capitalist system.
Triangular Trade, Joint-Stock Company, Race and Family
The processing of sugar yields molasses as a byproduct. Fermented molasses can be made into rum. Soon, not only sugar but also molasses made their way to rum distilleries that sprang up in England, Holland and France. Much of the rum was consumed in Europe, but some of it found its way to West Africa. European merchants started paying for the slaves with rum, guns, horses, cheap industrial products and even fine muslin cloths from India. Guns were in demand by the African slave agents. They made hunting down slaves much easier. Rum, guns and slaves destabilized West Africa. They enabled men to get drunk, dehumanize and kill each other. These were negative externalities of an enormously profitable sugar, rum and slave trade. In the process, Europe and America grew rich while Africa was bled dry.
The story of the African slave trade is essentially the story of the second rise of Europe. After the collapse of the Roman Empire in the 5th century, the Catholic Church culturally unified Europe but the continent retreated inward. The Crusades focused on recovering former Roman territory and ultimately failed. The rise of Islam closed the doors to the Middle East by the 15th century. Europe began a new start by expelling the Moors from Spain and expanding to the New World. They found new markets, materials, manpower, land and fortunes.
The slave trade was no business for the common man. It required enormous capital. Ships had to be built, soldiers hired, fortifications erected and depots maintained in distant lands. Initially, only the kings, noblemen and rich merchants could supply this capital. Portugal wanted to keep a monopoly on this trade. Therefore, it sought justification for its slave-trading position as well as its early discoveries through Papal Bulls. However, the lure of profits was too great to keep out interlopers. Dutch, English and French pirates started preying on Portuguese shipping in the 16th century, with rich merchants in London, Liverpool, Paris and Amsterdam financing their expeditions. On occasions, even their monarchs participated.
At some point, the Dutch engineered a clever revolution. They became the first Europeans to open up expensive international trade for broader participation. They came up with the ingenious idea of a joint-stock company, arguably the single most important development of the 17th century. Following their suit, the English and the French set up joint-stock companies too, transforming trade, business and eventually the entire global economy.
Eventually, British companies backed by the Royal Navy took pole position in the global race for trade, fortune, power and empire. It all began with initial raids on Spanish shipping, which had yielded valuable silver that helped prop up the English currency. Indeed, much of the rivalry between the European powers in the 16th and 17th centuries was for gold and silver, the then-international standards of exchange. This was the era of mercantilism where every country tried to have a positive balance of trade. Commodities were exported in return for gold and silver. Exports increased the supply of precious metals, which backed up the currency of a state. Imports had the opposite effect because they led to the drain of gold and silver.
Spain had grown rich and powerful on the supply of Mexican silver, while England had lost much of its silver to imports from other countries such as Morocco and Holland. When Elizabeth I ascended the throne in 1558, the English pound had lost much of its value. Elizabethan pirates such as Sir Francis Drake and Sir Walter Raleigh saved the day for England and laid the ground for its prosperity. Their successors in the form of joint-stock companies made Britain richer than its wildest dreams.
In the era of joint-stock companies, the transatlantic slave trade exploded. The United Kingdom’s National Archives tell us that “Britain transported 3.1 million Africans (of whom 2.7 million arrived) to the British colonies in the Caribbean, North and South America and to other countries” between 1640 and 1807. An estimated 7 million slaves were transported from Africa to America in the 18th century. This figure for the period between the 16th to the 19th century is estimated at 10 to 12 million.
Such was the madness of the time that a major argument raged among the slave traders. For a hundred years, they argued about the pros and cons of a tight pack and a loose pack. When the slaves were tightly packed on the ship, more slaves could be taken to the New World but more died on board. In a loose pack, more survived the journey because slaves had more breathing space.
Eventually, the Industrial Revolution lowered the demand for slaves. Machine labor became cheaper than human labor, including slave labor. Transporting and feeding slaves cost money. Besides, public opinion started to turn against the institution of slavery by the end of the 18th century. By 1807, Britain outlawed the international slave trade and the Royal Navy started blocking slave ships. Other European countries followed suit. In the New World, slavery continued. There were many competing systems of slavery here.
In South America and in the West Indies, the slave master did not outlaw the African drum, African ornamentations, African religion or other things dear to the Africans that they had often carried over from their ancestral home. This permitted a form of cultural continuity among the slaves in the West Indies, Cuba and South America. Here, plantation owners would buy a shipload or half a shipload of slaves. These slaves usually came from the same areas in Africa and spoke the same language. They retained a sense of community and the same basic culture. On the whole, families were kept together. If a slave on an island was sold to a plantation owner at the other end of the island, he could still walk to see his relatives.
In the United States, plantation owners were determined to destroy every element of slave culture. The system was deeply cruel. No other system did more to demean and destroy the personality of the slave than the American one. Plantation owners ruthlessly sold family members in the free market, ensuring that relatives increasingly lived away from each other. The American slave system operated almost like the American brokerage system.
If a person bought 20 slaves at the beginning of the week, he would, if the price was right, sell 10. These men could then be resold within a few days. The family, the most meaningful and valuable thing in African life, was systematically and deliberately destroyed. Conditions in South America were far from humane, but slaves managed to stay in their family groups and, therefore, preserved some of their cultural continuity. Their lot was still miserable, though, and slavery was tragic for all Africans in all of the Americas.
To justify their barbarity to Africans at home, Europeans needed an ideology. Enslaving human beings is upsetting, but making animals work for human benefit is eminently justifiable. Thus was born the monster of modern racism and the elaborate fiction of the civilizing mission or what Rudyard Kipling called “the White Man’s Burden.” Any visit to a slave fort in Africa shows how this civilizing mission was fundamentally uncivilized and inhuman. In Elmina Castle in modern-day Ghana, slaves were chained, confined to dungeons with no toilets, branded and sent through the “Door of No Return” to the New World. Conditions on slave ships and plantations were horrific. Slaves were beaten, tortured, raped and often worked to death.
Instead of inspiring an exchange and amalgam of cultures, Christopher Columbus set in motion a tragic clash of cultures that has not ended to this day. The Black Lives Matter movement is in no small part due to his toxic legacy in the New World. It is underpinned by a terrible European idea that the white race was superior and the only one that produced anything worthy of being called a culture.
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy. |
2. Click HERE to complete an activity on perimeter.
3. Click HERE to complete a 2nd activity on perimeter.
4. Complete the worksheet on perimeter.
If you finish early....
5. Create a video on Explain Everything on how to find the perimeter of a shape. Follow the directions below.
Page 1- Title Page: How to Find the Perimeter of a Shape
Page 2- perimeter of an EQUILATERAL TRIANGLE
(all the sides are the same length)
Page 3- perimeter of a SQUARE (all the sides are the same length)
Page 4- perimeter of a RECTANGLE (opposite sides are the same length).
Page 5- Concluding Page- Thanks for watching! |
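(Standard formulas you can use in your video: perimeter of an equilateral triangle = 3 × side length; perimeter of a square = 4 × side length; perimeter of a rectangle = 2 × (length + width).)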
As British royal families fought the War of the Roses in the 1400s for control of England's throne, a grouping of stars was waging its own contentious skirmish — a star war far away in the Orion Nebula.
The stars were battling each other in a gravitational tussle, which ended with the system breaking apart and at least three stars being ejected in different directions. The speedy, wayward stars went unnoticed for hundreds of years until, over the past few decades, two of them were spotted in infrared and radio observations, which could penetrate the thick dust in the Orion Nebula.
This dramatic view of the center of the Orion Nebula reveals the home of three speedy, wayward stars that were members of a now-defunct multiple-star system. The stellar grouping broke apart 500 years ago, flinging the three stars out of their birthplace.
The image, taken by NASA's Hubble Space Telescope, combines observations taken in visible light with the Advanced Camera for Surveys and in near-infrared light with the Wide Field Camera 3. A grouping of hefty, young stars, called the Trapezium Cluster, is at the center of the image. Several hundred stars are sprinkled throughout the image. Many of them appear red because their light is being scattered by dust.
The box just above the Trapezium Cluster outlines the location of the three stars. A Hubble close-up view of the stars is shown at top right. The birthplace of the multi-star system is marked "initial position." Two of the stars — labeled BN, for Becklin-Neugebauer, and "I," for source I — were discovered decades ago. Source I is embedded in thick dust and cannot be seen. The third star, "x," for source x, was recently discovered to have moved noticeably between 1998 and 2015, as shown in the inset image at bottom right. Source x is traveling at an unusually high speed of 209 200 km/h (130 000 mph), which is 30 times faster than the velocity of most stars in the nebula.
Astronomers found the speedy source x by comparing observations taken in 1998 by the Near Infrared Camera and Multi-Object Spectrometer with those taken in 2015 by the Wide Field Camera 3. Hubble's discovery of the high velocity of source x has helped astronomers solve the long-standing mystery of how the stars BN and source I acquired their fast motions. Credit: NASA, ESA, K. Luhman (Penn State University), and M. Robberto (STScI)
The observations showed that the two stars were traveling at high speeds in opposite directions from each other. The stars' origin, however, was a mystery. Astronomers traced both stars back 540 years to the same location and suggested they were part of a now-defunct multiple-star system. But the duo's combined energy, which is propelling them outward, didn't add up. The researchers reasoned there must be at least one other culprit that robbed energy from the stellar toss-up.
Now NASA's Hubble Space Telescope has helped astronomers find the final piece of the puzzle by nabbing a third runaway star. The astronomers followed the path of the newly found star back to the same location where the two previously known stars were located 540 years ago. The trio resides in a small region of young stars called the Kleinmann-Low Nebula, near the center of the vast Orion Nebula complex, located 1 300 light-years away.
"The new Hubble observations provide very strong evidence that the three stars were ejected from a multiple-star system," said lead researcher Kevin Luhman of Penn State University in University Park, Pennsylvania. "Astronomers had previously found a few other examples of fast-moving stars that trace back to multiple-star systems, and therefore were likely ejected. But these three stars are the youngest examples of such ejected stars. They're probably only a few hundred thousand years old. In fact, based on infrared images, the stars are still young enough to have disks of material leftover from their formation."
This three-frame illustration shows how a grouping of stars can break apart, flinging the members into space.
The first panel shows four members of a multiple-star system orbiting each other. In the second panel, two of the stars move closer together in their orbits. In the third panel, the closely orbiting stars eventually either merge or form a tight binary, an event that releases enough gravitational energy to propel all of the stars in the system outward. Credit: NASA, ESA, and Z. Levy (STScI)
All three stars are moving extremely fast on their way out of the Kleinmann-Low Nebula, up to almost 30 times the speed of most of the nebula's stellar inhabitants. Based on computer simulations, astronomers predicted that these gravitational tugs-of-war should occur in young clusters, where newborn stars are crowded together. "But we haven't observed many examples, especially in very young clusters," Luhman said. "The Orion Nebula could be surrounded by additional fledgling stars that were ejected from it in the past and are now streaming away into space."
This video reveals the motion of a newly discovered runaway star in the Orion Nebula.
The images in the two frames were taken 17 years apart by NASA's Hubble Space Telescope. The first image was taken in 1998 by Hubble's Near Infrared Camera and Multi-Object Spectrometer; the second, in 2015 by the Wide Field Camera 3. The bright object at the bottom right of both frames is a foreground star.
The speedy star, called "source x," is moving at roughly 209 200 km/h (130 000 mph). The star was a member of a multiple-star system that was propelled from its birthplace 500 years ago. The system members engaged in a gravitational tussle that resulted in the breakup of the grouping. Source x and at least two other stars were ejected in different directions.
The team's results will appear in the March 20, 2017 issue of The Astrophysical Journal Letters.
Luhman stumbled across the third speedy star, called "source x," while he was hunting for free-floating planets in the Orion Nebula as a member of an international team led by Massimo Robberto of the Space Telescope Science Institute in Baltimore, Maryland. The team used the near-infrared vision of Hubble's Wide Field Camera 3 to conduct the survey. During the analysis, Luhman was comparing the new infrared images taken in 2015 with infrared observations taken in 1998 by the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). He noticed that source x had changed its position considerably, relative to nearby stars over the 17 years between Hubble images, indicating the star was moving fast, about 209 200 km/h (130 000 mph).
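The quoted speed follows from simple geometry: the transverse velocity is v = d × θ / t, where θ is the angular shift between the two epochs and d is the distance to the nebula. A back-of-the-envelope sketch in Python (the 0.53-arcsecond shift is an assumed input chosen to roughly reproduce the quoted speed, not a value from the study):

import math

LY_KM = 9.4607e12                 # kilometers per light-year
distance_km = 1300 * LY_KM        # distance to the Orion Nebula
shift_arcsec = 0.53               # assumed angular shift between 1998 and 2015
baseline_s = 17 * 365.25 * 86400  # 17-year baseline in seconds

theta_rad = math.radians(shift_arcsec / 3600)
v_km_s = distance_km * theta_rad / baseline_s
print(f"{v_km_s:.0f} km/s, or about {v_km_s * 3600:,.0f} km/h")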
The astronomer then looked at the star's previous locations, projecting its path back in time. He realized that in the 1470s source x had been near the same initial location in the Kleinmann-Low Nebula as two other runaway stars, Becklin-Neugebauer (BN) and "source I."
This video starts with a ground-based image of the night sky, taken by Akira Fujii, zooms on the star formation region of the Orion Nebula — observed by Martin Kornmesser — and ends with a detailed view of the nebula as seen by Hubble. Video courtesy HubbleESA
BN was discovered in infrared images in 1967, but its rapid motion wasn't detected until 1995, when radio observations measured the star's speed at roughly 96 560 km/h (60 000 mph). Source I is traveling roughly 35 405 km/h (22 000 mph). The star had only been detected in radio observations; because it is so heavily enshrouded in dust, its visible and infrared light is largely blocked.
The three stars were most likely kicked out of their home when they engaged in a game of gravitational billiards, Luhman said. What often happens when a multiple system falls apart is that two of the member stars move close enough to each other that they merge or form a very tight binary. In either case, the event releases enough gravitational energy to propel all of the stars in the system outward. The energetic episode also produces a massive outflow of material, which is seen in the NICMOS images as fingers of matter streaming away from the location of the embedded source I star.
Scale and compass image for Orion Nebula. Credit: NASA, ESA, and STScI
Future telescopes, such as the James Webb Space Telescope, will be able to observe a large swath of the Orion Nebula. By comparing images of the nebula taken by the Webb telescope with those made by Hubble years earlier, astronomers hope to identify more runaway stars from other multiple-star systems that broke apart.
Featured image: Orion Nebula. Credit: NASA, ESA, M. Robberto (Space Telescope Science Institute/ESA) and the Hubble Space Telescope Orion Treasury Project Team
Positive peer relationships are associated with academic achievement, sense of connection to school and overall health and well-being (Juvonen, Espinoza & Knifsend, 2012; Ladd, Kochenderfer & Coleman, 1997; Li, Doyle Lynch, Liu & Lerner, 2011).
There are many ways to foster positive peer relationships in the classroom through various strategies, routines and learning experiences. Here, we share three main pathways (explore the sections below for practical strategies and further information).
In addition to us, as educators, getting to know our students, a foundation of a healthy classroom climate is for students to truly get to know each other as well. This requires building a safe space in the classroom, where students can show up, authentically, with all aspects of themselves - and feel seen and welcome. In order for students to feel safe to be themselves, it is important to consider and incorporate trauma-informed practices, transformative SEL, and culturally responsive / antiracist approaches in the classroom. For more information about these approaches, see links at the bottom of the page.
There are several strategies and practices that can help students to get to know each other better - while also providing opportunities to practice empathy and perspective-taking.
Some activities for building community within the classroom:
- SEL Signature Practices: Utilizing Welcoming/Inclusion Activities, Brain Breaks, and Optimistic Closures are an effective way to build community among peers in the classroom.
- Two-minute talks: Students write down (on slips of paper) a question they would like to discuss as a group (e.g., would you rather...). Set aside two minutes at the beginning or end of class for student-led discussions around these questions, twice a week. Create a schedule for students to take turns being the 'facilitator' (or co-facilitators, for those less comfortable). The key is to allow students to lead the discussion; you can stand to the side and observe the conversations unfold.
- Class playlist
- Mix and Mingle
- Gab and Go
Cooperative learning is one way to foster positive peer relationships in your classroom and contribute to a caring learning environment. For more than 35 years, Johnson and Johnson (1978, 2009) have explored how cooperative learning structures enhance how students establish and maintain positive peer relationships, the way students feel about themselves, their teacher and school, and how students learn. Cooperative learning is a strengths-based approach to designing learning experiences and recognizes that each individual’s unique contribution benefits the whole group. In cooperative learning structures, small groups of students work together to meet a shared goal which requires positive interdependence among members. Through cooperative learning, students are implicitly taught that the success of the group is dependent on shared contribution and effort, and that, although everyone is different and brings unique skills, each person is valued and contributes in some way to the success of the group (Johnson & Johnson, 2012). In such a strengths-based and collaborative environment, prosocial acts become common, expected behaviours. Below, we outline three cooperative learning structures that can be applied to different learning experiences and across disciplines.
The Fishbowl is a cooperative learning structure that allows students to observe their peers while they work together to solve a problem or complete a task. A group of students forms a small circle (usually around a work area) and the remaining students form an outer circle, standing so they can observe the students in the center. The students in the small, inner circle are given a task to complete or a topic to discuss. The outer circle observes and listens to the students as they work, taking notes about what they notice and wonder about. The students in the outer circle do this without speaking. Once the task is complete, the teacher can facilitate a discussion about what was learned through the process (and often, the groups change and another observation cycle happens).
The Jigsaw structure is well suited for a group project or class inquiry. Divide the class into equal groups. These groups become the “home” group where students focus on one particular aspect or topic of a project. The students in the home group, explore and learn together and plan how to share their learning with peers from other home groups. After students are familiar with their topic/content, they move into mixed groups (where there is a representative from each home group). Each person now shares (or teaches) the others what they learned in their home group. Essentially, each student from a home group (puzzle piece) fits together with others from different homegroups to work together to form a whole (puzzle).
Inside Outside Circles
Inside Outside Circle is a cooperative learning structure that can be used to generate and share ideas, synthesize learning or collaboratively solve problems. It is a quick way for students to access the knowledge of several peers in a short amount of time. Divide the class into two even groups. One group stands in a circle facing outward, the other group forms a circle around the other group facing inward (concentric circles). Each student should be facing a partner. A prompt (question, quote, etc.) is shared and the pairs are invited to respond for a given time. When done, the outside circle rotates and is paired with a new partner. The new dyad responds to another prompt. (To learn more about cooperative learning and other cooperative structures, visit: www.co-operative.org or www.kaganonline.com )
Circles are an Intentional Space to:
- Support participants in bringing forward their ‘core self’, helping them conduct themselves based on the values that represent who they are when they are at their best
- Make visible our interconnectedness even in the face of very serious differences
- Recognize and access the gifts of every participant
- Elicit individual and collective wisdom
- Engage participants in all aspects of the human experience—mental, physical, emotional and spiritual or meaning-making
- Practice value-based behaviour when it might feel risky to do so. The more people practice this behaviour in circle, the more these habits are strengthened to carry the behaviour into other parts of their lives
How is circle different from other group processes?
- The talking piece regulates the dialogue by determining who speaks and when. This dramatically reduces the responsibility of the facilitator for managing the flow of the discussion.
- Because participants collectively create the guidelines, the guidelines are owned by the circle members. This, too, reduces the role of the facilitator as the enforcer of the guidelines.
- The facilitator participates as another member of the circle, sharing experiences and perspectives from his/her own life when the talking piece comes to him or her.
- Participants are not judged by the quality or content of their participation in circle.
- Circles do not try to direct participants toward a pre-determined outcome.
Getting started with Circles:
Remember to start small. Time is always a factor in a classroom setting, so do not plan too much to start. Start with one circle a week, and set aside enough time to give students a chance to practice being together in this way. Start with fun name games, circle games and get to know you questions as a way of making it a safe space that is inviting and comfortable.
Physically arrange participants in a circle, whether by sitting in chairs/desks that face inward or standing in a circle so that all participants can view one another. A circle may be as short as a Check-In Question and a Check-Out Question to build community and communication skills or as long as an entire class period to discuss content and curriculum.
Effective questions are framed to:
- Encourage participants to speak from their own lived experiences
- Invite participants to share stories from their lives
- Focus on feelings and impacts rather than facts
- Help participants transition from discussing difficult or painful events to discussing what can be done now to make things better (restorative circle)
- A Suggested Guide to Circles (download the Word Doc/formatted text version): Guide to Circles_Rural Ed Modules.pdf
When sequencing the genome of an organism, many short segments of DNA are created using technology such as Illumina's sequencing platform. This makes the process of sequencing fast and cheap. However, a challenging problem is that there is no information on how to put the many short segments of DNA back together into an assembled whole. Because of this, the reconstruction of a genome is often done using computer algorithms and prediction programs. Once completed, these finished genomes are put into a database and used as reference genomes for future sequencing projects. The two sources below highlight the importance of reference genomes to genomics:
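To make the reassembly problem concrete, here is a toy sketch in Python that greedily merges reads by their longest exact overlap. It is vastly simplified compared with real assemblers, which build contigs and scaffolds from millions of reads and must handle errors and repeats:

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

reads = ["GTACGT", "ACGTAC", "TACGTT"]
contig = reads.pop(1)  # start from an arbitrary read
while reads:
    best = max(reads, key=lambda r: overlap(contig, r))
    contig += best[overlap(contig, best):]
    reads.remove(best)
print(contig)  # ACGTACGTT, reconstructed from the three toy reads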
Piecing together the best reference genome
“When researchers create a reference genome, DNA from the organism of interest is first sequenced in short pieces. A big challenge when making a genome assembly is piecing these genome sequence fragments back together correctly. Once reassembled, you’re left with a reference genome. This can be used to answer fundamental questions about biology, disease, and biodiversity.”
A reference standard for genome biology
“Reference genomes are the cornerstone of modern genomics. These high-quality genomes are differentiated from draft genomes by their completeness (low number of gaps), low number of errors, and high percentage of sequence assembled into chromosomes.”
“Sequencing today is largely dominated by Illumina’s high-throughput, short-read (∼100–200 base pairs) technology, which has made the process of decoding genomes much faster and cheaper. But this speed comes with a cost. The millions of short, overlapping reads generated by these instruments represent a complex puzzle that must be pieced together in the correct order and orientation. Computational algorithms are needed to assemble reads into continuous segments of DNA sequence (known as contigs) and to subsequently order and orient these contigs into chains (known as scaffolds), which often contain gaps. To improve scaffolding, additional long-range information is needed from technologies like Hi-C (which chemically cross-links neighboring chromatin domains), optical mapping (which visualizes fluorescent probes bound to single immobilized DNA molecules) and linked reads (sets of barcoded short reads from the same DNA molecule). Even these approaches still fall short when attempting to decipher intractable genomic features such as repetitive DNA, G+C-rich sequence or structural rearrangements that span distances much longer than a short read.”
“Beyond these practical concerns, there is no definitive method to verify the correctness of the finished product. For some species, even simple information like the number of expected chromosomes is unknown. In most cases, researchers perform several checks to evaluate the quality of a final assembly. The assembly size can be compared with existing genome size estimates for that organism or can be estimated using statistical approaches. Algorithms can be applied to identify all the sequences of set length (called k-mers) in an Illumina library that are likely to be real, and then to work out what fraction of those k-mers are recovered in the new assembly. The final assembly can also be inspected for ‘core’ genes—a set of genes common in species related to the sequenced species.
In the absence of a perfect measure of genome correctness, the definition of a high-quality genome continues to evolve, together with advances in sequencing and assembly.”
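The k-mer completeness check described above can be sketched in a few lines (a toy illustration; production pipelines use dedicated k-mer tools such as Merqury and account for sequencing errors):

def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

reads = ["ACGTAC", "GTACGT", "TACGTT"]
assembly = "ACGTACGTT"
k = 4

read_kmers = set().union(*(kmers(r, k) for r in reads))
recovered = read_kmers & kmers(assembly, k)
print(f"{len(recovered)}/{len(read_kmers)} read k-mers found in the assembly")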
While it is clear that reference genomes are an extremely important part of the fabric of modern genomics, the last of the above sources points out two glaring challenges related to their use:
- There is no definitive method to verify the correctness of the finished product
- The definition of a high-quality genome continues to evolve
If there is no method to verify the correctness of a finished reference genome and the definition for what constitutes a “high-quality” genome is constantly evolving, how can one be sure that any genome built from a reference genome is ever accurate? In fact, how could any genome ever assembled be considered accurate? In the case of “SARS-COV-2,” the genome was created using multiple reference genomes. One of them was RaTG13, a bat “coronavirus:”
A pneumonia outbreak associated with a new coronavirus of probable bat origin
“Full-length genome sequences were obtained from five patients at an early stage of the outbreak. The sequences are almost identical and share 79.6% sequence identity to SARS-CoV. Furthermore, we show that 2019-nCoV is 96% identical at the whole-genome level to a bat coronavirus.”
“Full-length genome sequences of SARS-CoV BJ01, bat SARSr-CoV WIV1, bat coronavirus RaTG13 and ZC45 were used as reference sequences.”
“We then found that a short region of RNA-dependent RNA polymerase (RdRp) from a bat coronavirus (BatCoV RaTG13)—which was previously detected in Rhinolophus affinis from Yunnan province—showed high sequence identity to 2019-nCoV. We carried out full-length sequencing on this RNA sample (GISAID accession number EPI_ISL_402131). Simplot analysis showed that 2019-nCoV was highly similar throughout the genome to RaTG13 (Fig. 1c), with an overall genome sequence identity of 96.2%.”
So what exactly is RaTG13?
“Bat coronavirus RaTG13 is a SARS-like betacoronavirus that infects the horseshoe bat Rhinolophus affinis. It was discovered in 2013 in bat droppings from a mining cave near the town of Tongguan in Mojiang county in Yunnan, China. As of 2020, it is the closest known relative of SARS-CoV-2, the virus that causes COVID-19.”
This bat “Coronavirus” is the closest known relative of “SARS-COV-2.” But what happens if it turns out that RaTG13 doesn’t exist? What if it is an inaccurate and faulty reference genome? What would that mean for “SARS-COV-2,” a “virus” which has a 96.2% identity match to RaTG13 and used it as a reference genome? This next source lays out many of the problems with the RaTG13 genome:
Scientists claim serious data discrepancies in RaTG13 sequence
“A new preprint* published in September 2020 by molecular biologists at the All India Institute of Medical Sciences, New Delhi, and the Indraprastha Institute of Information Technology Delhi discusses the current issues with the bat coronavirus (CoV) strain that is often considered to have very close homology with the above-mentioned virus, concluding that there are inadequate grounds to consider it to be the ancestral pool of SARS-CoV-2.
Many scientists mention the genome sequence of this bat CoVs, RaTG13, as being part of the ancestral descent of the current virus. A recent paper in the journal Nature also mentions its 96.2% homology with SARS-CoV-2, considering it to be a fossil record of a strain whose current existence is doubtful, but which may have been the original pool from which the current virus developed.
The scientists assembled the viral genome from scratch, performed a metagenomic analysis, and looked at data quality. They concluded that the RaTG13 genome had serious issues and all data related to it required a full review.”
“Full experimental details backed up the published genomic sequence of SARS-CoV-2, but not so that of the RaTG13. This is documented by several papers that have shown up the holes in the dataset underlying the published genome of the bat virus.”
“The dataset that has been published in support of the RaTG13 genome, almost 30 kb long, has been found inadequate to reproduce the sequence or the experimental observations based on this dataset. While the dataset is unique and contains much information beyond the fragmented coronavirus sequence, not much is known about how it was generated.”
De novo RaTG13 Assembly Not Possible
“The researchers found that using the available data, they were unable to detect any contiguous sequences larger than 17 kb, using several different settings. Several matching sequences were found, but none over a fifth of the length of the reported sequence. A gap spanning 111 positions was found, and it is unclear on what basis this was filled in the published sequence.
The researchers also uncovered proof that DNA contamination is likely to have occurred. For instance, the largest contig contains genetic material with 98% similarity to the full-length mitochondrial sequence of the Chinese rufous horseshoe bat (Rhinolophus sinicus), an unlikely event since a complete assembly of such a sequence is typically interrupted by stop codons.
Secondly, non-adapter-related repetitive sequences were found in most reads, often at the same end of the read, comprising one G-quadruplex sequence and its reverse complement. This is unlikely to happen on the same end of an RNA sample since only one strand is dominant. The researchers say more information about how the experiments were carried out is crucial to rule out the possibility of gross RNA sample contamination by DNA.
Poor Data Quality
The researchers also calculated that the average coverage is 9.73, indicating a low value. This may be why only partial segments of the RaTG13 sequence are assembled. The coverage is only 2 or less for about 3,000 bases, which could markedly impair the accuracy. They draw attention to multiple ambiguous bases in the first end that could prevent de novo assembly, and to many unreliable second end reads as well.
Experimental Procedural Concerns
The significantly large differences in the bacterial content of the two referenced datasets are surprising, say the researchers, since both purport to be from similar sites, fecal and oral samples. One has only 0.65% bacteria, and ~68% Eukaryota, with the rest being unidentified. The other is ~91% bacteria and ~4% Eukaryota. This concern has been raised before.
Again, 0.1% of the first dataset is similar to plant genomes like rice and maize, which is unexpected from bat samples from creatures like the intermediate horseshoe bat Rhinolophus affinis. The researchers attribute this to contamination by possible index hopping because of evidence that the same platform has been used to sequence maize earlier. Multiplex sequencing of maize and the CoV genome of interest could lead to such contamination.
Again, the dataset also contains material identical to that of the Malayan pangolin Manis javanica, a totally different order. This again could be due to index hopping of some fragments for the same reason. This could have misdirected the discussion on the origin of the novel CoV, as some have reported that pangolin CoV genomic sequences also have close homology with that of the former.
Thus, the inference could also be that contamination accounts for the presence of various portions of the RaTG13 in the dataset, accounting for 0.0008% of the total.”
“The second run also has sequences resembling another virus accession number, apart from its own accession number. This dataset is supposed to have a separate lane, and index hopping may be supposed not to have occurred here, but cross-contamination still seems to have occurred. The researchers note that this “raises a distinct possibility that sample from previous runs might not have been guarded against either index hopping or cross-contamination.”
This could explain the discrepancies in the earlier dataset. Furthermore, some sequences seem to have been derived from retroviruses such as the greater horseshoe bat Rhinolophus ferrumequinum, but a whole virus could not be assembled.
While most work on the origins of SARS-CoV-2 has focused on the human CoV sequence, the current study shows that equal importance must be given to the other half of the equation, namely, RaTG13, in order to justify giving it a role in the narrative. Secondly, discussions may instead be withheld, while the precise details of the methods used to generate the RaTG13 are awaited. And thirdly, this genome should not be used in further studies until its scientific reliability is established in entirety, by independent researchers with access to the full dataset and methods used for its generation.”
Obviously, the results from these researchers carry some very heavy implications for “SARS-COV-2” as the error-filled and unreliable genome for RaTG13 was used as a reference genome during the creation of “SARS-COV-2.” They are almost genetic twins with a 96.2% sequence identity match. If the reference genome is faulty, how accurate could the genome based on it truly be? These next few sources provide some help in finding the answer:
Can I Get a Reference?
“Perhaps one of the biggest drawbacks is the need for a reference genome for comparison with your sequence. If you don’t have one of these to compare your results to, you have no real way of determining what is normal and what is unique about your sample. Good luck identifying an insertion mutation without an unaltered genome to compare to! While de novo sequencing for when a reference is not available is possible, it can lead to more errors since you have nothing to compare to.”
Standards for Sequencing Viral Genomes in the Era of High-Throughput Sequencing
“To alleviate any reliance on particular aspects of the different sequencing technologies, we have made two assumptions that should be valid in most viral sequencing projects. The first assumption is a basic understanding of the genomic structure of the virus being sequenced, including the expected size of the genome, the number of segments, and the number and distribution of major open reading frames (ORFs). Fortunately, genome structure is highly conserved within viral groups (6), and although new viruses are constantly being uncovered, the discovery of a novel family or even genus remains relatively uncommon (7). In the absence of such information, the defined standards can still be applied following further analysis to determine genome structure. The second assumption is that the genetic material of the virus being described can be accurately separated from the genomes of the host and/or other microbes, either physically or bioinformatically. Depending on the technology used, it is critical that the potential for cross-contamination of samples during the sample indexing/bar coding process and sequencing procedure be addressed with appropriate internal controls and procedural methods (8).”
“Additionally, with the current suite of primarily sequence similarity-based pathogen identification tools, the ability to detect novel pathogens is wholly dependent on high-quality reference databases (22). There is a trend toward requiring a complete genome sequence when a description of a novel virus is being published, and we agree that this is a good goal;”
From these two sources, we can see that without an accurate reference genome, there is no way to tell what is normal or unique about a sample, and that multiple errors are more likely to occur without a reference to compare to. This means that the ability to detect novel “viruses” is entirely dependent on having a high-quality reference database with accurate genomes. Without this, they would not know what they have. Yet we already know from previous sources that they have neither a definitive nor a perfect method for determining the correctness of a reference genome, and that what constitutes a “high-quality” genome constantly changes.
Interestingly, the last article offered two assumptions they make when sequencing any “virus.” The first is that they always have a basic understanding of the genome of the “virus.” This is quite the assumption to make. They are claiming that all “viral” genomes are conserved and that it is rare to find new “virus” families. However, without knowing what novel “viruses” are out there as well as what their genetic makeup would be, how could this possibly be true? It has been estimated that the human body is made up of some 380 trillion “viruses,” most of which are undiscovered and have unknown genomes. How would they be able to know that the genetic material they are piecing together is not coming from one or more of these trillions of undiscovered families?
The second assumption is that the “virus” can be separated completely from the host and/or bacterial genetic material. In other words, they assume that the “virus” is free of off-target genetic material and that whatever is sequenced is completely “viral” in nature. If this has always been the assumption since the first “viral” genome was sequenced, how was it ever confirmed that any material was completely “viral” to begin with? For that assumption to hold, researchers would have needed complete knowledge of every human and bacterial sequence from the time the very first “viral” genome was created, in order to accurately subtract the human and bacterial DNA/RNA and determine what is uniquely “viral.” This is obviously not the case, as to date there is still no completed human genome, and an estimated 38 trillion bacteria are said to be living inside of us. As with the “virome,” our biome is made up of mostly undiscovered bacteria with unknown genomes. Without purification/isolation, there would always be off-target genetic material in the sample. The WHO has admitted that even with purification, “non-viral” genetic material will be sequenced. This means that they are assuming that human and/or bacterial genetic material is “viral” when in fact it is not.
Putting it all together, the “SARS-COV-2” genome is only as good as its reference genomes. However, the reference genomes are only as good as the reference genomes used to create them. If any or all of the references are faulty, the whole chain of genomes built from them is faulty as well. Judging from the breakdown of RaTG13, the closest relative of “SARS-COV-2,” there are numerous errors within the genome such as:
- The dataset that has been published in support of the RaTG13 genome, almost 30 kb long, has been found inadequate to reproduce the sequence or the experimental observations based on this dataset
- De Novo assembly was not possible
- The researchers uncovered proof that DNA contamination is likely to have occurred
- There was poor data quality with multiple ambiguous bases in the first end that could prevent de novo assembly, and many unreliable second end reads as well
- There were numerous experimental procedural concerns, especially practices such as index hopping that led to maize and pangolin DNA contaminating the dataset
- The researchers concluded that this genome should not be used in further studies until its scientific reliability is established in entirety
In the beginning of this chain of references, there would have had to have been purified/isolated “virus” particles, free of any off-target genetic material, used to create a completely accurate reference genome. It is clear that this was not the case for RaTG13, the closest genetic relative of “SARS-COV-2.” RaTG13 is an error-filled, contaminated mess with mysterious origins. If RaTG13 is the faulty reference genome it appears to be, then that immediately places the “SARS-COV-2” genome under the same umbrella. If the reference genome is not accurate, there is no way the genome created by using it can be accurate either.
Each year, NASA’s Chandra X-ray Observatory helps celebrate American Archive Month by releasing a collection of images using X-ray data in its archive.
The Chandra Data Archive is a sophisticated digital system that ultimately contains all of the data obtained by the telescope since its launch into space in 1999. Chandra’s archive is a resource that makes these data available to the scientific community and the general public for years after they were originally obtained.
Each of these six new images also includes data from telescopes covering other parts of the electromagnetic spectrum, such as visible and infrared light. This collection of images represents just a small fraction of the treasures that reside in Chandra’s unique X-ray archive.
From left to right, starting on the top row, the objects are:
Westerlund 2: A cluster of young stars – about one to two million years old – located about 20,000 light years from Earth. Data in visible light from the Hubble Space Telescope (green and blue) reveal thick clouds where the stars are forming. High-energy radiation in the form of X-rays, however, can penetrate this cosmic haze, and is detected by Chandra (purple).
3C31: X-rays from the radio galaxy 3C31 (blue), located 240 million light years from Earth, allow astronomers to probe the density, temperature, and pressure of this galaxy, long known to be a powerful emitter of radio waves. The Chandra data also reveal a jet blasting away from one side of the central galaxy, which also is known as NGC 383. Here, the Chandra X-ray image has been combined with Hubble’s visible light data (yellow).
PSR J1509-5850: Pulsars were first discovered in 1967 and today astronomers know of over a thousand such objects. The pulsar, PSR J1509-5850, located about 12,000 light years from Earth and appearing as the bright white spot in the center of this image, has generated a long tail of X-ray emission trailing behind it, as seen in the lower part of the image. This pulsar has also generated an outflow of particles in approximately the opposite direction. In this image, X-rays detected by Chandra (blue) and radio emission (pink) have been overlaid on a visible light image from the Digitized Sky Survey of the field of view.
Abell 665: Merging galaxy clusters can generate enormous shock waves, similar to cold fronts in weather on Earth. This system, known as Abell 665, has an extremely powerful shockwave, second only to the famous Bullet Cluster. Here, X-rays from Chandra (blue) show hot gas in the cluster. The bow wave shape of the shock is shown by the large white region near the center of the image. The Chandra image has been added to radio emission (purple) and visible light data from the Sloan Digital Sky Survey showing galaxies and stars (white).
RX J0603.3+4214: The phenomenon of pareidolia is when people see familiar shapes in images. This galaxy cluster has invoked the nickname of the “Toothbrush Cluster” because of its resemblance to the dental tool. In fact, the stem of the brush is due to radio waves (green) while the diffuse emission where the toothpaste would go is produced by X-rays observed by Chandra (purple). Visible light data from the Subaru telescope show galaxies and stars (white) and a map from gravitational lensing (blue) shows the concentration of the mass, which is mostly (about 80%) dark matter.
CTB 37A: Astronomers estimate that a supernova explosion should occur about every 50 years on average in the Milky Way galaxy. The object known as CTB 37A is a supernova remnant located in our Galaxy about 20,000 light years from Earth. This image shows that the debris field glowing in X-rays (blue) and radio waves (pink) may be expanding into a cooler cloud of gas and dust seen in infrared light (orange).
NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations.
Source: NASA press release, via NASA's Chandra X-ray Observatory.
Ahead of an October 20 attempt to bring extraterrestrial rocks from an asteroid called Bennu to Earth, NASA’s OSIRIS-REx mission has delivered new insights into its chemistry and geology.
Bennu, currently over 321 million kilometers from Earth, was chosen for study because it’s a carbonaceous chondritic rock—rich in organics, and thought to have formed in the early, oxygen-rich days of the solar system. Understanding Bennu’s physical composition, and how it was carved into its 500-meter-long shape, can help us understand how asteroids were formed back then, and what the solar system was like in its infancy.
In just a few weeks, OSIRIS-REx will attempt an audacious maneuver to collect a sample of rubble and small rock from Bennu’s surface and bring it to Earth for scientists to study. Since December 2018, the spacecraft has been orbiting Bennu from roughly a kilometer away and studying it with a slew of instruments. The sample collection, however, is the mission’s marquee event.
Perhaps as a prelude to this attempt, researchers just published a number of new studies about the geochemistry of Bennu today in the journals Science and Science Advances, providing some of the biggest revelations to date. Here are the most compelling.
Bennu’s watery history
In the first Science study, scientists used high-resolution images taken by OSIRIS-REx, as well as spectroscopy (which involves analyzing electromagnetic waves emitted from Bennu to determine its chemistry), to better understand the composition and history of the asteroid’s Nightingale crater region, where the sample will be collected.
They found that boulders in this area showed bright veins, narrow in width but about a meter in length, similar to what’s found in other carbonaceous chondritic meteorites that have landed on Earth. In those cases, the veins indicate that the rock had once interacted with flowing water.
So naturally, for Bennu, “the veins suggest that water flowed through this asteroid very early in the solar system’s history,” says Hannah Kaplan, a planetary scientist with NASA’s Goddard Space Flight Center in Maryland and the lead author of the study. From the size of the veins, the researchers estimate that there was “a system of fluid flow that extended kilometers in size” back when Bennu was part of a much larger parent body. These water flows could have lasted for up to millions of years. Similar phenomena likely occurred on many other carbonaceous chondritic asteroids as well.
Carbon, carbon everywhere
Another Science study used infrared spectroscopy to demonstrate how widespread carbon-bearing minerals and hydrated clay minerals were across Bennu’s surface. According to Amy Simon, a planetary scientist at NASA’s Goddard Space Flight Center and the lead author of this study, these minerals are found all over Bennu (though they are particularly concentrated in specific boulders). This is very good news, since it means “we should find both [materials] in our returned samples,” she says.
Scientists think that Bennu formed from the rubble of a collision its parent body experienced in the main asteroid belt of our solar system. The remnants that came together as Bennu soon migrated out to an orbit closer to Earth. According to Simon, this process may be one way that small asteroid bodies delivered organics and hydrated minerals to the inner solar system, where they later became part of planets like Earth.
Rare rocks abound
One study published in Science Advances used infrared cameras to investigate the boulders and rocks that make up Bennu’s rubble-pile structure. The findings reveal that two types of rocks are common on Bennu, but one type is much more porous and brittle than rocks found on Earth, the moon, or Mars. “It is likely that we don’t have similar specimens in meteorite collections on Earth, because Bennu’s rocks are likely too weak to survive atmospheric entry,” says Ben Rozitis, a researcher at the Open University in the UK and the lead author of this study. “It is likely that OSIRIS-REx will bring back asteroid samples not previously studied by scientists in the laboratory.”
Weathering the elements
Things in space can weather down just as they do on Earth—only out there, the main forces to reckon with are solar winds and granular matter like micrometeorites. Daniella DellaGiustina, a research scientist with the University of Arizona, led a study in Science that looked at signs of this weathering on Bennu.
As it turns out, weathering is a strange process on Bennu. While most other asteroids and the moon darken (or redden) as they are weathered, Bennu actually brightens (or gets bluer). “It tells us that something about Bennu’s surface is quite different from other planetary objects we’ve observed,” says DellaGiustina. The darker the surface on Bennu, the better preserved that area should be. It just so happens Nightingale is one of the darkest areas of Bennu, which means it might be an undisturbed record of some of the most ancient activity in the solar system.
Weak gravity game
Another study in Science Advances focused on characterizing Bennu’s weak gravitational field by observing the motion of OSIRIS-REx as it orbited the asteroid, as well as the behavior of pebble-size grains of debris ejected from its surface. The measurements suggest that the asteroid’s rubble pile is unevenly distributed along its surface and is especially light at the asteroid’s equator. These data are consistent with models that suggest Bennu had a period of rapid rotation at some point in its history (a hypothesis supported by another Science Advances study, looking at the hemispherical asymmetry of Bennu).
“Even though the current measurements do not definitively solve all of our questions on how rubble-pile asteroids evolve, they do significantly narrow the range of options and will provide more focus on our future investigations, both theoretical and in situ,” says D.J. Scheeres, an aerospace engineer at the University of Colorado, Boulder, and the lead author of the study.
Scheeres adds that the study also validates a novel research technique for assessing a small body’s gravitational field by studying the particles it ejects. Future missions to other asteroids can now build on this method, and try to make it faster and more accurate.
For discrete variables, the probability of each value is straightforward and can be calculated easily. But for continuous variables, which can take on infinitely many values, the probability of any single exact value is vanishingly small, so probability must instead be described over ranges of values. The function that describes probability for such variables is called a probability density function in statistics.
What Is the Probability Density Function?
A function that defines the relationship between a random variable and its probability, such that you can find the probability of the variable using the function, is called a Probability Density Function (PDF) in statistics.
To understand where a PDF applies, first consider the different types of variables. They are mainly of two types:
- Discrete Variable: A variable that can take on only a finite set of values within a specific range is called a discrete variable. Its values are separated by finite intervals, e.g., the sum of two dice. On rolling two dice and adding up the resulting outcome, the result can only belong to a fixed set of numbers not exceeding 12 (as the maximum result of a single die is 6). The values are also exact and definite.
- Continuous Variable: A continuous random variable can take on infinitely many different values within a range, e.g., the amount of rainfall occurring in a month. The rain observed can be 1.7 cm, but the exact value is not known. It can, in actuality, be 1.701, 1.7687, etc. As such, you can only define the range of values it falls into; within this range, it can take on infinitely many different values.
Now, consider a continuous random variable x with a probability density function f(x), which gives the probability density at each value of x. After plotting the PDF, you get a graph as shown below:
Figure 1: Probability Density Function
In the above graph, you get a bell-shaped curve after plotting the function against the variable. The blue curve shows this. Now consider the cumulative probability up to a point b. To find it, you need to find the area under the curve to the left of b. This is represented by P(b). To find the probability of the variable falling between points a and b, you need to find the area under the curve between a and b, which is the difference of the two cumulative areas:
P(a <= X <= b) = P(b) - P(a).
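As a concrete illustration (a hedged sketch using SciPy's standard normal distribution, not part of the original figure), this difference of cumulative areas can be computed directly:

```python
from scipy.stats import norm

# P(a <= X <= b) for a standard normal X is the area under the
# bell curve between a and b: the CDF at b minus the CDF at a.
a, b = -1.0, 1.0
probability = norm.cdf(b) - norm.cdf(a)
print(round(probability, 3))  # about 0.683
```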
Consider the graph below, which shows the rainfall distribution in a year in a city. The x-axis has the rainfall in inches, and the y-axis has the probability density function. The probability of some amount of rainfall is obtained by finding the area of the curve on the left of it.
Figure 2: Probability Density Function of the amount of rainfall
For the cumulative probability of up to 3 inches of rainfall, you plot a line that intersects the y-axis at the same point on the graph as a line extending from 3 on the x-axis does. This tells you that the probability of rainfall being less than or equal to 3 inches is 0.5.
How to Find the Probability Density Function in Statistics?
Below are the three main steps:
- Summarizing the density with a histogram: You first convert the data into discrete form by plotting it as a histogram. A histogram is a graph with value ranges (bins) along the x-axis and bars whose heights give the count of values falling in each bin. The number of bins is crucial, as it determines how many bars the histogram has and how wide they are, which in turn affects how your density is portrayed.
- Performing Parametric density estimation: A PDF can take on a shape similar to many standard functions. The shape of the histogram will help you determine which type of function it is. You can then calculate the parameters associated with that function to get your density. To check whether the fitted function is a good match for your histogram, you can:
- Plot the density function and compare histogram shape
- Compare samples of the function with actual samples
- Use a statistical test
- Performing Non-Parametric Density Estimation: In cases where the shape of the histogram doesn't match a common probability density function, or cannot be made to fit one, you estimate the density using all the samples in the data and applying certain algorithms. One such algorithm is Kernel Density Estimation. It uses a mathematical function to calculate and smooth probabilities so that the total probability integrates to 1. To do this, you need the following parameters:
- Smoothing Parameter (bandwidth): Controls the number of samples used to estimate the probability of a new point.
- Basis Function: Helps to control the distribution of samples.
How to Implement the Probability Density Function in Python?
You will see how to find the probability density function of a random sample with the help of Python. You start by importing the necessary modules, which will help you plot the histogram and find the distribution.
Figure 3: Importing necessary modules
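Since the original code is shown only as screenshots, here is a hedged guess at the imports such a tutorial typically needs; the exact modules in Figure 3 may differ.

```python
import numpy as np               # numerical arrays
import matplotlib.pyplot as plt  # plotting histograms and curves
from numpy.random import normal  # drawing normal random samples
from scipy.stats import norm     # the normal distribution object
```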
1. Plotting a Histogram
Now generate a random sample that has a probability density function resembling a bell-shaped curve. This type of probability distribution is called a Normal Distribution.
Figure 4: Plotting a histogram
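A minimal sketch of what the code in Figure 4 plausibly does, continuing from the imports above (the sample size and bin count are assumptions):

```python
# Draw 1000 values from a standard normal distribution and
# summarize them with a 10-bin histogram.
sample = normal(size=1000)
plt.hist(sample, bins=10)
plt.show()
```

Changing bins=10 to bins=4 would reproduce the coarser histogram discussed below.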
Using the pyplot library, you plotted the distribution as a histogram. As you can see, the shape of the histogram resembles a bell curve.
Figure 5: Histogram
While plotting a histogram, it is important to plot it using the right number of bins. In the above diagram, you used 10 bins. See what happens if you use 4 bins.
Figure 6: Histogram with 4 bins
As you can see, this histogram doesn’t resemble a bell shape as much as the one with 10 bins. This can make it hard to recognize the type of distribution.
2. Performing Parametric Density Estimation
Now, see how to perform parametric density estimation. First, generate 1000 samples from a normal distribution with a mean of 50 and a standard deviation of 5.
Figure 7: Generating Samples
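Continuing the sketch, the sample in Figure 7 could be generated like this (the parameter names follow NumPy's normal()):

```python
# Draw 1000 samples from a normal distribution with mean 50
# and standard deviation 5.
sample = normal(loc=50, scale=5, size=1000)
```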
To perform parametric estimation, assume that you don't know the distribution of these samples. The first thing that you need to do with the sample is to assume a distribution for it. Let’s assume a normal distribution. The parameters associated with normal distribution are mean and standard deviation. Calculate the mean and standard deviation for the samples.
Figure 8: Calculating mean and standard deviation
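A plausible version of the calculation in Figure 8:

```python
# Estimate the distribution parameters from the sample itself.
sample_mean = sample.mean()
sample_std = sample.std()
print(sample_mean, sample_std)  # roughly 50 and 5
```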
Now, define a normal distribution with the above mean and standard deviation.
Figure 9: Normal distribution
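A one-line sketch of the step in Figure 9, using the scipy.stats.norm object imported earlier:

```python
# Define a candidate normal distribution from the estimated parameters.
dist = norm(sample_mean, sample_std)
```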
Now, find the probability distribution for the distribution defined above.
Figure 10: Probability distribution for normal distribution
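The step in Figure 10 plausibly evaluates the fitted PDF over a grid of values:

```python
# Evaluate the PDF of the fitted distribution across the sample range.
values = np.linspace(sample.min(), sample.max(), 100)
probabilities = dist.pdf(values)
```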
Now, plot the distribution you’ve defined on top of the sample data.
Figure 11: Plotting distribution on samples
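And a sketch of the overlay in Figure 11; density=True normalizes the histogram so it is on the same scale as the PDF:

```python
# Overlay the fitted PDF on a density-normalized histogram of the
# sample; the two should line up if the fit is good.
plt.hist(sample, bins=10, density=True)
plt.plot(values, probabilities, color="black")
plt.show()
```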
As you can see, the distribution you assumed is almost a perfect fit for the samples. This means that the sample follows a normal distribution. If it were not a good fit, you would have to assume the sample follows some other distribution and repeat the process.
3. Performing Non-Parametric Density Estimation
It’s time to perform non-parametric estimations now. You start by importing some modules needed for it.
Figure 12: Importing necessary modules
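The module in Figure 12 is likely scikit-learn's kernel density estimator, though that is an assumption:

```python
# scikit-learn's kernel density estimator.
from sklearn.neighbors import KernelDensity
```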
To perform non-parametric estimations, you create two normal samples and join them together to get a sample that does not fit any known common distribution.
Figure 13: Creating a sample
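A hedged sketch of the sample construction in Figure 13; the means and sizes here are assumptions chosen to produce a bimodal shape:

```python
# Join two normal samples into one bimodal sample that matches
# no single common distribution.
sample1 = normal(loc=20, scale=5, size=300)
sample2 = normal(loc=40, scale=5, size=700)
sample = np.hstack((sample1, sample2))
```

Plotting this with plt.hist(sample, bins=50) would reveal the two peaks shown in Figure 14.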
Now, plot the distribution to see what it looks like.
Figure 14: Plotting the distribution
Now, use kernel density estimation to get a model, which you can then fit to your sample to create a probability distribution curve.
Figure 15: Creating a Kernel Density Estimation Function
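A minimal sketch of the fitting step in Figure 15; the bandwidth of 2 is an assumption and can be tuned:

```python
# Fit the KDE model; scikit-learn expects a 2-D array of shape
# (n_samples, n_features), hence the reshape.
model = KernelDensity(bandwidth=2, kernel="gaussian")
model.fit(sample.reshape(-1, 1))
```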
You will now find the probability distribution for your kernel density estimation function.
Figure 16: Creating a Kernel Density Estimation Function
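The step in Figure 16 plausibly evaluates the fitted model over a grid; note that scikit-learn returns log densities:

```python
# score_samples() returns log densities, so exponentiate to get
# the estimated probability densities over a grid of values.
values = np.linspace(sample.min(), sample.max(), 100).reshape(-1, 1)
probabilities = np.exp(model.score_samples(values))
```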
Finally, plot the function on top of your samples.
Figure 17: Plotting distribution on samples
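And a sketch of the final overlay in Figure 17:

```python
# Overlay the KDE curve on a density-normalized histogram of the
# bimodal sample.
plt.hist(sample, bins=50, density=True)
plt.plot(values[:, 0], probabilities, color="black")
plt.show()
```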
You can see that the estimations of the kernel density estimation fit the samples pretty well. To further fine-tune the fit, you can change the bandwidth of the function.
In this tutorial on ‘Everything You Need to Know About the Probability Density Function’, you learned what a probability density function in statistics is. You then looked at how to find the probability density function in statistics and in Python.
If you are keen on learning about Probability density function and related statistical concepts, you could explore a career in data analytics. Simplilearn’s Post Graduate Program in Data Analytics is one of the most comprehensive online programs out there for this. If you need any further clarifications or want to learn more about statistics and normal distribution, share your queries with us by mentioning them in this page's comments section. We will have our experts review them at the earliest. You can also understand the concept of the probability density function and other statistical concepts by checking out this video on our YouTube channel.
The politics of Canada function within a framework of parliamentary democracy and a federal system of parliamentary government with strong democratic traditions. Canada is a constitutional monarchy, in which the monarch is head of state. In practice, the executive powers are directed by the Cabinet, a committee of ministers of the Crown responsible to the elected House of Commons of Canada and chosen and headed by the Prime Minister of Canada.
Canada is described as a "full democracy", with a tradition of liberalism, and an egalitarian, moderate political ideology. Far-right and far-left politics have never been a prominent force in Canadian society. Peace, order, and good government, alongside an implied bill of rights, are founding principles of the Canadian government. An emphasis on social justice has been a distinguishing element of Canada's political culture. Canada has placed emphasis on equality and inclusiveness for all its people.
The country has a multi-party system in which many of its legislative practices derive from the unwritten conventions of and precedents set by the Westminster parliament of the United Kingdom. The two dominant political parties in Canada have historically been the Liberal Party of Canada and the Conservative Party of Canada (or its predecessors). Smaller parties like the New Democratic Party, the Quebec nationalist Bloc Québécois and the Green Party of Canada have also been able to exert their own influence over the political process.
Canada has evolved variations: party discipline in Canada is stronger than in the United Kingdom, and more parliamentary votes are considered motions of confidence, which tends to diminish the role of non-Cabinet members of parliament (MPs). Such members, in the government caucus, and junior or lower-profile members of opposition caucuses, are known as backbenchers. Backbenchers can, however, exert their influence by sitting in parliamentary committees, like the Public Accounts Committee or the National Defence Committee.
Canada's governmental structure was originally established by the British Parliament through the British North America Act (now known as the Constitution Act, 1867), but the federal model and division of powers were devised by Canadian politicians. Particularly after World War I, citizens of the self-governing Dominions, such as Canada, began to develop a strong sense of identity, and, in the Balfour Declaration of 1926, the British government expressed its intent to grant full autonomy to these regions.
Thus in 1931, the British Parliament passed the Statute of Westminster, giving legal recognition to the autonomy of Canada and other Dominions. Following this, Canadian politicians were unable to obtain consensus on a process for amending the constitution until 1982, meaning amendments to Canada's constitution continued to require the approval of the British parliament until that date. Similarly, the Judicial Committee of the Privy Council in Britain continued to make the final decision on criminal appeals until 1933 and on civil appeals until 1949.
Canada's egalitarian approach to governance has emphasized social welfare, economic freedom, and multiculturalism, which is based on selective economic migration, social integration, and the suppression of far-right politics, and which has wide public and political support. Its broad range of constituent nationalities and policies that promote a "just society" are constitutionally protected. Individual rights, equality and inclusiveness (social equality) have risen to the forefront of political and legal importance for most Canadians, as demonstrated through support for the Charter of Rights and Freedoms, a relatively free economy, and social liberal attitudes toward women's rights (like pregnancy termination), homosexuality, euthanasia or cannabis use. There is also a sense of collective responsibility in Canadian political culture, as is demonstrated in general support for universal health care, multiculturalism, gun control, foreign aid, and other social programs.
At the federal level, Canada has been dominated by two relatively centrist parties practicing "brokerage politics[a]", the centre-left Liberal Party of Canada and the centre-right Conservative Party of Canada. The historically predominant Liberals position themselves at the centre of the political scale with the Conservatives sitting on the right and the New Democratic Party occupying the left. Five parties had representatives elected to the federal parliament in the 2019 election: the Liberal Party who currently form the government, the Conservative Party who are the Official Opposition, the New Democratic Party, the Bloc Québécois, and the Green Party of Canada.
Currently, the Senate, which is frequently described as providing "regional" representation, has 105 members appointed by the Governor-General on the advice of the Prime Minister to serve until age 75. It was created with equal representation from each of Ontario, Quebec, the Maritime region and the Western Provinces. However, it is currently the product of various specific exceptions, additions and compromises, meaning that regional equality is not observed, nor is representation-by-population. The normal number of senators can be exceeded by the monarch on the advice of the Prime Minister, as long as the additional senators are distributed equally with regard to region (up to a total of eight additional Senators). This power of additional appointment has only been used once, when Prime Minister Brian Mulroney petitioned Queen Elizabeth II to add eight seats to the Senate so as to ensure the passage of the Goods and Services Tax legislation.
The House of Commons currently has 338 members elected in single-member districts in a plurality voting system (first past the post), meaning that members must attain only a plurality (the most votes of any candidate) rather than a majority (50 percent plus one). The electoral districts are also known as ridings.
Mandates cannot exceed five years; an election must occur by the end of this time. This fixed mandate has been exceeded only once, when Prime Minister Robert Borden perceived the need to do so during World War I. The size of the House and apportionment of seats to each province is revised after every census, conducted every five years, and is based on population changes and approximately on representation-by-population.
Canadians vote for their local Member of Parliament (MP) only. An MP need not be a member of any political party: such MPs are known as independents. When a number of MPs share political opinions they may form a body known as a political party.
The Canada Elections Act defines a political party as "an organization one of whose fundamental purposes is to participate in public affairs by endorsing one or more of its members as candidates and supporting their election." Forming and registering a federal political party are two different things. There is no legislation regulating the formation of federal political parties. Elections Canada cannot dictate how a federal political party should be formed or how its legal, internal and financial structures should be established.
Parties elect their leaders in run-off elections to ensure that the winner receives more than 50% of the votes. Normally the party leader stands as a candidate to be an MP during an election. Canada's parliamentary system empowers political parties and their party leaders. Where one party gets a majority of the seats in the House of Commons, that party is said to have a "majority government." Through party discipline, the party leader, who is elected in only one riding, exercises a great deal of control over the cabinet and the parliament.
Historically, the prime minister and senators are selected by the governor general as a representative of the Queen, though in modern practice the monarch's duties are ceremonial. Consequently, the prime minister, while technically selected by the governor general, is for all practical purposes selected by the party with the majority of seats. That is, the party that gets the most seats normally forms the government, with that party's leader becoming prime minister. The prime minister is not directly elected by the general population, although the prime minister is almost always directly elected as an MP within his or her constituency. Senators, likewise, while technically selected at the pleasure of the monarch, are ceremonially appointed by the governor general on the advice (and for most practical purposes the authority) of the prime minister.
A minority government situation occurs when the party that holds the most seats in the House of Commons holds fewer seats than the opposition parties combined. In this scenario usually the party leader whose party has the most seats in the House is selected by the governor general to lead the government, however, to create stability, the leader chosen must have the support of the majority of the House, meaning they need the support of at least one other party.
In Canada, the provinces are considered co-sovereign; sovereignty of the provinces is passed on, not by the Governor General or the Canadian parliament, but through the Crown itself. This means that the Crown is "divided" into 11 legal jurisdictions, or 11 "Crowns": one federal (the Crown in right of Canada) and ten provincial (for example, the Crown in right of British Columbia).
Federal-provincial (or intergovernmental, formerly Dominion-provincial) relations is a regular issue in Canadian politics: Quebec wishes to preserve and strengthen its distinctive nature, western provinces desire more control over their abundant natural resources, especially energy reserves; industrialized Central Canada is concerned with its manufacturing base, and the Atlantic provinces strive to escape from being less affluent than the rest of the country.
In order to ensure that social programs such as health care and education are funded consistently throughout Canada, the "have-not" (poorer) provinces receive a proportionately greater share of federal "transfer (equalization) payments" than the richer, or "have", provinces do; this has been somewhat controversial. The richer provinces often favour freezing transfer payments, or rebalancing the system in their favour, based on the claim that they already pay more in taxes than they receive in federal government services, and the poorer provinces often favour an increase on the basis that the amount of money they receive is not sufficient for their existing needs.
Particularly in the past decade, some scholars have argued that the federal government's exercise of its unlimited constitutional spending power has contributed to strained federal-provincial relations. This power, which allows the federal government to spend the revenue it raises in any way that it pleases, allows it to overstep the constitutional division of powers by creating programs that encroach on areas of provincial jurisdiction. The federal spending power is not expressly set out in the Constitution Act, 1867; however, in the words of the Court of Appeal for Ontario the power "can be inferred" from s. 91(1A), "the public debt and property".
A prime example of an exercise of the spending power is the Canada Health Act, which is a conditional grant of money to the provinces. Regulation of health services is, under the Constitution, a provincial responsibility. However, by making the funding available to the provinces under the Canada Health Act contingent upon delivery of services according to federal standards, the federal government has the ability to influence health care delivery. This spending power, coupled with Supreme Court rulings, such as Reference re Canada Assistance Plan (B.C.), which have held that funding delivered under the spending power can be reduced unilaterally at any time, has contributed to strained federal-provincial relations.
Except for three short-lived transitional or minority governments, prime ministers from Quebec led Canada continuously from 1968 to early 2006. Québécois led both Liberal and Progressive Conservative governments in this period.
Monarchs, governors general, and prime ministers are now expected to be at least functional, if not fluent, in both English and French. In selecting leaders, political parties give preference to candidates who are fluently bilingual.
Also, by law, three of the nine positions on the Supreme Court of Canada must be held by judges from Quebec. This representation makes sure that at least three judges have sufficient experience with the civil law system to treat cases involving Quebec laws.
Canada has a long and storied history of secessionist movements (see Secessionist movements of Canada). National unity has been a major issue in Canada since the forced union of Upper and Lower Canada in 1840.
The predominant and lingering issue concerning Canadian national unity has been the ongoing conflict between the French-speaking majority in Quebec and the English-speaking majority in the rest of Canada. Quebec's continued demands for recognition of its "distinct society" through special political status has led to attempts for constitutional reform, most notably with the failed attempts to amend the constitution through the Meech Lake Accord and the Charlottetown Accord (the latter of which was rejected through a national referendum).
Since the Quiet Revolution, sovereigntist sentiments in Quebec have been variably stoked by the patriation of the Canadian constitution in 1982 (without Quebec's consent) and by the failed attempts at constitutional reform. Two provincial referenda, in 1980 and 1995, rejected proposals for sovereignty with majorities of 60% and 50.6% respectively. Given the narrow federalist victory in 1995, a reference was made by the Chrétien government to the Supreme Court of Canada in 1998 regarding the legality of unilateral provincial secession. The court decided that a unilateral declaration of secession would be unconstitutional. This resulted in the passage of the Clarity Act in 2000.
The Bloc Québécois, a sovereigntist party which runs candidates exclusively in Quebec, was started by a group of MPs who left the Progressive Conservative (PC) party (along with several disaffected Liberal MPs), and first put forward candidates in the 1993 federal election. With the collapse of the PCs in that election, the Bloc and Liberals were seen as the only two viable parties in Quebec. Thus, prior to the 2006 election, any gain by one party came at the expense of the other, regardless of whether national unity was really at issue. The Bloc, then, benefited (with a significant increase in seat total) from the impressions of corruption that surrounded the Liberal Party in the lead-up to the 2004 election. However, the newly unified Conservative party re-emerged as a viable party in Quebec by winning 10 seats in the 2006 election. In the 2011 election, the New Democratic Party succeeded in winning 59 of Quebec's 75 seats, successfully reducing the number of seats of every other party substantially. The NDP surge nearly destroyed the Bloc, reducing them to 4 seats, far below the minimum requirement of 12 seats for Official party status.
Newfoundland and Labrador also raises concerns regarding national unity. As the Dominion of Newfoundland was a self-governing country equal to Canada until 1949, there are large, though unco-ordinated, feelings of Newfoundland nationalism and anti-Canadian sentiment among much of the population. This is due in part to the perception of chronic federal mismanagement of the fisheries, forced resettlement away from isolated settlements in the 1960s, the government of Quebec still drawing inaccurate political maps whereby it claims parts of Labrador, and the perception that mainland Canadians look down upon Newfoundlanders. In 2004, the Newfoundland and Labrador First Party contested the provincial election, and in 2008 it ran candidates in federal ridings within the province. In 2004, then-premier Danny Williams ordered all federal flags removed from government buildings as a result of lost offshore revenues to equalization clawbacks. On December 23, 2004, premier Williams made this statement to reporters in St. John's:
They basically slighted us, they are not treating us as a proper partner in Confederation. It's intolerable and it's insufferable and these flags will be taken down indefinitely. It's also quite apparent to me that we were dragged to Manitoba in order to punish us, quite frankly, to try to embarrass us, to bring us out there to get no deal and send us back with our tail between our legs.
Western alienation is another national-unity-related concept that enters into Canadian politics. Residents of the four western provinces, particularly Alberta, have often been unhappy with a lack of influence and a perceived lack of understanding when residents of Central Canada consider "national" issues. While this is seen to play itself out through many avenues (media, commerce, and so on.), in politics, it has given rise to a number of political parties whose base constituency is in western Canada. These include the United Farmers of Alberta, who first won federal seats in 1917, the Progressives (1921), the Social Credit Party (1935), the Co-operative Commonwealth Federation (1935), the Reconstruction Party (1935), New Democracy (1940) and most recently the Reform Party (1989).
The Reform Party's slogan "The West Wants In" was echoed by commentators when, after a successful merger with the PCs, the successor party to both parties, the Conservative Party won the 2006 election. Led by Stephen Harper, who is an MP from Alberta, the electoral victory was said to have made "The West IS In" a reality. However, regardless of specific electoral successes or failures, the concept of western alienation continues to be important in Canadian politics, particularly on a provincial level, where opposing the federal government is a common tactic for provincial politicians. For example, in 2001, a group of prominent Albertans produced the Alberta Agenda, urging Alberta to take steps to make full use of its constitutional powers, much as Quebec has done.
Canada is considered by most sources to be a very stable democracy. In 2006, The Economist ranked Canada the third-most democratic nation in its Democracy Index, ahead of all other nations in the Americas and ahead of every nation more populous than itself. In 2008, Canada was ranked World No. 11 and again ahead of all countries more populous and ahead of other states in the Americas. (In 2008, the United States was ranked World No. 18, Uruguay World No. 23, and Costa Rica World No. 27.)
The Liberal Party of Canada, under the leadership of Paul Martin, won a minority victory in the June 2004 general elections. In December 2003, Martin had succeeded fellow Liberal Jean Chrétien, who had, in 2000, become the first prime minister to lead three consecutive majority governments since 1945. However, in 2004 the Liberals lost seats in Parliament, going from 172 of 301 parliamentary seats to 135 of 308, and from 40.9% to 36.7% in the popular vote. The Canadian Alliance, which did well in western Canada in the 2000 election but was unable to make significant inroads in the East, merged with the Progressive Conservative Party to form the Conservative Party of Canada in late 2003.
They proved to be moderately successful in the 2004 campaign, gaining seats from a combined Alliance-PC total of 78 in 2000 to 99 in 2004. However, the new Conservatives lost in popular vote, going from 37.7% in 2000 down to 29.6%. In 2006, the Conservatives, led by Stephen Harper, won a minority government with 124 seats. They improved their percentage from 2004, garnering 36.3% of the vote. During this election, the Conservatives also made major breakthroughs in Quebec. They gained 10 seats there, whereas in 2004 they had none.
At the 2011 federal election, the Conservatives won a majority government with 167 seats. For the first time, the NDP became the Official Opposition, with 102 seats; the Liberals finished in third place with 34 seats. This was the first election in which the Green Party won a seat, that of leader Elizabeth May; the Bloc won 4 seats, losing official party status.
The Liberal Party, after dominating Canadian politics since the 1920s, was in decline in early years of the 21st century. As Lang (2010) concluded, they lost their majority in Parliament in the 2004 election, were defeated in 2006, and in 2008 became little more than a "rump", falling to their lowest seat count in decades and a mere 26% of the popular vote. Furthermore, said Lang (a Liberal himself), its prospects "are as bleak as they have ever been." In the 2011 election, the Liberals suffered a crushing defeat, managing to secure only 18.9% of the vote share and only 34 seats. As a result, the Liberals lost their status as official opposition to the NDP.
In explaining those trends, Behiels (2010) synthesized major studies and reported that "a great many journalists, political advisors, and politicians argue that a new political party paradigm is emerging". She claimed they saw a new power configuration based on a right-wing political party capable of sharply changing the traditional role of the state (federal and provincial) in the twenty-first century. Behiels said that, unlike Brian Mulroney, who tried but failed to challenge the long-term dominance of the Liberals, Harper's attempt had proven to be more determined, systematic and successful.
Many commentators thought it signalled a major realignment. The Economist said, "the election represents the biggest realignment of Canadian politics since 1993." Lawrence Martin, commentator for the Globe and Mail, said, "Harper has completed a remarkable reconstruction of a Canadian political landscape that endured for more than a century. The realignment saw both old parties of the moderate middle, the Progressive Conservatives and the Liberals, either eliminated or marginalized." Maclean's said the election marked "an unprecedented realignment of Canadian politics" as "the Conservatives are now in a position to replace the Liberals as the natural governing party in Canada."
Despite the grim outlook and poor early poll numbers, when the 2015 election was held, the Liberals under Justin Trudeau made an unprecedented comeback, and the realignment proved only temporary. Gaining 148 seats, they won a majority government for the first time since 2000. The Toronto Star claimed the comeback was "headed straight for the history books" and that Harper's name would "forever be joined with that of his Liberal nemesis in Canada's electoral annals". Spencer McKay for the National Post suggested that "maybe we've witnessed a revival of Canada's 'natural governing party'".
Funding changes were made to ensure greater reliance on personal contributions. Personal donations to federal parties and campaigns benefit from tax credits, although the amount of tax relief depends on the amount given, and only people who pay income tax receive any benefit from this.
A good part of the reasoning behind the change in funding was that union or business funding should not be allowed to have as much impact on federal election funding as these are not contributions from citizens and are not evenly spread out between parties. They are still allowed to contribute to the election but only in a minor fashion. The new rules stated that a party had to receive 2% of the vote nationwide in order to receive the general federal funding for parties. Each vote garnered a certain dollar amount for a party (approximately $1.75) in future funding. For the initial disbursement, approximations were made based on previous elections. The NDP received more votes than expected (its national share of the vote went up) while the new Conservative Party of Canada received fewer votes than had been estimated and was asked to refund the difference. Quebec was the first province to implement a similar system of funding many years before the changes to funding of federal parties.
Federal funds are disbursed quarterly to parties, beginning at the start of 2005. For the moment, this disbursement delay leaves the NDP and the Green Party in a better position to fight an election, since they rely more on individual contributors than federal funds. The Green Party now receives federal funds, since it for the first time received a sufficient share of the vote in the 2004 election.
In 2007, news emerged of a funding loophole that "could cumulatively exceed the legal limit by more than $60,000," through anonymous recurrent donations of $200 to every riding of a party from corporations or unions. At the time, for each individual, the legal annual donation limit was $1,100 for each party, $1,100 combined total for each party's associations, and in an election year, an additional $1,100 combined total for each party's candidates. All three limits increase on 1 April every year based on the inflation rate.
Leaders' debates in Canada consist of two debates, one English and one French, both produced by a consortium of Canada's five major television broadcasters (CBC/SRC, CTV, Global and TVA) and usually consist of the leaders of all parties with representation in the House of Commons.
The highest court in Canada is the Supreme Court of Canada, the final court of appeal in the Canadian justice system. The court is composed of nine judges: eight Puisne Justices and the Chief Justice of Canada. Justices of the Supreme Court of Canada are appointed by the Governor-in-Council. The Supreme Court Act limits eligibility for appointment to persons who have been judges of a superior court, or members of the bar for ten or more years. By law, three of the nine positions on the Supreme Court of Canada must be held by judges of Quebec's superior courts or members of the Quebec bar.
The Canadian government operates the public service using departments, smaller agencies (for example, commissions, tribunals, and boards), and crown corporations. There are two types of departments: central agencies such as Finance, Privy Council Office, and Treasury Board Secretariat have an organizing and oversight role for the entire public service; line departments are departments that perform tasks in a specific area or field, such as the departments of Agriculture, Environment, or Defence.
Scholar Peter Aucoin, writing about the Canadian Westminster system, raised concerns in the early 2000s about the centralization of power; an increased number, role and influence of partisan-political staff; personal-politicization of appointments to the senior public service; and the assumption that the public service is promiscuously partisan for the government of the day.
In 1967, Canada established a point-based system to determine if immigrants should be eligible to enter the country, using meritorious qualities such as the applicant's ability to speak both French and English, their level of education, and other details that may be expected of someone raised in Canada. This system was considered ground-breaking at the time, since prior systems were slanted on the basis of ethnicity. However, many foreign nationals still found it challenging to secure work after emigrating, resulting in a higher unemployment rate amongst the immigrant population. After winning power at the 2006 federal election, the Conservative Party sought to curb this issue by placing weight on whether or not the applicant has a standing job offer in Canada. The change has been a source of some contention, as opponents argue that businesses use it to suppress wages, with corporate owners leveraging the knowledge that an immigrant must hold a job offer to successfully complete the immigration process.
| Party | Party leader | Candidates | Seats (2015) | Seats (at dissolution) | Seats (elected) | Seat change | % of seats | Votes | Vote change | % of popular vote | pp change | % in ridings contested |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bloc Québécois | Yves-François Blanchet | 78 | 10 | 10 | 32 | +220% | 9.47% | 1,387,030 | +565,886 | 7.63% | +2.96pp | 32.37% |
| New Democratic | Jagmeet Singh | 338 | 44 | 39 | 24 | −38.46% | 7.1% | 2,903,789 | −566,561 | 15.98% | −3.75pp | 15.98% |
| Independent and no affiliation | – | 125 | 0 | 8[d] | 1 | −87.5% | 0.3% | 75,625 | +26,009 | 0.42% | +0.14pp | 1.51% |
| Christian Heritage | Rod Taylor | 51 | 0 | 0 | 0 | – | – | 18,901 | +3,669 | 0.10% | +0.01pp | 0.70% |
| Veterans Coalition | Randy David Joy | 25 | N/A | 0 | 0 | – | – | 6,300 | * | 0.03% | * | 0.45% |
| Marxist-Leninist | Anna Di Carlo | 50 | 0 | 0 | 0 | – | – | 4,124 | −4,714 | 0.02% | −0.03pp | 0.15% |
| Animal Alliance | Liz White | 17 | 0 | 0 | 0 | – | – | 4,408 | +2,709 | 0.02% | +0.01pp | 0.44% |
| Pour l'Indépendance du Québec | Michel Blondin | 13 | N/A | 0 | 0 | – | – | 3,815 | * | 0.02% | * | 0.49% |
| Progressive Canadian | Joe Hueglin | 3 | 0 | 0 | 0 | – | – | 1,534 | −2,942 | 0.01% | −0.02pp | 0.83% |
| Canada's Fourth Front | Partap Dua | 7 | N/A | 0 | 0 | – | – | 682 | * | 0.00% | * | 0.20% |
| The United Party | Carlton Darby | 4 | N/A | 0 | 0 | – | – | 602 | * | 0.00% | * | 0.32% |
| National Citizens Alliance | Stephen J. Garvey | 4 | N/A | 0 | 0 | – | – | 510 | * | 0.00% | * | 0.27% |
| Stop Climate Change | Ken Ranney | 2 | N/A | 0 | 0 | – | – | 296 | * | 0.00% | * | 0.23% |
| Canadian Nationalist Party | Travis Patron | 3 | N/A | 0 | 0 | – | – | 281 | * | 0.00% | * | 0.20% |
| Co-operative Commonwealth Federation[e] | – | 1 | N/A | – | – | – | – | – | – | – | – | – |
| Blank and invalid votes (the count of rejected ballots is not reported in the preliminary results) | – | – | – | – | – | – | – | – | – | – | – | – |

Source: Elections Canada (Validated results)
most Canadian governments, especially at the federal level, have taken a moderate, centrist approach to decision making, seeking to balance growth, stability, and governmental efficiency and economy...
First Past the Post in Canada has favoured broadly-based, accommodative, centrist parties...
Two historically dominant political parties have avoided ideological appeals in favour of a flexible centrist style of politics that is often labelled brokerage politics
Canada's party system has long been described as a "brokerage system" in which the leading parties (Liberal and Conservative) follow strategies that appeal across major social cleavages in an effort to defuse potential tensions.
In this Python tutorial, you will learn how to find the factorial of a number using recursion, with the if, elif, and else statements and the different operators of the Python programming language.
How to Find Factorial of a Number Using Recursion?
Let's take a look at the source code. Here the value is given as input by the user, and the if, elif, and else statements together with the recursive function and assignment operators carry out the computation.
```python
# Factorial of a number using recursion
def recur_factorial(n):
    # Base case: stop recursing at 1
    if n == 1:
        return n
    # Recursive case: n! = n * (n-1)!
    else:
        return n * recur_factorial(n - 1)

num = int(input("Enter the integer: "))

if num < 0:
    print("\nSorry, factorial does not exist for negative numbers")
elif num == 0:
    print("\nThe factorial of 0 is 1")
else:
    print("\nThe factorial of", num, "is", recur_factorial(num))
```
Enter the integer: 6
The factorial of 6 is 720
- At the start, we use `def recur_factorial(n):`, where the `def` keyword is used to define a function and `recur_factorial` is the name used to call the function and obtain its value.
- We declare an `if` statement with the condition `n == 1:`; if it is satisfied, the value of n is returned using the `return` statement. If the condition is not satisfied, execution moves to the next step, which is the `else` statement, where the value obtained after evaluating the expression `n*recur_factorial(n-1)` is returned using the `return` statement.
- Here we give the user the option to enter a value. The input value is scanned using the `input` function with the prompt string `"Enter the integer: "` and stored in the variable `num`; we use the `int` function to declare the input value as an integer.
- In the STDIN section of the code editor, the input values are entered.
- We declare another `if` statement with the condition `num < 0`; if it is satisfied, the message `"\nSorry, factorial does not exist for negative numbers"` is printed. If the condition is not satisfied, execution moves to the next step.
- We declare the `elif` statement with the condition `num == 0`; if the condition is satisfied, `"\nThe factorial of 0 is 1"` is printed.
- If the above `elif` statement is not satisfied, execution moves to the next step, which is the `else` statement, where `"\nThe factorial of", num, "is", recur_factorial(num)` is printed; the function call `recur_factorial(num)` holds the final value of the recursive computation.
- Recursive functions are functions that call themselves. A recursive function is always made up of two parts: the base case and the recursive case. The base case is the condition that stops the recursion; the recursive case is the part where the function calls itself.
- The < (less than) operator is a comparison operator which returns True if the condition is satisfied and False if it is not.
- The == (equality) operator is a comparison operator which returns True if the two items are equal and False if they are not.
- The input() function allows a user to pass a value into a program; it returns a string value.
- The if and else statements evaluate whether an expression is true or false. If the condition is true, the "if" block is executed; otherwise, the "else" block is executed.
- The elif keyword is short for "else if"; it allows us to check multiple expressions in turn.
- The colon : at the end of the if, elif, and else statements tells Python that an indented block follows, which runs only when that branch is taken.
- The prompt string passed to the input function is enclosed in quotation marks and parentheses.
- The \n in the code indicates a new line at the start of the printed string.
- The strings to be displayed by the print statement are enclosed in double quotes.
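For comparison, here is a minimal sketch (our own addition, not part of the tutorial's code) of two non-recursive ways to get the same result: an explicit loop and the standard library's `math.factorial`.

```python
import math

def iter_factorial(n):
    """Iterative factorial: multiply 1 * 2 * ... * n."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(iter_factorial(6))   # 720
print(math.factorial(6))   # 720, built-in equivalent
```

Both avoid the recursion depth limit that the recursive version would hit for very large n.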
In this unit students find out the length of their pace when walking and running, and compare these with the paces of others
- Make estimates of lengths between approximately 50cm and 1.5m (lengths of strides)
- Measure lengths in metres and centimetres
- Convert between metres and centimetres
In this unit the students use tape measures to measure lengths in centimetres, in metres, and in other decimal parts of a metre. The metre is the base unit of length in the International System of Units (SI).
Longer units of length are created by collecting multiples of 1 metre. The most common unit is the kilometre. Kilo is the prefix for 1000 so 1 kilometre = 1000 metres.
Shorter units of length are created by partitioning 1 metre into equal parts. For example, centi is the prefix for one hundredth, and milli is the prefix for one thousandth. Therefore, 1/100 of 1 metre = 1 centimetre and 1/1000 of 1 metre = 1 millimetre.
The symbols for units of length reflect the basic unit, m for metre, and the prefix. For example, μm is a micrometre since micro is the prefix for 1 millionth.
The learning opportunities in this unit can be differentiated by providing or removing support to students, by varying the task requirements. Ways to support students include:
- providing measurement tools so students can develop the practical skills required
- physically acting out the iteration of stride lengths
- providing calculators to support difficult divisions and conversions
- promoting collaborative group work with sharing and justification of ideas
- explicitly modelling the relationships between units of length, e.g. Dividing a metre strip of paper into tenths of tenths to show centimetres
- linking division with decimals to division with whole numbers, e.g. “How many fives fit in 30?” so “How many lots of 0.5 metres fit in 3 metres?”
Tasks can be varied in many ways including:
- manipulating the accessibility of the distances being measured
- reducing recording required, through the use of tables and other templates
- encouraging flexibility in the way students choose to solve problems.
The contexts for this unit can be adapted to suit the interests and cultural backgrounds of your students. Using distances that are familiar to students is motivating and encourages them to try out ideas outside of class time. Sports fields, buildings, playgrounds, and marae are all of potential interest. Students will enjoy measuring the length of trips they commonly make, such as to a friend or relative's house, to their parent's workplace, or to a favourite recreation site. You might investigate how personal measures were used in the past. Māori used arms to measure the girth of tall trees for the purpose of making waka. Buildings were constructed using personal measures, like the stride or foot length of significant tipuna (ancestors) or chiefs.
- Tape measures, metre rulers, trundle wheels
- Access to Google Maps
- Pedometers (optional)
Finding the length of one’s stride.
- Ask students to estimate the length of their stride. Clarify that a stride is a ‘stretched out’ pace or step. Ask for suggestions as to how we could find the length of our normal walking stride.
- In pairs, have all the students measure the length of their walking stride in two ways:
- Method 1: Measure out ten metres. One student walks ten metres while the other counts the number of strides, expressing the last part-stride as a fraction or decimal, for example ‘about 14.5 strides’.
How will you work out the average length of one stride? (Divide 10 metres by the number of strides.)
My calculator says 10 ÷ 15 = 0.66… What does that mean? (0.66… metres, or 66.6… centimetres)
How will you record the measure of one stride? (67cm or as a decimal of 1 metre, e.g. 0.67m rounded)
Ask students to record their stride length in two ways; for example, ‘0.69m, 69 cm.’
- Method 2: Walk 10 strides, and measure and record the total distance.
Should we start/finish measuring at heel or toe of your foot? Why is consistency important?
Suppose I walked a total of 7 metres and 60 centimetres. How will I calculate the distance of my average stride? (Express the distance using one unit, e.g. 7.6m or 760cm then divide that distance by 10.)
Record the stride length in two ways, as metres and centimetres; for example, "My stride averaged 0.68 m or 68 cm."
These data could be entered in a table:
| Total distance for 10 strides | Average length of stride (m) | Average length of stride (cm) |
|---|---|---|
| 6.80 metres | 0.68 | 68 |
Comment: I estimated my stride would be about 80cm. It was shorter than I expected. My average stride is about 68-69 cm long. My measurements using both methods were within 1cm.
- Discuss the different results and the methods used.
Were your two results similar, or very different? Why?
Were your initial estimates quite accurate or not? Why?
- It is important to discuss sources of measurement error, such as going to the nearest mark, misreading the scale, converting between units, using a different measurement baseline (heel or toe), variation in walking style, etc.
Which method do you think is more reliable? Why?
How could we test out the accuracy of our average stride length?
Students might suggest measuring a larger distance and matching the result with expectations from the stride measure.
- In this session students test the reliability of their stride length estimate. They do this by predicting the number of strides needed to measure a known distance, measuring that distance by pacing, and comparing the predictions with the results.
- Establish some trusted point-to-point journeys within the school. Students might use trundle wheels, tape measures, or metre rulers to measure those lengths in metres. For example, the length of the rugby field equals 75 metres, or the longest wall of the hall equals 36 metres. Google Maps can zoom to a scale of 5 metres, so students might use that tool to find measures or confirm their own measurements.
- Ask: If your stride length is consistent, how many strides will it take you to measure these distances?
For each distance, create an estimate and record it in a table, as below. Allow the use of calculators.
| Distance | Measurement (in metres) | Predicted number of strides | Actual number of strides |
|---|---|---|---|
| Length of rugby field |  |  |  |
| Longest side of hall |  |  |  |
| Perimeter of junior playground |  |  |  |
- Do students use the correct operation to calculate the number of strides? (Note that the number of strides should be greater than the number of metres. Why?)
- Do students convert the distances and their strides to the same unit, metres or centimetres?
- Discuss the operations students use. Do they use division, e.g. 5700 ÷ 72 = 79.17?
Do they recognise that more strides than metres fit into the same space?
Do they interpret the answer in correct ways? (strides, not metres or centimetres)
- Send the students out in smaller teams to test their predictions. Ask them to fill in the actual number of strides for each distance.
- After completing the task, gather your class and discuss:
How accurate were your predictions?
What might cause inaccuracies? (Miscounting, variable stride length, etc.)
How reliable is your stride length for measuring distances?
Could you build a path using strides instead of metres as your unit?
- Ask: How many strides would you take walking from home to school?
How would you predict the number of strides?
You might go to Google Maps, locate a hypothetical address for “Manu from Room 67”. Get directions to the school and that will display the walking distance by shortest routes.
Suppose Manu lives 1.3 kilometres from school. His stride length is 60 centimetres. How many strides will he take?
- Let students use calculators. What is the correct operation to perform?
1300 ÷ 0.6   1300 × 0.6   1.3 ÷ 60   1.3 × 60   130 000 ÷ 60   130 000 × 60
Discuss the need to express both the total distance, 1.3 km, and the stride length, 60 cm, in the same unit. Since 1.3 km equals 1300 metres it must also equal 100 x 1300 = 130 000 cm.
You might need to support students to realise that "How many 60s fit into 130 000?" is a division problem, so 130 000 ÷ 60 = 2166.67 is the correct answer. Manu would take almost 2167 strides.
Let students calculate the number of strides they would take between home and school using Google Maps to measure the total distance, and their own calculation for stride length. Ask your students to record their calculations using correct units of measure.
In this session students consider the impact of varying the unit of length. They encounter inversely proportional situations in which increasing the length of the unit results in a decrease in the number of units that fit in a given space.
- Ask: Do taller students have longer strides?
What is your prediction? Why?
- Ask students to measure each other’s height, in pairs, and enter all heights and stride lengths on a class chart or spreadsheet. If you use a spreadsheet save the file in CSV format.
- How might we display the relationship between height and stride length?
You may need to suggest that a scatterplot is the best way to display these data, as both measures appear on a single graph. Import your CSV file into a graphing package such as CODAP or INZight. Use the package to produce a scatterplot.
- Ask: Is there a relationship between height and stride length?
How can we tell?
Students should notice that taller students tend to have longer stride lengths.
- Ask: Could you use this graph to predict the stride lengths of younger/shorter and older/taller students? How?
- Ask pairs of students to draw their own scatter graphs of heights and stride lengths for the students in your class. Ask them to write a comment explaining what the graph shows.
- Measure the length of the classroom, e.g. 10 metres. Choose two students whose stride lengths are tidy fractions of each other. For example, Mere’s stride length equals 60cm and Stefan’s equals 75cm.
Who will take the most strides to measure the length of our classroom?
Why will Mere take more strides?
Let's get Stefan to measure. How many strides do you expect him to take?
How would we calculate that? (1000 ÷ 75 = 13.33, so 13 and 1/3 strides)
Let Stefan act out his measurement by strides.
If we know Stefan takes a bit over 13 strides, how many strides do we expect Mere to take?
- Invite estimates. Some students will try to calculate using Mere's stride length, e.g. 1000 ÷ 60 = 16.67, so almost 17 strides. Other students might attempt to use inverse proportions intuitively. For example, Mere takes 5 strides to cover 3 metres while Stefan takes only 4. There are more than three lots of 3 metres in the length of the classroom, so Mere should take at least 3 more strides than Stefan (the actual answer is 5/4 of 13.33 = 16.67; the general relationship is set out after this list).
- Pair up the students and give them a point to point distance to measure in strides. Ask them to record the results and why one person took more strides than the other.
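The inverse-proportion relationship behind these estimates can be written out as a worked check (using the 10-metre classroom and the stride lengths above):

$$\frac{n_{\text{Mere}}}{n_{\text{Stefan}}} = \frac{\ell_{\text{Stefan}}}{\ell_{\text{Mere}}} = \frac{75}{60} = \frac{5}{4}, \qquad n_{\text{Mere}} = \frac{5}{4} \times 13.33 \approx 16.67$$

Shorter strides mean proportionally more of them fit into the same distance.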
Collect data about the stride lengths of other students, adults or animals, depending what resources are available. The following possibilities could provide rich tasks:
Mark out ten metres and ask one or two teachers, parents, or other available adults to walk the distance while the class first estimates, then counts strides, and calculates the length of their stride.
Consider stride length if students run rather than walk.
Students may be able to find the stride length of their pet cat or dog. (Warning: counting strides will not be easy!)
Videos from online sources will provide examples of athletes running 100 metres or other distances. Videos of athletes like Usain Bolt often include slowed-down footage that allows counting the number of strides. What is the athlete's average stride length? Is the stride length longest at the start, end, or middle of the race? (Over 100 m, top speed is reached in the middle of the race.) Do marathon runners have the same stride lengths as sprinters? Students are likely to be surprised at how far out their estimates of stride length are. Videos are also available of horses and other animals, like cheetahs, running over a set distance, of swimmers (count the number of strokes and calculate the length between strokes), and of rowers or kayakers.
Work out how many strides an athlete, horse, giant, etc. would take from your home to school.
Use stride lengths to measure speed. How many strides do you take in a minute? How long will it take you to walk home from school? How long would a cheetah take?
After a class discussion of results, students can write up a detailed account of what they have found out during the week and suggestions for further investigations. These could be used to direct further lines of inquiry or could be displayed to show other classes or parents.
Multiplexing is the generic term used to describe the operation of sending one or more analogue or digital signals over a common transmission line at different times or speeds and as such, the device we use to do just that is called a Multiplexer.
The multiplexer, shortened to “MUX” or “MPX”, is a combinational logic circuit designed to switch one of several input lines through to a single common output line by the application of a control signal. Multiplexers operate like very fast acting multiple position rotary switches connecting or controlling multiple input lines called “channels” one at a time to the output.
Multiplexers, or MUXs, can be either digital circuits made from high-speed logic gates used to switch digital or binary data, or they can be analogue types using transistors, MOSFETs, or relays to switch one of the voltage or current inputs through to a single output.
The most basic type of multiplexer device is that of a one-way rotary switch as shown.
The rotary switch, also called a wafer switch as each layer of the switch is known as a wafer, is a mechanical device whose input is selected by rotating a shaft. In other words, the rotary switch is a manual switch that you can use to select individual data or signal lines simply by turning its inputs “ON” or “OFF”. So how can we select each data input automatically using a digital device.
In digital electronics, multiplexers are also known as data selectors because they can "select" each input line. They are constructed from individual Analogue Switches encased in a single IC package, as opposed to the "mechanical" type of selector such as conventional switches and relays.
They are used as one method of reducing the number of logic gates required in a circuit design, or when a single data line or data bus is required to carry two or more different digital signals, for example by means of a single 8-channel multiplexer.
Generally, the selection of each input line in a multiplexer is controlled by an additional set of inputs called control lines. According to the binary condition of these control inputs, either "HIGH" or "LOW", the appropriate data input is connected directly to the output. Normally, a multiplexer has 2ⁿ data input lines and a number n of "control" inputs that corresponds with the number of data inputs.
Note that multiplexers are different in operation to Encoders. Encoders take the active one of multiple input lines and produce an output pattern that represents the binary coded (BCD) equivalent of that active input.
We can build a simple 2-line to 1-line (2-to-1) multiplexer from basic logic NAND gates as shown.
The input A of this simple 2-1 line multiplexer circuit constructed from standard NAND gates acts to control which input ( I0 or I1 ) gets passed to the output at Q.
From the truth table above, we can see that when the data select input, A is LOW at logic 0, input I1 passes its data through the NAND gate multiplexer circuit to the output, while input I0 is blocked. When the data select A is HIGH at logic 1, the reverse happens and now input I0 passes data to the output Q while input I1 is blocked.
So by the application of either a logic “0” or a logic “1” at A we can select the appropriate input, I0 or I1 with the circuit acting a bit like a single pole double throw (SPDT) switch.
As we only have one control line (A), we can only switch 2¹ = 2 inputs. In this simple example, the 2-input multiplexer connects one of two 1-bit sources to a common output, producing a 2-to-1-line multiplexer. We can confirm this in the following Boolean expression.
Q = A′·I0′·I1 + A′·I0·I1 + A·I0·I1′ + A·I0·I1
(where X′ denotes NOT X)
and for our 2-input multiplexer circuit above, this can be simplified to:
Q = A′·I1 + A·I0
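As a quick sanity check, here is a minimal Python sketch (our own illustration, not part of the original tutorial) that evaluates the simplified expression for all input combinations and confirms the data-select behaviour described above:

```python
from itertools import product

def mux2(a, i0, i1):
    # Q = NOT(A)·I1 + A·I0  (A low selects I1, A high selects I0)
    return (not a and i1) or (a and i0)

for a, i0, i1 in product([0, 1], repeat=3):
    q = int(mux2(a, i0, i1))
    assert q == (i0 if a else i1)  # the selected input passes to Q
    print(f"A={a} I0={i0} I1={i1} -> Q={q}")
```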
We can increase the number of data inputs to be selected further simply by following the same procedure, and larger multiplexer circuits can be implemented using smaller 2-to-1 multiplexers as their basic building blocks. For a 4-input multiplexer we would therefore require two data select lines, since 4 inputs correspond to 2² combinations of the control lines, giving a circuit with four inputs, I0, I1, I2, I3, and two data select lines A and B as shown.
The Boolean expression for this 4-to-1 multiplexer above, with inputs A to D and data select lines a, b, is given as:
Q = a′·b′·A + a·b′·B + a′·b·C + a·b·D
In this example at any one instant in time only ONE of the four analogue switches is closed, connecting only one of the input lines A to D to the single output at Q. As to which switch is closed depends upon the addressing input code on lines “a” and “b“, so for this example to select input B to the output at Q, the binary input address would need to be “a” = logic “1” and “b” = logic “0”.
Then we can show the selection of the data through the multiplexer as a function of the data select bits as shown.
Adding more control address lines (n) will allow the multiplexer to control more inputs as it can switch 2ⁿ inputs, but each control line configuration will connect only ONE input to the output.
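The general rule, n select lines addressing one of 2ⁿ inputs, can be sketched in software as follows (a hypothetical helper of our own, assuming the select bits form a plain binary address given most-significant bit first, which may differ from the select-line labelling used in any particular figure):

```python
def mux(select_bits, inputs):
    """Route one of 2**n inputs to the output, where n = len(select_bits)."""
    assert len(inputs) == 2 ** len(select_bits)
    index = 0
    for bit in select_bits:      # build the binary address, MSB first
        index = (index << 1) | bit
    return inputs[index]

# Example: a 4-to-1 multiplexer; address bits (1, 0) select inputs[2]
print(mux((1, 0), ["I0", "I1", "I2", "I3"]))  # -> "I2"
```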
Then the implementation of the Boolean expression above using individual logic gates would require the use of seven individual gates consisting of AND, OR and NOT gates as shown.
The symbol used in logic diagrams to identify a multiplexer is as follows.
Multiplexers are not limited to just switching a number of different input lines or channels to one common single output. There are also types that can switch their inputs to multiple outputs, in arrangements of 4-to-2, 8-to-3, or even 16-to-4, etc. configurations; an example of a simple dual-channel 4-input multiplexer (4-to-2) is given below:
Here in this example the 4 input channels are switched to 2 individual output lines but larger arrangements are also possible. This simple 4-to-2 configuration could be used for example, to switch audio signals for stereo pre-amplifiers or mixers.
As well as sending parallel data in a serial format down a single transmission line or connection, another possible use of multi-channel multiplexers is in digital audio applications as mixers or where the gain of an analogue amplifier can be controlled digitally, for example.
Here, the voltage gain of the inverting operational amplifier is dependent upon the ratio between the input resistor, Rin and its feedback resistor, Rf as determined in the Op-amp tutorials.
A single 4-channel (Quad) SPST switch configured as a 4-to-1 channel multiplexer is connected in series with the resistors to select any feedback resistor to vary the value of Rf. The combination of these resistors will determine the overall gain of the amplifier, (Av). Then the gain of the amplifier can be adjusted digitally by simply selecting the appropriate resistor combination.
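The underlying relationship is the standard gain formula for an inverting op-amp stage (as covered in the op-amp tutorials mentioned above):

$$A_v = -\frac{R_f}{R_{in}}$$

so switching a different feedback resistor into the circuit changes R_f and steps the gain in discrete, digitally selected increments.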
Digital multiplexers are sometimes also referred to as “Data Selectors” as they select the data to be sent to the output line and are commonly used in communications or high speed network switching circuits such as LAN´s and Ethernet applications.
Some multiplexer ICs have a single inverting buffer (NOT gate) connected to the output to give a positive logic output (logic "1", HIGH) on one terminal and a complementary negative logic output (logic "0", LOW) on another terminal.
It is possible to make simple multiplexer circuits from standard AND and OR gates as we have seen above, but commonly multiplexers/data selectors are available as standard IC packages such as the common TTL 74LS151 8-input to 1-line multiplexer or the TTL 74LS153 dual 4-input to 1-line multiplexer. Multiplexer circuits with a much higher number of inputs can be obtained by cascading together two or more smaller devices.
Then we can see that Multiplexers are switching circuits that just switch or route signals through themselves, and being a combinational circuit they are memoryless as there is no signal feedback path. The multiplexer is a very useful electronic circuit that has uses in many different applications such as signal routing, data communications and data bus control applications.
When used with a demultiplexer, parallel data can be transmitted in serial form via a single data link such as a fibre-optic cable or telephone line and converted back into parallel data once again. The advantage is that only one serial data line is required instead of multiple parallel data lines.
Multiplexers can also be used to switch analogue, digital, or video signals, with the switching current in analogue power circuits typically limited to 10 mA to 20 mA per channel in order to reduce heat dissipation.
In the next tutorial about combinational logic devices, we will look at the reverse of the multiplexer, called the demultiplexer, which takes a single input line and connects it to multiple output lines.
As artificial intelligence (AI) becomes more widely used to make decisions that affect our lives, making certain it is fair is a growing concern.
Algorithms can incorporate bias from several sources, from the people involved in different stages of their development to modelling choices that introduce or amplify unfairness. A machine learning system used by Amazon to pre-screen job applicants was found to display bias against women, for example, while an AI system used to analyze brain scans failed to perform equally well across people of different races.
"Fairness in AI is about ensuring that AI models don't discriminate when they're making decisions, particularly with respect to protected attributes like race, gender, or country of origin," says Nikola Konstantinov, a post-doctoral fellow at the ETH AI Center of ETH Zürich, in Switzerland.
Researchers typically use mathematical tools to measure the fairness of machine learning systems based on a specific definition of fairness. Individual fairness, for example, considers that a machine learning model should treat two similar individuals in a similar way. Two job applicants with comparable qualifications, but who may be of different genders, should therefore get similar results from a job screening tool; unfairness could be measured by counting how many times equally-experienced candidates receive different outcomes. However, fairness also can be measured on the population level, where group fairness looks at whether a system performs similarly across different groups, such as white people versus black people.
"If you were to now measure unfairness, one way is to check what proportion of all white applicants get offers, and also do this with black applicants then just compare them," says Konstantinov.
One of the problems with measuring fairness, however, is that the result is specifically tied to the definition of fairness used. According to mathematical proofs, it is impossible for a model to be fair with regards to several notions of fairness simultaneously. That's because as soon as you start accounting for problems to make a model fairer based on one definition, other problems will arise which will make it less fair according to a different notion of fairness. It is therefore important to carefully consider what definition of fairness is best-suited to a particular AI system, says Boris Ruf, a research data scientist at insurance giant AXA in Paris, France. "The goal of AI fairness is not to satisfy some sort of fairness, but rather to achieve the most appropriate and expected fairness objective."
Interpreting fairness measurements can also be a challenge. Such outcomes are typically a measure of the disparity between two groups or individuals, which would ideally be zero, and is therefore a result on a sliding scale rather than a concrete answer of whether it is fair or not. Fairness scores are often useful for machine learning researchers who are optimizing AI systems so they can improve their rating and make them as fair as possible. However, if a model is to be put to practical use, a more discrete measure of whether it is fair or not, either A or B, is often desirable. The challenge is therefore to determine what fairness threshold is acceptable and would correspond to a fair system. "The question of whether a model is unfair or not, as an A or B question, is essentially juridical," says Konstantinov. "Our measures may serve as a tool for defining certain requirements of AI models that are established by law or by agreed-upon good practices."
AI fairness researchers often adopt the 80% rule, which originates from labor law in the U.S., to check for the existence of disparate impact, says Ruf. It is a guideline used by companies stating that they should hire protected groups, such as ethnic minorities and women, at a rate that is at least 80% of that of white men. Applying this rule to AI systems means that any disparity measure between 0.8 and 1.25 would be considered fair.
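As a concrete illustration (a minimal sketch with made-up selection counts, not data from the article), the disparate-impact check compares the selection rates of two groups and tests the ratio against the 0.8 to 1.25 band:

```python
def disparate_impact(selected_a, total_a, selected_b, total_b):
    """Ratio of group A's selection rate to group B's selection rate."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical screening outcomes: 30 of 100 group-A applicants selected
# versus 40 of 100 group-B applicants.
ratio = disparate_impact(30, 100, 40, 100)
print(f"ratio = {ratio:.2f}")                          # ratio = 0.75
print("fair by the 80% rule:", 0.8 <= ratio <= 1.25)   # False
```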
Considering the context in which an AI system will be adopted, however, can help determine an appropriate fairness threshold. Stricter disparity measures could be advantageous when a machine learning model will be used to make high-risk decisions, such as in healthcare settings, whereas laxer standards could be acceptable for an AI system developed to recommend movies or songs. "The best choice always depends on the context of the application," says Ruf.
Measuring the fairness of an AI system using current methods, however, may not be enough to mitigate bias. When algorithms are being developed, unfairness can creep in at many different stages along the way and underlying issues can be obscured, since metrics are typically looking at an algorithm's performance. "It could be dangerous to just use algorithmic fairness (metrics) on (their) own," says Rumi Chunara, an associate professor at New York University.
Biased data, for example, can unknowingly contribute to unfairness. Machine learning systems are trained on datasets that may not be fair due to several factors, such as the way the data was collected (for example, if certain populations are underrepresented in the data). In recent work, Chunara and her colleagues found that mortality risk predictive models used in clinical care that were developed in one hospital or region were not generalizable to other populations. Particularly, they found disparities in performance across racial groups, which they think was caused by the training datasets used. "We should really solve the reason why that data is different for those different populations," says Chunara. "Without figuring out what's happening, we can't just put a fairness metric on an algorithm and require the outcomes to be the same."
Although some pre-processing methods can be applied to account for biases in data, they can only help overcome issues to a certain extent. If data is corrupted and prone to noise and unfairness, little can be done to make the AI system trained on it fair, says Konstantinov. "Essentially, this is an example of what machine learning people call 'garbage in, garbage out'," he adds. "Unfortunately, modern AI algorithms often not only preserve, but even amplify unfairness present in the data."
Several avenues are being explored to improve the fairness of AI systems. Chunara thinks focussing on data is important, for example by better examining its provenance and whether it is reproducible. Increasing public understanding of why it is important to collect data could also help persuade more people to contribute theirs, which could lead to more diverse sources and larger datasets.
Some researchers are also taking causal approaches to fairness to tackle discrimination in AI. Machine learning models learn patterns from data to make decisions and it's not always clear if the correlations they are detecting are relevant and fair. By focussing on making models explainable and transparent, for example, the variables involved in a decision are clearer and can then be judged as fair or not. "I follow with particular interest publications that focus on the intersectionality between algorithmic fairness and causality," says Ruf.
However, whether AI systems considered to be fair today will still be seen as fair in the long term is another issue being addressed. Characteristics of society change with time, so ensuring a model remains fair in an adaptive manner is an active line of research being pursued. "It may be possible to ensure that a model is completely fair by one definition on a fixed dataset and for particular situations," says Konstantinov. "However, providing long-term guarantees on fairness is an open problem."
Sandrine Ceurstemont is a freelance science writer based in London, U.K.
In physics, the electronvolt (symbol eV; also written electron volt) is a unit of energy equal to approximately 160 zeptojoules (symbol zJ) or 1.6×10⁻¹⁹ joules (symbol J). By definition, it is the amount of energy gained (or lost) by the charge of a single electron moved across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb, 1 J/C) multiplied by the elementary charge (e, or 1.602176565(35)×10⁻¹⁹ C). Therefore, one electron volt is equal to 1.602176565(35)×10⁻¹⁹ J. Historically, the electron volt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with charge q has an energy E = qV after passing through the potential V; if q is quoted in integer units of the elementary charge and the terminal bias in volts, one gets an energy in eV.
The electron volt is not an SI unit, and its definition is empirical (unlike the litre, the light year and other such non-SI units), thus its value in SI units must be obtained experimentally. Like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C multiplied by the elementary charge. It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). Thus meV stands for milli-electron volt.
In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion electron volts; it is equivalent to the GeV.
By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum (from E = mc²). It is common to simply express mass in terms of "eV" as a unit of mass, effectively using a system of natural units with c set to 1.
The mass equivalent of 1 eV is 1.783×10⁻³⁶ kg.
For example, an electron and a positron, each with a mass of 0.511 MeV/c², can annihilate to yield 1.022 MeV of energy. The proton has a mass of 0.938 GeV/c². In general, the masses of all hadrons are of the order of 1 GeV/c², which makes the GeV (gigaelectronvolt) a convenient unit of mass for particle physics:
- 1 GeV/c² = 1.783×10⁻²⁷ kg
- 1 amu = 931.4941 MeV/c² = 0.9314941 GeV/c²
In high-energy physics, the electron volt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain an amount of energy of 1 eV. This gives rise to usage of eV (and keV, MeV, GeV or TeV) as units of momentum, for the energy supplied results in acceleration of the particle.
The dimensions of momentum units are LT⁻¹M. The dimensions of energy units are L²T⁻²M. Then, dividing the units of energy (such as eV) by a fundamental constant that has units of velocity (LT⁻¹) facilitates the required conversion from energy units to a description of momentum. In the field of high-energy particle physics, the fundamental velocity unit is the speed of light in vacuum c. Thus, dividing energy in eV by the speed of light, one can describe the momentum of an electron in units of eV/c.
The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity. For example, if the momentum p of an electron is said to be 1 GeV/c, then the conversion to MKS can be achieved by:

p = 1 GeV/c = (1×10⁹ × 1.602×10⁻¹⁹ C × 1 V) / (2.998×10⁸ m/s) ≈ 5.34×10⁻¹⁹ kg·m/s
In particle physics, a system of "natural units" in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented in units of inverse particle masses.
Outside this system of units, the conversion factors between electronvolt, second, and nanometre are the following: ħ = 6.582×10⁻¹⁶ eV·s and ħc = 197.33 eV·nm.
The above relations also allow expressing the mean lifetime τ of an unstable particle (in seconds) in terms of its decay width Γ (in eV) via Γ = ħ/τ. For example, the B0 meson has a lifetime of 1.530(9) picoseconds, a mean decay length of cτ = 459.7 µm, and a decay width of (4.302±0.025)×10⁻⁴ eV.
Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds.
For example, a typical magnetic confinement fusion plasma is 15 keV, or 170 megakelvin.
As an approximation: kBT is about 0.025 eV (≈ 290 K/11604 K/eV) at a temperature of 20 °C.
The energy E, frequency ν, and wavelength λ of a photon are related by E = hν = hc/λ.
A photon with a wavelength of 532 nm (green light) would have an energy of approximately 2.33 eV. Similarly, 1 eV would correspond to an infrared photon of wavelength 1240 nm or frequency 241.8 THz.
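These conversions are easy to script (a sketch of our own, using the rounded constant hc ≈ 1239.84 eV·nm):

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm (rounded)

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV from wavelength in nm: E = hc / wavelength."""
    return HC_EV_NM / wavelength_nm

def photon_wavelength_nm(energy_ev):
    """Photon wavelength in nm from energy in eV: wavelength = hc / E."""
    return HC_EV_NM / energy_ev

print(photon_energy_ev(532))      # ≈ 2.33 eV (green light)
print(photon_wavelength_nm(1.0))  # ≈ 1240 nm (infrared)
```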
In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material.
- 5.25×10³² eV: total energy released from a 20 kt nuclear fission device
- 1.22×10²⁸ eV: the energy at the Planck scale
- 1×10²⁵ eV: the approximate grand unification energy
- ~624 EeV (6.24×10²⁰ eV): energy consumed by a single 100-watt light bulb in one second (100 W = 100 J/s ≈ 6.24×10²⁰ eV/s)
- 300 EeV (3×10²⁰ eV = ~50 J): the so-called Oh-My-God particle (the most energetic cosmic ray particle ever observed)
- 1 PeV: one petaelectronvolt, the amount of energy measured in each of two different cosmic neutrino candidates detected by the IceCube neutrino telescope in Antarctica
- 14 TeV: the designed proton collision energy at the Large Hadron Collider (which has operated at half of this energy since 30 March 2010)
- 1 TeV: a trillion electronvolts, or 1.602×10⁻⁷ J, about the kinetic energy of a flying mosquito
- 125.3±0.6 GeV: the mass energy of the Higgs boson, as measured by two separate detectors at the LHC to a certainty of 5 sigma
- 210 MeV: the average energy released in fission of one Pu-239 atom
- 200 MeV: the average energy released in nuclear fission of one U-235 atom
- 17.6 MeV: the average energy released in the fusion of deuterium and tritium to form He-4; this is 0.41 PJ per kilogram of product produced
- 1 MeV (1.602×10⁻¹³ J): about twice the rest energy of an electron
- 13.6 eV: the energy required to ionize atomic hydrogen; molecular bond energies are on the order of 1 eV to 10 eV per bond
- 1.6 eV to 3.4 eV: the photon energy of visible light
- 25 meV: the thermal energy kBT at room temperature; one air molecule has an average kinetic energy of 38 meV
- 230 µeV: the thermal energy kBT of the cosmic microwave background
If it is possible to construct both an inscribed and a circumscribed circle for a polygon, then the area of the polygon is smaller than the area of the circumscribed circle but larger than the area of the inscribed circle.
For some polygons, formulas are known for finding the radius of the inscribed and circumscribed circles.
A circle inscribed in a polygon is a circle touching all sides of the polygon. For a triangle, the formula for the radius of the inscribed circle is r = ((p − a)(p − b)(p − c)/p)^(1/2), where p is the semiperimeter and a, b, c are the sides of the triangle. For an equilateral triangle the formula is simplified: r = a/(2·3^(1/2)), where a is the side of the triangle.
The circle circumscribed about a polygon is a circle on which all the vertices of the polygon lie. For a triangle, the radius of the circumscribed circle is found by the formula R = abc/(4(p(p − a)(p − b)(p − c))^(1/2)), where p is the semiperimeter and a, b, c are the sides of the triangle. For an equilateral triangle the formula is simpler: R = a/3^(1/2).
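Both formulas can be checked numerically (a sketch of our own, using the 3-4-5 right triangle, whose radii are known to be r = 1 and R = 2.5):

```python
from math import sqrt

def triangle_radii(a, b, c):
    """Radii of the inscribed (r) and circumscribed (R) circles of a triangle."""
    p = (a + b + c) / 2                                           # semiperimeter
    r = sqrt((p - a) * (p - b) * (p - c) / p)                     # inscribed
    R = a * b * c / (4 * sqrt(p * (p - a) * (p - b) * (p - c)))   # circumscribed
    return r, R

print(triangle_radii(3, 4, 5))  # (1.0, 2.5)
```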
For polygons in general, it is not always possible to find a relationship between the radii of the inscribed and circumscribed circles and the lengths of the sides. In practice, one often simply constructs such circles for the polygon and then measures the radius physically using measuring instruments.
To construct the circumscribed circle of a convex polygon, construct the perpendicular bisectors of two of its sides; the center of the circumscribed circle lies at their intersection, and the radius is the distance from that point to any vertex of the polygon. The center of the inscribed circle lies at the intersection of the bisectors of two of the polygon's angles; it is enough to construct two such bisectors. The radius of the inscribed circle is equal to the distance from the point of intersection of the bisectors to a side of the polygon.
What is Linear Regression?
Linear regression is a data plot that graphs the linear relationship between an independent and a dependent variable. It is typically used to visually show the strength of the relationship and the dispersion of results – all for the purpose of explaining the behavior of the dependent variable.
Say we wanted to test the strength of the relationship between the amount of ice cream eaten and obesity. We would take the independent variable, the amount of ice cream, and relate it to the dependent variable, obesity, to see if there was a relationship. Given a regression is a graphical display of this relationship, the lower the variability in the data, the stronger the relationship and the tighter the fit to the regression line.
- Linear regression models the relationship between a dependent and independent variable(s).
- Regression analysis can be performed if the variables are independent, there is no heteroscedasticity, and the error terms of the variables are not correlated.
- Modeling linear regression in Excel is easier with the Data Analysis ToolPak.
There are a few critical assumptions about your data set that must be true to proceed with a regression analysis:
- The variables are independent.
- There is no heteroscedasticity (the variance of the error terms is constant).
- The error terms are not correlated with one another.
If those three things sound complicated, they are. But the effect of any one of those considerations not being true is a biased estimate. Essentially, you would misstate the relationship you are measuring.
Outputting a Regression in Excel
The first step in running regression analysis in Excel is to double-check that the free Excel plugin Data Analysis ToolPak is installed. This plugin makes calculating a range of statistics very easy. It is not required to chart a linear regression line, but it makes creating statistics tables simpler. To verify if installed, select "Data" from the toolbar. If "Data Analysis" is an option, the feature is installed and ready to use. If not installed, you can request this option by clicking on the Office button and selecting "Excel options".
Using the Data Analysis ToolPak, creating a regression output is just a few clicks.
The independent variable goes in the X range.
Given the S&P 500 returns, say we want to know if we can estimate the strength and relationship of Visa (V) stock returns. The Visa (V) stock returns data populates column 1 as the dependent variable. S&P 500 returns data populates column 2 as the independent variable.
- Select "Data" from the toolbar. The "Data" menu displays.
- Select "Data Analysis". The Data Analysis - Analysis Tools dialog box displays.
- From the menu, select "Regression" and click "OK".
- In the Regression dialog box, click the "Input Y Range" box and select the dependent variable data (Visa (V) stock returns).
- Click the "Input X Range" box and select the independent variable data (S&P 500 returns).
- Click "OK" to run the results.
[Note: If the table seems small, right-click the image and open in new tab for higher resolution.]
Interpret the Results
Using that data (the same from our R-squared article), we get the following table:
The R² value, also known as the coefficient of determination, measures the proportion of variation in the dependent variable explained by the independent variable, or how well the regression model fits the data. The R² value ranges from 0 to 1, and a higher value indicates a better fit. The p-value, or probability value, also ranges from 0 to 1 and indicates whether the test is significant. In contrast to the R² value, a smaller p-value is favorable, as it indicates that the relationship between the dependent and independent variables is statistically significant.
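For readers who want to reproduce these statistics outside Excel, here is a minimal sketch using SciPy; the return values mirror the key figures in the Excel table, and the data arrays below are placeholders, not the Visa/S&P data used above:

```python
from scipy import stats

# Placeholder return series; substitute your own dependent (y)
# and independent (x) data, e.g. Visa and S&P 500 returns.
x = [0.01, -0.02, 0.015, 0.007, -0.011, 0.02]
y = [0.012, -0.018, 0.02, 0.01, -0.013, 0.025]

result = stats.linregress(x, y)
print("slope:", result.slope)
print("intercept:", result.intercept)
print("R-squared:", result.rvalue ** 2)
print("p-value:", result.pvalue)
```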
Charting a Regression in Excel
We can chart a regression in Excel by highlighting the data and charting it as a scatter plot. To add a regression line, choose "Layout" from the "Chart Tools" menu. In the dialog box, select "Trendline" and then "Linear Trendline". To add the R² value, select "More Trendline Options" from the "Trendline" menu. Lastly, select "Display R-squared value on chart". The visual result sums up the strength of the relationship, albeit at the expense of not providing as much detail as the table above.
To describe the Doppler effect and its application to the measurement of flow velocity.
To understand the differences between continuous-wave and pulsed-wave Doppler and the clinical rationale for each.
To identify the components of the Doppler spectral waveform.
To recognize the aliasing artifact and understand the methods of reducing or eliminating it.
To understand the principles of the two major types of color Doppler imaging.
Color flow imaging
Color velocity scale
Color wall filter
Combined Doppler mode
Doppler shift frequency
Doppler spectral waveform
Maximum velocity waveform
Packet size (ensemble length)
Power Doppler imaging
The ability to identify flow patterns and measure flow velocities is one of the most important functions of diagnostic ultrasound. The sonographer must understand the factors that contribute to the Doppler information displayed on the monitor. Most of what we discuss in this chapter applies to both spectral Doppler and color flow imaging, since each mode is governed by the Doppler equation and is ultimately subject to the same factors.
THE DOPPLER EFFECT
The Doppler effect is the observed change in frequency of a transmitted wave due to the relative motion between the source of the sound and the receiver or observer. Doppler ultrasound is a valuable tool because this methodology detects the presence, direction, velocity, and time variation of blood flow within blood vessels and in the heart. Several types of Doppler devices are available. Although each relies on the Doppler effect to detect motion, the manner in which flow information is acquired, processed, and displayed distinguishes one type of instrument from another. Some scanners offer several Doppler modes, which are selectable by the user. The most basic (inexpensive) systems offer only a single option for the Doppler mode (velocity analysis or two-dimensional Doppler imaging, i.e., color flow).
The apparent frequency change produced by the Doppler effect is based on the relative motion between the source of sound and the observer, regardless of which is moving and which is stationary. When a police car with siren blaring passes a pedestrian, the pitch of the siren appears higher as the vehicle approaches and lower as it retreats. In this illustration, the sound source is the moving vehicle, while the receiver or observer is the stationary pedestrian.
Imagine a situation in which an observer is standing in a boat in the middle of a lake. If the wind is blowing at a constant rate from the north and the waves all have the same distance between peaks (same wavelength), the stationary boat will encounter the same number of wave crests each second (constant frequency) as are produced by the wind. If the boat begins traveling in a northerly direction, into the wind, the wave crests are encountered more frequently. The observer standing in the boat sees an increase in the wave frequency, although, in actuality, the frequency of the cresting waves has not changed. If the boat turns around and begins heading south, this time with the wind (away from the source of the waves), fewer crests are seen, and to the observer the frequency appears to decrease. As the boat moves faster in either direction, the difference between the actual and observed frequencies becomes greater. The only circumstance in which these “transmitted” and “observed” frequencies are the same is when the boat is stationary.
A stationary observer views the same number of pressure waves as are emitted by the stationary source (Figure 5-1). However, the relative motion between the sound source and the receiver distorts the pattern of symmetric wavefronts and the observed frequency increases or decreases, depending upon the direction of movement. The change or difference in frequency between the transmitted frequency and the received frequency, caused by the motion, is the Doppler shift frequency (often abbreviated as “Doppler shift” or “Doppler frequency”). In the example of the police siren above, the frequency appears higher to the stationary observer as the car approaches. In this case, the relative motion of the source and the receiver is toward one another. As the police car passes and travels away from the observer, the frequency appears to decrease, since the relative movement between the source and the observer is away from one another.
FIGURE 5-1. The Doppler effect. (A) Stationary sound source and receiver, the observed frequency is the same as the frequency emitted by the sound source. (B) Sound source moving toward the receiver, the observed frequency is higher than the actual frequency emitted by the sound source. (C) Sound source moving away from the receiver, the observed frequency is lower than the actual frequency emitted by the sound source.
When considering a sound wave produced by a piezoelectric transducer, the sound source remains stationary while the moving “receiver” could be blood cells or another moving structure, such as a heart valve. The echo from the moving reflector is then observed with a Doppler shift frequency by the stationary transducer, which is now the receiver.
The magnitude of the Doppler shift frequency depends on how rapidly the sound source, the receiver, or both are moving in relation to one another. An increase in the relative velocity between the source and the receiver causes a greater deviation from the transmitted frequency. Indeed, this is the rationale behind why we perform the Doppler examination. The Doppler shift frequency (fD) produced by a moving reflector is calculated from the equation:
fD = (2fv cos θ) / c

where c is the acoustic velocity of tissue, f is the transmitted frequency, v is the velocity of the interface, and θ is the angle between the path of reflector movement and the direction of beam propagation (called the Doppler angle or angle to flow), as illustrated in Figure 5-2. Note that the letter "c" in the Doppler equation represents the velocity of sound in tissue instead of the usual "v" for velocity. In this mathematical symbolism, the character "v" is reserved for the velocity of the flowing blood. The number 2 in the equation represents the two separate (and equal) Doppler shifts that occur in Doppler ultrasound. The first Doppler shift occurs between the stationary sound source, the transducer, and the "observer," the moving blood cells. The second Doppler shift takes place as the moving blood cells (now the sound source) reflect the sound wave back to the stationary transducer, which now becomes the receiver.
FIGURE 5-2. Doppler ultrasound detection of reflector velocity. The Doppler angle θ is defined by the reflector path with respect to the transmitted beam.
As a reflector moves directly toward a 5-MHz transducer at a velocity of 50 cm/s, the angle to flow is 0 degrees and the observed frequency is 5,003,247 Hz, corresponding to a Doppler shift frequency of 3247 Hz above the original transmitted frequency (Figure 5-3). If the flow is away from that transducer at 50 cm/s, the observed frequency is 4,996,753 Hz or 3247 Hz below the original transmitted frequency. The Doppler angle gives the component of the velocity along the direction of propagation for the ultrasound beam. If the Doppler angle is increased from 0 to 30 degrees, the Doppler shift frequency is 2.8 kHz instead of the 3.2 kHz obtained for parallel incidence. For a given reflector velocity, the Doppler shift frequency decreases as the Doppler angle is increased (Figure 5-4).
FIGURE 5-3. Received echo frequency is 3247 Hz above the transmitted frequency when the flow velocity is 50 cm/s toward the transducer. The transmitted frequency is 5 MHz and the Doppler angle θ is 0 degrees.
FIGURE 5-4. Doppler shift frequency from reflectors moving at a velocity of 50 cm/s versus Doppler angle. The transmit frequency is 5 MHz. The Doppler shift frequency is 3.2 kHz at 0 degrees, 2.8 kHz at 30 degrees, and 1.6 kHz at 60 degrees. No Doppler shift frequency is observed at 90 degrees.
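These figures are easy to reproduce. The following minimal Python sketch evaluates the Doppler equation for the 5-MHz, 50-cm/s example above, assuming the usual soft-tissue sound speed of 1540 m/s:

```python
import math

C = 1540.0  # assumed acoustic velocity in soft tissue (m/s)

def doppler_shift_hz(f_tx_hz, v_m_per_s, angle_deg):
    """Doppler shift frequency fD = 2*f*v*cos(theta)/c for a reflector
    moving at velocity v relative to a stationary transducer."""
    return 2.0 * f_tx_hz * v_m_per_s * math.cos(math.radians(angle_deg)) / C

print(doppler_shift_hz(5e6, 0.50, 0))   # ~3247 Hz, as in Figure 5-3
print(doppler_shift_hz(5e6, 0.50, 30))  # ~2812 Hz (2.8 kHz)
print(doppler_shift_hz(5e6, 0.50, 60))  # ~1623 Hz (1.6 kHz)
print(doppler_shift_hz(5e6, 0.50, 90))  # ~0 Hz: cosine of 90 degrees is zero
```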
No Doppler shift frequency occurs at a 90-degree angle of incidence (cosine theta in the Doppler equation is equal to zero for an incident angle of 90 degrees). In practice, the signal never disappears completely. Because the beam has a finite width, some portion of the beam impinges at an angle that is not perpendicular to the motion. Beam divergence tends to amplify this effect, especially in the region beyond the beam’s focal point.
The acoustic velocity is assumed to remain constant at a value of 1540 m/s for soft tissue. The observed change in frequency occurs because the sound beam encounters a moving structure between the source and the detector. The Doppler equation predicts that an increase in reflector velocity results in a greater Doppler shift frequency. If the Doppler shift frequency and angle to flow are known, the velocity of the moving reflectors can be calculated. In practice, the transmitted and received frequencies are first measured, and then processed to find the resultant Doppler shift frequency. The instrument accomplishes these steps autonomously without operator intervention. However, the Doppler angle to flow must be determined by the sonographer with manual input to the scanner for the correct display of flow velocity.
Uncertainty in the measurement of the Doppler angle, particularly at large angles, introduces error in the velocity computation. The exact angle to flow is much more of a consideration when evaluating blood vessels than in the heart due to differences in acoustic access. In vascular applications, the process of angle correction (angle measurement) must be performed by the sonographer in order to achieve an accurate estimation of flow velocity. At a Doppler angle to flow of 60 degrees, the resultant Doppler shift is only half that with a Doppler angle of 0 degrees. The angle to flow must be measured as accurately as possible, because a 5-degree deviation for a 60-degree angle to flow (frequently used when examining blood vessels) introduces an 18% error in the measurement of flow velocity.
Conversely, when the beam is nearly parallel to flow (as is frequently the case in the heart), the Doppler angle to flow is assumed to be 0 degrees and no angle correction is performed. Near 0 degrees, a 5-degree inaccuracy in the angle results in only about a 1% error in the calculation of flow velocity, and even a 10-degree error produces a velocity error of less than 2%. This forgiving geometry is why cardiac applications simply assume an angle of insonation of 0 degrees.
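The sensitivity to angle error follows directly from the cosine term: the scanner converts the measured Doppler shift to velocity using the operator-entered angle, so the error is set by the ratio of the true cosine to the entered cosine. A minimal sketch (the function name is illustrative):

```python
import math

def velocity_error_pct(true_angle_deg, entered_angle_deg):
    """Percent error in the displayed velocity when the wrong Doppler
    angle is entered. The measured shift is proportional to cos(true
    angle), but the scanner divides by cos(entered angle)."""
    ratio = math.cos(math.radians(true_angle_deg)) / math.cos(math.radians(entered_angle_deg))
    return (ratio - 1.0) * 100.0

print(velocity_error_pct(60, 65))  # ~ +18%: the vascular case in the text
print(velocity_error_pct(5, 0))    # ~ -0.4%: the near-parallel cardiac case
```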
Doppler signals from superficial blood vessels (e.g., the carotids) should generally not be acquired at angles greater than 60 degrees, due to the increased potential of error as the Doppler angle approaches 90 degrees. Regardless of the angle, care should be taken in vascular applications to measure the angle to flow as accurately as possible.
Scattering from Blood
For Doppler measurements of blood flow, red blood cells (RBCs) act as Rayleigh scatterers. The RBC has a diameter of about 7 μm, much smaller than the wavelength of the sound wave (usually 0.2–0.5 mm), and thus meets the condition for Rayleigh scattering. Rayleigh scattering exhibits very strong frequency dependence (proportional to the fourth power of the frequency). Therefore, the intensity of the scattered ultrasound energy increases dramatically as the transmitted frequency increases.
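The fourth-power dependence is dramatic in practice. As a quick illustration (frequencies are hypothetical), raising the transmit frequency from 2 MHz to 5 MHz increases the backscattered intensity from blood by a factor of (5/2)^4 ≈ 39:

```python
def relative_scatter_intensity(f_new_mhz, f_ref_mhz):
    """Relative Rayleigh-scattered intensity, proportional to frequency^4."""
    return (f_new_mhz / f_ref_mhz) ** 4

print(relative_scatter_intensity(5.0, 2.0))   # ~39x stronger scattering from blood
print(relative_scatter_intensity(10.0, 2.0))  # ~625x, but attenuation also rises
```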
The intensity of the scattered sound also depends on the number of RBCs and thus the quantity of blood in the sample volume. Because the scattering from blood is small compared with echoes produced by soft tissue interfaces, blood-filled vessels appear to be echo-free on the B-mode image. To enhance scattering and, therefore, increase the sensitivity to weak echoes generated from blood cells, a high-frequency transducer is often advantageous. However, at higher frequencies, the rate of attenuation of the sound beam by the intervening tissues becomes greater. Therefore, as with B-mode imaging, two opposing frequency-dependent effects (in the case of Doppler, penetration and scattered echo intensity) must be balanced by matching the transducer transmit frequency with the depth of the region of interest.
Doppler transducers usually operate in the frequency range of 2–10 MHz, because other constraints are placed on the system: a single transducer with dual imaging and Doppler functions, a desired frequency range for Doppler shift frequency, and the problem of aliasing (discussed later in this chapter). High transmit frequencies, typically 5–7 MHz, are employed for peripheral vascular Doppler examinations, whereas examinations of deep-seated vessels are performed at frequencies near 2 MHz. Most often, the transmitted Doppler frequency is somewhat lower than the nominal imaging frequency of the transducer. For example, a transducer labeled as “5 MHz” refers specifically to the B-mode imaging frequency. The transmitted frequency of sound used for Doppler evaluation in that same transducer will likely be in the range of 2–3 MHz. Some ultrasound instruments display the actual transmitted frequency used for Doppler, while others display only descriptors such as “resolution/penetration” while in the Doppler mode.
Doppler units are designed to extract the Doppler shift frequencies from received signals. The Doppler shift frequency is in the audible range (typically between 200 and 15,000 Hz). Therefore, loudspeakers are used as output devices in addition to any other type of available display. Nearly all commercially available systems provide an audio display of the Doppler signal, as the human ear is extremely sensitive to Doppler signals. For visual display, the preferred format is to convert the measured Doppler shift frequency to velocity, which is independent of instrument parameters. Doppler displays utilizing frequency expressed in kilohertz, without velocity information, are not readily comparable when multiple examinations are performed by different sonographers on different instruments.
A continuous-wave (CW) Doppler transducer contains two piezoelectric elements: one to transmit the sound waves of constant frequency continuously and one to receive the echoes continuously (Figure 5-5). A single-element transducer cannot send and receive at the same time. Since the transmitted sound wave is not pulsed, broad bandwidth transducers are not practical or even appropriate (wide frequency range yields multiple Doppler shifts for a reflector moving at constant velocity).
FIGURE 5-5. Continuous-wave Doppler transducer. Pencil-type probe has two piezoelectric crystals: one transmits continuously, the other receives continuously.
The sampling volume is restricted by the transmitted ultrasonic field (dependent on the frequency and focal properties of the sound beam) and the geometric arrangement of the elements. For the detection of a moving reflector located along the path of the transmitted beam, the resulting echo must strike the receiving crystal. The sensitive volume, or zone of sensitivity, is defined by the intersection of the transmitted ultrasound field and the reception zone. In essence then, each two-element transducer is focused to a particular depth (Figure 5-6). The two elements are tilted slightly to allow overlap between their respective fields of view (transmission and reception). A multiple-element array transducer creates a similar zone of sensitivity in CW Doppler mode by dedicating one group of elements as the transmitter and another group as the receiver (Figure 5-7).
FIGURE 5-6. Zone of sensitivity for CW Doppler transducer.
FIGURE 5-7. Multiple-element array transducer operating in CW Doppler mode. One group of elements (black) is designated for transmission and another group (gray) is assigned for reception. A zone of sensitivity is created where the wave patterns overlap.
Depending on the clinical application, the sonographer selects a CW transducer with the appropriate operating frequency and sensitive region. In a multiple-element array transducer, the operating frequency and depth of the sensitivity zone in CW Doppler mode may be adjustable, depending upon the instrument.
The transmitted sound wave interacts with various reflectors, some of which are stationary and others moving. A fraction of the incident sound intensity is reflected at each interface. If the reflector is stationary, the frequency of the reflected sound wave is the same as the transmitted frequency, and consequently no change in frequency is observed. A moving interface causes the frequency of the echo to shift up or down depending on whether the movement is toward or away from the sound source.
Measurement of the Doppler shift frequency is based on the principle of wave interference. The Doppler effect causes the reflected wave received from a moving interface to vary slightly in frequency from the original transmitted wave. When waves with different frequencies are algebraically added together, they yield a slowly oscillating broad pattern of peaks and valleys, called the beat frequency (Figure 5-8). The beat frequency equals the difference in frequency between the two waves (transmitted and received) and thus corresponds to the Doppler shift frequency.
FIGURE 5-8. Doppler signal processing. (A) Continuous reference transmitted signal of constant frequency (25 cycles). (B) Continuous echo-induced signal of constant frequency (20 cycles). (C) Addition of the transmitted and received signals in A and B forms a complex waveform. The beat frequency of five cycles composes the outer envelope (dotted line) of this complex waveform.
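The beat pattern in Figure 5-8 can be reproduced numerically. The sketch below, with frequencies chosen to match the figure's 25- and 20-cycle waveforms, adds the two signals and shows that the envelope repeats |25 − 20| = 5 times per unit time:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 5000)            # one-second observation window
transmitted = np.sin(2 * np.pi * 25 * t)   # reference signal, 25 cycles (Fig. 5-8A)
received    = np.sin(2 * np.pi * 20 * t)   # echo-induced signal, 20 cycles (Fig. 5-8B)

combined = transmitted + received          # interference pattern (Fig. 5-8C)
# Trigonometric identity: sin(a) + sin(b) = 2*sin((a+b)/2)*cos((a-b)/2).
# The |cos| envelope below peaks 2 * 2.5 = 5 times per second -- the 5-Hz
# beat frequency, equal to the difference of the two input frequencies.
envelope = 2.0 * np.abs(np.cos(2 * np.pi * 2.5 * t))
```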
Figure 5-9 illustrates the steps required to generate the Doppler signal. The oscillator regulates the transmitter to emit a continuous sound wave of a single frequency. Alternating pressure on the receiving element by the returning echo is converted to an RF (radiofrequency) signal. The amplifier increases the echo-induced signal level. The reference waveform from the oscillator, which mimics the transmitted wave, is then combined with the received signal at the demodulator, generating a complex resultant wave by means of wave interference. This resultant wave is then processed to remove the rapidly oscillating components; however, the slowly varying envelope corresponding to the beat frequency (dotted line in Figure 5-8C) is retained. Isolation of the beat frequency yields the Doppler signal, which has a frequency equal to the Doppler shift frequency.
FIGURE 5-9. Schematic showing components of a continuous-wave Doppler unit.
The signal processing illustrated by Figure 5-8 yielded a single beat frequency, which denoted reflectors moving at a single, constant velocity. In a Doppler ultrasound examination of blood flow, RBCs within a vessel have a range of velocities that vary throughout the heart cycle, and therefore a range of Doppler shift frequencies will be present. The velocity of each moving reflector corresponds to a characteristic beat frequency upon echo detection and processing. Many beat frequencies representing all detected motion within the sampling volume comprise the Doppler signal. A complex Doppler signal is then formed by the summation of all the Doppler shift frequencies present after demodulation.
The complex Doppler signal is amplified, filtered to remove unwanted low-frequency components caused by slow-moving structures such as vessel walls, and then routed to a loudspeaker for audible “display.” The pitch of the audio output corresponds to the frequency shift between the transmitted and received sound waves and indicates the flow velocity within the vessel. As flow velocity becomes greater, a higher pitch is heard. A typical audio Doppler display for an artery exhibits a rhythmic rise and fall in the audible frequency due to the acceleration and deceleration of blood with systole and diastole.
Large, slow-moving specular reflectors in the body (e.g., vessel walls or heart valves) generate strong echoes with relatively low Doppler shift frequencies. These low frequencies produce a distracting thumping sound often referred to as “wall thump.” Filtering removes these low frequencies, which are normally not of major interest and could mask other signals. The operator control, wall filter, rejects all frequencies below the threshold value, known as the cutoff frequency (Figure 5-10).
FIGURE 5-10. Wall filter control.
The cutoff frequency is usually set by default to remove Doppler shift frequencies below 100 Hz. Depending on the manufacturer and model, the cutoff frequency can be adjusted to values as low as 40 Hz and as high as 1000 Hz (1 kHz). Most units automatically set the threshold value based on the study type selected in the preset menu. Because the wall filter control removes all frequencies below the cutoff value, care must be taken so that slow-moving flow is not excluded from the display. Thus, the wall filter should be set at the lowest possible value to remove wall thump while not eliminating any important blood-flow components of the Doppler signal. This is particularly true for slow venous flow as well as for the slight flow reversal that occurs in a normal triphasic arterial waveform (discussed in the following chapter).
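Conceptually, the wall filter is a high-pass threshold applied to the Doppler shift spectrum. A toy sketch (the function name and cutoff value are illustrative):

```python
def wall_filter(doppler_shifts_hz, cutoff_hz=100.0):
    """Toy wall filter: reject Doppler shift components whose magnitude
    falls below the cutoff, removing low-frequency 'wall thump'."""
    return [f for f in doppler_shifts_hz if abs(f) >= cutoff_hz]

# The 40-Hz and -60-Hz components (slow wall motion) are rejected;
# the genuine blood-flow components survive.
print(wall_filter([40.0, -60.0, 250.0, 1200.0]))  # [250.0, 1200.0]
```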
CW Doppler has high sensitivity to detect slow flow with low Doppler shift frequencies and, further, can discriminate small differences in flow velocity (Figure 5-11). The long sampling time of CW Doppler enables this modality to identify small changes in frequency corresponding to slow flow. At the other extreme, high-velocity flow is accurately measured with no limitation in velocity range. However, extensive flow volumes, such as those encountered within the left ventricle, cannot be accurately assessed with CW Doppler because precise depth information is not possible. The observed Doppler signal can be extremely complex, because the sum of Doppler shifts generated by all the moving interfaces within the sensitive volume is portrayed. If the sampling volume includes multiple vessels, the superposition of resulting Doppler shifts becomes especially problematic. Therefore, CW Doppler is limited to those clinical applications in which the sensitive volume can be associated with a single vessel, such as the brachial or femoral artery. CW Doppler is commonly employed to evaluate flow patterns in heart valves. In this case, even though the large sampling area of CW Doppler records flow from other portions of the atria and ventricles, the easily recognizable flow pattern of the aortic and mitral valves is readily identified. Coupled with the fact that CW Doppler has essentially no practical limit to the velocity that can be measured, this modality is ideal to assess stenosis in valves, such as the aortic valve. (Aortic stenosis often produces velocities in the range of 500–600 cm/s, which would be impossible to determine accurately with a pulsed-wave (PW) Doppler system.)
FIGURE 5-11. Sensitivity of continuous-wave Doppler to slow flow. The echo-induced signal from a slow-moving reflector (dotted line) requires several cycles to be differentiated from the reference transmitted signal (solid line).
PW Doppler provides quantitative depth information of the moving reflectors. Depth of echo formation is obtained via the echo-ranging principle in similar fashion to B-mode imaging. The transducer is electrically stimulated to produce a short burst of ultrasound and then is silent to listen for echoes before another pulsed wave is generated. Because of the requirement to assign depth, there is a physical limit to the number of Doppler pulses that can be transmitted in a given amount of time. Also, Doppler shift frequency determination entails longer pulse duration than in B-mode imaging. The necessity for increased pulse duration lies in the desire to detect received frequencies associated with slow flow that are almost the same as the transmitted frequency. Imagine that the pulse duration was confined to three cycles as in a typical B-mode acquisition (Figure 5-12). Certainly, the ability to distinguish small changes compared with the transmitted frequency becomes more difficult as pulse duration is shortened.
FIGURE 5-12. Pulsed-wave transmission of few cycles is unable to detect low-velocity reflector. Reference transmitted signal (solid line) and echo-induced signal (dotted line) are nearly identical.
The received signals are electronically gated for processing so that only the echoes detected in a narrow time interval after transmission, corresponding to a specific depth, contribute to the Doppler signal. The delay time before the gate is turned on determines the depth of the sample volume; the amount of time the gate is activated establishes the axial length of the sample volume (Figure 5-13). Gate parameters are selected by the operator; thus, the axial size of the sensitive volume and the depth of the sample can be adjusted. The axial sample length can be as small as 1 mm. The remaining dimensions of the sampling volume are dictated by the beam width in the in-plane direction and in the elevation direction. Figure 5-14 illustrates the designation of the sampling region along the Doppler scan line in a B-mode image. Transducer frequency and focusing characteristics influence the dimensions of the ultrasonic field.
FIGURE 5-13. In pulsed-wave Doppler, the timing gate determines the depth and axial length of the sampling volume.
FIGURE 5-14. Operator-defined sampling area for pulsed-wave Doppler. The dotted line indicates the direction of sampling, and the parallel horizontal lines mark the axial extent of the sensitive region.
Multiple echoes from a moving reflector separated in time must be accrued to detect the motion. In order to achieve this, transmitted pulses are repeatedly directed along the same scan line to interrogate the sampling volume. Suppose a photographer took a single stop-action photograph (with an extremely short shutter time) of a car traveling west at 60 miles per hour. If you were shown that photograph, you would be unable to tell if the car was moving or not. And certainly, the direction of travel and speed would be indiscernible. However, if a series of stop-action photographs were acquired over a specific time period and then shown rapidly one after the other, the motion of the car would be clearly depicted, and the speed could be computed if the rate of sampling were known.
In PW Doppler, the basic CW design is modified to accommodate range gating and to collect successive processed echoes for analysis. Accurate time registration is critical for proper depth assignment of the Doppler signals. Gating is based on elapsed time following each transmitted pulse, and the time between consecutive echoes from a reflector is set by the pulse repetition period (PRP). The PRP is the time interval from the beginning of one transmit pulse to the beginning of the next transmit pulse. A single gate limits the interrogation to one depth along the scan line. The direction of sampling is indicated on the display by the Doppler cursor. Echoes formed along the Doppler scan line, but outside the sampling volume, are rejected. Only those echoes generated from within the sampled volume contribute to the Doppler display.
For reflectors moving at uniform and constant velocity within the sampling volume, a series of echoes from successive transmitted pulses are acquired over time. The depth-specific echo from each transmitted pulse, when processed, provides a single instantaneous value of the Doppler signal (beat frequency). The measured values obtained from multiple transmitted pulses are combined to form the time-varying contour of the Doppler shift frequency (Figure 5-15). In essence, the transmitted pulse rate (Doppler pulse repetition frequency or PRF) indicates how often the Doppler signal is sampled. Typically, a sequence of 64–128 pulses is transmitted along the line of sight to interrogate flow within the sample volume. The total observation time is usually 10 ms or less.
FIGURE 5-15. Pulsed-wave Doppler signal processing. A series of transmitted pulses are directed along the Doppler line of sight. (A and B) For each transmitted pulse, the echo-induced signal from moving reflectors within the sampling volume is combined with the reference signal to yield the net signal. The net signal from successive transmit pulses varies due to reflector movement. Note the change in position of RBCs between transmit pulses in (A) and (B). (C) The net signals from multiple transmitted pulses, when placed on a time axis, compose the beat frequency. The first two points of sampling from A and B are shown as solid lines. Subsequent measurements are indicated by the dotted lines. Connecting all the data points yields the projected beat frequency.
In PW Doppler, the beat frequency is not as well defined as with CW Doppler, because the pulsed echoes are equivalent to sampling the Doppler signal at discrete intervals. The oscillatory pattern can be more accurately delineated if the sampling occurs repeatedly at short intervals. This requires a high Doppler PRF.
Blood flows with a range of velocities within the sample volume and gives rise to multiple Doppler shift frequencies. These combine via interference to yield a complex Doppler signal, which represents all flow velocities present in the sampled volume. Fortunately, methods have been developed to isolate the individual velocity components and then display this information in an easy to understand format.
Velocity Detection Limit
PW Doppler has a limit with respect to the maximum beat frequency that can be detected accurately. This upper frequency boundary is called the Nyquist limit, which is caused by discrete (noncontinuous) sampling. The maximum Doppler shift frequency equals one-half the sampling rate, given by the Doppler PRF. Noncontinuous sampling creates a very important impediment in PW Doppler. To accurately measure a fast moving reflector producing a high Doppler shift frequency, a rapid sampling rate is necessary; however, a high PRF restricts the depth that can be interrogated, because a specific time is required to receive the echoes arising from that depth before the next transmitted pulse. Thus, as the depth to the vessel or structure is increased, more time is required between transmit pulses, and the maximum Doppler shift frequency that can be measured becomes lower. The problem becomes more complex because the Doppler shift frequency is also proportional to the transmitted frequency. The most problematic situation for PW Doppler occurs for deep-lying structures with high-velocity flow in which the Doppler angle to flow is near 0 degrees. This combination of factors arises frequently in the Doppler evaluation of heart valves, particularly with disorders such as aortic stenosis in which the velocities can be very high.
Table 5-1 illustrates the effect of the depth of interest and transmitted frequency on the maximum velocity limit when angle to flow is unchanged. As the depth of interest is increased, the maximum reflector velocity that can be measured is decreased. Importantly, a low-frequency transducer allows higher velocities to be detected. A larger Doppler angle extends the maximum velocity limit. At a depth of 10 cm with 5 MHz transmitted frequency, the maximum velocity limit increases from 84 to 119 cm/s when the Doppler angle is changed from 45 to 60 degrees. This velocity constraint occurs because the motion of the reflector is sampled at discrete intervals and not continuously, as with CW Doppler ultrasound. In contrast with PW Doppler, CW Doppler has no maximum velocity limit. (Since the CW transducer is continuously transmitting, there is essentially no “pulse repetition frequency” and therefore no Nyquist limit).
TABLE 5-1 • Maximum Velocity Limit in Pulsed-Wave Doppler with Different Transmit Frequencies
The following real-world example illustrates a practical application of the maximum velocity limit in a Doppler examination. The maximum PRF for a 10-cm depth is approximately 7700 pulses per second. Using a 3.5-MHz transducer with a Doppler angle of 30 degrees, the maximum Doppler shift frequency that can be accurately measured is 3850 Hz, corresponding to a velocity of 98 cm/s. If the transmit frequency were lowered to 2 MHz, the maximum detectable velocity would increase to 171 cm/s. Changing the depth of interest to 15 cm while maintaining the transducer frequency at 3.5 MHz reduces the detectable maximum velocity to 65 cm/s. Fortunately, the physiologic velocities of normal blood flow (except within the heart) usually fall within the detectable range of PW Doppler units.
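These numbers can be verified from the relationships already given: the deepest target fixes the maximum PRF (one round trip per pulse), the Nyquist limit is half that PRF, and the Doppler equation converts the limiting shift into a velocity. A sketch, assuming c = 1540 m/s:

```python
import math

C = 1540.0  # assumed speed of sound in soft tissue (m/s)

def max_prf(depth_m):
    """Highest PRF that still lets echoes from 'depth' return before the
    next pulse: one round trip (2 * depth) per pulse repetition period."""
    return C / (2.0 * depth_m)

def nyquist_velocity(depth_m, f_tx_hz, angle_deg):
    """Maximum measurable velocity (m/s): the Nyquist-limited Doppler
    shift (PRF/2) solved for v in fD = 2*f*v*cos(theta)/c."""
    fd_max = max_prf(depth_m) / 2.0
    return fd_max * C / (2.0 * f_tx_hz * math.cos(math.radians(angle_deg)))

print(max_prf(0.10))                       # ~7700 Hz at 10 cm depth
print(nyquist_velocity(0.10, 3.5e6, 30))   # ~0.98 m/s (98 cm/s)
print(nyquist_velocity(0.10, 2.0e6, 30))   # ~1.71 m/s (171 cm/s)
print(nyquist_velocity(0.15, 3.5e6, 30))   # ~0.65 m/s (65 cm/s)
```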
At a minimum, two measurements are required per beat cycle to define the Doppler shift frequency unambiguously. This is the reason the Nyquist limit (upper limit for detection of the Doppler shift frequency) is equal to one-half the Doppler PRF. Because the beat frequency is sampled intermittently in PW Doppler, limited data are available for calculation of the Doppler shift (each transmit pulse ultimately contributes one point on the waveform of the Doppler shift frequency). If the Doppler PRF is not adequate to generate at least two points per beat cycle of the Doppler shift frequency, the Doppler shift frequencies above the Nyquist limit will be misinterpreted as lower than their actual value (Figure 5-16). This error in the measurement of the Doppler shift frequency caused by a low sampling rate is called aliasing.
FIGURE 5-16. Intermittent sampling of the beat frequency (solid line). (A) Multiple measurements per cycle allow accurate assessment of the beat frequency. (B) As few as two measurements per cycle also provide accurate interpretation of the beat frequency. (C) If the sampling rate is less than two times per cycle, then the true beat frequency (solid line) is misinterpreted as a lower frequency (dotted line).
Imagine that a race car is traveling around an oval racetrack at constant speed. A series of photographs closely spaced in time would accurately depict the motion as the car advances around the track. Indeed, as long as at least two photographs are taken for each lap, the interpretation of the movement would be correct. Now suppose the car accelerates to a higher speed, while the frequency of the photographs remains unchanged. At this faster speed, there may be 1.5 photographs taken for each lap of the car (three photos every two laps). This series of photographs will now appear to show the car moving backward around the track at slower than the actual speed. Thus, there is a minimum sampling rate (2 photos per lap) that accurately portrays the motion of the car around the track, analogous to the minimum sampling rate, or PRF, in Doppler applications.
Because velocity information is almost always displayed in velocity units as opposed to frequency units, the Nyquist limit is typically given in velocity. The Nyquist limit may be displayed separately from the velocity scale; however, it is important for the sonographer to know that the Nyquist limit is equal to the maximum velocity shown on the scale. There are both a positive and a negative value displayed for the Nyquist limit. If the baseline is moved up or down from the middle of the spectral display, the maximum velocity limit for forward and reverse flow is no longer the same (Figure 5-17).
FIGURE 5-17. Baseline is placed off center. The maximum velocity in the forward and reverse directions is not the same (arrows). The + and – maximum velocity values are indicated by arrows.
If the baseline is moved all the way to the top or bottom of the spectral display, the Nyquist limit is extended to the greatest possible value in a single direction for the given sampling rate. However, any flow that is present in the opposite direction is unknown (Figure 5-18).
FIGURE 5-18. Baseline is moved to the bottom of the display. Measurements of velocity are restricted to one direction only, but the maximum velocity in the forward direction that can be displayed without aliasing is extended. |
Guillain-Barré syndrome (GBS) is an inflammation of the covering that surrounds the peripheral nerves, the nerves lying outside the brain and spinal cord. The basis of the inflammation is not conclusively known, but it is generally considered to arise from a malfunctioning immune system that recognizes host tissues as foreign. The inflammatory reaction damages the nerves, producing weakness in the muscles, loss of sensation (such as the sense of touch in the fingers), or outright paralysis.
GBS is termed a syndrome rather than a disease because there is no conclusive evidence that a specific disease-causing agent, such as a bacterium or a virus, is the direct cause of the malady. Infections may, however, act as a trigger for the development of GBS.
The syndrome is named after Georges Charles Guillain and Jean-Alexandre Barré, French co-authors of a classic paper on the syndrome that was published in 1916. A third author, André Strohl, was not subsequently associated with the syndrome that was the subject of the paper.
GBS is a rare and acute disorder. An acute disorder displays a rapid appearance of symptoms, and a rapid worsening of the symptoms. In the case of GBS, symptoms typically appear over just a single day. Most often, symptoms are first noticed in the feet and legs. The symptoms often progress to involve different parts of the body over the next several days to several weeks. In addition, during that time other more severe symptoms can appear. In more than 90% of cases, the symptoms reach their peak by four weeks.
The syndrome is an inflammatory disorder, in which a person's own immune system attacks the nerves outside the brain and the spinal cord. These nerves are known as peripheral nerves. The nerve inflammation that occurs can damage the nerve cells. The covering (sheath) of a fatty material called myelin that surrounds the cells can be lost. This loss is called demyelination.
Additionally, the elongated portion of the nerve cell called the axon can be killed. This phenomenon is called denervation. The axon conveys electrical impulses to more distant areas of muscles, and from one nerve cell to another. Demyelination and denervation bring about muscle weakness, loss of sensation, or paralysis because the affected nerves cannot transmit signals to muscles. This loss of signal transmission inhibits the muscles from being able to respond to nerve signals. As well, the brain receives fewer signals and the person can become unable to feel heat, cold, or pain.
GBS is also known as Landry-Guillain-Barré syndrome, acute idiopathic polyneuritis, infectious polyneuritis, and acute inflammatory demyelinating polyneuropathy (AIDP). Another malady called chronic inflammatory demyelinating polyradiculoneuropathy is possibly related to GBS. It is far less common than GBS (which itself is rare) and persists longer.
GBS can occur at any age. However, the syndrome shows a bimodal age distribution, tending to be more prevalent in people aged 15–35 years and 50–75 years. Males are slightly more susceptible than females (approximately 1.5 males affected for every female). No racial group is known to be more susceptible to GBS, nor is there any known geographical localization of the syndrome.
In the United States, the syndrome is rare. For example, the annual incidence of GBS in the United States ranges from 0.6 to 2.4 cases per 100,000 people. Nonetheless, GBS is the most common cause of neuromuscular paralysis among Americans.
Causes and symptoms
The exact cause of GBS is not known. However, bacterial or viral infections may be a trigger for its development. Almost 70% of those who develop GBS have had an infectious illness in the preceding two to four weeks. Examples of infections include sore throat, cold, flu, and diarrhea. Bacteria that have been associated with the subsequent development of GBS include chlamydia, Mycoplasma pneumoniae, and Campylobacter jejuni.
The suspected involvement of Campylobacter is noteworthy, as this bacterium is a common contaminant of poultry. Inadequate cooking can allow the microbe to survive and cause an infection in those who consume the food. Thus, there may be a connection between GBS and food quality. The form of GBS associated with Campylobacter infection may be particularly severe: for reasons that are unclear, the peripheral nerves themselves can be directly attacked, rather than just the myelin sheath around the nerves.
Usually, infections such as those caused by Campylobacter have abated before the onset of GBS. As well, chronic infection with the viruses responsible for mononucleosis, herpes, and acquired immunodeficiency syndrome can precede the appearance of GBS. The latter form is also known as HIV-1-associated acute inflammatory demyelinating polyneuropathy.
Other possible associated factors include vaccination (rabies, swine flu, influenza, Group A streptococci), surgery, pregnancy, and maladies such as Hodgkin's disease and systemic lupus erythematosus.
Whether there is a direct (causal) connection between infections and maladies and the subsequent development of GBS, or whether the events are only coincidental, is not known. For example, vaccination of Americans against the swine flu in 1976 increased the rate of GBS by less than one case per 100,000 people. Whether this increase was directly due to the vaccine is impossible to determine. Furthermore, more than 99% of people suffering from GBS who have been surveyed by the United States Centers for Disease Control and Prevention (CDC) have not recently been vaccinated. According to the CDC, the chance of developing GBS as a result of vaccination is remote.
It is conceivable that the infections or illnesses disrupt the body's immune system such that autoimmune destruction of nerve cell components occurs. Although this intriguing possibility is favored among many scientists, it remains unsubstantiated.
There is no evidence to indicate that GBS is an infection or that it is a genetically linked (heritable) disorder.
The initial sensation of weakness or paralysis in the toes spreads upward within days to a few weeks to the arms and the central part of the body. In medical terminology, this represents an ascending pattern of spread. The weakness and paralysis can also be accompanied by a tingling sensation, and a cramping or more constant pain in the feet, hands, thighs, shoulders, lower back, and buttocks. Use of the hands and feet can become impaired. More serious development of paralysis can make breathing difficult, even to the point that mechanical ventilation becomes necessary.
Other, less typical symptoms include blurred vision, clumsiness, difficulty in moving facial muscles, involuntary muscle contractions, and a pronounced heartbeat. Symptoms that are indicative of an emergency include difficulty in swallowing, drooling, breathing difficulty, and fainting.
Progression from the early symptoms to the more severe symptoms can occur very quickly (i.e., 24–72 hours). Typically, the exacerbated condition persists for several weeks. Recovery then typically occurs gradually, and can take anywhere from days to six months or more.
In very mild cases, an individual may just have a general feeling of weakness. As the symptoms abate after a few weeks, the person may dismiss the incident as a viral infection, without ever knowing the true nature of the illness.
GBS is suspected if a patient displays muscle weakness or paralysis that has been increasing in severity, especially if an illness has occurred recently. Loss of reflexes such as the knee jerk reaction can be an early clue to a clinician.
Clinical data can be useful in diagnosis. For example, a hormone that is involved in maintaining the proper chemical balance of urine can be affected in GBS. The result is called the syndrome of inappropriate antidiuretic hormone. Antibodies to nerve cells may be present as a result of the body's immune reaction against its own constituents.
Another clue to the diagnosis of GBS can be the finding of muscle weakness by neurological examination. One such test is known as nerve conduction velocity. In this test, the selected nerve is stimulated, usually with surface electrodes contained in a patch applied to the skin. The nerve is stimulated using a very mild electrical current delivered from one electrode, and the resulting electrical activity is recorded by the other electrodes in the patch. The nerve conduction velocity is calculated from the known distance between the electrodes and the measured time it takes for the impulses to travel from the generating electrode to the measuring electrodes. A person with GBS, whose nerves have typically lost some or most of their myelin sheath, will display a slower conduction velocity than an unaffected person, because electrical impulses travel more slowly along a damaged nerve than along an undamaged one.
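The calculation itself is simply distance divided by time. A sketch with hypothetical values (millimeters divided by milliseconds conveniently yields meters per second):

```python
def conduction_velocity_m_per_s(distance_mm, travel_time_ms):
    """Nerve conduction velocity from the electrode separation and the
    time the impulse takes to travel between the electrodes."""
    return distance_mm / travel_time_ms  # mm/ms is numerically m/s

print(conduction_velocity_m_per_s(70.0, 1.4))  # 50 m/s, typical of a healthy nerve
print(conduction_velocity_m_per_s(70.0, 2.8))  # 25 m/s, slowed by demyelination
```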
Muscle response to electrical stimulation can also be measured by electromyography (EMG). In this test, a needle electrode is inserted through the skin into the muscle. When the muscle is stimulated, for example by contracting it, the resulting visual or audio pattern carries information about the muscle's response. The characteristic pattern of waveforms produced by a healthy muscle (the action potential) can be compared to that of a muscle in someone suspected of having GBS.
When paralysis of the heart muscle is suspected, an electrocardiogram can be used to record the electrical activity of the heart. GBS muscle paralysis can alter the normal pattern of the heartbeat.
Finally, an examination of the cerebrospinal fluid by means of a spinal tap (also known as a lumbar puncture) may detect a higher-than-normal level of protein in the absence of an increase in the number of white blood cells (WBCs). An increase in WBCs is a hallmark of an infection.
Neurologists, immunologists, physical therapists, occupational therapists, and nurses figure prominently in GBS treatment. The assistance of support groups such as the Guillain-Barré Syndrome Foundation International can also be a useful adjunct to treatment.
As recently as the 1980s, treatment for GBS consisted of letting the syndrome run its course. While most people recovered completely with time, some people were not as lucky. Those who develop severe symptoms such as breathing difficulty are routinely hospitalized.
One medical procedure that can be useful in the treatment of GBS is called plasmapheresis, also known as plasma exchange. In plasmapheresis, antibody-laden blood plasma (the liquid portion of the blood) is removed from the body. Red blood cells are separated and put back into the body with antibody-free plasma or intravenous fluid. The treatment can lessen the symptoms of GBS and hasten recovery time. As of December 2003, it is not known why plasmapheresis works. It is suspected that the removal of antibodies may lessen the effects of the body's immune attack on the nerve cells.
Another procedure that produces similar results involves the administration of intravenous immune globulin (IVIG). Both treatments have been shown to speed up recovery time by up to 50%. IVIG has been shown to be an effective treatment for immune-system-related neuropathies in general. IVIG may act by reducing the amount of anti-myelin antibodies, through the binding of the defective antibodies by healthy antibodies contained in the IVIG solution, and by suppressing the immune response.
Other treatments are designed to prevent or lessen complications of GBS. For example, choking during eating, because of throat muscle weakness or paralysis, can be prevented using a feeding tube, and formation of blood clots can be lessened by the use of chemicals that thin the blood. The pain associated with GBS can be treated with anti-inflammatory drugs or, if necessary, stronger-acting narcotic medication. For patients who have breathing difficulties, clinicians may first need to supply oxygen, install a breathing tube (intubation), and/or use a mechanical device that helps in breathing.
Physical therapy is helpful. Caregivers can move a patient's arms and legs to help maintain the flexibility and strength of the muscles. Later in recovery, sessions in a whirlpool (hydrotherapy) can help restore function to arms and legs. Often, therapists will design a series of exercises to be performed when the patient returns home.
Recovery and rehabilitation
More than 95% of people afflicted with GBS survive. In about 20% of people, however, muscle weakness and fatigue may remain. Some people find wearing highly elastic gradient compression stockings beneficial. The stockings produce the greatest compression at the toes, tapering off toward the thigh. The effect is to reduce the volume of the veins, which increases the rate of blood flow through them. The increased blood flow can reduce the feeling of numbness in the toes.
As of early 2004, three clinical trials were recruiting patients, including:
- Assessment of chronic Guillain-Barré syndrome improvement with use of 4-aminopyridine. The study, funded by the United States Food and Drug Administration Office of Orphan Products Development, seeks to assess the potential of 4-aminopyridine in increasing the transmission of impulses in damaged nerves. It is hoped that increased nerve activity could restore some lost muscle activity, as has occurred using the drug in those afflicted with multiple sclerosis. The contact is the Spain Rehabilitation Center, University of Alabama at Birmingham, AL 35249-7330; Jay Meythaler, M.D., (205) 934-2088 (email: [email protected]).
- Safety, tolerability, and efficacy of rituximab in patients with anti-glycoconjugate antibody-mediated demyelinating neuropathy: a double-blind placebo-controlled randomized trial. While not directly related to GBS, the study concerns the loss of the myelin sheath of nerves and so is relevant. The study, sponsored by the National Institute of Neurological Disorders and Stroke (NINDS), is designed to evaluate the usefulness of rituximab in preventing the antibody damage to nerves. The contact is the National Institutes of Health Patient Recruitment and Public Liaison Office, Building 61, 10 Cloister Court, Bethesda, MD, 20892-4754; (800) 411-1222; [email protected]
- Diagnostic evaluation of patients with neuromuscular diseases. This NINDS-sponsored study is designed to screen patients for other studies and to help train clinicians in the diagnosis of maladies including GBS. The contact information is the same as the above item.
Most of those afflicted with GBS recover completely, although the recovery can in some cases be slow (months to years). Complete recovery usually occurs when the symptoms fade within three weeks of appearing. The typical scenario is for a patient to experience the most weakness from 10–14 days after the appearance of symptoms, with complete recovery occurring within weeks or a few months. In contrast, a poor prognosis can be associated with a rapid appearance of symptoms, use of assisted ventilation for a month or more, severe nerve damage, and with advancing age.
While recovery is complete for most of those afflicted with GBS, in 10–20% of cases the symptoms reappear, in 15–20% the neurologic complications can persist and can cause a long-term disability, and 5–10% of those who are afflicted die. The main cause of death historically was from respiratory failure due to muscle paralysis. With mechanical ventilation, respiratory failure in GBS is less often fatal. Currently the main cause of death is malfunctioning of the autonomic nervous system, which controls involuntary processes such as heart rate, blood pressure, and body temperature.
Quarles, R. H., and M. D. Weiss. "Autoantibodies Associated with Peripheral Neuropathy." Muscle Nerve (July 1999): 800–822.
Guillain-Barré Syndrome (GBS) and Influenzae Vaccine. Centers for Disease Control and Prevention. CDC. December 15, 2003 (April 4, 2004). <http://www.cdc.gov/nip/vacsafe/concerns/GBS/default.htm>.
Fanion, David, and Daniel M. Joyce. "Guillain-Barré Syndrome." eMedicine. December 12, 2003 (April 4, 2004). <http://www.emedicine.com/EMERG/topic222.htm>.
Mayo Foundation for Medical Education and Research. "Guillain-Barré Syndrome." MayoClinic.com. December 13, 2003 (April 4, 2004). <http://www.mayoclinic.com/invoke.cfm?id=DS00413>.
National Institutes of Health. "Guillain-Barré Syndrome." MEDLINEplus Medical Encyclopedia. December 13, 2003 (April 4, 2004). <http://www.nlm.nih.gov/medlineplus/ency/article/000684.htm>.
NINDS Guillain-Barré Syndrome Information Page. National Institute of Neurological Disorders and Stroke. December 10, 2003 (April 4, 2004). <http://www.ninds.nih.gov/health_and_medical/disorders/gbs.htm>.
Guillain-Barré Syndrome Foundation International. P.O. Box 262, Wynnewood, PA 19096. (610) 667-0131; Fax: (610) 667-7036. [email protected] <http://www.gbsfi.com>.
National Institute for Neurological Disorders and Stroke. P.O. Box 5801, Bethesda, MD 20824. (301) 496-5761 or (800) 352-9424. <http://www.ninds.nih.gov>.
Brian Douglas Hoyle, PhD
Guillain-Barré syndrome (GBS) causes progressive muscle weakness and paralysis (the complete inability to use a particular muscle or muscle group), which develops over days or up to four weeks, and lasts several weeks or even months.
The classic scenario in GBS involves a patient who has just recovered from a typical, seemingly uncomplicated viral infection. Symptoms of muscle weakness appear one to four weeks later. The most common preceding infections are cytomegalovirus, herpes, Epstein-Barr virus, and viral hepatitis. A gastrointestinal infection with the bacteria Campylobacter jejuni is also common and may cause a severe type of GBS from which it is particularly difficult to recover. About 5% of GBS patients have a surgical procedure as a preceding event. Patients with lymphoma, systemic lupus erythematosus, or AIDS have a higher than normal risk of GBS. Other GBS patients have recently received an immunization, while still others have no known preceding event. In 1976–77, there was a vastly increased number of GBS cases among people who had been recently vaccinated against the Swine flu. The reason for this phenomenon has never been identified, and no other flu vaccine has caused such an increase in GBS cases.
Causes and symptoms
The cause of the weakness and paralysis of GBS is the loss of myelin, which is the material that coats nerve cells (the loss of myelin is called demyelination). Myelin is an insulating substance which is wrapped around nerves in the body, serving to speed conduction of nerve impulses. Without myelin, nerve conduction slows or stops. GBS has a short, severe course. It causes inflammation and destruction of the myelin sheath, and it disturbs multiple nerves. Therefore, it is considered an acute inflammatory demyelinating polyneuropathy.
The reason for the destruction of myelin in GBS is unknown, although it is thought that the underlying problem is autoimmune in nature. An autoimmune disorder is one in which the body's immune system, trained to fight against such foreign invaders as viruses and bacteria, somehow becomes improperly programmed. The immune system becomes confused, and is not able to distinguish between foreign invaders and the body itself. Elements of the immune system are unleashed against areas of the body, resulting in damage and destruction. For some reason, in the case of GBS, the myelin sheath appears to become a target for the body's own immune system.
The first symptoms of GBS consist of muscle weakness (legs first, then arms, then face), accompanied by prickly, tingling sensations (paresthesias). Symptoms affect both sides of the body simultaneously, a characteristic that helps distinguish GBS from other causes of weakness and paresthesias. Normal reflexes are first diminished, then lost. The weakness eventually affects all the voluntary muscles, resulting in paralysis. When those muscles necessary for breathing become paralyzed, the patient must be placed on a mechanical ventilator which takes over the function of breathing. This occurs about 30% of the time. Very severely ill GBS patients may have complications stemming from other nervous system abnormalities which can result in problems with fluid balance in the body, severely fluctuating blood pressure, and heart rhythm irregularities.
Diagnosis of GBS is made by looking for a particular cluster of symptoms (progressively worse muscle weakness and then paralysis), and by examining the fluid that bathes the brain and spinal canal through cerebrospinal fluid (CSF) analysis. This fluid is obtained by inserting a needle into the lower back (lumbar region). When examined in a laboratory, the CSF of a GBS patient will reveal a greater-than-normal quantity of protein, with normal numbers of white blood cells and a normal amount of sugar. Electrodiagnostic studies may show slowing or block of conduction in nerve endings in parts of the body other than the brain. Minor abnormalities will be present in 90% of patients.
There is no direct treatment for GBS. Instead, treatments are used that support the patient with the disabilities caused by the disease. The progress of paralysis must be carefully monitored, in order to provide mechanical assistance for breathing if it becomes necessary. Careful attention must also be paid to the amount of fluid the patient is taking in by drinking and eliminating by urinating. Blood pressure, heart rate, and heart rhythm also must be monitored.
A procedure called plasmapheresis, performed early in the course of GBS, has been shown to shorten the course and severity of GBS. Plasmapheresis consists of withdrawing the patient's blood, passing it through an instrument that separates the different types of blood cells, and returning all the cellular components (red and white blood cells and platelets) along with either donor plasma or a manufactured replacement solution. This is thought to rid the blood of the substances that are attacking the patient's myelin.
It has also been shown that the use of high doses of immunoglobulin given intravenously (by drip through a needle in a vein) may be just as helpful as plasmapheresis. Immunoglobulin is a substance naturally manufactured by the body's immune system in response to various threats. It is interesting to note that corticosteroid medications (such as prednisone), often the mainstay of anti-autoimmune disease treatment, are not only unhelpful, but may in fact be harmful to patients with GBS.
About 85% of GBS patients make reasonably good recoveries. However, 30% of adult patients, and a greater percentage of children, never fully regain their previous level of muscle strength. Some of these patients are left with residual weakness, others with permanent paralysis. About 10% of GBS patients begin to improve and then suffer a relapse; these patients experience chronic GBS symptoms. About 5% of all GBS patients die, most from cardiac rhythm disturbances.
Patients with certain characteristics tend to have a worse outcome. These include people of older age, those who required breathing support with a mechanical ventilator, and those who had their worst symptoms within the first seven days.
Because so little is known about what causes GBS to develop, there are no known methods of prevention.
Guillain-Barré Syndrome Foundation International. PO Box 262, Wynnewood, PA 19096. (610) 667-0131. 〈http://www.webmast.com/gbs〉.
Autoimmune— The body's immune system directed against the body itself.
Demyelination— Disruption or destruction of the myelin sheath, leaving a bare nerve. Results in a slowing or stopping of impulses traveling along that nerve.
Inflammatory— Having to do with inflammation, the body's response to either invading foreign substances (such as viruses or bacteria) or to direct injury of body tissue.
Myelin— The substance that is wrapped around nerves, and which is responsible for speed and efficiency of impulses traveling through those nerves.
Pacemakers and defibrillators are small devices implanted into the body to help regulate the heart through electrical signals. Cardiothoracic surgeons implant these devices to keep the heart beating in a normal rhythm. Wires, called leads, attached to the devices deliver the energy from these devices to the heart. When one or more of these leads needs to be removed, the procedure is called a lead extraction.
Specialists at University of Utah Health Care have more than twenty years of experience with these procedures. They participate continually in research to bring the latest cardiovascular treatments to their patients.
What is a pacemaker insertion?
A pacemaker insertion is the implantation of a small electronic device that is usually placed in the chest (just below the collarbone) to help correct slow heart rhythms caused by problems with the heart's electrical system. A pacemaker may be recommended to ensure that the heartbeat does not slow to a dangerously low rate.
The heart's electrical system
The heart is basically a pump made up of muscle tissue that is stimulated by electrical currents, which normally follow a specific circuit within the heart.
This normal electrical circuit begins in the sinus or sinoatrial (SA) node, which is a small mass of specialized tissue located in the right atrium (upper chamber) of the heart. The SA node generates an electrical stimulus at 60 to 100 times per minute (for adults) under normal conditions; this electrical impulse from the SA node starts the heartbeat.
The electrical impulse travels from the SA node via the atria to the atrioventricular (AV) node in the bottom of the right atrium. From there the impulse continues down an electrical conduction pathway called the Bundle of His and then on through the "His-Purkinje" system into the ventricles (lower chambers) of the heart. When the electrical stimulus occurs it causes the muscle to contract and pump blood to the rest of the body. This process of electrical stimulation followed by muscle contraction is what makes the heart beat.
A pacemaker may be needed when problems occur with the electrical conduction system of the heart. When the timing of the electrical signals to the heart muscle, and the subsequent response of the heart's pumping chambers, is disrupted, a pacemaker may help.
What is a pacemaker?
A pacemaker is composed of three parts: a pulse generator, one or more leads, and an electrode on each lead. A pacemaker signals the heart to beat when the heartbeat is too slow or irregular.
A pulse generator is a small metal case that contains electronic circuitry with a small computer and a battery that regulate the impulses sent to the heart.
The lead (or leads) is an insulated wire that is connected to the pulse generator on one end, with the other end placed inside one of the heart's chambers. The lead is almost always placed so that it runs through a large vein in the chest leading directly to the heart. The electrode on the end of a lead touches the heart wall. The lead delivers the electrical impulses to the heart. It also senses the heart's electrical activity and relays this information back to the pulse generator. Pacemaker leads may be positioned in the atrium (upper chamber) or ventricle (lower chamber) or both, depending on the medical condition.
If the heart's rate is slower than the programmed limit, an electrical impulse is sent through the lead to the electrode and causes the heart to beat at a faster rate.
When the heart beats at a rate faster than the programmed limit, the pacemaker generally monitors the heart rate and will not pace. Modern pacemakers are programmed to work on demand only, so they do not compete with natural heartbeats. Generally, no electrical impulses will be sent to the heart unless the heart's natural rate falls below the pacemaker's lower limit.
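As a rough illustration only, this demand logic can be sketched in a few lines of Python. Everything here is hypothetical and simplified for the example (the programmed lower limit, the function name, and the millisecond bookkeeping are invented, not taken from any real device's firmware):

    # Sketch of demand ("inhibited") pacing: pace only when the natural
    # rate falls below the programmed lower limit. Illustrative only.
    LOWER_RATE_LIMIT_BPM = 60  # hypothetical programmed lower limit

    def should_pace(ms_since_last_beat: float) -> bool:
        # A 60 bpm lower limit allows at most 60000 / 60 = 1000 ms
        # between beats before the pacemaker steps in.
        max_interval_ms = 60_000 / LOWER_RATE_LIMIT_BPM
        return ms_since_last_beat > max_interval_ms

    print(should_pace(1200))  # True: 50 bpm is too slow, deliver an impulse
    print(should_pace(800))   # False: 75 bpm, inhibit and let the heart beat

On this simplified model, the device never competes with natural beats: it acts only when the sensed interval between beats grows too long.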
A newer type of pacemaker, called a biventricular pacemaker, is currently used in the treatment of specific types of heart failure. Sometimes in heart failure, the two ventricles do not pump in a normal manner. Ventricular dyssynchrony is a common term used to describe this abnormal pumping pattern. When this happens, less blood is pumped by the heart. A biventricular pacemaker paces both ventricles at the same time, increasing the amount of blood pumped by the heart. This type of treatment is called cardiac resynchronization therapy or CRT.
After a pacemaker insertion, regularly scheduled appointments will be made to ensure the pacemaker is functioning properly. The doctor uses a special computer, called a programmer, to review the pacemaker's activity and adjust the settings when needed.
Other related procedures that may be used to assess the heart include resting and exercise electrocardiogram (ECG), Holter monitor, signal-averaged ECG, cardiac catheterization, chest X-ray, computed tomography (CT scan) of the chest, echocardiography, electrophysiology studies, magnetic resonance imaging (MRI) of the heart, myocardial perfusion scans, radionuclide angiography, and cardiac CT scan. Please see these procedures for additional information. Note that although an MRI is a very safe procedure, a person with a pacemaker generally should not undergo MRI, as the magnetic fields used by the MRI scanner may interfere with the pacemaker's function. Any patient with a pacemaker should always speak with his or her cardiologist before undergoing an MRI.
Reasons for the procedure
A pacemaker may be inserted in order to stimulate a faster heart rate when the heart is beating too slowly and causing problems that cannot otherwise be corrected.
Problems with the heart rhythm may cause difficulties because the heart is unable to pump an adequate amount of blood to the body. If the heart rate is too slow, the blood is pumped too slowly. If the heart rate is too fast or too irregular, the heart chambers are unable to fill up with enough blood to pump out with each beat. When the body does not receive enough blood, symptoms such as fatigue, dizziness, fainting, and/or chest pain may occur.
Some examples of heart rate and rhythm problems for which a pacemaker might be inserted include:
Bradycardia. This occurs when the sinus node causes the heart to beat too slowly.
Tachy-brady syndrome. This is characterized by alternating fast and slow heartbeats.
Heart block. This occurs when the electrical signal is delayed or blocked after leaving the SA node; there are several types of heart blocks.
There may be other reasons for your doctor to recommend a pacemaker insertion.
Risks of the procedure
Possible risks of pacemaker insertion include, but are not limited to, the following:
Bleeding from the incision or catheter insertion site
Damage to the vessel at the catheter insertion site
Infection of the incision or catheter site
Pneumothorax. If the nearby lung is inadvertently punctured during the procedure, leaking air becomes trapped in the pleural space (outside the lung but within the chest wall); this can cause breathing difficulties and in extreme cases may cause the lung to collapse.
If you are pregnant, suspect that you may be pregnant, or are breastfeeding, you should notify your health care provider.
Patients who are allergic to or sensitive to medications or latex should notify their doctor.
For some patients, having to lie still on the procedure table for the length of the procedure may cause some discomfort or pain.
There may be other risks depending on your specific medical condition. Be sure to discuss any concerns with your doctor prior to the procedure.
Before the procedure
Your doctor will explain the procedure to you and offer you the opportunity to ask any questions that you might have about the procedure:
You will be asked to sign a consent form that gives your permission to perform the procedure. Read the form carefully and ask questions if something is not clear.
Notify your doctor if you are sensitive to or are allergic to any medications, iodine, latex, tape, or anesthetic agents (local and general).
You will need to fast for a certain period of time prior to the procedure. Your doctor will notify you how long to fast, usually overnight.
If you are pregnant or suspect that you are pregnant, you should notify your doctor.
Notify your doctor of all medications (prescription and over-the-counter) and herbal or other supplements that you are taking.
Notify your doctor if you have heart valve disease, as you may need to receive an antibiotic prior to the procedure.
Notify your doctor if you have a history of bleeding disorders or if you are taking any anticoagulant (blood-thinning) medications, aspirin, or other medications that affect blood clotting. It may be necessary for you to stop some of these medications prior to the procedure.
Your doctor may request a blood test prior to the procedure to determine how long it takes your blood to clot. Other blood tests may be done as well.
You may receive a sedative prior to the procedure to help you relax. If a sedative is given and there is a possibility that you may be discharged, you will need someone to drive you home. You will likely spend at least one night in the hospital after the procedure for observation and to ensure the pacemaker functions properly.
Based on your medical condition, your doctor may request other specific preparation.
During the procedure
A pacemaker insertion may be performed on an outpatient basis or as part of your stay in a hospital. Procedures may vary depending on your condition and your doctor's practices.
Generally, a pacemaker insertion follows this process:
You will be asked to remove any jewelry or other objects that may interfere with the procedure.
You will be asked to remove your clothing and will be given a gown to wear.
You will be asked to empty your bladder prior to the procedure.
If there is excessive hair at the incision site, it may be clipped off.
An intravenous (IV) line will be started in your hand or arm prior to the procedure for injection of medication and to administer IV fluids, if needed.
You will be placed on your back on the procedure table.
You will be connected to an electrocardiogram (ECG or EKG) monitor that records the electrical activity of the heart and monitors the heart during the procedure using small, adhesive electrodes. Your vital signs (heart rate, blood pressure, breathing rate, and oxygenation level) will be monitored during the procedure.
Large electrode pads will be placed on the front and back of the chest.
You will receive a sedative medication in your IV before the procedure to help you relax. However, you will likely remain awake during the procedure.
The pacemaker insertion site will be cleansed with antiseptic soap.
Sterile towels and a sheet will be placed around this area.
A local anesthetic will be injected into the skin at the insertion site.
Once the anesthetic has taken effect, the physician will make a small incision at the insertion site.
A sheath, or introducer, is inserted into a blood vessel, usually under the collarbone. The sheath is a plastic tube through which the pacer lead wire will be inserted into the blood vessel and advanced into the heart.
It will be very important for you to remain still during the procedure so that the catheter does not move out of place and to prevent damage to the insertion site.
The lead wire will be inserted through the introducer into the blood vessel. The doctor will advance the lead wire through the blood vessel into the heart.
Once the lead wire is inside the heart, it will be tested to verify proper location and function. There may be one, two, or three lead wires inserted, depending on the type of device your doctor has chosen for your condition. Fluoroscopy (a special type of X-ray displayed on a TV monitor) may be used to assist in testing the location of the leads.
The pacemaker generator will be slipped under the skin through the incision (just below the collarbone) after the lead wire is attached to the generator. Generally, the generator will be placed on the nondominant side. (If you are right-handed, the device will be placed in your upper left chest. If you are left-handed, the device will be placed in your upper right chest).
The ECG will be observed to ensure that the pacer is working correctly.
The skin incision will be closed with sutures, adhesive strips, or a special glue.
A sterile bandage or dressing will be applied.
After the procedure
In the hospital
After the procedure, you may be taken to the recovery room for observation or returned to your hospital room. A nurse will monitor your vital signs.
You should immediately inform your nurse if you feel any chest pain or tightness, or any other pain at the incision site.
After the period of bed rest has been completed, you may get out of bed with assistance. The nurse will assist you the first time you get up, and will check your blood pressure while you are lying in bed, sitting, and standing. You should move slowly when getting up from the bed to avoid any dizziness from the period of bed rest.
You will be able to eat or drink once you are completely awake.
The insertion site may be sore or painful. Pain medication may be administered if needed.
Your doctor will visit with you in your room while you are recovering. The doctor will give you specific instructions and answer any questions you may have.
Once your blood pressure, pulse, and breathing are stable and you are alert, you will be taken to your hospital room or discharged home.
If the procedure is performed on an outpatient basis, you may be allowed to leave after you have completed the recovery process. However, it is common to spend at least one night in the hospital after pacemaker implantation for observation.
You should arrange to have someone drive you home from the hospital following your procedure.
You should be able to return to your daily routine within a few days. Your doctor will tell you if you will need to take more time in returning to your normal activities. You should avoid any lifting or pulling for a few weeks. You may be instructed to limit movement of the arm on the side where the pacemaker was placed, based on your doctor's preferences.
You will most likely be able to resume your usual diet, unless your doctor instructs you differently.
It will be important to keep the insertion site clean and dry. You will be given instructions about bathing and showering.
Your doctor will give you specific instructions about driving.
Ask your doctor when you will be able to return to work. The nature of your occupation, your overall health status, and your progress will determine how soon you may return to work.
Notify your doctor to report any of the following:
Fever and/or chills
Increased pain, redness, swelling, or bleeding or other drainage from the insertion site
Chest pain/pressure, nausea and/or vomiting, profuse sweating, dizziness and/or fainting
Your doctor may give you additional or alternate instructions after the procedure, depending on your particular situation.
The following precautions should always be considered. Discuss the following in detail with your doctor, or call the company that made your device:
Always carry an ID card that states you have a pacemaker. In addition, you may want to wear a medical identification bracelet indicating that you have a pacemaker.
Let screeners know you have a pacemaker before going through airport security detectors. In general, airport detectors are safe for pacemakers, but the small amount of metal in the pacemaker and leads may set off the alarm. If you are selected for additional screening by hand-held detector devices, politely remind the screeners that the detector wand should not be held over your pacemaker for longer than a few seconds, as these devices contain magnets and may affect the function or programming of your pacemaker.
You may not have a magnetic resonance imaging (MRI) procedure (unless you have a specially designed pacemaker). You should also avoid large magnetic fields such as power generation sites and industrial sites such as automobile junkyards that use large magnets.
Abstain from diathermy (the use of heat in physical therapy to treat muscles).
Turn off large motors, such as cars or boats, when working close to them as they may create a magnetic field.
Avoid high-voltage or radar machinery, such as radio or television transmitters, electric arc welders, high-tension wires, radar installations, or smelting furnaces.
If you are having a surgical procedure performed, inform your surgeon that you have a pacemaker well before the operation. Also ask your cardiologist's advice on whether anything special should be done prior to and during the surgery, as the electrocautery device that controls bleeding may interfere with the pacemaker. Sometimes the pacemaker's programming will be temporarily changed (using a magnet) during the surgery to minimize the possibility of interference from the electrocautery.
When involved in a physical, recreational, or sporting activity, protect yourself from trauma to the pacemaker. A blow to the chest near the pacemaker can affect its functioning. If you are hit in that area, you may want to see your doctor.
Cell phones in the U.S. with less than 3 watts of output do not seem to affect pacemakers or the pulse generator, but as a precaution, cell phones should be kept at least 6 inches away from your pacemaker. Avoid carrying a cell phone in your breast pocket over your pacemaker.
Always consult your doctor when you feel ill after an activity, or when you have questions about beginning a new activity.
Always consult your doctor if you have any questions concerning the use of certain equipment near your pacemaker.
The content provided here is for informational purposes only, and was not designed to diagnose or treat a health problem or disease, or replace the professional medical advice you receive from your doctor. Please consult your health care provider with any questions or concerns you may have regarding your condition.
Overview of Pacemakers and Implantable Cardioverter Defibrillators (ICDs)
What is a permanent pacemaker?
A permanent pacemaker, a small device that is implanted under the skin (most often in the shoulder area just under the collarbone), sends electrical signals to start or regulate a slow heartbeat. A permanent pacemaker may be used to make the heart beat if the heart's natural pacemaker (the SA node) is not functioning properly and has developed an abnormally slow heart rate or rhythm, or if the electrical pathways are blocked.
A newer type of pacemaker, called a biventricular pacemaker, is currently used in the treatment of ventricular dyssynchrony (irregular conduction pattern in the lower heart chambers) or heart failure. Sometimes in heart failure, the two ventricles do not pump together in a normal manner. When this happens, less blood is pumped by the heart. A biventricular pacemaker paces both ventricles at the same time, increasing the amount of blood pumped by the heart. This type of treatment is called cardiac resynchronization therapy.
What is an implantable cardioverter defibrillator (ICD)?
An implantable cardioverter defibrillator (ICD) looks very similar to a pacemaker, except that it is slightly larger. It has a generator, one or more leads, and an electrode for each lead. These components work very much like a pacemaker. However, the ICD is designed to deliver two levels of electrical energy: a low energy shock that can convert a beating heart that is in an abnormal rhythm back to a normal heartbeat, and a high energy shock that is delivered only if the arrhythmia is so severe that the heart is only quivering instead of beating.
An ICD senses when the heart is beating too fast and delivers an electrical shock to convert the fast rhythm to a normal rhythm. Many devices combine a pacemaker and ICD in one unit for people who need both functions. After the shock is delivered, a "back-up" pacing mode is available if needed for a short while.
The ICD has another type of treatment for certain fast rhythms called anti-tachycardia pacing, a fast-pacing impulse sent to correct the rhythm.
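As a rough illustration of this tiered response, the sketch below chooses among the therapies described above based on the sensed rate. The thresholds and names are invented for the example and do not correspond to any real device's programming:

    # Hypothetical sketch of an ICD's tiered therapy selection.
    def select_therapy(ventricular_rate_bpm: int) -> str:
        if ventricular_rate_bpm > 300:    # quivering rather than beating
            return "high-energy defibrillation shock"
        elif ventricular_rate_bpm > 200:  # organized but dangerously fast
            return "low-energy cardioversion shock"
        elif ventricular_rate_bpm > 150:  # may respond to fast pacing
            return "anti-tachycardia pacing burst"
        return "no therapy; continue monitoring"

    for rate in (70, 170, 240, 320):
        print(rate, "->", select_therapy(rate))

After any shock, as noted above, a back-up pacing mode can take over briefly if the heart's own rhythm is slow to return.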
What is the reason for getting a pacemaker or an ICD?
Pacemakers are most commonly advised in patients whose heartbeat slows to an unhealthy low rate. ICDs are advised in specific patients who are at risk for potentially fatal ventricular arrhythmias (an abnormal rhythm from the lower heart chambers, which can cause the heart to pump less effectively). There may be other reasons why your doctor advises placement of a pacemaker or ICD.
When the heart's natural pacemaker or electrical circuit malfunctions, the signals sent out may become erratic: either too slow, too fast, or too irregular to stimulate adequate contractions of the heart chambers. When the heartbeat becomes erratic, it is referred to as an arrhythmia.
Arrhythmias can cause problems with contractions of the heart chambers by:
Not allowing the chambers to fill with an adequate amount of blood because the electrical signal is causing the heart to pump too fast.
Not allowing a sufficient amount of blood to be pumped out to the body because the electrical signal is causing the heart to pump too slowly or too irregularly.
The heart's electrical system
The heart is, in the simplest terms, a pump made up of muscle tissue. The heart's pumping action is regulated by an electrical conduction system that coordinates the contraction of the various chambers of the heart.
How does the heart beat?
An electrical stimulus is generated by the sinus node (also called the sinoatrial node, or SA node), which is a small mass of specialized tissue located in the right atrium (right upper chamber of the heart). The sinus node generates an electrical stimulus regularly, 60 to 100 times per minute in adults under normal conditions. This electrical stimulus travels down through the conduction pathways (similar to the way electricity flows through power lines from the power plant to your house) and causes the heart's lower chambers to contract and pump out blood. The right and left atria (the two upper chambers of the heart) are stimulated first and contract a short period of time before the right and left ventricles (the two lower chambers of the heart).
The electrical impulse travels from the sinus node through the atria to the atrioventricular node (also called AV node), where impulses are slowed down for a very short period, then continue down the conduction pathway via the bundle of His into the ventricles. The bundle of His divides into right and left pathways to provide electrical stimulation to the right and left ventricles.
Normally at rest, as the electrical impulse moves through the heart, the heart contracts about 60 to 100 times a minute, depending on a person's age (infants normally have very high heart rates). Each contraction of the ventricles represents one heartbeat. The atria contract a fraction of a second before the ventricles so their blood empties into the ventricles before the ventricles contract.
Under some abnormal conditions, certain heart tissue is capable of starting a heartbeat, or becoming the pacemaker. An arrhythmia (abnormal heartbeat) occurs when:
The heart's natural pacemaker develops an abnormal rate or rhythm
The normal conduction pathway is interrupted
Another part of the heart takes over as pacemaker
In any of these situations, the body may not receive enough blood because the heart cannot pump out an adequate amount with each beat as a result of the arrhythmia's effects on the heart rate. The effects on the body are often the same, however, whether the heartbeat is too fast, too slow, or too irregular. Some symptoms of arrhythmias include, but are not limited to:
Low blood pressure
The symptoms of arrhythmias may resemble other medical conditions. Consult your doctor for a diagnosis.
What are the components of a permanent pacemaker/ICD?
A permanent pacemaker or ICD has three main components:
A pulse generator which has a sealed lithium battery and an electronic circuitry package. The pulse generator produces the electrical signals that make the heart beat. Most pulse generators also have the capability to receive and respond to signals that are sent by the heart itself.
One or more wires (also called leads). Leads are insulated flexible wires that conduct electrical signals to the heart from the pulse generator. The leads also relay signals from the heart to the pulse generator. One end of the lead is attached to the pulse generator and the electrode end of the lead is positioned in the atrium (the upper chamber of the heart) or in the right ventricle (the lower chamber of the heart). In the case of a biventricular pacemaker, leads are placed in both ventricles.
Electrodes, which are found on each lead.
Pacemakers can "sense" when the heart's natural rate falls below the rate that has been programmed into the pacemaker's circuitry.
Pacemaker leads may be positioned in the right atrium, right ventricle, or positioned to pace both ventricles, depending on the condition requiring the pacemaker to be inserted. An atrial arrhythmia (an arrhythmia caused by a dysfunction of the sinus node or the development of another atrial pacemaker within the heart tissue that takes over the function of the sinus node) may be treated with an atrial permanent pacemaker whose lead wire is located in the atrium.
When the ventricles are not stimulated normally by the sinus node or another natural atrial pacemaker site, a ventricular pacemaker, whose lead wire is located in the ventricle, may be used. It is possible to have both atrial and ventricular arrhythmias, and there are pacemakers that have lead wires positioned in both the atrium and the ventricle.
An ICD has a lead wire that is positioned in the ventricle, as it is used for treating fast ventricular arrhythmias. Commonly, ICDs will have an atrial lead and ventricular lead.
Pacemakers that pace either the right atrium or the right ventricle are called "single-chamber" pacemakers. Pacemakers that pace both the right atrium and right ventricle of the heart and require two pacing leads are called "dual-chamber" pacemakers. Pacemakers that pace the right atrium and right and left ventricles are called "biventricular" pacemakers.
How is a pacemaker/ICD implanted?
Pacemaker/ICD insertion is done in the cardiac catheterization laboratory, or the electrophysiology laboratory. The patient is awake during the procedure, although local anesthesia is given over the incision site, and generally sedation is given to help the patient relax during the procedure. A night or two of hospitalization may be recommended so that the functioning of the implanted device may be observed.
[Figure: a chest X-ray. The large, white space in the middle is the heart; the dark spaces on either side are the lungs; the small object in the upper corner is an implanted pacemaker.]
A small incision is made just under the collarbone. The pacemaker/ICD lead(s) is inserted into the heart through a blood vessel which runs under the collarbone. Once the lead is in place, it is tested to make sure it is in the right place and is functional. The lead is then attached to the generator, which is placed just under the skin through the incision made earlier. Once the procedure has been completed, the patient goes through a recovery period of several hours.
There are certain instructions related to having an implanted permanent pacemaker or ICD. For example, after you receive your pacemaker or ICD, you will receive an identification card from the manufacturer that includes information about your specific model of pacemaker and the serial number as well as how the device works. You should carry this card with you at all times so that the information is always available to any health care professional who may have reason to examine and/or treat you.
Removing lead wires from pacemakers and defibrillators is a delicate process. They do not pull out easily, since they become tightly attached to the heart and to the veins through which they travel on the way to the heart. The longer leads have been implanted, the more tightly attached they become. Special tools and techniques are used to extract leads safely and effectively.
The most common reasons for recommending lead extraction are infection of the pacemaker or defibrillator, malfunctioning leads, and multiple abandoned leads. The need for lead extraction is not always clear-cut, and University of Utah physicians can advise patients on whether extraction or an alternative approach is appropriate.
Roger A. Freedman, M.D. has been a faculty member at the University of Utah for over 20 years. He specializes in the treatment of cardiac arrhythmias, particularly in the implantation and management of implanted pacemakers and defibrillators. Pacemaker and defibrillator lead extraction is a highly specialized technique in which Dr. Freedman is gre...
Idaho Heart Institute: (801) 585-1935
Memorial Hospital of Sweetwater County: (801) 585-7676
Star Valley Medical Center: (801) 585-7676
William B. Ririe Hospital & Rural Health Clinic: (801) 585-7676
Cardiothoracic Surgery, Nurse Practitioner
Eccles Primary Children’s Outpatient Services Building: (801) 662-1000
Scott came to the University over 6 years ago from private practice. As a trainer and consultant for endoscopic vein harvesting, he has effectively introduced and incorporated new technology and procedures into the University healthcare system. As the senior physician assistant in the division of cardiothoracic surgery, Scott has over twelve year...
History of the Yosemite area
Human habitation in the Sierra Nevada region of California reaches back 8,000 to 10,000 years. Historically attested Native American populations, such as the Sierra Miwok, Mono, and Paiute, belong to the Uto-Aztecan and Utian language phyla.
In the mid-19th century, a band of Native Americans called the Ahwahnechee lived in Yosemite Valley. The California Gold Rush greatly increased the number of non-indigenous people in the region. Tensions between Native Americans and white settlers escalated into the Mariposa War. As part of this conflict, settler James Savage led the Mariposa Battalion into Yosemite Valley in 1851, in pursuit of Ahwaneechees led by Chief Tenaya. Accounts from the battalion, especially from Dr. Lafayette Bunnell, popularized Yosemite Valley as a scenic wonder.
In 1864, Yosemite Valley and the Mariposa Grove of giant sequoia trees were transferred from federal to state ownership. Yosemite pioneer Galen Clark became the park's first guardian. Conditions in Yosemite Valley were made more hospitable to people and access to the park was improved in the late 19th century. Naturalist John Muir and others became increasingly alarmed about the excessive exploitation of the area. Their efforts helped establish Yosemite National Park in 1890. Yosemite Valley and the Mariposa Grove were added to the national park in 1906.
The United States Army had jurisdiction over the national park from 1891 to 1914, followed by a brief period of civilian stewardship. The newly formed National Park Service took over the park's administration in 1916. Improvements to the park helped to increase visitation during this time. Preservationists led by Muir and the Sierra Club failed to save Hetch Hetchy Valley from becoming a reservoir when the Raker Act was approved on December 19, 1913. The loss was considered a major conservation defeat; John Muir died a year later, on December 24, 1914. Construction of the O'Shaughnessy Dam began in 1919 and was completed in 1923. The loss of Hetch Hetchy contributed to the formation of the National Park Service through the approval of the Organic Act of 1916. In 1964, 89 percent of the park was set aside in a highly protected wilderness area, and other protected areas were added adjacent to the park. The once-famous Yosemite Firefall, created by pushing red-hot embers off a cliff near Glacier Point at night, was discontinued in the mid-to-late 20th century along with other activities that were deemed to be inconsistent with protection of the national park.
Humans may have visited the Yosemite area as long as 8,000 to 10,000 years ago. Habitation of the Yosemite Valley proper can be traced to about 3,000 years ago, when vegetation and game in the region were similar to those present today; the western slopes of the Sierra Nevada had acorns, deer, and salmon, while the eastern Sierra had pinyon nuts and obsidian. Native American groups traveled between these two regions to trade and raid.
Archaeologists divide the pre-European American contact period of the region into three cultural phases. The Crane Flat phase lasted from 1000 BCE to 500 CE and is marked by hunting with the atlatl and the use of grinding stones. The Tamarack phase lasted from 500 until 1200, marked by a shift to smaller rock points, indicating development and use of the bow and arrow. The Mariposa phase lasted from 1200 until contact with European Americans.
Trade between tribes became more widespread during the Mariposa phase, and the diet continued to improve. Paiutes, Miwok, and Monos visited the area to trade; one major trading route went over Mono Pass and through Bloody Canyon to Mono Lake in Eastern California.
Paiutes were the primary inhabitants of the Yosemite area and the foothills to the east during the Mariposa and historic phases. The Central Sierra Miwoks lived along the drainage area of the Tuolumne and Stanislaus Rivers, while the Paiutes inhabited the upper drainage of the Merced and Chowchilla Rivers.
The indigenous people called themselves the Ah-wah-ne-chee, meaning "dwellers in Ahwahnee." The Ahwahneechees were decimated by a disease in about 1800 and left the valley, although about 200 returned under the leadership of Tenaya, son of an Ahwahneechee chief.
Displaced Native Americans from the Californian coast moved to the Sierra Nevada during the early-to-mid-19th century, bringing with them their knowledge of Spanish food, technology, and clothing. Joining forces with the other tribes in the area, they raided land grant ranchos on the coast and drove herds of horses to the Sierra, where horse meat became a major new food source.
Exploration by European Americans
Although there were Spanish missions, pueblos (towns), presidios (forts), and ranchos along the coast of California, no Spanish explorers visited the Sierra Nevada. The first European Americans to visit the mountains were amongst a group led by fur trapper Jedediah Smith, crossing north of the Yosemite area in May 1827, at Ebbetts Pass.
A group of trappers led by mountain man Joseph Reddeford Walker may have seen Yosemite Valley in the autumn of 1833. Walker approached a valley rim as he led his party across the Sierra Nevada, but he did not enter it. A member of the group, Zenas Leonard, wrote in his journal that streams from the valley rim dropped "from one lofty precipice to another, until they are exhausted in rain below. Some of these precipices appeared to us to be more than a mile high." The Walker party probably visited either the Tuolumne or Merced Groves of giant sequoia, becoming the first non-indigenous people to see the giant trees, but journals relating to the Walker party were destroyed in 1839, in a print shop fire in Philadelphia.
The part of the Sierra Nevada where the park is located was long considered to be a physical barrier to European American settlers, traders, trappers, and travelers. That situation changed in 1848 after gold was discovered in the foothills west of the range. Travel and trade activity dramatically increased in the area during the ensuing California Gold Rush. Resources depended upon by local Native Americans were depleted or destroyed, and disease brought by the newcomers spread rapidly through indigenous populations. Extermination of native culture became a policy of the United States Government.
The first confirmed sighting of Yosemite Valley by a non-indigenous person occurred on October 18, 1849, by William P. Abrams and a companion. Abrams accurately described some landmarks, but it is uncertain whether he or his companion actually entered the valley. In 1850, one of three brothers, Joseph, William, or Nathan Screech, became the first confirmed non-indigenous person to enter Hetch Hetchy Valley. Joseph Screech returned two years later and spoke with the Native Americans living there; when he asked the name of the grass whose seeds they ground into meal, he was told "hatch hatchy."
The surveying crew of Allexey W. Von Schmidt conducted the first systematic traverse of any part of the Yosemite area backcountry in 1855, when it extended an approximation of the Mount Diablo Baseline eastward from a point west of the present park boundary to a point south of Mono Lake. The actual route taken was 5 to 6 miles south of the baseline itself, due to topographic difficulties, including the Tuolumne River canyon at low elevations and steep mountain slopes higher up. Nevertheless, this was the first straight-line survey made across the Sierra Nevada. From 1879 to 1883, large parts of the western half of the park were surveyed as part of the General Land Office survey. However, the individual contracted for the largest area, one S. A. Hanson, was later listed among those associated with the Benson Syndicate, and he combined actual surveys with probably fabricated ones. Topographic surveys performed by Lieutenant Montgomery M. Macomb, under George M. Wheeler's Surveys West of the 100th Meridian, were completed in the late 1870s and early 1880s.
Mariposa War and legacy
James Savage's trading camp on the Merced River, 10 miles (16 km) west of Yosemite Valley, was raided by Native Americans in December 1850, after which the raiders retreated into the mountains. An appeal to the governor of California to put an end to this and other raids led to the formation of the Mariposa Battalion in 1851, and the start of the Mariposa War.
Savage led the battalion into Yosemite Valley in 1851, in pursuit of around 200 Ahwaneechees led by Chief Tenaya. On March 27, 1851, the company of 50 to 60 men reached what is now called Old Inspiration Point, from where Yosemite Valley's main features are visible. Chief Tenaya and his band were eventually captured and their village burned, fulfilling the prophecy an old and dying medicine man had given Tenaya many years before. The Ahwahnechee were escorted by their captor, Captain John Bowling, to the Fresno River Reservation, and the battalion was disbanded on July 1, 1851. Life on the reservation was unpleasant and the Ahwahneechee longed for their valley. Reservation officials consented and allowed Tenaya and some of his band to return on their own recognizance.
A group of eight miners entered Yosemite Valley in May 1852, and were allegedly attacked by Tenaya's warriors; two of the miners were killed. Regular army troops under the direction of Lt. Tredwell Moore retaliated by shooting six Ahwahneechee who were in possession of white men's clothing.
Tenaya's band fled the valley and sought refuge with the Mono, his mother's tribe. In mid-1853, the Ahwahneechee returned to the valley, but they subsequently betrayed the hospitality of their former Mono hosts by stealing horses that the Mono had taken from non-indigenous ranchers. In return, the Monos tracked down and killed many of the remaining Ahwahneechee, including Tenaya; Tenaya Lake is named after the fallen chief. Hostilities subsided and by the mid-1850s local European American residents started to befriend Native Americans still living in the Yosemite area.
Members of the battalion proposed names for the valley while they were camped at Bridalveil Meadow. The company physician attached to Savage's unit, Dr. Lafayette Bunnell, suggested "Yo-sem-i-ty", after what the surrounding Sierra Miwok tribes, who feared the Yosemite Valley tribe, called them. Savage, who spoke some native dialects, translated this as "full-grown grizzly bear." The term is actually the Southern Sierra Miwok word Yohhe'meti, meaning "they are killers," which was possibly derived from or confused with the similar uzumati or uhumati, meaning "grizzly bear." Bunnell named many other local topographic features on the same trip.
Bunnell drafted an article about the trip, but destroyed it when a newspaper correspondent in San Francisco suggested cutting his 1,500-foot (460 m) height estimate for the valley's walls in half; the walls are in fact twice the height that Bunnell surmised. The first published account of Yosemite Valley was written by Lt. Tredwell Moore for the January 20, 1854, issue of the Mariposa Chronicle, establishing the modern spelling of Yosemite. Bunnell described his awestruck impressions of the valley in his book, The Discovery of the Yosemite, published in 1892.
Artists, photographers, and the first tourists
Forty-eight non-Indian people visited Yosemite Valley in 1855, including San Francisco writer James Mason Hutchings and artist Thomas Ayres. Hutchings wrote an article about his experience that was published in the July 12, 1855, issue of the Mariposa Gazette, and Ayres' sketch of Yosemite Falls was published in late 1855; four of his drawings were presented in the lead article of the inaugural July 1856 issue of Hutchings' Illustrated California Magazine. The article and illustrations created tourist interest in Yosemite and eventually led to its protection.
Ayres returned in 1856 and visited Tuolumne Meadows in the area's high country. His highly detailed, angularly exaggerated artwork and his written accounts were distributed nationally, and an art exhibition of his drawings was held in New York City.
Hutchings took photographer Charles Leander Weed to Yosemite Valley in 1859; Weed took the first photographs of the valley's features, which were presented to the public in a September exhibition held in San Francisco. Hutchings published four installments of "The Great Yo-semite Valley" from October 1859 to March 1860 in his magazine and re-published a collection of these articles in his Scenes of Wonder and Curiosity in California, which remained in print into the 1870s.
Photographer Ansel Adams made his first trip to Yosemite in 1916; his photographs of the valley made him famous in the 1920s and 1930s. Adams willed the originals of his Yosemite photos to the Yosemite Park Association, and visitors can still buy direct prints from his original negatives. The studio in which the prints are sold was established in 1902 by artist Harry Cassie Best.
Milton and Houston Mann opened a toll road to Yosemite Valley in 1856, up the South Fork of the Merced River. They charged the then-considerable sum of two dollars per person until the road was bought by Mariposa County, after which it became free.
In 1856, settler Galen Clark discovered the Mariposa Grove of giant sequoia at Wawona, an indigenous encampment in what is now the southwestern part of the park. Clark completed a bridge over the South Fork of the Merced River in 1857 at Wawona for traffic headed toward Yosemite Valley and provided a way station for travelers on the road the Mann brothers built to the valley.
Simple lodgings, later called the Lower Hotel, were completed soon afterward; the Upper Hotel, later renamed Hutchings House and eventually known as Cedar Cottage, was opened in 1859. In 1876, the more substantial Wawona Hotel was built to serve tourists visiting the nearby grove of big trees and those on their way to Yosemite Valley. Aaron Harris opened the first campground business in Yosemite in 1876.
Forming the state grant
Visitation and interest in Yosemite continued to grow through the American Civil War. Unitarian minister Thomas Starr King visited the valley in 1860 and saw some of the negative effects that settlement and commercial activity were having on the area. Six travel letters by Starr King were published in the Boston Evening Transcript in 1860 and 1861; Starr King became the first person with a nationally recognized voice to call for a public park at Yosemite. Oliver Wendell Holmes and John Greenleaf Whittier read and commented on Starr King's letters and landscape architect Frederick Law Olmsted was prompted by the warnings to visit the Yosemite area in 1863.
Pressure from Starr King and Olmsted, photographs by Carleton Watkins, and geological data from the 1863 Geological Survey of California prompted legislators to take action. Senator John Conness of California introduced a park bill in 1864 to the United States Senate to cede Yosemite Valley and the Mariposa Grove of Big Trees to California.
The bill easily passed both houses of the United States Congress, and was signed by President Abraham Lincoln on June 30, 1864. The Yosemite Grant, as it was called, was given to California as a state park for "public use, resort and recreation". A board of commissioners, with Frederick Law Olmsted as chairman, was formed in September 1864 to govern the grant, but it did not meet until 1866.
Managing the state grant
The commission appointed Galen Clark as the grant's first guardian, but neither Clark nor the commissioners had the authority to evict homesteaders. Josiah Whitney, the first director of the California Geological Survey, lamented that Yosemite Valley would meet the same fate as Niagara Falls, which at that time was a tourist trap with tolls on every bridge, path, trail, and viewpoint.
Hutchings and a small group of settlers sought legal homesteading rights on 160 acres (65 ha) of the valley floor. The issue was not settled until 1874 when the land holdings of Hutchings and three others were invalidated and the state legislature appropriated $60,000 ($1,330,000 as of 2019) to compensate the settlers, of which Hutchings received $20,000.
Conditions in Yosemite Valley and access to the park steadily improved. In 1878, Clark used dynamite to breach a recessional moraine in the valley to drain a swamp behind it. Tourism significantly increased after a Sacramento to Stockton extension of the First Transcontinental Railroad was completed in 1869 and the Central Pacific Railroad reached Merced in 1872.
The long horseback ride from Merced remained a deterrent to tourists. Three stagecoach roads were built in the mid-1870s to provide better access; Coulterville Road (June 1874), Big Oak Flat Road (July 1874), and the Wawona Road (July 1875). A road to Glacier Point was completed in 1882 by John Conway, and the Great Sierra Wagon Road was opened in 1883, which roughly followed the Mono Trail to Tuolumne Meadows.
Clark and the sitting commissioners were removed from office by the California Legislature in 1880, and Hutchings became the new guardian. Hutchings in turn was replaced as guardian, in 1884, by W. E. Dennison. Clark was reappointed as guardian in 1889 and retired in 1896.
In 1900, Oliver Lippincott became the first to drive an automobile into Yosemite Valley. Yosemite Valley Railroad, nicknamed "the short line to paradise," arrived at nearby El Portal, California in 1907. Numerous hiking and horse trails were cleared, including a walking path through Mariposa Grove.
Yosemite's first concession was established in 1884 when Mr. and Mrs. John Degnan established a bakery and store. The Desmond Park Service Company was granted a twenty-year concession in 1916; the company bought out or built hotels, stores, camps, a dairy, a garage, and other park services. Desmond changed its name to the Yosemite National Park Company in December 1917 and was reorganized in 1920.
The Curry Company was started by David and Jenny Curry in 1899; the couple also founded Camp Curry, now known as Curry Village. The Currys lobbied reluctant park supervisors to allow expansion of concessionaire operations and development in the area.
Administrators in the National Park Service felt that limiting the number of concessionaires in each national park would be more financially sound. The Curry Company and its rival, the Yosemite National Park Company, were forced to merge in 1925 to form the Yosemite Park & Curry Company (YP&CC).
John Muir's influence
Immediately following his arrival in California in March 1868, naturalist John Muir set out for the Yosemite area, where he found work tending to the sheep owned by a local rancher, Pat Delaney. Muir's employment provided him with the opportunity to study the area's plants, rocks, and animals; the articles and scientific papers he wrote describing his observations helped to popularize the area and to increase scientific interest in it. Muir was one of the first to suggest that Yosemite Valley's major landforms were created by large alpine glaciers, contradicting the view of established scientists such as Josiah Whitney, who regarded Muir as an amateur.
Alarmed by overgrazing of meadows, logging of giant sequoia, and other damage, Muir changed from being a promoter and scientist to an advocate for further protection. He persuaded many influential people to camp with him in the area, such as Ralph Waldo Emerson in 1871. Muir tried to convince his guests that the entire area should be under federal protection. None of his guests through the 1880s could do much for Muir's cause, except for Robert Underwood Johnson, editor of Century Magazine. Through Johnson, Muir had a national audience for his writing and a highly motivated and crafty congressional lobbyist.
Muir's wish was partially granted on October 1, 1890, when the area outside the valley and sequoia grove became a national park under the unopposed Yosemite Act. The Act provided "for the preservation from injury of all timber, mineral deposits, natural curiosities, or wonders within said reservation, and their retention in their natural condition" and prohibited "wanton destruction of the fish and game and their capture or destruction for the purposes of merchandise or profit."
Yosemite National Park included the entire upper drainages of two river watersheds. Preservation of watersheds was very important to Muir, who said "you cannot save Yosemite Valley without saving its Sierran fountains." The State of California retained control of Yosemite Valley and the Mariposa Grove of Big Trees. Muir and 181 others founded the Sierra Club in 1892, in part to lobby for the transfer of the valley and the grove into the national park.
Like Yellowstone National Park before it, Yosemite National Park was at first administered by various units of the United States Army. Captain Abram Wood led the 4th Cavalry Regiment into the new park on May 19, 1891, and set up Camp A.E. Wood (now the Wawona Campground) in Wawona. Each summer, 150 cavalrymen traveled from the Presidio of San Francisco to patrol the park. Approximately 100,000 sheep were illegally led into Yosemite's high meadows each year. The Army lacked legal authority to arrest the herders, but instead escorted them several days' hike from their flock, which left the sheep vulnerable. By the late 1890s sheep grazing was no longer a problem, but at least one herder continued to graze his sheep in the park into the 1920s.
The Army also tried to control poaching. In 1896, acting Superintendent Colonel S. B. M. Young stopped issuing firearm permits after discovering that large numbers of game and fish were being killed. Poaching continues to be an issue in the 21st century. The Army's administration of the park ended in 1914.
Galen Clark retired as the state grant's guardian in 1896, leaving Yosemite Valley and the Mariposa Grove of Big Trees under ineffective stewardship. Pre-existing problems in the state grant worsened and new problems arose, but the cavalry could only monitor the situation. Muir and the Sierra Club continued to lobby the government and influential people for the creation of a unified Yosemite National Park. The Sierra Club began to organize annual trips to Yosemite in 1901 in an effort to make the remote area more accessible.
Unified national park
U.S. President Theodore Roosevelt camped with John Muir near Glacier Point for three days in May 1903. During that trip, Muir convinced Roosevelt to take control of the valley and the grove away from California and give it to the federal government. On June 11, 1906, Roosevelt signed a bill that did precisely that, and the superintendent's headquarters was moved from Wawona to Yosemite Valley.
To secure congressional and State of California approval for the plan, the size of the park was reduced by more than 500 square miles (1,300 km2), which excluded natural wonders such as the Devils Postpile and prime wildlife habitat. The park was again reduced in size in 1906, when logging began in an area around Wawona. Acting superintendent Major H. C. Benson said in 1908 that "game is on the decrease. Each reduction of the park has cut another portion of the winter resort of game." The various changes meant that the park was reduced to two-thirds of its original size.
About 12,000 acres (4,900 ha) between the Tuolumne and the Merced big tree groves were added to the park in 1930 through land purchases by the federal government and matching funds provided by industrialist John D. Rockefeller. Another 8,765 acres (3,547 ha) near Wawona were added in 1932. The Carl Inn Tract, close to the Rockefeller purchase, was secured in 1937 and 1939.
Fight over Hetch Hetchy Valley
San Francisco Mayor James D. Phelan hired USGS engineer Joseph P. Lippincott in 1900 to perform a discreet survey of Hetch Hetchy Valley, located north of Yosemite Valley in the national park. His report stated that a dam of the Tuolumne River in the Hetch Hetchy Valley was the best choice to create a drinking water reservoir for the city. Lippincott sought water rights to the Tuolumne River and rights to build reservoirs at Hetch Hetchy and Lake Eleanor on behalf of Phelan in 1901. These requests were rejected in 1903 by Secretary of the Interior Ethan Allen Hitchcock, who felt the application was "not in keeping with the public interest."
The 1906 San Francisco earthquake tipped the balance in favor of granting the city the right to build the dam. Rights to Hetch Hetchy were granted to the City of San Francisco in 1908 by Secretary of the Interior James Rudolph Garfield, who wrote: "Domestic use is the highest use to which water and available storage basins ... can be put."
A nationally publicized fight over the dam project ensued; preservationists like Muir wanted to leave wild areas wild, and conservationists like Gifford Pinchot wanted to manage wild areas for the betterment of mankind. Robert Underwood Johnson and the Sierra Club joined the fight to save the valley from flooding. Muir wrote, "Dam Hetch Hetchy! As well dam for watertanks the people's cathedrals and churches, for no holier temple has ever been consecrated by the heart of man." Pinchot, who was director of the U.S. Forest Service, wrote to his close friend Roosevelt that "the highest possible use which could be made of it would be to supply pure water to a great center of population."
Roosevelt's successor, Woodrow Wilson, signed the Raker Act into law on December 13, 1913, which authorized construction of the dam. Hetch Hetchy Reservoir grew as the valley was flooded behind the O'Shaughnessy Dam in 1923. The Raker Act also gave the city the right to store water in Lake Eleanor and Cherry Lake, both located northwest of Hetch Hetchy in the park.
Shortly before Muir died he expressed the hope that "some compensating good must follow" from the Raker Act. The fight over the dam strengthened the conservation movement by popularizing it nationally.
National Park Service
The administration of Yosemite National Park was transferred to the newly formed National Park Service in 1916, when W. B. Lewis was appointed as the park's superintendent. Parsons Memorial Lodge and Tioga Pass Road, along with campgrounds at Tenaya and Merced lakes, were completed the same year; six hundred automobiles entered the east side of the park using Tioga Road that summer. The "All-Weather Highway" (now State Route 140) opened in 1926, ensuring year-round visitation and delivery of supplies under normal conditions. Completion of the 0.8-mile (1.3 km)-long Wawona Tunnel in 1933 significantly reduced travel time from Wawona to Yosemite Valley. The famous Tunnel View is on the valley side of the tunnel, and Old Inspiration Point is above it. A flood, reduced lumber and mining extraction, and greatly increased automobile and bus use forced the Yosemite Valley Railroad out of business in 1945. The present-day Tioga Road, now part of California State Route 120, was dedicated in 1961.
Interpretive programs and services for national parks were pioneered in Yosemite by Harold C. Bryant and Loye Holmes Miller in 1920. Ansel F. Hall became the first park naturalist in 1921 and served in that role for two years. Hall's idea to have park museums act as public contact centers for interpretive programs became a model followed by other national parks in the United States and internationally. Yosemite Museum, the first permanent museum in the National Park System, was completed in 1926.
The Ahwahnee Hotel, in Yosemite Valley, is a National Historic Landmark. Built in 1927, it is a luxury hotel designed by the architect Gilbert Stanley Underwood, decorated in Native American motifs. For many years it hosted an annual pageant produced by Ansel Adams. During World War II it was used as a rehabilitation hospital for soldiers.
Restoration and preservation
Large floods covered Yosemite Valley in 1937, 1950, 1955, and 1997. These floods had a flow rate of 22,000 to 25,000 cubic feet (620 to 700 m3) per second, as measured at the Pohono Bridge gauging station in Yosemite Valley.
All the structures in Old Yosemite Village, except for the chapel, were either moved to the Pioneer Yosemite History center in Wawona or demolished during the 1950s and 1960s. Other structures in the park were also moved to the history center. Cedar Cottage, the oldest building in Yosemite Valley, was demolished in 1941 along with others, even though they had not been flooded. Little regard was given to historic preservation, as the priority was thought to be the preservation and restoration of natural scenery.
Congress set aside about 89 percent of the park in a highly protected wilderness area through passage of the Wilderness Act of 1964. No roads, motorized vehicles (except rescue helicopters and other emergency vehicles), or any development beyond trail maintenance are allowed in this area. The adjacent Ansel Adams Wilderness and John Muir Wilderness were also protected under the act and include regions removed from the park immediately before it was unified with the state grant in 1906.
The Yosemite Firefall, in which the embers from a bonfire were pushed off a cliff near Glacier Point to create a spectacular effect, was ended in 1968 because it was deemed to be inconsistent with park values. The firefall was occasionally performed in the 1870s and became a nightly tradition with the founding of Camp Curry.
Since the late 1960s
Broader tensions in American society surfaced in Yosemite when a large number of youths gathered in the park over the summer of 1970, triggering a riot on July 4 after rangers tried to evict visitors who were camping illegally in Stoneman Meadow. Rioters attacked the rangers with rocks and pulled mounted rangers from their horses. The National Guard was brought in to restore order.
The Yosemite Park and Curry Company was bought by Music Corporation of America (MCA) in 1973. In 1988, concessionaires brought in $500 million ($1.06 billion as of 2019), and paid the federal government $12.5 million ($26.5 million as of 2019) for the franchise. Delaware North Companies became the primary concessionaire for Yosemite in 1992. The agreement it signed with the National Park Service increased yearly park revenue from concessionaires to $20 million ($35.7 million as of 2019).
In 1999, four women were killed by Cary Stayner just outside the park. That same year a large rockslide originating at the east side of Glacier Point ended near the Happy Isles of the Merced River, creating a debris field larger than several football fields. Tourism dropped a little after those incidents, but soon returned to its previous level.
Plans for reducing human impact on the park were released by the Park Service in 1980. The General Management Plan specified a 17 percent reduction in overnight accommodations, a 68 percent reduction in staff housing, and removal of golf courses and tennis courts by 1990, yet there were still 1,300 buildings in Yosemite Valley in the late 1990s, and 17 acres (6.9 ha) of the valley floor were covered by parking lots. The goals were not met; however, after flooding in January 1997 destroyed park infrastructure in Yosemite Valley, the Yosemite Valley Plan was established to implement the General Management Plan and over 250 other actions.
Forests and meadows
The Ahwahnechee and other aboriginal groups changed the environment of the Yosemite area. Parts of valley floors were intentionally burned each year to encourage the growth of acorn-bearing black oaks. Fire kept forests open, reducing the risk of ambush, and the open areas helped to expand and maintain meadows.
Early park guardians drained swamps, which reduced the number and extent of meadows. In the 1860s there were over 750 acres (300 ha) of meadows in the valley, compared to 340 acres (140 ha) by the end of the 20th century. The remaining meadows are maintained by manually clearing trees and shrubs. The Park Service has prohibited driving and camping in meadows, a common practice from the 1910s to the 1930s, and cattle and horses are no longer allowed to roam freely in the park.
Fire suppression encouraged the growth of young coniferous trees, such as ponderosa pine and incense cedar; adult conifers create enough shade to inhibit the growth of young black oak trees. By the 20th century, fire suppression and the lowering of water tables by draining swamps led to the establishment of dense conifer forests where mixed and open conifer-oak woodlands had previously grown. Fire suppression policies have been replaced by a fire management program which includes the annual use of prescribed fires. Fire is especially important to the giant sequoia groves, whose seeds cannot germinate without fire-touched soil.
Logging used to be carried out in the area. Over one-half-billion board feet of timber were felled between World War I and 1930, when John D. Rockefeller, Jr. and the federal government bought out the Yosemite Lumber Company.
Increases in visitation
Muir and the Sierra Club initially encouraged efforts to increase visitation to the park. Muir wrote that even the "frivolous and inappreciative" visitors were on the whole "a most hopeful sign of the times, indicating at least the beginning of our return to nature – for going to the mountains is going home."
The first automobile entered Yosemite Valley in 1900, but car traffic did not grow significantly until 1913, when automobiles were first officially allowed to enter; the next year, 127 cars entered the park.
Park visitation increased from 15,154 in 1914, to 35,527 in 1918, and to 461,000 in 1929. Two-thirds of a million visited in 1946, 1 million in 1954, 2 million by 1966, 3 million in the 1980s, and 4 million in the 1990s.
Half Dome is a prominent and iconic granite dome that rises 4,737 feet (1,444 m) above the floor of Yosemite Valley. It was first climbed on October 12, 1875, by George C. Anderson, the Scottish blacksmith of Yosemite Valley. A rope that Anderson laid was used by six men, including 61-year-old Galen Clark, and one woman to scale the last 975 feet (297 m) of Half Dome. Anderson's rope was repaired several times and was replaced in 1919 by the steel cable route built by the Sierra Club.
Sunnyside walk-in campground, better known as Camp 4, was built in 1929. Rock climbers, who started to scale the cliffs of Yosemite in the 1950s, camped there. In 1997, a flood in Yosemite Valley destroyed employee housing in the valley. The Park Service wanted to build dormitories next to Camp 4, but Tom Frost, the American Alpine Club and others succeeded in killing the plan. Camp 4 was listed on the National Register of Historic Places on February 21, 2003, because of its role in the development of rock climbing as a sport.
Badger Pass Ski Area was established in 1935. The 9-hole Wawona Golf Course opened in June 1918 in a meadow adjacent to the Wawona Hotel. A golf course was later built near the Ahwahnee Hotel in Yosemite Valley, but was removed and converted into a meadow in 1981.
Introduced and invasive species
Introduced animals and diseases had impacted the park area by the late 19th century. Galen Clark noted in the mid-1890s that native grasses and flowering plants in Yosemite Valley had been reduced in number by three-quarters.
White pine blister rust, a fungal disease that infects conifer trees, was accidentally introduced in British Columbia in 1910 and had reached California by the 1920s. It has since infected many sugar pine trees in the Yosemite area. The rust is managed by removing plants of the genus Ribes, which act as carriers of the fungus.
Current park managers focus on controlling nine high-priority invasive plant species of noxious weeds: yellow star-thistle (Centaurea solstitialis); spotted knapweed (Centaurea maculosa); Himalayan blackberry (Rubus armeniacus); bull thistle (Cirsium vulgare); velvet grass (Holcus lanatus); cheat grass (Bromus tectorum); French broom (Genista monspessulana); Italian thistle (Carduus pycnocephalus); and perennial pepperweed (Lepidium latifolium). In 2008, the park began to use the herbicides glyphosate and aminopyralid to augment manual methods to manage the most threatening plants.
Brown bears, also called grizzlies, featured prominently in Miwok mythology and were the top predators in the region until the 1920s, when they became locally extinct. A sketch of a Yosemite grizzly by Charles Nahl adorns the flag of California.
American black bears were a common attraction by the 1930s, but in 1929 alone 81 people required treatment for bear-related injuries. Troublesome bears were marked with white paint before being relocated to other parts of the park, and repeat offenders were killed. Bear feeding shows were stopped in 1940, but the Park Service continued to kill bears that habitually raided camps; 200 were put down between 1960 and 1972. Park visitors are now educated about proper food storage.
To supplement their incomes, the rangers trapped predators such as coyote, fox, lynx, mountain lion, and wolverine for their furs, a practice that survived until 1925. Predator control continued, however; 43 mountain lions were killed in Yosemite by the state lion hunter in 1927. Cooper's hawk and sharp-shinned hawk were hunted to local extinction.
Bighorn sheep, which were driven locally extinct through hunting and disease, have been reintroduced in the east of the park. The Park Service and the Yosemite Fund have also helped peregrine falcons and great gray owls to re-establish themselves. Tule elk, which had been hunted almost to extinction, were housed in a pen in Yosemite before being moved to the Owens Valley in eastern California.
What Is an Indirect Tax?
An indirect tax is collected by one entity in the supply chain (usually a producer or retailer) and paid to the government, but it is passed on to the consumer as part of the purchase price of a good or service. The consumer is ultimately paying the tax by paying more for the product.
Understanding an Indirect Tax
Indirect taxes are defined by contrasting them with direct taxes. Indirect taxes can be defined as taxation on an individual or entity, which is ultimately paid for by another person. The body that collects the tax will then remit it to the government. But in the case of direct taxes, the person immediately paying the tax is the person that the government is seeking to tax.
Excise duties on fuel, liquor, and cigarettes are all considered examples of indirect taxes. By contrast, income tax is the clearest example of a direct tax, since the person earning the income is the one immediately paying the tax. Admission fees to a national park are another clear example of direct taxation.
Some indirect taxes are also referred to as consumption taxes, such as a value-added tax (VAT).
Regressive Nature of an Indirect Tax
Indirect taxes are commonly used and imposed by the government in order to generate revenue. They are essentially fees that are levied equally upon taxpayers, no matter their income, so rich or poor, everyone has to pay them.
But many consider them to be regressive taxes as they can bear a heavy burden on people with lower incomes who end up paying the same amount of tax as those who make a higher income.
For example, the import duty on a television from Japan will be the same amount, no matter the income of the consumer purchasing the television. And because this levy has nothing to do with a person's income, that means someone who earns $25,000 a year will have to pay the same duty on the same television as someone who earns $150,000; clearly, a bigger burden on the former.
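To make the arithmetic concrete, here is a minimal Python sketch, assuming a hypothetical flat duty of $50 (the article does not give the duty amount):

```python
# Hypothetical flat import duty on a television; the amount is assumed
# purely for illustration.
DUTY = 50.00  # dollars, the same for every buyer

for income in (25_000, 150_000):
    burden = DUTY / income * 100
    print(f"Income ${income:,}: duty is {burden:.3f}% of income")

# Income $25,000: duty is 0.200% of income
# Income $150,000: duty is 0.033% of income
```

The same dollar amount is a six-times-larger share of the lower income, which is what makes the levy regressive.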
There are also concerns that indirect taxes can be used to further a particular government policy by taxing certain industries and not others. For this reason, some economists argue that indirect taxes lead to an inefficient marketplace and alter market prices from their equilibrium price.
Common Indirect Taxes
The most common example of an indirect tax is import duties. The duty is paid by the importer of a good at the time it enters the country. If the importer goes on to resell the good to a consumer, the cost of the duty, in effect, is hidden in the price that the consumer pays. The consumer is likely to be unaware of this, but they will nonetheless be indirectly paying the import duty.
Essentially, any tax or fee imposed by the government at the manufacturing or production level is an indirect tax. In recent years, many countries have imposed carbon-emission fees on manufacturers. These are indirect taxes since their costs are passed along to consumers.
Sales taxes can be direct or indirect. If they are imposed only on the final supply to a consumer, they are direct. If they are imposed as value-added taxes (VATs) along the production process, then they are indirect. |
In Economics, “Supply” means the relationship between prices and production. In general, the higher the market price of a good or service is, the more producers are willing to sell of it.
As the market price for a good goes up, companies want to sell more of it to try to make greater profits. Conversely, as the market price goes down, companies are less interested in production, and so the quantity supplied generally goes down.
Take a look at the graph on the right – notice that the “Quantity” does not start going up until the “price” gets above a certain point. This is because companies will not start producing something until the market price is at least as high as the cost of making it. As the market price goes up, the potential for profit also goes up, so companies are willing to put more resources into the production of this good, and the quantity supplied increases.
Difference Between “Supply” and “Quantity Supplied”
“Supply” refers to the relationship between the price and quantity – in our graphs, the “Supply” means the entire red line. The quantity supplied is a single point on that line.
This means that a change in “supply” means the entire line has moved, while a change in the “quantity supplied” means that the quantity has moved to a different point on the same line (due to a change in Demand).
An increase in Supply usually means there was a fundamental shift in how the good is produced. A new manufacturing technique that saves on cost, subsidies from the government, or the cost of inputs becoming cheaper can all cause an increase in supply.
In contrast, new government regulations, an increase in the cost of inputs, or increased wages for workers (assuming they do not become more productive) will all cause supply to decrease.
The Quantity Supplied, on the other hand, can move because of both shifts in Supply and Demand. As you can see on the graphs above, an increase in supply will cause the price to decrease and the quantity supplied to increase, even if demand does not increase.
However, if demand increases (meaning more goods are demanded at the same price), the quantity supplied will also increase, even though the supply line itself stays the same.
Examples With The Stock Market
The stock market is a perfect example of seeing both changes in Supply and changes in Quantity Supplied in action.
Imagine there is a stock that 10 people currently own. Each of them has a price they would sell their stock for, if someone offered.
This would then be the supply line for this market:
The actual quantity supplied will depend on what price buyers are willing to pay. Now let's consider a real example: what if all of these people had a share of Twitter stock (symbol: TWTR)? We can find out by getting a quote from the Trade page:
This quote tells us a lot about the supply, and what would happen in this case. The “Last Price” is $15 – that is because the market price was high enough for our lowest two suppliers to sell their share.
If we look at the “Bid/Ask”, we can also see that the most a buyer is willing to pay (the “bid” price) is now $17.89. We also see that the “Ask” price is now $20, which is the lowest amount that a seller is willing to take for their share.
This means that today, at the market equilibrium price, the quantity supplied is 2.
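This counting logic is easy to express in code. Here is a minimal Python sketch; since the article's graph is not reproduced here, the ten reservation prices are made-up assumptions:

```python
# Hypothetical reservation prices for the ten shareholders.
reservation_prices = [12, 15, 20, 22, 25, 28, 30, 35, 40, 50]

def quantity_supplied(market_price, prices):
    """A seller sells only if the market price meets their reservation price."""
    return sum(1 for p in prices if p <= market_price)

print(quantity_supplied(15, reservation_prices))  # 2, as in the example above
print(quantity_supplied(30, reservation_prices))  # 7, a move along the same supply line
```

Raising the market price moves us along the same supply line (a change in quantity supplied); changing the list of reservation prices themselves would shift the line (a change in supply).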
Paired black holes are theorized to be common, but have escaped detection — until now.
Astronomers Todd Boroson and Tod Lauer, from the National Optical Astronomy Observatory (NOAO) in Tucson, Arizona, have found what looks like two massive black holes orbiting each other in the center of one galaxy. Their discovery appears in this week’s issue of Nature.
Astronomers have long suspected that most large galaxies harbor black holes at their center, and that most galaxies have undergone some kind of merger in their lifetime. But while binary black hole systems should be common, they have proved hard to find. Boroson and Lauer believe they’ve found a galaxy that contains two black holes, which orbit each other every 100 years or so. They appear to be separated by only 1/10 of a parsec, a tenth of the distance from Earth to the nearest star.
After a galaxy forms, it is likely that a massive black hole can also form at its center. Since many galaxies are found in clusters of galaxies, individual galaxies can collide with each other as they orbit in the cluster. The mystery is what happens to these central black holes when galaxies collide and ultimately merge together. Theory predicts that they will orbit each other and eventually merge into an even larger black hole.
“Previous work has identified potential examples of black holes on their way to merging, but the case presented by Boroson and Lauer is special because the pairing is tighter and the evidence much stronger,” wrote Jon Miller, a University of Michigan astronomer, in an accompanying editorial.
The material falling into a black hole emits light in narrow wavelength regions, forming emission lines which can be seen when the light is dispersed into a spectrum. The emission lines carry the information about the speed and direction of the black hole and the material falling into it. If two black holes are present, they would orbit each other before merging and would have a characteristic dual signature in their emission lines. This signature has now been found.
The smaller black hole has a mass 20 million times that of the sun; the larger one is 50 times bigger, as determined by their orbital velocities.
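The quoted figures hang together: plugging the masses and separation into Kepler's third law reproduces the roughly 100-year period. A minimal Python check, assuming the 0.1-parsec separation is the orbit's semi-major axis:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # meters
YEAR = 3.156e7       # seconds

m_small = 2.0e7 * M_SUN   # 20 million solar masses
m_large = 50 * m_small    # "50 times bigger", about a billion suns
a = 0.1 * PARSEC          # separation, taken as the semi-major axis

# Kepler's third law: T = 2*pi*sqrt(a^3 / (G * (m1 + m2)))
T = 2 * math.pi * math.sqrt(a**3 / (G * (m_small + m_large)))
print(f"Orbital period ~ {T / YEAR:.0f} years")  # ~93 years
```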
Boroson and Lauer used data from the Sloan Digital Sky Survey, taken with a 2.5-meter (8-foot) diameter telescope at Apache Point in southern New Mexico, to look for this characteristic dual black hole signature among 17,500 quasars.
Quasars are the most luminous versions of the general class of objects known as active galaxies, which can be a hundred times brighter than our Milky Way galaxy, and powered by the accretion of material into supermassive black holes in their nuclei. Astronomers have found more than 100,000 quasars.
Boroson and Lauer had to eliminate the possibility that they were seeing two galaxies, each with its own black hole, superimposed on each other. To try to eliminate this superposition possibility, they determined that the quasars were at the same redshift-determined distance and that there was a signature of only one host galaxy.
“The double set of broad emission lines is pretty conclusive evidence of two black holes,” Boroson said. “If in fact this were a chance superposition, one of the objects must be quite peculiar. One nice thing about this binary black hole system is that we predict that we will see observable velocity changes within a few years at most. We can test our explanation that the binary black hole system is embedded in a galaxy that is itself the result of a merger of two smaller galaxies, each of which contained one of the two black holes.”
LEAD IMAGE CAPTION: Artist's conception of the binary supermassive black hole system. Each black hole is surrounded by a disk of material gradually spiraling into its grasp, releasing radiation from x-rays to radio waves. The two black holes complete an orbit around their center of mass every 100 years, traveling with a relative velocity of 6000 kilometers (3,728 miles) per second. (Credit P. Marenfeld, NOAO)
A hash function in cryptography refers to a mathematical function that converts a numerical input value into another compressed numerical value. The input to the hash function is of arbitrary length but the output is always of fixed length.
A hash function is also known as a hashing algorithm or message digest function.
To put it simply, a hash function takes a group of characters and maps it to a value of a certain length.
That is to say, for any x input value, you will always get the same y output value whenever the hash function runs.
f(x) = y
This means every input has a predetermined output. The hash value represents the original string of characters but it is usually smaller than the original. The value you get after processing a set of data through a hash function is called a hash value or message digest.
Note that the input can be any data, such as numbers or files. The process of transforming a given set of data to a specific hash value is called hashing the data. The hash value or message digest is commonly written as a hexadecimal number.
Because the resulting value is much smaller than the data that passed through the function, hash functions are sometimes described as compression functions.
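These properties are easy to demonstrate with Python's standard hashlib module: whatever the input length, SHA-256 always produces a 64-character hexadecimal digest, and the same input always yields the same digest:

```python
import hashlib

for message in (b"hi", b"a much longer input message of arbitrary length"):
    digest = hashlib.sha256(message).hexdigest()
    print(len(digest), digest)  # always 64 hex characters (256 bits)

# Determinism: hashing the same input twice gives an identical digest.
assert hashlib.sha256(b"hi").hexdigest() == hashlib.sha256(b"hi").hexdigest()
```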
Solar PV modules (top) and two solar hot water panels (bottom) mounted on rooftops
Solar panel refers to a panel designed to absorb the sun's rays as a source of energy for generating electricity or heating.
A photovoltaic (PV) module is a packaged, connected assembly of typically 6×10 solar cells. Solar photovoltaic panels constitute the solar array of a photovoltaic system that generates and supplies solar electricity in commercial and residential applications. Each module is rated by its DC output power under standard test conditions, typically ranging from 100 to 365 watts. The efficiency of a module determines its area for a given rated output: an 8% efficient 230-watt module will have twice the area of a 16% efficient 230-watt module. A few commercially available solar panels exceed 22% efficiency, and some reportedly exceed 24%. A single solar module can produce only a limited amount of power; most installations contain multiple modules. A photovoltaic system typically includes a panel or an array of solar modules, a solar inverter, and sometimes a battery and/or solar tracker and interconnection wiring.
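The efficiency-area relationship follows from rated power being measured at a standard irradiance of 1000 W/m². A minimal sketch of the arithmetic:

```python
STC_IRRADIANCE = 1000.0  # W/m^2 at standard test conditions

def module_area(rated_watts, efficiency):
    """Area implied by a module's rated power and efficiency at STC."""
    return rated_watts / (STC_IRRADIANCE * efficiency)

# The 230 W example from the text: halving the efficiency doubles the area.
print(f"{module_area(230, 0.08):.2f} m^2")  # ~2.88 m^2 at 8%
print(f"{module_area(230, 0.16):.2f} m^2")  # ~1.44 m^2 at 16%
```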
Theory and construction
Solar modules use light energy (photons) from the sun to generate electricity through the photovoltaic effect. The majority of modules use wafer-based crystalline silicon cells or thin-film cells based on cadmium telluride or silicon. The structural (load carrying) member of a module can either be the top layer or the back layer. Cells must also be protected from mechanical damage and moisture. Most solar modules are rigid, but semi-flexible ones are available, based on thin-film cells.
Electrical connections are made in series to achieve a desired output voltage and/or in parallel to provide a desired current capability. The conducting wires that take the current off the modules may contain silver, copper or other non-magnetic conductive transition metals. The cells must be connected electrically to one another and to the rest of the system. Externally, popular terrestrial photovoltaic modules use MC3 (older) or MC4 connectors to facilitate easy weatherproof connections to the rest of the system.
Bypass diodes may be incorporated or used externally, in case of partial module shading, to maximize the output of module sections still illuminated.
Some recent solar module designs include concentrators in which light is focused by lenses or mirrors onto an array of smaller cells. This enables the use of cells with a high cost per unit area (such as gallium arsenide) in a cost-effective way.
Depending on construction, photovoltaic modules can produce electricity from a range of frequencies of light, but usually cannot cover the entire solar range (specifically, ultraviolet, infrared and low or diffused light). Hence, much of the incident sunlight energy is wasted by solar modules, and they can give far higher efficiencies if illuminated with monochromatic light. Therefore, another design concept is to split the light into different wavelength ranges and direct the beams onto different cells tuned to those ranges. This has been projected to be capable of raising efficiency by 50%. Scientists from Spectrolab, a subsidiary of Boeing, have reported development of multi-junction solar cells with an efficiency of more than 40%, a new world record for solar photovoltaic cells. The Spectrolab scientists also predict that concentrator solar cells could achieve efficiencies of more than 45% or even 50% in the future, with theoretical efficiencies being about 58% in cells with more than three junctions.
Currently the best achieved sunlight conversion rate (solar module efficiency) is around 21.5% in new commercial products, typically lower than the efficiencies of the cells in isolation. The most efficient mass-produced solar modules have power density values of up to 175 W/m2 (16.22 W/ft2). Research by Imperial College London has shown that the efficiency of a solar panel can be improved by studding the light-receiving semiconductor surface with aluminum nanocylinders, similar to the ridges on Lego blocks. The scattered light then travels along a longer path in the semiconductor, which means that more photons can be absorbed and converted into current. Although such nanocylinders had been used previously (aluminum was preceded by gold and silver), the light scattering occurred in the near-infrared region and visible light was absorbed strongly. Aluminum was found to absorb the ultraviolet part of the spectrum, while the visible and near-infrared parts were scattered by the aluminum surface. This, the researchers argued, could bring down costs significantly and improve efficiency, as aluminum is more abundant and less costly than gold and silver. The research also noted that the increase in current makes thinner-film solar panels technically feasible without "compromising power conversion efficiencies, thus reducing material consumption".
- The efficiency of a solar panel can be calculated from its MPP (maximum power point) value.
- Solar inverters convert the DC power to AC power by performing an MPPT process: the inverter samples the output power (I-V curve) of the solar cells and applies the load (resistance) that yields maximum power; a sketch of one common approach follows this list.
- The MPP of a solar panel consists of the MPP voltage (V mpp) and MPP current (I mpp); these are characteristics of the panel, and higher values give a higher maximum power.
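The sampling loop described above is commonly implemented as a "perturb and observe" hill-climbing algorithm. Here is a minimal Python sketch; the quadratic panel_power curve is a made-up stand-in for a real I-V measurement, and real inverters run this loop continuously in firmware:

```python
def panel_power(voltage):
    """Hypothetical power curve for illustration: peaks at 200 W near 30 V."""
    return max(0.0, 200.0 - 0.5 * (voltage - 30.0) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iterations=100):
    """Climb the power curve: keep stepping in whichever direction raises power."""
    p = panel_power(v)
    for _ in range(iterations):
        v_new = v + step
        p_new = panel_power(v_new)
        if p_new < p:       # power fell, so reverse the perturbation direction
            step = -step
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(f"Operating point ~ {v_mpp:.1f} V, {p_mpp:.1f} W")  # oscillates near 30 V, 200 W
```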
Micro-inverted solar panels are wired in parallel, which produces more output than conventional panels wired in series, where the output of the string is limited by the lowest-performing panel (known as the "Christmas light effect"). Micro-inverters work independently so each panel contributes its maximum possible output given the available sunlight.
Most solar modules are currently produced from crystalline silicon (c-Si) solar cells made of multicrystalline and monocrystalline silicon. In 2013, crystalline silicon accounted for more than 90 percent of worldwide PV production, while the rest of the market was made up of thin-film technologies using cadmium telluride, CIGS and amorphous silicon. Emerging, third-generation solar technologies use advanced thin-film cells. They produce a relatively high-efficiency conversion for a low cost compared with other solar technologies. Also, high-cost, high-efficiency, close-packed rectangular multi-junction (MJ) cells are preferred in solar panels on spacecraft, as they offer the highest ratio of generated power per kilogram lifted into space. MJ cells are compound semiconductors made of gallium arsenide (GaAs) and other semiconductor materials. Another emerging PV technology using MJ cells is concentrator photovoltaics (CPV).
In rigid thin-film modules, the cell and the module are manufactured in the same production line. The cell is created on a glass substrate or superstrate, and the electrical connections are created in situ, a so-called "monolithic integration". The substrate or superstrate is laminated with an encapsulant to a front or back sheet, usually another sheet of glass. The main cell technologies in this category are CdTe, a-Si, a-Si+uc-Si tandem, and CIGS (or a variant). Amorphous silicon has a sunlight conversion rate of 6-12%.
Flexible thin film cells and modules are created on the same production line by depositing the photoactive layer and other necessary layers on a flexible substrate. If the substrate is an insulator (e.g. polyester or polyimide film) then monolithic integration can be used. If it is a conductor then another technique for electrical connection must be used. The cells are assembled into modules by laminating them to a transparent colourless fluoropolymer on the front side (typically ETFE or FEP) and a polymer suitable for bonding to the final substrate on the other side.
Smart solar modules
Several companies have begun embedding electronics into PV modules. This enables performing maximum power point tracking (MPPT) for each module individually, and the measurement of performance data for monitoring and fault detection at module level. Some of these solutions make use of power optimizers, a DC-to-DC converter technology developed to maximize the power harvest from solar photovoltaic systems. As of about 2010, such electronics can also compensate for shading effects, wherein a shadow falling across a section of a module causes the electrical output of one or more strings of cells in the module to fall to zero, without dragging the output of the entire module to zero.
Performance and degradation
Electrical characteristics include nominal power (P_MAX, measured in W), open circuit voltage (V_OC), short circuit current (I_SC, measured in amperes), maximum power voltage (V_MPP), maximum power current (I_MPP), peak power (watt-peak, Wp), and module efficiency (%).
Nominal voltage refers to the voltage of the battery that the module is best suited to charge; this is a leftover term from the days when solar modules were only used to charge batteries. The actual voltage output of the module changes as lighting, temperature and load conditions change, so there is never one specific voltage at which the module operates. Nominal voltage allows users, at a glance, to make sure the module is compatible with a given system.
Open circuit voltage, or V_OC, is the maximum voltage that the module can produce when not connected to an electrical circuit or system. V_OC can be measured with a voltmeter directly on an illuminated module's terminals or on its disconnected cable.
The peak power rating, Wp, is the maximum output under standard test conditions (not the maximum possible output). Typical modules, which could measure approximately 1x2 meters or 2x4 feet, will be rated from as low as 75 watts to as high as 350 watts, depending on their efficiency. At the time of testing, the test modules are binned according to their test results, and a typical manufacturer might rate their modules in 5 watt increments, and either rate them at +/- 3%, +/-5%, +3/-0% or +5/-0%.
Solar modules must withstand rain, hail, heavy snow load, and cycles of heat and cold for many years. Many crystalline silicon module manufacturers offer a warranty that guarantees electrical production for 10 years at 90% of rated power output and 25 years at 80%.
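Those two warranty points imply a roughly constant degradation rate of about 1% per year, which a couple of lines of Python verify:

```python
# Warranty points from the text: 90% of rated power after 10 years, 80% after 25.
for years, remaining in ((10, 0.90), (25, 0.80)):
    annual_loss = 1 - remaining ** (1 / years)
    print(f"{years}-year point -> ~{annual_loss * 100:.2f}% loss per year")

# ~1.05% and ~0.89% per year: both points sit near 1% degradation annually.
```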
Potential induced degradation (PID) is a performance degradation of crystalline photovoltaic modules caused by so-called stray currents. This effect may cause power losses of up to 30 percent.
The largest challenge for photovoltaic technology is the efficiency of such solar systems. While these systems draw great interest because of their long-term returns, efficiency must improve considerably before they become practical for all consumers of electricity.
The problem resides in the large activation energy that must be overcome for a photon to excite an electron for harvesting purposes. Advancements in photovoltaic technology rely on "doping" the silicon substrate to lower the activation energy, thereby making the panel more efficient at converting photons to retrievable electrons. Dopants such as boron (p-type) are introduced into the semiconductor crystal to create donor and acceptor energy levels substantially closer to the valence and conduction bands. The addition of boron impurities lowers the activation energy roughly twentyfold, from 1.12 eV to 0.05 eV. Since this potential difference (E_B) is so low, the boron ionizes thermally at room temperature. This provides free energy carriers in the conduction and valence bands, allowing greater conversion of photons to electrons.
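The impact of that drop can be seen from the Boltzmann factor exp(-E/kT) at room temperature. A rough sketch (ignoring degeneracy and density-of-states factors):

```python
import math

K_B_EV = 8.617e-5    # Boltzmann constant, eV/K
kT = K_B_EV * 300.0  # ~0.026 eV at room temperature

for label, energy_ev in (("intrinsic silicon gap", 1.12),
                         ("boron acceptor level", 0.05)):
    print(f"{label}: exp(-E/kT) ~ {math.exp(-energy_ev / kT):.2e}")

# ~1.5e-19 versus ~1.4e-01: the shallow dopant level ionizes readily at 300 K,
# while thermal excitation across the full gap is vanishingly rare.
```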
Solar power allows for greater efficiency than heat-based generation, such as heat engines; the drawback with heat is that most of the heat created is lost to the surroundings. Thermal efficiency is defined as the ratio of useful work output to heat input, η = W/Q_in.
Due to the inherent irreversibility of converting heat into useful work, efficiency levels are decreased. With solar panels, on the other hand, there is no requirement to retain heat, and there are no losses such as friction.
Solar panel conversion efficiency, typically in the 20 percent range, is reduced by dust, grime, pollen, and other particulates that accumulate on the solar panel. "A dirty solar panel can reduce its power capabilities by up to 30 percent in high dust/pollen or desert areas", says Seamus Curran, associate professor of physics at the University of Houston and director of the Institute for NanoEnergy, which specializes in the design, engineering, and assembly of nanostructures.
Paying to have solar panels cleaned is often not a good investment; researchers found that panels that hadn't been cleaned or rained on for 145 days during a summer drought in California lost only 7.4 percent of their efficiency. Overall, for a typical residential solar system of 5 kilowatts, washing the panels halfway through the summer would translate into a mere $20 gain in electricity production until the summer drought ends, in about 2½ months. For larger commercial rooftop systems, the financial losses are bigger but still rarely enough to warrant the cost of washing the panels. On average, panels lost a little less than 0.05 percent of their overall efficiency per day.
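The $20 figure can be roughly reproduced from the quoted daily soiling rate. A back-of-the-envelope Python sketch; only the 5 kW system size, the 0.05%/day rate and the 145-day drought come from the text above, while the daily output and electricity price are assumptions:

```python
DAILY_KWH = 25.0       # assumed summer output of a 5 kW residential system
PRICE_PER_KWH = 0.30   # assumed retail electricity rate, $/kWh
LOSS_PER_DAY = 0.0005  # soiling: ~0.05% of output lost per dirty day
DROUGHT_DAYS = 145

wash_day = DROUGHT_DAYS // 2  # washing halfway through the drought
# Washing removes the soiling accumulated so far for each remaining day.
saved_kwh = LOSS_PER_DAY * wash_day * DAILY_KWH * (DROUGHT_DAYS - wash_day)
print(f"~${saved_kwh * PRICE_PER_KWH:.0f} of recovered production")  # ~$20
```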
Most parts of a solar module can be recycled, including up to 97% of certain semiconductor materials, the glass, and large amounts of ferrous and non-ferrous metals. Some private companies and non-profit organizations are currently engaged in take-back and recycling operations for end-of-life modules.
Recycling possibilities depend on the kind of technology used in the modules:
- Silicon based modules: aluminum frames and junction boxes are dismantled manually at the beginning of the process. The module is then crushed in a mill and the different fractions are separated - glass, plastics and metals. It is possible to recover more than 80% of the incoming weight. This process can be performed by flat glass recyclers, since the morphology and composition of a PV module are similar to those of the flat glass used in the building and automotive industries. The recovered glass, for example, is readily accepted by the glass foam and glass insulation industries.
- Non-silicon based modules: they require specific recycling technologies such as the use of chemical baths in order to separate the different semiconductor materials. For cadmium telluride modules, the recycling process begins by crushing the module and subsequently separating the different fractions. This recycling process is designed to recover up to 90% of the glass and 95% of the semiconductor materials contained. Some commercial-scale recycling facilities have been created in recent years by private companies.
In 2010, 15.9 GW of solar PV system installations were completed, with solar PV pricing survey and market research company PVinsights reporting growth of 117.8% in solar PV installation on a year-on-year basis.
With over 100% year-on-year growth in PV system installation, PV module makers dramatically increased their shipments of solar modules in 2010. They actively expanded their capacity and turned themselves into gigawatt (GW) players. According to PVinsights, five of the top ten PV module companies in 2010 were GW players: Suntech, First Solar, Sharp, Yingli and Trina Solar, most of which doubled their shipments in 2010.
The basis of producing solar panels revolves around the use of silicon cells. These silicon cells are typically 10-20% efficient at converting sunlight into electricity, with newer production models now exceeding 22%.
In order for solar panels to become more efficient, researchers across the world have been trying to develop new technologies to make solar panels more effective at turning sunlight into energy.
Average pricing information divides into three categories: small-quantity buyers (modules of all sizes in the kilowatt range annually), mid-range buyers (typically up to 10 MWp annually), and large-quantity buyers, who have access to the lowest prices. Over the long term there is clearly a systematic reduction in the price of cells and modules. For example, in 2012 the cost per watt in quantity was estimated at about US$0.60, roughly 1/250th of the 1970 cost of US$150. A 2015 study shows price per kWh dropping by 10% per year since 1980, and predicts that solar could contribute 20% of total electricity consumption by 2030, whereas the International Energy Agency predicts 16% by 2050.
Real world prices depend a great deal on local weather conditions. In a cloudy country such as the United Kingdom, price per installed kW is higher than in sunnier countries like Spain.
According to RMI, balance-of-system (BoS) elements, that is, the non-module costs of an installation (such as wiring, converters, racking systems and various other components), make up about half of the total cost of an installation.
For merchant solar power stations, where the electricity is being sold into the electricity transmission network, the cost of solar energy will need to match the wholesale electricity price. This point is sometimes called 'wholesale grid parity' or 'busbar parity'.
Some photovoltaic systems, such as rooftop installations, can supply power directly to an electricity user. In these cases, the installation can be competitive when the output cost matches the price at which the user pays for his electricity consumption. This situation is sometimes called 'retail grid parity', 'socket parity' or 'dynamic grid parity'. Research carried out by UN-Energy in 2012 suggests areas of sunny countries with high electricity prices, such as Italy, Spain and Australia, and areas using diesel generators, have reached retail grid parity.
Mounting and tracking
Ground mounted photovoltaic systems are usually large, utility-scale solar power plants. Their solar modules are held in place by racks or frames that are attached to ground based mounting supports. Ground based mounting supports include:
- Pole mounts, which are driven directly into the ground or embedded in concrete.
- Foundation mounts, such as concrete slabs or poured footings
- Ballasted footing mounts, such as concrete or steel bases that use weight to secure the solar module system in position and do not require ground penetration. This type of mounting system is well suited for sites where excavation is not possible such as capped landfills and simplifies decommissioning or relocation of solar module systems.
Roof-mounted solar power systems consist of solar modules held in place by racks or frames attached to roof-based mounting supports. Roof-based mounting supports include:
- Pole mounts, which are attached directly to the roof structure and may use additional rails for attaching the module racking or frames.
- Ballasted footing mounts, such as concrete or steel bases that use weight to secure the panel system in position and do not require through penetration. This mounting method allows for decommissioning or relocation of solar panel systems with no adverse effect on the roof structure.
- All wiring connecting adjacent solar modules to the energy harvesting equipment must be installed according to local electrical codes and should be run in a conduit appropriate for the climate conditions.
Solar trackers increase the amount of energy produced per module at a cost of mechanical complexity and need for maintenance. They sense the direction of the Sun and tilt or rotate the modules as needed for maximum exposure to the light. Alternatively, fixed racks hold modules stationary as the sun moves across the sky. The fixed rack sets the angle at which the module is held. Tilt angles equivalent to an installation's latitude are common. Most of these fixed racks are set on poles above ground.
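To illustrate why trackers help, here is a toy Python model of the direct-beam geometry (equinox, equator-facing panel tilted at latitude, ignoring atmosphere, diffuse light and seasonal declination); real-world tracking gains are smaller than this idealized figure:

```python
import math

# As the sun's hour angle sweeps from -90 to +90 degrees, a fixed panel
# tilted at latitude receives direct light scaled by cos(hour angle),
# while an ideal tracker always faces the sun (factor 1.0).
steps = 1000
hour_angles = (-90 + 180 * i / steps for i in range(steps))
fixed = sum(math.cos(math.radians(h)) for h in hour_angles) / steps

print(f"fixed-tilt output ~ {fixed:.0%} of an ideal tracker")  # ~64% (= 2/pi)
```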
Standards generally used in photovoltaic modules:
- IEC 61215 (crystalline silicon performance), 61646 (thin film performance) and 61730 (all modules, safety)
- ISO 9488 Solar energy—Vocabulary.
- UL 1703 From Underwriters Laboratories
- UL 1741 From Underwriters Laboratories
- UL 2703 From Underwriters Laboratories
- CE mark
- Electrical Safety Tester (EST) Series (EST-460, EST-22V, EST-22H, EST-110).
There are many practical applications for solar panels or photovoltaics. They can be used in agriculture as a power source for irrigation, in health care to refrigerate medical supplies, and in infrastructure. PV modules are used in photovoltaic systems and power a large variety of electric devices:
- Photovoltaic power stations
- Rooftop solar PV systems
- Standalone PV systems
- Solar hybrid power systems
- Concentrated photovoltaics
- Solar planes
- Solar-pumped lasers
- Solar vehicles
- Solar panels on spacecrafts and space stations
Varun Sivaram, Samuel Stranks, and Henry Snaith, in an article for Scientific American about perovskite solar cells, said that the solar panels of tomorrow will be transparent, lightweight, flexible, and ultra-efficient, and that we will be able to coat shingles, skylights or windows with them as cheaply as putting up wallpaper. Solar panel roads have also been proposed: such roads would support the weight of vehicles, display lane markings with embedded lights instead of reflectors or paint (easier to see in the dark), and radiate heat to keep the surface free of snow and ice. The company SolaRoad has installed a 70-meter cycle path in Amsterdam, and an American company, Solar Roadways, is developing solar panels meant for all types of roads and conditions.
The Solar Settlement with the Sun Ship in the background (Freiburg, Germany)
Solar modules on the International Space Station
- Ulanoff, L. Elon Musk and SolarCity unveil ‘world’smost efficient’ solar panel, Mashable, October 2, 2015, accessed June 28, 2016
- Milestone in solar cell efficiency achieved: New record for unfocused sunlight edges closer to theoretic limits. Wilson da Silva. Science Daily. May 17, 2016
- [University of New South Wales. "Milestone in solar cell efficiency achieved: New record for unfocused sunlight edges closer to theoretic limits." ScienceDaily. ScienceDaily, 17 May 2016. <www.sciencedaily.com/releases/2016/05/160517121811.htm>]
- Morgan Baziliana; et al. (17 May 2012). Re-considering the economics of photovoltaic power. UN-Energy (Report). United Nations. Retrieved 20 November 2012.
- KING, R.R., et al., Appl. Phys. Letters 90 (2007) 183516.
- "SunPower e20 Module".
- "HIT® Photovoltaic Module" (PDF). Sanyo / Panasonic. Retrieved 24 December 2012.
- "Improving the efficiency of solar panels". The Hindu. 24 October 2013. Retrieved 24 October 2013.
- "Micro Inverters for Residential Solar Arrays". Retrieved 2015-09-29.
- Photovoltaics Report, Fraunhofer ISE, 28 July 2014, pages 18,19
- "First Solar – FS-377 / FS-380 / FS-382 / FS-385 Datasheet" (PDF). Retrieved 2012-06-04.
- "TSM PC/PM14 Datasheet" (PDF). Retrieved 2012-06-04.
- "YGE 235 Data sheet" (PDF). Retrieved 2012-06-04.
- "CTI Solar sales brochure" (PDF). cti-solar.com. Retrieved 3 September 2010.
- "How Solar Cells Work". HowStuffWorks. Retrieved 2015-12-09.
- "Bonding in Metals and Semiconductors". 2012books.lardbucket.org. Retrieved 2015-12-09.
- Crawford, Mike (October 2012). "Self-Cleaning Solar Panels Maximize Efficiency". The American Society of Mechanical Engineers. ASME. Retrieved 15 September 2014.
- Patringenaru, Ioana (August 2013). "Cleaning Solar Panels Often Not Worth the Cost, Engineers at UC San Diego Find". UC San Diego News Center. UC San Diego News Center. Retrieved 31 May 2015.
- Lisa Krueger. 1999. "Overview of First Solar's Module Collection and Recycling Program" (PDF). Brookhaven National Laboratory p. 23. Retrieved August 2012. Check date values in:
- Karsten Wambach. 1999. "A Voluntary Take Back Scheme and Industrial Recycling of Photovoltaic Modules" (PDF). Brookhaven National Laboratory p. 37. Retrieved August 2012. Check date values in:
- Krueger. 1999. p. 12-14
- Wambach. 1999. p. 15
- Wambach. 1999. p. 17
- Krueger. 1999. p. 23
- Wambach. 1999. p. 23
- "First Breakthrough In Solar Photovoltaic Module Recycling, Experts Say". European Photovoltaic Industry Association. Retrieved January 2011. Check date values in:
- "3rd International Conference on PV Module Recycling". PV CYCLE. Retrieved October 2012. Check date values in:
- "Solar Power Plant Report".
- "PVinsights announces worldwide 2010 top 10 ranking of PV module makers". www.pvinsights.com. Retrieved 2011-05-06.
- "SolarCity Press Release".
- "Leaders and laggards of the top 10 PV module manufacturers in 2014". PV Tech. Retrieved 2015-02-21.
- "Swanson's Law and Making US Solar Scale Like Germany". Greentech Media. 24 November 2014.
- ENF Ltd. (8 January 2013). "Small Chinese Solar Manufacturers Decimated in 2012 | Solar PV Business News | ENF Company Directory". Enfsolar.com. Retrieved 29 August 2013.
- Harnessing Light. National Research Council. 1997. p. 162.
- J. Doyne Farmer, François Lafond (2 November 2015). "How predictable is technological progress?". doi:10.1016/j.respol.2015.11.001. License: cc. Note: Appendix F. A trend extrapolation of solar energy capacity.
- "Solar Photovoltaics competing in the energy sector – On the road to competitiveness" (PDF). EPIA. Retrieved August 2012. Check date values in:
- SolarProfessional.com Ground-Mount PV Racking Systems March 2013
- Massachusetts Department of Energy Resources Ground-Mounted Solar Photovoltaic Systems, December 2012
- "A Guide To Photovoltaic System Design And Installation". ecodiy.org. Retrieved 2011-07-26.
- Shingleton, J. "One-Axis Trackers – Improved Reliability, Durability, Performance, and Cost Reduction" (PDF). National Renewable Energy Laboratory. Retrieved 30 December 2012.
- Mousazadeh, Hossain; et al. "A review of principle and sun-tracking methods for maximizing solar systems output" (PDF). Renewable and Sustainable Energy Reviews 13 (2009) 1800–1818. Elsevier. Retrieved 30 December 2012.
- "Optimum Tilt of Solar Panels". MACS Lab. Retrieved 19 October 2014.
- The future of solar: lightweight, flexible and as cheap as wallpaper
A matrix is an array of real numbers (or other suitable entities) arranged in rows and columns; the entries are referred to as the elements of the matrix. The image below demonstrates a matrix, where the elements separated horizontally are known as the rows of the matrix and the elements separated vertically are known as its columns.
As we know, a matrix is arranged in rows and columns; the matrix below has 3 rows and 3 columns, so the order of the matrix is 3 × 3.
When four elements a, b, c, and d are arranged in two rows and two columns between two vertical bars, the result is called a determinant of the second order (a second-order determinant). The image below demonstrates such a determinant and its expansion.
Determinant of a Matrix
The determinant is useful for solving linear equations, capturing how a linear transformation changes area or volume, and changing variables in integrals. It can be viewed as a function whose input is a square matrix and whose output is a number. In this article we discuss minors and cofactors thoroughly. In simple language: to every square matrix A we can associate a number (real or complex), which is called the determinant of the square matrix A.
The determinant of a matrix A can be written as det(A) or |A|.
Now let's turn to our topic, minors and cofactors, starting with minors.
- The questions worked in this article have appeared in previous years' question papers.
- i represents the row of the determinant, whereas j represents the column.
- The ij-th terms are highlighted so that you can see them clearly, without any confusion.
- For each question below, the solution is demonstrated by an image.
Minor of a Matrix
The minor of an element a_ij of a determinant is the determinant obtained by deleting the i-th row and j-th column in which the element a_ij lies. The minor of an element a_ij is denoted by M_ij.
Steps for Computing Minor of a Matrix
Step 1: Hide the i-th row and j-th column of the matrix A in which the element a_ij lies.
Step 2: Compute the determinant of the matrix that remains after the row and column are removed in Step 1.
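The two steps translate directly into code. Below is a minimal Python sketch (the function name and the example matrix are my own, not from the article); it covers the 3 × 3 case used in the problems that follow, where deleting one row and one column leaves a 2 × 2 determinant:

```python
def minor(matrix, i, j):
    """Minor M_ij: the determinant left after deleting row i and
    column j (1-based indices, matching the a_ij notation above)."""
    # Step 1: hide the i-th row and j-th column.
    reduced = [
        [value for col, value in enumerate(row, start=1) if col != j]
        for r, row in enumerate(matrix, start=1) if r != i
    ]
    # Step 2: expand the remaining 2x2 determinant: ad - bc.
    (a, b), (c, d) = reduced
    return a * d - b * c

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]          # illustrative values only
print(minor(A, 2, 2))    # |1 3; 7 9| = 1*9 - 3*7 = -12
```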
Sample Problems on Minor of a Matrix
Problem 1: If the matrix A is
then write the minor of a_22.
In this question we have to find the minor of a_22; the element at that position is 0. As the definition of a minor says, we delete the i-th row and j-th column in which the asked-about element lies. The image below demonstrates how to delete the i-th row and j-th column.
After deletion, we write the remaining elements as they are and cross-multiply.
Expanding the resulting determinant gives (8 − 15), which on solving gives −7, the required answer.
Note: always remember that a second-order determinant expands as the product of the main-diagonal elements minus the product of the off-diagonal elements.
Problem 2: If the matrix A is
then find the minor of a_32.
In this question we are asked to find the minor of the element a_32, which is 1. We follow the same procedure as in the problem above: first delete the i-th row and j-th column in which the element lies.
With the i-th row and j-th column deleted, write the remaining elements as they are.
Then cross-multiply and solve:
Following the same procedure as in the question above, we solve this one too by expanding the determinant, as discussed in the introduction.
Co-factors of a Matrix
The cofactor of an element a_ij of a determinant, denoted by A_ij or C_ij, is defined as A_ij = (−1)^(i+j) M_ij, where M_ij is the minor of the element a_ij.
Formula to find cofactors
A_ij = (−1)^(i+j) M_ij
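As a quick sketch, reusing the minor() function from the earlier example (the names are illustrative, not standard library code), the formula is one line of Python:

```python
def cofactor(matrix, i, j):
    """Cofactor A_ij = (-1)**(i + j) * M_ij (1-based indices)."""
    return (-1) ** (i + j) * minor(matrix, i, j)
```

Note the checkerboard pattern of signs this produces: when i + j is even the cofactor equals the minor, and when i + j is odd it is the negative of the minor.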
Sample Problems on Co-factors of a Matrix
Problem 1: If a matrix A is
write the cofactor of the element a_32.
As asked in the question, we have to find the cofactor of the element a_32, which means i = 3 and j = 2. As when finding a minor, we delete the row and column in which the element lies, and then substitute the result into the formula A_ij = (−1)^(i+j) M_ij.
Substituting into the cofactor formula and expanding the determinant gives (−1)^(3+2) × (5 − 16) = (−1)(−11) = 11, which is the required answer.
Problem 2: If A_ij is the cofactor of the element a_ij of the determinant given below, write the value of a_32 · A_32.
The determinant is given in the question, so the row and column are known: i = 3 and j = 2.
Here, the element a_32 = 5.
Given that A_ij is the cofactor of the element a_ij of A, we can solve this question by substituting the values into the cofactor formula, as discussed in the question above.
So a_32 · A_32 = 110, which is the required answer.
Chapter 3: Demand, Supply, and Market Equilibrium
CHAPTER OUTLINE
Firms and Households: The Basic Decision-Making Units
Input Markets and Output Markets: The Circular Flow
Demand in Product/Output Markets
Changes in Quantity Demanded versus Changes in Demand
Price and Quantity Demanded: The Law of Demand
Other Determinants of Household Demand
Shift of Demand versus Movement Along the Demand Curve
From Household Demand to Market Demand
Supply in Product/Output Markets
Price and Quantity Supplied: The Law of Supply
Other Determinants of Supply
Shift of Supply versus Movement Along the Supply Curve
From Individual Supply to Market Supply
Market Equilibrium
Excess Demand
Excess Supply
Changes in Equilibrium
Demand and Supply in Product Markets: A Review
Looking Ahead: Markets and the Allocation of Resources
Firms and Households: The Basic Decision-Making Units
firm: An organization that transforms resources (inputs) into products (outputs). Firms are the primary producing units in a market economy.
entrepreneur: A person who organizes, manages, and assumes the risks of a firm, taking a new idea or a new product and turning it into a successful business.
households: The consuming units in an economy.
Input Markets and Output Markets: The Circular Flow
product or output markets: The markets in which goods and services are exchanged.
input or factor markets: The markets in which the resources used to produce goods and services are exchanged.
FIGURE 3.1 The Circular Flow of Economic Activity
Diagrams like this one show the circular flow of economic activity, hence the name circular flow diagram. Here goods and services flow clockwise: labor services supplied by households flow to firms, and goods and services produced by firms flow to households. Payment (usually money) flows in the opposite (counterclockwise) direction: payment for goods and services flows from households to firms, and payment for labor services flows from firms to households.
Note (color guide): In Figure 3.1 households are depicted in blue and firms are depicted in red. From now on, all diagrams relating to the behavior of households will be blue or shades of blue, and all diagrams relating to the behavior of firms will be red or shades of red. The green color indicates a monetary flow.
labor market: The input/factor market in which households supply work for wages to firms that demand labor.
capital market: The input/factor market in which households supply their savings, for interest or for claims to future profits, to firms that demand funds to buy capital goods.
land market: The input/factor market in which households supply land or other real property in exchange for rent.
factors of production: The inputs into the production process. Land, labor, and capital are the three key factors of production.
Input and output markets are connected through the behavior of both firms and households. Firms determine the quantities and character of outputs produced and the types and quantities of inputs demanded. Households determine the types and quantities of products demanded and the quantities and types of inputs supplied.
Demand in Product/Output Markets
A household's decision about what quantity of a particular output, or product, to demand depends on a number of factors, including:
- The price of the product in question.
- The income available to the household.
- The household's amount of accumulated wealth.
- The prices of other products available to the household.
- The household's tastes and preferences.
- The household's expectations about future income, wealth, and prices.
quantity demanded: The amount (number of units) of a product that a household would buy in a given period if it could buy all it wanted at the current market price.
Changes in Quantity Demanded versus Changes in Demand
The most important relationship in individual markets is that between market price and quantity demanded. Changes in the price of a product affect the quantity demanded per period. Changes in any other factor, such as income or preferences, affect demand. Thus, we say that an increase in the price of Coca-Cola is likely to cause a decrease in the quantity of Coca-Cola demanded. However, we say that an increase in income is likely to cause an increase in the demand for most goods.
Price and Quantity Demanded: The Law of Demand
demand schedule: Shows how much of a given product a household would be willing to buy at different prices for a given time period.
demand curve: A graph illustrating how much of a given product a household would be willing to buy at different prices.
TABLE: Alex's Demand Schedule for Gasoline
Price (per Gallon) | Quantity Demanded (Gallons per Week)
$8.00 | 0
 7.00 | 2
 6.00 | 3
 5.00 | 5
 4.00 | 7
 3.00 | 10
 2.00 | 14
 1.00 | 20
 0.00 | 26
FIGURE 3.2 Alex's Demand Curve
The relationship between price (P) and quantity demanded (q) presented graphically is called a demand curve. Demand curves have a negative slope, indicating that lower prices cause quantity demanded to increase. Note that Alex's demand curve is blue; demand in product markets is determined by household choice.
Demand Curves Slope Downward
law of demand: The negative relationship between price and quantity demanded: as price rises, quantity demanded decreases; as price falls, quantity demanded increases.
It is reasonable to expect quantity demanded to fall when price rises, ceteris paribus, and to expect quantity demanded to rise when price falls, ceteris paribus. Demand curves have a negative slope.
Other Properties of Demand Curves
- They have a negative slope.
- They intersect the quantity (X) axis, a result of time limitations and diminishing marginal utility.
- They intersect the price (Y) axis, a result of limited income and wealth.
The actual shape of an individual household demand curve, whether it is steep or flat, whether it is bowed in or bowed out, depends on the unique tastes and preferences of the household and other factors.
Other Determinants of Household Demand
Income and Wealth
income: The sum of all a household's wages, salaries, profits, interest payments, rents, and other forms of earnings in a given period of time. It is a flow measure.
wealth or net worth: The total value of what a household owns minus what it owes. It is a stock measure.
normal goods: Goods for which demand goes up when income is higher and for which demand goes down when income is lower.
inferior goods: Goods for which demand tends to fall when income rises.
Prices of Other Goods and Services
substitutes: Goods that can serve as replacements for one another; when the price of one increases, demand for the other increases.
perfect substitutes: Identical products.
complements, complementary goods: Goods that "go together"; a decrease in the price of one results in an increase in demand for the other, and vice versa.
ECONOMICS IN PRACTICE: Have You Bought This Textbook?
Recent work by Judy Chevalier and Austan Goolsbee found that even when instructors require particular texts, students find substitutes when prices are high. Even in the textbook market, student demand does slope down! One might think that the total number of textbooks, used plus new, should match the class enrollment. After all, the text is required! In fact, what they found was that the higher the textbook price, the more text sales fell below class enrollments.
THINKING PRACTICALLY
1. If you were to construct a demand curve for a required text in a course, where would that demand curve intersect the horizontal axis?
2. And this much harder question: in the year before a new edition of a text is published, many college bookstores will not buy the older edition. Given this fact, what do you think happens to the gap between enrollments and new-plus-used book sales in the year before a new edition of a text is expected?
Tastes and Preferences
Income, wealth, and prices of goods available are the three factors that determine the combinations of goods and services that a household is able to buy. Changes in preferences can and do manifest themselves in market behavior. Within the constraints of prices and incomes, preference shapes the demand curve, but it is difficult to generalize about tastes and preferences. First, they are volatile. Second, tastes are idiosyncratic.
Expectations
What you decide to buy today certainly depends on today's prices and your current income and wealth. There are many examples of the ways expectations affect demand. Increasingly, economic theory has come to recognize the importance of expectations. It is important to understand that demand depends on more than just current incomes, prices, and tastes.
Shift of Demand versus Movement Along a Demand Curve
TABLE: Shift of Alex's Demand Schedule Due to Increase in Income (Gallons per Week)
Price (per Gallon) | Schedule D0 (Income of $500 per Week) | Schedule D1 (Income of $700 per Week)
$8.00 | 0 | 3
 7.00 | 2 | 5
 6.00 | 3 | 7
 5.00 | 5 | 10
 4.00 | 7 | 12
 3.00 | 10 | 15
 2.00 | 14 | 19
 1.00 | 20 | 24
 0.00 | 26 | 30
FIGURE 3.3 Shift of a Demand Curve following a Rise in Income
When the price of a good changes, we move along the demand curve for that good. When any other factor that influences demand changes (income, tastes, and so on), the relationship between price and quantity is different; there is a shift of the demand curve, in this case from D0 to D1. Gasoline is a normal good.
shift of a demand curve: The change that takes place in a demand curve corresponding to a new relationship between quantity demanded of a good and price of that good. The shift is brought about by a change in the original conditions.
movement along a demand curve: The change in quantity demanded brought about by a change in price.
A change in the price of a good or service leads to a change in quantity demanded (movement along a demand curve). A change in income, preferences, or prices of other goods or services leads to a change in demand (shift of a demand curve).
FIGURE 3.4 Shifts versus Movement Along a Demand Curve
a. When income increases, the demand for inferior goods shifts to the left and the demand for normal goods shifts to the right.
FIGURE 3.4 Shifts versus Movement Along a Demand Curve (continued)
b. If the price of hamburger rises, the quantity of hamburger demanded declines; this is a movement along the demand curve. The same price rise for hamburger would shift the demand for chicken (a substitute for hamburger) to the right and the demand for ketchup (a complement to hamburger) to the left.
From Household Demand to Market Demand
market demand: The sum of all the quantities of a good or service demanded per period by all the households buying in the market for that good or service.
FIGURE 3.5 Deriving Market Demand from Individual Demand Curves
Total demand in the marketplace is simply the sum of the demands of all the households shopping in a particular market. It is the sum of all the individual demand curves, that is, the sum of all the individual quantities demanded at each price.
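A tiny Python sketch makes this "horizontal summation" concrete; the household schedules below are made up for illustration and are not figures from the text:

```python
# At each price, market quantity demanded is the sum of every
# household's quantity demanded at that price.
household_demands = {
    "household_A": {4.00: 2, 3.00: 4, 2.00: 6},
    "household_B": {4.00: 3, 3.00: 5, 2.00: 9},
}

prices = sorted(next(iter(household_demands.values())), reverse=True)
market_demand = {
    price: sum(schedule[price] for schedule in household_demands.values())
    for price in prices
}
print(market_demand)   # {4.0: 5, 3.0: 9, 2.0: 15}
```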
Supply in Product/Output Markets
Firms build factories, hire workers, and buy raw materials because they believe they can sell the products they make for more than it costs to produce them.
profit: The difference between revenues and costs.
Price and Quantity Supplied: The Law of Supply
quantity supplied: The amount of a particular product that a firm would be willing and able to offer for sale at a particular price during a given time period.
supply schedule: A table showing how much of a product firms will sell at alternative prices.
law of supply: The positive relationship between price and quantity of a good supplied: an increase in market price will lead to an increase in quantity supplied, and a decrease in market price will lead to a decrease in quantity supplied.
supply curve: A graph illustrating how much of a product a firm will sell at different prices.
TABLE: Clarence Brown's Supply Schedule for Soybeans
Price (per Bushel) | Quantity Supplied (Bushels per Year)
$1.50 | 0
 1.75 | 10,000
 2.25 | 20,000
 3.00 | 30,000
 4.00 | 45,000
 5.00 |
FIGURE 3.6 Clarence Brown's Individual Supply Curve
A producer will supply more when the price of output is higher. The slope of a supply curve is positive. Note that the supply curve is red: supply is determined by choices made by firms.
Other Determinants of Supply
The Cost of Production
For a firm to make a profit, its revenue must exceed its costs. The cost of production depends on a number of factors, including the available technologies and the prices and quantities of the inputs needed by the firm (labor, land, capital, energy, and so on).
The Prices of Related Products
Assuming that its objective is to maximize profits, a firm's decision about what quantity of output, or product, to supply depends on:
1. The price of the good or service.
2. The cost of producing the product, which in turn depends on:
   - the price of required inputs (labor, capital, and land), and
   - the technologies that can be used to produce the product.
3. The prices of related products.
Shift of Supply versus Movement Along a Supply Curve
movement along a supply curve: The change in quantity supplied brought about by a change in price.
shift of a supply curve: The change that takes place in a supply curve corresponding to a new relationship between quantity supplied of a good and the price of that good. The shift is brought about by a change in the original conditions.
TABLE 3.4 Shift of Supply Schedule for Soybeans following Development of a New Disease-Resistant Seed Strain (Bushels per Year)
Price (per Bushel) | Schedule S0 (Using Old Seed) | Schedule S1 (Using New Seed)
$1.50 | 0 | 5,000
 1.75 | 10,000 | 23,000
 2.25 | 20,000 | 33,000
 3.00 | 30,000 | 40,000
 4.00 | 45,000 | 54,000
 5.00 | |
FIGURE 3.7 Shift of the Supply Curve for Soybeans following Development of a New Seed Strain
When the price of a product changes, we move along the supply curve for that product; the quantity supplied rises or falls. When any other factor affecting supply changes, the supply curve shifts.
As with demand, it is very important to distinguish between movements along supply curves (changes in quantity supplied) and shifts in supply curves (changes in supply). A change in the price of a good or service leads to a change in quantity supplied (movement along a supply curve). A change in costs, input prices, technology, or prices of related goods and services leads to a change in supply (shift of a supply curve).
From Individual Supply to Market Supply
market supply: The sum of all that is supplied each period by all producers of a single product.
FIGURE 3.8 Deriving Market Supply from Individual Firm Supply Curves
Total supply in the marketplace is the sum of all the amounts supplied by all the firms selling in the market. It is the sum of all the individual quantities supplied at each price.
Market Equilibrium
equilibrium: The condition that exists when quantity supplied and quantity demanded are equal. At equilibrium, there is no tendency for price to change.
Excess Demand
excess demand or shortage: The condition that exists when quantity demanded exceeds quantity supplied at the current price.
FIGURE 3.9 Excess Demand, or Shortage
At a price of $1.75 per bushel, quantity demanded exceeds quantity supplied. When excess demand exists, there is a tendency for price to rise. When quantity demanded equals quantity supplied, excess demand is eliminated and the market is in equilibrium. Here the equilibrium price is $2.00 and the equilibrium quantity is 40,000 bushels. When quantity demanded exceeds quantity supplied, price tends to rise; as the price in a market rises, quantity demanded falls and quantity supplied rises until an equilibrium is reached at which quantity demanded and quantity supplied are equal.
Excess Supply
excess supply or surplus: The condition that exists when quantity supplied exceeds quantity demanded at the current price.
FIGURE 3.10 Excess Supply, or Surplus
At a price of $3.00, quantity supplied exceeds quantity demanded by 20,000 bushels. This excess supply will cause the price to fall. When quantity supplied exceeds quantity demanded at the current price, the price tends to fall. When price falls, quantity supplied is likely to decrease and quantity demanded is likely to increase until an equilibrium price is reached where quantity supplied and quantity demanded are equal.
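The price-adjustment story in Figures 3.9 and 3.10 can be sketched in a few lines of Python. The schedules below are illustrative stand-ins, chosen only so that the equilibrium matches the text ($2.00 and 40,000 bushels) and the surplus at $3.00 is 20,000 bushels:

```python
demand = {1.75: 55_000, 2.00: 40_000, 3.00: 20_000}   # bushels demanded
supply = {1.75: 30_000, 2.00: 40_000, 3.00: 40_000}   # bushels supplied

for price in sorted(demand):
    qd, qs = demand[price], supply[price]
    if qd > qs:
        print(f"${price:.2f}: shortage of {qd - qs}; price tends to rise")
    elif qs > qd:
        print(f"${price:.2f}: surplus of {qs - qd}; price tends to fall")
    else:
        print(f"${price:.2f}: equilibrium at {qd} bushels")
```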
Changes in Equilibrium
When supply and demand curves shift, the equilibrium price and quantity change.
FIGURE: The Coffee Market: A Shift of Supply and Subsequent Price Adjustment
Before the freeze, the coffee market was in equilibrium at a price of $1.20 per pound. At that price, quantity demanded equaled quantity supplied. The freeze shifted the supply curve to the left (from S0 to S1), increasing the equilibrium price to $2.40.
ECONOMICS IN PRACTICE: Coffee or Tea?
China is rapidly changing, and tea-drinking habits are no exception. Chinese consumers have discovered coffee! Some observers suggest that the fast pace of current-day China is more compatible with coffee drinking than tea. Perhaps coffee drinking is a complement to economic growth? With new and large populations now interested in coffee, the world demand for coffee shifts rightward. This is good news for coffee growers. As you already know from this chapter, however, how good that news really is from the point of view of coffee prices depends on the supply side as well!
THINKING PRACTICALLY
1. Show in a graph the effect that the growth in China's interest in coffee will likely have on coffee prices. What features of supply determine how big the price increase will be?
FIGURE 3.12 Examples of Supply and Demand Shifts for Product X
Demand and Supply in Product Markets: A Review
Here are some important points to remember about the mechanics of supply and demand in product markets:
- A demand curve shows how much of a product a household would buy if it could buy all it wanted at the given price. A supply curve shows how much of a product a firm would supply if it could sell all it wanted at the given price.
- Quantity demanded and quantity supplied are always per time period, that is, per day, per month, or per year.
- The demand for a good is determined by price, household income and wealth, prices of other goods and services, tastes and preferences, and expectations.
- The supply of a good is determined by price, costs of production, and prices of related products. Costs of production are determined by available technologies of production and input prices.
- Be careful to distinguish between movements along supply and demand curves and shifts of these curves. When the price of a good changes, the quantity of that good demanded or supplied changes; that is, a movement occurs along the curve. When any other factor changes, the curve shifts, or changes position.
- Market equilibrium exists only when quantity supplied equals quantity demanded at the current price.
Looking Ahead: Markets and the Allocation of Resources
You can already begin to see how markets answer the basic economic questions of what is produced, how it is produced, and who gets what is produced. Demand curves reflect what people are willing and able to pay for products; demand curves are influenced by incomes, wealth, preferences, prices of other goods, and expectations. Firms in business to make a profit have a good reason to choose the best available technology: lower costs mean higher profits. When a good is in short supply, price rises. As it does, those who are willing and able to continue buying do so; others stop buying.
REVIEW TERMS AND CONCEPTS
capital market; complements, complementary goods; demand curve; demand schedule; entrepreneur; equilibrium; excess demand or shortage; excess supply or surplus; factors of production; firm; households; income; inferior goods; input or factor markets; labor market; land market; law of demand; law of supply; market demand; market supply; movement along a demand curve; movement along a supply curve; normal goods; perfect substitutes; product or output markets; profit; quantity demanded; quantity supplied; shift of a demand curve; shift of a supply curve; substitutes; supply curve; supply schedule; wealth or net worth
Cyberspace requires a special and unique addressing system, known as the domain name system (DNS), which applies throughout cyberspace and does not recognize regional or national boundaries. In computer networking, each computer on the internet has a unique address, called an Internet Protocol (IP) address. An IP address takes the form of four sets of numbers, each between 0 and 255, separated by periods, or "dots" (such as 207.144.232.12). These numbers are what computers use to route traffic on the internet.
The organization involved in this matter is the Internet Assigned Numbers Authority (IANA). This organization oversees IP address allocation, DNS root zone management, and other internet protocol assignments. It is operated by the Internet Corporation for Assigned Names and Numbers (ICANN) under the authority of the United States Commerce Department. IP numbers are very useful and understandable for computers, but they are too long for humans to remember.
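As a minimal sketch of this name-to-number lookup (assuming a machine with a working DNS resolver; the hostname is just an example), Python's standard library can resolve a domain name to its IP address in one call:

```python
import socket

# Ask the system's DNS resolver to translate a name into an IPv4 address.
ip = socket.gethostbyname("example.com")
print(ip)   # prints a dotted-quad address, e.g. "93.184.216.34"
```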
MEANING OF DOMAIN NAME
"Domain" comes from "dominium", a Latin word for property or right of ownership. A dictionary defines "domain" as the territory over which dominion or authority is exerted. On the internet, a domain signifies "ownership" of, or a "space" in, the digital and virtual world of networked computers. A domain name is thus a name that refers to a digital domain or territory on the internet. It is a textual address by which anyone can find your host machine on the internet. It contains labels separated by dots. For example, in "fskk.com", "fskk" and "com" are labels separated by a dot.
URL stands for Uniform Resource Locator, a form of address that specifies the location of an object, usually a webpage or a website, on the Internet. It contains three parts:
1. Protocol (e.g. http)
2. Domain name of an Internet host (e.g. www.fskk.com.my)
3. Path or file name (e.g. welcome.html)
Here is an example of how a URL breaks down into these parts:
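A minimal Python sketch using the standard urllib.parse module (the URL below is illustrative):

```python
from urllib.parse import urlparse

url = urlparse("http://www.fskk.com.my/welcome.html")
print(url.scheme)   # protocol: "http"
print(url.netloc)   # domain name of the Internet host: "www.fskk.com.my"
print(url.path)     # path or file name: "/welcome.html"
```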
So the domain name is used in URLs too. When you use the web or send an e-mail message, a domain name is used. The URL http://fskk.com.my contains the domain name fskk.com.my, and an e-mail address ending in @yahoo.com contains the domain name yahoo.com.
LEVELS OF DOMAIN NAME
The levels of this URL are:
http = Hypertext Transfer Protocol
www = World Wide Web
fskk = second-level domain (SLD)
  * first come, first served
  * free to choose
com = top-level domain (TLD)
  * pre-defined by ICANN and the InterNIC
my = country-code top-level domain (ccTLD)
DNS names must be unique:
1. There cannot be two or more names that are the same.
2. Names need only differ slightly.
Registering authorities and top-level domains are independent:
1. A name in one domain scheme does not preclude that name in another.
2. You must register in all domains to control a name.
3. You must register all combinations.
4. Registering authorities operate under their own regulations.
Using a domain name which is the same as, or similar to, a particular trademark can happen in many ways:
1. Slight amendment to letters
2. Reference in metatags
3. Deep linking
Cybersquatting is the registration of a domain name with a view to getting something in return from the owner of a particular trademark who wishes to use that domain name. It can happen because the domain name can be sold to a company or organization which intends to use it, or sold to a third party who has other interests in keeping the domain name, or registered simply to prevent the owner of a particular trademark from using it.
Disney wins domain name case
Sunday, 06 September 2009
Disney Enterprises, the largest media and entertainment conglomerate in the world, submitted a complaint to the National Arbitration Forum requesting that six domain names be transferred to it.
The six domain names are: marypoppinsonbroadway.com, marypoppinstickets.net, marypoppinstickets.org, hannahmontanaticketsonline.com, and littlemermaidticketsny.com. Disney Enterprises owns various trademarks. The domain names marypoppinsonbroadway.com, marypoppinstickets.net, and marypoppinstickets.org are confusingly similar to its Mary Poppins mark; hannahmontanaticketsonline.com is confusingly similar to its Hannah Montana mark; and littlemermaidticketsny.com is confusingly similar to its Little Mermaid mark. Moreover, the entertainment conglomerate demonstrated that the domain names were registered and used in bad faith. The complainant demonstrated that two of the domain names resolve to its "Disney on Broadway" web page. Marypoppinstickets.org, hannahmontanaticketsonline.com, and littlemermaidticketsny.com appear to resolve "to a commercial website that sells goods and services that compete with Complainant's business. Specifically, it appears that Respondent is diverting Internet users seeking Complainant's entertainment goods and services to a website that sells tickets to Complainant's shows, as well as shows of competitors, in competition with Complainant's own sale of its goods and services." Because the respondent failed to submit a response and because the complainant managed to demonstrate all the elements required, the Panel decided that the six disputed domain names be transferred from the respondent to the complainant.
Almost half of Malaysian listed companies yet to secure domain name
Malaysia’s domain registrar moves to pre-register .my domain
By AvantiKumar, 07 Sep 2009
KUALA LUMPUR, 7 SEPTEMBER 2009 -- Almost half of Malaysian public-listed companies have yet to secure their .my domain name, according to the country’s sole registrar .my DOMAIN REGISTRY.
“In support of Bursa Malaysia’s [Malaysian stock exchange’s] directive for all listed companies to have a corporate website, .my DOMAIN REGISTRY has announced that it is pre-reserving the domain names for a number of Bursa Malaysia companies it has identified as not yet having a ‘.my’ address,” said .my DOMAIN REGISTRY director, Shariya Haniz Zulkifli.
“From our research, nearly 50 per cent of the companies listed on Bursa Malaysia—inclusive of all three: main board, second board and MESDAQ—do not have, or do not actively identify their online presence via a .my domain name,” said Zulkifli, adding that the agency would ‘hold’ these domains on behalf of these companies until 31 October 2009 as part of its continuing efforts to encourage more Malaysian businesses to secure their online IP [intellectual property].
Formerly known as MYNIC, .my DOMAIN is the sole administrator for Web addresses that end with .my in Malaysia, and is an agency under the ministry of science, technology and innovation (MOSTI) regulated by the Malaysian Communications and Multimedia Commission (MCMC). As the national level domain name, .my gives Malaysian businesses and individuals their unique brand identity on the Internet.
“To protect your business IP, we are advocating that Malaysian businesses, particularly public-listed ones, register with .my DOMAIN REGISTRY resellers. The ‘.my’ unique domain name not only protects IP rights here, it is arguably less open to cyber-squatting issues compared to the ‘.com’ domain names,” said Zulkifli.
“By registering domain names derived from famous or known brands, unscrupulous cyber squatters go to the extent of luring online consumers into purchasing counterfeit products, giving away personal information and exposing them to malware,” she added.
“As the national level domain name, ‘.my’ gives Malaysian businesses a unique brand identity that is accessible worldwide on the Internet. Utilising a ‘.my’ address will therefore not only differentiate local businesses from foreign entities but will also enhance its export and international market potential as a Malaysian company,” she said.
“Even small-medium enterprises [SMEs] should make their Malaysian identity more noticeable online as e-commerce has become one of the most important facets of the Internet as it is relatively cost-efficient, and a very effective and credible way of reaching out to the global market,” she said.
How to Choose the Right Domain Name
What’s your favorite word? There’s a certain magic in the right combination of syllables, the way a specific word rolls right off the tongue. Words like gregarious, origami and highfalutin sound fantastic -- even when all by themselves. These words, however, are also hard to spell, difficult to define and almost impossible to remember when the occasion finally does call for their use. What’s your favorite word? When you want to know how to choose the right domain name, the words you like no longer matter at all.
The most popular domain names on the Web hardly even sound like real words (even if they are): Google, Yahoo, Wikipedia. It might sound like a lot of nonsense, but you’re listening to the symphony of money when you say these odd, one-word domain names. Maybe you don’t like the words -- but you know the sites (and so does everyone else). Need to know how to choose the right domain name? It’s time to take a crash course on Internet names, and online naming, in general. (from http://tools.devshed.com/c/a/Domain-Name)
http://www.igoldrush.com/intro2d.htm (you can get information about the meaning of domain names, their characteristics, the different kinds of domain names, some slightly technical stuff about domain names, what you can do with a domain name, and why you should buy one)
The Cherokee–American wars were a series of back-and-forth raids, campaigns, ambushes, minor skirmishes, and several full-scale frontier battles in the Old Southwest from 1776 to 1795 between the Cherokee (Ani-Yunwiya, Tsalagi) and the Americans on the frontier. Most of the events took place in the Upper South. While their fight stretched across the entire period, there were times, sometimes ranging over several months, of little or no action.
The Cherokee leader Dragging Canoe, whom some historians call “the Savage Napoleon”, and his warriors and other Cherokee fought alongside and in conjunction with Indians from a number of other tribes both in the Old Southwest and in the Old Northwest, most often Muscogee (Muskokulke) in the former and the Shawnee (Saawanwa) in the latter. During the Revolution, they also fought alongside British troops, Loyalist militia, and the King’s Carolina Rangers.
Open warfare broke out in the summer of 1776 along the frontier of the Watauga, Holston, Nolichucky, and Doe rivers in East Tennessee, as well as the colonies (later states) of Virginia, North Carolina, South Carolina, and Georgia. It later spread to those along the Cumberland River in Middle Tennessee and in Kentucky.
The wars of the Cherokee and the Americans divide into two phases.
In the first phase, lasting 1776–1783, the Cherokee also fought as allies of the Kingdom of Great Britain against its rebellious colonies. The first part of this phase, from the summer of 1776 to the summer of 1777, involved all sections of the Cherokee nation and is often referred to as the "Cherokee War of 1776". At the end of 1776, the only militant Cherokee were those who migrated with Dragging Canoe to the Chickamauga towns, for which the frontierspeople knew them as the "Chickamauga" or "Chickamauga Cherokee".
In the second phase, lasting 1783–1794, the Cherokee also served as proxies of the Viceroyalty of New Spain against the new United States of America. Because of their relocation westward to new homes initially known as the "Five Lower Towns", they then became known as the Lower Cherokee, a moniker which persisted well into the nineteenth century. In 1786, the Lower Cherokee became founding members of the Native Americans' Western Confederacy organized by the Mohawk leader Joseph Brant, and took an active part in the Northwest Indian War.
The conflict in the Southwest ended in November 1794 with the Treaty of Tellico Blockhouse. The Northwest Indian War, which the Cherokee were also involved in, ended with the Treaty of Greenville in 1795.
- 1 Prelude (1763–1775)
- 1.1 Aftermath of the French and Indian War
- 1.2 Early colonial settlements in Upper East Tennessee (1768–1772)
- 1.3 Settlements of the Cherokee in 1775
- 1.4 Henderson Purchase (1775)
- 2 Revolutionary War phase: Cherokee War of 1776
- 2.1 Flight of the Loyalists
- 2.2 Patriot black propaganda
- 2.3 Battle of Sullivan's Island
- 2.4 Visit from the northern tribes
- 2.5 First Cherokee campaigns
- 2.6 Colonial response
- 2.7 Treaties of 1777
- 2.8 Other Southeastern Indian nations
- 2.9 Migration to the Chickamauga area
- 2.10 Continuing the fight
- 3 Revolutionary War phase: Southern strategy (1778–1783)
- 3.1 British victory in the North
- 3.2 British victories in the South
- 3.3 First Cumberland settlement
- 3.4 Loss in the North
- 3.5 Raids in the Overmountain region
- 3.6 Death of John Stuart
- 3.7 Scott and Shelby expeditions
- 3.8 Cameron's expedition
- 3.9 Concord between the Lenape and the Overhill Cherokee
- 3.10 Loss of Mobile
- 3.11 Chickasaw-American war
- 3.12 Robertson and Donelson parties
- 3.13 Capture of Charlestown
- 3.14 Defense of Augusta and Battle of Kings Mountain
- 3.15 Cherokee Overmountain campaign of 1780
- 3.16 Cherokee Cumberland campaign, 1780–1781
- 3.17 First Cherokee Overmountain campaign of 1781
- 3.18 Loss of Pensacola
- 3.19 Battle of the Bluff
- 3.20 Shawnee Overmountain campaign, 1781–1785
- 3.21 Loss of Augusta
- 3.22 Second Cherokee Overmountain campaign of 1781
- 3.23 Lenape refugees
- 3.24 Politics in the Overhill Towns
- 3.25 Cherokee Georgia campaign of 1781
- 3.26 Death of Alexander Cameron
- 3.27 Diplomatic mission to Ft. St. Louis
- 3.28 Loss of Savannah
- 3.29 Cherokee Overmountain campaign of 1782
- 3.30 Migration to the Lower Towns
- 3.31 Another visit from the North
- 3.32 Georgia Indian war of 1782
- 3.33 Cherokee in the Ohio region
- 3.34 St. Augustine conference
- 3.35 Treaty of Long Swamp Creek (1783)
- 3.36 More Overhill politics
- 3.37 Treaty of Paris (1783)
- 3.38 Cherokee Overmountain campaign of 1783
- 3.39 Treaty of French Lick
- 3.40 Treaty of Augusta (1783)
- 4 Post-Revolution phase: New directions (1783–1788)
- 4.1 Coldwater Town
- 4.2 Spanish Indian treaties
- 4.3 Unquiet Western frontier
- 4.4 Towards an Indian alliance
- 4.5 Muscogee council at Tuckabatchee
- 4.6 Free Republic of Franklin
- 4.7 Northwest Indian War (1785–1795)
- 4.8 Treaty of Galphinton
- 4.9 Treaty of Hopewell
- 4.10 Houston County, Georgia
- 4.11 The Spanish Conspiracy
- 4.12 Cherokee war of 1786
- 4.13 Formation of the Western Confederacy
- 4.14 Trouble with Franklin and Kentucky
- 4.15 Coldwater Indian war (1785–1787)
- 5 Post-Revolution: Peak of Cherokee influence (1788–1792)
- 5.1 Chiksika's band of Shawnee
- 5.2 Cherokee-Franklin war of 1788 (1788–1789)
- 5.3 Blow to the Western Confederacy
- 5.4 Implosion of the Spanish Conspiracy
- 5.5 Council at Coweta
- 5.6 Prisoner exchange
- 5.7 Non-treaty of Swannanoa
- 5.8 Doublehead's war
- 5.9 Treaty of New York (1790)
- 5.10 Muscle Shoals settlement
- 5.11 Bob Benge's war
- 5.12 Treaty of Holston (1791)
- 5.13 Battle of the Wabash
- 5.14 Death of the "Savage Napoleon"
- 6 Post-Revolution: the Watts years (1792–1795)
- 6.1 John Watts
- 6.2 Southwest Territory Indian War, 1792–1795
- 6.2.1 Raiding season, spring and summer 1792
- 6.2.2 Invasion of the Miro District
- 6.2.3 Northern concerns
- 6.2.4 Death of an ally
- 6.2.5 Spring and summer campaigns, 1793
- 6.2.6 Peace overtures
- 6.2.7 Attack on the diplomatic party
- 6.2.8 Invasion of the Eastern Districts
- 6.2.9 Battle of Etowah
- 6.2.10 Southwest Point Blockhouse
- 6.2.11 Tellico Blockhouse
- 6.2.12 Another Spanish treaty
- 6.2.13 Spring and summer 1794
- 6.2.14 Treaty of Philadelphia (1794)
- 6.2.15 End of Lesley’s war party
- 6.2.16 Battle of Fallen Timbers
- 6.2.17 Aborted invasion of the Miro District
- 6.2.18 Trans-Oconee Republic
- 6.2.19 Nickajack Expedition
- 6.2.20 Treaty of Tellico Blockhouse (1794)
- 6.2.21 Muscogee continue the war
- 6.3 Treaty of Greenville
- 6.4 Treaty of Coleraine
- 6.5 Treaty of San Lorenzo
- 7 Aftermath and Assessment
- 8 See also
- 9 References
- 10 Sources
- 11 External links
The French and Indian War (1754–1763) and the related European theater conflict known as the Seven Years' War (1756–1763) laid many of the foundations for the conflict between the Cherokee and the American settlers on the frontier. These tensions on the frontier broke out into open hostilities with the advent of the American Revolution.
Aftermath of the French and Indian War
The action of the French and Indian War in North America included the Anglo-Cherokee War, lasting from 1758 to 1761. British forces under General James Grant destroyed a number of major Cherokee towns, which were never reoccupied. Kituwa was abandoned, and its former residents migrated west; they took up residence at Mialoquo, called Great Island Town, on the Little Tennessee River among the Overhill Cherokee.
At the end of this conflict within a conflict, the Cherokee signed the Treaty of Long-Island-on-the-Holston with the Colony of Virginia (1761) and the Treaty of Charlestown with the Province of South Carolina (1762). Standing Turkey, the First Beloved Man during the conflict, was replaced by Attakullakulla, who was pro-British.
In the aftermath of the Seven Years' War, France in defeat ceded that part of the Louisiana Territory east of the Mississippi and Canada to the British. Spain took control of Louisiana west of the Mississippi. In exchange it ceded Florida to Great Britain, which created the jurisdictions of East Florida and West Florida.
Valuing the support of Native Americans, King George III issued the Royal Proclamation of 1763. This prohibited colonial settlement west of the Appalachian Mountains, in an effort to preserve territory for the native tribes. Many colonials resented the interference with their drive to the vast western lands. The proclamation was a major irritant to the colonists, contributing to their support for the American Revolution and for ending interference by the Crown.
For example, the British had previously announced that a colony, to be called Charlotina, was planned for the lands of the Ohio Valley and Great Lakes regions, which under the French had been part of Upper Louisiana; it was also known as the Illinois Country. The Proclamation of 1763 ended those plans of another Anglo-American colony. In 1774 the Crown attached the lands in question to the Province of Quebec. After achieving independence in 1783, the United States identified the area north of the Ohio River as the Northwest Territory.
John Stuart, the only officer to escape the Fort Loudoun massacre that took place during the Anglo-Cherokee War, was appointed as the British Superintendent of Indian Affairs for the Southern District, based in Charlestown. His deputy to the Cherokee, Alexander Cameron, lived among them, first at Keowee, then at Toqua on the Little Tennessee River. Cameron's assistant, John McDonald, set up a base 100 miles to the southwest on the west side of the Chickamauga River, where it was crossed by the Great Indian Warpath. To the Muscogee, Stuart sent David Taitt as deputy. The deputy to the Choctaw was his brother Charles Stuart. Among the Chickasaw, Farquhar Bethune served as Stuart’s deputy. John’s brother Henry Stuart served as chief deputy superintendent.
Treaty of Hard Labour (1768)
To resolve the problem of settlers living beyond the line established in the previous treaty, John Stuart, as Superintendent for Southern Indian Affairs, negotiated a treaty, signed on October 17, 1768, in which the Cherokee surrendered to the Colony of Virginia their claims to the land between the Allegheny Mountains and the Ohio River. Essentially, it covered what is now West Virginia and eastern Kentucky, with a bit of the southwest corner of Pennsylvania.
Treaty of Fort Stanwix (1768)
After Pontiac's War (1763–1764), the Iroquois Confederacy (Haudenosaunee) ceded to the British government its claims to the hunting grounds between the Ohio and Cumberland rivers, known to them and other Indians as Kain-tuck-ee (Kentucky), in the 1768 Treaty of Fort Stanwix. It had controlled this area by right of conquest after pushing Siouan tribes out to the west during the Beaver Wars of the 17th century.
With a significant obstacle removed, in 1769 developers and land speculators planned to start a new colony called Vandalia in the territory ceded by the Cherokee. Plans for that fell through, however, and in 1774 Virginia annexed it as the District of West Augusta.
Treaty of Augusta (1773)
In this treaty signed on June 1, 1773, with the Province of Georgia, the Cherokee and the Muskogee ceded their claims to 2 million acres in return for the cancellation of their enormous debts to traders of the colony. The land ceded now makes up the counties of Wilkes, Oglethorpe, Elbert, and Lincoln, plus parts of the surrounding counties. Most of the Muscogee refused to recognize it and the British government rejected it.
Dunmore’s War (1774)
The next year, Daniel Boone led a group to establish a permanent settlement inside the hunting grounds of Kentucky. In retaliation, the Shawnee, Lenape (Delaware), Mingo, and some Cherokee attacked a scouting and forage party that included Boone's son James. The Indians ritually tortured their captives, James Boone and Henry Russell, to death. The colonists responded with the beginning of Dunmore's War (1773–1774).
The Cherokee and the Muscogee were also active, mainly confining themselves to small raids on the backcountry settlements of the Carolinas and Georgia. The fighting reached into what later became Tennessee with an attack by the Shawnee and their allies upon the North-of-Holston settlements.
In the Treaty of Camp Charlotte that ended the war, signed October 19, 1774, between the Shawnee and Lenape on one side and Virginia on the other, the former two ceded all their claims to Kentucky in addition to pledging an end to fighting. The Mingo refused to take part.
Early colonial settlements in Upper East Tennessee (1768–1772)
The earliest colonial settlement in the vicinity of what became Upper East Tennessee was Sapling Grove (Bristol). The first of the North-of-Holston settlements, it was founded by Evan Shelby, who "purchased" the land in 1768 from John Buchanan. Jacob Brown began another settlement on the Nolichucky River, and John Carter another in what became known as Carter's Valley (between Clinch River and Beech Creek), both in 1771. Following the Battle of Alamance in 1771, James Robertson led a group of some twelve or thirteen Regulator families from North Carolina to the Watauga River.
Each of the groups thought they were within the territorial limits of the colony of Virginia. After a survey proved their mistake, Alexander Cameron, Deputy Superintendent for Indian Affairs, ordered them to leave. Attakullakulla, now First Beloved Man (Principal Chief) of the Cherokee, interceded on their behalf. The settlers were allowed to remain, provided no additional people joined them.
On May 8, 1772, the settlers on the Watauga and on the Nolichucky signed the Watauga Compact to form the Watauga Association. Although the two other settlements were not parties to it, all of them are sometimes referred to as "Wataugans".
Settlements of the Cherokee in 1775
To get a better view of where actions took place and who was involved, here is a list of the Cherokee settlements:
The Middle Towns sat on the upper Little Tennessee River and its headwaters in western North Carolina.
Henderson Purchase (1775)
One year later, on March 17, 1775, a group of North Carolina speculators led by Richard Henderson negotiated the Treaty of Watauga at Sycamore Shoals with the older Overhill Cherokee leaders, among whom Oconostota and Attakullakulla (now First Beloved Man) were the most prominent; they ceded the claim of the Cherokee to the Kain-tuck-ee (Ganda-giga'i) lands. The Transylvania Land Company believed it was gaining ownership of the land, not realizing that other tribes, such as the Lenape, Shawnee, and Chickasaw, also claimed these lands for hunting.
Dragging Canoe (Tsiyugunsini), headman of Great Island Town (Amoyeliegwayi) and son of Attakullakulla, refused to go along with the deal. He told the North Carolina men, "You have bought a fair land, but there is a cloud hanging over it; you will find its settlement dark and bloody". The governors of Virginia and North Carolina repudiated the Watauga treaty, and Henderson fled to avoid arrest. George Washington also spoke out against it. The Cherokee appealed to John Stuart, the Indian Affairs Superintendent, for help, which he had provided on previous such occasions, but the outbreak of the American Revolution intervened.
Henderson and the frontiersmen thought the outbreak of the Revolution superseded the judgments of the royal governors. The Transylvania Company began recruiting settlers for the region it had "purchased".
Revolutionary War phase: Cherokee War of 1776
During the Revolutionary War, the Cherokee not only fought against the settlements in the Overmountain region, and later in the Cumberland Basin, defending against territorial encroachment; they also fought as allies of Great Britain against its rebellious subjects.
In the first phase, British strategy was focused on the North, and not so much on the backwoods settlements, especially those in the west. The Cherokee, therefore, were on their own, except for supplies from British ports on the coast and some joint operations in South Carolina.
Flight of the Loyalists
As tensions rose, the Loyalist John Stuart, British Superintendent of Indian Affairs, was besieged by a mob at his house in Charleston and had to flee for his life. His first stop was St. Augustine in East Florida.
Another noted Loyalist later associated with the Cherokee, Thomas Brown, was not nearly so fortunate. At his home of Brownsborough, Georgia, near Augusta, he was assaulted by a crowd of the Sons of Liberty, tied to a tree, roasted with fire, scalped, tarred, and feathered. After his escape, he took up residence among the Seminole, commanding his East Florida Rangers, who fought alongside them and some of the Lower Muskogee.
From St. Augustine, Stuart sent his deputy, Alexander Cameron, and his brother Henry to Mobile to obtain short-term supplies and arms for the Cherokee. Dragging Canoe took a party of 80 warriors to provide security for the pack train. He met Henry Stuart and Cameron (whom he had adopted as a brother) at Mobile on March 1, 1776. He asked how he could help the British against their rebel subjects, and for help with the illegal settlers. The two men told him to wait for regular troops to arrive before taking any action.
Patriot black propaganda
When the two arrived at Chota, Henry Stuart sent out letters to the frontier settlers of the Washington District (Watauga and Nolichucky), the Pendleton District (North-of-Holston), and Carter's Valley (in modern Hawkins County), informing them that they were illegally on Cherokee land and giving them 40 days to leave. In an exercise of black propaganda, people sympathetic to the Revolution forged letters indicating that a large force of regular troops, plus Chickasaw, Choctaw, and Muscogee, was on the march from Pensacola and planning to pick up reinforcements from the Cherokee. The forged letters alarmed the settlers, who began gathering together in closer, fortified groups, building stations (small forts), and otherwise preparing for an attack.
Battle of Sullivan's Island
In June 1776, the British launched an attempt to capture Charlestown Harbor by land and by sea. On June 28 the land forces commanded by Henry Clinton attacked the harbor's chief defense, Fort Sullivan, commanded by William Moultrie. An attempt by three of the British ships to maneuver in support failed due to hidden natural obstructions. Meanwhile, Moultrie's guns inflicted heavy damage on several of the other ships in the fleet. The land attack failed too.
After withdrawing, the British abandoned the South for the next two-and-a-half years. However, the British officials could not halt plans already in motion for supporting attacks by the Cherokee and Loyalists.
Visit from the northern tribes
In May 1776, partly at the behest of Henry Hamilton, the British governor in Detroit, the Shawnee chief Cornstalk (Hokoleskwa) led a delegation from the northern tribes (Shawnee, Lenape, Iroquois, Ottawa, and others) south to Chota to meet with the southern tribes (Cherokee, Muscogee, Chickasaw, Choctaw) about fighting with the British against their common enemy. Cornstalk called for united action against those they called the "Long Knives", the squatters who settled and remained in Kain-tuck-ee (Ganda-gi), or, as the settlers called it, Transylvania. At the close of his speech, Cornstalk offered his war belt, and Dragging Canoe accepted it, along with Abraham (Osiuta) of Chilhowee (Tsulawiyi). Dragging Canoe also accepted belts from the Ottawa and the Iroquois. Savanukah, the Raven of Chota, accepted the war belt from the Lenape. The northern emissaries offered war belts to Stuart and Cameron, but they declined to accept.
The plan was for the Cherokee of the Middle, Out, and Valley Towns, of what is now western North Carolina, to attack South Carolina. Cameron would lead warriors of the Lower Towns against Georgia. Warriors of the Overhill Towns, along the lower Little Tennessee and Hiwassee rivers, were to attack Virginia and North Carolina. In the Overhill campaign, Dragging Canoe was to lead a force against the Pendleton District, Abraham one against the Washington District, and Savanukah one against Carter's Valley.
To prepare themselves for the coming campaign, the Overhill Cherokee began raiding into Kentucky, often with the Shawnee. Before the northern delegation had left, Dragging Canoe led a small war party into Kentucky and returned with four scalps to present to Cornstalk before they departed. In another raid, a war party led by Hanging Maw (Skwala-guta) of Coyatee (Kaietiyi), captured three teenage settler girls, Jemima Boone and Elizabeth and Frances Callaway, on July 14, but lost them three days later to a rescue party led by Daniel Boone, father of Jemima, and Richard Callaway, father of Elizabeth and Frances.
First Cherokee campaigns
Meanwhile, traders coming from Chota, bearing word from Nancy Ward (Agigaue), the Beloved Woman (leader or Elder), warned the Overmountain settlers of the impending attacks from the Overhill Towns. The Cherokee offensive proved disastrous for the attackers.
Siege of McDowell's Station
On July 3, a small war party of Cherokee besieged a small fort on the North Carolina frontier. The garrison managed to keep from being overrun until a large body of militia under Griffith Rutherford arrived in the rear of the besiegers, who then retreated.
Battle of Lindley's Station
A 190-strong war party of Cherokee and Loyalist partisans dressed as Cherokee attacked the large fort on the South Carolina frontier known as Lindley's Station. Its 150-man Patriot garrison had just finished building it the day before. After repulsing the attack, the Patriots gave chase, killing two Loyalists and capturing ten, but inflicting no casualties on the Cherokee.
Battle of Island Flats
Finding Fort Lee on the Nolichucky deserted, the Cherokee force from the Overhill Towns burned it to the ground, then divided into three columns.
Dragging Canoe's force advanced up the Great Indian Warpath and had a small skirmish with a body of militia numbering twenty, who quickly withdrew. Pursuing them and intending to take Fort Lee at Long-Island-on-the-Holston, his force advanced toward the island. On July 20, however, it encountered a larger body of militia six miles from the target: about half the size of his own force, but desperate, and in a stronger position than the small group before.
During the “Battle of Island Flats” which followed, Dragging Canoe himself was wounded in the hip by a musket ball, and his brother Little Owl (Uku-usdi) survived being hit eleven times. His force then withdrew, raiding further north into southwestern Virginia and striking isolated cabins on the way, before returning to the Overhill area with plunder and scalps.
Siege of Fort Caswell
On July 21, Abraham of Chilhowee led his party in attempting to capture Fort Caswell on the Watauga, but his attack was driven off with heavy casualties. Instead of withdrawing, however, he put the garrison under siege, a tactic which had worked well the previous decade with Fort Loudon. After two weeks, though, he and his warriors gave that up.
Savanukah's party raided from the outskirts of Carter's Valley far into the Clinch River Valley in Virginia, but those targets contained only small settlements and isolated farmsteads, so the raids did little real military damage.
The affected colonies of Virginia, North Carolina, South Carolina, and Georgia conferred and decided that swift and massive retaliation was the only way to preserve peace on the frontier.
In the Lower, Middle, Valley, and Out Towns
The colonials quickly gathered militia who retaliated against the Cherokee. North Carolina sent Rutherford with 2400 militia to scour the Oconaluftee and Tuckasegee river valleys, and the headwaters of the Little Tennessee and Hiwassee. South Carolina sent 1800 men to the Savannah, and Georgia sent 200 to attack Cherokee settlements along the Chattahoochee and Tugaloo rivers.
Not long after leaving Fort McGahey on July 23, Rutherford’s militia, accompanied by a large contingent of Catawba warriors, was ambushed by the Cherokee at the Battle of Cowee Gap in what is now western North Carolina. After defeating the attackers, Rutherford proceeded to a designated rendezvous with the South Carolina militia.
On August 1, Cameron and the Cherokee ambushed Andrew Williamson and his South Carolina militia force near the Lower Cherokee town of Isunigu, known to whites as Seneca, in the Battle of Twelve Mile Creek. After retreating, Williamson joined up with the militia force of Andrew Pickens.
The next day, August 2, the joint militia force bivouacked, and Pickens led a party of twenty-five to forage for food and firewood. In what is known as the Ring Fight, two hundred Cherokee surrounded and attacked the party, which withdrew into a ring and was able to hold the attackers at bay until reinforcements arrived.
On August 12, Williamson and Pickens defeated the Cherokee at the Battle of Tamassee. With this, they completed their destruction of the Lower Towns: Keowee, Estatoe, Seneca, and the rest. Afterwards, they proceeded north to meet up with the North Carolina militia of Griffith Rutherford.
Rutherford’s militia traversed Swannanoa Gap in the Blue Ridge on September 1 and reached the outskirts of the Out, Valley, and Middle Towns on September 14, at which point they began burning towns and crops.
Williamson’s militia were attacked at the Battle of Black Hole near Franklin, North Carolina on September 19, but were able to fend off the Cherokee and meet up with Rutherford to take part in the campaign of destruction.
In all, Williamson, Pickens, and Rutherford destroyed more than 50 towns, burned the houses and food stores, destroyed the orchards, slaughtered livestock, and killed hundreds of Cherokee. Captives were sold into slavery and often transported to the Caribbean.
In the Overhill Towns
In the meantime, the Continental Army sent Col. William Christian to the lower Little Tennessee Valley with a battalion of Continentals, five hundred Virginia militia, three hundred North Carolina militia, and three hundred rangers. By this time, Dragging Canoe and his warriors had already returned to the Overhill Towns.
Oconostota supported making peace with the colonists at any price. Dragging Canoe called for the women, children, and old to be sent below the Hiwassee and for the warriors to burn the towns, then ambush the Virginians at the French Broad River. Oconostota, Attakullakulla, and the older chiefs decided against that plan. Oconostota sent word to the approaching colonial army offering to hand over Dragging Canoe and Cameron if the Overhill Towns were spared.
Dragging Canoe spoke to the council of the Overhill Towns, denouncing the older leaders as rogues and "Virginians" for their willingness to cede land for an ephemeral safety. He concluded, "As for me, I have my young warriors about me. We will have our lands." He stalked out of the council. Afterward, he and other militant leaders, including Ostenaco, gathered like-minded Cherokee from the Overhill, Valley, and Hill towns, and migrated to what is now the Chattanooga, Tennessee area. Cameron had already transferred there.
Upon reaching the Little Tennessee in late October, Christian's Virginia force found the towns from which the militant attackers had sprung, namely Great Island, Citico (Sitiku), Toqua (Dakwayi), Tuskegee (Taskigi), Chilhowee, and Great Tellico, not only deserted but burned to the ground by their own former inhabitants, along with all the food and stores that could not be carried away.
Treaties of 1777
Preliminary negotiations between the Overhill Towns and Virginia were held at Fort Patrick Henry in April 1777. Nathaniel Gist, who would later father Sequoyah, led the talks for Virginia, while Attakullakulla, Oconostota, and Savanukah headed the delegation of Cherokee.
The Cherokee signed the Treaty of Dewitt's Corner with Georgia and South Carolina (Ostenaco was one of the Cherokee signatories) on May 20 and the Treaty of Fort Henry with Virginia and North Carolina on July 20. They promised to stop warring, with those colonies promising in return to protect them from attack. In the former treaty, the Lower Towns ceded all their land in modern South Carolina except for a small strip in what is now Oconee County. One provision of the latter treaty required that James Robertson and a small garrison be quartered at Chota on the Little Tennessee. Robertson had been appointed Superintendent of Indian Affairs for North Carolina, while Joseph Martin had been appointed Superintendent of Indian Affairs for Virginia.
Other Southeastern Indian nations
The paramount mico Emistisigua led the Upper Muscogee in alliance with the British; within a year he had become the strongest native ally of Dragging Canoe and his faction of Cherokee. After 1777, he was assisted by Alexander McGillivray (Hoboi-Hili-Miko; he signed his name "Alex McGillivray"), the mixed-blood son of a Coushatta woman and a Scots-Irish American trader. McGillivray was mico of the Coushatta, a former colonel in the British Army, and one of John Stuart's agents.
The Seminole of East Florida, universally Loyalist in sympathy, provided hundreds of warriors for British campaigns in the Southeast. They often fought with Loyalist Rangers commanded by Thomas Brown, formerly of Charlestown. Their usual leader was Ahaya, founder of the Seminole nation, known to the whites as Cowkeeper.
Although the majority of the Lower Muscogee chose to remain neutral, the Loyalist Capt. William McIntosh, another of Stuart's agents and father of later Muscogee leader William McIntosh, recruited a sizable unit of Hitchiti warriors to fight on the British side.
The Choctaw and the Chickasaw in alliance with the British patrolled the Mississippi and western Tennessee rivers to prevent American incursion along those pathways. The Chickasaw formed part of the garrison of Fort Panmure on the Mississippi and later in Pensacola. Over a thousand Choctaw warriors helped guard the vital ports in West Florida of Pensacola (seat of the province) and Mobile against the Spanish. In contrast, a portion of Choctaw supported the Spanish, though never in direct opposition to other Choctaw, while the rest remained neutral.
Sandwiched between the colonies of North Carolina and South Carolina, the Catawba had no real option to take the Loyalist side, but, rather than simply remaining neutral, they joined the Patriot cause as active allies.
Migration to the Chickamauga area
After the end of the opening campaigns, Alexander Cameron had suggested to Dragging Canoe and his dissenting Cherokee that they settle at the place where the Great Indian Warpath crossed the Chickamauga River (South Chickamauga Creek). Since Dragging Canoe made that town his seat of operations, frontier Americans called his faction the "Chickamaugas". Other Cherokee refugees turned up in Pensacola and wintered there.
As mentioned above, John McDonald already had a trading post across the Chickamauga River. This provided a link to Henry Stuart, brother of Superintendent John Stuart, in the West Florida capital of Pensacola. Cameron, the British deputy Indian superintendent, accompanied Dragging Canoe to Chickamauga. Nearly all the whites legally resident among the Cherokee were part of the related exodus.
In March 1777, Cameron sent the refugees to Chickamauga along with a sizable amount of goods. The colonials learned of the shipment and planned to intercept it. When Cameron informed him of the danger, Emistisigua, paramount chief of the Upper Muscogee, sent a force of three hundred fifty warriors to guard the convoy as well as to assist in rebuilding and waging war.
Shortly after the column arrived, Dragging Canoe organized the campaign against the settlements in the Holston region mentioned above.
The Chickamauga Towns
In addition to Old Chickamauga (Tsikamagi) Town, the headman of which was Big Fool, Dragging Canoe's band set up three other settlements on the Chickamauga River: "Toqua" (Dakwayi), at its mouth on the Tennessee River; "Opelika", a few kilometers upstream from Chickamauga Town; and "Buffalo Town" (Yunsayi; John Sevier called it "Bull Town"), at the headwaters of the river in northwest Georgia (in the vicinity of the later Ringgold, Georgia).
Other towns established were Cayoka, on Hiwassee Island; "Black Fox" (or Inaliyi) at the current community of the same name in Bradley County, Tennessee; "Ooltewah" (Ultiwa), under Ostenaco on Ooltewah (Wolftever) Creek; "Sawtee" (Itsati), under Dragging Canoe's brother Little Owl on Laurel (North Chickamauga) Creek; "Citico" (Sitiku), along the creek of the same name; "Chatanuga" (Tsatanugi) at the foot of Lookout Mountain in what is now St. Elmo; and "Tuskegee" (Taskigi) under Bloody Fellow (Yunwigiga) on Williams' Island.
The Cherokee towns of Great Hiwassee (Ayuwasi), Tennessee (Tanasi), Chestowee (Tsistuyi), Ocoee (Ugwahi), and Amohee (Amoyee) in the vicinity of Hiwassee River supported those who fought against the settlers moving into their lands, as did the Lower Cherokee in the North Georgia towns of Coosawatie (Kusawatiyi), Etowah (Itawayi), Ellijay (Elatseyi), Ustanari (or Ustanali), etc., who had been evicted from their homes in South Carolina by the Treaty of Dewitt's Corner.
Targets of the Cherokee
From their new bases, the Cherokee conducted raids against settlers on the Holston, Doe, Watauga, and Nolichucky rivers, on the Cumberland and Red rivers, and those in the isolated frontier stations in between. Dragging Canoe called them all "Virginians". The Cherokee ambushed parties traveling on the Tennessee River, and on local sections of the many ancient trails that served as "highways", such as the Great Indian Warpath (Mobile to northeast Canada), the Cisca and St. Augustine Trail (St. Augustine to the French Salt Lick at Nashville), the Cumberland Trail (from the Upper Creek Path to the Great Lakes), and the Nickajack Trail (Nickajack to Augusta). Later, the Cherokee stalked the Natchez Trace and roads improved by the uninvited settlers, such as the Kentucky, Cumberland, and Walton roads. Occasionally, the Cherokee attacked targets in Virginia, the Carolinas, Georgia, Kentucky, and the Ohio country.
Continuing the fight
In contempt of the peace proceedings at Fort Henry in April 1777, Dragging Canoe led a war party that killed a settler named Frederick Calvitt and stole fifteen horses from James Robertson, then moved to Carter's Valley, killing the grandparents of later U.S. Congressman David Crockett along with several children near modern Rogersville, and marauding across the valley. In all, the raiders took twelve scalps.
In summer 1777, Deputy Superintendents Cameron and Taitt led a large contingent of Cherokee and Muscogee warriors against the back country settlements of the Carolinas and Georgia.
While they were thus engaged, the Shawnee repeatedly attacked the Kentucky settlements between the Cumberland River and Levisa Fork.
In the spring of 1778, besides continuing small harassment raids against the back country of Virginia, the Carolinas, and Georgia, the Cherokee established a camp at the confluence of the Tennessee and Ohio Rivers to prevent infiltration into the Mississippi.
Warriors of the Chickamauga Towns renewed their raiding after the Green Corn festival in August 1778.
Revolutionary War phase: Southern strategy (1778–1783)
In late 1778, British strategy shifted south. As their attention went, so too did their efforts, their armies, and their supplies, including those slated for the Southern Indians. The Southern theater had the added advantage of being home to more Loyalists than the North. With all these new advantages, the Cherokee were able to greatly renew their territorial defense. The whole of the Cherokee nation, along with the Upper Muscogee, signed on for active participation.
British victory in the North
On December 17, 1778, Henry Hamilton captured Fort Vincennes and used it as a base to plan a spring offensive against George Rogers Clark, whose forces had recently seized control of much of the Illinois Country. He planned to assemble five hundred warriors from various Indian nations, including the Cherokee, the Chickasaw, the Shawnee, and others, for a campaign to expel Clark's forces back east, then drive through Kentucky clearing out American settlements. McDonald's headquarters at Chickamauga was to be the staging ground and commissary for the Cherokee and the Muscogee.
British victories in the South
The British captured Savannah, Georgia (see: Capture of Savannah) on December 29, 1778, with help from Dragging Canoe, John McDonald, and the Cherokee, along with McGillivray's Upper Muscogee force and McIntosh's band of Hitchiti warriors.
Just over a month later, on January 31, 1779, they captured Augusta, Georgia, as well, though they quickly had to retreat. After the town changed hands a couple more times, the British were in control.
With these victories, the remaining neutral towns of the Lower Muscogee now threw in their lot with the British side.
First Cumberland settlement
In early 1779, Robertson and John Donelson traveled overland along the Kentucky Road and founded Fort Nashborough at the French Salt Lick (which got its name from having previously been the site of a French outpost called Fort Charleville) on the Cumberland River. It was the first of many such settlements in the Cumberland area, which subsequently became the focus of attacks by all the tribes in the surrounding region. Leaving a small group there, both returned east.
Loss in the North
Unfortunately for Henry Hamilton's grand scheme, Clark recaptured the fort, taking Hamilton prisoner along with it, on February 25, 1779, after the Siege of Fort Vincennes. The Chickamauga Cherokee turned their sights to the northeast.
Raids in the Overmountain region
Robertson received warning from Chota that Dragging Canoe's warriors were going to attack the Holston area. In addition, he had received intelligence that McDonald's place was the staging area for the northern campaign Hamilton had been planning, and that a stockpile of supplies equivalent to a hundred packhorse loads was stored there. Small parties of Cherokee began repeated raids on the Holston frontier shortly thereafter.
Death of John Stuart
On March 21, 1779, John Stuart, up to that point Indian Affairs Superintendent, died at Pensacola. George Germain, Secretary of State for the Colonies, split the Southern Department into two districts. Alexander Cameron in Pensacola was assigned to the Mississippi District to work with the Chickasaw and Choctaw. In Savannah, Thomas Brown of the King's Carolina Rangers (as his unit was renamed) was assigned to the Atlantic District to work with the Cherokee, Muscogee, and Seminole.
Scott and Shelby expeditions
At the beginning of April 1779, a group of three hundred Cherokee and fifty Loyalist Rangers under Walter Scott left the Chickamauga Towns on a marauding campaign against the frontier settlements in Georgia and South Carolina.
Hearing of their departure, Joseph Martin, Indian agent for the Americans at Chota, sent word to Governor Patrick Henry of their absence.
The state governments of Virginia and North Carolina made a joint decision to send an expedition against the Chickamauga Towns, which were thought to be responsible for the raids. Most of those warriors, however, were in South Carolina with Cameron and Dragging Canoe. A thousand Overmountain men under Evan Shelby (father of Isaac Shelby, first governor of the State of Kentucky) and a regiment of Continentals under John Montgomery embarked on April 10, boating down the Tennessee in a fleet of dugout canoes.
They arrived in the Chickamauga towns ten days later. For the next two weeks, they destroyed the eleven towns in the immediate area and most of the food supply, along with McDonald's home, store, and commissary. Due to the absence of nearly all the warriors, there was no resistance and only four deaths among the inhabitants. Whatever was not destroyed was confiscated and sold at the site where the trail back to the Holston crossed what has since been known as Sale Creek.
Return home of the warriors
Upon hearing of the devastation of the towns and loss of all their stores, Dragging Canoe, McDonald, and their men, including the Rangers, returned to Chickamauga and its vicinity.
The Shawnee sent envoys to Chickamauga to find out whether the destruction had caused Dragging Canoe's people to lose the will to fight, along with a sizable detachment of warriors to assist them in the South. In response to their inquiries, Dragging Canoe held up the war belts he had accepted when the delegation visited Chota in 1776 and said, "We are not yet conquered". To cement the alliance, the Cherokee answered the Shawnee gesture by sending nearly a hundred of their own warriors to the North.
The towns in the Chickamauga area were soon rebuilt and reoccupied by their former inhabitants. Dragging Canoe responded to the Shelby expedition with punitive raids on the frontiers of both North Carolina and Virginia, and made good on his word: British command communications in October show the Cherokee active on the frontier from Virginia to Georgia.
In midsummer 1779, Cameron arrived at Chickamauga with a company of Loyalist Refugees and convinced the Cherokee in the towns there to join them on their march to South Carolina. Three hundred took up arms and headed out to maraud the backcountry of Georgia and South Carolina. Later in October, Andrew Williamson's South Carolina militia responded by attacking several towns on the eastern frontier of Cherokee territory and burning their foodstores.
Concord between the Lenape and the Overhill Cherokee
In late 1779, Oconostota, Savanukah, and other non-belligerent Cherokee leaders traveled north to pay their respects after the death of White Eyes, the Lenape leader who had been encouraging his people to give up their fighting against the Americans. He had also negotiated, first with Lord Dunmore and later with the American government, for an Indian state with representatives seated in the Continental Congress; he finally won an agreement for it from that body, which he had addressed in person in 1776.
Upon the arrival of the Cherokee in the village of Goshocking, they were taken to the council house and talks began. The next day, the Cherokee present solemnly agreed with their "grandfathers" to take neither side in the ongoing conflict between the Americans and the British. Part of the reasoning was that, thus "protected", neither tribe would find itself subject to the vicissitudes of war. The warring world around them, however, remained heedless, and the provisions lasted about as long as it took the ink to dry.
Loss of Mobile
On February 10, 1780, Spanish forces from New Orleans under Bernardo de Galvez, allied to the Americans but acting in the interests of Spain, captured Mobile in the Battle of Fort Charlotte, taking Charles Stuart and David Taitt prisoner in the process.
When they next moved against Pensacola the following month, McIntosh and McGillivray rallied 2000 Muscogee warriors to its defense, joining a large contingent of Choctaw and a smaller one of Chickasaw. A British fleet arrived before the Spanish could take the port.
The Chickasaw transformed from river sentries into attacking warriors in June 1780, when George Rogers Clark and a party of over five hundred, including some Kaskaskia of the Illinois Confederation, built Fort Jefferson and the surrounding settlement of Clarksville near the mouth of the Ohio, inside Chickasaw hunting grounds. The building had begun in April and was just finished before the first attack on June 7.
After learning of the trespass, the Chickasaw destroyed the settlement, laid siege to the fort, and began attacking settlers on the Kentucky frontier. They continued attacking the Cumberland and into Kentucky through early the following year. Their last raid was in conjunction with Dragging Canoe's Cherokee, upon Freeland’s Station on the Cumberland on January 11, 1781.
Robertson and Donelson parties
In autumn 1779, Robertson and a group of fellow Wataugans left the east down the Kentucky Road, headed for Fort Nashborough. They arrived on Christmas Day 1779 without incident, unlike the group led by his partner John Donelson.
Donelson journeyed down the Tennessee with a party that included his family, intending to go across to the mouth of the Cumberland, then upriver to Ft. Nashborough. They departed the East Tennessee settlements on February 27, 1780. Eventually, the group did reach its destination, but only after being ambushed several times.
In the first encounter, near Tuskegee Island on March 7, Cherokee warriors under Bloody Fellow attacked the boat in the rear, whose passengers had come down with smallpox. They took captive the one survivor, who was later ransomed by American colonists. The victory proved to be a Pyrrhic one for the Cherokee: a smallpox epidemic spread among their people, killing several hundred in the vicinity.
Several miles downriver, beginning with the obstruction known as the Suck or the Kettle, the party was fired upon throughout its passage through the Tennessee River Gorge (aka Cash Canyon); one person died and several were wounded. Several hundred kilometers downriver, the Donelson party ran up against Muscle Shoals, where it was attacked at one end by the Muscogee and at the other by the Chickasaw. The final attack came from the Chickasaw in the vicinity of modern Hardin County, Tennessee.
The Donelson party finally reached its destination on April 24, 1780. The group included John's daughter Rachel, much later the wife of future U.S. Representative, Senator, and President Andrew Jackson, who fought a duel in her honor in 1806.
Shortly after his party's arrival at Fort Nashborough, Donelson along with Robertson and others formed the Cumberland Compact.
Donelson eventually moved to the Indiana country after the Revolution. He and William Christian were captured while fighting in the Illinois country in 1786 during the Northwest Indian Wars. They were burned at the stake as warriors by their captors.
Capture of Charlestown
Charlestown was captured on May 12, 1780, after a siege that began March 29. Along with it, the British took prisoner some three thousand Patriots, including South Carolina militia leader Andrew Williamson. Upon giving his parole that he would not again take up arms, Williamson became a double agent for the Patriots, according to testimony after the war by Patriot General Nathanael Greene.
Defense of Augusta and Battle of Kings Mountain
In the summer of 1780, Thomas Brown planned a joint conference between the Cherokee and Muscogee to coordinate their attacks, but those plans were forestalled when Georgians under Elijah Clarke made a concerted effort in September to retake Augusta, where Brown had his headquarters. His King's Carolina Rangers and fifty Muscogee warriors formed the entire garrison against Clarke's seven hundred fighters.
The arrival of a sizable war party from the Chickamauga and Overhill Towns and a force from Fort Ninety-Six in South Carolina prevented the capture of both the town and its garrison, and the Cherokee and Brown's rangers chased Elijah Clarke's army into the arms of John Sevier, wreaking havoc on rebellious settlements along the way.
This set the stage for the Battle of Kings Mountain on October 7, 1780, in which Loyalist militia (the American Volunteers) under Patrick Ferguson, moving south in an attempt to encircle Clarke, were defeated by a force of 900 frontiersmen under Sevier and William Campbell, who came to be known as the Overmountain Men.
Cherokee Overmountain campaign of 1780
Brown, aware that nearly 1,000 men were away from the American settlements with the militias, urged Dragging Canoe and other Cherokee leaders to strike while they had the opportunity. Under the influence of Savanukah, the Overhill Towns gave their full support to the new offensive. Both Brown and the Cherokee had been expecting a quick victory by Patrick Ferguson and were stunned that he suffered such a resounding defeat so soon, but their planned assault on the settlements was already in motion.
Learning of the new invasion from Nancy Ward (her second documented betrayal of Dragging Canoe), Virginia Governor Thomas Jefferson sent an expedition of 700 Virginians and North Carolinians against the Cherokee in December 1780, under the command of Sevier. It met a Cherokee war party at the Battle of Boyd's Creek on December 16 and routed it.
After that battle, Sevier's army was joined by forces under Arthur Campbell and Joseph Martin. The combined force marched against the Overhill towns on the Little Tennessee and the Hiwassee, burning seventeen of them, including Chota, Chilhowee, the original Citico, Tellico, Great Hiwassee, and Chestowee, finishing up on January 1, 1781. Afterwards, the Overhill leaders withdrew from further active conflict for a time, though warriors from the Middle and Valley Towns continued to harass colonists on the frontier.
Cherokee Cumberland campaign, 1780–1781
In the Cumberland area, the new settlements lost around forty people in attacks by the Cherokee, Muscogee, Chickasaw, Shawnee, and Lenape during 1780. The Munsee-Lenape were the first to conduct what became repeated attacks, along with the Chickasaw, Shawnee, Wyandot, and Mingo, on the Cumberland settlements, as well as those in Kentucky.
The Chickamauga Cherokee began their own attacks against the Cumberland settlements in November 1780, sometimes raiding alongside their Chickasaw former enemies. Their last joint venture was the battle at Freeland's Station.
First Cherokee Overmountain campaign of 1781
Not long after returning home from his destruction of the Overhill towns, Sevier received word that the warriors from the Middle Towns were bent on revenge.
At the beginning of March, Sevier raised a force for a campaign against the Middle and Out Towns east of the mountains. Beginning at Tuckasegee and ending at Cowee, they burned fifteen towns, killed twenty-nine Cherokee, and took nine prisoners.
Martin led another militia group to disperse or destroy a Cherokee war party encamped in the mountains at Cumberland Gap to harass travelers on the Wilderness Road; the militia found signs of their quarry, but not the war party itself.
Loss of Pensacola
On March 7, 1781, the Spanish attacked Pensacola again, with an army twice the size of the garrison of British, Choctaw, and Muscogee defenders, and the city fell on May 8 after a hard siege that saw courageous fighting by the Choctaw and Muscogee. Cameron and other Indian Department officials took refuge among the Muscogee, then transferred to Augusta to join Brown, who now had his headquarters there.
Battle of the Bluff
Three months after the first Chickasaw attack on the Cumberland, on April 2, 1781, the Cherokee launched their largest campaign of the wars against those settlements. This culminated in what became known as the Battle of the Bluff, led by Dragging Canoe in person. It lasted into the next day and was the last attack of this war.
Afterward, settlers began to abandon these frontier settlements until only three stations were left, a condition which lasted until 1785.
Shawnee Overmountain campaign, 1781–1785
While Dragging Canoe and his warriors turned their attention to the Cumberland, the Shawnee began raiding settlements in Upper East Tennessee and Southwest Virginia, the latter by now having become Washington County. In particular they targeted those along the Clinch and Holston Rivers and in Powell's Valley. These Shawnee came down from their homes on the Ohio River by way of the Warriors’ Path through the Cumberland Gap. Their attacks continued, along with occasional forays by McGillivray's Upper Muscogee, even after sporadic raids by the Cherokee resumed, until the Shawnee began to focus all their attention on the Northwest Indian War.
Loss of Augusta
Augusta, under the command of Thomas Brown, was also retaken by the Patriots, on June 6 after a two-month siege, when the Lower Muscogee relief force led by McIntosh was unable to arrive in time.
Second Cherokee Overmountain campaign of 1781
In midsummer, a party of Cherokee came west over the mountains and began raiding the new settlements on the French Broad River. Sevier raised a force of one hundred fifty and attacked their camp on Indian Creek.
On July 26, 1781, the Overhill Towns signed the second Treaty of Long-Island-on-the-Holston, this time directly with the Overmountain settlements. It is notable in that, although affirming previous land cessions, it required none further.
While the Middle Towns warriors kept the Overmountain Men busy, the Chickamauga Towns welcomed a sizable party of Lenape warriors seeking refuge from the fighting in the Illinois and Ohio Countries. These were not just warriors come south temporarily but permanent resettlers who brought their families.
Politics in the Overhill Towns
In the fall of 1781, the British engineered a coup d'état of sorts that installed Savanukah as First Beloved Man in place of the more pacifist Oconostota, who had himself succeeded Attakullakulla. For the next two years, the Overhill Cherokee openly supported the efforts of Dragging Canoe and his militant Cherokee, as they had previously been doing covertly.
Cherokee Georgia campaign of 1781
In November 1781, the Cherokee invaded Georgia, ravaging Wilkes County, which had been formed from 8,100 km² of land ceded by the Cherokee and Muscogee in the 1773 Treaty of Augusta. A combined force of South Carolinians and Georgians under Andrew Pickens retaliated by burning all the Valley Towns up to the Valley River.
Death of Alexander Cameron
On December 27, 1781, Alexander Cameron, British Superintendent of Indian Affairs for the Mississippi District, blood brother to Dragging Canoe, and friend to all Cherokee, died in Savannah. He was replaced by John Graham.
Diplomatic mission to Ft. St. Louis
In March 1782, a party of Cherokee joined the Lenape, Shawnee, and Chickasaw in a diplomatic visit to the Spanish at Fort St. Louis in the Missouri country, seeking a new avenue for obtaining arms and other assistance in the prosecution of their ongoing conflict with the Americans in the Ohio Valley. One group of Cherokee at this meeting, led by Standing Turkey, sought and received permission to settle in Spanish Louisiana, in the region of the White River.
Loss of Savannah
In June 1782, the Patriots retook Savannah from its British and Muscogee garrison. Brown, Graham, and the rest of the Southern Indian Department relocated yet again, this time to St. Augustine in Loyalist East Florida.
Paramount mico Emistisigua led the Upper Muscogee effort to relieve the garrison and died in the attempt. McGillivray, by then his right-hand man, succeeded him, becoming the leading mico of the Upper Towns by 1783.
Cherokee Overmountain campaign of 1782
In response to incursions by new settlers beyond the limits of the treaties, warriors from the Chickamauga Towns began harassing the Holston frontier in the spring and summer of 1782.
In September, an expedition under Sevier once again destroyed many of the towns in the Chickamauga vicinity, as well as those of the Cherokee of the former Lower Towns now in North Georgia, from Buffalo Town at modern Ringgold, Georgia, south to Ustanali (Ustanalahi) near modern Calhoun, Georgia, including what he called Vann's Town, as well as Ellijay and Coosawattee. Most of the towns were deserted because, having advance warning of the impending attack, Dragging Canoe and his fellow leaders had chosen to relocate westward. Meanwhile, Sevier's army, guided by John Watts (Kunokeski), somehow never managed to cross paths with any parties of Cherokee.
Migration to the Lower Towns
Upon finishing their move, Dragging Canoe and his people established what whites called the Five Lower Towns downriver from the various natural obstructions in the twenty-six-mile Tennessee River Gorge, known locally as Cash Canyon.
Starting with Tuskegee (aka Brown's or Williams') Island and the sandbars on either side of it, these obstructions included the Tumbling Shoals, the Holston Rock, the Kettle (or Suck), the Suck Shoals, the Deadman's Eddy, the Pot, the Skillet, the Pan, and, finally, the Narrows, ending with Hale's Bar.
The whole twenty-six miles was sometimes called The Suck, and the stretch of river was notorious enough to merit mention even by Thomas Jefferson. These navigational hazards were so formidable, in fact, that the French agents attempting to travel upriver to reach Cherokee country during the French and Indian War, intending to establish an outpost at the spot later occupied by British agent McDonald, gave up after several attempts.
The Five Lower Towns
The Five Lower Towns included Running Water (Amogayunyi), at the current Whiteside in Marion County, Tennessee, where Dragging Canoe made his headquarters; Nickajack (Ani-Kusati-yi, or Koasati place), eight kilometers down the Tennessee River in the same county; Long Island (Amoyeligunahita), on the Tennessee just above the Great Creek Crossing; Crow Town (Kagunyi) on the Tennessee, at the mouth of Crow Creek; and Stecoyee (Utsutigwayi, aka Lookout Mountain Town), at the current site of Trenton, Georgia. Tuskegee Island Town was reoccupied as a lookout post by a small band of warriors to provide advance warning of invasions, and eventually many other settlements in the area were resettled as well.
Because this was a move into the outskirts of Muscogee territory, Dragging Canoe, knowing such a move might be necessary, had previously sent a delegation under Little Owl to meet with Alex McGillivray, the major Muscogee leader in the area, to gain Muscogee permission to do so. When the Cherokee moved their base, so too did John McDonald, now deputy to Thomas Brown, along with his own assistant Daniel Ross, making Running Water his base of operations. Graham's deputy, Alexander Campbell, set up his own base at what became Turkeytown.
More Lower Towns
Cherokee continued to migrate westward to join Dragging Canoe's militant band. Many in this influx were Cherokee from North Georgia, who fled the depredations of expeditions such as those of Sevier; a large majority of these were former inhabitants of the original Lower Towns. Cherokee from the Middle, or Hill, Towns also came, a group of whom established a town named Sawtee (Itsati) at the mouth of South Sauta Creek on the Tennessee.
Later major settlements included Willstown (Titsohiliyi), near the later Fort Payne; Turkeytown (Gundigaduhunyi), at the head of the Cumberland Trail where the Upper Creek Path crossed the Coosa River near Centre, Alabama; Creek Path (Kusanunnahiyi), at the intersection of the Great Indian Warpath with the Upper Creek Path at modern Guntersville, Alabama; Turnip Town (Ulunyi), seven miles from present-day Rome, Georgia; and Chatuga (Tsatugi), nearer the site of Rome.
Partly because of the large influx from North Georgia, and partly because they no longer occupied the Chickamauga area as their main center, Dragging Canoe's followers and others in the area began to be referred to as the Lower Cherokee.
The ranks of these new Lower Cherokee were further swelled by runaway slaves, white Tories, Muscogee, Yuchi, Natchez, and Shawnee, plus a few Spanish, French, Irish, and Germans. The town of Coosada came into the fold when its Koasati and Kaskinampo inhabitants joined Dragging Canoe's coalition.
The band of Chickasaw living at the Chickasaw Old Fields, at the later Ditto's Landing south of Huntsville, Alabama, also joined the coalition. The rest of the Chickasaw, however, were trying to play the Americans and the Spanish against each other and had no interest in the British.
Another visit from the North
In November 1782, twenty representatives from four northern tribes (Wyandot, Ojibwa, Ottawa, and Potawatomi) traveled south to consult with Dragging Canoe and his lieutenants at his new headquarters in Running Water Town, which was nestled far back up the hollow from the Tennessee River onto which it opened. Their mission was to gain the help of Dragging Canoe's Cherokee in attacking Pittsburgh and the American settlements in Kentucky and the Illinois country.
When the party returned north, Turtle-at-Home (Selukuki Woheli), another of Dragging Canoe's brothers, along with some seventy warriors, headed north to live and fight with the Shawnee.
Georgia Indian war of 1782
At the end of 1781, the Cherokee invaded Georgia once again with a group of Muscogee, raiding through much of the back country before being met by South Carolina and Georgia troops under Pickens and Elijah Clarke at the Oconee River. Evading the American force, the Cherokee withdrew, adopting a scorched earth strategy to deny their foes supplies. The American force eventually retreated, opening the back country to further raids.
By the fall of 1782, Lt. Col. Thomas Waters of the Loyalist Rangers, formerly stationed at Fort Ninety-Six in South Carolina, had retreated to the frontier of Cherokee-Muscogee territory just outside Georgia. From his base at the mouth of Long Swamp Creek on Etowah River, he and his remaining rangers, in conjunction with Cherokee and Muscogee warriors, ravaged backwoods homesteads and settlements.
The states of South Carolina and Georgia sent out a joint expedition led by Andrew Pickens and Elijah Clarke to put an end to his insurgency. Leaving on September 16, they invaded that section of the country, ranging at least as far as Ustanali, where they took prisoners. In all, they destroyed thirteen towns and villages. By October 22, Waters and his men had escaped and the Cherokee sued for peace.
Cherokee in the Ohio region
At the beginning of 1783, there were at least three major communities of Cherokee in the region. One lived among the Chalahgawtha (Chillicothe) Shawnee. The second Cherokee community lived among the mixed Wyandot-Mingo towns on the upper Mad River near the later Zanesfield, Ohio. A third group of Cherokee is known to have lived among and fought with the Munsee-Lenape, the only portion of the Lenape nation at war with the Americans. Though their ranks were filled by different warriors shifting back and forth, these three bands remained in the Northwest until after the Treaty of Greenville in 1795.
St. Augustine conference
In January 1783, Dragging Canoe and twelve hundred Cherokee traveled to St. Augustine, the capital of East Florida, for a summit meeting with a delegation of western tribes (Shawnee, Muscogee, Mohawk, Seneca, Lenape, Mingo, Tuscarora, and Choctaw) that called for a federation of Indians to oppose the Americans and their frontier colonists. Brown, the British Indian Superintendent, approved the concept.
At Tuckabatchee a few months later, a general council of the major southern tribes (Cherokee, Muscogee, Chickasaw, Choctaw, and Seminole) plus representatives of smaller groups (Mobile, Catawba, Biloxi, Houma, etc.) took place to follow up, but plans for the federation were cut short by the signing of the Treaty of Paris.
Treaty of Long Swamp Creek (1783)
Signed May 30, 1783, the treaty confirmed the northern boundary between the State of Georgia and the Cherokee, with the Cherokee ceding large amounts of land between the Savannah and Chattahoochee Rivers.
More Overhill politics
In the fall of 1783, the older pacifist leaders replaced Savanukah with another of their number, Corntassel (Kaiyatsatahi, known to history as "Old Tassel"), and sent messages of peace along with complaints of settler encroachment to Virginia and North Carolina. Opposition from pacifist leaders, however, never stopped war parties from traversing the territories of any of the town groups, largely because the average Cherokee supported their cause, nor did it stop small war parties of the Overhill Towns from raiding settlements in East Tennessee, mostly those on the Holston.
Treaty of Paris (1783)
Signed between Great Britain and the United States on September 3, 1783, this treaty formally ended the American Revolution. The U.S. had already unilaterally declared hostilities over the previous April. Brown had already received orders from London in June to cease and desist.
Following that treaty, Dragging Canoe turned to the Spanish (who still claimed all the territory south of the Cumberland and were now working against the Americans) for support, trading primarily through Pensacola and Mobile. Dragging Canoe also maintained relations with the British governor at Detroit, Alexander McKee, through regular diplomatic missions there under his brothers Little Owl and The Badger (Ukuna).
Cherokee Overmountain campaign of 1783
With the end of the Revolutionary War, new settlers began flooding into the Overmountain settlements.
The reaction from the Cherokee was predictable, but it did not come from the towns on the lower Little Tennessee. Instead, warriors from the Middle Towns east of the mountains on the upper Little Tennessee began retaliating against the settlements on the west side, targeting the newer ones on the Pigeon and French Broad Rivers.
In late 1783, Major Peter Fine raised a small militia and crossed the mountains to the east side and burned down the town of Cowee.
Treaty of French Lick
The Chickasaw signed the Treaty of French Lick with the new United States of America on November 6, 1783, and never again took up arms against it. The Lower Cherokee were also present at the conference and apparently made some sort of agreement to cease their attacks on the Cumberland, for after this American settlements in the area began to grow again.
Treaty of Augusta (1783)
Also in November 1783, the pro-American camp of the Lower Muscogee nation signed the Treaty of Augusta with Georgia, ceding their claims to territory which roughly comprises the modern counties of Oconee, Franklin, Banks, Barrow, Clarke, Jackson, Stephens, Washington, Greene, Hancock, Johnson, Toombs, Treutlen, and Montgomery, plus parts of surrounding counties. Georgians referred to this region as the Oconee Country, after the tribe who lived there. This enraged McGillivray, who wanted to keep fighting; he burned the houses of the leaders responsible and sent warriors to raid Georgia settlements.
Post-Revolution phase: New directions (1783–1788)
The Spanish now held East Florida and West Florida in addition to Louisiana, Tejas, Nuevo Mexico, and Nueva California. Partly to hold the Americans at bay and partly to regain lost parts of La Florida, they armed and supplied the Southern Indians, both to curry favor and to encourage them to turn their weapons on the frontier settlements.
The settlement of Coldwater was founded by a party of French traders who had come down from the Wabash to set up a trading center in 1783. It sat a few miles below the foot of the thirty-five-mile-long Muscle Shoals, near the mouth of Coldwater Creek and about three hundred yards back from the Tennessee River, near the site of present-day Tuscumbia, Alabama.
For the next couple of years, trade was all the French did, but then, in 1785, the business changed hands. The new owners not only added firearms, powder, and shot to their wares but also recruited a garrison from the Cherokee of the Lower Towns and the Upper Muscogee. They traded arms to both those nations as well and encouraged them to defend their territory.
Spanish Indian treaties
Largely due to the efforts of Alex McGillivray, the Spanish (in the persons of Arturo O’Neill, governor of West Florida, and Estevan Miro, governor of Louisiana) signed the Treaty of Pensacola for alliance and commerce with the Upper Muscogee and the Lower Cherokee on May 30, 1784.
On June 22, 1784, O’Neill and Miro signed the Treaty of Mobile, likewise for alliance and commerce, with the Choctaw and the Alabama. The Chickasaw, also at this conference, refused to sign because of their treaty with the Americans.
With the signing of these two treaties, McDonald and Ross relocated to Turkeytown to consolidate their efforts and business with those of Campbell closer to their Spanish suppliers and to the British trading house of Panton, Leslie & Company in Pensacola.
Unquiet Western frontier
With these assurances of support, the Cherokee of the Lower Towns renewed raiding the Overmountain settlements that summer. The raids remained only sporadic until the fall, when an incident between one of the settlers, James Hubbard, and a noted Cherokee leader in the Overhill Towns, Noonday, brought the younger Overhill warriors into the fight and incited them all to more violence. This could be considered the start of a Southwest Indian War, fought by the Cherokee and later by the Muscogee as well.
Towards an Indian alliance
Sponsored by the Spanish, Running Water Town hosted a grand council of western nations and tribes in the summer of 1785 to formulate a strategy for resisting encroachment by settlers from the new United States. Besides the Cherokee of the Chickamauga Towns, the Upper Muscogee and the Choctaw attended from the South, while representatives of the Shawnee, Lenape, Mingo, Miami, Illinois, Wyandot, Ottawa, Mohawk, Kickapoo, Kaskaskia, Odawa, Potawatomi, Ojibwe, Wabash Confederacy, and, of course, the Iroquois League came from the North, plus a few others.
The same parties met again under sponsorship of the British at Detroit in the fall of 1785. The parties at these two councils agreed among themselves and with their sponsors to deal with the Americans as a unit rather than be picked off piecemeal. This laid the groundwork for the confederacy formally established the next year.
Muscogee council at Tuckabatchee
In spring 1785, McGillivray convened a council of war at the dominant Upper Muscogee town of Tuckabatchee to address recent incursions of Georgian settlers into the Oconee territory. The council, attended by most of the nations and tribes of the soon-to-be Western Confederacy, decided to go on the warpath against the trespassers, starting with the recent settlements along the Oconee River. McGillivray had already secured support from the Spanish in New Orleans. This began the Oconee War, which lasted from May 1785 until September 1794.
Free Republic of Franklin
In May 1785, the settlements of Upper East Tennessee, then comprising four counties of western North Carolina, petitioned the Congress of the Confederation to be recognized as the "State of Franklin". Even though their petition failed to receive the two-thirds vote necessary to qualify, they proceeded to organize what amounted to a secessionist government, holding their first "state" assembly in December 1785. One of their chief motives was to retain the foothold they had recently gained in the Cumberland Basin. The Cumberland settlements were included in the government but, being separated by a wide stretch of hostile Cherokee territory, were almost completely autonomous.
Treaty of Dumplin Creek
One of the first acts of the new State of Franklin was to negotiate with the Overhill Towns the Treaty of Dumplin Creek, signed on June 10, 1785, which ceded the "territory south of the French Broad and Holston Rivers and west of the Big Pigeon River and east of the ridge dividing Little River from the Tennessee River" to the State of Franklin.
Northwest Indian War (1785–1795)
In the autumn of 1785, after a conference at Detroit, the Indians of the Northwest (Wyandot, Shawnee, Lenape, Ottawa, Mohawk, Miami, Wabash Confederacy) began frequent small raids against settlements west and north of the Ohio River and in Kentucky. Within the next year, these raids by small war parties grew into invasions by small armies.
As allies of the Shawnee and later as full members of the Western Confederacy, Cherokee warriors of the three previously-mentioned bands in the Northwest took an active part, roughly proportional to the degree of activity by the Shawnee in their own area of operations. They participated in nearly every war party and every major action.
Though most of the action took place in the Northwest, especially the Ohio Country, a significant amount occurred in Kentucky, part of the Southwest. From the mid-1780s till the end of the decade, for instance, raiders killed nearly fifteen hundred settlers there.
Treaty of Galphinton
As if McGillivray and his people were not angered enough, on November 12, 1785, Georgia officials signed a new treaty with a few compliant Lower Muscogee micos (headmen) in which the latter ceded the land between the Altamaha and St. Mary's Rivers, and from the head of the latter to the Oconee River. They called this wide stretch of land the Tallassee Country, after the tribe which lived there.
Treaty of Hopewell
The Cherokee in the Overhill, Hill, and Valley Towns also signed a treaty with the new United States government, the Treaty of Hopewell, on November 28, 1785, but in their case it was a treaty made under duress, the frontier colonials by this time having spread further along the Holston and onto the French Broad. Several leaders from the Lower Cherokee signed, including two from Chickamauga Town (which had been rebuilt) and one from Stecoyee.
Houston County, Georgia
After the Hopewell Treaty, the legislature of the State of Georgia, which claimed all of what became Mississippi Territory (everything between the 31st and 35th parallels from its own borders west to the Mississippi River), created Houston County to take in the Great Bend of the Tennessee River. The project was a joint venture between Georgia and Franklin. To stake their claim, Valentine Sevier and ninety men went south to what is now South Pittsburg in Marion County, Tennessee, and built a stockaded settlement and blockhouse in early December 1785.
The chosen location lay midway between the Nickajack and Long Island towns of the Chickamauga-Lower Cherokee. By mid-January 1786, the pioneers had tired of the constant life-or-death fighting and abandoned the project, leaving the name open for the current Houston County, Georgia, established in 1821.
In order to prevent a recurrence, the Cherokee established the town of Crowmocker on Battle Creek near the site of the Civil War-era Fort McCook.
The Spanish Conspiracy
Starting in 1786, the leaders of the State of Franklin and the Cumberland District began secret negotiations with Esteban Rodríguez Miró, governor of Spanish Louisiana, to deliver their regions to the jurisdiction of the Spanish Empire. Those involved included James Robertson, Daniel Smith, and Anthony Bledsoe (1739–1788) of the Cumberland District, John Sevier and Joseph Martin of the State of Franklin, James White, recently appointed American Superintendent for Southern Indian Affairs (replacing Thomas Brown), and James Wilkinson, governor of Kentucky.
The irony lay in the fact that the Spanish backed the very Cherokee and Muscogee harassing their territories. Their main counterpart on the Spanish side in New Orleans was Don Diego de Gardoqui. Gardoqui's negotiations with Wilkinson, initiated by the latter, to bring Kentucky (then a territory) into the Spanish orbit were separate but simultaneous.
The "conspiracy" went as far as the Franklin and Cumberland officials promising to take the oath of loyalty to Spain and renounce allegiance to any other nation. Robertson even successfully petitioned the North Carolina assembly create the "Mero Judicial District" for the three Cumberland counties (Davidson, Sumner, Tennessee). There was a convention held in the failing State of Franklin on the question, and those present voted in its favor.
A large part of their motivation, besides the desire to secede from North Carolina, was the hope that this course of action would bring relief from Indian attacks. The series of negotiations involved Alex McGillivray, with Robertson and Bledsoe writing him of the Mero District's peaceful intentions toward the Muscogee and simultaneously sending White as emissary to Gardoqui to convey news of their overture.
Cherokee war of 1786
Conflict erupted largely because of dissatisfaction over the Treaty of Hopewell, the flames of which were fanned by Dragging Canoe. In the east, it primarily involved warriors from the Overhill and Valley Towns against Franklin, while the Lower Towns to the west primarily raided the Cumberland.
In large part elated by their crushing defeat of the attempted Houston County settlement, Cherokee warriors from the Lower Towns raided the Franklin settlements in small parties throughout the spring of 1786.
Due to a combination of resentment of Americans settling on the wrong side of the treaty line and pressure from the Muscogee, warriors of the Overhill Towns picked up the tomahawk in early July, led by John Watts. They were supported by Cherokee of the Valley Towns, and according to some accounts the army was as big as a thousand strong. First attacking a homestead on Beaver Creek near the newly established White's Fort (at the modern Knoxville, Tennessee) on July 20, they dispersed into small parties raiding the upper Holston and other parts of Franklin.
Throughout the summer of 1786, Dragging Canoe and his warriors along with a large contingent of Muscogee raided the Cumberland region, with several parties raiding well into Kentucky. One such occasion that summer was notable for the fact that the raiding party was led by none other than Hanging Maw of Coyatee, who was supposedly friendly at the time.
After the rising of the "local" Cherokee, Sevier responded with a force under the joint command of Alexander Outlaw and William Cocke, which drove the raiders off the Holston before marching on Coyatee near the mouth of the Little Tennessee. Once there, it burned the crops and the town's council house. Meanwhile, Sevier himself led another expedition across the mountains to attack the Valley Towns on the headwaters of the Hiwassee.
Sevier responded to the immediate threat with a force under the joint command of Alexander Outlaw and William Cocke, which drove off the raiders from the Holston before marching for Coyatee near the mouth of the Little Tennessee. Once there, they burned the crops and the town's council house. Meanwhile, he himself led another expedition across the mountains to attack the Valley Towns on the headwaters of the Hiwassee.
Treaty of Coyatee
The end result was the Treaty of Coyatee on August 3, 1786, in which the State of Franklin forced Corntassel, Hanging Maw, Watts, and the other Overhill leaders to cede the remaining land between the boundary set by the Dumplin treaty and the Little Tennessee River to the State of Franklin.
The Franklinites could now shift military forces to Middle Tennessee in response to the increasing frequency of attacks by both the Chickamauga/Lower Cherokee and the Upper Muscogee.
Formation of the Western Confederacy
In addition to the small bands still operating with the Shawnee, Wyandot-Mingo, and Lenape in the Northwest, a large contingent of Cherokee led by The Glass (Tagwadihi) attended and took an active role in a grand council of western tribes (Six Nations Iroquois, Wyandot, Lenape, Shawnee, Odawa, Ojibwe, Potawatomi, Twightwee, Wabash Confederacy, and, of course, the Cherokee themselves) lasting from November 28 to December 18, 1786, in the Wyandot town of Upper Sandusky, south of British-held Detroit. British agents attended, and zealous warriors brought recently acquired scalps.
This meeting, initiated by Joseph Brant (Thayendanegea), the Mohawk leader who was head chief of the Iroquois Six Nations and who, like Dragging Canoe, had fought on the side of the British during the American Revolution, led to the formation of the Western Confederacy to resist American incursions into the Old Northwest. Dragging Canoe and his Cherokee were full members of the Confederacy, whose purpose was to coordinate attacks and defense in the Northwest Indian War of 1785–1795.
According to John Norton (Teyoninhokovrawen), Brant's adopted son, it was here in the north that The Glass formed a friendship with Brant that lasted well into the 19th century. The Glass apparently served as Dragging Canoe's envoy to the Iroquois, as the latter's brothers did to McKee and to the Shawnee.
The passage of the Northwest Ordinance by the Congress of the Confederation in 1787 (subsequently affirmed by the United States Congress), establishing the Northwest Territory and essentially giving away the land upon which these tribes lived, only exacerbated the resentment of the tribes in the region.
Trouble with Franklin and Kentucky
In early 1787, encroachments by American settlers became so great that the Overhill Towns held a council on whether to completely abandon their homes on the Little Tennessee for more remote locations to the west. They elected to stay, but the crisis provoked another rise in the small-scale raiding, which had never really ceased completely. The situation of the Overhill Cherokee was so bad that refugees appeared in Muscogee towns, and the Chickasaw threatened to break the treaty of 1783 and go on the warpath if something were not done to alleviate the situation.
Though they provided auxiliary support against Franklin, the Cherokee of the Lower Towns, playing their role as members of the confederacy, had made Kentucky the target of most of their efforts. A sally from the Kentucky militia led by John Logan mistakenly attacked a hunting party from the Overhill Towns and killed several of its members. In their non-apology to Chota, the Kentuckians warned the Overhill Towns to control Dragging Canoe's warriors or there would be widespread indiscriminate revenge.
Coldwater Indian war (1785–1787)
Around 1785, the French traders now running the town began covertly gathering Cherokee and Muscogee warriors into Coldwater Town, encouraging them to attack the American settlements along the Cumberland and its environs. The fighting contingent eventually numbered approximately nine Frenchmen, thirty-five Cherokee, and ten Muscogee.
Because the townsite was well hidden and its presence unannounced, James Robertson, commander of the militia in the Cumberland's Davidson and Sumner Counties, at first blamed the Lower Cherokee for the new offensives. In 1787, he marched his men to their borders in a show of force, but without an actual attack, then sent an offer of peace to Running Water.
In answer, Dragging Canoe sent a delegation of leaders led by Little Owl to Nashville under a flag of truce to explain that his Cherokee were not the responsible parties. Meanwhile, the attacks continued.
At the time of the conference in Nashville, two Chickasaw out hunting game along the Tennessee in the vicinity of Muscle Shoals chanced upon Coldwater Town, where they were warmly received and spent the night. Upon returning home to Chickasaw Bluffs, now Memphis, Tennessee, they immediately informed their head man, Piomingo, of their discovery. Piomingo then sent runners to Nashville.
Just after these runners had arrived in Nashville, a war party attacked one of its outlying settlements, killing Robertson's brother Mark. In response, Robertson raised a group of one hundred fifty volunteers and proceeded south by a circuitous land route, guided by two Chickasaw. Catching the town off guard, despite its defenders' knowledge that a force was approaching, they chased the would-be defenders to the river, killing about half of them and wounding many of the rest. They then gathered all the trade goods in the town to be shipped to Nashville by boat, burned the town, and departed.
After the wars, the site became the location of Colbert's Ferry, owned by Chickasaw leader George Colbert, the crossing place of the Natchez Trace over the Tennessee River.
Because of the perceived insult of the incursion against Coldwater, so near to their own territory, the Muscogee afterwards took up the hatchet against the Cumberland settlements. They continued their attacks until 1789, but the Cherokee did not join them for this round, due partly to internal matters but more to trouble from the State of Franklin.
Post-Revolution: Peak of Cherokee influence (1788–1792)
Dragging Canoe's last years, 1788–1792, were the peak of his influence and that of the rest of the Lower Cherokee, both among the other Cherokee and among other Indian nations, south and north, as well as with the Spanish of Pensacola, Mobile, and New Orleans, and the British in Detroit. He also sent regular diplomatic envoys to negotiations in Nashville, in Jonesborough and later Knoxville, and in Philadelphia.
Chiksika's band of Shawnee
In early 1788, a band of thirteen Shawnee led by Chiksika, a leader contemporary with the famous Blue Jacket (Weyapiersenwah), arrived at Running Water after spending several months hunting in the Missouri River country. In the band was his brother, the later leader Tecumseh.
Their mother, a Muscogee, had left the north after her husband was killed at the Battle of Point Pleasant, the only major action of Dunmore's War, in 1774; homesick without him, she had returned to live in her old town, which now lay near those of the Cherokee in the Five Lower Towns. She had since died, but Chiksika's Cherokee wife and his daughter were living at nearby Running Water Town, so the band stayed.
They were warmly received by the Cherokee warriors and, based out of Running Water, conducted and took part in raids and other actions, in some of which Cherokee warriors (most notably Bob Benge) also participated. Chiksika was killed in one of these actions in February, and Tecumseh became leader of the small Shawnee band, gaining his first experience as a leader in warfare.
Cherokee-Franklin war of 1788 (1788–1789)
That year the conflict between the Cherokee and the Americans in the State of Franklin erupted into its bloodiest and most widespread fighting since 1776, beginning in late spring and lasting well into the beginning of the following year. One important feature of this conflict was the introduction of large numbers of Muscogee warriors fighting in Cherokee war parties, a pattern which continued until the end of the Cherokee wars.
Massacre of the Kirk family
In May 1788, a party of Cherokee from Chilhowee came to the house of John Kirk's family on Little River while he and his oldest son, John Jr., were out. When Kirk and John Jr. returned, they found the other eleven members of their family dead and scalped. This was the beginning of a Cherokee campaign of raids across the region, to which the frontierspeople responded by retreating inside forts and stations.
Massacre of the Brown family
After a preliminary trip to the Cumberland, at the end of which he left two of his sons to begin clearing the plot of land at the mouth of White's Creek, James Brown returned to North Carolina to fetch the rest of the family, with whom he departed Long-Island-on-the-Holston by boat in May 1788. When they passed Tuskegee Island (Williams Island) five days later, Bloody Fellow stopped them, looked around the boat, then let them proceed, meanwhile sending messengers ahead to Running Water.
Upon the family's arrival at Nickajack, a party of forty under mixed-blood John Vann boarded the boat and killed Col. Brown, his two older sons on the boat, and five other young men travelling with the family. Mrs. Brown, the two younger sons, and three daughters were taken prisoner and distributed to different families.
When he learned of the massacre the following day, The Breath (Unlita), Nickajack's headman, was seriously displeased. He later adopted the Browns' son Joseph into his own family as a son; the boy had originally been given to Kitegisky (Tsiagatali), who had first adopted him as a brother and treated him well, and of whom Joseph had fond memories in later years.
Mrs. Brown and one of her daughters were given to the Muscogee and ended up in the personal household of Alex McGillivray. George, the elder of the surviving sons, also ended up with the Muscogee, but elsewhere. Another daughter went to a Cherokee near Nickajack and the third to a Cherokee in Crow Town.
Franklinite invasion of the Overhill Towns
At the beginning of June 1788, John Sevier, no longer governor of the State of Franklin, raised a hundred volunteers and set out for the Overhill Towns. After a brief stop at the Little Tennessee, the group went to Great Hiwassee and burned it to the ground. Then they returned to the Little Tennessee and burned down Tallasee.
Returning to Chota, Sevier sent a detachment led by James Hubbard to Chilhowee to punish those responsible for the Kirk massacre. Hubbard's force included John Kirk Jr. Hubbard brought along Corntassel and Hanging Man from Chota.
At Chilhowee, Hubbard raised a flag of truce and took Corntassel and Hanging Man to the house of Abraham, still headman of the town, who was there with his son; Long Fellow and Fool Warrior were also brought along. Hubbard posted guards at the door and windows of the cabin and gave John Kirk Jr. a tomahawk to take his revenge.
The murder of the pacifist Overhill chiefs under a flag of truce angered the entire Cherokee nation. Men who had been reluctant to participate took to the warpath. The increase in hostility lasted for several months. Doublehead, Corntassel's brother, was particularly incensed. Not only did the Cherokee from the Overhill Towns join those from the Lower Towns on the warpath, so too did a large number of Muscogee warriors, outraged at the senseless murders.
Highlighting the seriousness of the matter, Dragging Canoe came in to address the general council of the Nation, which now met at Ustanali on the Coosawattee River (one of the former Lower Towns on the Keowee River, relocated to the vicinity of Calhoun, Georgia), to which the seat of the council had been moved. The council elected Little Turkey (Kanagita) as First Beloved Man to succeed the murdered chief. The election was contested by Hanging Maw of Coyatee, who had been elected chief headman of the traditional Overhill Towns on the Little Tennessee River. Both men had been among those who originally followed Dragging Canoe into the southwest of the nation.
Siege of Houston's Station
In early August 1788, the commander of the garrison at Houston's Station (near the present Maryville, Tennessee) received word that a Cherokee force of nearly five hundred was planning to attack his position. He therefore sent a large reconnaissance patrol to the Overhill Towns.
Stopping in the town of Citico on the south side of the Little Tennessee, which they found deserted, the patrol scattered throughout the town's orchard and began gathering fruit. Six of them died in the first fusillade, another ten while attempting to escape across the river.
With the loss of those men, the garrison at Houston's Station was seriously beleaguered. Only the arrival of a relief force under John Sevier saved the fort from being overrun and its inhabitants slaughtered. With the garrison joining his force, Sevier marched to the Little Tennessee and burned Chilhowee.
Attempted invasion of the Lower Towns
Later in August, Joseph Martin (who was married to Betsy, daughter of Nancy Ward, and living at Chota), with 500 men, marched to the Chickamauga area, intending to penetrate the edge of the Cumberland Mountains to get to the Five Lower Towns. He sent a detachment to secure the pass over the foot of Lookout Mountain (Atalidandaganu), which was ambushed and routed by a large party of Dragging Canoe's warriors, with the Cherokee in hot pursuit. One of the participants later referred to the spot as "the place where we made the Virginians turn their backs". According to one of the participants on the other side, Dragging Canoe, John Watts, Bloody Fellow, Kitegisky, The Glass, Little Owl, and Dick Justice were all present at the encounter.
Dragging Canoe raised an army of 3,000 Cherokee warriors, which he split into more flexible warbands of hundreds of warriors each. One band was headed by John Watts (Kunnessee-i, also known as 'Young Tassel'), with Bloody Fellow, Kitegisky (Tsiagatali), and The Glass. It included a young warrior named Pathkiller (Nunnehidihi), who later became known as The Ridge (Ganundalegi).
Battles of Gillespie's Station and others
In October of that year, Watts' band advanced across country toward White's Fort. Along the way, they attacked Gillespie's Station on the Holston River: after capturing settlers who had left the enclosure to work in the fields, they stormed the stockade when the defenders' ammunition ran out, killing the men and some of the women and taking twenty-eight women and children prisoner.
They then proceeded to attack White's Fort and Houston's Station, only to be beaten back. Afterward, the warband wintered at an encampment on Flint Creek in present-day Unicoi County, Tennessee, which served as its base of operations.
An attack by another large party against Sherrill's Station on Nolichucky River was driven off by a force commanded by Sevier himself.
In response to the Cherokee incursions, the settlers increased their retaliatory attacks. Troops under Sevier invaded the Middle and Valley Towns in North Carolina.
Bob Benge, with a group of Cherokee warriors, evacuated the general population from Ustalli, on the Hiwassee, leaving a rearguard to cover their escape. After firing the town, Sevier's men pursued the fleeing inhabitants and were ambushed at the mouth of the Valley River by Benge's party.
The soldiers then went to the village of Coota-cloo-hee (Gadakaluyi) and burned its cornfields, but they were chased off by 400 warriors led by Watts (Young Tassel). Watts' army trailed Sevier's all the way from Coota-cloo-hee back to the Franklin settlements, attacking at random.
One result of the above destruction was that the Overhill Cherokee and refugees from the Lower and Valley towns virtually abandoned the settlements on the Little Tennessee and dispersed south and west. Chota was the only Overhill town left with many inhabitants.
John Watts' band on Flint Creek met with serious misfortune early the next year. In early January 1789, they were surrounded by a force under John Sevier that was equipped with grasshopper cannons. The gunfire from the Cherokee was so intense, however, that Sevier abandoned his heavy weapons and ordered a cavalry charge that led to savage hand-to-hand fighting. Watts' band lost nearly 150 warriors.
Cherokee attacks upon the Franklin communities continued well into the spring.
Blow to the Western Confederacy
In January 1789, Arthur St. Clair, American governor of the Northwest Territory, concluded two separate peace treaties with members of the Western Confederacy: the first with the Iroquois (except for the Mohawk), the other with the Wyandot, Lenape, Ottawa, Potawatomi, Sac, and Ojibwe. The Mohawk, the Shawnee, the Miami, and the tribes of the Wabash Confederacy, who had been doing most of the fighting, not only refused to go along but became more aggressive, especially the Wabash tribes.
Implosion of the Spanish Conspiracy
The scheme fell apart for two main reasons. The first was the dithering of the Spanish government in Madrid. The second was the interception of a letter from Joseph Martin which fell into the hands of the Georgia legislature in January 1789.
North Carolina, to which the western counties in question belonged under the laws of the United States, took the simple expedient of ceding the region to the federal government, which established the Southwest Territory in May 1790, with William Blount as governor and, simultaneously, Superintendent for Southern Indian Affairs. The counties in the Overmountain region were grouped together as the Washington District, while the counties in the Cumberland region became the Miro District.
Wilkinson remained a paid Spanish agent until his death in 1825, including during his years as one of the top generals in the U.S. Army, and was involved in the Aaron Burr conspiracy. Ironically, he later became the first American governor of the Louisiana Territory.
Council at Coweta
On March 2, 1789, the Lower Muscogee chief town of Coweta hosted a council between their division of the Muscogee Confederacy and the Cherokee. As town headman, John Galphin, half-blood son of former Indian Commissioner for the United States George Galphin, presided. Dragging Canoe and Hanging Maw led the Cherokee delegation. The representatives of the two nations present agreed that they trusted neither the Americans nor the Spanish and drafted a letter to the government of Great Britain pledging their loyalty in return for the king's direct assistance. They promised that if this happened, then the Mohawk, the Choctaw, and the Chickasaw would come over. Nothing ever came of the petition, but the council is notable for this as well as for where it took place.
Word of the defeat at Flint Creek did not reach Running Water until April, when it arrived with an offer from Sevier for an exchange of prisoners which specifically mentioned the surviving members of the Brown family, including Joseph, who had been adopted first by Kitegisky and later by The Breath. Among those captured at Flint Creek were Bloody Fellow and Little Turkey's daughter.
Joseph and his sister Polly were brought immediately to Running Water, but when runners were sent to Crow Town to retrieve Jane, their youngest sister, her owner refused to surrender her. Bob Benge, present in Running Water at the time, mounted his horse and hefted his famous axe, saying, "I will bring the girl, or the owner's head". The next morning he returned with Jane. The three were handed over to Sevier at Coosawattee on April 20.
McGillivray delivered Mrs. Brown and Elizabeth to her son William during a trip to Rock Landing, Georgia, in November. George, the other surviving son from the trip, remained with the Muscogee until 1798.
Non-treaty of Swannanoa
The next month, on May 25, 1789, the Cherokee were supposed to sign a peace treaty with the newly federated United States at the War Ford on the French Broad River, near Swannanoa, North Carolina. The Americans chose the location because it was the scene of a major Cherokee defeat in 1776. The Cherokee leaders never showed, but when the Americans under Andrew Pickens ran across a party of Cherokee while on their way to Rock Landing on the Oconee River to meet with the Muscogee, they were assured hostilities were over.
Doublehead's Town
The opposite end of Muscle Shoals from Coldwater Town, mentioned above, was occupied in 1790 by a roughly 40-strong warrior party under Doublehead (Taltsuska), along with their families. He had gained permission to establish his town at the head of the Shoals, which lay in Chickasaw territory, because the local headman, George Colbert, the mixed-blood leader who later owned Colbert's Ferry at the foot of Muscle Shoals, was his son-in-law.
Like the former Coldwater Town, Doublehead's Town was diverse, with Cherokee, Muscogee, Shawnee, and a few Chickasaw. It quickly grew beyond the initial 40 warriors, who carried out many small raids against settlers on the Cumberland and into Kentucky. During one foray in June 1792, his warriors ambushed a canoe carrying the three sons of Valentine Sevier (brother of John) and three others, who were on a scouting expedition searching for Doublehead's party. They killed the three Seviers and another man; two escaped.
Doublehead conducted his operations largely independently of the Lower Cherokee, though he did take part in large operations with them on occasion, such as the invasion of the Cumberland in 1792 and that of the Holston in 1793.
Treaty of New York (1790)
Dragging Canoe's long-time ally among the Muscogee, Alex McGillivray, led a delegation of twenty-seven leaders north, where they signed the Treaty of New York in August 1790 with the United States government on behalf of the "Upper, Middle, and Lower Creek and Seminole composing the Creek nation of Indians". In it, McGillivray, who was made an American brigadier general, ceded the Oconee Country in the name of the Confederacy. In return, the federal government upheld Muscogee rights to all of the Tallassee Country.
Although intended to end the Oconee War, the treaty angered both the American settlers expelled from the Tallassee Country and the Muscogee who wanted to keep the Oconee Country, so the war continued. It also marked the beginning of the decline of McGillivray's influence in the Muscogee Confederacy and the rise of that of William Augustus Bowles, a bitter rival dating back to the Spanish campaign against Pensacola. By mid-1791, Bowles wielded enough influence to send large war parties raiding the Cumberland once again despite the treaty.
Muscle Shoals settlement
In January 1791, a group of land speculators from the Southwest Territory known as the Tennessee Company, led by James Hubbard and Peter Bryant, attempted to gain control of Muscle Shoals and its vicinity by building a settlement and fort at the head of the Shoals. They did so against an executive order of President Washington forbidding it, as relayed to them by the governor of the Southwest Territory, William Blount. The Glass came down from Running Water with sixty warriors and descended upon the defenders, captained by Valentine Sevier, brother of John; he told them to leave immediately or be killed, then burned their blockhouse as they departed.
Bob Benge's war
Starting in 1791, Benge and his brother The Tail (Utana; aka Martin Benge), based at Willstown, began leading attacks against settlers in East Tennessee, Southwest Virginia, and Kentucky, often in conjunction with Doublehead and his warriors from Muscle Shoals. Eventually, Benge became one of the most feared warriors on the frontier.
Meanwhile, Muscogee scalping parties began raiding the Cumberland settlements again, though without mounting any major campaigns.
Treaty of Holston (1791)
The Treaty of Holston, signed in July 1791, required the Upper Towns to cede more land in return for continued peace because the US government proved unable to stop or roll back illegal settlements. As it appeared to guarantee Cherokee sovereignty, the chiefs of the Upper Cherokee believed they had the same status as states. Several representatives of the Lower Cherokee participated in the negotiations and signed the treaty, including John Watts, Doublehead, Bloody Fellow, Black Fox (Dragging Canoe's nephew), The Badger (his brother), and Rising Fawn (Agiligina; aka George Lowery).
Battle of the Wabash
Later in the summer, a small delegation of Cherokee under Dragging Canoe's brother Little Owl traveled north to meet with the Indian leaders of the Western Confederacy, chief among them Blue Jacket (Weyapiersenwah) of the Shawnee, Little Turtle (Mishikinakwa) of the Miami, and Buckongahelas of the Lenape. While they were there, word arrived that Arthur St. Clair, governor of the Northwest Territory, was planning an invasion of the allied tribes' lands. Little Owl immediately sent word south to Running Water.
Dragging Canoe quickly sent a 30-strong war party north under his brother The Badger; along with the warriors of Little Owl and Turtle-at-Home, they took part in the decisive encounter of November 1791 known as the Battle of the Wabash, the worst defeat ever inflicted by Native Americans upon the American military, whose body count far surpassed that at the more famous Battle of the Little Bighorn in 1876.
Fighting on the other side were a company of militia from the Washington District of the Southwest Territory and Chickasaw scouts.
After the battle, Little Owl, The Badger, and Turtle-at-Home returned south with most of the warriors who had accompanied the first two. Those who had come north in earlier years, both with Turtle-at-Home and before him, remained in the Ohio region, but the returning warriors brought back with them a party of thirty Shawnee under a leader known as Shawnee Warrior, which frequently operated alongside the warriors under Little Owl.
Death of the "Savage Napoleon"
Inspired by news of the northern victory, Dragging Canoe embarked on a mission to unite the native peoples of his own region as Little Turtle and Blue Jacket had done, visiting the other major tribes of the area. His embassies to the Lower Muscogee and the Choctaw were successful, but the Chickasaw living to the west refused his overtures. His return coincided with that of The Glass and Dick Justice (Uwenahi Tsusti) from successful raids on settlements along the Cumberland, and of Turtle-at-Home from raids in Kentucky, and a huge all-night celebration was held at Stecoyee, at which the Eagle Dance was performed in his honor.
By morning, March 1, 1792, Dragging Canoe was dead. A procession of honor carried his body to Running Water, where he was buried. By the time of his death, the resistance of the Chickamauga/Lower Cherokee had led to grudging respect from the settlers, as well as the rest of the Cherokee nation. He was even memorialized at the general council of the Nation held in Ustanali on June 28, 1792, by his nephew Black Fox (Inali):
The Dragging Canoe has left this world. He was a man of consequence in his country. He was friend to both his own and the white people. His brother [Little Owl] is still in place, and I mention it now publicly that I intend presenting him with his deceased brother's medal; for he promises fair to possess sentiments similar to those of his brother, both with regard to the red and the white. It is mentioned here publicly that both red and white may know it, and pay attention to him.
The minutes of the council list Little Turkey as “Great Beloved Man of the whole Nation”, Hanging Maw as “Beloved Man of the Northern Division” (Overhill Towns), and The Badger as “Beloved Man of the Southern Division” (Upper Towns in North Georgia).
Such was the respect for him as a leader and patriot of his people that Gov. Blount, leader of his greatest enemies, remarked upon hearing of his death that "Dragging Canoe stood second to none in the Nation".
Post-Revolution: the Watts years (1792–1795)
With the death of the great war chief, the Cherokee needed new leaders to take over, and several stepped in to fill his shoes. One, however, presided over them all.
In accordance with Dragging Canoe's own previous request, John Watts (Kunokeski) succeeded him as leader of the Lower Cherokee, while The Bowl (Diwali) succeeded him as headman of Running Water. Watts, along with Bloody Fellow and Doublehead, continued Dragging Canoe's policy of Indian unity, including an agreement with McGillivray of the Upper Muscogee to build joint blockhouses, from which warriors of both tribes could operate, at the confluence of the Tennessee and Clinch Rivers, at Running Water, and at Muscle Shoals.
Watts, Tahlonteeskee, and 'Young Dragging Canoe' (whose actual name was Tsula, or "Red Fox") traveled to Pensacola in May at the invitation of Arturo O'Neill de Tyrone, Spanish governor of West Florida. They took with them letters of introduction from John McDonald. Once there, they forged a treaty with O'Neill for arms and supplies with which to carry on the war. Upon returning north, Watts moved his base of operations to Willstown.
Meanwhile, John McDonald, now British Indian Affairs Superintendent, moved to Turkeytown with his assistant Daniel Ross and their families. Some of the older chiefs, such as The Glass of Running Water, The Breath of Nickajack, and Dick Justice of Stecoyee, abstained from active warfare but did nothing to stop the warriors in their towns from taking part in raids and campaigns.
Southwest Territory Indian War, 1792–1795
The Trans-Appalachian communities formerly of North Carolina became the Southwest Territory of the United States in 1790. For administrative purposes, the territorial government grouped the counties in the Overmountain region together as the Washington District, while those in the Cumberland region became the Miro District, which had already been the name of its judicial district since 1788.
Raiding season, spring and summer 1792
Emboldened by the American loss at the Wabash River, Cherokee and Muscogee warriors and their Shawnee guests began raiding both districts of the Southwest Territory. The Miro District had it worse, suffering at least one raid a week, often more.
In April 1792, a Cherokee-Shawnee war party led by Bob Benge and Shawnee Warrior invaded the Holston region and began raids all over the vicinity.
Though they did not stop entirely, the raids slowed to a handful over the summer. One of them, however, became one of the most notorious incidents of the period.
In the summer of 1792, a war party from Running Water led by Little Owl and the Shawnee Warrior joined the raids. On June 26, around the time Dragging Canoe was being memorialized at the national council in Ustanali, the combined group of Cherokee, Shawnee, and a few Muscogee destroyed Zeigler's Station in Sumner County. This action led James Robertson, commander of the Miro District militia, to call up a battalion of troops to spread throughout the region as guards.
Invasion of the Miro District
On September 7 or 8, a council of Cherokee meeting at Running Water formally declared war against the United States, or at least against the Southwest Territory.
Watts orchestrated a large campaign intending to attack the Washington District (Overmountain region) with a large combined army in four bands of two hundred each. When the warriors were mustering at Stecoyee, however, he learned that their planned attack was expected and decided to aim for the Miro District (Cumberland region) instead.
The army Watts led into the Cumberland region was nearly a thousand strong, including a contingent of cavalry.
From their launch point, Tahlonteeskee (Ataluntiski; Doublehead's brother) and Bob Benge's brother The Tail led a party to ambush the Kentucky Road. Doublehead led another to the Cumberland Road. Middle Striker (Yaliunoyuka) led his party to do the same on the Walton Road.
Watts himself led the main force, made up of 280 Cherokee, Shawnee, and Muscogee warriors plus cavalry, intending to go against the fort at Nashville. He sent out George Fields (Unegadihi; "Whitemankiller") and John Walker, Jr. (Sikwaniyoha) as scouts ahead of the army, and they killed the two scouts sent out by James Robertson from Nashville.
Near their target on the evening of September 30, Watts's combined force came upon a small fort known as Buchanan's Station, commanded by John Buchanan, son of the original owner of Sapling Grove. Talotiskee, leader of the Muscogee, wanted to attack it immediately, while Watts argued in favor of saving it for the return south. After much bickering, Watts gave in around midnight. The assault proved to be a disaster for Watts. He himself was wounded, and many of his warriors were killed, including Talotiskee and some of Watts' best leaders; Shawnee Warrior, Kitegisky, and Dragging Canoe's brother Little Owl were among those who died in the encounter.
Doublehead's group of sixty ambushed a party of six, took one scalp, and then headed toward Nashville. On their way, they were attacked by a militia force and lost thirteen men; they heard of the disaster at Buchanan's Station only afterwards.
Tahlonteeskee's party, meanwhile, stayed out into early October, attacking Black's Station on Crooked Creek, killing three, wounding more, and capturing several horses.
Small parties continued raiding into the winter.
In revenge for the deaths at Buchanan's Station, Benge, Doublehead, and his brother Pumpkin Boy led a party of sixty into southwestern Kentucky in early 1793, a raid during which their warriors, in an act initiated by Doublehead, cooked and ate the enemies they had just killed. Afterwards, Doublehead's party returned south and held scalp dances at Stecoyee, Turnip Town, and Willstown, since warriors from those towns had also participated in the raid in addition to his and Benge's groups.
In early 1793, Watts began rotating large war parties back and forth between the Lower Towns and the North at the behest of his allies in the Western Confederacy, which was beginning to lose ground to the Legion of the United States, created in the aftermath of the Battle of the Wabash. With the exception of the 1793 campaign against the Holston, his attention was focused more on the north than on the Southwest Territory and its environs during these next two years.
A party of Shawnee came down from the north on January 12 to reinforce ties with the Cherokee and the Muscogee and to encourage them to punish the Chickasaw for joining St. Clair's army in the north. They stopped at Ustanali, then Running Water, before proceeding to the Muscogee town of Broken Arrow, home of Talotiskee, the Muscogee leader who had died at Buchanan's Station.
The Muscogee-Chickasaw War began with an attack by the Muscogee upon a Chickasaw hunting party on February 13, 1793, the Muscogee fighting as members of the Western Confederacy, the Chickasaw as allies of the United States.
Death of an ally
The leading chief of the Muscogee Confederacy, Alexander McGillivray, died in Pensacola on February 17, 1793, and was buried there. The confederacy elected his son-in-law, Charles Weatherford, in his place.
Spring and summer campaigns, 1793
A party of Muscogee under a mixed-race warrior named Lesley entered the Washington District and the recently established Hamilton District (carved out of the former) and began attacking isolated farmsteads. Lesley's party continued to harass the Holston settlements until the summer of 1794.
Lesley's group was not the only Muscogee party, nor were the Muscogee alone: warriors from the Upper Towns, and some from the Overhill and Valley Towns, also raided the eastern districts in the spring and summer of 1793.
In the Miro District, besides scalping raids, two parties attacked Bledsoe's Station and Greenfield Station in April 1793. Another party attacked Hays' Station in June. In August, the Coushatta from Coosada raided the country around Clarksville, Tennessee, attacking the homestead of the Baker family and killing all but three: two who escaped and one taken prisoner, who was later ransomed at Coosada Town. A war party of Tuskeegee from the Muscogee town of that name was also active in Middle Tennessee at this time.
After the visit of the Shawnee, Watts sent envoys to Knoxville, then the capital of the Southwest Territory, to discuss terms for peace with Governor William Blount. The envoys met Blount at Henry's Station on February 4, 1793, and Blount passed their offer on to Philadelphia, which invited the Lower Cherokee leaders to send a delegation to the capital to meet with President Washington. The party sent from the Lower Towns that May included Bob McLemore, Tahlonteeskee, Captain Charley of Running Water, and Doublehead, among several others.
Attack on the diplomatic party
The meeting in Philadelphia with Washington was scheduled for June 1793. On the way, the diplomatic party from the Lower Towns stopped at Coyatee, where Hanging Maw and other chiefs from the Upper Towns, who were also going, had gathered along with several whites who had arrived earlier.
A large party of Lower Cherokee (Pathkiller, aka The Ridge, among them) had been raiding the upper East Tennessee settlements, killing two men and stealing twenty horses. On their way out, they passed through Coyatee, to which the pursuit party tracked them. The militia violated their orders not to cross the Little Tennessee, then the border between the Cherokee nation and the Southwest Territory, and fired indiscriminately.
In the ensuing chaos, eleven leading men were killed, including Captain Charley, and several wounded, including Hanging Maw, his wife and daughter, Doublehead, and Tahlonteeskee; one of the white delegates was among the dead.
The Cherokee, even Watts’ hostile warriors, agreed to await the outcome of the subsequent trial, which proved to be a farce, in large part because John Beard, the man responsible, was a close friend of John Sevier.
Invasion of the Eastern Districts
Watts responded to Beard's acquittal by invading the Holston area with one of the largest Indian forces ever seen in the region, over one thousand Cherokee and Muscogee plus a few Shawnee, intending to attack Knoxville itself. The plan was for four bodies of troops to march toward Knoxville separately, converging at a previously agreed-upon rendezvous point along the way.
In August, Watts attacked Henry's Station with a force of two hundred but fell back in the face of overwhelming gunfire from the fort, not wanting to risk another misfortune like that at Buchanan's Station the previous year.
The four columns converged a month later near the present Loudon, Tennessee, and proceeded toward their target. On the way, the Cherokee leaders debated among themselves whether to kill all the inhabitants of Knoxville or just the men, James Vann advocating the latter while Doublehead argued for the former.
Further along the way, on September 25, they encountered a small settlement called Cavett's Station. After they had surrounded the place, Benge negotiated with the inhabitants, agreeing that if they surrendered, their lives would be spared. However, after the settlers had walked out, Doublehead's group and his Muscogee allies attacked and began killing them all, over the pleas of Benge and the others. Vann managed to grab one small boy and pull him onto his saddle, only to have Doublehead smash the boy's skull with an axe. Watts intervened in time to save another young boy, handing him to Vann, who put the boy behind him on his horse and later handed him over to three of the Muscogee for safekeeping; unfortunately, one of the Muscogee chiefs killed and scalped the boy a few days later.
Because of this incident, Vann called Doublehead "Babykiller" (deliberately parodying the honorable title "Mankiller") for the remainder of his life; it also began a lengthy feud which defined the politics of the early 19th-century Cherokee Nation and ended only in 1807 with Doublehead's death at Vann's orders. In the aftermath, tensions among the Cherokee broke out into such vehement arguments that the force broke up, with the main group retiring south.
Battle of Etowah
Sevier countered with an invasion and occupation of Ustanali, which had been deserted; there was no fighting there other than an indecisive skirmish with a Cherokee-Muscogee scouting party. He and his men then followed the Cherokee-Muscogee force south to the town of Etowah (Itawayi; near the site of present-day Cartersville, Georgia, across the Etowah River from the Etowah Indian Mounds), leading to what Sevier called the "Battle of Hightower" on October 17, 1793. His force defeated their opponents soundly, then went on to destroy several Cherokee villages to the west before retiring to the Southwest Territory.
The Battle of Etowah was the last pitched battle of the wars between the Cherokee and the American frontier people.
Southwest Point Blockhouse
Built at the direction of John Sevier in November 1793, this blockhouse at the confluence of the Clinch and Holston Rivers was garrisoned initially by Southwest Territory militia. Federalized and expanded into Fort Southwest Point in 1797, it then housed a contingent of regular army troops that grew from fifteen to six hundred forty-five before the agency transferred to Hiwassee Garrison at modern Calhoun, Tennessee in 1807.
In January 1794, Overhill Towns headman Hanging Maw requested, and Governor Blount approved, the building of a blockhouse in which to station a garrison of federal troops. John McKee, newly appointed federal agent to the Cherokee, was stationed there as well.
Another Spanish treaty
Using John McDonald, who had remained in communication with Alexander McKee in Canada, as their emissary, the four nations (Cherokee, Muscogee, Choctaw, Seminole; the Chickasaw were left out) negotiated a treaty of military protection with the Spanish government in New Orleans that was signed at Walnut Hills on April 10, 1794.
Spring and summer 1794
Between January and September 1794, there were more than forty raids by war parties of both Cherokee and Muscogee on the Miro District. On the part of the Cherokee, these were mostly carried out by Doublehead. These raids precipitated the Nickajack Expedition in September which ended the Cherokee–American wars once and for all.
Meanwhile, Doublehead's nephew Bob Benge attacked the Washington District and Southwest Virginia, finally losing his life in the latter on April 6, 1794. The militia sent his red-haired scalp to the governor, Henry Lee III, father of Robert E. Lee.
Benge was not alone in raiding the Eastern Districts. Fifty horses were stolen in the region that same month. Twenty-five warriors attacked the Town Creek blockhouse. An entire family save one was massacred south of the French Broad. There were many other attacks.
Frustrated with the governor's call for restraint, John Beard, leader of the pursuit party that had attacked the diplomatic party at Coyatee, organized a party of one hundred fifty men in the Washington District and attacked the Hiwassee Towns, burning two, including Great Hiwassee, and killing several Cherokee.
Against orders, George Doherty of the Hamilton District militia mustered his men and attacked Great Tellico, burning it to the ground, then crossed the mountains into the Valley Towns, in which they burned at least two towns and several acres of crops.
On June 9, 1794, a party of Cherokee under Whitemankiller (Unegadihi; aka George Fields) overtook a river party under William Scott at Muscle Shoals. They killed its white passengers, looted the goods, and took the African-American slaves as captives.
Treaty of Philadelphia (1794)
The federal government signed the Treaty of Philadelphia, which essentially reaffirmed the land cessions of the 1785 Treaty of Hopewell and the 1791 Treaty of Holston, with the Cherokee on June 26. Both the chiefs Doublehead and Bloody Fellow signed it.
End of Lesley’s war party
In July 1794, Hanging Maw sent his men, along with volunteers from the Holston settlements, to pursue Lesley's Muscogee war party; they killed two of its members and handed over a third to the whites for trial and execution on August 4.
Two days later, a small war party of Muscogee crossed the Tennessee River at Chestua Creek in modern Bradley County. Hanging Maw called up his warriors, fifty of whom, led by his son Willicoe and Middlestriker of Willstown, joined with federal troops in pursuit while the rest guarded Coyatee. They caught up with the party they were pursuing on August 12 near Craig’s Station and defeated them in battle.
Different Muscogee war parties, however, escaped their pursuers and attacked the Holston frontier for the rest of the month.
Battle of Fallen Timbers
At the Battle of Fallen Timbers on August 20, 1794, the Indian force of fourteen hundred, led by Blue Jacket of the Shawnee, Little Turtle (Michikinikwa) of the Miami, and Buckongahelas of the Lenape, included warriors from those nations along with over a hundred Cherokee, plus Wyandot, Ojibwe, Ottawa, Potawatomi, Mingo, and Muscogee warriors, and a company of Canadian militia under Alexander McKillop.
The short battle ended in a complete rout of the Indian force by the Legion, which was more than twice its size, and sounded the death knell of the Western Confederacy.
Aborted invasion of the Miro District
In August 1794, the US Indian Agent to the Chickasaw sent word from Chickasaw territory to General Robertson of the Miro District, as the Cumberland region was then called, that the Cherokee and Muscogee were going to attack settlements all along the river. He reported that a war party of 100 was going to take canoes down the Tennessee to the lower river, while another of 400 was going to attack overland after passing through the Five Lower Towns and picking up reinforcements.
The river party began the journey toward the targets. In the larger mixed Muscogee-Cherokee overland party, however, there was much dissension over the actions of Hanging Maw regarding Lesley's war party. The large war party broke up before reaching the settlements. Only three small parties made it to the Cumberland area, each going to one of the three counties that existed at the time, and they operated into at least September.
In May 1794, Revolutionary War hero Elijah Clarke led a party of fellow Georgians across the Oconee River to settle the west side, an annexation by occupation. This came about after a French-backed scheme to invade East Florida fell through. After Clarke and his followers ignored the governor's orders to leave, a combined force of federal troops and state militia destroyed their fort and homesteads in September.
Desiring to end the wars once and for all, Robertson sent a detachment of U.S. regular troops, Miro District militia, and Kentucky volunteers to the Five Lower Towns under U.S. Army Major James Ore. Guided by knowledgeable locals, including former captive Joseph Brown, Ore's army traveled down the Cisca and St. Augustine Trail toward the Five Lower Towns.
On September 13, the army attacked Nickajack without warning, slaughtering many of the inhabitants, including its pacifist chief The Breath. After torching the houses, the soldiers went upriver and burned Running Water, whose residents had long fled. Joseph Brown fought alongside the soldiers, but tried to spare women and children. The Cherokee casualties were relatively light, as the majority of the population of both towns were in Willstown attending a major stickball (similar to lacrosse) game.
Treaty of Tellico Blockhouse (1794)
Watts finally decided to call for peace: he was discouraged by the destruction of the two towns, the death of Bob Benge in April, and the recent defeat of the Western Confederacy by General "Mad Anthony" Wayne's army at the Battle of Fallen Timbers, where more than 100 Cherokee had fought.
The loss of support from the Spanish, who had their own problems with revolutionary France in Europe, convinced Watts to end the fighting. Two months later, on November 7, 1794, he made the Treaty of Tellico Blockhouse, which finally ended the series of conflicts. It was notable for requiring no further cession of land by the Cherokee, other than obliging the Lower (or Chickamauga) Cherokee to recognize the cessions of the Holston treaty. This led to a period of relative peace into the 19th century.
Muscogee continue the war
The Muscogee kept on fighting after the destruction of Nickajack and Running Water and the following peace between the Lower Cherokee and the United States. In October 1794, they attacked Bledsoe's Station again. In November, they attacked Sevier's Station and massacred fourteen of the inhabitants, Valentine Sevier being one of the few survivors.
In December 1794, a force of Cherokee warriors from the Upper Towns stopped a Muscogee campaign against the frontier settlements of the state of Georgia and warned them to cease attacking the Southwest Territory's Eastern Districts as well.
In early January 1795, however, the Chickasaw, who as allies of the United States had sent warriors to take part in the Army of the Northwest, began killing and scalping Muscogee warriors found in Middle Tennessee. By March, the Muscogee had turned their attention away from the Cumberland and toward the Chickasaw, over the entreaties of the Cherokee and the Choctaw.
The Muscogee-Chickasaw War ended in a truce negotiated by the U.S. government at Tellico Blockhouse in October that year in a conference attended by the two belligerents and the Cherokee.
Treaty of Greenville
The northern allies of the Lower Cherokee in the Western Confederacy signed the Treaty of Greenville with the United States in August 1795, ending the Northwest Indian War. The treaty required them to cede the territory that became the State of Ohio and part of what became the State of Indiana to the United States and to acknowledge the United States, rather than Great Britain, as the predominant ruler of the Northwest.
None of the Cherokee in the North were present at the treaty. Later that month, Gen. Wayne sent a message to Long Hair (Gitlugunahita), leader of those who remained in the Ohio country, that they should come in and sue for peace. Long Hair replied that all of them would return south as soon as they finished the harvest. However, not all of them did so; at least one, called Shoe Boots (Dasigiyagi), stayed in the area until 1803, so it is likely others did as well.
Treaty of Coleraine
At the trading post of Coleraine in what is now southern Georgia, the Muscogee signed a peace treaty with the United States on June 29, 1796, effectively ending the Southwest Indian War.
Treaty of San Lorenzo
Also known as Pinckney's Treaty, this treaty between Spain and the United States was signed on October 27, 1795, setting the boundary between American territory and Spanish West and East Florida at the 31st parallel. Furthermore, Spain agreed to allow the U.S.A. unobstructed use of the Mississippi River and to dismantle Fort San Fernando de las Barrancas at Chickasaw Bluffs. Both parties agreed to cease stirring up the Indian tribes against each other.
Aftermath and Assessment
Following the peace treaty, leaders from the Lower Cherokee were dominant in national affairs. When the national government of all the Cherokee was organized, the first three persons to hold the office of Principal Chief of the Cherokee Nation – Little Turkey (1788–1801), Black Fox (1801–1811), and Pathkiller (Nunnehidihi; 1811–1827) – had previously served as warriors under Dragging Canoe, as had the first two Speakers of the Cherokee National Council, established in 1794, Doublehead and Turtle-at-Home.
The domination of the Cherokee Nation by the former warriors from the Lower Towns continued well into the 19th century. Even after the revolt of the young chiefs of the Upper Towns, the Lower Towns were a major voice, and the "young chiefs" of the Upper Towns who dominated that region had themselves previously been warriors with Dragging Canoe and Watts.
Lasting from 1776 to 1795, the Cherokee–American wars ran nearly twenty years, making them one of the longest-running conflicts between Indians and Americans (and considering that Cherokee had been involved, at least in small numbers, in all the conflicts beginning in 1758, the figure could be nearly forty years). Despite its length, its importance at the time, and its influence on later Native American leaders, the conflict has often been overlooked.
Because of the continuing hostilities that followed the Revolution, the United States placed one of the two permanent garrisons of the new country at Fort Southwest Point at the confluence of the Tennessee and Clinch Rivers; the other was at Fort Pitt in Pennsylvania. Most historians, overlooking these conflicts, have failed to include Dragging Canoe as one of the notable Native American war chiefs and diplomats. Some texts dealing with conflicts between "Americans" and "Indians" often barely mention him.
See also
- Timeline of Cherokee removal
- Historic treaties of the Cherokee
- Eastern Band of Cherokee Indians
- United Keetoowah Band of Cherokee Indians
- Cherokee Nation of Oklahoma
- Principal Chiefs of the Cherokee
- Flora, MacKethan, and Taylor, p. 607 | "Historians use the term Old Southwest to describe the frontier region that was bounded by the Tennessee River to the north, the Gulf of Mexico to the South, the Mississippi River to the west, and the Ogeechee River to the east"
- Goodpasture, p. 27
- Phelan, p. 43
- Alderman, p. 37
- Klink and Talman, p. 62
- O'Donnell, p. 18
- Lavender, p. 4
- Anderson and Lewis, p. 22
- Flint, p. 108
- Evans (1977), "Dragging Canoe," p. 179
- Brown, Old Frontiers, p. 138
- Evans (1977), "Dragging Canoe", pp. 180–182
- Murphy, p. 523
- Brown, Old Frontiers, pp. 141–146
- Calloway, pp. 194–197
- Hoig, p. 59
- Brown, Old Frontiers, pp. 148–149
- O'Donnell, p. 40
- Milling, pp. 314–316
- Milling, p. 318
- Milling, pp. 316–317
- Milling pp. 318
- Hunter, p. 176
- Mays, p. 65
- O'Donnell, p. 47
- O'Donnell, p. 47
- O'Donnell, p. 46
- Alderman, p. 38
- Brown, Old Frontiers, p. 161
- Anderson and Lewis, p. 157
- Moore and Foster, p. 168
- O'Donnell, pp. 57–59
- Lowrie and Clarke, Indian Affairs, p. 20
- Calloway, pp. 261–264
- Gilmore, "Alexander McGillivray", pp. 118–119
- O'Donnell, pp. 69–79
- O'Brien, pp. 125–129
- Brown, Old Frontiers, pp. 161, 163
- Anderson and Lewis, pp. 157–158
- Brown, Old Frontiers, p. 163–164
- Mooney, p. 63
- Brown, Old Frontiers, pp. 161–172
- Brown, pp. 162–163
- Anderson and Lewis, p. 160
- Anderson and Lewis, p. 160
- Anderson and Lewis, p. 163
- Anderson and Lewis, p. 163
- Anderson and Lewis, p. 164
- Goodpasture, p. 37
- O'Donnell, pp. 95–108
- Brown, Old Frontiers, pp. 172–173
- O'Donnell, p. 89
- Anderson and Lewis, p. 166
- O'Donnell, pp. 83–84
- Ramsey, p. 186
- Brown, Old Frontiers, p. 173
- Brown, Old Frontiers, p. 174
- Evans, "Dragging Canoe", p. 184
- Anderson and Lewis, p. 193
- O'Donnell, pp. 84–85
- Anderson and Lewis, p. 25
- Tanner, p. 98
- Brown, Old Frontiers, pp. 205–207
- O'Donnell, pp. 96–97
- Hoig, p. 68
- O'Donnell, pp. 103–104
- O'Donnell, pp. 105–106
- Mooney, pp. 57–58
- O'Donnell, p. 107
- Moore, p. 175
- Brown, Old Frontiers, p. 196
- Mooney, pp. 58–59
- Mooney, p. 59
- O'Donnell, p. 111
- O'Donnell, pp. 113–114
- Moore, pp. 180–182
- Summers, pp. 361–443
- O'Donnell, p. 114–115
- Anderson and Lewis, p. 21
- Tanner, p. 99
- O'Donnell, p. 123
- O'Donnell, p. 135
- Moore, p. 182
- Brown, Old Frontiers, p. 175
- Mooney, p. 59–60
- O'Donnell, pp. 126–127
- Tanner, pp. 101
- Brown, Old Frontiers, pp. 204–205
- Calloway, p. 264
- Evans, "Dragging Canoe", p. 185
- Mooney, Myths and Sacred Formulas, p. 60
- Brown, Old Frontiers, p. 270
- Ramsey, p. 280
- Braund, p. 171
- Moore, pp. 182–187
- Brown, Old Frontiers, pp. 240–241
- Roosevelt, p. 50
- Brown, Old Frontiers, pp. 245–246
- Tanner, pp. 95–105
- Brown, Old Frontiers, p. 215
- Brown, Old Frontiers, p. 251
- Green, pp. 120–138
- Ramsey, pp. 523–540
- Henderson, Chap. XX
- Ramsey, p. 341–342
- Gilmore, John Sevier, pp. 75–84
- Faulkner, pp. 23, 107
- Goodpasture, pp. 140–141
- Williams, History of the Lost State of Franklin, p. 103
- Lowrie and Clarke, Indian Affairs, p. 8
- Roosevelt, p. 51
- Klink and Talman, p. 49
- Lowrie and Clarke, Indian Affairs, pp. 46–48
- Roosevelt, pp. 203–205
- Brown, Old Frontiers, pp. 265–269
- Heard, p. 138
- Drake, Chapt. II
- Eckert, pp. 379–387
- Brown, Old Frontiers, p. 271
- Lowrie and Clarke, Indian Affairs, p. 47
- Brown, Old Frontiers, p. 272
- Haywood, p. 197
- Brown, Old Frontiers, pp. 272–275
- Haywood, pp. 194–196
- Roosevelt, p. 212
- Brown, Old Frontiers, p. 309
- Evans, "Last Battle", 30–40
- Klink and Talman, p. 48
- Brown, Old Frontiers, p. 284
- Draper Mss. 16: DD-59
- Moore, p. 204
- Brown, Old Frontiers, pp. 293–295
- Brown, Old Frontiers, p. 297
- Brown, Old Frontiers, pp. 285–286
- Evans, "Bob Benge", p. 100
- Brown, Old Frontiers, pp. 286–290
- Brown, Old Frontiers, pp. 297–299
- Durham, pp. 22–23
- Wilson, pp. 47–48
- Mastromarino, pp. 291–294
- Brown, Old Frontiers, p. 275
- Brown, Old Frontiers, p. 299
- Moore, p. 201
- Lowrie and Clarke, Indian Affairs, pp. 34, 48
- Moore, p. 233
- Brown, Old Frontiers, p. 270
- Goodpasture, p. 178
- Brown, Old Frontiers, pp. 318–319
- Evans, "Bob Benge", p. 100
- Durham, pp. 65–66
- Lowrie and Clarke, Indian Affairs, pp. 271–272
- Lowrie and Clarke, Indian Affairs, p. 271
- Lowrie and Clarke, Indian Affairs, p. 263
- Goodpasture, p. 186
- Starr, p. 35
- Durham, p. 87
- Starr, p. 36
- Lowrie and Clarke, Foreign Relations, pp. 285–286
- Durham, pp. 80–81
- Durham, p. 82
- Moore, pp. 205–211
- Durham, p. 83
- Brown, Old Frontiers, pp. 344–366
- Lowrie and Clarke, Indian Affairs, pp. 294–295
- Hoig, p. 83
- Durham, pp. 84–85
- Evans, "Bob Benge", p. 101–102
- Brown, Old Frontiers, pp. 367–368
- Durham, pp. 112–114
- Brown, Old Frontiers, pp. 370–371
- Moore, pp. 225–231
- Durham, pp. 118–122
- Moore, pp. 215–220
- Durham, p. 117
- Brown, Old Frontiers, p. 366
- Brown, Old Frontiers, pp. 369–370
- Brown, Old Frontiers, p. 387
- Moore, pp. 220–225
- Evans, "Bob Benge", pp. 103–104
- Brown, Old Frontiers, p. 389
- Brown, Old Frontiers, pp. 389–390
- Faulkner, p. 63
- Brown, Old Frontiers, pp. 390–391
- Wilkins, pp. 25–26
- Faulkner, pp. 76–80
- Ramsey, p. 581
- Ehle, pp. 44–46
- Miles, p. 36
- Wilkins, p. 26
- Wilkins, p. 26
- Faulkner, p. 134
- Anderson and Lewis, p. 4
- Brown, Old Frontiers, p. 418
- Brown, Old Frontiers, pp. 397–398
- Durham, p. 166
- Brown, Old Frontiers, pp. 400–402
- Brown, Old Frontiers, pp. 402–403
- Durham, pp. 166–167
- Durham, pp. 135–137
- Durham, pp. 137–138
- Mooney, p. 77
- Evarts, pp. 30–31
- Haywood, p. 323
- Haywood, pp. 323–325
- Brown, Old Frontiers, p. 421
- Brown, Old Frontiers, pp. 421–422
- Mooney, p. 78
- Brown, Old Frontiers, p. 422
- Mooney, pp. 78–79
- Brown, Old Frontiers, pp. 421–431
- Brown, Old Frontiers, pp. 433–436
- Durham, p. 189
- Moore, pp. 244–250
- Royce, p. 36, note 1
- Adair, James. History of the American Indian. (Nashville: Blue and Gray Press, 1971).
- Alderman, Pat. Dragging Canoe: Cherokee-Chickamauga War Chief. (Johnson City: Overmountain Press, 1978)
- Allen, Penelope. "The Fields Settlement". Penelope Allen Manuscript. Archive Section, Chattanooga-Hamilton County Bicentennial Library.
- Anderson, William, and James A. Lewis. A Guide to Cherokee Documents in Foreign Archives. (Metuchen: Scarecrow Press, 1995).
- Appleton, James. "Treaty of New York (1790)". Encyclopedia of Alabama.
- Braund, Kathryn E. Holland. Deerskins and Duffels: Creek Indian Trade with Anglo-America, 1685–1815. (Lincoln: University of Nebraska Press, 1986).
- Brown, John P. "Eastern Cherokee Chiefs". Chronicles of Oklahoma, Vol. 16, No. 1, pp. 3–35. (Oklahoma City: Oklahoma Historical Society, 1938).
- Brown, John P. Old Frontiers: The Story of the Cherokee Indians from Earliest Times to the Date of Their Removal to the West, 1838. (Kingsport: Southern Publishers, 1938).
- Calloway, Colin G. The American Revolution in Indian Country: Crisis and Diversity in Native American Communities. (Cambridge: Cambridge University Press, 1995).
- Drake, Benjamin. Life Of Tecumseh And Of His Brother The Prophet; With A Historical Sketch Of The Shawanoe Indians. (Mount Vernon: Rose Press, 2008).
- Durham, Walter T. Before Tennessee: The Southwest Territory, 1790–1796 : A Narrative History of the Territory of the United States South of the River Ohio. (Rocky Mount: Rocky Mount Historical Assn., 1990).
- Eckert, Allan W. A Sorrow in Our Heart: The Life of Tecumseh. (New York: Bantam, 1992).
- Ehle, John. Trail of Tears: The Rise and Fall of the Cherokee Nation. (New York: Doubleday, 1988).
- Evans, E. Raymond, ed. "The Battle of Lookout Mountain: An Eyewitness Account, by George Christian". Journal of Cherokee Studies, Vol. III, No. 1. (Cherokee: Museum of the Cherokee Indian, 1978).
- Evans, E. Raymond. "Notable Persons in Cherokee History: Ostenaco". Journal of Cherokee Studies, Vol. 1, No. 1, pp. 41–54. (Cherokee: Museum of the Cherokee Indian, 1976).
- Evans, E. Raymond. "Notable Persons in Cherokee History: Bob Benge". Journal of Cherokee Studies, Vol. 1, No. 2, pp. 98–106. (Cherokee: Museum of the Cherokee Indian, 1976).
- Evans, E. Raymond. "Notable Persons in Cherokee History: Dragging Canoe". Journal of Cherokee Studies, Vol. 2, No. 2, pp. 176–189. (Cherokee: Museum of the Cherokee Indian, 1977).
- Evans, E. Raymond. "Was the Last Battle of the American Revolution Fought on Lookout Mountain?". Journal of Cherokee Studies, Vol. V, No. 1, pp. 30–40. (Cherokee: Museum of the Cherokee Indian, 1980).
- Evans, E. Raymond, and Vicky Karhu. "Williams Island: A Source of Significant Material in the Collections of the Museum of the Cherokee". Journal of Cherokee Studies, Vol. 9, No. 1, pp. 10–34. (Cherokee: Museum of the Cherokee Indian, 1984).
- Evarts, Jeremiah. Essays on the Present Crisis on the Condition of the American Indians. (Boston: Perkins & Martin, 1829).
- Faulkner, Charles. Massacre at Cavett’s Station: Frontier Tennessee during the Cherokee Wars. (Knoxville: University of Tennessee Press, 2013).
- Flint, Timothy. Indian Wars of the West. (Cincinnati: E. H. Flint, 1833).
- Flora, Joseph, Lucinda Hardwick MacKethan, and Todd Taylor. "Old Southwest". The Companion to Southern Literature: Themes, Genres, Places, People, Movements, and Motifs. (Baton Rouge: LSU Press, 2001).
- Frank, Andrew. "Alexander McGillivray". Encyclopedia of Alabama.
- Gilmore, James R. “Alexander McGillivray”. Appleton’s Cyclopaedia of American Biography, Volume 4, James Grant Wilson and John Fiske, ed. (New York: Appleton and Co., 1888).
- Gilmore, James R. John Sevier as a commonwealth builder. (New York: D. Appleton and Co., 1887).
- Goodpasture, Albert V. “Indian Wars and Warriors of the Old Southwest, 1720–1807”. Tennessee Historical Magazine, Volume 4, pp. 3–49, 106–145, 161–210, 252–289. (Nashville: Tennessee Historical Society, 1918).
- Green, Thomas Marshall. The Spanish Conspiracy : a Review of Early Spanish Movements in the South-West. Containing Proofs of the Intrigues of James Wilkinson and John Brown; of the Complicity Therewith of Judges Sebastian, Wallace, and Innes; the Early Struggles of Kentucky for Autonomy; the Intrigues of Sebastian in 1795–7, and the Legislative Investigation of His Corruption. (Cincinnati: Robert Clarke & Co., 1891).
- Hamer, Philip M. Tennessee: A History, 1673–1932. (New York: American History Association, 1933).
- Hays, J.E., ed. Indian Treaties Cessions of Land in Georgia 1705–1837. (Atlanta: Georgia Department of Archives and History, 1941).
- Haywood, W.H. The Civil and Political History of the State of Tennessee from its Earliest Settlement up to the Year 1796. (Nashville: W. H. Haywood, 1823).
- Heard, J. Norman. Handbook of the American Frontier, The Southeastern Woodlands: Four Centuries of Indian-White Relationships. (Metuchen: Scarecrow Press, 1993).
- Henderson, Archibald. The Conquest Of The Old Southwest: The Romantic Story Of The Early Pioneers Into Virginia, The Carolinas, Tennessee And Kentucky 1740 To 1790. (New York: The Century Co., 1920).
- Henderson, Archibald. “The Spanish Conspiracy in Tennessee”. Tennessee Historical Magazine, Vol. 3. (Nashville: Tennessee Historical Society, 1917).
- Hoig, Stanley. The Cherokees and Their Chiefs: In the Wake of Empire. (Fayetteville: University of Arkansas Press, 1998).
- Hunter, C.L. Sketches of Western North Carolina, Historical and Biographical. (Raleigh: Raleigh News Steam Job Print, 1877).
- King, Duane H. The Cherokee Indian Nation: A Troubled History. (Knoxville: University of Tennessee Press, 1979).
- Klink, Karl, and James Talman, ed. The Journal of Major John Norton. (Toronto: Champlain Society, 1970).
- Kneberg, Madeline and Thomas M.N. Lewis. Tribes That Slumber. (Knoxville: University of Tennessee Press, 1958).
- Lavender, Billy. A Pioneer Church in the Oconee Territory: A Historical Synopsis of Antioch Christian Church. (Bloomington: iUniverse, Inc., 2005).
- Lowrie, Walter, and Matthew St. Clair Clarke, ed. American State Papers: Foreign Relations, Volume I. (Washington: Giles and Seaton, 1832).
- Lowrie, Walter, and Matthew St. Clair Clarke, ed. American State Papers: Indian Affairs, Volume I. (Washington: Giles and Seaton, 1832).
- McLoughlin, William G. Cherokee Renascence in the New Republic. (Princeton: Princeton University Press, 1992).
- Mastromarino, Mark A., ed. The Papers of George Washington, Presidential Series, vol. 6, 1 July 1790 – 30 November 1790. (Charlottesville: University Press of Virginia, 1996).
- Mays, Terry. “Cherokee Campaign of 1776”. Historical Dictionary of the American Revolution. (Metuchen: Scarecrow Press, 1999).
- Miles, Tiya. The House on Diamond Hill: A Cherokee Plantation Story. (Chapel Hill: University of North Carolina Press, 2010).
- Milling, Chapman. Red Carolinians. (Chapel Hill: University of North Carolina Press, 1940).
- Mooney, James. Myths of the Cherokee and Sacred Formulas of the Cherokee, Smithsonian Institution, 1891 and 1900; reprinted, (Nashville: Charles and Randy Elder-Booksellers, 1982).
- Moore, John Trotwood and Austin P. Foster. Chapter IX: “Indian Wars and Warriors of Tennessee”. Tennessee, The Volunteer State, 1769–1923, Vol. 1, pp. 157–250. (Chicago: S. J. Clarke Publishing Co., 1923).
- Murphy, Justin D. “Grand Council on Muscle Shoals”. The Encyclopedia of North American Indian Wars, 1607–1890: A Political, Social, and Military History, Spencer C. Tucker, ed. (Santa Barbara: ABC-CLIO, 2011).
- O’Brien, Greg, ed. Pre-removal Choctaw History: Exploring New Paths. (Norman: University of Oklahoma Press, 2008).
- O'Donnell, James. Southern Indians in the American Revolution. (Knoxville: University of Tennessee Press, 1973).
- Phelan, James. History of Tennessee: The Making of a State. (Cambridge: Riverside Press, 1888).
- Ramsey, James Gettys McGregor. The Annals of Tennessee to the End of the Eighteenth Century. (Charleston: John Russell, 1853).
- Reynolds, William R., Jr. Andrew Pickens: South Carolina Patriot in the Revolutionary War. (Jefferson NC: McFarland & Company, Inc., 2012).
- Roosevelt, Theodore. The Winning of the West, Part IV: The Indian Wars, 1784–1787. (New York: Current Literature Publishing Co., 1905).
- Royce, C.C. "The Cherokee Nation of Indians: A narrative of their official relations with the Colonial and Federal Governments". Fifth Annual Report, Bureau of American Ethnology, 1883–1884. (Washington: Government Printing Office, 1889).
- Starr, Emmet. History of the Cherokee Indians, and their Legends and Folklore. (Fayetteville: Indian Heritage Assn., 1967).
- Summers, Lewis Preston. History of Southwest Virginia, 1746–1786, Washington County, 1777–1870. (Richmond: J.L. Printing Co., 1903).
- Tanner, Helen Hornbeck. "Cherokees in the Ohio Country". Journal of Cherokee Studies, Vol. III, No. 2, pp. 95–103. (Cherokee: Museum of the Cherokee Indian, 1978).
- Toulmin, Llewellyn M. “Backcountry Warrior: Brig. Gen. Andrew Williamson”, Journal of Backcountry Studies, Vol. 7, No. 1. (Greensboro: 2010).
- Wilkins, Thurman. Cherokee Tragedy: The Ridge Family and the Decimation of a People. (New York: Macmillan Company, 1970).
- Williams, Samuel Cole. Early Travels in the Tennessee Country, 1540–1800. (Johnson City: Watauga Press, 1928).
- Williams, Samuel Cole. History of the Lost State of Franklin. (New York: Press of the Pioneers, 1933).
- Wilson, Frazer Ells. The Peace of Mad Anthony. (Greenville: Chas. B. Kemble Book and Job Printer, 1907).
- The Cherokee Nation
- United Keetoowah Band
- Eastern Band of Cherokee Indians (official site)
- Annual report of the Bureau of Ethnology to the Secretary of the Smithsonian Institution (1897/98: pt.1), Contains The Myths of The Cherokee, by James Mooney
- Muscogee (Creek) Nation of Oklahoma (official site)
- Account of 1786 conflicts between Nashville-area settlers and natives (second item in historical column)
- The journal of Major John Norton
- Emmett Starr's History of the Cherokee Indians
Ankylosing spondylitis (AS) is a type of arthritis that causes inflammation in the joints of the spine. The most common areas affected are the sacroiliac joints, which are the joints at the base of the spine that connect the spine and the pelvis, as well as the joints between the vertebrae. Other joints, such as the hips and shoulders, may also be similarly affected. AS causes pain, stiffness, and inflammation at the affected joints.
AS is the most common of the arthritis conditions known as spondylopathies. The second most common spondylopathy occurs in people with psoriasis.
About 1% of Canadians have AS. Having a family member with AS increases your risk of developing the condition, since the disease is at least partly hereditary. People with a certain molecule called HLA-B27 on the surface of their cells are also more likely to get AS. Having both HLA-B27 and a family history further increases your risk, especially if a first-degree relative (e.g., a parent) has AS. However, if you carry this molecule without a family history, the chance of getting this condition is lower. 90% of Caucasian patients with AS are HLA-B27-positive, compared to only 50% of people of African descent. There is also thought to be an environmental risk, since if one identical twin has AS there is only a 50% chance that the other will have it.
AS affects about three times as many men as women, but it may be that the disease is less recognized among women. Most people are first diagnosed between the ages of 15 and 40 years. However, younger and older people can also be affected.
The cause of AS is not completely understood, but it’s believed to be at least partly related to genetics. AS is more common in people with a family history of the condition. One theory is that AS is "triggered" by something in the environment, such as an infection, for people whose genes put them at risk of AS. The immune system responds to this trigger by producing chemicals that cause inflammation in the spine and other joints of the body. There is no evidence, however, that an infection causes the disease.
It is also known that people with a molecule called HLA-B27 on the surface of their cells are at higher risk of developing AS. HLA-B27 can be passed down from parent to child. Although it increases the risk of AS, not everyone with HLA-B27 will get AS.
Symptoms and Complications
AS can cause a variety of different symptoms, but most people with AS have low back pain and stiffness. The stiffness is often worst in the morning and after you have been inactive for a while. The back pain and stiffness can prevent you from moving around comfortably and getting a good night’s sleep.
Most people with other kinds of low back pain feel better after a night’s rest, but a hallmark of AS is that people usually feel worse and stiffer in the morning after sleeping. The involvement of the spine is called axial; involvement of other joints is termed peripheral. Most people will have mainly axial involvement.
Other joints can also become affected by AS, such as the hips, shoulders, and knees. People with AS can experience fatigue, weight loss, and loss of appetite. The symptoms of AS tend to come and go, with periods of no symptoms followed by flare-ups.
AS can cause complications both in the joints and elsewhere in the body, including:
- bent-over or unusually straight posture
- limited mobility (ability to move around)
- eye inflammation (uveitis, which can cause eye pain, irritation, and sensitivity to light and requires immediate medical attention)
- breathing problems due to stiffness in the joints between the spine and the ribs
- arthritis and sometimes significant damage to hip and shoulder joints (and occasionally to other joints)
- inflammation in places where ligaments and tendons attach to the bone (called enthesitis)
- low-grade fever
- loss of appetite and weight loss
Very rare complications can include:
- inflammation of the aorta, a large blood vessel that brings blood from the heart to the rest of the body, and secondary aortic valve insufficiency (when the aortic valve is weakened, preventing the valve from closing properly)
- spinal cord injury due to fractures (breaks) in the spine
- cauda equina syndrome, where AS damages the nerves at the base of the spinal cord, leading to loss of sensation in the buttocks, rectum, thighs, and bladder; or loss of bowel or bladder control
Making the Diagnosis
Your doctor will make the diagnosis of AS based on a combination of your symptoms, an X-ray or another type of imaging of your affected joints, and certain blood tests. The earliest sign is sacroiliitis (inflammation of the sacroiliac joint). It can be seen on an X-ray of the pelvis, or it can be detected at an even earlier stage with an MRI scan.
If you have AS, the X-ray will show areas where the bone has been worn away by the condition. The vertebrae of the spine may start to fuse together because the ligaments between them become calcified. The term for bones growing together due to inflammation is ankylosis, and this is where the name ankylosing spondylitis comes from ("spondyl" refers to the spine and "itis" means inflammation). A physical exam for AS includes the Schober test to assess the flexibility of the spine, which can be abnormal even when it’s not obvious to the person.
Your doctor may also do a blood test for HLA-B27, as well as a test called erythrocyte sedimentation rate (ESR) or a C-reactive protein test (CRP). A high ESR or CRP is a sign of conditions with inflammation, such as AS. However, it does not definitely mean that you have AS, since many other conditions can also cause a high ESR or CRP, and two-thirds of people with AS have a normal ESR.
Since back pain and osteoarthritis are common and AS comes on gradually, there is often a long delay in recognizing AS, especially among doctors who are not specialized in rheumatology (the study and treatment of arthritis and other diseases that affect joints, muscles, and bones).
Treatment and Prevention
Currently, there is no cure for AS, but it can be managed using medications, surgery, and physiotherapy.
Rheumatologists are the most specialized and experienced doctors in the diagnosis and management of AS and other spondylopathies.
Nonsteroidal anti-inflammatory drugs (NSAIDs; e.g., ibuprofen*, naproxen) work by reducing inflammation, which helps relieve the pain, stiffness, and swelling of AS. They do not slow down the progression of the condition; in other words, they don’t stop the disease from getting worse. Possible side effects of NSAIDs include nausea, abdominal pain, asthma, liver damage, heart problems, high blood pressure, stomach ulcers, and bleeding.
Corticosteroids (e.g., prednisone, triamcinolone, methylprednisolone) taken by mouth are rarely used to treat AS; however, steroid injections into the affected joints may be used. They relieve symptoms by reducing inflammation but do not affect spinal changes. Corticosteroids do not slow down the progression of the condition. Side effects of the injection include joint damage (if the injection is used too often) and infection.
Biologics (e.g., adalimumab, etanercept, golimumab, infliximab, certolizumab, secukinumab) are used to relieve signs and symptoms of the condition, including symptoms in the spine. Some biologics can also help to improve physical function for people with AS. They can slow or stop the progression of the disease, but they may not work for everyone. They work by blocking specific proteins in the body which are involved in causing inflammation. Biologics are given as an injection under the skin (a subcutaneous injection) or as an injection into the vein over a period of time (an intravenous infusion). Possible side effects include infusion reactions (e.g., rash, flushing, headache, and difficulty breathing), irritation at the injection area, nausea, headache, vomiting, diarrhea, fatigue, joint pain, or an increased risk of serious infection (including brain infection). There have also been reports of multiple sclerosis and systemic lupus erythematosus.
Disease modifying antirheumatic drugs (DMARDs; e.g., methotrexate, sulfasalazine) are used to relieve AS symptoms and slow down the progression of the condition. DMARDs may be beneficial to people with peripheral AS (AS that involves other joints). They do not relieve inflammation in the spine, but they reduce inflammation in other joints. Possible side effects include nausea, diarrhea, increased risk of infections, liver damage, lung damage, and bleeding.
Make sure you understand and discuss all the risks and benefits of taking any medications before you start them.
Surgery may be used to repair joint damage or replace damaged areas. For example, some people with AS may need a hip replacement.
A physiotherapist can show you special exercises to improve your flexibility, strength, and mobility. If the disease is not slowed or stopped, regular exercise and supervised physical therapy are essential to maintain a working posture once the spine becomes fused – it is far better to be stiff upright than bent over.
What are proofs in argument?
A proof of an argument is a list of statements, each of which is obtained from the preceding statements using one of the rules of inference T1, T2, S, C, or P. The last statement in the proof must be the conclusion of the argument.
What is proof of the truth?
A proof is sufficient evidence or a sufficient argument for the truth of a proposition. The concept applies in a variety of disciplines, with both the nature of the evidence or justification and the criteria for sufficiency being area-dependent.
What is the purpose of proof?
The function of a proof is mainly to attest in a rational and logical way a certain issue that we believe to be true. It is basically the rational justification of a belief.
How do you write a proof?
Writing a proof consists of a few different steps.
- Draw the figure that illustrates what is to be proved. …
- List the given statements, and then list the conclusion to be proved. …
- Mark the figure according to what you can deduce about it from the information given.
What is proof of validity?
A formal proof that an argument is valid consists of a sequence of propositions such that the last proposition in the sequence is the conclusion of the argument, and every proposition in the sequence is either a premise of the argument or follows by logical deduction from propositions that precede it in the list.
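As a minimal illustration (using the standard modus ponens rule rather than the rule names of any particular textbook), here is such a proof for the argument "P implies Q; P; therefore Q", typeset in LaTeX:

```latex
\documentclass{article}
\begin{document}
% A formal proof of validity for: P -> Q, P, therefore Q.
% Each numbered proposition is a premise or follows from earlier ones
% by a rule of inference; the last proposition is the conclusion.
\begin{enumerate}
  \item $P \rightarrow Q$ \quad (premise)
  \item $P$ \quad (premise)
  \item $Q$ \quad (from 1 and 2 by modus ponens)
\end{enumerate}
\end{document}
```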
What is the function of proof in an argumentative communication?
Evidence serves as support for the reasons offered and helps compel audiences to accept claims. Evidence comes in different sorts, and it tends to vary from one academic field or subject of argument to another.
What are the 3 types of proofs?
There are many different ways to go about proving something, we’ll discuss 3 methods: direct proof, proof by contradiction, proof by induction.
What does a proof consist of?
A proof is a sequence of logical statements, one implying another, which gives an explanation of why a given statement is true. Previously established theorems may be used to deduce the new ones; one may also refer to axioms, which are the starting points, “rules” accepted by everyone.
What is a statement accepted as true without proof?
Axiom. A statement about real numbers that is accepted as true without proof.
What is truth and validity?
Truth is the complete accuracy of whatever was, is, or will be: error-proof, beyond doubt, dispute, or debate, a final test of right or wrong of people’s ideas and beliefs. Validity is defined as the internal consistency of an argument.
What is validity of argument?
validity, In logic, the property of an argument consisting in the fact that the truth of the premises logically guarantees the truth of the conclusion. Whenever the premises are true, the conclusion must be true, because of the form of the argument.
How do you prove an argument is valid logic?
Valid: an argument is valid if and only if it is necessary that if all of the premises are true, then the conclusion is true; if all the premises are true, then the conclusion must be true; it is impossible that all the premises are true and the conclusion is false.
What makes an argument convincing?
It attempts to persuade a reader to adopt a certain point of view or to take a particular action. The argument must always use sound reasoning and solid evidence by stating facts, giving logical reasons, using examples, and quoting experts.
What is the meaning of argumentative communication?
Argumentative communication is considered a subset of assertiveness because, while all argumentation is assertive, not all assertiveness is argumentative. Argumentative individuals advocate positions on controversial issues and verbally attack other people’s contradictory perspectives.
Which part of an argument gives proof that your main point is correct?
A claim is the main argument. A counterclaim is the opposite of the argument, or the opposing argument. A reason tells why the claim is made and is supported by the evidence. Evidence is the facts or research to support your claim.
What are the parts of an argument called?
Arguments can be divided into four general components: claim, reason, support, and warrant.
What are the 3 parts of argument?
An argument is a connected series of statements that create a logical, clear, and defined statement. There are three stages to creating a logical argument: Premise, inference, and conclusion.
What are the 3 types of claims for an argument?
Three types of claims are as follows: fact, value, and policy. Claims of fact attempt to establish that something is or is not the case. Claims of value attempt to establish the overall worth, merit, or importance of something. Claims of policy attempt to establish, reinforce, or change a course of action.
What are the six types of argument claims?
The six most common types of claim are: fact, definition, value, cause, comparison, and policy.
What are claims of fact?
- Claim of Fact: Asserts that a condition has existed, exists, or will exist. To support a claim of fact, use factual evidence that is sufficient, reliable, and appropriate.
Ribonucleic acids (RNAs) are the working copies of DNA. The information present in DNA is expressed in the form of these working copies. Three major types of RNA are found in all living cells: messenger RNA, ribosomal RNA, and transfer RNA.
The messenger RNA (mRNA) carries the message of DNA from the nucleus to the cytoplasm. This message is used by the ribosomes to make proteins. In this article, we will discuss the synthesis of mRNA, its processing, and its role in protein synthesis. We will also see the differences between eukaryotic and prokaryotic mRNA.
A molecule of messenger RNA is a linear chain of ribonucleotides. The nucleotides are arranged in the form of triplets called codons. All the codons are collectively called the genetic code.
Genetic code is a dictionary through which the sequence of nucleotides in mRNA is translated into the sequence of amino acids in a protein. The genetic code is composed of words called codons. A codon is a sequence of three nucleotides that codes for one specific amino acid in proteins.
Codons are written using the one-letter symbols of the bases present in their nucleotides. Recall that four different bases can be present in the ribonucleotides of mRNA: adenine (A), guanine (G), cytosine (C), and uracil (U). Using these four bases, 64 different combinations can be made, with each combination taking three bases at a time. These 64 codons code for the 20 amino acids most commonly present in proteins.
- 61 codons code for the 20 amino acids. One of these, the initiation codon (AUG), is always present at the beginning of the coding sequence of the mRNA molecule and codes for the amino acid methionine.
- The other 3 codons do not code for any amino acid. They are called stop codons or termination codons. They are usually present at the end of the mRNA molecule. These codons signal to stop the process of protein synthesis.
The genetic code on mRNA is always read in the 5’ to 3’ direction. There is no break or punctuation between different codons, i.e., they are read continuously. This is the reason why the addition or deletion of one nucleotide changes the entire reading frame.
There are 61 codons for 20 amino acids. This shows that more than one codon may code for the same amino acid. However, one codon codes for only one amino acid.
With a few exceptions, the genetic code is universal. The individual codons code for the same amino acids in all the organisms. Therefore, this genetic code is applicable in the case of any living organism.
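To make the dictionary analogy concrete, here is a minimal Python sketch (not from any textbook) that reads an mRNA sequence codon by codon in the 5’ to 3’ direction and translates it until a stop codon is reached; the codon table is deliberately partial, covering only the codons that appear in the invented example sequence:

```python
# Minimal sketch: translate an mRNA coding sequence codon by codon (5' to 3').
# Only a partial codon table is included here for illustration.
CODON_TABLE = {
    "AUG": "Met",  # initiation codon, codes for methionine
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",  # termination codons
}

def translate(mrna: str) -> list[str]:
    protein = []
    # Codons are read continuously, three bases at a time, with no punctuation.
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":  # termination codon: stop protein synthesis
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

Deleting or inserting a single nucleotide near the 5’ end of this sequence would shift every downstream codon, which is exactly why a frameshift changes the entire reading frame.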
Like the other types of RNA, messenger RNA is also made from DNA by the process of transcription. This process involves the making of working copies of DNA in the form of mRNA.
Recall that mRNA carries the message of a gene. The process of transcription involves copying the sequence of nucleotides in a gene into the nucleotide sequence of mRNA. As the DNA molecule is double-stranded, one of the gene’s strands acts as a template strand for making mRNA. The other strand is called the non-template strand. The nucleotide sequence in mRNA is complementary to the template strand and identical to the non-template strand, except that uracil replaces thymine.
RNA polymerase is the enzyme that constructs the mRNA chain using the template strand of the gene. The enzyme attaches to the template strand and adds nucleotides to make mRNA. The synthesis of mRNA always takes place in the 5’ to 3’ direction. The 5’ end is called the head while the 3’ end is called the tail of mRNA.
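To illustrate the relationship between the template strand and the mRNA, here is a small Python sketch; the gene fragment is hypothetical and chosen purely for illustration:

```python
# Minimal sketch: transcribe a template DNA strand into mRNA.
# The mRNA is complementary to the template strand (A<->U, T->A, G<->C)
# and identical to the non-template strand except that U replaces T.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    # The template is read 3' to 5' while the mRNA grows 5' to 3',
    # so pairing the bases in order yields the mRNA in 5' to 3' order.
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

template = "TACAAATTTGGCATT"   # 3' -> 5' (hypothetical gene fragment)
print(transcribe(template))    # AUGUUUAAACCGUAA  (5' -> 3')
```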
Steps in Transcription
The process of transcription involves three steps; initiation, elongation, and termination.
The synthesis begins with the recognition of the gene by RNA polymerase. It identifies and attaches to specific sequences present at the beginning of the template strand. Such sequences are called consensus sequences. They are different in eukaryotes and prokaryotes.
The recognition and binding of the consensus sequence by RNA polymerase in eukaryotes is facilitated by specific factors called transcription factors.
The RNA polymerase travels down the gene until it reaches the initiation site. Here it begins making the RNA strand complementary to the template strand. The ribonucleoside triphosphate molecules are used as precursors for adding nucleotides in the mRNA chain.
The elongation process continues until a termination sequence is reached. Here, transcription stops.
The process of elongation is facilitated by some factors in eukaryotes, called elongation factors.
Once the RNA polymerase reaches the termination sequence, it halts, and no more elongation takes place. The newly formed molecule of mRNA is released from the transcription complex in either of two ways:
- A hairpin loop is formed in mRNA that pulls the mRNA from RNA polymerase
- A protein called rho-protein uses ATP to break the bonds and release mRNA from the complex
The DNA double helix unwinds during the process of transcription generating supercoils. These supercoils are removed by special enzymes called topoisomerases. These enzymes are different in eukaryotes and prokaryotes.
Site of Synthesis
The site of mRNA synthesis is different in eukaryotes and prokaryotes. Recall that eukaryotic cells have their entire DNA in the nucleus while prokaryotes lack a nucleus in their cells. The synthesis of mRNA in eukaryotes occurs within the nuclei of the cells. On the other hand, prokaryotic mRNA is made within the cytoplasm.
The process of transcription in eukaryotes results in a collection of transcripts collectively known as heterogeneous nuclear RNA (hnRNA). It undergoes several modifications before becoming fully functional.
As mentioned earlier, eukaryotic mRNA is made within the nucleus. It must move outside the nucleus into the cytoplasm so that it can perform its function. The post-transcriptional modifications also assist in this movement of mRNA.
Addition of Cap
A cap is added to the 5’ end of pre-mRNA in this process. This cap is made up of 7-methylguanosine. It is attached to the 5’ end of mRNA via a 5’-5’ triphosphate linkage. It is an unusual linkage formed in the following steps;
- The terminal phosphate is removed from the nucleotide at the 5’ end of pre-mRNA
- A molecule of GMP (provided by GTP) is added to the 5’ end by the guanylyltransferase enzyme
- The terminal guanine is then methylated by a cytosolic enzyme that adds a methyl residue at its N-7 nitrogen
The 7-methylguanosine cap thus formed has two functions;
- It stabilizes the mRNA molecule against the 5’ exonucleases
- It allows efficient initiation of protein synthesis as eukaryotic ribosomes identify mRNA through its cap
Addition of Tail
Just like the cap at the 5’ end, a tail is added at the 3’ end of pre-mRNA. The tail is made up of 20 to 250 adenine nucleotides and is called a poly-A tail. These nucleotides are not transcribed from DNA. Rather, they are added after the process of transcription by a nuclear enzyme called polyadenylate polymerase.
This enzyme identifies a specific sequence present at the 3’ end of pre-mRNA, called the polyadenylation sequence. The enzyme cleaves the pre-mRNA downstream of this sequence and starts making the poly-A tail using ATP molecules as precursors.
This poly-A tail serves the following functions;
- Helps in the exit of mRNA from the nucleus
- Stabilizes mRNA against 3’ exonucleases
- Assists in the process of translation
Removal of Introns
The primary transcript also contains RNA sequences that do not code for proteins. These sequences are present in between the coding sequences and are called the intervening sequence or introns. The other coding sequences are called exons (expressed sequences).
The maturation of mRNA in eukaryotes involves the removal of these intervening sequences or introns by a process called splicing.
The process of splicing is carried out by complexes called small nuclear ribonucleoprotein particles (snRNPs), also called ‘snurps’. These particles are formed by small nuclear RNA (snRNA) along with multiple proteins.
Specific sequences are present at the beginning and the end of each intron called splice sites. The snRNPs bind to these splice sites and cause a transesterification reaction between the two ends. The introns are thus removed leaving behind the coding sequences of mRNA or exons.
The pre-mRNA can undergo splicing in alternative ways generating multiple variants of mRNA. This results in the synthesis of multiple protein products. This process of alternative splicing helps in the synthesis of a large and diverse set of proteins from a limited number of genes.
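Below is a toy Python sketch of this step under a strong simplifying assumption: the intron boundaries are supplied as coordinates, whereas in the cell snRNPs locate them from the splice-site sequences themselves:

```python
# Toy sketch of splicing: remove introns from a pre-mRNA given their
# (start, end) coordinates, leaving only the exons joined together.
# The coordinates below are hypothetical and purely illustrative.
def splice(pre_mrna: str, introns: list[tuple[int, int]]) -> str:
    mature = []
    pos = 0
    for start, end in sorted(introns):
        mature.append(pre_mrna[pos:start])  # keep the exon before the intron
        pos = end                           # skip over the intron
    mature.append(pre_mrna[pos:])           # keep the final exon
    return "".join(mature)

pre = "AUGGCgugaguCCUUAAGC".upper()   # lowercase marked a toy intron
print(splice(pre, [(5, 11)]))         # exons joined: AUGGCCCUUAAGC
```

Calling splice on the same pre-mRNA with different intron lists mimics alternative splicing, with each list yielding a different mature transcript.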
After these modifications, the mRNA molecule is ready to perform its role in protein synthesis.
Role in Protein Synthesis
The only purpose of making mRNA is to transmit and decode the instructions present in the gene. The mRNA molecule forwards the message in the gene to the protein-making machinery, i.e., the ribosomes.
Recall that the information for protein synthesis is present in mRNA in the form of codons that code for amino acids. During the process of protein synthesis, ribosomes bind to the mRNA molecule in such a way that one codon is exposed at a time. This codon is decoded by the anti-codon loop of tRNA. The anti-codon loop has a nucleotide sequence complementary to one specific codon. It recognizes the codon exposed on the ribosome and hybridizes with it via hydrogen bonding. The same tRNA molecule also carries an amino acid at its 3’ end. The decoding of the exposed codon results in the addition of this amino acid to the polypeptide chain.
The process continues and the entire information present in mRNA is decoded in the form of amino acids being added to the polypeptide chain. When a termination codon is reached, the decoding process stops, and the polypeptide thus formed is released from the ribosomes.
Eukaryotic and Prokaryotic mRNA
Although mRNA serves the same functions in both prokaryotes and eukaryotes, some differences do exist. Here are the differences between the mRNA of two organisms.
- The eukaryotic mRNA is made in the nucleus while prokaryotes make their mRNA in the cytoplasm
- The eukaryotic mRNA is monocistronic i.e. codes for only one protein while the prokaryotic mRNA is polycistronic i.e. codes for multiple proteins
- mRNA in eukaryotes contains a cap and a tail while they are absent in prokaryotic mRNA
- The process of transcription and translation are coupled in prokaryotes while this is not the case in eukaryotes
- Prokaryotic mRNA has a very short life while eukaryotic mRNA is more stable
- Ribosomes identify prokaryotic mRNA via the Shine-Dalgarno sequence while eukaryotic mRNA is identified by its cap
Messenger RNA carries the message present in the gene to the protein-making machinery of the cells.
It is composed of ribonucleotides that are arranged in the form of codons. One codon is made up of three nucleotides. All the codons in the molecule of mRNA are read in a series without any space, starting from its 5’ end.
Messenger RNA is made from the gene in a process called transcription. This process is carried out by RNA polymerases that identify and bind to the gene at a consensus sequence. The process of transcription includes three steps;
- Identification and binding to the gene by RNA polymerase
- Reading of the gene and making of mRNA chain by combining ribonucleotides
- Breaking of the DNA-RNA hybrid and release of mRNA from the complex
The mRNA formed as a result of transcription is a precursor mRNA (pre-mRNA). It undergoes several modifications in eukaryotes before becoming functional. These changes are as follows;
- Addition of a cap at the 5’ end so that it becomes stable and can be identified by the ribosomes
- Addition of a poly-A tail at the 3’ end to stabilize this end and assist the exit of mRNA from the nucleus
- Removal of the non-coding intervening sequences (introns) by the process of splicing
The mature mRNA is identified and bound by the ribosomes in the cytoplasm. The message in mRNA is translated into an amino acid sequence in the polypeptide chain with the assistance of transfer RNA.
Several differences exist between the messenger RNA found in eukaryotes and prokaryotes. Most of these differences are due to the absence of a nucleus in prokaryotes.
The analysis of physical phenomena and processes requires the measurement of physical quantities. A physical quantity is measured in terms of a small part of it. The small part is conventionally adopted as a unit of measurement of the quantity. We can choose such a unit for every quantity independent of other quantities. However, it is helpful to first establish the units of a few quantities which are called base or fundamental quantities. The corresponding units are called base or fundamental units. The units of the remaining physical quantities are expressed in terms of these base units. These quantities are called derived quantities and their units, derived units. To firm up your basic knowledge we have videos on Base and Derived Quantities and Their Units, Principle of Homogeneity of Dimensions, Order of Magnitude Calculation, Uses of Dimensional Analysis etc.
A quantity which is completely specified by a number with an appropriate unit is called a scalar. A scalar has only magnitude but no direction. Some scalars like volume and mass are always positive. Other scalars like temperature and electric charge can be either positive or negative. A quantity which is completely specified by a number with an appropriate unit and direction is called a vector. Thus, a vector has both magnitude and direction. Examples of vectors are velocity, acceleration, force, momentum, torque etc. We have video lectures on Addition of Vectors, Subtraction of Vectors, Resolution of Vectors, Product of Vectors and other aspects of vector with suitable examples and problems.
Mechanics can be divided into two parts: dynamics and statics. Dynamics is the study of motion of a body under one or more forces. Statics is the study of the condition of rest of a body under a number of forces. Dynamics is further divided into kinematics and kinetics. Kinematics is that part of dynamics which deals with motion without reference to the forces that cause it or the properties of the body in motion. Kinetics is that part which relates the motion of a body to its mass and the causal force(s). The subtopics covered in the present topic such as Projectile Motion, Translational Motion, Graphical Analysis of Rectilinear Motion, Rectilinear Motion with Constant Acceleration, Rectilinear Motion under Gravity etc. fall within the scope of kinematics.
Newton’s three laws of motion are fundamental to the study of kinetics. Kinetics is the study of how the motion of a body is related to its mass and the force(s) acting on it. The force represents the interaction of a body with its environment. In general, the environment consists of nearby bodies and the effect of distant bodies may be ignored. The mass of a body is a measure of its inertia, which is the tendency to resist acceleration under a force. The theory of motion was developed by English physicist Sir Isaac Newton (1642–1727) in the 17th century. We have video lectures on various subtopics such as Newton’s Laws of Motion, Concept of Force, Free Body Diagrams, Applications of Newton’s Laws, Pseudo Force, Comparison of Inertial Frame and Non-Inertial Frame etc. to provide in-depth knowledge in an extremely skillful way.
We often simplify problems by assuming that the motion of bodies takes place on “frictionless” surfaces. Strictly speaking, there is no such surface. In real life all motions happening around us are affected by the force of friction. Therefore, a realistic approach to any mechanical problem requires that we identify the frictional forces acting on the system and include them in the respective equations of motion. That is precisely what you will learn to do in the present topic. You will find video lectures on Static and Kinetic friction, Angle of Repose, Angle of Friction, Examples of Motion on Rough Surfaces, Rolling Friction, Drag Force etc. explained and supported by suitable applications and numerical examples.
In the present topic, the same motion will be discussed in greater detail. The topic deals with the kinematics of circular motion without any reference to the forces that cause it as well as the contribution of the forces acting on a particle, causing its circular motion. The study of circular motion is not only important in itself, but also an essential precondition for the study of rotational motion. When a rigid body rotates about an axis, every particle of it describes a circle whose centre lies on the axis of rotation. Therefore, the kinematic equations that we develop in this topic will also be useful to study rotational motion. This topic consists of video lectures on Angular Quantities in Circular Motion, Circular Motion with Constant Angular Acceleration, Two Accelerations of Non-Uniform Circular Motion, Problems on Circular Motion and many more to provide a thorough knowledge and to guide students to develop the problem-solving skill.
Until now we used terms like velocity, acceleration, force etc. which mean more or less the same thing to a physicist and a layman. The term work is an exception. In ordinary conversation the word may mean a wide variety of activities, but in the domain of physics, its use is far more restricted. The concept of energy is closely associated with that of work. Be it a physicist or a common man, everyone has an awareness of energy and what it truly means. We shall mostly be dealing with Mechanical Energy which is further classified into two types: Kinetic Energy and Potential Energy. Through our video lectures on Definition of Work, Work Done by a Varying Force, Concept of Energy and Derivation of Work-Energy Theorem, Equivalence of Mass and Energy, Conservative and Non-Conservative Forces, Potential Energy Function, Principle of Conservation of Mechanical Energy and Some Applications etc. we have elaborately illustrated the topic for a crystal clear understanding and application.
As the title suggests, this topic can be broadly divided into three sections. In this topic we have defined the impulse of a force, the impulse-momentum theorem, the principle of conservation of linear momentum etc. You have already been introduced to the concept of momentum in topic 4 (Newton’s Laws of Motion). But it needs more than that brief introduction to have a stronger grasp on the topic. The most useful application of the momentum principle is the study of collision between two bodies. The definition of the centre of mass of a system of particles will show that the motion of a system of particles is equivalent to the motion of one representative particle located at the centre of mass. You will develop a clear concept of the topic through the video lectures on Impulse and Impulse-Momentum Theorem, Motion of a Body of Variable Mass, Principle of Conservation of Linear Momentum, Collision in One Dimension, Collision in Two Dimensions, Finding Centres of Mass of Uniform Rigid Bodies, Motion of a System of Particles etc.
When a rigid body performs rotational motion, the individual particles follow different paths and possess different linear velocities and accelerations at any particular instant. The study of rotational motion in the present topic requires that we treat the body as an assemblage of many particles, connected firmly to one another and each moving with its own velocity and acceleration. To strengthen your concept and consolidate your problem-solving skill we offer you video lectures on Kinematics of Rotation about a Fixed Axis, Equation of Rotational Motion of a Rigid Body, Torque of a Force Acting on a Particle, Two Important Theorems on Moment of Inertia, Overturning of Vehicles at a Bend, Angular Momentum of a Particle and Its Relation with Torque and many more.
When engineers construct buildings and flyovers and designers design small items like scissors, forks, etc., they keep two things in mind. First, the conditions under which the bodies, presumed to be rigid, remain in mechanical equilibrium under the action of external forces and their torques. And second, the conditions under which the bodies continue to remain rigid under the said forces and torques. The branch of physics which studies the condition of equilibrium of a body at rest is called statics. All bodies which attain equilibrium under a set of forces are deformed to a certain extent. However, if these forces are comparatively small, the deformation is also small and the conditions of static equilibrium remain unaffected. At this point, it will be tacitly assumed that the bodies under investigation remain perfectly rigid when a set of forces and torques act on them. You will enjoy the topic while learning through the video lectures on Stability of Static Equilibrium, Various Cases of Static Equilibrium in Two Dimensions, Equilibrium of a Leaning Ladder, Centre of Gravity of a Rigid Body and many more.
We told you in topic 4 that the gravitational force is one of the three fundamental forces of nature, the other two being the electromagnetic force and the nuclear force. The nuclear force operates inside an atomic nucleus and does not make its presence felt in everyday life. The electromagnetic force is often disguised as various forms of contact force. But whether it is the orbital motion of a planet round the sun or the free fall of an apple from a tree, the effect of the gravitational force is easy for all to feel. For your easy understanding of the subject and to increase your level of command to crack and solve the variety of numerical problems, we have video lectures on Newton’s Law of Universal Gravitation, Determination of the Gravitational Constant G, Kepler’s Laws of Planetary Motion, Gravity, Gravitational Field, Gravitational Potential Energy, Gravitational Potential, Escape Speed, Natural and Artificial Satellites, Mechanical Energy of Satellite-Earth System, etc.
While studying conditions of static equilibrium under Mechanics, we conveniently assumed that the bodies under investigation remained perfectly rigid under a set of forces and torques. In reality, no solid body is perfectly rigid. So, when a system of balanced forces or couples acts on a solid body at rest, the body gets deformed. In other words, though the body does not exhibit any translational or rotational motion as a whole, different parts within it change their relative positions with respect to each other. A light wire, attached to the ceiling and holding a load at its free end, stretches in length. A book lying on a table, subjected to a tangential force on the top cover, changes in shape. A metal sphere taken to the depths of the sea shrinks in volume, albeit by a tiny fraction. The common word for any such change is deformation. The property by virtue of which a body resists any change in its size, shape or both and tends to regain its configuration on withdrawal of the deforming forces is known as the elasticity of the body. To impart in-depth knowledge on the theory of elasticity and to develop skill in solving problems based on this topic, we offer a series of video lectures titled Internal Forces and Stress, Strain, Hooke's Law and Young's Modulus, Shear Modulus, Bulk Modulus, Stress Versus Strain Graph, Elastic Potential Energy of a Deformed Body, and so on.
Matter can be classified into three types: solids, liquids and gases. A solid can withstand shear stress; it has definite volume and shape. Liquids and gases cannot withstand static shear stress and begin to flow under it; hence they are collectively referred to as fluids. None of the fluids has any definite shape of its own and eventually takes the shape of the vessel in which it is kept. While a liquid occupies a definite volume almost unaffected even by very high pressure, a gas can be compressed easily. Because of these distinctive features, we can tell a solid from a fluid in most cases. But there are exceptions such as asphalt. It looks so much like a solid but, in reality, it is a fluid that flows very, very slowly. A single substance may remain in any one of the three states under varying physical conditions. As you all know, if the substance is H2O, the states are named ice (solid), water (liquid) and water vapour (gas). The mechanics of fluids is governed by a number of physical principles which are based on Newton's laws of motion and other force laws. Fluid statics is that part of Fluid Mechanics which discusses fluids at rest. We shall impart a comprehensive knowledge on this topic through a series of video lectures titled Pressure at a Point, Variation of Pressure in a Static Fluid, Measurement of Pressure, Pascal's Principle, Buoyancy and Archimedes' Principle, Equilibrium of a Floating Body, and many more.
A piece of camphor dances on the surface of water without any obvious provocation. A water spider can skate on a pond without wetting its legs. A container with a small hole at the bottom can manage to hold mercury. Great effort is required to separate two flat glass plates if there is a thin layer of water between them. When a narrow glass tube open at both ends is dipped into water, water rises in the tube. All these events can be explained by the fluid property called surface tension. Surface tension is a molecular phenomenon which occurs at the surface of separation between two phases such as a liquid and a solid, a liquid and a gas, or a solid and a gas. We shall teach you the basics of surface tension first, and then follow up with a large number of problems at both simple and challenging levels. Our video lectures on this topic are titled Theory of Surface Tension, Angle of Contact and Shape of Meniscus, Excess Pressure within Liquid Drop and Soap Bubble, Force between Two Plates Separated by a Liquid Film, Rise or Fall of a Liquid in a Capillary Tube, and much more.
While the motion of rigid bodies is rather uninteresting, the motion of fluids observed in nature can indeed be pleasing to the eye. The flow of a gurgling stream, the eruption of molten lava, the swirl of hot gases from a burning tinder, ocean waves – all these and more are the subject of fluid dynamics. While each particle of the fluid follows Newton's laws of motion, we find it convenient to describe the properties of the fluid at each point on its path as a function of time. The motion of a real fluid is complex and not yet fully understood. Therefore, we often make matters simple by assuming an ideal fluid which is non-viscous and incompressible. Further, the flow of the ideal fluid is assumed to be steady and irrotational. However, in the latter part of the topic, we discuss viscosity and steady flow of a viscous fluid. The students will acquire a commanding grip on the topic and will learn to solve a variety of problems – simple to moderate to difficult – through a series of video lectures titled Equation of Continuity, Bernoulli's Equation, Some Applications of Bernoulli's Equation, Viscosity, Poiseuille's Law, Critical Speed and Reynolds Number, Motion of a Solid Body in a Viscous Fluid, et cetera.
We are confident of getting your support and appreciation. That will inspire us to add new features to our website, such as question & problem banks, doubt removal sessions etc. Please let us know your feedback.
Hearty welcome to Physicsacademyonline.in / PhysicsAcademyOnline.com / PhysicsAcademyOnline.org / PhysicsAcademyOnline.info / PhysicsAcademyOnline.net.
- Parity (mathematics)
In mathematics, the parity of an object states whether it is even or odd.
This concept begins with integers. An even number is an integer that is "evenly divisible" by 2, i.e., divisible by 2 without remainder; an odd number is an integer that is not evenly divisible by 2. (The old-fashioned term "evenly divisible" is now almost always shortened to "divisible".) A formal definition of an even number is that it is an integer of the form n = 2k, where k is an integer; it can then be shown that an odd number is an integer of the form n = 2k + 1.
Examples of even numbers are −4, 8, and 1728. Examples of odd numbers are −5, 9, 3, and 71. This classification only applies to integers, i.e., a fractional number like 1/2 or 4.201 is neither even nor odd.
The sets of even and odd numbers can be defined as follows:
- Even = {2k : k ∈ ℤ}
- Odd = {2k + 1 : k ∈ ℤ}
A number (i.e., integer) expressed in the decimal numeral system is even or odd according to whether its last digit is even or odd. That is, if the last digit is 1, 3, 5, 7, or 9, then it's odd; otherwise it's even. The same idea will work using any even base. In particular, a number expressed in the binary numeral system is odd if its last digit is 1 and even if its last digit is 0. In an odd base, the number is even according to the sum of its digits – it is even if and only if the sum of its digits is even.
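Both digit rules agree with the ordinary remainder test, as this small Python check illustrates:

```python
# A number is odd iff n % 2 == 1; this matches both the last-decimal-digit
# rule and the last-binary-digit rule described above.
def is_odd(n: int) -> bool:
    # In Python, (-5) % 2 == 1, so the test also works for negative integers.
    return n % 2 == 1

for n in [-4, 8, 1728, -5, 9, 71]:
    last_decimal = abs(n) % 10       # last digit in base 10
    last_binary = bin(abs(n))[-1]    # last digit in base 2
    print(n, is_odd(n), last_decimal, last_binary)
```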
Arithmetic on even and odd numbers
The following laws can be verified using the properties of divisibility. They are a special case of rules in modular arithmetic, and are commonly used to check if an equality is likely to be correct by testing the parity of each side. As with ordinary arithmetic, multiplication and addition are commutative and associative, and multiplication is distributive over addition. However, subtraction in parity is identical to addition, so subtraction also possesses these properties (which are absent from ordinary arithmetic).
Addition and subtraction
- even ± even = even;
- even ± odd = odd;
- odd ± odd = even.
Rules analogous to these for divisibility by 9 are used in the method of casting out nines.
- even × even = even;
- even × odd = even;
- odd × odd = odd.
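These rules are simply arithmetic modulo 2, with 0 standing for even and 1 for odd. The following sketch verifies the addition, subtraction, and multiplication rules by brute force over a small sample of integers:

```python
# Verify the parity rules exhaustively on a sample of integers.
# The parity of n is n % 2 (0 = even, 1 = odd); note that (-5) % 2 == 1 in Python.
sample = range(-10, 11)
for a in sample:
    for b in sample:
        assert (a + b) % 2 == (a % 2 + b % 2) % 2   # addition rules
        assert (a - b) % 2 == (a % 2 + b % 2) % 2   # subtraction behaves like addition
        assert (a * b) % 2 == (a % 2) * (b % 2)     # multiplication rules
print("all parity rules hold on the sample")
```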
The division of two whole numbers does not necessarily result in a whole number. For example, 1 divided by 4 equals 1/4, which is neither even nor odd, since the concepts even and odd apply only to integers. But when the quotient is an integer, it will be even if and only if the dividend has more factors of two than the divisor.
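The criterion in the last sentence can be checked directly by counting factors of 2 (the 2-adic valuation). A minimal sketch, with helper names chosen here for illustration:

```python
def two_valuation(n):
    """Number of factors of 2 in the nonzero integer n."""
    n = abs(n)
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

def quotient_parity(dividend, divisor):
    """Parity of dividend / divisor, when that quotient is a (nonzero) integer."""
    q, r = divmod(dividend, divisor)
    if r != 0:
        return "not an integer"
    # The quotient is even iff the dividend carries more factors of 2 than the divisor.
    return "even" if two_valuation(dividend) > two_valuation(divisor) else "odd"

print(quotient_parity(24, 3))   # even: 24 has three factors of 2, 3 has none
print(quotient_parity(24, 8))   # odd: three factors of 2 on each side
```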
The ancient Greeks considered 1 to be neither fully odd nor fully even. Some of this sentiment survived into the 19th century: Friedrich Wilhelm August Fröbel's 1826 The Education of Man instructs the teacher to drill students with the claim that 1 is neither even nor odd, to which Fröbel attaches the philosophical afterthought: "It is well to direct the pupil's attention here at once to a great far-reaching law of nature and of thought. It is this, that between two relatively different things or ideas there stands always a third, in a sort of balance, seeming to unite the two. Thus, there is here between odd and even numbers one number (one) which is neither of the two. Similarly, in form, the right angle stands between the acute and obtuse angles; and in language, the semi-vowels or aspirants between the mutes and vowels. A thoughtful teacher and a pupil taught to think for himself can scarcely help noticing this and other important laws."
In wind instruments with a cylindrical bore and in effect closed at one end, such as the clarinet at the mouthpiece, the harmonics produced are odd multiples of the fundamental frequency. (With cylindrical pipes open at both ends, used for example in some organ stops such as the open diapason, the harmonics are even multiples of the same frequency for the given bore length, but this has the effect of the fundamental frequency being doubled and all multiples of this fundamental frequency being produced.) See harmonic series (music).
The even numbers form an ideal in the ring of integers, but the odd numbers do not — this is clear from the fact that the identity element for addition, zero, is an element of the even numbers only. An integer is even if it is congruent to 0 modulo this ideal, in other words if it is congruent to 0 modulo 2, and odd if it is congruent to 1 modulo 2.
The squares of all even numbers are even, and the squares of all odd numbers are odd. Since an even number can be expressed as 2x, (2x)² = 4x², which is even. Since an odd number can be expressed as 2x + 1, (2x + 1)² = 4x² + 4x + 1; 4x² and 4x are even, which means that 4x² + 4x + 1 is odd (since even + odd = odd).
Goldbach's conjecture states that every even integer greater than 2 can be represented as a sum of two prime numbers. Modern computer calculations have shown this conjecture to be true for integers up to at least 4 × 10¹⁴, but still no general proof has been found.
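Although no proof is known, the conjecture is easy to test for small cases. A brute-force sketch (the trial-division primality test here is adequate only for small numbers):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return one pair of primes summing to the even number n > 2, if any."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 31, 2):
    print(n, "=", "%d + %d" % goldbach_pair(n))
```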
The Feit–Thompson theorem states that a finite group is always solvable if its order is an odd number. This is an example of odd numbers playing a role in an advanced mathematical theorem where the method of application of the simple hypothesis of "odd order" is far from obvious.
Parity for other objects
Parity is also used to refer to a number of other properties.
- The parity of a permutation (as defined in abstract algebra) is the parity of the number of transpositions into which the permutation can be decomposed. For example, (ABC) to (BCA) is even because it can be done by swapping A and B and then C and A (two transpositions). It can be shown that no permutation can be decomposed into both an even and an odd number of transpositions, so this definition is sound; a minimal code sketch for computing this parity appears after this list. In Rubik's Cube, Megaminx, and other twisty puzzles, the moves of the puzzle allow only even permutations of the puzzle pieces, so parity is important in understanding the configuration space of these puzzles.
- The parity of a function describes how its values change when its arguments are exchanged with their negations. An even function, such as an even power of a variable, gives the same result for any argument as for its negation. An odd function, such as an odd power of a variable, gives for any argument the negation of its result when given the negation of that argument. A function may be neither odd nor even, and the function f(x) = 0 is both odd and even.
- Integer coordinates of points in Euclidean spaces of two or more dimensions also have a parity, usually defined as the parity of the sum of the coordinates. For instance, the checkerboard lattice contains all integer points of even parity. This feature manifests itself in chess, as bishops are constrained to squares of the same parity; knights alternate parity between moves. This form of parity was famously used to solve the Mutilated chessboard problem.
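Here is the permutation-parity sketch promised above. It takes a permutation in one-line notation on 0, …, n − 1 and decomposes it into cycles; a cycle of length L contributes L − 1 transpositions:

```python
def permutation_parity(perm):
    """Return 'even' or 'odd' for a permutation given as a tuple of 0..n-1,
    by counting the transpositions in its cycle decomposition."""
    n = len(perm)
    seen = [False] * n
    transpositions = 0
    for start in range(n):
        if seen[start]:
            continue
        # Walk one cycle; a cycle of length L decomposes into L - 1 transpositions.
        length = 0
        i = start
        while not seen[i]:
            seen[i] = True
            i = perm[i]
            length += 1
        transpositions += length - 1
    return "even" if transpositions % 2 == 0 else "odd"

# (ABC) -> (BCA): position 0 receives B(=1), 1 receives C(=2), 2 receives A(=0),
# i.e. the permutation (1, 2, 0) -- a 3-cycle, hence two transpositions.
print(permutation_parity((1, 2, 0)))   # even
print(permutation_parity((1, 0, 2)))   # odd (a single swap)
```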