Inflation is the rate at which the general level of prices for goods and services rises and, consequently, the purchasing power of currency falls.
Let us look at an example of inflation:
Assume a kilo of apples and a kilo of oranges cost 100 and 50 respectively in 2010.
By 2015, the price of apples has risen to 110 and the price of oranges to 55. Both fruits show a 10% increase. This general rise in the prices of goods and services is termed inflation.
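To make the arithmetic explicit, here is a minimal Python sketch of the calculation (the prices are the hypothetical ones from the example above, and the function name is our own illustration):

```python
def inflation_rate(old_price: float, new_price: float) -> float:
    """Percentage change in price between two periods."""
    return (new_price - old_price) / old_price * 100

# Hypothetical prices from the apple/orange example above.
prices_2010 = {"apple": 100, "orange": 50}
prices_2015 = {"apple": 110, "orange": 55}

for fruit in prices_2010:
    rate = inflation_rate(prices_2010[fruit], prices_2015[fruit])
    print(f"{fruit}: {rate:.1f}% rise")  # both items show a 10.0% rise
```

In practice, official inflation figures are computed over a weighted basket of many goods and services (a price index), not a single item, but the per-item percentage change above is the basic building block.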
So what does it mean if there is a price rise in only some commodities?
When a price rise occurs in only some commodities, that price rise is termed skewflation.
For example, one can often observe a trend in the Indian economy where food prices increase while other prices generally decrease. When food prices alone rise in comparison with other products, that is skewflation.
There are two types of inflation:
1. Cost-push inflation
2. Demand-pull inflation
1. Cost-Push Inflation:
Cost-push inflation occurs when there is an increase in the price of inputs such as land, labour, and raw materials.
Increase in input prices ==> decrease in the supply of goods. This is a concern because demand for the product remains constant. (See the example below.)
When does cost-push inflation generally occur? Among other causes:
1. Depletion of natural resources
2. Exchange rate changes, etc.
Let us look at an example:
Assume India has some oil reserves and that oil demand in India is very high. Once those reserves are depleted, raw material and input costs will rise, because India has to import oil from other countries. Since demand is already high, the government has no choice but to import oil from the international market. This situation leads to cost-push inflation.
2. Demand-Pull Inflation:
Demand-pull inflation exists when demand for goods and services outstrips supply. It starts with an increase in consumer demand. Sellers meet that demand by increasing supply, but when additional supply is not available, they raise the prices of their products. This creates demand-pull inflation.
Increase in consumer demand ==> sellers increase supply ==> no additional supply available ==> sellers raise prices ==> demand-pull inflation.
When does demand-pull inflation generally occur?
1. When salaries increase, consumer demand increases.
2. When government expenditure increases, investment increases and thus demand also increases.
3. When there is an imbalance between demand and supply.
4. When the consumption expenditure of people increases, demand also increases.
5. When there is excessive monetary growth, i.e. too much money chasing too few goods.
6. When the exchange rate changes: depreciation of the currency causes exports to grow while imports become more expensive, which also raises demand in some of the sectors where the country depends on imports.
Are you curious about what XRF analysis is and how it works? Dive into this comprehensive guide to understand the fundamentals of XRF analysis, its applications, and why it’s crucial in various industries.
What is XRF Analysis?
XRF, or X-ray fluorescence, is a non-destructive analytical technique used to determine the elemental composition of materials. XRF analysis relies on the principle that individual atoms, when excited by an external energy source, emit X-ray photons of a characteristic energy or wavelength.
The Science Behind XRF Analysis
When a material is bombarded with high-energy X-rays or gamma rays, the atoms in the material absorb this energy. This excitation process pushes an electron from the inner shell to an excited state. When the electron returns to its original state, it emits energy in the form of a secondary X-ray, known as fluorescence. The energy of this fluorescence is unique to the specific element, enabling the identification of the elemental composition of the sample.
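The link between fluorescence energy and element identity can be sketched with Moseley's law. The Python snippet below is only an illustration: it uses the rough textbook approximation E(K-alpha) ≈ 10.2 eV × (Z − 1)² for a handful of mid-range elements, whereas real XRF software matches peaks against tabulated line energies (including L and M lines).

```python
# Illustrative only: Moseley's-law approximation for K-alpha line energies.
# Real instruments rely on tabulated line energies, not this formula.
ELEMENTS = {26: "Fe", 28: "Ni", 29: "Cu", 30: "Zn"}

def k_alpha_kev(z: int) -> float:
    """Approximate K-alpha fluorescence energy in keV for atomic number z."""
    return 10.2e-3 * (z - 1) ** 2

def identify(peak_kev: float) -> str:
    """Match a measured peak to the candidate element with the nearest line."""
    z = min(ELEMENTS, key=lambda zz: abs(k_alpha_kev(zz) - peak_kev))
    return ELEMENTS[z]

print(identify(8.05))  # -> Cu; copper's K-alpha line sits near 8.05 keV
```

Because each element's characteristic lines sit at fixed, well-known energies, the peaks in a measured spectrum map directly onto the elements present in the sample.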
Applications of XRF Analysis
XRF analysis is widely used in various industries due to its non-destructive nature and its ability to measure a broad range of elements. Here are a few examples:
1. Environmental Science: XRF is used to analyze soil, sediment, and air filters to determine pollution levels and trace the source of contamination.
2. Archaeology: This technique helps identify the composition of artifacts, revealing vital information about ancient civilizations.
3. Quality Control: In manufacturing industries, XRF ensures the quality of raw materials and finished products.
4. Mining: XRF analysis helps in exploration by identifying the elements present in the ore.
Benefits of XRF Analysis
One of the main advantages of XRF analysis is that it’s non-destructive, meaning the sample remains intact after testing. This feature is particularly beneficial in industries like archaeology, where preserving the sample is crucial.
Furthermore, XRF can analyze a broad range of elements on the periodic table, from beryllium to uranium. It is also capable of quantifying an element in a sample from 100% down to parts-per-million (ppm) levels.
In a nutshell, XRF analysis is a powerful tool in the world of analytical chemistry. Its ability to determine the elemental composition of a sample without causing any damage makes it indispensable in various fields. As technology advances, we can only expect the capabilities and applications of XRF analysis to continue to expand.
Check more about: Fusion machines for XRF
The Dissolution of the Monasteries, occasionally referred to as the Suppression of the Monasteries, was the set of administrative and legal processes between 1536 and 1541 by which Henry VIII disbanded monasteries, priories, convents and friaries in England, Wales and Ireland, expropriated their income, disposed of their assets, and provided for their former personnel and functions. Although the policy was originally envisaged as increasing the regular income of the Crown, much former monastic property was sold off to fund Henry's military campaigns in the 1540s. He was given the authority to do this in England and Wales by the Act of Supremacy, passed by Parliament in 1534, which made him Supreme Head of the Church in England, thus separating England from papal authority, and by the First Suppression Act (1535) and the Second Suppression Act (1539).
Professor George W. Bernard argues:
The dissolution of the monasteries in the late 1530s was one of the most revolutionary events in English history. There were nearly 900 religious houses in England, around 260 for monks, 300 for regular canons, 142 nunneries and 183 friaries; some 12,000 people in total, 4,000 monks, 3,000 canons, 3,000 friars and 2,000 nuns. If the adult male population was 500,000, that meant that one adult man in fifty was in religious orders.
At the time of their suppression, a small number of English and Welsh religious houses could trace their origins to Anglo-Saxon or Celtic foundations before the Norman Conquest, but the overwhelming majority of the 625 monastic communities dissolved by Henry VIII had developed in the wave of monastic enthusiasm that had swept western Christendom in the 11th and 12th centuries. Few English houses had been founded later than the end of the 13th century; the most recent foundation of those suppressed was the Bridgettine nunnery of Syon Abbey founded in 1415. (Syon was also the only suppressed community to maintain an unbroken continuity in exile; its nuns returned to England in 1861.)
Typically, 11th- and 12th-century founders had endowed monastic houses with both 'temporal' income in the form of revenues from landed estates, and 'spiritual' income in the form of tithes appropriated from parish churches under the founder's patronage. In consequence of this, religious houses in the 16th century controlled appointment to about two-fifths of all parish benefices in England, disposed of about half of all ecclesiastical income, and owned around a quarter of the nation's landed wealth. An English medieval proverb said that if the abbot of Glastonbury married the abbess of Shaftesbury, their heir would have more land than the king of England.
The 200 houses of friars in England and Wales constituted a second distinct wave of foundations, almost all occurring in the 13th century. Friaries, for the most part, were concentrated in urban areas. Unlike monasteries, friaries had eschewed income-bearing endowments; the friars, as mendicants, expected to be supported financially by offerings and donations from the faithful, while ideally being self-sufficient in producing their own basic foods from extensive urban kitchen gardens.
The Dissolution of the Monasteries in England and Ireland took place in the political context of other attacks on the ecclesiastical institutions of Western Roman Catholicism, which had been under way for some time. Many of these were related to the Protestant Reformation in Continental Europe. By the end of the 16th century, monasticism had almost entirely disappeared from those European states whose rulers had adopted Lutheran or Reformed confessions of faith (Ireland being the only major exception). They continued in those states that remained Catholic, and new community orders such as the Jesuits and Capuchins emerged alongside the older orders.
But, the religious and political changes in England under Henry VIII and Edward VI were of a different nature from those taking place in Germany, Bohemia, France, Scotland and Geneva. Across much of continental Europe, the seizure of monastic property was associated with mass discontent among the common people and the lower level of clergy and civil society against powerful and wealthy ecclesiastical institutions. Such popular hostility against the church was rare in England before 1558; the Reformation in England and Ireland was directed from the king and highest levels of society. These changes were initially met with widespread popular suspicion; on some occasions and in particular localities, there was active resistance to the royal program.
Dissatisfaction with the general state of regular religious life, and with the gross extent of monastic wealth, was near to universal amongst late medieval secular and ecclesiastical rulers in the Latin West. Bernard says there was:
widespread concern in the later 15th and early 16th centuries about the condition of the monasteries. A leading figure here is the scholar and theologian Desiderius Erasmus who satirized monasteries as lax, as comfortably worldly, as wasteful of scarce resources, and as superstitious; he also thought it would be better if monks were brought more directly under the authority of bishops. At that time, quite a few bishops across Europe had come to believe that resources expensively deployed on an unceasing round of services by men and women in theory set apart from the world [would] be better spent on endowing grammar schools and university colleges to train men who would then serve the laity as parish priests, and on reforming the antiquated structures of over-large dioceses such as that of Lincoln. Pastoral care was seen as much more important and vital than the monastic focus on contemplation, prayer and performance of the daily office.
Erasmus had made a threefold criticism of the monks and nuns of his day, saying that:
- in withdrawing from the world into their own communal life, they elevated man-made monastic vows of poverty, chastity and obedience above the God-given vows of sacramental baptism; and elevated man-made monastic rules for religious life above the God-given teachings of the Gospels;
- notwithstanding exceptional communities of genuine austere life and exemplary charity, the overwhelming majority of abbeys and priories were havens for idle drones; concerned only for their own existence, reserving for themselves an excessive share of the commonwealth's religious assets, and contributing little or nothing to the spiritual needs of ordinary people; and
- the monasteries, almost without exception, were deeply involved in promoting and profiting from the veneration of relics, in the form of pilgrimages and purported miraculous tokens. The cult of relics was by no means specific to monasteries, but Erasmus was scandalised by the extent to which well-educated and highly regarded monks and nuns would participate in the perpetration of what he considered to be frauds against gullible and credulous lay believers.
The verdict of unprejudiced historians at the present day would probably be—abstracting from all ideological considerations for or against monasticism—that there were far too many religious houses in existence in view of the widespread decline of the fervent monastic vocation, and that in every country the monks possessed too much of wealth and of the sources of production both for their own well-being and for the material good of the economy.
Pilgrimages to monastic shrines continued to be widely popular, until forcibly suppressed in England in 1538 by order of Henry VIII. But the dissolution resulted in few modifications to the practice of religion in England's parish churches; in general the English religious reforms of the 1530s corresponded in few respects to the precepts of Protestant Reformers, and encountered much popular hostility when they did. In 1536, Convocation adopted and Parliament enacted the Ten Articles of which the first half used terminology and ideas drawn from Luther and Melanchthon; but any momentum towards Protestantism stalled when Henry VIII expressed his desire for continued orthodoxy with the Six Articles of 1539, which remained in effect until after his death.
Cardinal Wolsey had obtained a Papal Bull authorising some limited reforms in the English Church as early as 1518, but reformers (both conservative and radical) had become increasingly frustrated at their lack of progress. Henry wanted to change this, and in November 1529 Parliament passed Acts reforming apparent abuses in the English Church. They set a cap on fees, both for the probate of wills and mortuary expenses for burial in hallowed ground; tightened regulations covering rights of sanctuary for criminals; and reduced to two the number of church benefices that could in the future be held by one man. These Acts sought to demonstrate that establishing royal jurisdiction over the Church would ensure progress in "religious reformation" where papal authority had been insufficient.
The monasteries were next in line. J.J. Scarisbrick remarked in his biography of Henry VIII:
Suffice it to say that English monasticism was a huge and urgent problem; that radical action, though of precisely what kind was another matter, was both necessary and inevitable, and that a purge of the religious orders was probably regarded as the most obvious task of the new regime—as the first function of a Supreme Head empowered by statute "to visit, extirp and redress".
The stories of monastic impropriety, vice, and excess that were to be collected by Thomas Cromwell's visitors to the monasteries may have been biased and exaggerated. But the religious houses of England and Wales—with the notable exceptions of those of the Carthusians, the Observant Franciscans, and the Bridgettine nuns and monks—had long ceased to play a leading role in the spiritual life of the country. Other than in these three orders, observance of strict monastic rules was partial at best. The exceptional spiritual discipline of the Carthusian, Observant Franciscan and Bridgettine orders had, over the previous century, resulted in their being singled out for royal favour, in particular with houses benefitting from endowments confiscated by the Crown from the suppressed alien priories.
Otherwise in this later period, donations and legacies had tended to go instead towards parish churches, university colleges, grammar schools and collegiate churches, which suggests greater public approbation of such purposes. Levels of monastic debt were increasing, and average numbers of professed religious were falling, although the monasteries continued to attract recruits right up to the end. Only a few monks and nuns lived in conspicuous luxury, but most were very comfortably fed and housed by the standards of the time, and few any longer set standards of ascetic piety or religious observance. Only a minority of houses could now support the twelve or thirteen professed religious usually regarded as the minimum necessary to maintain the full canonical hours of the Divine Office. Even in houses with adequate numbers, the regular obligations of communal eating and shared living had not been fully enforced for centuries, as communities tended to sub-divide into a number of distinct familiae. In most larger houses, the full observance of the Canonical Hours had become the task of a sub-group of 'Cloister Monks', such that the majority of the professed members of the house were freed to conduct their business and live much of their lives in the secular world. Extensive monastic complexes dominated English towns of any size, but most were less than half full.
From 1534 onwards, Cromwell and King Henry were constantly seeking ways to redirect ecclesiastical income to the benefit of the Crown—efforts they justified by contending that much ecclesiastical revenue had been improperly diverted from royal resources in the first place. Renaissance princes throughout Europe were facing severe financial difficulties due to sharply rising expenditures, especially to pay for armies, fighting ships and fortifications. Most tended, sooner or later, to resort to plundering monastic wealth, and to increasing taxation on the clergy. Protestant princes would justify this by claiming divine authority; Catholic princes would obtain the agreement and connivance of the papacy. Monastic wealth, regarded everywhere as excessive and idle, offered a standing temptation for cash-strapped secular and ecclesiastical authorities.
In consequence, almost all official action in respect of the Dissolution in England and Wales was directed at the monasteries and monastic property. The closing of the monasteries aroused popular opposition, but recalcitrant monasteries and abbots became the targets of royal hostility. The surrender of the friaries, from an official perspective, arose almost as an afterthought, as an exercise in administrative tidiness once it had been determined that all religious houses would have to go. In terms of popular esteem, however, the balance tilted the other way. Almost all monasteries supported themselves from their endowments; in late medieval terms 'they lived off their own'. Unless they were notably bad landlords or scandalously neglected those parish churches in their charge, they tended to enjoy widespread local support; particularly as they commonly appointed local notables to fee-bearing offices. The friars, not being self-supporting, were by contrast much more likely to have been the objects of local hostility, especially since their practice of soliciting income through legacies appears often to have been perceived as diminishing anticipated family inheritances.
English precedents
By the time Henry VIII turned his mind to the business of monastery reform, royal action to suppress religious houses had a history of more than 200 years. The first case was that of the so-called 'Alien Priories'. As a result of the Norman Conquest, some French religious orders held substantial property through their daughter monasteries in England.
Some of these were merely granges, agricultural estates with a single foreign monk in residence to supervise things; others were rich foundations in their own right (e.g., Lewes Priory, a daughter house of Cluny, which answered to the abbot of that great French house).
Owing to the fairly constant state of war between England and France in the Late Middle Ages, successive English governments had objected to money going overseas to France from these Alien Priories, as the hostile French king might get hold of it. They also objected to foreign prelates having jurisdiction over English monasteries.
Furthermore, after 1378, French monasteries (and hence alien priories dependent on them) maintained allegiance to the continuing Avignon Papacy. Their suppression was supported by the rival Roman Popes, conditional on all confiscated monastic property eventually being redirected into other religious uses. The king's officers first sequestrated the assets of the Alien Priories in 1295–1303 under Edward I, and the same thing happened repeatedly for long periods over the course of the 14th century, most particularly in the reign of Edward III.
Those Alien Priories that had functioning communities were forced to pay large sums to the king, while those that were mere estates were confiscated and run by royal officers, the proceeds going to the king's pocket. Such estates were a valuable source of income for the Crown in its French wars. Most of the larger Alien Priories were allowed to become naturalised (for instance Castle Acre Priory), on payment of heavy fines and bribes, but for around ninety smaller houses and granges, their fates were sealed when Henry V dissolved them by act of Parliament in 1414.
The properties were taken over by the Crown; some were kept, some were subsequently given or sold to Henry's supporters, others were assigned to his new monasteries of Syon Abbey and the Carthusians at Sheen Priory; others were used for educational purposes. All these suppressions enjoyed Papal approval. But successive 15th-century popes continued to press for assurances that, now that the Avignon Papacy had been defeated, the confiscated monastic income would revert to religious and educational uses.
The medieval understanding of religious houses as institutions associated monasteries and nunneries with their property; that is to say, their endowments of land and spiritual income, and not with their current personnel of monks and nuns. If the property with which a house had been endowed by its founder were to be confiscated or surrendered, then the house ceased to exist, whether its members continued in the religious life or not. Consequently, the founder, and their heirs, had a continuing (and legally enforceable) interest in certain aspects of the house's functioning; their nomination was required at the election of an abbot or prior, they could claim hospitality within the house when needed, and they could be buried within the house when they died. In addition, though this scarcely ever happened, the endowments of the house would revert to the founder's heirs if the community failed or dissolved. The status of 'founder' was considered in civil law to be real property; and could consequently be bought and sold, in which case the purchaser would be termed the patron. Furthermore, like any other real property, in intestacy and some other circumstances the status of 'founder' would revert to the Crown; a procedure that many houses actively sought, as it might be advantageous in their legal dealings in the King's courts.
The founders of the Alien Priories had been foreign monasteries refusing allegiance to the English Crown. These property rights were therefore automatically forfeited to the Crown when their English dependencies were dissolved by Act of Parliament. But the example created by these events prompted questions as to what action might be taken should houses of English foundation cease for any reason to exist. Much would depend on who, at the time the house ended, held the status of founder or patron; and, as with other such disputes in real property, the standard procedure was to empanel a jury to decide between disputing claimants. In practice, the Crown claimed the status of 'founder' in all such cases that occurred. Consequently, when a monastic community failed (e.g. through the death of most of its members, or through insolvency), the bishop would seek to obtain Papal approval for alternative use of the house's endowments in canon law. This, with royal agreement claiming 'foundership', would be presented to an 'empanelled jury' for consent to disposal of the property of the house in civil law.
The royal transfer of alien monastic estates to educational foundations inspired the bishops and, as the 15th century waned, they advocated more such dissolutions, which became increasingly common. The subjects of these dissolutions were usually small, poor, and indebted Benedictine or Augustinian communities (especially those of women) with few powerful friends; the great abbeys and orders exempt from diocesan supervision, such as the Cistercians, were unaffected.
The consequent new foundations were most often Oxford University and Cambridge University colleges: instances of this include John Alcock, Bishop of Ely dissolving the Benedictine St Radegund's Priory, Cambridge to found Jesus College, Cambridge (1496), and William Waynflete, Bishop of Winchester acquiring Selborne Priory in Hampshire in 1484 for Magdalen College, Oxford.
In the following century, Lady Margaret Beaufort obtained the property of Creake Abbey (whose religious had all died of the Black Death in 1506) to fund her works at Oxford and Cambridge. She was advised in this action by the staunch traditionalist John Fisher, Bishop of Rochester.
In 1522, Fisher himself dissolved the women's monasteries of Bromhall and Higham to aid St John's College, Cambridge. That same year Cardinal Wolsey dissolved St Frideswide's Priory (now Oxford Cathedral) to form the basis of his Christ Church, Oxford; in 1524, he secured a papal bull to dissolve some twenty other monasteries to provide an endowment for his new college. In all these suppressions, the remaining friars, monks and nuns were absorbed into other houses of their respective orders. Juries found the property of the house to have reverted to the Crown as founder.
The conventional wisdom of the time was that the proper daily observance of the Divine Office of prayer required a minimum of twelve professed religious, but by the 1530s only a minority of religious houses in England could provide this. Most observers were agreed that a systematic reform of the English church must necessarily involve the drastic concentration of monks and nuns into fewer, larger houses, potentially making much monastic income available for more productive religious, educational and social purposes.
But this apparent consensus often faced strong resistance in practice. Members of religious houses proposed for dissolution might resist relocation; the houses invited to receive them might refuse to co-operate; and local notables might resist the disruption in their networks of influence. Moreover, reforming bishops found they faced intractable opposition when urging the heads of religious houses to enforce rigorous observation of their monastic rules; especially in respect of requiring monks and nuns to remain within their cloisters. Monks and nuns in almost all late medieval English religious communities, although theoretically living in religious poverty, were nevertheless paid an annual cash wage (peculium) and were in receipt of other regular cash rewards and pittances; which accorded considerable effective freedom from claustral rules for those disinclined to be restricted by them. Religious superiors met their bishops' pressure with the response that the austere and cloistered ideal was no longer acceptable to more than a tiny minority of regular clergy, and that any attempt on their part to enforce their order's stricter rules could be overturned in counter-actions in the secular courts, were aggrieved monks and nuns to obtain a writ of praemunire.
The King actively supported Wolsey, Fisher and Richard Foxe in their programmes of monastic reform; but even so, progress was painfully slow, especially where religious orders had been exempted from episcopal oversight by Papal authority. Moreover, it was by no means certain that juries would always find in favour of the Crown in disposing of the property of dissolved houses; any action that impinged on monasteries with substantial assets might be expected to be contested by a range of influential claimants. In 1532, the priory of Christchurch Aldgate, facing financial and legal difficulties, petitioned the King as founder for assistance, only to find themselves dissolved willy-nilly. Rather than risk empanelling a jury, and with Papal participation at this juncture no longer being welcome, the Lord Chancellor, Thomas Audley recommended that dissolution should be legalised retrospectively through a special act of Parliament.
While these transactions were going on in England, elsewhere in Europe events were taking place which presaged a storm. In 1521, Martin Luther had published De votis monasticis (On the monastic vows), a treatise which declared that the monastic life had no scriptural basis, was pointless and also actively immoral in that it was not compatible with the true spirit of Christianity. Luther also declared that monastic vows were meaningless and that no one should feel bound by them. Luther, a one-time Augustinian friar, found some comfort when these views had a dramatic effect: a special meeting of the German province of his order held the same year accepted them and voted that henceforth every member of the regular clergy should be free to renounce their vows, resign their offices and marry. At Luther's home monastery in Wittenberg all the friars, save one, did so.
News of these events did not take long to spread among Protestant-minded rulers across Europe, and some, particularly in Scandinavia, moved very quickly. In the Riksdag of Västerås in 1527, initiating the Reformation in Sweden, King Gustavus Vasa secured an edict of the Diet allowing him to confiscate any monastic lands he deemed necessary to increase royal revenues, and to allow the return of donated properties to the descendants of those who had donated them, should they wish to retract them. By the ensuing Reduction of Gustav I of Sweden, Gustav gained large estates, as well as loyal supporters among the nobility who chose to use the permission to retract donations made by their families to the convents. The Swedish monasteries and convents were simultaneously deprived of their livelihoods. They were banned from accepting new novices, and were forbidden to prevent their existing members from leaving if they wished to do so. However, the former monks and nuns were allowed to reside in the convent buildings for life on a state allowance, and many of them consequently survived the Reformation for decades. The last of them were Vreta Abbey, where the last nuns died in 1582, and Vadstena Abbey, from which the last nuns emigrated in 1595, about half a century after the introduction of the Reformation.
In Denmark, King Frederick I took similar action in 1528, confiscating 15 of the houses of the wealthiest monasteries and convents. Further laws under his successor over the course of the 1530s banned the friars, and forced monks and nuns to transfer title to their houses to the Crown, which passed them out to supportive nobles who soon acquired former monastic lands. Danish monastic life was to vanish in much the same way as Sweden's.
In Switzerland, too, monasteries were under threat. In 1523, the government of the city-state of Zurich pressured nuns to leave their monasteries and marry, and followed up the next year by dissolving all monasteries in its territory, under the pretext of using their revenues to fund education and help the poor. The city of Basel followed suit in 1529, and Geneva adopted the same policy in 1530. An attempt was also made in 1530 to dissolve the famous Abbey of St. Gall, which was a state of the Holy Roman Empire in its own right, but this failed, and St. Gall has survived.
In France and Scotland, by contrast, royal action to seize monastic income proceeded along entirely different lines. In both countries, the practice of nominating abbacies in commendam had become widespread. Since the 12th century, it had become universal in Western Europe for the household expenses of abbots and conventual priors to be separated from those of the rest of the monastery, typically appropriating more than half the house's income. With papal approval, these funds might be diverted on a vacancy to support a non-monastic ecclesiastic, commonly a bishop or member of the Papal Curia; and although such arrangements were nominally temporary, commendatory abbacies often continued long-term. Then, by the Concordat of Bologna in 1516, Pope Leo X granted to Francis I effective authority to nominate almost all abbots and conventual priors in France. Ultimately around 80 per cent of French abbacies came to be held in commendam, the commendators often being lay courtiers or royal servants; and by this means around half the income of French monasteries was diverted into the hands of the Crown, or of royal supporters; all entirely with the Popes' blessing. Where the French kings led, the Scots kings followed. In Scotland, where the proportion of parish tiends appropriated by higher ecclesiastical institutions exceeded 85 per cent, in 1532 the young James V obtained from the Pope approval to appoint his illegitimate infant sons (of which he eventually acquired nine) as commendators to abbacies in Scotland. Other Scots aristocratic families were able to strike similar deals, and consequently over £40,000 (Scots) per annum was diverted from monasteries into the royal coffers.
It is inconceivable that these moves went unnoticed by the English government and particularly by Thomas Cromwell, who had been employed by Wolsey in his monastic suppressions, and who was shortly to become the King's Secretary. However, Henry himself appears to have been much more influenced by the opinions on monasticism of the humanists Desiderius Erasmus and Thomas More, especially as found in Erasmus's work In Praise of Folly (1511) and More's Utopia (1516). Erasmus and More promoted ecclesiastical reform while remaining faithful to the Church of Rome, and had ridiculed such monastic practices as repetitive formal religion, superstitious pilgrimages for the veneration of relics, and the accumulation of monastic wealth. Henry appears from the first to have shared these views, never having endowed a religious house and only once having undertaken a religious pilgrimage, to Walsingham in 1511. From 1518, Thomas More was increasingly influential as a royal servant and counsellor, in the course of which his correspondence included a series of strong condemnations of the idleness and vice in much monastic life, alongside his equally vituperative attacks on Luther. Henry himself corresponded continually with Erasmus, prompting him to be more explicit in his public rejection of the key tenets of Lutheranism and offering him church preferment should he wish to return to England.
Declaration as Head of the Church
On famously failing to receive from the Pope a declaration of nullity regarding his marriage, Henry had himself declared Supreme Head of the Church of England in February 1531, and instigated a programme of legislation to establish this Royal Supremacy in law and enforce its acceptance throughout his realm. In April 1533, an Act in Restraint of Appeals eliminated the right of clergy to appeal to "foreign tribunals" (Rome) over the King's head in any spiritual or financial matter. All ecclesiastical charges and levies that had previously been payable to Rome would now go to the King. By the Submission of the Clergy, the English clergy and religious orders subscribed to the proposition that the King was, and had always been, the Supreme Head of the Church in England. Consequently, in Henry's view, any act of monastic resistance to royal authority would not only be treasonable, but also a breach of the monastic vow of obedience. Under heavy threats, almost all religious houses joined the rest of the Church in acceding to the Royal Supremacy, and in swearing to uphold the validity of the King's divorce and remarriage. Opposition was concentrated in the houses of Carthusian monks, Observant Franciscan friars and Bridgettine monks and nuns, which were, to the Government's embarrassment, exactly those orders where the religious life was acknowledged as being fully observed. Great efforts were made to cajole, bribe, trick and threaten these houses into formal compliance, with those religious who continued in their resistance being liable to imprisonment until they submitted or, if they persisted, to execution for treason. All the houses of the Observant Friars were handed over to the mainstream Franciscan order; the friars from the Greenwich house were imprisoned, where many died from ill-treatment. The Carthusians eventually submitted, other than the monks of the London house, which was suppressed; some of the monks were executed for high treason in 1535, and others starved to death in prison. Also opposing the Supremacy, and consequently imprisoned, were leading Bridgettine monks from Syon Abbey, although the Syon nuns, being strictly enclosed, escaped sanction at this stage, the personal compliance of the abbess being taken as sufficient for the government's purposes.
G.W.O. Woodward concluded that:
All but a very few took it without demur. They were, after all, Englishmen, and shared the common prejudice of their contemporaries against the pretensions of foreign Italian prelates.
Visitation of the monasteries
In 1534, Cromwell undertook, on behalf of the King, an inventory of the endowments, liabilities and income of the entire ecclesiastical estate of England and Wales, including the monasteries (see Valor Ecclesiasticus), for the purpose of assessing the Church's taxable value, through local commissioners who reported in May 1535. At the same time, Henry had Parliament authorise Cromwell to "visit" all the monasteries, including those like the Cistercians previously exempted from episcopal oversight by papal dispensation, to purify them in their religious life, and to instruct them in their duty to obey the King and reject Papal authority. Cromwell delegated his visitation authority to hand-picked commissioners, chiefly Richard Layton, Thomas Legh, John ap Rice and John Tregonwell for the purposes of ascertaining the quality of religious life being maintained in religious houses, of assessing the prevalence of 'superstitious' religious observances such as the veneration of relics, and for inquiring into evidence of moral laxity (especially sexual). The chosen commissioners were mostly secular clergy, and appear to have been Erasmian in their views, doubtful of the value of monastic life and universally dismissive of relics and miraculous tokens. An objective assessment of the quality of monastic observance in England in the 1530s would almost certainly have been largely negative. By comparison with the valuation commissions, the timetable for these monastic visitations was very tight, with some houses missed altogether, and inquiries appear to have concentrated on gross faults and laxity; consequently where the reports of misbehaviour returned by the visitors can be checked against other sources, they commonly appear to have been both rushed and greatly exaggerated, often recalling events and scandals from years before. The visitors interviewed individually each member of the house and selected servants, prompting each one both to make individual confessions of wrongdoing and also to inform on one another. From their correspondence with Cromwell it can be seen that the visitors knew that findings of impropriety were both expected and desired; however it is also clear that, where no faults were revealed, none were reported. The visitors put the worst construction they could on whatever they were told, but they do not appear to have fabricated allegations of wrongdoing outright.
Reports and further visitations
In the autumn of 1535, the visiting commissioners were sending back to Cromwell written reports of all the lurid doings they claimed to have discovered, enclosing with them bundles of purported miraculous wimples, girdles and mantles that monks and nuns had been lending out for cash to the sick, or to mothers in labour. The commissioners appear consistently to have instructed houses to reintroduce the strict practice of common dining and cloistered living, urging that those unable to comply should be encouraged to leave; and considerable numbers appear to have taken up the opportunity offered to be released from their monastic vows, so as to make a life elsewhere. The visitors reported the number of professed religious persons continuing in each house. In the case of seven houses, impropriety or irreligion had been so great, or the numbers remaining so few, that the commissioners had felt compelled to suppress them on the spot; in others, the abbot, prior or noble patron was reported to be petitioning the King for a house to be dissolved. Such authority had formerly rested with the Pope, but now the King would need to establish a legal basis for dissolution in statutory law. Moreover, it was by no means clear that the property of a surrendered house would automatically be at the disposal of the Crown; a good case could be made for this property to revert to the heirs and descendants of the founder or other patron. Accordingly, Parliament enacted the Suppression of Religious Houses Act 1535 ("Dissolution of the Lesser Monasteries Act") in early 1536, relying in large part on the reports of "impropriety" Cromwell had received, establishing the power of the King to dissolve religious houses that were failing to maintain a religious life, consequently providing for the King to compulsorily dissolve monasteries with annual incomes declared in the Valor Ecclesiasticus of less than £200 (of which there were potentially 419), but also giving the King the discretion to exempt any of these houses from dissolution at his pleasure. All property of the dissolved house would revert to the Crown. Accordingly, many monasteries falling below the threshold forwarded a case for continuation, offering to pay substantial fines in recompense. Many such cases were accepted, so that only around 330 were referred to suppression commissions, and only 243 houses were actually dissolved at this time. The choice of a £200 threshold as the criterion for general dissolution under the legislation has been queried, as this does not appear to correspond to any clear distinction in the quality of religious life reported in the visitation reports, and the preamble to the legislation refers to numbers rather than income. Adopting a financial criterion was most likely determined pragmatically; the Valor Ecclesiasticus returns being both more reliable and more complete than those of Cromwell's visitors.
The smaller houses identified for suppression were then visited during 1536 by a further set of local commissions, one for each county, charged with creating an inventory of assets and valuables, and empowered to obtain prompt co-operation from monastic superiors by the allocation to them of pensions and cash gratuities. It was envisaged that some houses might offer immediate surrender, but in practice few did; consequently a two-stage procedure was applied, the commissions reporting back to Cromwell for a decision as to whether to proceed with dissolution. In a number of instances these commissioners supported the continuation of a house where they found no serious current cause for concern; arguments that Cromwell, as vicegerent, appears often to have accepted. Around 80 houses were exempted, mostly offering a substantial fine. Where dissolution was determined on, a second visit would effect the arrangements for closure of the house, disposal of its assets and endowments and provisions for the future of the members of the house; otherwise the second visit would collect the agreed fine. In general, the suppression commissioners were less inclined to report serious faults in monastic observance within the smaller houses than the visiting commissioners had been, although this may have been coloured by an awareness that monks and nuns with a bad reputation would be more difficult to place elsewhere. The 1536 Act established that, whatever the claims of founders or patrons, the property of the dissolved smaller houses reverted to the Crown; and Cromwell established a new government agency, the Court of Augmentations, to manage it. However, although the property rights of lay founders and patrons were legally extinguished, the incomes of lay holders of monastic offices, pensions and annuities were generally preserved, as were the rights of tenants of monastic lands. Ordinary monks and nuns were given the choice of secularisation (with a cash gratuity but no pension), or of transfer to a continuing larger house of the same order. The majority of those then remaining chose to continue in the religious life; in some areas, the premises of a suppressed religious house were recycled into a new foundation to accommodate them, and in general, rehousing those seeking a transfer proved much more difficult and time-consuming than appears to have been anticipated. Two houses, Norton Priory in Cheshire and Hexham Abbey in Northumberland, attempted to resist the commissioners by force, actions which Henry interpreted as treason, resulting in his writing personally to demand the summary, brutal punishment of those responsible. The prior and canons of Norton were imprisoned for several months, and were fortunate to escape with their lives; the canons of Hexham, who made the further mistake of becoming involved in the Pilgrimage of Grace, were executed.
Initial round of suppressions
The first round of suppressions initially aroused considerable popular discontent, especially in Lincolnshire and Yorkshire where they contributed to the Pilgrimage of Grace of 1536, an event which led to Henry increasingly associating monasticism with betrayal, as some of the spared religious houses in the north of England (more or less willingly) sided with the rebels, while former monks resumed religious life in several of the suppressed houses. Clauses within the Treasons Act 1534 provided that the property of those convicted of treason would automatically revert to the Crown, clauses that Cromwell had presciently drafted with the intention of effecting the dissolution of religious houses whose heads were so convicted, arguing that the superior of the house (abbot, abbess, prior or prioress) was the legal "owner" of all its monastic property. The wording of the First Suppression Act had been clear that reform, not outright abolition of monastic life, was being presented to the public as the objective of the legislative policy; and there has been continuing academic debate as to whether a universal dissolution was nevertheless being covertly prepared for at this point.
The predominant academic opinion is that the extensive care taken to provide for monks and nuns from the suppressed houses to transfer to continuing houses if they wished, demonstrates that monastic reform was still, at least in the mind of the King, the guiding principle; but that further large-scale action against substandard richer monasteries was always envisaged. By definition, the selection of poorer houses for dissolution in the First Act minimised the potential release of funds to other purposes; and once pensions had been committed to former superiors, cash rewards paid to those wishing to leave the religious life, and appropriate funding allocated for refounded houses receiving transferred monks and nuns, it is unlikely that there was much if any profit at this stage other than from the fines levied on exempted houses. Nevertheless, there was during most of 1537 (possibly conditioned by concern not to re-ignite rebellious impulses) a distinct standstill in official action towards any further round of dissolutions. Episcopal visitations were renewed, monasteries adapted their internal discipline in accordance with Cromwell's injunctions, and many houses undertook overdue programmes of repair and reconstruction.
The remaining monasteries required funds, especially those facing a new need to pay fines for exemption. During 1537 and 1538, there was a large increase in monastic lands and endowments being leased out; and in lay notables being offered fee-paying offices and annuities in return for cash and favours. By establishing additional long-term liabilities, these actions diminished the eventual net return to the Crown from each house's endowments, but they were not officially discouraged; indeed Cromwell obtained and solicited many such fees in his own personal favour. Crucially, having created the precedent that tenants and lay recipients of monastic incomes might expect to have their interests recognised by the Court of Augmentations following dissolution, the government's apparent acquiescence to the granting of additional such rights and fees helped establish a predisposition towards dissolution amongst local notables and landed interests. At the same time however, and especially once the loss of income from shrines and pilgrimages was taken into account, the long-term financial sustainability of many of the remaining houses was increasingly in question.
Although Henry continued in public to maintain that his sole objective was monastic reform, it became increasingly clear, from around the end of 1537, that official policy was now envisaging the general extinction of monasticism in England and Wales; but that this extinction was now expected to be achieved through individual applications from superiors for voluntary surrender rather than through a systematic statutory dissolution. One major Abbey whose monks had been implicated in the Pilgrimage of Grace was that of Furness in Lancashire; the abbot, fearful of a treason charge, petitioned to be allowed to make a voluntary surrender of his house, which Cromwell happily approved. From then on, all dissolutions that were not a consequence of convictions for treason were legally "voluntary" — a principle that was taken a stage further with the voluntary surrender of Lewes priory in November 1537 when, as at Furness, the monks were not accorded the option of transfer to another house, but with the additional motivating consideration that this time (and on all future occasions) ordinary monks were offered life pensions if they co-operated. This created a pairing of positive and negative incentives in favour of further dissolution: abbots and priors came under pressure from their communities to petition for voluntary surrender if they could obtain favourable terms for pensions; they also knew that if they refused to surrender they might suffer the penalty for treason and their religious house would be dissolved anyway. Where the King had been able to establish himself as founder, he exploited his position to place compliant monks and nuns at the head of the house, while non-royal patrons and founders also tended to press superiors for an early surrender, hoping thereby to get preferential treatment in the disposal of monastic rights and properties. From the beginning of 1538, Cromwell targeted the houses that he knew to be wavering in their resolve to continue, cajoling and bullying their superiors to apply for surrender. Nevertheless, the public stance of the government was that the better-run houses could still expect to survive, and Cromwell dispatched a circular letter in March 1538 condemning false rumours of a general policy of dissolution, while also warning superiors against asset-stripping or concealment of valuables, which could be construed as treasonable action.
Second round of dissolutions
As 1538 proceeded, applications for surrender flooded in. Cromwell appointed a local commissioner in each case to ensure rapid compliance with the King's wishes, to supervise the orderly sale of monastic goods and buildings, to dispose of monastic endowments, and to ensure that the former monks and nuns were provided with pensions, cash gratuities and clothing. The second time round, the process proved to be much quicker and easier. Existing tenants would have their tenancies continued, and lay office holders would continue to receive their incomes and fees (even though they now had no duties or obligations). Monks or nuns who were aged, handicapped or infirm were marked out for more generous pensions, and care was taken throughout that there should be nobody cast out of their place unprovided for (who might otherwise have increased the burden of charity for local parishes). In a few instances, even monastic servants were provided with a year's wages on discharge.
The endowments of the monasteries, landed property and appropriated parish tithes and glebe were transferred to the Court of Augmentations, which would thenceforth pay out life pensions and fees at the agreed rate; subject to the court's fee of 4d in the pound, plus in most years the clerical 'Tenth', a 10% tax deduction on clergy incomes. Pensions averaged around £5 per annum before tax for monks, with those for superiors typically assessed at 10% of the net annual income of the house, and were not reduced if the pensioner obtained other employment. If, however, the pensioner accepted a royal appointment or benefice of greater annual value than their pension, the pension would be extinguished. In 1538, £5 was roughly comparable to the annual wages of a skilled worker; and although the real value of such a fixed income would suffer through inflation, it remained a significant sum; all the more welcome as prompt payment could largely be relied upon.
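To illustrate the deductions just described, here is a rough back-of-the-envelope calculation in Python. It assumes, for simplicity, that both the 4d-in-the-pound court fee and the 10% clerical 'Tenth' were taken from the gross pension in the same year; the precise basis of assessment is not specified above.

```python
# Rough sketch of the deductions on a typical monk's pension of 5 pounds,
# assuming both charges apply to the gross amount in the same year.
PENCE_PER_POUND = 240                     # 1 pound = 20 shillings = 240 pence

gross = 5.0                               # average gross pension, in pounds
court_fee = gross * 4 / PENCE_PER_POUND   # 4d in the pound, roughly 1.7%
clerical_tenth = gross * 0.10             # the 10% 'Tenth' on clergy incomes
net = gross - court_fee - clerical_tenth

print(f"net pension ≈ £{net:.2f}")        # ≈ £4.42 in a year when both applied
```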
Pensions granted to nuns were notably less generous, averaging around £3 per annum. During Henry's reign, former nuns, like monks, continued to be forbidden to marry, so it is entirely possible that genuine hardship resulted, especially as former nuns had little access to opportunities for gainful employment. Where nuns came from well-born families, as many did, they seem commonly to have returned to live with their relatives. Otherwise, there were a number of instances where former nuns of a house clubbed together in a shared household. Moreover, there were no retrospective pensions for those monks or nuns who had already sought secularisation following the 1535 visitation, nor for those members of the smaller houses dissolved in 1536 and 1537 who had not then remained in the religious life, nor for those houses dissolved before 1538 due to the conviction for treason of their superior; and no friars were pensioned.
Once it had become clear that dissolution was now to be the general expectation, the future of the ten monastic cathedrals came into question. For two of these, Bath and Coventry, there was a second secular cathedral church in the same diocese, and both surrendered in 1539; but the other eight would necessarily need to continue in some form. The question was what form that might take. A possible model was presented by the collegiate church of Stoke-by-Clare, Suffolk, where, in 1535, the evangelically minded Dean, Matthew Parker, had recast the college statutes away from the saying of chantry masses and towards preaching, observance of the office, and children's education.
In May 1538, the monastic cathedral community of Norwich surrendered, adopting new collegiate statutes as secular priests along similar lines. The new foundation in Norwich provided for around half the number of clergy as had been monks in the former monastery; with a dean, five prebendaries and sixteen minor canons. This change corresponded with ideas of a reformed future for monastic communities that had been a subject of debate and speculation amongst some leading Benedictine abbots for several decades; and sympathetic voices were being heard from a number of quarters in the late summer of 1538.
The Lord Chancellor, Thomas Audley, proposed Colchester and St Osyth's Priory as possible future colleges. Thomas Howard, 3rd Duke of Norfolk and Lord Treasurer, proposed Thetford Priory, making extensive preparations to adopt statutes similar to those from Stoke-by-Clare, and expending substantial sums on moving shrines, relics and architectural fittings from the dissolved Castle Acre Priory into Thetford priory church. Cromwell himself proposed Little Walsingham (once purged of its "superstitious" shrine), and Hugh Latimer, the evangelical bishop of Worcester, wrote to Cromwell in 1538 to plead for the continuation of Great Malvern Priory, and of "two or three in every shire of such remedy". By early 1539, the continuation of a selected group of great monasteries as collegiate refoundations had become an established expectation; and when the Second Suppression Act was presented to Parliament in May 1539, it was accompanied by an Act giving the King authority to establish new bishoprics and collegiate cathedral foundations from existing monastic houses. But while the principle had been established, the numbers of successor colleges and cathedrals remained unspecified.
King Henry's enthusiasm for creating new bishoprics was second to his passion for building fortifications. When an apparent alliance of France and the Empire against England was agreed at Toledo in January 1539, it precipitated a major invasion scare. Even though the immediate danger had passed by midsummer, Henry still demanded from Cromwell unprecedented sums for coastal defence works from St Michael's Mount to Lowestoft, and the scale of the proposed new foundations was drastically cut back. In the end, six abbeys were raised to be cathedrals of new dioceses; and only a further two major abbeys, Burton-on-Trent and Thornton, were re-founded as non-cathedral colleges. To the intense displeasure of Thomas Howard, Thetford was not spared, and was amongst the last houses to be dissolved in February 1540, while the Duke was out of the country on a hastily arranged embassy to France.
Even late in 1538, Cromwell himself appears to have envisaged that a select group of nunneries might be allowed to continue in the religious life, where they were able to demonstrate both a high quality of regular observance and a commitment to the principles of religious reform. One such was Godstow Abbey near Oxford, whose abbess, Lady Katherine Bulkeley, was one of three whom Cromwell had, in 1535, personally promoted to be elected to the headship of richer nunneries. In October 1538, Dr John London, Cromwell's commissioner, descended on Godstow and demanded the surrender of the abbey; but following a direct appeal to Cromwell himself, the house was assured that it could continue. In response, Lady Katherine assured Cromwell that "there is neither pope nor purgatory, image nor pilgrimage nor praying to dead saints used or regarded amongst us". Godstow Abbey was providing highly regarded boarding and schooling for girls of notable families, as were several other nunneries amongst the houses still standing, a factor which may have accounted for their surviving so long. Diarmaid MacCulloch further suggests that "customary male cowardice" was also a factor in the government's reluctance to confront the heads of female religious houses. But the stay of execution for Godstow Abbey lasted only just over a year: the abbey was suppressed in November 1539 along with all other surviving nunneries, as Henry was determined that none should continue.
None of this process of legislation and visitation had applied to the houses of the friars. At the beginning of the 14th century there had been around 5,000 friars in England, occupying extensive complexes in all towns of any size, and around 200 friaries still stood at the dissolution. But, except among the Observant Franciscans, by the 16th century the friars' income from donations had collapsed, their numbers had shrunk to fewer than 1,000, and their conventual buildings were often ruinous or leased out commercially, as were their enclosed vegetable gardens. No longer self-sufficient in food and with their cloistered spaces invaded by secular tenants, almost all friars, in contravention of their rules, were now living in rented lodgings outside their friaries, meeting only for divine service in the friary church. Many friars supported themselves through paid employment and held personal property.
By early 1538, suppression of the friaries was being widely anticipated; in some houses all friars save the prior had already left, and realisable assets (standing timber, chalices, vestments) were being sold off. Cromwell deputed Richard Yngworth, suffragan Bishop of Dover and former Provincial of the Dominicans, to obtain the friars' surrender, which he achieved rapidly by drafting new injunctions that enforced each order's rules and required friars to resume a strict conventual life within their walls. In effect, failure to accede to the king's wish for voluntary surrender would result, for most, in enforced homelessness and starvation. Once surrender had been accepted and formally witnessed, Yngworth reported briefly to Cromwell on his actions, noting for each friary the current tenant of the gardens, the general state of the friary buildings, and whether the friary church had valuable lead on its roofs and gutters. Mostly he found poverty and derelict buildings, with leased-out gardens as the only income-bearing asset.
Yngworth had no authority to dispose of lands and property and could not negotiate pensions, so the friars appear simply to have been released from their vows and dismissed with a gratuity of around 40 shillings each, which Yngworth took from whatever cash resources were in hand. He listed by name the friars remaining in each house at surrender, so that Cromwell could provide them with 'capacities': legal permission to pursue careers as secular priests. Furthermore, Yngworth had no discretion to maintain the use of the friary churches, even though many had continued to attract congregations for preaching and worship; these were mostly disposed of rapidly by the Court of Augmentations. Of all the friary churches in England and Wales, only St. Andrew's Hall, Norwich, Atherstone Priory (Warwickshire), the Chichester Guildhall, and Greyfriars Church, Reading remain standing (although the London church of the Austin Friars continued in use by the Dutch Church until destroyed in the London Blitz). Almost all other friaries have disappeared with few visible traces.
In April 1539, Parliament passed a new law retrospectively legalising acts of voluntary surrender and assuring tenants of their continued rights. By then, however, the vast majority of monasteries in England and Wales had already been dissolved or marked out for a future as collegiate foundations. Some still resisted, and that autumn the abbots of Colchester, Glastonbury, and Reading were hanged, drawn and quartered for treason, their houses being dissolved and their monks, on these occasions, receiving a basic pension of £4 a year.
St Benet's Abbey in Norfolk was the only abbey in England to escape formal dissolution. As its last abbot had been appointed to the see of Norwich, the abbey's endowments were transferred with him directly into those of the bishopric. The last two abbeys to be dissolved were Shap Abbey, in January 1540, and Waltham Abbey, on 23 March 1540; several priories also survived into 1540, including Bolton Priory in Yorkshire (dissolved 29 January 1540) and Thetford Priory in Norfolk (dissolved 16 February 1540). It was not until April 1540 that the cathedral priories of Canterbury and Rochester were transformed into secular cathedral chapters.
Effects on public life
The surrender of a house's monastic endowments was automatically understood as terminating all regular religious observance by its members, except in the case of a few communities, such as Syon, that went into exile. There are several recorded instances where groups of former members of a house set up residence together, but no cases where an entire community did so, and there is no indication that any such groups continued to pray the Divine Office. The dissolution Acts were concerned solely with the disposal of endowed property; at no point do they explicitly forbid the continuance of a regular life. However, given Henry's attitude to those religious who had resumed their houses during the Pilgrimage of Grace, it would have been most unwise for any former community of monks or nuns within his dominions to have maintained covert monastic observance.
The local commissioners were instructed to ensure that, where portions of abbey churches were also used by local parishes or congregations, this use should continue. Accordingly, parts of 117 former monasteries survived (and mostly still remain) in use for parochial worship, in addition to the fourteen former monastic churches that survived in their entirety as cathedrals. In around a dozen instances, wealthy benefactors or parishes purchased a complete former monastic church from the commissioners and presented it to their local community as a new parish church building. Many other parishes bought and installed former monastic woodwork, choir stalls and stained glass windows. As was commonly the case by the late medieval period, the abbot's lodging had often been expanded to form a substantial independent residence, and these properties were frequently converted into country houses by lay purchasers. In other cases, such as Lacock Abbey and Forde Abbey, the conventual buildings themselves were converted to form the core of a Tudor great mansion. Otherwise, the most marketable fabric in monastic buildings was likely to be the lead on roofs, gutters and plumbing, and buildings were burned down as the easiest way to extract it. Building stone and slate roofs were sold off to the highest bidder, and many monastic outbuildings were turned into granaries, barns and stables.

Cromwell had already instigated a campaign against "superstitions": pilgrimages and the veneration of saints, in the course of which ancient and precious valuables were seized and melted down, the tombs of saints and kings ransacked for whatever profit could be got from them, and their relics destroyed or dispersed. Even the crypt of King Alfred the Great was not spared the looting frenzy. Great abbeys and priories like Glastonbury, Walsingham, Bury St Edmunds, and Shaftesbury, which had flourished as pilgrimage sites for many centuries, were soon reduced to ruins. However, the tradition that there was widespread mob action resulting in destruction and iconoclasm, with altars and windows smashed, partly confuses the looting spree of the 1530s with the vandalism wrought by the Puritans in the next century against Anglican privileges. Woodward concludes:
There was no general policy of destruction, except in Lincolnshire where the local government agent was so determined that the monasteries should never be restored that he razed as many as he could to the ground. More often, the buildings have simply suffered from unroofing and neglect, or by quarrying.
Once the new and re-founded cathedrals and other endowments had been provided for, the Crown became richer to the extent of around £150,000 (equivalent to £97,356,000 in 2019) per year, although around £50,000 (equivalent to £32,452,000 in 2019) of this was initially committed to fund monastic pensions. Cromwell had intended that the bulk of this wealth should serve as a regular income of government. However, after Cromwell's fall in 1540, Henry needed money quickly to fund his military ambitions in France and Scotland, and so monastic property was sold off, representing by 1547 an annual value of £90,000 (equivalent to £52,838,000 in 2019). Lands and endowments were not offered for sale, let alone auctioned; instead the government responded to applications for purchase, of which there had been a continual flood ever since the process of dissolution got under way. Many applicants had been founders or patrons of the relevant houses, and could expect to be successful subject to paying the standard market rate of twenty years' income. Purchasers were predominantly leading nobles, local magnates and gentry, with no discernible tendency in terms of conservative or reformed religion, other than a determination to maintain and extend their family's position and local status. The landed property of the former monasteries included large numbers of manorial estates, each carrying the right and duty to hold a court for tenants and others. Acquiring such feudal rights was regarded as essential to establish a family in the status and dignity of the late medieval gentry, but for a long period freehold manorial estates had been very rare in the market, and families of all kinds seized on the opportunity now offered to entrench their position in the social scale. Nothing would subsequently induce them to surrender their new acquisitions. The Court of Augmentations retained lands and spiritual income sufficient to meet its continuing obligations to pay annual pensions; but as pensioners died off, or as pensions were extinguished when their holders accepted a royal appointment of higher value, surplus property became available each year for further disposal. The last surviving monks continued to draw their pensions into the reign of James I (1603–1625), more than 60 years after the dissolution's end.
The Dissolution of the Monasteries impinged relatively little on English parish church activity. Parishes that had formerly paid their tithes to support a religious house now paid them to a lay impropriator, but rectors, vicars and other incumbents remained in place, their incomes unaffected and their duties unchanged. Congregations that had shared monastic churches for worship continued to do so, the former monastic parts now walled off and derelict. Most parish churches had been endowed with chantries, each maintaining a stipendiary priest to say Mass for the souls of their donors, and these continued for the moment unaffected. In addition, there remained after the dissolution of the monasteries over a hundred collegiate churches in England, whose endowments maintained regular choral worship through a corporate body of canons, prebends or priests. All these survived the reign of Henry VIII largely intact, only to be dissolved under the Chantries Act 1547 by Henry's son Edward VI, their property being absorbed into the Court of Augmentations and their members being added to the pensions list. Since many former monks had found employment as chantry priests, the consequence for these clerics was a double experience of dissolution, perhaps mitigated by their thereafter drawing a double pension.
Ireland
The dissolutions in Ireland followed a very different course from those in England and Wales. There were around 400 religious houses in Ireland in 1530—many more, relative to population and material wealth, than in England and Wales. In marked distinction to the situation in England, in Ireland the houses of friars had flourished in the 15th century, attracting popular support and financial endowments, undertaking many ambitious building schemes, and maintaining a regular conventual and spiritual life. Friaries constituted around half of the total number of religious houses. Irish monasteries, by contrast, had experienced a catastrophic decline in numbers of professed religious, such that by the 16th century only a minority maintained the daily observance of the Divine Office. Henry's direct authority, as Lord of Ireland and, from 1541, as King of Ireland, only extended to the area of the Pale immediately around Dublin. Outside this area, he could only proceed by tactical agreement with clan chiefs and local lords.
Nevertheless, Henry was determined to carry through a policy of dissolution in Ireland, and in 1537 introduced legislation into the Irish Parliament to legalise the closure of monasteries. The process faced considerable opposition, and only sixteen houses were suppressed. Henry remained resolute, however, and from 1541, as part of the Tudor conquest of Ireland, he continued to press for the area of successful dissolution to be extended. For the most part, this involved making deals with local lords, under which monastic property was granted away in exchange for allegiance to the new Irish Crown; consequently, Henry acquired little if any of the wealth of the Irish houses.
By the time of Henry's death (1547) around half of the Irish houses had been suppressed; but many continued to resist dissolution until well into the reign of Elizabeth I, and some houses in the West of Ireland remained active until the early 17th century. In 1649, Oliver Cromwell led a Parliamentary army to conquer Ireland, and systematically sought out and destroyed former monastic houses. Subsequently, however, sympathetic landowners housed monks or friars close to several ruined religious houses, allowing them a continued covert existence during the 17th and 18th centuries, subject to the dangers of discovery and legal ejection or imprisonment.
Social and economic
The abbeys of England, Wales and Ireland had been among the greatest landowners and the largest institutions in the kingdoms, although by the early 16th century, religious donors increasingly tended to favour parish churches, collegiate churches, university colleges and grammar schools, and these were now the predominant centres for learning and the arts. Nevertheless, and particularly in areas far from London, the abbeys, convents and priories were centres of hospitality and learning, and everywhere they remained a main source of charity for the old and infirm. The removal of over eight hundred such institutions, virtually overnight, left great gaps in the social fabric.
In addition, about a quarter of net monastic wealth on average consisted of "spiritual" income, arising where the religious house held the advowson of a benefice with the legal obligation to maintain the cure of souls in the parish, originally by nominating the rector and taking an annual rental payment. Over the medieval period, monasteries and priories continually sought papal exemptions so as to appropriate the glebe and tithe income of rectoral benefices in their possession to their own use. However, from the 13th century onwards, English diocesan bishops successfully established the principle that only the glebe and 'greater tithes' of grain, hay and wood could be appropriated by monastic patrons in this manner; the 'lesser tithes' had to remain within the parochial benefice, whose incumbent thenceforward carried the title of 'vicar'. By 1535, of 8,838 rectories, 3,307 had thus been appropriated with vicarages; but at this late date, a small subset of vicarages in monastic ownership were not being served by beneficed clergy at all. In almost all such instances, these were parish churches in the ownership of houses of Augustinian or Premonstratensian canons, orders whose rules required them to provide parochial worship within their conventual churches, for the most part as chapels of ease of a more distant parish church. From the mid-14th century onwards the canons had been able to exploit their hybrid status to justify petitions for papal privileges of appropriation, allowing them to fill vicarages in their possession either from among their own number or from secular stipendiary priests removable at will; these arrangements corresponded to those for their chapels of ease.
On the dissolution, these spiritual income streams were sold off on the same basis as landed endowments, creating a new class of lay impropriators, who thereby became entitled to the patronage of the living together with the income from tithes and glebe lands, albeit that, as lay rectors, they also became liable to maintain the fabric of the parish chancel. The existing rectors and vicars serving parish churches formerly in monastic ownership continued in post, their incomes unaffected. However, in those of the canons' parish churches and chapels of ease which had become unbeneficed, the lay rector as patron was additionally obliged to establish a stipend for a perpetual curate.
It is unlikely that the monastic system could have been broken simply by royal action had there not been the overwhelming bait of enhanced status for gentry large and small, and the convictions of the small but determined Protestant faction. Anti-clericalism was a familiar feature of late medieval Europe, producing its own strain of satiric literature that was aimed at a literate middle class.
Arts and culture
Along with the destruction of the monasteries, some of them many hundreds of years old, the related destruction of the monastic libraries was perhaps the greatest cultural loss caused by the English Reformation. Worcester Priory (now Worcester Cathedral) had 600 books at the time of the dissolution; only six of them are known to have survived intact to the present day. At the Augustinian friary at York, a library of 646 volumes was destroyed, leaving only three known survivors. Some books were destroyed for their precious bindings, others were sold off by the cartload. The antiquarian John Leland was commissioned by the King to rescue items of particular interest (especially manuscript sources of Old English history), and other collections were made by private individuals, notably Matthew Parker. Nevertheless, much was lost, especially manuscript books of English church music, none of which had then been printed.
A great nombre of them whych purchased those superstycyous mansyons, reserved of those lybrarye bokes, some to serve theyr jakes, some to scoure candelstyckes, and some to rubbe their bootes. Some they solde to the grossers and soapsellers.— John Bale, 1549
Health and education
The Act of 1539 also provided for the suppression of religious hospitals, which had constituted in England a distinct class of institution, endowed for the purpose of caring for older people. A very few of these, such as Saint Bartholomew's Hospital in London (which still exists, though under a different name between 1546 and 1948), were exempted by special royal dispensation but most closed, their residents being discharged with small pensions.
Monasteries had also supplied free food and alms for the poor and destitute, and it has been argued that the removal of this and other charitable resources, amounting to about 5 per cent of net monastic income, was one of the factors in the creation of the army of "sturdy beggars" that plagued late Tudor England, causing the social instability that led to the Edwardian and Elizabethan Poor Laws. This argument has been disputed, for example, by G.W.O. Woodward, who summarises:
No great host of beggars was suddenly thrown on the roads for monastic charity had had only marginal significance and, even had the abbeys been allowed to remain, could scarcely have coped with the problems of unemployment and poverty created by the population and inflationary pressures of the middle and latter parts of the sixteenth century.
Monasteries had necessarily undertaken schooling for their novice members, which in the later medieval period had tended to extend to choristers and sometimes other younger scholars; all this educational resource was lost with their dissolution. By contrast, where monasteries had provided grammar schools for older scholars, these were commonly refounded with enhanced endowments, some by royal command in connection with the newly re-established cathedral churches, others by private initiative. Monastic orders had maintained, for the education of their members, six colleges at the universities of Oxford and Cambridge, of which five survived as refoundations. Hospitals too were frequently re-endowed by private benefactors, and many new almshouses and charities were founded by the Elizabethan gentry and professional classes (the London Charterhouse/Charterhouse School being an example which still survives). Nevertheless, it has been estimated that overall levels of charitable giving in England did not return to pre-dissolution levels until 1580. On the eve of the dissolution, the various monasteries owned approximately 2,000,000 acres (just under 8,100 km²), over 16 per cent of England, with tens of thousands of tenant farmers working those lands, some of whom had family ties to a particular monastery going back many generations.
It has been argued that the suppression of the English monasteries and nunneries also contributed to the spreading decline of the contemplative spirituality that had once thrived in Europe, with occasional exceptions found only in groups such as the Society of Friends ("Quakers"). This may be set against the continuation, in the retained and newly established cathedrals, of the daily singing of the Divine Office by choristers and vicars choral, now undertaken as public worship, which had not been the case before the dissolution. The deans and prebends of the six new cathedrals were overwhelmingly former heads of religious houses. The secularised former monks and friars commonly looked for re-employment as parish clergy; consequently numbers of new ordinations dropped drastically in the ten years after the dissolution, and ceased almost entirely in the reign of Edward VI. It was only in 1549, after Edward came to the throne, that former monks and nuns were permitted to marry; but within a year of permission being granted around a quarter had done so, only to find themselves forcibly separated (and denied their pensions) in the reign of Mary. On the succession of Elizabeth, these former monks and friars (happily reunited both with their wives and their pensions) formed a major part of the backbone of the new Anglican church, and may properly claim much credit for maintaining the religious life of the country until a new generation of ordinands became available in the 1560s and 1570s.
In the medieval church, there had been no seminaries or other institutions dedicated to training men as parish clergy. An aspiring candidate for ordination, having acquired a grammar school education and appropriate experience, would have been presented to the bishop's commissary for examination, typically sponsored by an ecclesiastical corporation which provided him with a 'title', a notional patrimony assuring the bishop of his financial security. By the 16th century the sponsors were overwhelmingly religious houses, although monasteries provided no formal parochial training and the financial 'title' was a legal fiction. With the rapid expansion of grammar school provision in the late medieval period, the number of men presented each year for ordination greatly exceeded the number of benefices falling vacant through the death of the incumbent priest, and consequently most newly ordained parish clergy could expect to succeed to a benefice, if at all, only after many years as a Mass priest of low social standing.
In the knowledge that alternative arrangements for sponsorship and title would now need to be made, the dissolution legislation provided that the lay and ecclesiastical successors of the monks in former monastic endowments could in future provide valid title for ordinands. However, these new arrangements appear to have taken a considerable period to gain general acceptance, and the circumstances of the church in the late 1530s may not have encouraged candidates to come forward. Consequently, for the 20 years until the succession of Elizabeth I, the number of ordinands in every diocese in England and Wales fell drastically below the number required to replace the mortality of existing incumbents. At the same time, the restrictions on 'pluralism' introduced through legislation in 1529 prevented the accumulation of multiple benefices by individual clergy, and accordingly by 1559 some 10 per cent of benefices were vacant and the former reserve army of Mass priests had largely been absorbed into the ranks of beneficed clergy. Monastic successors tended thereafter to prefer to sponsor university graduates as candidates for the priesthood; and although the government signally failed to respond to the consequent need for expanded educational provision, individual benefactors stepped into the breach, with the refoundation as university colleges of five of the six former monastic colleges of Oxford and Cambridge, while Jesus College, Oxford and Emmanuel College, Cambridge were newly founded with the express purpose of educating a Protestant parish clergy. One unintended long-term consequence of the dissolution was thus the transformation of the parish clergy in England and Wales into an educated professional class of secure beneficed incumbents of distinctly higher social standing, one that furthermore, through intermarriage of one another's children, became substantially self-perpetuating.
Although it had been promised that the King's enhanced wealth would enable the founding or enhanced endowment of religious, charitable and educational institutions, in practice only about 15 per cent of the total monastic wealth was reused for these purposes. This comprised the refoundation of eight of the ten former monastic cathedrals (Coventry and Bath being the exceptions), together with six wholly new bishoprics (Bristol, Chester, Gloucester, Oxford, Peterborough, Westminster) with their associated cathedrals, chapters, choirs and grammar schools; the refoundation as secular colleges of the monastic houses at Brecon, Thornton and Burton on Trent; the endowment of five Regius Professorships in each of the universities of Oxford and Cambridge; the endowment of Trinity College, Cambridge, and Christ Church, Oxford; and the maritime charity of Trinity House. Thomas Cranmer objected to the provision of the new cathedrals with complete chapters of prebendaries at high stipends, but in the face of pressure to ensure that well-paid posts would continue, his protests had no effect. On the other hand, Cranmer was able to ensure that the new grammar schools attached both to 'New Foundation' and 'Old Foundation' cathedrals should be well funded, and accessible to boys from all walks of life. About a third of total monastic income was required to maintain pension payments to former monks and nuns, and hence remained with the Court of Augmentations. This left just over half available to be sold at market rates (very little property was given away by Henry to favoured servants, and any that was given tended to revert to the Crown once its recipients fell out of favour and were indicted for treason). By comparison with the forcible closure of monasteries elsewhere in Protestant Europe, the English and Welsh dissolutions resulted in a relatively modest volume of new educational endowments; but the treatment of former monks and nuns was more generous, and there was no counterpart elsewhere to the efficient mechanisms established in England to maintain pension payments over successive decades.
The dissolution and destruction of the monasteries and shrines was very unpopular in many areas. In the north of England, centring on Yorkshire and Lincolnshire, the suppression of the monasteries led to a popular rising, the Pilgrimage of Grace, that threatened the Crown for some weeks. There were major popular uprisings in Lincolnshire and Yorkshire in 1536, and a further rising in Norfolk the following year. Rumours were spread that the King was going to strip the parish churches too, and even tax cattle and sheep. The rebels called for an end to the dissolution of the monasteries and for the removal of Cromwell. Henry defused the movement with solemn promises, all of which went unkept, and then summarily executed the leaders.
When Henry VIII's Catholic daughter, Mary I, succeeded to the throne in 1553, her hopes for a revival of English religious life proved a failure. Westminster Abbey, which had been retained as a cathedral, reverted to being a monastery, while the communities of the Bridgettine nuns and of the Observant Franciscans, which had gone into exile in the reign of Henry VIII, were able to return to their former houses at Syon and Greenwich respectively. A small group of fifteen surviving Carthusians was re-established in their old house at Sheen, as were eight Dominican canonesses in Dartford. A house of Dominican friars was established at Smithfield, but this was only possible through importing professed religious from Holland and Spain; Mary's hopes of further refoundations foundered, as she found it very difficult to persuade former monks and nuns to resume the religious life, and schemes for restoring the abbeys at Glastonbury and St Albans failed for lack of volunteers. All the refounded houses were in properties that had remained in Crown possession; but, in spite of much prompting, none of Mary's lay supporters would co-operate in returning their holdings of monastic lands to religious use, while the lay lords in Parliament proved unremittingly hostile, as a revival of the "mitred" abbeys would have returned the House of Lords to an ecclesiastical majority. Moreover, there remained a widespread suspicion that the return of religious communities to their former premises might call into question the legal title of lay purchasers of monastic land, and accordingly all Mary's foundations were technically new communities in law. In 1554 Cardinal Pole, the papal legate, negotiated a papal dispensation allowing the new owners to retain the former monastic lands, and in return Parliament enacted the heresy laws in January 1555. When Mary died in 1558 and was succeeded by her half-sister, Elizabeth I, five of the six revived communities left again for exile in continental Europe. An Act of Elizabeth's first parliament dissolved the refounded houses; and although Elizabeth offered to allow the monks in Westminster to remain in place with restored pensions if they took the Oath of Supremacy and conformed to the new Book of Common Prayer, all refused and dispersed unpensioned. In less than 20 years, the monastic impulse had effectively been extinguished in England, and was only revived, even amongst Catholics, in the very different form of the new and reformed counter-reformation orders, such as the Jesuits.
- Cestui que
- Charter of Liberties
- Compendium Compertorum
- List of monasteries dissolved by Henry VIII of England
- Little Jack Horner, a children's rhyme allegedly based on the episode.
- Religion in the United Kingdom
- Second Act of Dissolution
- G. W. Bernard, "The Dissolution of the Monasteries," History (2011) 96#324 p 390
- Dickens, p. 175.
- Dickens, p. 75.
- Studies in the Early History of Shaftesbury Abbey, Dorset County Council, 1999
- Knowles, David (1959). The Religious Orders in England, Vol. III. Cambridge: Cambridge University Press. p. 150.
- Marshall, Peter (2017). Heretics and Believers. Yale University Press. p. 30.
- Scarisbrick, p. 337.
- Dickens, p. 79.
- Dickens, p. 74.
- Dickens, p. 77.
- Lutherus, Martinus (1521). On Monastic Vows – De votis monasticis. Melchior Lotter d.J. / World Digital Library. Retrieved 1 March 2014.
- Lennart Jörälv: Reliker och mirakel. Den heliga Birgitta och Vadstena (2003)
- Woodward, G.W.O. The Dissolution of the Monasteries. p. 19.
- MacCulloch, Diarmaid (2018). Thomas Cromwell: A Life. Allen Lane. p. 489.
- MacCulloch, Diarmaid (2018). Thomas Cromwell: A Life. Allen Lane. p. 490.
- MacCulloch, Diarmaid (2018). Thomas Cromwell: A Life. Allen Lane. p. 511.
- Erler, Mary C. (2013). Reading and Writing During the Dissolution. Cambridge: Cambridge University Press. pp. 60–72.
- MacCulloch, Diarmaid (2018). Thomas Cromwell: A Life. Allen Lane. p. 463.
- MacCulloch, Diarmaid (2018). Thomas Cromwell: A Life. Allen Lane. p. 491.
- Salter, M. Medieval English Friaries. 2010. Malvern: Folly Publications. ISBN 978-1-871731-87-3
- Woodward, G.W.O. The Dissolution of the Monasteries. p. 23.
- UK Retail Price Index inflation figures are based on data from Clark, Gregory (2017). "The Annual RPI and Average Earnings for Britain, 1209 to Present (New Series)". MeasuringWorth. Retrieved 2 February 2020.
- Knowles, David (1955). The Religious Orders in England, Vol. II. Cambridge: Cambridge University Press. p. 290.
- Knowles, David (1955). The Religious Orders in England, Vol. II. Cambridge: Cambridge University Press. p. 291.
- Knowles, David (1955). The Religious Orders in England, Vol. II. Cambridge: Cambridge University Press. p. 292.
- For background on Chaucer's Pardoner and other Chaucerian anticlerical satire, see John Peter, Complaint and Satire in Early English Literature. (Oxford: Clarendon Press), 1956.
- Murray, Stuart. (2009). The library: An illustrated history. Skyhorse Publishing, p. 94.
- Carley, J. (1997). Marks in books and the libraries of Henry VIII. The Papers of the Bibliographical Society of America, 91(4), 583-606.
- Woodward, G.W.O. The Dissolution of the Monasteries. p. 24.
- Bucholz, RO and Key, N.: "Early modern England 1485–1714: a narrative history" Wiley-Blackwell, 2009, pp. 110–111. ISBN 978-1-4051-6275-3
- Baskerville, Geoffrey (1937). English Monks and the Suppression of the Monasteries. New Haven, Connecticut: Yale University Press.
- Bernard, G. W. (October 2011). "The Dissolution of the Monasteries". History. 96 (324): 390–409. doi:10.1111/j.1468-229X.2011.00526.x.
- Bradshaw, Brendan (1974). The Dissolution of the Religious Orders in Ireland under Henry VIII. London: Cambridge University Press.
- Cornwall, J.C.K. (1988). Wealth and Society in Early Sixteenth-Century England. Cambridge: Cambridge University Press.
- Dickens, A. G. (1989). The English Reformation (2nd ed.). London: B. T. Batsford.
- Duffy, Eamon (1992). The Stripping of the Altars: Traditional Religion in England, 1400–1580. New Haven, Connecticut: Yale University Press. ISBN 0-300-06076-9.
- Gasquet, F. A. (1925). Henry VIII and the English Monasteries (8th ed.). London.
- Haigh, Christopher (1969). The Last Days of the Lancashire Monasteries and the Pilgrimage of Grace. Manchester: Manchester University Press for Chetham Society.
- Knowles, David (1959). The Religious Orders in England. III. Cambridge: Cambridge University Press.
- Savine, Alexander (1909). English Monasteries on the Eve of the Dissolution. Oxford: The Clarendon Press.
- White, Newport B. (1943). Extents of Irish Monastic possessions 1540–1. Dublin: Irish Manuscripts Commission. Retrieved 19 May 2017.
- Willmott, Hugh. (2020). The Dissolution of the Monasteries in England and Wales. Sheffield: Equinox Publishing.
- Woodward, G.W.O. (1974). The Dissolution of the Monasteries. London: Pitkin Pictorials Ltd. Concentrates on England and Wales.
- Youings, J. (1971). The Dissolution of the Monasteries. London: Allen and Unwin.
- The Dissolution and Westminster Abbey (2007) Barbara Harvey; a detailed survey of the dissolution process at Westminster, in the context of overall government policy.
- Dissolution of the Monasteries on In Our Time at the BBC
- Dissolution of the Monasteries
- Dissolution of the Monasteries and historical records of some of the abbeys dissolved
- BBC Timeline: Tudors
- BBC History: English Reformation
- Catholic Encyclopedia: Suppression of English Monasteries
- Dissolution of the monasteries in England 1536–1541: on website The History Notes
- The Protestant Reformation in England by William Cobbett (1763–1835), Letter VI. Confiscation of the Monasteries. |
The giant cluster of elliptical galaxies in the centre of this image contains so much dark matter mass that its gravity bends light. This means that for very distant galaxies in the background, the cluster’s gravitational field acts as a sort of magnifying glass, bending and concentrating the distant object’s light towards Hubble. These gravitational lenses are one tool astronomers can use to extend Hubble’s vision beyond what it would normally be capable of observing. Using Abell 383, a team of astronomers have identified and studied a galaxy so far away we see it as it was less than a billion years after the Big Bang. Viewing this galaxy through the gravitational lens meant that the scientists were able to discern many intriguing features that would otherwise have remained hidden, including that its stars were unexpectedly old for a galaxy this close in time to the beginning of the Universe. This has profound implications for our understanding of how and when the first galaxies formed, and how the diffuse fog of neutral hydrogen that filled the early Universe was cleared. Credit: NASA, ESA, J. Richard (CRAL) and J.-P. Kneib (LAM). Acknowledgement: Marc Postman (STScI)

Using the amplifying power of a cosmic gravitational lens, astronomers have discovered a distant galaxy whose stars were born unexpectedly early in cosmic history. This result sheds new light on the formation of the first galaxies, as well as on the early evolution of the Universe.
Johan Richard, the lead author of a new study, says: "We have discovered a distant galaxy that began forming stars just 200 million years after the Big Bang. This challenges theories of how soon galaxies formed and evolved in the first years of the Universe. It could even help solve the mystery of how the hydrogen fog that filled the early Universe was cleared."
Richard's team spotted the galaxy in recent observations from the NASA/ESA Hubble Space Telescope, verified it with observations from the NASA Spitzer Space Telescope and measured its distance using the W. M. Keck Observatory in Hawaii.
This video shows a phenomenon known as gravitational lensing, which is used by astronomers to study very distant and very faint galaxies. Credit: NASA, ESA

The distant galaxy is visible through a cluster of galaxies called Abell 383, whose powerful gravity bends the rays of light almost like a magnifying glass. The chance alignment of the galaxy, the cluster and the Earth amplifies the light reaching us from this distant galaxy, allowing the astronomers to make detailed observations. Without this gravitational lens, the galaxy would have been too faint to be observed even with today's largest telescopes.
After spotting the galaxy in Hubble and Spitzer images, the team carried out spectroscopic observations with the Keck-II telescope in Hawaii. Spectroscopy is the technique of breaking up light into its component colours. By analysing these spectra, the team was able to make detailed measurements of the galaxy's redshift and to infer information about the properties of its component stars.
The galaxy's redshift is 6.027, which means we see it as it was when the Universe was around 950 million years old. This does not make it the most remote galaxy ever detected — several have been confirmed at redshifts of more than 8, and one has an estimated redshift of around 10 (heic1103), placing it 400 million years earlier. However, the newly discovered galaxy has dramatically different features from other distant galaxies that have been observed, which generally shine brightly with only young stars.
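As an illustrative aside (not part of the original release), the quoted conversion from redshift to cosmic age can be reproduced with the astropy cosmology module. The choice of the built-in Planck18 parameter set is this sketch's own assumption; the study used the cosmological parameters of its day, so the exact age differs slightly from the roughly 950 million years quoted above.

```python
# Sketch: convert a redshift to the age of the Universe at that epoch.
# Assumes astropy is installed; Planck18 is one of its built-in cosmologies
# (an assumption of this example, not the parameters used in the study).
from astropy.cosmology import Planck18 as cosmo

z = 6.027                  # redshift quoted in the article
age = cosmo.age(z)         # returned as an astropy Quantity in gigayears
print(f"Age of the Universe at z = {z}: {age:.2f}")  # roughly 0.93 Gyr
```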
"When we looked at the spectra, two things were clear," explains co-author Eiichi Egami. "The redshift placed it very early in cosmic history, as we expected. But the Spitzer infrared detection also indicated that the galaxy was made up of surprisingly old and relatively faint stars. This told us that the galaxy was made up of stars already nearly 750 million years old — pushing back the epoch of its formation to about 200 million years after the Big Bang, much further than we had expected."
Co-author Dan Stark continues: "Thanks to the amplification of the galaxy's light by the gravitational lens, we have some excellent quality data. Our work confirms some earlier observations that had hinted at the presence of old stars in early galaxies. This suggests that the first galaxies have been around for a lot longer than previously thought."
This illustration shows a phenomenon known as gravitational lensing, which is used by astronomers to study very distant and very faint galaxies. Note that the scale has been greatly exaggerated in this diagram. In reality, the distant galaxy is much further away and much smaller. Lensing clusters are clusters of elliptical galaxies whose gravity is so strong that they bend the light from the galaxies behind them. This produces distorted, and often multiple images of the background galaxy. But despite this distortion, gravitational lenses allow for greatly improved observations as the gravity bends the light’s path towards Hubble, amplifying the light and making otherwise invisible objects observable. A team of astronomers has used Abell 383, one such gravitational lens, to observe a distant galaxy whose light is resolved into two images by the cluster. The gravitational lensing effect means that astronomers have been able to determine fascinating insights about the galaxy that would not normally be visible even with Hubble or large ground-based telescopes. Among their discoveries is that the distant galaxy’s stars are very old, meaning that galaxies probably formed earlier in cosmic history than we first thought. Credit: NASA, ESA

The discovery has implications beyond the question of when galaxies first formed, and may help explain how the Universe became transparent to ultraviolet light in the first billion years after the Big Bang. In the early years of the cosmos, a diffuse fog of neutral hydrogen gas blocked ultraviolet light in the Universe. Some source of radiation must therefore have progressively ionised the diffuse gas, clearing the fog to make it transparent to ultraviolet rays as it is today — a process known as reionisation.
Astronomers believe that the radiation that powered this reionisation must have come from galaxies. But so far, nowhere near enough of them have been found to provide the necessary radiation. This discovery may help solve this enigma.
"It seems probable that there are in fact far more galaxies out there in the early Universe than we previously estimated — it's just that many galaxies are older and fainter, like the one we have just discovered," says co-author Jean-Paul Kneib. "If this unseen army of faint, elderly galaxies is indeed out there, they could provide the missing radiation that made the Universe transparent to ultraviolet light."
As of today, we can only discover these galaxies by observing through massive clusters that act as cosmic telescopes. In coming years, the NASA/ESA/CSA James Webb Space Telescope, scheduled for launch later this decade, will specialise in high resolution observations of distant, highly redshifted objects. It will therefore be in an ideal position to solve this mystery once and for all. |
BC: The End
FC: The Geometry Scrapbook By Evelyn Turner 2nd Period December 12th, 2012
1: Table of Contents | Chapter 1: Pages 2-3: Geometry Basics Pages 4-5: Angles and their measures Chapter 2: Pages 6-7: Angle and Segment Bisectors Pages 8-9: Complementary, Supplementary, and Vertical Angles Chapter 3: Pages 10-11: Parallel lines and angles formed by a transversal Pages 12-13: Perpendicular lines Chapter 4: Pages 14-15: Triangles Pages 16-17: Pythagorean Theorem and Distance Formula Chapter 5: Pages 18-19: Congruent Triangles Chapter 6: Pages 20-21: Polygons Extra Credit: Pages 22-23: Extra Credit
2: Chapter 1: Geometry Basics | A plane has two dimensions. It is represented by a shape that looks like a floor or wall. | An example of a plane is a wall or floor. | A real world example of a plane is a wall in my room. | A plane
3: A point has no dimension. It is represented by a small dot. | A line has one dimension. It extends without end in two directions. It is represented by a line with two arrowheads. | A line | A point | An example of a line would be the lines on the middle of the road. | An example of a point is the period at the end of a sentence.
4: Angles and their measures | - Acute angles measure between 0 and 90 degrees. - Right angles measure 90 degrees. - Obtuse angles measure between 90 and 180 degrees. - Straight angles measure 180 degrees.
5: Acute angle | Right angle | Obtuse angle | Straight angle | An example of an angle is on a modern GPS route.
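As an illustrative aside (not part of the scrapbook), the four definitions above translate directly into a small Python function; the function name and sample angles are this sketch's own.

```python
def classify_angle(degrees: float) -> str:
    """Classify an angle by its measure, following the definitions above."""
    if degrees == 90:
        return "right"
    if degrees == 180:
        return "straight"
    if 0 < degrees < 90:
        return "acute"
    if 90 < degrees < 180:
        return "obtuse"
    return "outside the 0-180 degree range"

print(classify_angle(45))    # acute
print(classify_angle(120))   # obtuse
```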
6: An angle bisector is a ray that divides an angle into two angles that are congruent. A segment bisector is a segment, ray, line, or plane that intersects a segment at its midpoint. | Chapter 2: Angle and Segment Bisectors
7: Angle Bisector | Segment Bisector | A real world example of an angle bisector is a tent.
8: Complementary, Supplementary, and Vertical Angles | Two angles are complementary angles if the sum of their measures is 90 degrees. | Two angles are supplementary angles if the sum of their measures is 180 degrees. | Complementary Angles | Supplementary Angles
9: Two angles are vertical angles if they are not adjacent and their sides are formed by two intersecting lines. | Vertical Angles | A real world example of vertical angles is crossed snow skis.
10: Chapter 3: Parallel Lines and Angles Formed By a Transversal | Two lines are parallel lines if they lie in the same plane and do not intersect. | There are four types of angles that can occur because of a transversal. | Two angles are corresponding angles if they occupy corresponding positions. | Parallel Lines | A real world example of parallel lines is the yellow stripes on the road. | Corresponding Angles
11: Two angles are alternate interior angles if they lie between the two lines on opposite sides of the transversal. | Two angles are alternate exterior angles if they lie outside the two lines on opposite sides of the transversal. | Two angles are same-side interior angles if they lie between the two lines on the same side of the transversal. | Alternate Interior Angles | Same-side Interior Angles | Alternate Exterior Angles
12: Perpendicular Lines | A real world example of perpendicular lines is the lines on a tennis court.
13: Two lines are perpendicular lines if they intersect to form a right angle.
14: Chapter 4: Triangles | A triangle is a figure formed by three segments joining three non-collinear points. | Triangle | A real world example of a triangle is Nabisco's logo. | An acute triangle has three acute angles. | An equiangular triangle has three congruent angles. | Acute Triangle | Equiangular Triangle
15: An equilateral triangle has three congruent sides. | An isosceles triangle has at least two congruent sides. | A scalene triangle has no congruent sides. | A right triangle has one right angle. | An obtuse triangle has one obtuse angle. | Right Triangle | Obtuse Triangle
16: Pythagorean Theorem and the Distance Formula | Pythagorean Theorem: In a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the legs. | Pythagorean Theorem | A real world example of the Pythagorean theorem is the height of a fire-fighter's ladder against a building.
17: Distance Formula: If A(x1, y1) and B(x2, y2) are points in a coordinate plane, then the distance between A and B is AB = √((x2 − x1)² + (y2 − y1)²). | Distance Formula
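As an illustrative aside (not part of the scrapbook), the two formulas above translate directly into code, which makes plain that the distance formula is just the Pythagorean Theorem applied to coordinates. The 3-4-5 example values are this sketch's own.

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Pythagorean Theorem: c^2 = a^2 + b^2, so c = sqrt(a^2 + b^2)."""
    return math.sqrt(a**2 + b**2)

def distance(x1: float, y1: float, x2: float, y2: float) -> float:
    """Distance Formula: the legs of the right triangle are the horizontal
    and vertical gaps between A(x1, y1) and B(x2, y2)."""
    return math.hypot(x2 - x1, y2 - y1)

print(hypotenuse(3, 4))      # 5.0 -- the classic 3-4-5 right triangle
print(distance(1, 2, 4, 6))  # 5.0 -- the same triangle placed on the plane
```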
18: Chapter 5: Proving Triangles are Congruent | ASA | Angle Side Angle: If two angles and the included side of one triangle are congruent to two angles and the included side of a second triangle, then the two triangles are congruent. | SSS | Side Side Side: If three sides of one triangle are congruent to three sides of a second triangle, then the two triangles are congruent. | SAS | Side Angle Side: If two sides and the included angle of one triangle are congruent to two sides and the included angle of a second triangle, then the two triangles are congruent.
19: - Figures are congruent if all pairs of corresponding angles are congruent and all pairs of corresponding sides are congruent. | AAS | Angle Angle Side: If two angles and a non-included side of one triangle are congruent to two angles and the corresponding non-included side of a second triangle, then the two triangles are congruent. | HL | Hypotenuse-Leg: If the hypotenuse and a leg of a right triangle are congruent to the hypotenuse and a leg of a second right triangle, then the two triangles are congruent. | A real world example of congruent triangles is the faces of the Egyptian pyramids.
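As an illustrative aside (not part of the scrapbook), the SSS postulate suggests a simple computational test: triangles given by their side lengths are congruent exactly when the sorted side lists match. The function and sample triangles are this sketch's own.

```python
def congruent_sss(sides1, sides2, tol=1e-9):
    """SSS test: congruent if the three side lengths match, in any order."""
    return all(abs(a - b) <= tol
               for a, b in zip(sorted(sides1), sorted(sides2)))

print(congruent_sss((3, 4, 5), (5, 3, 4)))  # True  -- same triangle, sides reordered
print(congruent_sss((3, 4, 5), (3, 4, 6)))  # False -- third side differs
```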
20: Chapter 6: Polygons | A polygon is a plane figure that is formed by three or more segments called sides. A parallelogram is a quadrilateral with both pairs of opposite sides parallel. | Polygon | Parallelogram | A real world example of a parallelogram is a window pane.
21: A rhombus is a parallelogram with four congruent sides. | A rectangle is a parallelogram with four right angles. | A square is a parallelogram with four congruent sides and four congruent angles. | Square | Rhombus
22: Extra Credit: | Coplanar Points are points that lie on the same plane. | Coplanar Lines are lines that lie on the same plane.
23: A conjecture is an unproven statement that is based on a pattern or observation. | Postulates are statements that are accepted without further justification. | Collinear Points are points that lie on the same line. |
Propositions and their truth values are two elemental ingredients of logical reasoning.
One type of thought process is an argument. An argument is a series of statements; some of these statements are premises: assertions, reasons, claims. From these premises is derived a conclusion. The argument claims that, because the premises are true, the conclusion is true. If the conclusion does indeed logically follow from the premises, the argument is valid; if the conclusion does not logically follow from the premises, the argument is invalid. Note that the words “valid” and “invalid” apply to conclusions or to arguments, not to premises. When we refer to premises, we describe them as being true or untrue.

Whenever we want to evaluate an argument, we should examine both the premises and the conclusion. The premises, i.e., the evidence, should be thorough and accurate; the conclusion should clearly and incontrovertibly derive from that evidence. When an argument is unsuccessful, it has probably gone wrong in one of the following areas:
1. The evidence has not been thorough; contradictory evidence has been overlooked or ignored.
2. The evidence has not been accurate; false or unsubstantiated or misleading statements have been claimed as fact.
3. The conclusion has not clearly and incontrovertibly come from the evidence; the relationship between evidence and conclusion has not been a firm one.
When one or more of these phenomena occur in an argument, that argument is said to be fallacious. The argument claims to have done something that it in fact has not done. Another way to describe arguments is to determine whether they are sound or unsound. To be sound, (1) the premises must be true and (2) the conclusion must logically follow from these premises. If either of these conditions is violated, the argument is unsound.

— Robert J. Gula, Nonsense: Red Herrings, Straw Men and Sacred Cows: How We Abuse Logic in Our Everyday Language (2007)
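Gula is writing about informal arguments, but for the propositional core the notion of validity can be made mechanical: an argument is valid exactly when no assignment of truth values makes all the premises true and the conclusion false. The sketch below is an illustrative aside, not from Gula's text; the function and example arguments are its own.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Valid iff every truth assignment that makes all premises true
    also makes the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False   # counterexample: true premises, false conclusion
    return True

# Modus ponens ("P implies Q; P; therefore Q") is valid:
print(is_valid([lambda e: (not e["P"]) or e["Q"], lambda e: e["P"]],
               lambda e: e["Q"], ["P", "Q"]))   # True

# Affirming the consequent ("P implies Q; Q; therefore P") is fallacious:
print(is_valid([lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]],
               lambda e: e["P"], ["P", "Q"]))   # False
```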
Using Right Triangles to Make the Unit Circle
Lesson 3 of 9
Objective: SWBAT elaborate on the connections between right triangles and the equation of a circle, laying groundwork for the extension of the definitions of trig ratios to the unit circle on the coordinate plane.
The previous class ended with an exit slip. In reading those exit slips, I saw how many students were able to write the equation of the unit circle. Enough of them need review that I’ll start with that today. Another question on that exit slip was about connections between the Pythagorean Theorem and the equation of a circle on the coordinate plane. At this point, most students understand that these have the same structure. I want to develop their ability to use these connections to develop new ideas: looking for and making use of structure is Mathematical Practice #7; seeing connections between structures is key to making sense of problems (MP1), abstract reasoning (MP2), and modeling with mathematics (MP4).
Today’s opener consists of four problems that will pick up where the exit slip left off and that will set the stage for what we’re doing today, which is to begin to see how the definitions of the trig functions can be extended using the unit circle (CCS F-TF.A.2). Please take a look at the first eight slides of today’s Prezi to get an idea of how these problems will guide the lesson. I give students a few minutes to consider the four problems for themselves, before moving pretty quickly through a class discussion of each problem. The purpose of these problems is as much to guide students through some notes at the start of class as to engage them in rigorous problem solving.
With the first problem, I make sure to explicitly state the equation of the unit circle for anyone who didn’t know it, and to get students to say why the equation is what it is. The second problem is an example of a “unit right triangle” that students were working with in the previous class, and I direct the class conversation toward seeing how quickly this problem can be done. Part of understanding the unit circle is seeing the number 1 as a tool, and using it strategically (MP5): students must recognize that when a right triangle has a hypotenuse with length 1, the task of finding its side lengths is simple. This understanding is prerequisite to a strong grasp of the structure (MP7) of the unit circle. Students will have a little time to finish the Unit Right Triangles handout during today’s work time.
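A minimal numerical sketch of that point (the 15-degree angle is just a sample value of this sketch's own): when the hypotenuse is 1, the legs come straight from the trig ratios, and the resulting point lies on the unit circle.

```python
import math

theta = math.radians(15)                 # any acute angle works; 15 degrees as a sample
x, y = math.cos(theta), math.sin(theta)  # legs of a right triangle with hypotenuse 1
print(x, y)                              # the two side lengths, with no further work
print(x**2 + y**2)                       # 1.0 (up to floating point): (x, y) is on the unit circle
```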
The third question is about lattice points on the unit circle. Most students are comfortable with the idea that there are four, and with the idea that all other points will have coordinate values between -1 and 1. As an extension question, I ask if anyone can think of any rational numbers that fit into the Pythagorean Theorem if c = 1. This is the sort of problem that I just leave out there for now; we’ll return to it in an upcoming class.
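For the record, the extension question has a tidy answer: any Pythagorean triple a² + b² = c², scaled down by c, lands on the unit circle, since (a/c)² + (b/c)² = 1. A brute-force sketch of my own (illustrative only, not a lesson resource):

```python
from fractions import Fraction

# Every Pythagorean triple a^2 + b^2 = c^2 gives a rational point (a/c, b/c)
# on the unit circle; Fraction reduces duplicates like 6/10 -> 3/5.
points = set()
for c in range(2, 30):
    for a in range(1, c):
        for b in range(a, c):
            if a * a + b * b == c * c:
                points.add((Fraction(a, c), Fraction(b, c)))

for x, y in sorted(points):
    print(f"({x}, {y})")  # includes (3/5, 4/5), (5/13, 12/13), (8/17, 15/17), ...
```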
Impossible Right Triangles
Finally, the most fun problem of the opener is #4, in which students are asked to sketch a right triangle with c = 1 and another angle measuring 120 degrees. I act very nonchalant when I put this problem up, and I read it in my most serious, straightforward voice. I’m looking at faces to see who is confused by this. I give students a little time to mull it over. There are two ways I have seen this go. The one I hope for is that students will just take over the discussion here, and someone will try to sketch this triangle on the board, and there will be some spirited argument about whether or not this can really work. The other possibility is that students will just be too confused to know what to do, and they will sit there assuming it’s their fault that this is confusing. If I see the latter happening, I’ll be quick to jump in and say, “if you’re confused right now, then that’s because you’re really paying attention, and you’re right!” before leading a conversation about why this is impossible.
Either way, it’s fun to see if anyone comes up with the idea of a triangle with the angles 90, 120 and -30. Some classes get there on their own, others need a little guidance. When I ask students to see if the relationship cos A = sin B holds for any two angles A and B that add up to 90 degrees, students check it on their calculators and their curiosity spikes.
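That calculator check is easy to reproduce numerically; here is a quick illustrative sketch of the cofunction relationship (not a lesson resource):

```python
import math

# Spot-check that cos(A) = sin(B) whenever A + B = 90 degrees.
for a in [15, 30, 37.5, 62, 89]:
    b = 90 - a
    assert math.isclose(math.cos(math.radians(a)), math.sin(math.radians(b)))
print("cos(A) = sin(90° - A) held for every angle tested")
```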
In some classes, we also talk about the degenerate right triangle, with angles of 0, 90 and 90 degrees. I like to draw a line on the board and emphatically point to its “three sides”: the hypotenuse, the leg with the same length, and the leg with length zero.
Learning Target Review: UC1 and UC2
We look at both learning targets for Unit 3. I briefly review the first one (Unit Circle 1), asking students to identify the key word in this SLT. It’s “why” of course! I tell students that they should be able to explain why radian measure and arc length on the unit circle are the same thing in 140 characters or less (tomorrow’s class opens with a check-in quiz along those lines). I show them Figure #1, and remind them that this is a tool that should be complete.
Next I introduce the second learning target for the first time:
- Unit Circle 2: I can use the unit circle to extend the definitions of trigonometric functions to all real numbers.
I put it up on the screen and ask students to identify the key words in this SLT. Students notice that we’re still talking about the unit circle, and that this learning target is once again about the trig ratios. (We notice, with some curiosity, that this SLT actually uses the word “functions” instead of “ratios,” and I tell students to consider this a coming attraction.)
When a student points out the word “extend,” I try to hook the rest of the class on that. I ask what it means to extend something, and we get at the idea of making something bigger, so then I ask why the definitions of the trig ratios would have to be extended. We look back to the right triangles from today’s opener, and how it was possible to have an acute angle like 15 degrees in a right triangle, but not an angle measuring 120 degrees. I ask how we have defined sine and cosine, and I note that as soon as someone says “opposite over hypotenuse,” they’re assuming the use of a right triangle. Students are pretty quick to say that we can’t have an acute angle greater than 89 degrees in a right triangle. I like to note that, technically, you could have an 89.999 degree angle, which gets students thinking a little more precisely. We all agree that the domain of the trigonometric functions has been 0 < x < 90. So that’s why we’re going to need to extend these definitions, and we’re going to use the unit circle to do it.
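Where this is headed can be stated compactly: define cos θ and sin θ as the x- and y-coordinates of the point where the angle’s terminal side meets the unit circle, which makes sense for any real angle. A small sketch of that extended definition (function name is mine, purely illustrative):

```python
import math

def unit_circle_point(angle_deg):
    """Extended definition: (cos θ, sin θ) is the point where the terminal
    side of θ meets the unit circle, defined for any real angle."""
    theta = math.radians(angle_deg)
    return math.cos(theta), math.sin(theta)

# 120° cannot sit inside a right triangle, but its cosine and sine are
# perfectly well-defined as unit-circle coordinates:
x, y = unit_circle_point(120)
print(round(x, 4), round(y, 4))  # -0.5 0.866
```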
Students are able to move at their own pace today. Differentiation is built into today’s work time because each task leads into the next, and my role is to provide support wherever students need it. Students must finish their Unit Right Triangles handout (see description in previous lesson) to get the instructions for Figure #2a. When they finish the first part of Figure #2a, they get a laptop and learn about reference angles by completing exercises on Delta Math.
In order to start to extend the definitions of the trigonometric functions, students will plot their “Unit Right Triangles” on the coordinate plane. Figure #2a is where students will plot these triangles. There is a staggered start to their work on this, because I make sure that students have completed the Unit Right Triangles handout before showing them how to get started on Figure #2a.
See the narrative video for this section to see how I introduce this task to students, and how I demonstrate the self-checking nature of this exercise.
Figure #2a follows the same format as Figure #1: it doesn’t yet have a title and it has space at the bottom for students to write the learning target and to answer the question, “What’s happening here?” I’ve titled it Figure #2a because Figure #2b is the same thing, with a unit circle already graphed. I keep a few copies of Figure #2b around as an accommodation for any students who really don’t get the idea that all the unit right triangles will make a circle.
If all (or at least a majority of) students are ready at the same time, you can show them the instructions as a whole class. I almost always end up sitting down next to a student or two and demonstrating the process, then I rely on my early starters to explain the process to their classmates. Think of it as playing telephone with the instructions - this is a great structure because it gives kids a chance to shore up their understanding by explaining what they know to someone else.
When students are done plotting their unit right triangles in Figure #2a, they will have some time to grab a laptop and work on Delta Math. (Here is a screen shot of the practice modules on today’s Delta Math assignment.) Note that at this point, students have only plotted triangles in the first quadrant of Figure #2a. Before we extend these plots to the other three quadrants, I want students to spend some time getting familiar with angles outside the domain of 0 to 90 degrees.
On this assignment, students will work with central angles and reference angles - ideas we haven’t explicitly studied yet. This gives them a chance to develop this knowledge independently before we apply it in the next lesson.
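For readers who want the rule the practice modules are driving at, here is one common formulation of the reference angle, sketched as a hypothetical helper (not taken from Delta Math):

```python
def reference_angle(angle_deg):
    """The acute angle between the terminal side of the given central
    angle and the x-axis, computed by quadrant."""
    a = angle_deg % 360          # reduce to a central angle in [0, 360)
    if a <= 90:
        return a
    if a <= 180:
        return 180 - a
    if a <= 270:
        return a - 180
    return 360 - a

for angle in [30, 150, 210, 330, -45]:
    print(angle, "->", reference_angle(angle))  # 30, 30, 30, 30, 45
```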
This exit task is a repeat of what students did with Figure #1 two classes ago. Figure #2a needs a title in the blank at the top of the page that will help someone understand what's going on here. I also ask students to fill in the bottom of the page by writing the learning target in their own words, and answering the question, "What's happening here?" I'll collect this from students as they leave.
When I debriefed student explanations of what's happening here, I pointed out that a lot of them wrote very literally what they had been doing on Figure #1 (i.e., "I'm converting from degrees to radians," or "We're labeling a circle every 15 degrees."), and I asked them to think of this question more as, "What's going on behind the scenes here?" or "What patterns are happening here?" I'll be looking to see if they take their responses to that next level.
Of course, it's good information to know how much each student has completed on this figure, and I'll look for that, but I'm not grading them. I'm most curious to see their titles, their wording of the SLT, and their description of what's happening here. I'll take some pictures of their responses and use them to help form the lecture notes for the next class (see resulting notes in the Prezi for the next lesson).
In chemistry, the standard state of a material (pure substance, mixture or solution) is a reference point used to calculate its properties under different conditions. In principle, the choice of standard state is arbitrary, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a conventional set of standard states for general use. IUPAC recommends using a standard pressure p⦵ = 10⁵ Pa. Strictly speaking, temperature is not part of the definition of a standard state. For example, as discussed below, the standard state of a gas is conventionally chosen to be unit pressure (usually in bar) ideal gas, regardless of the temperature. However, most tables of thermodynamic quantities are compiled at specific temperatures, most commonly 298.15 K (25.00 °C; 77.00 °F) or, somewhat less commonly, 273.15 K (0.00 °C; 32.00 °F).
For a given material or substance, the standard state is the reference state for the material’s thermodynamic state properties such as enthalpy, entropy, Gibbs free energy, and for many other material standards. The standard enthalpy change of formation for an element in its standard state is zero, and this convention allows a wide range of other thermodynamic quantities to be calculated and tabulated. The standard state of a substance does not have to exist in nature: for example, it is possible to calculate values for steam at 298.15 K and 10⁵ Pa, although steam does not exist (as a gas) under these conditions. The advantage of this practice is that tables of thermodynamic properties prepared in this way are self-consistent.
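To make that payoff concrete, here is a minimal sketch of how tabulated formation enthalpies combine into a reaction enthalpy via Hess's law. The numbers are commonly quoted textbook values at 298.15 K, included for illustration only:

```python
# Standard enthalpies of formation, kJ/mol at 298.15 K (illustrative
# textbook values); elements in their standard states, like O2(g), are
# zero by convention.
dHf = {
    "CH4(g)": -74.8,
    "O2(g)": 0.0,
    "CO2(g)": -393.5,
    "H2O(l)": -285.8,
}

# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)
reactants = [("CH4(g)", 1), ("O2(g)", 2)]
products = [("CO2(g)", 1), ("H2O(l)", 2)]

dH_rxn = (sum(n * dHf[s] for s, n in products)
          - sum(n * dHf[s] for s, n in reactants))
print(f"standard reaction enthalpy: {dH_rxn:.1f} kJ/mol")  # -890.3
```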
Conventional standard states
Many standard states are non-physical states, often referred to as “hypothetical states”. Nevertheless, their thermodynamic properties are well-defined, usually by an extrapolation from some limiting condition, such as zero pressure or zero concentration, to a specified condition (usually unit concentration or pressure) using an ideal extrapolating function, such as ideal solution or ideal gas behavior, or by empirical measurements.
The standard state for a gas is the hypothetical state it would have as a pure substance obeying the ideal gas equation at standard pressure (10⁵ Pa, or 1 bar). No real gas has perfectly ideal behavior, but this definition of the standard state allows corrections for non-ideality to be made consistently for all the different gases.
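One practical consequence of the convention: for an ideal gas, moving from the standard pressure p⦵ to any other pressure p shifts the molar Gibbs energy by RT ln(p/p⦵). A short sketch under that ideal-gas assumption (helper name is mine):

```python
import math

R = 8.314462618  # gas constant, J/(mol·K)

def ideal_gas_gibbs_shift(p_pa, T=298.15, p_std=1e5):
    """Molar Gibbs energy change for taking an ideal gas from the
    standard pressure p° = 10^5 Pa to pressure p at temperature T."""
    return R * T * math.log(p_pa / p_std)

# From 1 bar to 2 bar at 298.15 K:
print(f"{ideal_gas_gibbs_shift(2e5):.0f} J/mol")  # ~ +1718 J/mol
```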
Liquids and solids
The standard state for liquids and solids is simply the state of the pure substance subjected to a total pressure of 10⁵ Pa. For most elements, the reference point of ΔHf⦵ = 0 is defined for the most stable allotrope of the element, such as graphite in the case of carbon, and the β-phase (white tin) in the case of tin. An exception is white phosphorus, the most common allotrope of phosphorus, which is defined as the standard state despite the fact that it is only metastable.
For a substance in solution (solute), the standard state is the hypothetical state it would have at the standard state molality or amount concentration but exhibiting infinite-dilution behavior. The reason for this unusual definition is that the behavior of a solute at the limit of infinite dilution is described by equations which are very similar to the equations for ideal gases. Hence taking infinite-dilution behavior to be the standard state allows corrections for non-ideality to be made consistently for all the different solutes. Standard state molality is 1 mol kg⁻¹, while standard state amount concentration is 1 mol dm⁻³.
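The same logarithmic form carries over to solutes on the molality scale, μ = μ⦵ + RT ln(γm/m⦵), where the activity coefficient γ tends to 1 at infinite dilution; that limit is exactly the behavior the standard state is built on. A sketch under the ideal-dilute assumption γ = 1 (helper name is mine):

```python
import math

R = 8.314462618  # gas constant, J/(mol·K)

def solute_mu_shift(m, gamma=1.0, T=298.15, m_std=1.0):
    """Chemical-potential offset from the solute standard state at
    molality m (mol/kg): RT ln(γ·m/m°). γ = 1 is the infinite-dilution
    behavior that defines the standard state."""
    return R * T * math.log(gamma * m / m_std)

print(f"{solute_mu_shift(0.01):.0f} J/mol")  # ~ -11416 J/mol below μ°
```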
At the time of development in the nineteenth century, the superscript Plimsoll symbol (⦵) was adopted to indicate the non-zero nature of the standard state. IUPAC recommends in the 3rd edition of Quantities, Units and Symbols in Physical Chemistry a symbol which seems to be a degree sign (°) as a substitute for the plimsoll mark. In the very same publication the plimsoll mark appears to be constructed by combining a horizontal stroke with a degree sign. A range of similar symbols are used in the literature: a stroked lowercase letter O (o), a superscript zero (0) or a circle with a horizontal bar either where the bar extends beyond the boundaries of the circle (U+29B5 ⦵ CIRCLE WITH HORIZONTAL BAR) or is enclosed by the circle, dividing the circle in half (U+2296 ⊖ CIRCLED MINUS). When compared to the plimsoll symbol used on vessels, the horizontal bar should extend beyond the boundaries of the circle; care should be taken not to confuse the symbol with the Greek letter theta (uppercase Θ or ϴ, lowercase θ).
- International Union of Pure and Applied Chemistry (1982). “Notation for states and processes, significance of the word standard in chemical thermodynamics, and remarks on commonly tabulated forms of thermodynamic functions” (PDF). Pure Appl. Chem. 54 (6): 1239–50. doi:10.1351/pac198254061239.
- IUPAC–IUB–IUPAB Interunion Commission of Biothermodynamics (1976). “Recommendations for measurement and presentation of biochemical equilibrium data” (PDF). J. Biol. Chem. 251 (22): 6879–85.
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the “Gold Book”) (1997). Online corrected version: (2006–) “standard state“. doi:10.1351/goldbook.S05925
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the “Gold Book”) (1997). Online corrected version: (2006–) “standard pressure“. doi:10.1351/goldbook.S05921
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the “Gold Book”) (1997). Online corrected version: (2006–) “standard conditions for gases“. doi:10.1351/goldbook.S05910
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the “Gold Book”) (1997). Online corrected version: (2006–) “standard solution“. doi:10.1351/goldbook.S05924
- Housecroft C.E. and Sharpe A.G., Inorganic Chemistry (2nd ed., Pearson Prentice-Hall 2005) p. 392
- Prigogine, I. & Defay, R. (1954) Chemical thermodynamics, p. xxiv
- E.R. Cohen, T. Cvitas, J.G. Frey, B. Holmström, K. Kuchitsu, R. Marquardt, I. Mills, F. Pavese, M. Quack, J. Stohner, H.L. Strauss, M. Takami, and A.J. Thor, “Quantities, Units and Symbols in Physical Chemistry”, IUPAC Green Book, 3rd Edition, 2nd Printing, IUPAC & RSC Publishing, Cambridge (2008), p. 60
- IUPAC (1993) Quantities, units and symbols in physical chemistry (also known as The Green Book) (2nd ed.), p. 51
- Narayanan, K. V. (2001) A Textbook of Chemical Engineering Thermodynamics (8th printing, 2006), p. 63
- “Miscellaneous Mathematical Symbols-B” (PDF). Unicode. 2013. Retrieved 2013-12-19.
- Mills, I. M. (1989) “The choice of names and symbols for quantities in chemistry”. Journal of Chemical Education (vol. 66, number 11, November 1989 p. 887–889) [Note that Mills (who was involved in producing a revision of Quantities, units and symbols in physical chemistry) refers to the symbol ⊖ (Unicode 2296 “Circled minus” as displayed in https://www.unicode.org/charts/PDF/U2980.pdf) as a plimsoll symbol although it lacks an extending bar in the printed article. Mills also says that a superscript zero is an equal alternative to indicate “standard state”, though a degree symbol (°) is used in the same article]
Repetitive Sequences, Nucleic Acid: Sequences of DNA or RNA that occur in multiple copies. There are several types: INTERSPERSED REPETITIVE SEQUENCES are copies of transposable elements (DNA TRANSPOSABLE ELEMENTS or RETROELEMENTS) dispersed throughout the genome. TERMINAL REPEAT SEQUENCES flank both ends of another sequence, for example, the long terminal repeats (LTRs) on RETROVIRUSES. Variations may be direct repeats, those occurring in the same direction, or inverted repeats, those opposite to each other in direction. TANDEM REPEAT SEQUENCES are copies which lie adjacent to each other, direct or inverted (INVERTED REPEAT SEQUENCES).Peptide Nucleic Acids: DNA analogs containing neutral amide backbone linkages composed of aminoethyl glycine units instead of the usual phosphodiester linkage of deoxyribose groups. Peptide nucleic acids have high biological stability and higher affinity for complementary DNA or RNA sequences than analogous DNA oligomers.Base Sequence: The sequence of PURINES and PYRIMIDINES in nucleic acids and polynucleotides. It is also called nucleotide sequence.Nucleic Acid Hybridization: Widely used technique which exploits the ability of complementary sequences in single-stranded DNAs or RNAs to pair with each other to form a double helix. Hybridization can take place between two complementary DNA sequences, between a single-stranded DNA and a complementary RNA, or between two RNA sequences. The technique is used to detect and isolate specific sequences, measure homology, or define other characteristics of one or both strands. (Kendrew, Encyclopedia of Molecular Biology, 1994, p503)DNA: A deoxyribonucleotide polymer that is the primary genetic material of all cells. Eukaryotic and prokaryotic organisms normally contain DNA in a double-stranded state, yet several important biological processes transiently involve single-stranded regions. DNA, which consists of a polysugar-phosphate backbone possessing projections of purines (adenine and guanine) and pyrimidines (thymine and cytosine), forms a double helix that is held together by hydrogen bonds between these purines and pyrimidines (adenine to thymine and guanine to cytosine).Molecular Sequence Data: Descriptions of specific amino acid, carbohydrate, or nucleotide sequences which have appeared in the published literature and/or are deposited in and maintained by databanks such as GENBANK, European Molecular Biology Laboratory (EMBL), National Biomedical Research Foundation (NBRF), or other sequence repositories.Nucleic Acid Renaturation: The reformation of all, or part of, the native conformation of a nucleic acid molecule after the molecule has undergone denaturation.Nucleic Acid Conformation: The spatial arrangement of the atoms of a nucleic acid or polynucleotide that results in its characteristic 3-dimensional shape.Interspersed Repetitive Sequences: Copies of transposable elements interspersed throughout the genome, some of which are still active and often referred to as "jumping genes". There are two classes of interspersed repetitive elements. Class I elements (or RETROELEMENTS - such as retrotransposons, retroviruses, LONG INTERSPERSED NUCLEOTIDE ELEMENTS and SHORT INTERSPERSED NUCLEOTIDE ELEMENTS) transpose via reverse transcription of an RNA intermediate.
Class II elements (or DNA TRANSPOSABLE ELEMENTS - such as transposons, Tn elements, insertion sequence elements and mobile gene cassettes of bacterial integrons) transpose directly from one site in the DNA to another.Nucleic Acid Denaturation: Disruption of the secondary structure of nucleic acids by heat, extreme pH or chemical treatment. Double strand DNA is "melted" by dissociation of the non-covalent hydrogen bonds and hydrophobic interactions. Denatured DNA appears to be a single-stranded flexible structure. The effects of denaturation on RNA are similar though less pronounced and largely reversible.Nucleic Acid Probes: Nucleic acid which complements a specific mRNA or DNA molecule, or fragment thereof; used for hybridization studies in order to identify microorganisms and for genetic studies.Cloning, Molecular: The insertion of recombinant DNA molecules from prokaryotic and/or eukaryotic sources into a replicating vehicle, such as a plasmid or virus vector, and the introduction of the resultant hybrid molecules into recipient cells without altering the viability of those cells.Sequence Analysis, DNA: A multistage process that includes cloning, physical mapping, subcloning, determination of the DNA SEQUENCE, and information analysis.Sequence Homology, Nucleic Acid: The sequential correspondence of nucleotides in one nucleic acid molecule with those of another nucleic acid molecule. Sequence homology is an indication of the genetic relatedness of different organisms and gene function.RNA: A polynucleotide consisting essentially of chains with a repeating backbone of phosphate and ribose units to which nitrogenous bases are attached. RNA is unique among biological macromolecules in that it can encode genetic information, serve as an abundant structural component of cells, and also possesses catalytic activity. (Rieger et al., Glossary of Genetics: Classical and Molecular, 5th ed)Nucleic Acid Amplification Techniques: Laboratory techniques that involve the in-vitro synthesis of many copies of DNA or RNA from one original template.Long Interspersed Nucleotide Elements: Highly repeated sequences, 6K-8K base pairs in length, which contain RNA polymerase II promoters. They also have an open reading frame that is related to the reverse transcriptase of retroviruses but they do not contain LTRs (long terminal repeats). Copies of the LINE 1 (L1) family form about 15% of the human genome. The jockey elements of Drosophila are LINEs.Alu Elements: The Alu sequence family (named for the restriction endonuclease cleavage enzyme Alu I) is the most highly repeated interspersed repeat element in humans (over a million copies). It is derived from the 7SL RNA component of the SIGNAL RECOGNITION PARTICLE and contains an RNA polymerase III promoter. Transposition of this element into coding and regulatory regions of genes is responsible for many heritable diseases.Genome, Plant: The genetic complement of a plant (PLANTS) as represented in its DNA.DNA Transposable Elements: Discrete segments of DNA which can excise and reintegrate to another site in the genome. Most are inactive, i.e., have not been found to exist outside the integrated state. 
DNA transposable elements include bacterial IS (insertion sequence) elements, Tn elements, the maize controlling elements Ac and Ds, Drosophila P, gypsy, and pogo elements, the human Tigger elements and the Tc and mariner elements which are found throughout the animal kingdom.DNA, Plant: Deoxyribonucleic acid that makes up the genetic material of plants.Polymerase Chain Reaction: In vitro method for producing large amounts of specific DNA or RNA fragments of defined length and sequence from small amounts of short oligonucleotide flanking sequences (primers). The essential steps include thermal denaturation of the double-stranded target molecules, annealing of the primers to their complementary sequences, and extension of the annealed primers by enzymatic synthesis with DNA polymerase. The reaction is efficient, specific, and extremely sensitive. Uses for the reaction include disease diagnosis, detection of difficult-to-isolate pathogens, mutation analysis, genetic testing, DNA sequencing, and analyzing evolutionary relationships.Repetitive Sequences, Amino Acid: A sequential pattern of amino acids occurring more than once in the same protein sequence.DNA, Satellite: Highly repetitive DNA sequences found in HETEROCHROMATIN, mainly near centromeres. They are composed of simple sequences (very short) (see MINISATELLITE REPEATS) repeated in tandem many times to form large blocks of sequence. Additionally, following the accumulation of mutations, these blocks of repeats have been repeated in tandem themselves. The degree of repetition is on the order of 1000 to 10 million at each locus. Loci are few, usually one or two per chromosome. They were called satellites since in density gradients, they often sediment as distinct, satellite bands separate from the bulk of genomic DNA owing to a distinct BASE COMPOSITION.Blotting, Southern: A method (first developed by E.M. Southern) for detection of DNA that has been electrophoretically separated and immobilized by blotting on nitrocellulose or other type of paper or nylon membrane followed by hybridization with labeled NUCLEIC ACID PROBES.Amino Acid Sequence: The order of amino acids as they occur in a polypeptide chain. This is referred to as the primary structure of proteins. It is of fundamental importance in determining PROTEIN CONFORMATION.Chromosomes, Artificial, Bacterial: DNA constructs that are composed of, at least, a REPLICATION ORIGIN, for successful replication, propagation to and maintenance as an extra chromosome in bacteria. In addition, they can carry large amounts (about 200 kilobases) of other sequence for a variety of bioengineering purposes.DNA Restriction Enzymes: Enzymes that are part of the restriction-modification systems. They catalyze the endonucleolytic cleavage of DNA sequences which lack the species-specific methylation pattern in the host cell's DNA. Cleavage yields random or specific double-stranded fragments with terminal 5'-phosphates. The function of restriction enzymes is to destroy any foreign DNA that invades the host cell. Most have been studied in bacterial systems, but a few have been found in eukaryotic organisms. They are also used as tools for the systematic dissection and mapping of chromosomes, in the determination of base sequences of DNAs, and have made it possible to splice and recombine genes from one organism into the genome of another. 
EC 3.1.21.-.DNA Probes: Species- or subspecies-specific DNA (including COMPLEMENTARY DNA; conserved genes, whole chromosomes, or whole genomes) used in hybridization studies in order to identify microorganisms, to measure DNA-DNA homologies, to group subspecies, etc. The DNA probe hybridizes with a specific mRNA, if present. Conventional techniques used for testing for the hybridization product include dot blot assays, Southern blot assays, and DNA:RNA hybrid-specific antibody tests. Conventional labels for the DNA probe include the radioisotope labels 32P and 125I and the chemical label biotin. The use of DNA probes provides a specific, sensitive, rapid, and inexpensive replacement for cell culture techniques for diagnosing infections.Chromosome Mapping: Any method used for determining the location of and relative distances between genes on a chromosome.Centromere: The clear constricted portion of the chromosome at which the chromatids are joined and by which the chromosome is attached to the spindle during cell division.Dendrobium: A plant genus of the family ORCHIDACEAE that contains dihydroayapin (COUMARINS) and phenanthraquinones.Short Interspersed Nucleotide Elements: Highly repeated sequences, 100-300 bases long, which contain RNA polymerase III promoters. The primate Alu (ALU ELEMENTS) and the rodent B1 SINEs are derived from 7SL RNA, the RNA component of the signal recognition particle. Most other SINEs are derived from tRNAs including the MIRs (mammalian-wide interspersed repeats).Primed In Situ Labeling: A technique that labels specific sequences in whole chromosomes by in situ DNA chain elongation or PCR (polymerase chain reaction).Tandem Repeat Sequences: Copies of DNA sequences which lie adjacent to each other in the same orientation (direct tandem repeats) or in the opposite direction to each other (INVERTED TANDEM REPEATS).In Situ Hybridization, Fluorescence: A type of IN SITU HYBRIDIZATION in which target sequences are stained with fluorescent dye so their location and size can be determined using fluorescence microscopy. This staining is sufficiently distinct that the hybridization signal can be seen both in metaphase spreads and in interphase nuclei.DNA, Bacterial: Deoxyribonucleic acid that makes up the genetic material of bacteria.Physical Chromosome Mapping: Mapping of the linear order of genes on a chromosome with units indicating their distances by using methods other than genetic recombination. These methods include nucleotide sequencing, overlapping deletions in polytene chromosomes, and electron micrography of heteroduplex DNA. (From King & Stansfield, A Dictionary of Genetics, 5th ed)Species Specificity: The restriction of a characteristic behavior, anatomical structure or physical system, such as immune response; metabolic response, or gene or gene variant to the members of one species. It refers to that property which differentiates one species from another but it is also used for phylogenetic levels higher or lower than the species.Sequence Alignment: The arrangement of two or more amino acid or base sequences from an organism or organisms in such a way as to align areas of the sequences sharing common properties. The degree of relatedness or homology between the sequences is predicted computationally or statistically based on weights assigned to the elements aligned between the sequences. This in turn can serve as a potential indicator of the genetic relatedness between the organisms.Restriction Mapping: Use of restriction endonucleases to analyze and generate a physical map of genomes, genes, or other segments of DNA.Genes: A category of nucleic acid sequences that function as units of heredity and which code for the basic instructions for the development, reproduction, and maintenance of organisms.Heterochromatin: The portion of chromosome material that remains condensed and is transcriptionally inactive during INTERPHASE.Oligonucleotides: Polymers made up of a few (2-20) nucleotides. In molecular genetics, they refer to a short sequence synthesized to match a region where a mutation is known to occur, and then used as a probe (OLIGONUCLEOTIDE PROBES). (Dorland, 28th ed)Deoxyribonuclease BamHI: One of the Type II site-specific deoxyribonucleases (EC 3.1.21.4). It recognizes and cleaves the sequence G/GATCC at the slash. BamHI is from Bacillus amyloliquefaciens N. Numerous isoschizomers have been identified. EC 3.1.21.-.Base Composition: The relative amounts of the PURINES and PYRIMIDINES in a nucleic acid.Chromosomes: In a prokaryotic cell or in the nucleus of a eukaryotic cell, a structure consisting of or containing DNA which carries the genetic information essential to the cell. (From Singleton & Sainsbury, Dictionary of Microbiology and Molecular Biology, 2d ed)Chromosomes, Plant: Complex nucleoprotein structures which contain the genomic DNA and are part of the CELL NUCLEUS of PLANTS.Genomic Library: A form of GENE LIBRARY containing the complete DNA sequences present in the genome of a given organism. It contrasts with a cDNA library which contains only sequences utilized in protein coding (lacking introns).Chromosome Walking: A technique with which an unknown region of a chromosome can be explored. It is generally used to isolate a locus of interest for which no probe is available but that is known to be linked to a gene which has been identified and cloned. A fragment containing a known gene is selected and used as a probe to identify other overlapping fragments which contain the same gene. The nucleotide sequences of these fragments can then be characterized. This process continues for the length of the chromosome.Plasmids: Extrachromosomal, usually CIRCULAR DNA molecules that are self-replicating and transferable from one organism to another. They are found in a variety of bacterial, archaeal, fungal, algal, and plant species. They are used in GENETIC ENGINEERING as CLONING VECTORS.Gene Library: A large collection of DNA fragments cloned (CLONING, MOLECULAR) from a given organism, tissue, organ, or cell type. It may contain complete genomic sequences (GENOMIC LIBRARY) or complementary DNA sequences, the latter being formed from messenger RNA and lacking intron sequences.Prototheca: A genus of achlorophyllic algae in the family Chlorellaceae, and closely related to CHLORELLA. It is found in decayed matter; WATER; SEWAGE; and SOIL; and produces cutaneous and disseminated infections in various VERTEBRATES including humans.Phylogeny: The relationships of groups of organisms as reflected by their genetic makeup.Cosmids: Plasmids containing at least one cos (cohesive-end site) of PHAGE LAMBDA.
They are used as cloning vehicles.Contig Mapping: Overlapping of cloned or sequenced DNA to construct a continuous region of a gene, chromosome or genome.3' Flanking Region: The region of DNA which borders the 3' end of a transcription unit and where a variety of regulatory sequences are located.DNA, Fungal: Deoxyribonucleic acid that makes up the genetic material of fungi.Evolution, Molecular: The process of cumulative change at the level of DNA; RNA; and PROTEINS, over successive generations.DNA, Recombinant: Biologically active DNA which has been formed by the in vitro joining of segments of DNA from different sources. It includes the recombination joint or edge of a heteroduplex region where two recombining DNA molecules are connected.DNA, Viral: Deoxyribonucleic acid that makes up the genetic material of viruses.Transcription, Genetic: The biosynthesis of RNA carried out on a template of DNA. The biosynthesis of DNA from an RNA template is called REVERSE TRANSCRIPTION.Genome: The genetic complement of an organism, including all of its GENES, as represented in its DNA, or in some cases, its RNA.DNA Primers: Short sequences (generally about 10 base pairs) of DNA that are complementary to sequences of messenger RNA and allow reverse transcriptases to start copying the adjacent sequences of mRNA. Primers are used extensively in genetic and molecular biology techniques.Recombination, Genetic: Production of new arrangements of DNA by various mechanisms such as assortment and segregation, CROSSING OVER; GENE CONVERSION; GENETIC TRANSFORMATION; GENETIC CONJUGATION; GENETIC TRANSDUCTION; or mixed infection of viruses.DNA Fingerprinting: A technique for identifying individuals of a species that is based on the uniqueness of their DNA sequence. Uniqueness is determined by identifying which combination of allelic variations occur in the individual at a statistically relevant number of different loci. In forensic studies, RESTRICTION FRAGMENT LENGTH POLYMORPHISM of multiple, highly polymorphic VNTR LOCI or MICROSATELLITE REPEAT loci are analyzed. The number of loci used for the profile depends on the ALLELE FREQUENCY in the population.Telomere: A terminal section of a chromosome which has a specialized structure and which is involved in chromosomal replication and stability. Its length is believed to be a few hundred base pairs.Databases, Nucleic Acid: Databases containing information about NUCLEIC ACIDS such as BASE SEQUENCE; SNPS; NUCLEIC ACID CONFORMATION; and other properties. Information about the DNA fragments kept in a GENE LIBRARY or GENOMIC LIBRARY is often maintained in DNA databases.Genetic Variation: Genotypic differences observed among individuals in a population.Inverted Repeat Sequences: Copies of nucleic acid sequence that are arranged in opposing orientation. They may lie adjacent to each other (tandem) or be separated by some sequence that is not part of the repeat (hyphenated). They may be true palindromic repeats, i.e. read the same backwards as forward, or complementary which reads as the base complement in the opposite orientation. 
Complementary inverted repeats have the potential to form hairpin loop or stem-loop structures which results in cruciform structures (such as CRUCIFORM DNA) when the complementary inverted repeats occur in double stranded regions.Genomic Structural Variation: Contiguous large-scale (1000-400,000 basepairs) differences in the genomic DNA between individuals, due to SEQUENCE DELETION; SEQUENCE INSERTION; or SEQUENCE INVERSION.Retroelements: Elements that are transcribed into RNA, reverse-transcribed into DNA and then inserted into a new site in the genome. Long terminal repeats (LTRs) similar to those from retroviruses are contained in retrotransposons and retrovirus-like elements. Retroposons, such as LONG INTERSPERSED NUCLEOTIDE ELEMENTS and SHORT INTERSPERSED NUCLEOTIDE ELEMENTS do not contain LTRs.Sensitivity and Specificity: Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed)DNA Methylation: Addition of methyl groups to DNA. DNA methyltransferases (DNA methylases) perform this reaction using S-ADENOSYLMETHIONINE as the methyl group donor.Deoxyribonucleases, Type II Site-Specific: Enzyme systems containing a single subunit and requiring only magnesium for endonucleolytic activity. The corresponding modification methylases are separate enzymes. The systems recognize specific short DNA sequences and cleave either within, or at a short specific distance from, the recognition sequence to give specific double-stranded fragments with terminal 5'-phosphates. Enzymes from different microorganisms with the same specificity are called isoschizomers. EC 3.1.21.4.DNA, Ribosomal: DNA sequences encoding RIBOSOMAL RNA and the segments of DNA separating the individual ribosomal RNA genes, referred to as RIBOSOMAL SPACER DNA.Microsatellite Repeats: A variety of simple repeat sequences that are distributed throughout the GENOME. They are characterized by a short repeat unit of 2-8 basepairs that is repeated up to 100 times. They are also known as short tandem repeats (STRs).Dinucleotide Repeats: The most common of the microsatellite tandem repeats (MICROSATELLITE REPEATS) dispersed in the euchromatic arms of chromosomes. They consist of two nucleotides repeated in tandem; guanine and thymine, (GT)n, is the most frequently seen.Chromosomes, Artificial, Yeast: Chromosomes in which fragments of exogenous DNA ranging in length up to several hundred kilobase pairs have been cloned into yeast through ligation to vector sequences. These artificial chromosomes are used extensively in molecular biology for the construction of comprehensive genomic libraries of higher organisms.Self-Sustained Sequence Replication: An isothermal in-vitro nucleotide amplification process. The process involves the concomitant action of a RNA-DIRECTED DNA POLYMERASE, a ribonuclease (RIBONUCLEASES), and DNA-DIRECTED RNA POLYMERASES to synthesize large quantities of sequence-specific RNA and DNA molecules.Physarum: A genus of protozoa, formerly also considered a fungus. Characteristics include the presence of violet to brown spores.RNA, Ribosomal: The most abundant form of RNA. Together with proteins, it forms the ribosomes, playing a structural role and also a role in ribosomal binding of mRNA and tRNAs. Individual chains are conventionally designated by their sedimentation coefficients.
In eukaryotes, four large chains exist, synthesized in the nucleolus and constituting about 50% of the ribosome. (Dorland, 28th ed)Introns: Sequences of DNA in the genes that are located between the EXONS. They are transcribed along with the exons but are removed from the primary gene transcript by RNA SPLICING to leave mature RNA. Some introns code for separate genes.RNA, Messenger: RNA sequences that serve as templates for protein synthesis. Bacterial mRNAs are generally primary transcripts in that they do not require post-transcriptional processing. Eukaryotic mRNA is synthesized in the nucleus and must be exported to the cytoplasm for translation. Most eukaryotic mRNAs have a sequence of polyadenylic acid at the 3' end, referred to as the poly(A) tail. The function of this tail is not known for certain, but it may play a role in the export of mature mRNA from the nucleus as well as in helping stabilize some mRNA molecules by retarding their degradation in the cytoplasm.Genome, Human: The complete genetic complement contained in the DNA of a set of CHROMOSOMES in a HUMAN. The length of the human genome is about 3 billion base pairs.Software: Sequential operating programs and data which instruct the functioning of a digital computer.Genes, Plant: The functional hereditary units of PLANTS.Oligonucleotide Probes: Synthetic or natural oligonucleotides used in hybridization studies in order to identify and study specific nucleic acid fragments, e.g., DNA segments near or within a specific gene locus or gene. The probe hybridizes with a specific mRNA, if present. Conventional techniques used for testing for the hybridization product include dot blot assays, Southern blot assays, and DNA:RNA hybrid-specific antibody tests. Conventional labels for the probe include the radioisotope labels 32P and 125I and the chemical label biotin.Poly A: A group of adenine ribonucleotides in which the phosphate residues of each adenine ribonucleotide act as bridges in forming diester linkages between the ribose moieties.Cytosine: A pyrimidine base that is a fundamental unit of nucleic acids.Mutation: Any detectable and heritable change in the genetic material that causes a change in the GENOTYPE and which is transmitted to daughter cells and to succeeding generations.2-Acetylaminofluorene: A hepatic carcinogen whose mechanism of activation involves N-hydroxylation to the aryl hydroxamic acid followed by enzymatic sulfonation to sulfoxyfluorenylacetamide. It is used to study the carcinogenicity and mutagenicity of aromatic amines.Escherichia coli: A species of gram-negative, facultatively anaerobic, rod-shaped bacteria (GRAM-NEGATIVE FACULTATIVELY ANAEROBIC RODS) commonly found in the lower part of the intestine of warm-blooded animals. It is usually nonpathogenic, but some strains are known to produce DIARRHEA and pyogenic infections. Pathogenic strains (virotypes) are classified by their specific pathogenic mechanisms such as toxins (ENTEROTOXIGENIC ESCHERICHIA COLI), etc.Exons: The parts of a transcript of a split GENE remaining after the INTRONS are removed. They are spliced together to become a MESSENGER RNA or other functional RNA.Oligodeoxyribonucleotides: A group of deoxyribonucleotides (up to 12) in which the phosphate residues of each deoxyribonucleotide act as bridges in forming diester linkages between the deoxyribose moieties.DNA, Protozoan: Deoxyribonucleic acid that makes up the genetic material of protozoa.Sequence Homology, Amino Acid: The degree of similarity between sequences of amino acids. 
This information is useful for analyzing the genetic relatedness of proteins and species.Conserved Sequence: A sequence of amino acids in a polypeptide or of nucleotides in DNA or RNA that is similar across multiple species. A known set of conserved sequences is represented by a CONSENSUS SEQUENCE. AMINO ACID MOTIFS are often composed of conserved sequences.Models, Genetic: Theoretical representations that simulate the behavior or activity of genetic processes or phenomena. They include the use of mathematical equations, computers, and other electronic equipment.RNA, Viral: Ribonucleic acid that makes up the genetic material of viruses.Multigene Family: A set of genes descended by duplication and variation from some ancestral gene. Such genes may be clustered together on the same chromosome or dispersed on different chromosomes. Examples of multigene families include those that encode the hemoglobins, immunoglobulins, histocompatibility antigens, actins, tubulins, keratins, collagens, heat shock proteins, salivary glue proteins, chorion proteins, cuticle proteins, yolk proteins, and phaseolins, as well as histones, ribosomal RNA, and transfer RNA genes. The latter three are examples of reiterated genes, where hundreds of identical genes are present in a tandem array. (King & Stanfield, A Dictionary of Genetics, 4th ed)Oryza sativa: Annual cereal grass of the family POACEAE and its edible starchy grain, rice, which is the staple food of roughly one-half of the world's population.Cell Line: Established cell cultures that have the potential to propagate indefinitely.Polymorphism, Restriction Fragment Length: Variation occurring within a species in the presence or length of DNA fragment generated by a specific endonuclease at a specific site in the genome. Such variations are generated by mutations that create or abolish recognition sites for these enzymes or change the length of the fragment.Chromosomes, Fungal: Structures within the nucleus of fungal cells consisting of or containing DNA, which carry genetic information essential to the cell.Deoxyribonuclease EcoRI: One of the Type II site-specific deoxyribonucleases (EC 3.1.21.4). It recognizes and cleaves the sequence G/AATTC at the slash. EcoRI is from E. coli RY13. Several isoschizomers have been identified. EC 3.1.21.-.Computational Biology: A field of biology concerned with the development of techniques for the collection and manipulation of biological data, and the use of such data to make biological discoveries or predictions. This field encompasses all computational methods and theories for solving biological problems including manipulation of models and datasets.Chromosome Banding: Staining of bands, or chromosome segments, allowing the precise identification of individual chromosomes or parts of chromosomes. Applications include the determination of chromosome rearrangements in malformation syndromes and cancer, the chemistry of chromosome segments, chromosome changes during evolution, and, in conjunction with cell hybridization studies, chromosome mapping.Molecular Weight: The sum of the weight of all the atoms in a molecule.Genomics: The systematic study of the complete DNA sequences (GENOME) of organisms.Frameshift Mutation: A type of mutation in which a number of NUCLEOTIDES deleted from or inserted into a protein coding sequence is not divisible by three, thereby causing an alteration in the READING FRAMES of the entire coding sequence downstream of the mutation.
These mutations may be induced by certain types of MUTAGENS or may occur spontaneously.Electrophoresis, Agar Gel: Electrophoresis in which agar or agarose gel is used as the diffusion medium.Kinetics: The rate dynamics in chemical or physical systems.Genetic Markers: A phenotypically recognizable genetic trait which can be used to identify a genetic locus, a linkage group, or a recombination event.DNA, Single-Stranded: A single chain of deoxyribonucleotides that occurs in some bacteria and viruses. It usually exists as a covalently closed circle.DNA, Complementary: Single-stranded complementary DNA synthesized from an RNA template by the action of RNA-dependent DNA polymerase. cDNA (i.e., complementary DNA, not circular DNA, not C-DNA) is used in a variety of molecular cloning experiments as well as serving as a specific hybridization probe.Nucleolus Organizer Region: The chromosome region which is active in nucleolus formation and which functions in the synthesis of ribosomal RNA.Biological Evolution: The process of cumulative change over successive generations through which organisms acquire their distinguishing morphological and physiological characteristics.Methylation: Addition of methyl groups. In histo-chemistry methylation is used to esterify carboxyl groups and remove sulfate groups by treating tissue sections with hot methanol in the presence of hydrochloric acid. (From Stedman, 25th ed)Proteins: Linear POLYPEPTIDES that are synthesized on RIBOSOMES and may be further modified, crosslinked, cleaved, or assembled into complex proteins with several subunits. The specific sequence of AMINO ACIDS determines the shape the polypeptide will take, during PROTEIN FOLDING, and the function of the protein.Euchromatin: Chromosome regions that are loosely packaged and more accessible to RNA polymerases than HETEROCHROMATIN. These regions also stain differentially in CHROMOSOME BANDING preparations.DNA, Circular: Any of the covalently closed DNA molecules found in bacteria, many viruses, mitochondria, plastids, and plasmids. Small, polydisperse circular DNA's have also been observed in a number of eukaryotic organisms and are suggested to have homology with chromosomal DNA and the capacity to be inserted into, and excised from, chromosomal DNA. It is a fragment of DNA formed by a process of looping out and deletion, containing a constant region of the mu heavy chain and the 3'-part of the mu switch region. Circular DNA is a normal product of rearrangement among gene segments encoding the variable regions of immunoglobulin light and heavy chains, as well as the T-cell receptor. (Riger et al., Glossary of Genetics, 5th ed & Segen, Dictionary of Modern Medicine, 1992)Fibroins: Fibrous proteins secreted by INSECTS and SPIDERS. Generally, the term refers to silkworm fibroin secreted by the silk gland cells of SILKWORMS, Bombyx mori. Spider fibroins are called spidroins or dragline silk fibroins.Hybrid Cells: Any cell, other than a ZYGOTE, that contains elements (such as NUCLEI and CYTOPLASM) from two or more different cells, usually produced by artificial CELL FUSION.Triticum: A plant genus of the family POACEAE that is the source of EDIBLE GRAIN. A hybrid with rye (SECALE CEREALE) is called TRITICALE. 
The seed is ground into FLOUR and used to make BREAD, and is the source of WHEAT GERM AGGLUTININS.Algorithms: A procedure consisting of a sequence of algebraic formulas and/or logical steps to calculate or determine a given task.Binding Sites: The parts of a macromolecule that directly participate in its specific combination with another molecule.Aptamers, Nucleotide: Nucleotide sequences, generated by iterative rounds of SELEX APTAMER TECHNIQUE, that bind to a target molecule specifically and with high affinity.Genome, Insect: The genetic complement of an insect (INSECTS) as represented in its DNA.Cell Nucleus: Within a eukaryotic cell, a membrane-limited body which contains chromosomes and one or more nucleoli (CELL NUCLEOLUS). The nuclear membrane consists of a double unit-type membrane which is perforated by a number of pores; the outermost membrane is continuous with the ENDOPLASMIC RETICULUM. A cell may contain more than one nucleus. (From Singleton & Sainsbury, Dictionary of Microbiology and Molecular Biology, 2d ed)Gene Rearrangement: The ordered rearrangement of gene regions by DNA recombination such as that which occurs normally during development.Bombyx: A genus of silkworm MOTHS in the family Bombycidae of the order LEPIDOPTERA. The family contains a single species, Bombyx mori from the Greek for silkworm + mulberry tree (on which it feeds). A native of Asia, it is sometimes reared in this country. It has long been raised for its SILK and after centuries of domestication it probably does not exist in nature. It is used extensively in experimental GENETICS. (From Borror et al., An Introduction to the Study of Insects, 4th ed, p519)Protein Binding: The process in which substances, either endogenous or exogenous, bind to proteins, peptides, enzymes, protein precursors, or allied compounds. Specific protein-binding measures are often used as assays in diagnostic assessments.Consensus Sequence: A theoretical representative nucleotide or amino acid sequence in which each nucleotide or amino acid is the one which occurs most frequently at that site in the different sequences which occur in nature. The phrase also refers to an actual sequence which approximates the theoretical consensus. A known CONSERVED SEQUENCE set is represented by a consensus sequence. Commonly observed supersecondary protein structures (AMINO ACID MOTIFS) are often formed by conserved sequences.Y Chromosome: The male sex chromosome, being the differential sex chromosome carried by half the male gametes and none of the female gametes in humans and in some other male-heterogametic species in which the homologue of the X chromosome has been retained.Trinucleotide Repeats: Microsatellite repeats consisting of three nucleotides dispersed in the euchromatic arms of chromosomes.RNA, Bacterial: Ribonucleic acid in bacteria having regulatory and catalytic roles as well as involvement in protein synthesis.Chromosomes, Human: Very long DNA molecules and associated proteins, HISTONES, and non-histone chromosomal proteins (CHROMOSOMAL PROTEINS, NON-HISTONE). Normally 46 chromosomes, including two sex chromosomes are found in the nucleus of human cells. They carry the hereditary information of the individual.Polymorphism, Genetic: The regular and simultaneous occurrence in a single interbreeding population of two or more discontinuous genotypes. 
The concept includes differences in genotypes ranging in size from a single nucleotide site (POLYMORPHISM, SINGLE NUCLEOTIDE) to large nucleotide sequences visible at a chromosomal level.
Sequence Deletion: Deletion of sequences of nucleic acids from the genetic material of an individual.
DNA Replication: The process by which a DNA molecule is duplicated.
Gene Order: The sequential location of genes on a chromosome.
Templates, Genetic: Macromolecular molds for the synthesis of complementary macromolecules, as in DNA REPLICATION; GENETIC TRANSCRIPTION of DNA to RNA, and GENETIC TRANSLATION of RNA into POLYPEPTIDES.
5' Flanking Region: The region of DNA which borders the 5' end of a transcription unit and where a variety of regulatory sequences are located.
RNA, Transfer: The small RNA molecules, 73-80 nucleotides long, that function during translation (TRANSLATION, GENETIC) to align AMINO ACIDS at the RIBOSOMES in a sequence determined by the mRNA (RNA, MESSENGER). There are about 30 different transfer RNAs. Each recognizes a specific CODON set on the mRNA through its own ANTICODON and as aminoacyl tRNAs (RNA, TRANSFER, AMINO ACYL), each carries a specific amino acid to the ribosome to add to the elongating peptide chains.
Models, Molecular: Models used experimentally or theoretically to study molecular shape, electronic properties, or interactions; includes analogous molecules, computer-generated graphics, and mechanical structures.
Nucleic Acid Heteroduplexes: Double-stranded nucleic acid molecules (DNA-DNA or DNA-RNA) which contain regions of nucleotide mismatches (non-complementary). In vivo, these heteroduplexes can result from mutation or genetic recombination; in vitro, they are formed by nucleic acid hybridization. Electron microscopic analysis of the resulting heteroduplexes facilitates the mapping of regions of base sequence homology of nucleic acids.
Chromatin: The material of CHROMOSOMES. It is a complex of DNA; HISTONES; and nonhistone proteins (CHROMOSOMAL PROTEINS, NON-HISTONE) found within the nucleus of a cell.
Sex Chromosomes: The homologous chromosomes that are dissimilar in the heterogametic sex. There are the X CHROMOSOME, the Y CHROMOSOME, and the W, Z chromosomes (in animals in which the female is the heterogametic sex (the silkworm moth Bombyx mori, for example)). In such cases the W chromosome is the female-determining and the male is ZZ. (From King & Stansfield, A Dictionary of Genetics, 4th ed)
Gene Amplification: A selective increase in the number of copies of a gene coding for a specific protein without a proportional increase in other genes. It occurs naturally via the excision of a copy of the repeating sequence from the chromosome and its extrachromosomal replication in a plasmid, or via the production of an RNA transcript of the entire repeating sequence of ribosomal RNA followed by the reverse transcription of the molecule to produce an additional copy of the original DNA sequence. Laboratory techniques have been introduced for inducing disproportional replication by unequal crossing over, uptake of DNA from lysed cells, or generation of extrachromosomal sequences from rolling circle replication.
Genome, Bacterial: The genetic complement of a BACTERIA as represented in its DNA.
DNA-Binding Proteins: Proteins which bind to DNA. The family includes proteins which bind to both double- and single-stranded DNA and also includes specific DNA binding proteins in serum which can be used as markers for malignant diseases.
Base Pairing: Pairing of purine and pyrimidine bases by HYDROGEN BONDING in double-stranded DNA or RNA.
Chromosome Breakage: A type of chromosomal aberration involving DNA BREAKS. Chromosome breakage can result in CHROMOSOMAL TRANSLOCATION; CHROMOSOME INVERSION; or SEQUENCE DELETION.
DNA, Neoplasm: DNA present in neoplastic tissue.
Open Reading Frames: A sequence of successive nucleotide triplets that are read as CODONS specifying AMINO ACIDS and begin with an INITIATOR CODON and end with a stop codon (CODON, TERMINATOR).
Genome, Mitochondrial: The genetic complement of MITOCHONDRIA as represented in their DNA.
RNA, Plant: Ribonucleic acid in plants having regulatory and catalytic roles as well as involvement in protein synthesis.
Sequence Analysis: A multistage process that includes the determination of a sequence (protein, carbohydrate, etc.), its fragmentation and analysis, and the interpretation of the resulting sequence information.
Mycobacterium tuberculosis: A species of gram-positive, aerobic bacteria that produces TUBERCULOSIS in humans, other primates, CATTLE; DOGS; and some other animals which have contact with humans. Growth tends to be in serpentine, cordlike masses in which the bacilli show a parallel orientation.
Gene Dosage: The number of copies of a given gene present in the cell of an organism. An increase in gene dosage (by GENE DUPLICATION for example) can result in higher levels of gene product formation. GENE DOSAGE COMPENSATION mechanisms result in adjustments to the level of GENE EXPRESSION when there are changes or differences in gene dosage.
DNA, Mitochondrial: Double-stranded DNA of MITOCHONDRIA. In eukaryotes, the mitochondrial GENOME is circular and codes for ribosomal RNAs, transfer RNAs, and about 10 proteins.
Genetic Linkage: The co-inheritance of two or more non-allelic GENES due to their being located more or less closely on the same CHROMOSOME.
Chromosome Inversion: An aberration in which a chromosomal segment is deleted and reinserted in the same place but turned 180 degrees from its original orientation, so that the gene sequence for the segment is reversed with respect to that of the rest of the chromosome.
Cytogenetic Analysis: Examination of CHROMOSOMES to diagnose, classify, screen for, or manage genetic diseases and abnormalities. Following preparation of the sample, KARYOTYPING is performed and/or the specific chromosomes are analyzed.
Gene Silencing: Interruption or suppression of the expression of a gene at transcriptional or translational levels.
Sea Urchins: Somewhat flattened, globular echinoderms, having thin, brittle shells of calcareous plates. They are useful models for studying FERTILIZATION and EMBRYO DEVELOPMENT.
Globins: A superfamily of proteins containing the globin fold which is composed of 6-8 alpha helices arranged in a characteristic HEME enclosing structure.
Plants: Multicellular, eukaryotic life forms of kingdom Plantae (sensu lato), comprising the VIRIDIPLANTAE; RHODOPHYTA; and GLAUCOPHYTA; all of which acquired chloroplasts by direct endosymbiosis of CYANOBACTERIA. They are characterized by a mainly photosynthetic mode of nutrition; essentially unlimited growth at localized regions of cell divisions (MERISTEMS); cellulose within cells providing rigidity; the absence of organs of locomotion; absence of nervous and sensory systems; and an alternation of haploid and diploid generations.
Promoter Regions, Genetic: DNA sequences which are recognized (directly or indirectly) and bound by a DNA-dependent RNA polymerase during the initiation of transcription. Highly conserved sequences within the promoter include the Pribnow box in bacteria and the TATA BOX in eukaryotes.
Polyribonucleotides: A group of 13 or more ribonucleotides in which the phosphate residues of each ribonucleotide act as bridges in forming diester linkages between the ribose moieties.
Genome, Fungal: The complete gene complement contained in a set of chromosomes in a fungus.
DNA, Catalytic: Molecules of DNA that possess enzymatic activity.
Karyotyping: Mapping of the KARYOTYPE of a cell.
Cattle: Domesticated bovine animals of the genus Bos, usually kept on a farm or ranch and used for the production of meat or dairy products or for heavy labor.
Bacterial Proteins: Proteins found in any species of bacterium.
Viruses: Minute infectious agents whose genomes are composed of DNA or RNA, but not both. They are characterized by a lack of independent metabolism and the inability to replicate outside living host cells.
G-Quadruplexes: Higher-order DNA and RNA structures formed from guanine-rich sequences. They are formed around a core of at least 2 stacked tetrads of hydrogen-bonded GUANINE bases. They can be formed from one, two, or four separate strands of DNA (or RNA) and can display a wide variety of topologies, which are a consequence of various combinations of strand direction, length, and sequence. (From Nucleic Acids Res. 2006;34(19):5402-15)
X Chromosome: The female sex chromosome, being the differential sex chromosome carried by half the male gametes and all female gametes in human and other male-heterogametic species.
Polynucleotides
Protein Biosynthesis: The biosynthesis of PEPTIDES and PROTEINS on RIBOSOMES, directed by MESSENGER RNA, via TRANSFER RNA that is charged with standard proteinogenic AMINO ACIDS.
Intercalating Agents: Agents that are capable of inserting themselves between the successive bases in DNA, thus kinking, uncoiling or otherwise deforming it and therefore preventing its proper functioning. They are used in the study of DNA.
Chromosome Deletion: Actual loss of a portion of a chromosome.
Minisatellite Repeats: Tandem arrays of moderately repetitive, short (10-60 bases) DNA sequences which are found dispersed throughout the GENOME, at the ends of chromosomes (TELOMERES), and clustered near telomeres. Their degree of repetition is two to several hundred at each locus. Loci number in the thousands but each locus shows a distinctive repeat unit.
Oligonucleotides, Antisense: Short fragments of DNA or RNA that are used to alter the function of target RNAs or DNAs to which they hybridize.
Genotype: The genetic constitution of the individual, comprising the ALLELES present at each GENETIC LOCUS.
Nucleotides: The monomeric units from which DNA or RNA polymers are constructed. They consist of a purine or pyrimidine base, a pentose sugar, and a phosphate group. (From King & Stansfield, A Dictionary of Genetics, 4th ed)
Guanine
Molecular Diagnostic Techniques: MOLECULAR BIOLOGY techniques used in the diagnosis of disease.
CpG Islands: Areas of increased density of the dinucleotide sequence cytosine--phosphate diester--guanine. They form stretches of DNA several hundred to several thousand base pairs long. In humans there are about 45,000 CpG islands, mostly found at the 5' ends of genes. They are unmethylated except for those on the inactive X chromosome and some associated with imprinted genes.
Oligonucleotide Array Sequence Analysis: Hybridization of a nucleic acid sample to a very large set of OLIGONUCLEOTIDE PROBES, which have been attached individually in columns and rows to a solid support, to determine a BASE SEQUENCE, or to detect variations in a gene sequence, GENE EXPRESSION, or for GENE MAPPING.
Fluorescent Dyes: Agents that emit light after excitation by light. The wavelength of the emitted light is usually longer than that of the incident light. Fluorochromes are substances that cause fluorescence in other substances, i.e., dyes used to mark or label other compounds with fluorescent tags.
Primates
Genes, Bacterial: The functional hereditary units of BACTERIA.
Histones: Small chromosomal proteins (approx 12-20 kD) possessing an open, unfolded structure and attached to the DNA in cell nuclei by ionic linkages. Classification into the various types (designated histone I, histone II, etc.) is based on the relative amounts of arginine and lysine in each.
Nucleosides: Purine or pyrimidine bases attached to a ribose or deoxyribose. (From King & Stansfield, A Dictionary of Genetics, 4th ed)
Reproducibility of Results: The statistical reproducibility of measurements (often in a clinical context), including the testing of instrumentation or techniques to obtain reproducible results. The concept includes reproducibility of physiological measurements, which may be used to develop rules to assess probability or prognosis, or response to a stimulus; reproducibility of occurrence of a condition; and reproducibility of experimental results.
Chickens: Common name for the species Gallus gallus, the domestic fowl, in the family Phasianidae, order GALLIFORMES. It is descended from the red jungle fowl of SOUTHEAST ASIA.
Genes, Insect: The functional hereditary units of INSECTS.
DNA, Intergenic: Any of the DNA in between gene-coding DNA, including untranslated regions, 5' and 3' flanking regions, INTRONS, non-functional pseudogenes, and non-functional repetitive sequences. This DNA may or may not encode regulatory functions.
Immobilized Nucleic Acids: DNA or RNA bound to a substrate thereby having fixed positions. |
A covalent bond is a type of chemical bond characterized by the sharing of a pair of electrons between two atoms. The electron pair interacts with the nuclei of both atoms, and this attractive interaction holds the atoms together. The covalent bond is much stronger than the hydrogen bond (between polar molecules) and is similar in strength to or stronger than the ionic bond.
Covalent bonding occurs most frequently between atoms with similar electronegativity values. It plays a particularly important role in building the structures of organic compounds (compounds of carbon). Each carbon atom can form four covalent bonds that are oriented along definite directions in space, leading to the varied geometries of organic molecules. Moreover, numerous chemical reactions, in both living and nonliving systems, involve the formation and disruption of covalent bonds.
The idea of covalent bonding can be traced to the chemist Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms. He introduced the so-called Lewis Notation or Electron Dot Notation, in which valence electrons (those in the outer shell of each atom) are represented as dots around the atomic symbols. Pairs of these electrons located between atoms represent covalent bonds, and multiple pairs represent multiple bonds, such as double and triple bonds. In an alternative style, bond-forming electron pairs are represented as solid lines.
The sharing of electrons between atoms allows the atoms to attain a stable electron configuration similar to that of a noble gas. For example, in a hydrogen molecule (H2), each hydrogen atom takes part in the sharing of two electrons, corresponding to the number of electrons in the helium atom. In the case of methane (CH4), the carbon atom shares an electron pair with each of four hydrogen atoms. Thus, the carbon atom in methane shares a total of eight electrons, corresponding to the number of electrons in the outermost shell of any of the other noble gases (neon, argon, krypton, xenon, and radon).
In addition, each covalent bond in a molecule is oriented toward a certain direction in space, thereby giving the molecule its characteristic shape. For example, a molecule of methane takes the shape of a tetrahedron, with the carbon atom at the center.
While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond, specifically that of molecular hydrogen, in 1927. Their work was based on the valence bond model, according to which a chemical bond is formed by overlap between certain atomic orbitals (in the outer electron shells) of participating atoms. In valence bond theory, molecular geometries are accounted for by the formation of hybrid atomic orbitals through the combination of normal atomic orbitals. These atomic orbitals are known to have specific angular relationships between each other, and thus the valence bond model can successfully predict the bond angles observed in simple molecules.
The valence bond model has been supplanted by the molecular orbital model. As two atoms are brought together to form a bond, their atomic orbitals are thought to interact to form molecular orbitals that extend between and around the nuclei of these atoms. These molecular orbitals can be constructed mathematically, based on the theory of "linear combination of atomic orbitals" (LCAO theory).
Using quantum mechanics, it is possible to calculate the electronic structure, energy levels, bond angles, bond distances, dipole moments, and electromagnetic spectra of simple molecules with a high degree of accuracy. Bond distances and angles can be calculated as accurately as they can be measured (distances to a few picometers and bond angles to a few degrees).
The covalent bond differs from an ionic bond, which is characterized by electrostatic attraction between oppositely charged ions. Yet, even in the molecular orbital model for a covalent bond, there is an implicit attraction between the positively charged atomic nuclei and the negatively charged electrons—without the atomic nuclei, there would be no orbitals for the electrons to populate.
Covalent bonding is a broad concept that covers many kinds of interactions. In particular, it includes what are known as sigma (σ) bonds, pi (π) bonds, metal-metal bonds, agostic interactions, and three-center two-electron bonds (Smith and March, 2007; Miessler and Tarr, 2003).
Bond order is a term that describes the number of pairs of electrons shared between atoms forming covalent bonds.
In most cases of covalent bonding, the electrons are not localized between a pair of atoms, so the above classification, although powerful and pervasive, is of limited validity. Also, the so-called "three-center bond" does not conform readily to the above conventions.
There are two types of covalent bonds: polar covalent bonds and nonpolar (or "pure") covalent bonds. A pure covalent bond is formed between two atoms that have no difference (or practically no difference) between their electronegativity values. (Some texts put the difference in values at less than 0.2.) A polar covalent bond (according to the most widely accepted definition) is a bond formed between two atoms that have an electronegativity difference of less than or equal to 2.1 but greater than or equal to 0.5.
When a covalent bond is formed between two atoms of differing electronegativity, the more electronegative atom draws the shared (bonding) electrons closer to itself. This results in a separation of charge along the bond: the less electronegative atom bears a partial positive charge and the more electronegative atom bears a partial negative charge. In this situation, the bond has a dipole moment and is said to be polar.
The polar covalent bond is sometimes thought of as a mixing of ionic and covalent character in the bond. The greater the polarity in a covalent bond, the greater its ionic character. Thus, the ionic bond and the nonpolar covalent bond are two extremes of bonding, with polar bonds forming a continuity between them.
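To make these cutoffs concrete, here is a tiny sketch in Python using the thresholds quoted in this article (other texts use different values, so the function and its cutoffs are illustrative only):

    def classify_bond(delta_en):
        # Thresholds as quoted in this article; conventions vary between texts.
        if delta_en < 0.2:
            return "nonpolar (pure) covalent"
        elif 0.5 <= delta_en <= 2.1:
            return "polar covalent"
        elif delta_en > 2.1:
            return "ionic"
        else:
            return "borderline (between nonpolar and polar)"

    # Pauling electronegativities: O = 3.44, H = 2.20
    print(classify_bond(abs(3.44 - 2.20)))   # polar covalent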
A special case of covalent bonding is called a coordinate covalent bond or dative bond. It occurs when one atom contributes both of the electrons in forming a covalent bond with the other atom or ion. The atom that donates the electron pair acts as a "Lewis base," and the atom that accepts the electrons acts as a "Lewis acid." The formation of this type of bond is called "coordination." The electron donor acquires a positive formal charge, while the electron acceptor acquires a negative formal charge.
Once this type of bond has been formed, its strength and description are no different from those of other polar covalent bonds. In this sense, the distinction from ordinary covalent bonding is artificial, but the terminology is popular in textbooks, especially when describing coordination compounds (noted below).
Any compound that contains a lone pair of electrons is potentially capable of forming a coordinate bond. Diverse chemical compounds can be described as having coordinate covalent bonds.
Coordinate bonding is popularly used to describe coordination complexes, especially involving metal ions. In such complexes, several Lewis bases "donate" their "free" pairs of electrons to an otherwise naked metal cation, which acts as a Lewis acid and "accepts" the electrons. Coordinate bonds are formed, the resulting compound is called a coordination complex, and the electron donors are called ligands. A coordinate bond is sometimes represented by an arrow pointing from the donor of the electron pair to the acceptor of the electron pair. A more useful description of bonding in coordination compounds is provided by the Ligand Field Theory, which incorporates molecular orbitals in describing bonding in such polyatomic compounds.
Many chemical compounds can serve as ligands. They often contain oxygen, sulfur, or nitrogen atoms, or halide ions. The most common ligand is water (H2O), which forms coordination complexes with metal ions, such as [Cu(H2O)6]2+. Ammonia (NH3) is also a common ligand. Anions are common ligands, especially fluoride (F-), chloride (Cl-), and cyanide (CN-).
Many bonding situations can be described with more than one valid Lewis Dot Structure (LDS). An example is benzene (C6H6), which consists of a ring of six carbon atoms held together by covalent bonds, with a hydrogen atom attached to each carbon atom. If one were to write the LDS for the benzene ring, one would get two similar structures, each of which would have alternating single and double bonds between the carbon atoms. Each structure, if taken by itself, would suggest that the bonds between the carbon atoms differ in length and strength. In reality, the six bonds between the ring carbon atoms are all equally long and equally strong, indicating that the bonding electrons are evenly distributed within the ring. To take this situation into account, the two structures are thought of as theoretical "resonance" structures, and the actual structure is called a resonance hybrid of the two. Electron sharing in the aromatic structure is often represented by a circle within the ring of carbon atoms. The benzene ring is an example of what is called an aromatic ring, and aromatic compounds constitute a major class of organic chemical compounds.
A second example is the structure of ozone (O3). In an LDS diagram of O3, the central O atom would have a single bond with one adjacent atom and a double bond with the other. Two possible structures can be written, in which the single and double bonds switch positions. Here again, the two possible structures are theoretical "resonance structures," and the structure of ozone is called a resonance hybrid of the two. In the actual structure of ozone, both bonds are equal in length and strength. Each bond is midway between a single bond and a double bond, sharing three electrons in each bond.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License, which permits use and dissemination with proper attribution; credit is due to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation. Note: Some restrictions may apply to use of individual images, which are separately licensed. |
Multiplying And Dividing Mixed Numbers Worksheet
This page collects our multiplying and dividing mixed numbers worksheets, along with related sheets on multiplying and dividing negative numbers (eight worksheets are available for that concept). Our multiplying and dividing fractions and mixed numbers worksheets are designed to supplement our multiplying and dividing fractions and mixed numbers lessons, and each sheet includes practice, AQA multiple-choice questions, problem solving, and a feedback sheet.

Below are six versions of our grade 5 math worksheet on multiplying mixed numbers together and six versions of our grade 6 math worksheet on dividing mixed numbers by other mixed numbers. Simplify answers where possible. These worksheets are PDF files; they may be printed, downloaded, or saved and used in your classroom, home school, or other educational environment to help someone learn math. (This math worksheet was created on 2013-02-14 and has been viewed 402 times this week and 361 times this month.)

A clear presentation plus a two-sided worksheet progresses from multiplication of common fractions to multiplication of mixed numbers, taking a step-by-step look at multiplication of mixed numbers that builds on multiplication of proper fractions to create a simple algorithm (sketched below). A multiplying and dividing fractions and mixed numbers homework sheet with answers is also included.

This page also includes mixed operations math worksheets with addition, subtraction, multiplication, and division, and worksheets for order of operations; we've started off by mixing up all four operations. The mixed multiplication and division word problems require students to differentiate between the phrasing of a story problem that requires multiplication and one that requires division to reach the answer; the division problems do not include remainders. Be sure to check out the fun interactive fraction activities and additional worksheets below.
|
15 Collections and Looping: Lists and for
A list, as its name implies, is a list of data (integers, floats, strings, Booleans, or even other lists or more complicated data types). Python lists are similar to arrays or vectors in other languages. Like letters in strings, elements of a list are indexed starting at 0, and individual elements are accessed with bracket syntax. We can also use brackets to create a list with a few elements of different types, though in practice we won't do this often.

Just like with strings, we can use bracket notation to get a sublist "slice," and we can use the len() function to get the length of a list.
Unlike strings, though, lists are mutable, meaning we can modify them after they’ve been created, for example, by replacing an element with another element. As mentioned above, lists can even contain other lists!
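For instance, a minimal sketch (the values here are illustrative):

    numbers = [4, 8, 15, 16, 23]   # create a list directly with brackets

    print(numbers[0])     # first element (indexing starts at 0): 4
    print(numbers[1:3])   # a sublist "slice": [8, 15]
    print(len(numbers))   # length of the list: 5

    numbers[0] = 42       # lists are mutable: replace an element
    print(numbers)        # [42, 8, 15, 16, 23]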
We will typically want our code to create an empty list, and then add data elements to it one element at a time. An empty list is returned by calling the list() function with no parameters. Given a variable which references a list object, we can append an element to the end using the .append() method, giving the method the element we want to append as a parameter.

This syntax might seem a bit odd compared to what we've seen so far. Here new_list.append("G") is telling the list object referred to by the new_list variable to run its .append() method, taking as a parameter the string "G". We'll explore the concepts of objects and methods more formally in later chapters. For now, consider the list not just a collection of data, but a "smart" object with which we can interact using dot-syntax methods.
Note that the .append() method asks the list to modify itself (which it can do, because lists are mutable), but this operation doesn't return anything of use. This type of command opens up the possibility for some insidious bugs; for example, a line like new_list = new_list.append("C") looks innocent enough and causes no immediate error, but it is probably not what the programmer intended. The reason is that the new_list.append("C") call successfully asks the list to modify itself, but then the None value is returned, which would be assigned to the new_list variable with the assignment. At the end of the line, new_list will refer to None, and the list itself will no longer be accessible. (In fact, it will be garbage collected in due time.) In short, never write some_list = some_list.append(el).
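A short sketch of both the correct usage and the pitfall just described (variable names follow the text):

    new_list = list()      # an empty list
    new_list.append("G")   # ask the list to modify itself
    new_list.append("C")
    print(new_list)        # ['G', 'C']

    # The insidious bug: .append() returns None, so this assignment
    # throws away the list and keeps None instead.
    new_list = new_list.append("A")
    print(new_list)        # None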
We often want to sort lists, which we can do in two ways. First, we could use the sorted() function, which takes a list as a parameter and returns a new copy of the list in sorted order, leaving the original alone. Alternatively, we could call a list's .sort() method to ask a list to sort itself in place. As with the .append() method above, the .sort() method returns None, so the following would almost surely have resulted in a bug: a_list = a_list.sort().
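For example (illustrative values):

    a_list = [3, 1, 2]

    print(sorted(a_list))   # a new sorted copy: [1, 2, 3]
    print(a_list)           # the original is unchanged: [3, 1, 2]

    a_list.sort()           # sorts in place and returns None
    print(a_list)           # [1, 2, 3]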
At this point, one would be forgiven for thinking that these dot-syntax methods always return None and so assignment based on the results isn't useful. But before we move on from lists, let's introduce a simple way to split a string up into a list of substrings, using the .split() method on a string data type. For example, let's split up a string wherever the subsequence "TA" occurs. If the sequence was instead "CGCGTATACAGA", the resulting list would have contained ["CGCG", "", "CAGA"] (that is, one of the elements would be a zero-length empty string). This example illustrates that strings, like lists, are also "smart" objects with which we can interact using dot-syntax methods. (In fact, so are integers, floats, and all other Python types that we'll cover.)
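A sketch consistent with the example above:

    seq = "CGCGTATACAGA"
    pieces = seq.split("TA")   # split wherever "TA" occurs
    print(pieces)              # ['CGCG', '', 'CAGA']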
Tuples (Immutable Lists)

As noted above, lists are mutable, meaning they can be altered after their creation. In some special cases, it is helpful to create an immutable version of a list, called a "tuple" in Python. Like lists, tuples can be created in two ways: with the tuple() function (which returns an empty tuple) or directly, as a comma-separated sequence in parentheses. Tuples work much like lists—we can call len() on them and extract elements or slices with bracket syntax—but we can't change, remove, or insert elements.
Looping with for
A for-loop in Python executes a block of code, once for each element of an iterable data type: one which can be accessed one element at a time, in order. As it turns out, both strings and lists are such iterable types in Python, though for now we’ll explore only iterating over lists with for-loops.
A block is a set of lines of code that are grouped as a unit; in many cases they are executed as a unit as well, perhaps more than one time. Blocks in Python are indicated by being indented an additional level (usually with four spaces—remember to be consistent with this indentation practice).
When using a for-loop to iterate over a list, we need to specify a variable name that will reference each element of the list in turn.
One or more lines just below the line defining the for-loop are indented an additional level; within this block, the loop variable (here, gene_id) refers to each element of the gene_ids list in turn. A sketch of such a loop and its output follows.
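The original code listing is not reproduced here; a minimal reconstruction (the gene IDs are placeholders) behaves as described:

    gene_ids = ["AY081058", "ATF3", "OR5T1"]   # placeholder IDs

    for gene_id in gene_ids:
        print(gene_id)

This prints the three IDs, one per line.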
Using for-loops in Python often confuses beginners, because a variable (e.g., gene_id) is being assigned without using the standard = assignment operator. If it helps, you can think of the first loop through the block as executing gene_id = gene_ids[0], the next time around as executing gene_id = gene_ids[1], and so on, until all elements of gene_ids have been used.
Blocks may contain multiple lines (including blank lines) so that multiple lines of code can work together. Here's a modified loop that keeps a counter variable, incrementing it by one each time. The output of this loop would be the same as the output above, with an additional line printing 3 (the contents of counter after the loop ends).
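A sketch of such a loop (again with placeholder IDs):

    gene_ids = ["AY081058", "ATF3", "OR5T1"]   # placeholder IDs, as above
    counter = 0
    for gene_id in gene_ids:
        print(gene_id)
        counter = counter + 1

    print(counter)   # 3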
Some common errors when using block structures in Python include the following, many of which will result in an error:
- Not using the same number of spaces for each indentation level, or mixing tab indentation with multiple-space indentation. (Most Python programmers prefer using four spaces per level.)
- Forgetting the colon (:) that ends the line before the block.
- Using something like a for-loop line that requires a block, but not indenting the next line.
- Needlessly indenting (creating a block) without a corresponding for-loop definition line.
We often want to loop over a range of integers. Conveniently, the range() function returns a list of numbers. It commonly takes two parameters: (1) the starting integer (inclusive) and (2) the ending integer (exclusive). Thus we could program our for-loop slightly differently by generating a list of integers to use as indices, and iterating over that; a sketch follows this paragraph. The index-based version illustrates the rationale behind the inclusive/exclusive nature of the range() function: because indices start at zero and go to one less than the length of the list, we can use range(0, len(ids)) (as opposed to needing to modify the ending index) to properly iterate over the indices of ids without first knowing the length of the list. Seasoned programmers generally find this intuitive, but those who are not used to counting from zero may need some practice. You should study these examples of looping carefully, and try them out. These concepts are often more difficult for beginners, but they are important to learn.
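A sketch of the index-based version (placeholder values):

    ids = ["CYP6B", "AGP4", "CATB"]   # placeholder values

    for index in range(0, len(ids)):
        print(ids[index])             # same output as looping over ids directly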
Loops and the blocks they control can be nested, to powerful effect; a sketch follows below. In that example, the outer for-loop controls a block of five lines; contained within is the inner for-loop controlling a block of only two lines. The outer block is principally concerned with the variable i, while the inner block is principally concerned with the variable j. We see that both blocks also make use of variables defined outside them; the inner block makes use of j, while lines specific to the outer block make use of i (but not j). This is a common pattern we'll be seeing more often. Can you determine the value of total at the end without running the code?
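The original listing is not shown; the following sketch matches the description (an outer block of five lines containing an inner block of two), though the particular statements and ranges are assumptions:

    total = 0
    for i in range(0, 4):
        print(i)
        for j in range(0, 4):
            total = total + 1
            print(j)
        print("outer iteration done for i =", i)

    print(total)   # what does this print?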
Python and a few other languages include specialized shorthand syntax for creating lists from other lists, known as list comprehensions. Effectively, this shorthand combines for-loop syntax and list-creation syntax into a single line. Here's a quick example: starting with a list of numbers [1, 2, 3, 4, 5, 6], we generate a list of squares ([1, 4, 9, 16, 25, 36]), as sketched below. Here we're using a naming convention of num in nums, but like a for-loop, the looping variable can be named almost anything; for example, squares = [x ** 2 for x in nums] would accomplish the same task.
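A sketch of that comprehension:

    nums = [1, 2, 3, 4, 5, 6]
    squares = [num ** 2 for num in nums]
    print(squares)   # [1, 4, 9, 16, 25, 36]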
List comprehensions can be quite flexible and used in creative ways. Given a list of sequences, we can easily generate a list of lengths. These structures support "conditional inclusion" as well, though we haven't yet covered operators like ==. The next example generates a list of 1s for each element where the first base is "T", and then uses the sum() function to sum up the list, resulting in a count of sequences beginning with "T".
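A sketch of both ideas (the sequence list is made up):

    seqs = ["TACTGG", "ACGTT", "TTCAGG", "GATCA"]   # made-up sequences

    lengths = [len(seq) for seq in seqs]            # [6, 5, 6, 5]

    ones = [1 for seq in seqs if seq[0] == "T"]     # [1, 1]
    print(sum(ones))                                # 2 sequences begin with "T"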
Although many Python programmers often use list comprehensions, we won't use them much in this book. Partially, this is because they are a feature that many programming languages don't have, but also because they can become difficult to read and understand owing to their compactness. As an example, what do you think the following comprehension does? [x for x in range(2, n) if x not in [j for i in range(2, sqrtn) for j in range(i*2, n, i)]] (Suppose n = 100 and sqrtn = 10. This example also makes use of the fact that range() can take a step argument, as in range(start, stop, step).)
- What is the value of total at the end of each loop set below? First, see if you can compute the answers by hand, and then write and execute the code with some added print() statements to check your answers.
- Suppose we say the first for-loop block above has "depth" 1 and "width" 4, the second has depth 2 and width 4, and the third has depth 3 and width 4. Can you define an equation that indicates what the total would be for a nested for-loop block with depth d and width w? How does this equation relate to the number of times the interpreter has to execute the line total = total + 1?
- Determine an equation that relates the final value of total below to the value of
- Given a declaration of a sequence string, like seq = "ATGATAGAGGGATACGGGATAG", and a subsequence of interest, like subseq = "GATA", write some code that prints all of the locations of that substring in the sequence, one per line, using only the Python concepts we've covered so far (such as len(), for-loops, and .split()). Your code should still work if the substring occurs at the start or end of the sequence, or if the subsequence occurs back to back (e.g., a sequence in which "GATA" occurs at positions 1, 7, and 11). As a hint, you may assume the subsequence is not self-overlapping (e.g., you needn't worry about locating "GAGA" in "GAGAGAGAGA", where it would occur at positions 1, 3, 5, and 7).
- Suppose we have a matrix represented as a list of columns: cols = [[10, 20, 30, 40], [5, 6, 7, 8], [0.9, 0.10, 0.11, 0.12]]. Because each column is an internal list, this arrangement is said to be in "column-major order." Write some code that produces the same data in "row-major order"; for example, [[10, 5, 0.9], [20, 6, 0.10], [30, 7, 0.11], [40, 8, 0.12]]. You can assume that all columns have the same number of elements and that the matrix is at least 2 by 2. This problem is a bit tricky, but it will help you organize your thoughts around loops and lists. You might start by first determining the number of rows in the data, and then building the "structure" of rows as a list of empty lists.
- It returns a special data type known as None, which allows for a variable to exist but not reference any data. (Technically, None is a type of data, albeit a very simple one.) None can be used as a type of placeholder, and so in some situations isn't entirely useless.
- Tuples are a cause of one of the more confusing parts of Python, because they are created by enclosing a list of elements inside of parentheses, but function calls also take parameters listed inside of parentheses, and mathematical expressions are grouped by parentheses, too! Consider the expression (4 + 3) * 2. Is (4 + 3) an integer, or a single-element tuple? Actually, it's an integer. By default, Python looks for a comma to determine whether a tuple should be created, so (4 + 3) is an integer, while (4 + 3, 8) is a two-element tuple and (4 + 3,) is a single-element tuple. Use parentheses deliberately in Python: either to group mathematical expressions, create tuples, or call functions—where the function name and opening parenthesis are neighboring, as in print(a). Needlessly adding parentheses (and thereby accidentally creating tuples) has been the cause of some difficult-to-find bugs.
- An alternative to range() in Python 2.7 is xrange(), which produces an iterable type that works much like a list of numbers but is more memory efficient. In more recent versions of Python (3.0 and above) the range() function works like xrange() did, and xrange() has been removed. Programmers using Python 2.7 may wish to use xrange() for efficiency, but we'll stick with range() so that our code works with the widest variety of Python versions, even if it sacrifices efficiency in some cases. There is one important difference between range() and xrange() in Python 2.7: range() returns a list, while xrange() returns an iterable type that lacks some features of true lists. For example, nums = range(0, 4) followed by nums[3] = 1000 would result in [0, 1, 2, 1000], while nums = xrange(0, 4) followed by nums[3] = 1000 would produce an error. |
Stoichiometry is a quantitative process. Given an initial mass or volume of reactant or product, the molar relationships between reactants and products in a chemical reaction are used to calculate a specific mass or volume of another reactant or product.
Problems - Mass/Mass
Essentially, all stoichiometry problems can be broken down into three steps:
1. Take the given quantity (i.e. mass or volume) and convert to moles
2. Use a mole to mole ratio to find the number of moles of the desired compound
3. Answer the question - convert the moles of the desired compound to the appropriate quantity (i.e. mass, volume)
Consider the following reaction and problem: Fe2O3 + 3 CO → 2 Fe + 3 CO2
Determine the mass of iron that is produced from 25.36 g of iron(III) oxide.
This problem is often referred to as a mass-mass problem since you are given the mass of a compound in the problem and asked to find the mass of another compound. The three step method described above can be applied in the setup shown below:
In the first conversion factor above, the molar mass of Fe2O3 is determined since it is needed to convert to moles (step one). When given a mass (as in this problem), dividing by molar mass always converts to moles. In the second conversion factor (step two), a ratio of Fe to Fe2O3 is determined from the coefficients in the reaction. It is worth noting that this is the only time the coefficients of the reaction are used when solving a stoichiometry problem. Finally, the molar mass of Fe is used to convert moles of Fe to grams of Fe (step three). Note also that in a mass-mass problem, the first and third steps are opposites of each other. That is, in step one you convert grams to moles (divide by molar mass) and in the third step you convert moles to grams (multiply by molar mass). Alternatively, this may be visually represented in a simplified manner:
25.36 g Fe2O3 ÷ 159.70 g/mol Fe2O3 × 2 Fe:1 Fe2O3 × 55.85 g/mol Fe = 17.74 g Fe
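The same three-step setup can be checked with a few lines of Python (molar masses as given above):

    molar_mass_fe2o3 = 159.70   # g/mol
    molar_mass_fe = 55.85       # g/mol

    moles_fe2o3 = 25.36 / molar_mass_fe2o3   # step 1: grams to moles
    moles_fe = moles_fe2o3 * 2               # step 2: 2 Fe per 1 Fe2O3
    grams_fe = moles_fe * molar_mass_fe      # step 3: moles to grams
    print(round(grams_fe, 2))                # 17.74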
Problems - Mass/Volume
The next problem is a mass-volume problem. In this case, either the mass of a compound will be given and the volume of another is asked, or the volume of a gas will be given and the mass of another compound will be asked. Reconsider the equation from above:
Determine the volume of carbon dioxide gas that will be produced from 112.5 grams of iron at STP.
First, it's important to understand the concept of STP, standard temperature and pressure. Standard temperature and pressure is a set of conditions (273.15 K and 1 atm) at which 1 mole of any ideal gas will occupy 22.414 L. As a conversion factor, 1 mole gas = 22.414 L.
Although many authors will assume STP unless otherwise specified, it is important to determine the conditions of the reaction. If the reaction is not occurring at STP, the conversion factor given above cannot be used. The problem can still be solved, but the ideal gas law must be incorporated. Note the usage of the aforementioned conversion factor in the solution:
Problems - Volume/Volume
A volume-volume problem concerns only the gaseous compounds of a reaction. Again, the relationship between 1 mole of gas at STP and the molar volume of 22.414 L is important. Consider the reaction below:
What volume of ammonia gas will react with 22.5 L of oxygen gas?
Note that the first and last steps in a volume-volume problem will cancel each other. This is because the first step, converting liters of oxygen to moles, requires a division by 22.414, while the third step, converting moles of ammonia to liters, requires a multiplication by 22.414. These two steps cancel each other and render step two (the mole to mole ratio) the only important step. It needs to be stressed that this only happens in a volume-volume problem.
Problems - Volumes Not at STP
Let's reconsider the iron(III) oxide problem from earlier. What if the question had asked us to determine the volume of carbon dioxide produced from 25.36 g of iron(III) oxide at 35.5°C and 855 torr of pressure? This problem now becomes a little longer because the conversion factor 1 mol of gas = 22.414 L cannot be used. By clearly stating a temperature and/or pressure (in this case, both) that are not at STP, the ideal gas law must be utilized in this problem. However, the first two steps of the problem remain unchanged. This is because the first step requires converting mass to moles. A mass-mole conversion relies only on molar mass; the pressure and temperature of the reaction are irrelevant. The second step involves a mole-mole ratio; once again, pressure and temperature are immaterial. The final step involves calculating a volume of gas. It is at this point that the ideal gas law is used. After these first two steps, the following can be determined:
The ideal gas law, PV=nRT, must be used to finish this problem. The variable P represents pressure, and must be in atm. The variable V is the volume, and is what we are solving for. The variable n represents moles, and 0.4764 will be substituted into the equation here. The variable R is the gas law constant and has a value of 0.0821. Its units are atm • L • mol-1 • K-1; note how the unit incorporates the units of all the other variables. It is for this reason that pressure must be in atmospheres. The temperature T must be in kelvin. First, let's make the necessary conversions for temperature and pressure.
For temperature, to convert degrees Celsius to kelvin, add 273.15. Therefore, 35.5 + 273.15 = 308.65 K. For pressure we must setup a set of proportions using the conversion factor 1 atm = 760 torr.
Solving the above proportion gives a value of 1.13 atm for "x." Now that these conversions are complete, the pressure, temperature, moles of carbon dioxide (solved earlier), and gas law constant can be plugged into the ideal gas law:
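Plugging the converted values into PV = nRT and solving for V, a quick check in Python using the numbers above:

    n = 0.4764           # mol CO2, from the stoichiometry steps above
    R = 0.0821           # atm*L/(mol*K)
    T = 35.5 + 273.15    # 308.65 K
    P = 855 / 760        # about 1.13 atm

    V = n * R * T / P
    print(round(V, 1))   # about 10.7 L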
Problems - Limiting Reactant
In the previous example, it was assumed that there was an unlimited supply of carbon monoxide to react with all of the iron. Sometimes this is not an appropriate or plausible assumption. Sometimes two distinct masses of reactants are given, and it cannot be assumed that they will consume each other completely. Imagine trying to bake a cake. The recipe states that two eggs are needed to make a cake. With a dozen eggs available, six cakes can be made. What if the recipe also states that a cup of sugar is necessary and only four cups of sugar are available? Regardless of the dozen eggs, only four cakes can be made, because after consuming four cups of sugar (with eight eggs), there will be no sugar remaining. At this point, no more cakes can be made. Sugar is considered the limiting reactant in this example. The eggs are the excess reactant. Recall the reaction from before:
What mass of iron will be produced from 25.00 g of iron(III) oxide and 25.00 g of carbon monoxide?
The solution is similar to the mass-mass problem from before, except there are two problems being solved at the same time.
The reactant that produces the smaller mass (or volume for a gas) of product is the limiting reactant. The limiting reactant always dictates the mass/volume of product that is produced, therefore the smaller quantity of product is always the solution to a limiting reactant problem.
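A sketch of the two parallel calculations for this problem (taking the molar mass of CO as 28.01 g/mol):

    # Fe2O3 + 3 CO -> 2 Fe + 3 CO2: compute Fe produced from each reactant
    fe_from_fe2o3 = (25.00 / 159.70) * 2 * 55.85     # 2 Fe per 1 Fe2O3
    fe_from_co = (25.00 / 28.01) * (2 / 3) * 55.85   # 2 Fe per 3 CO

    print(round(fe_from_fe2o3, 2))   # 17.49 g -- smaller, so Fe2O3 is limiting
    print(round(fe_from_co, 2))      # 33.23 g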
Problems - Excess Reactant
In the last problem it was imperative for us to calculate which reactant was consumed first because the reaction would stop at that point. The mass of products had to be determined from this substance, which was called the limiting reactant (Fe2O3). The other substance that was not fully consumed (CO) is the excess reactant. The mass of carbon monoxide that is consumed can be calculated, and the mass of carbon monoxide that remains unreacted can also be found. In order to do so, a stoichiometry problem must first be completed in which the limiting reactant is used to calculate the mass of excess reactant consumed.
Utilize the list of reactions and find a specific reaction for mass/mass, mass/volume, and volume/volume calculations.
A solution stoichiometry problem will involve aqueous reactants for which you will need to calculate a molarity or volume. Calculations that are part of a titration experiment are guaranteed to be solution stoichiometry. First, let's look at a solution stoichiometry problem that also incorporates some concepts discussed earlier on this page:
Given the reaction that follows, find the volume of sulfur dioxide gas that will be produced from 25.36 mL of 0.966 M hydrochloric acid at STP.
This problem is unique in that there are two numbers given within the problem, but since they are not values for different compounds, it is not a limiting reactant problem. The volume and molarity given must be used to find the number of moles of HCl. From there, the rest of the problem continues in the same manner as previous problems. Note that the volume must be converted to liters.
The molarity formula does not need to be done separately as it can be included into the normal dimensional analysis setup. Notice the units of the first two ratios. The first value is strictly liters, the unit for volume. The next ratio's units are mol/L, the unit for molarity, and when multiplied by the first value it is the equivalent of n = M×V.
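The reaction itself is not reproduced above, so the mole ratio below is an assumption; if, for example, the reaction were Na2SO3 + 2 HCl → 2 NaCl + H2O + SO2 (1 SO2 per 2 HCl), the calculation would look like this:

    moles_hcl = 0.966 * (25.36 / 1000)   # n = M x V, volume converted to liters
    moles_so2 = moles_hcl / 2            # assumed 1 SO2 : 2 HCl ratio
    liters_so2 = moles_so2 * 22.414      # molar volume at STP
    print(round(liters_so2, 3))          # about 0.275 L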
Solution Stoichiometry - Titration
The following could be a problem from a textbook, or data from a lab experiment:
19.52 mL of 0.285 M sulfuric acid was needed to titrate 42.81 mL of sodium hydroxide. Find the molarity of the sodium hydroxide.
Since the problem explicitly states that the molarity of sodium hydroxide is the unknown, we must have enough information to start with sulfuric acid. Upon inspecting the values again, it can be seen that both the molarity and volume of sulfuric acid are given, enough to find moles of sulfuric acid and complete the problem. |
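Assuming the usual balanced equation H2SO4 + 2 NaOH → Na2SO4 + 2 H2O (2 mol NaOH per 1 mol H2SO4), the arithmetic can be sketched as follows:

    moles_h2so4 = 0.285 * (19.52 / 1000)          # n = M x V
    moles_naoh = moles_h2so4 * 2                  # 2 NaOH per 1 H2SO4
    molarity_naoh = moles_naoh / (42.81 / 1000)   # back to molarity
    print(round(molarity_naoh, 2))                # about 0.26 M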
Using powerful supercomputers, astronomers at Durham University reveal further evidence of the existence of dark matter – the mysterious substance that is believed to hold the Universe together.
The scientists used computer models to simulate the formation of galaxies in the presence of dark matter and were able to demonstrate that their size and rotation speed were linked to their brightness in a similar way to observations made by astronomers.
Until now, theories of dark matter have predicted a much more complex relationship between the size, mass, and brightness (or luminosity) of galaxies than is actually observed, which has led dark matter skeptics to propose alternative theories that seem to fit better with what we see.
The research led by Dr. Aaron Ludlow of the Institute for Computational Cosmology, is published in the academic journal, Physical Review Letters.
Most cosmologists believe that more than 80 percent of the total mass of the Universe is made up of dark matter – a mysterious particle that has so far not been detected but explains many of the properties of the Universe such as the microwave background measured by the Planck satellite.
Alternative theories include Modified Newtonian Dynamics, or MOND. While this does not explain some observations of the Universe as convincingly as dark matter theory, it has, until now, provided a simpler description of the coupling between the brightness and rotation velocity observed in galaxies of all shapes and sizes.
The Durham team used powerful supercomputers to model the formation of galaxies of various sizes, compressing billions of years of evolution into a few weeks, in order to demonstrate that the existence of dark matter is consistent with the observed relationship between mass, size, and luminosity of galaxies.
Long-standing problem resolved
Dr. Ludlow said: “This solves a long-standing problem that has troubled the dark matter model for over a decade. The dark matter hypothesis remains the main explanation for the source of the gravity that binds galaxies. Although the particles are difficult to detect, physicists must persevere.”
Durham University collaborated on the project with Leiden University, Netherlands; Liverpool John Moores University, England and the University of Victoria, Canada. The research was funded by the European Research Council, the Science and Technology Facilities Council, Netherlands Organization for Scientific Research, COFUND, and The Royal Society.
Reference: “Mass-Discrepancy Acceleration Relation: A Natural Outcome of Galaxy Formation in Cold Dark Matter Halos” by Aaron D. Ludlow, Alejandro Benítez-Llambay, Matthieu Schaller, Tom Theuns, Carlos S. Frenk, Richard Bower, Joop Schaye, Robert A. Crain, Julio F. Navarro, Azadeh Fattahi and Kyle A. Oman, 21 April 2017, Physical Review Letters. |
Usually, more than one independent variable influences the dependent
variable. You can imagine in the above example that sales are
influenced by advertising as well as other factors, such as the number
of sales representatives and the commission percentage paid to sales
representatives. When one independent variable is used in a regression,
it is called a simple regression; when two or more independent variables are used, it is called a multiple regression.
Regression models can be either linear or nonlinear. A linear model
assumes the relationships between variables are straight-line
relationships, while a nonlinear model assumes the relationships
between variables are represented by curved lines. In business you will
often see the relationship between the return of an individual stock
and the returns of the market modeled as a linear relationship, while
the relationship between the price of an item and the demand for it is
often modeled as a nonlinear relationship.
As you can see, there are several different classes of regression
procedures, with each having varying degrees of complexity and
explanatory power. The most basic type of regression is that of simple linear regression.
A simple linear regression uses only one independent variable, and it
describes the relationship between the independent variable and
dependent variable as a straight line. This review will focus on the
basic case of a simple linear regression.
How does regression work to enable prediction? View the following
animation for a brief explanation of the basics of simple linear
regression. The subsequent text will develop ideas mentioned in the animation.
As indicated by the animation, one of the first steps in
regression is to plot your data on a scatter plot. The following
table lists the monthly sales and advertising expenditures for all of
last year by a digital electronics company.
In this case, you would plot last year's data for monthly sales
and advertising expenditures as shown on the scatter plot below.
(Data for independent and dependent variables must be from the same
period of time.)
Scatter plots are effective in visually identifying relationships
between variables. These relationships can be expressed
mathematically in terms of a correlation coefficient, which is
commonly referred to as a correlation. Correlations are indices of
the strength of the relationship between two variables. They can be
any value from –1 to +1. (Correlations are covered in greater detail
in the Covariance and Correlation topic of this section.)
When you use regression to predict future values of the dependent
variable, the ideal correlation between the independent and
dependent variable is high—in absolute value
terms, somewhere in the range between .5 and .99. Viewing the scatter
plot above, you can see that there appears to be some degree of
correlation between the level of advertising expenditure and product
awareness. When calculated, this correlation equals .89. This
historical data will enable you to predict the relationship between
the two variables in the future, before any further expense is
incurred. In order to make these predictions, a regression line must
be drawn from the information appearing in the scatter plot.
The figure below is the same as the scatter plot above, with the
addition of a regression line fitted to the historical data.
The regression line is the line with the smallest possible overall
distance between itself and the data points. As you can see, the
regression line touches some data points, but not others. The
distances of the data points from the regression line are called error terms.
A regression line will always contain error terms because, in
reality, independent variables are never perfect predictors of the
dependent variables. There are many
uncontrollable factors in the business world. The error term exists
because a regression model can never include all possible variables;
some predictive capacity will always be absent, particularly in
The typical procedure
for finding the line of best fit is called the least-squares method.
This calculation is usually performed using computer software. In
this calculation, the best fit is found by taking the difference
between each data point and the line, squaring each difference, and
adding the values together. The least-squares method is based upon
the principle that the sum of the squared errors should be made as
small as possible so the regression line has the least error.
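As a sketch of the least-squares idea (the data below are invented; they are not the advertising data from the table):

    # Fit y = m*x + b by minimizing the sum of squared errors.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g., advertising spend
    ys = [2.1, 3.9, 6.2, 8.0, 9.8]   # e.g., sales

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n

    # Closed-form least-squares estimates for slope and intercept:
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    m = num / den
    b = mean_y - m * mean_x

    print(m, b)          # fitted slope and y-intercept
    print(m * 6.0 + b)   # extend the line to predict y for a new x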
Once this line is determined, it can be extended beyond the
historical data to predict future levels of product awareness, given
a particular level of advertising expenditure.
The extension of the line of regression requires the assumption
that the underlying process causing the relationship between the two
variables is valid beyond the range of the sample data. Regression is
a powerful business tool due to its ability to predict
future relationships between variables such as these.
When you run a regression in Excel or in a statistics program,
the program will provide you with a report. The details of these
reports, and the definition of all the terms included in the report,
are beyond the scope of the course.
Equation of a Regression Line
You may recall the equation of a straight line from your review
of the Linear Functions topic in the Algebra section of this course.
Variables, constants, and coefficients are represented in the
equation of a line as f(x) = mx + b, where:
x represents the independent variable
f(x) represents the dependent variable
the constant b denotes the y-intercept—this will be the value of the dependent variable if
the independent variable is equal to zero
the coefficient m describes the movement in the
dependent variable as a result of a given movement in the independent variable
In finance, linear regressions are commonly used to describe the
returns of an individual security (dependent variable) compared to
the returns of the market in general (independent variable). The
equation for the simple linear regressions used to describe security
movements is also a straight line and is expressed in a format, which,
while similar, does contain a couple of twists. The equation below is
a regression equation for a straight line describing the relationship
between the returns of security I and the market in general.
ri represents the return of security I and is
the dependent variable
rm represents the return of the market in
general and is the independent variable
b is the slope of the regression line, and
it describes the level of movement in security I as a result of a
unit of movement in the market in general
a is the y-intercept of the regression line
ei is an error term that describes the
distance between an actual data point and the corresponding point on
the regression line
The graph below provides a visual depiction of this regression
line. The returns of the market in general are represented in this
graph by the returns of the S&P 500—a common surrogate for market returns.
You may be familiar with discussions in financial circles about
the beta (b) of a security being a measure
of the security's risk. The risk measure of beta is calculated using
regression analysis.
Beta, the slope of the regression
line, was described above as the level of movement in the returns of
a given security for each unit of movement in the market in general.
A security with a high beta is considered risky and will experience
big swings in its returns as compared to those of the market. A
security with a low beta is considered less risky and will have
returns that fluctuate less than those of the market. The alpha term
(a) in the regression equation of a
security represents the security's propensity to move independent of
the market. The alpha and beta of a security cannot be observed
directly but are estimated, based on the past performance of a
security, through regression analysis.
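As a sketch of how alpha and beta might be estimated from past returns (the return series below are invented for illustration):

    market = [0.01, -0.02, 0.03, 0.015, -0.01]    # market returns (rm)
    stock = [0.012, -0.025, 0.04, 0.02, -0.015]   # returns of security I (ri)

    n = len(market)
    mean_m = sum(market) / n
    mean_s = sum(stock) / n

    # beta = Cov(ri, rm) / Var(rm); alpha = mean(ri) - beta * mean(rm)
    cov = sum((m - mean_m) * (s - mean_s) for m, s in zip(market, stock)) / n
    var = sum((m - mean_m) ** 2 for m in market) / n

    beta = cov / var
    alpha = mean_s - beta * mean_m
    print(beta, alpha)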
1. The placement office of a graduate business school would like to
predict the starting salaries of its students. The placement office
administrators are highly confident (based on their collective past
experience) that starting salaries depend on a combination of factors,
including the number of years of previous work experience, the
student's graduate school GPA, and the student's GMAT score. Is it
appropriate for the placement office to use a simple linear regression
to predict the starting salaries of its students? Why or why not?
2. The following two graphs illustrate simple linear regressions. Which has a higher predictive quality and why?
3. Consider the following scatter plot with a regression line of the
advertising dollars spent (in thousands) and sales (in millions). If
the company were to spend $275,000 on advertising, what would you
predict the sales level to be? |
Newton's Three Laws
Newton's three laws -- what do they mean?
Newton's laws provide a prescription for calculating the path of a body moving under the influence of external forces. In this knol, we explain the laws on the high school level, without the use of advanced mathematics and technical jargon. This presentation differs from most textbooks in focusing on the meaning of the physical concepts, rather than analyzing Newton's formulation. If you ever wondered what "action equals reaction" was supposed to mean, this knol is for you.
Explaining the Explanation
Essential to understanding Newton's work is a grasp of the experimental basis behind his discoveries. The previous knol, "The Sky Before the Telescope," describes the sky as it was perceived before the year 1600. It discusses, with numerous animated illustrations, the apparent motion of the stars and planets, the controversy about "what is orbiting what," as well as the roles of latitude and longitude. It describes data collected by Tycho de Brahe and summarized by Kepler in his three laws.
Kepler showed that, "from the point of view of the stars," planetary orbits are ellipses. However, the reason why they were ellipses was a mystery. It was Newton who explained the motion of the planets. An explanation, in physics, means showing how the observed data follow from a deeper, more general mathematical theory. There was no such theory available at the time. Newton had to create that general theory, and even create the required mathematics on the fly, while he was making his deductions about motion. On the other hand, he did not have to make discoveries attributed to him by some textbooks, such as the idea that the force of gravity is universal and gets weaker with distance. Newton's younger friend, Edmond Halley, visited him in Cambridge in 1684 to ask whether he could calculate what the orbit of a planet would be if gravity was decreasing as the inverse square of the distance. Halley had found the mathematics of calculating this too daunting. So also did Newton's colleague and fellow member of the Royal Society, Robert Hooke, who corresponded with Newton about the properties of gravity. Newton had been mulling over such problems already, and Halley's challenging question spurred him to work on it. Newton was eventually, after years of work, able to do the calculation Halley requested. First he had to make that question more precise. He had to formulate an equation which used new mathematical concepts, and also to solve it. The solution of that equation describes the motion of the planets. Halley was impressed with Newton's groundbreaking work, urged him to publish it and, being a man of means, underwrote the cost of printing it. The famous Principia [1-3] was the result. Newton's three laws, the propositions of the Principia, were an attempt to put the rules for the equation into words. Those words are difficult to grasp, because Newton himself struggled with his new concepts [4-7], and also because elementary explanations avoid the advanced mathematics needed for the equation. As a result, students tend to memorize and parrot back statements which they do not understand. Here, we propose to change that unfortunate state of affairs by taking a different approach to the same pedagogical problem, an approach known as "conceptual physics." (See the author's bio.)
Newton's World - The Sacred and the Secular

Sacred and secular theories explaining experimental data were not clearly separated in Newton's time. The controversy which Galileo's discoveries stirred up with the Catholic Church is but one small illustration. The transition from the medieval belief in crystalline spheres emitting celestial music to the post-Newtonian world of scientific astronomy, guided by his theories, was slow but unrelenting during the 14th to 17th centuries.
Newton was born in 1642, just a few months after Galileo Galilei died, and 120 years after Ferdinand Magellan's crew completed the first circumnavigation of the globe. As he was growing up, Newton knew that the Earth is round and that gravity, "pointing down," pulls things to the center of the Earth. He knew that Galileo had aimed his telescope at the sky and observed "four new stars" clearly orbiting Jupiter, not the Earth; these objects are now known as Jupiter's moons. Galileo had also observed the surface of our moon, and saw that it was not unlike the surface of the Earth: not a perfect sphere made of "heavenly ethereal substance" but rather a stony desert covered with craters. The time was ripe to bridge the divide which in the past had separated the secular and the sacred spheres, and to show that the same laws are valid everywhere: "as on the Earth, so in the heavens."
While the telescope was in use in Newton's time, and Newton himself made important improvements to it, data on the motion of the planets had already been gathered and organized before the telescope. Roughly a half century before Newton was born, Tycho Brahe and Johannes Kepler had observed both novas (new stars appearing in the heavens) and comets, such as the comet of 1577, which passed without hindrance through the hypothetical "crystal spheres." Their accomplishments and lives are discussed in more detail in the knol "The Sky Before the Telescope." Their work culminated in Kepler's laws, the understanding of which is essential for the next step, which was Newton's. Newton made a successful synthesis of what was then known and wrote an equation which was able to produce the orbits of the planets. Today, we call such equations "the equations of motion."
As in Dumas' "The Three Musketeers," there are actually four, not three, laws needed to explain the motion of the planets. The "three laws" of textbook fame and the "law of universal gravity" have to work together to produce the equation of motion. On the other hand, the "First Law" is a consequence of the Second; so if we take one away and add another, we end up with three laws anyway. Before plunging into Newton's theoretical contributions, we need to look at another important area of exploration essential to understanding his work: the state of physical theory in his time.
Static Forces and the Equations for Equilibrium

The motion of the planets had been observed to be periodic: their movement repeats, without apparent changes. This contrasts with observations on the Earth, where a moving mass tends to slow down and eventually stop. When all motion has come to a stop, the system is said to be in equilibrium. These observed differences had led to the ancient belief that the laws governing the heavens were different from those on Earth. Detailed new observations of the heavens and explorations of forces had begun to erode this belief, however.
The concept of force was less clearly defined in Newton's time than it is today. Scholars of the Middle Ages discussed "vis viva," a living force, which brings stationary objects into motion, and "vis mortua," a dead force, which creates tension, or pressure, but does not give birth to new motion. Humans had, of course, experienced both since the advent of the human race.
They recognized different static forces: the force needed to move an object or lift a weight, the force needed to buttress a cathedral wall or arch, and the force of springs. Boyle titled his 'gas law' article, on gas resisting compression, "The Spring of the Air."
In this applet, called a spring-pendulum, heavy bobs can be hung on elastic springs, and the amount of friction can be adjusted. It is a system which exhibits the two types of behavior we mentioned above: motion as seen in the sky and motion as seen on Earth. When the friction constant b is set to zero, the mass keeps moving in a wavy, periodic motion forever, like the planets. When you introduce friction, the amplitude decreases, and after a few oscillations the motion stops and the system is in equilibrium. This second behavior illustrates the situation common on Earth.

If we know the parameters of the system, that is, the strength of the spring and the weight of the bob, we can calculate its equilibrium position by solving a simple algebraic equation with one unknown. Of course, we can measure that position, too. Here we have both the theory and the experiment. Use the applet in lieu of building the experiment: click on the applet and try different parameters for the system, different spring constants, different masses. Drag the bob away from the equilibrium point using the mouse, and then click start. (Applet: damped spring pendulum.)

At this stage, we are not attempting to calculate the pattern of the bobbing motion which the mass makes before it settles into equilibrium. That motion is the dynamics of the system. However, the same spring and mass will always tend to the same equilibrium, and this, the statics of the system, we can easily calculate. The equation to solve in this case is the balance of two forces: the spring pulls the mass up; gravity pulls it down. An algebraic equation describing this simple pendulum looks like this:
K * y + m * g = 0    (force of spring + weight of mass m = 0)

There is one unknown, the number y, the extension of the spring. You may call this balance "action equals reaction," the forces being equal in magnitude and opposite in sign (F.spring = -F.weight). However, it is better to read the equation as "the forces are balanced," i.e. their sum is zero (F.spring + F.weight = 0):
At equilibrium, the sum of all forces acting on a mass is zero.
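As a concrete illustration, the one-unknown equilibrium equation can be solved in a few lines. This is a minimal sketch with made-up parameter values (the applet's actual spring constant and bob mass are not given in the text), writing the balance in magnitudes so the signs stay simple:

```python
# Balance of magnitudes at equilibrium: spring pull up (K*y) equals
# weight pulling down (m*g), so the extension is y = m*g/K.
# K and m are illustrative values, not taken from the applet.
K = 25.0   # spring constant, N/m
m = 0.5    # bob mass, kg
g = 9.81   # gravitational acceleration, m/s^2

y = m * g / K
print(f"equilibrium extension: y = {y:.3f} m")  # about 0.196 m
```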
Vector Algebra

Warning: two paragraphs of math concepts ahead! We promised no higher math, but we must mention vectors. The motion of the mass in the spring-pendulum applet was just up and down in one dimension, along a vertical line. Position was described by one number (the elevation above the floor, or the extension of the spring), and so was velocity. In the applet at the left, motions are confined to a plane, or two dimensions. The position is given by two numbers, and forces have a direction. For our purposes, we can say that a vector is a pair of numbers. When we consider motion in three-dimensional space, then positions, velocities, and forces are given by triplets of numbers. The concept of vectors involves more than this, but considering a vector as an n-tuple of numbers (a pair, a triplet, ...), known as the components of the vector, is sufficient for our uses here. One vector equation is actually three equations, one for each component. Think of it as a shorthand way of grouping together numbers which have similar physical meanings.
Equilibrium of Three Forces

The static equilibrium applet at the right shows a more complex system: three bobs, constrained by ropes and pulleys so as to move together in one plane. Playing with the applet will illustrate clearly what is explained in words below. The positions of the three masses considered as a unit are called the "configuration of the system." When we get serious about the study of any system, we need to select a "frame of reference," which is a way to assign numbers to different positions of masses, i.e. different configurations of the system. Here the configuration is determined by the position of the knot where three ropes are joined. The knot moves in the vertical plane, and its position determines the heights, or elevations, of the three masses above a selected horizontal plane, e.g. the floor of the room. The forces acting at the knot are vectors, meaning they have a magnitude and a direction. These two attributes are shown by the arrows: a longer arrow means a greater magnitude.
The concept of vectors and their algebra is the subject of a separate knol, one on linear algebra and geometry. However, this applet has an option ("Parallelogram of forces") which shows how, at equilibrium, the sum of two forces is equal in size, and opposite in direction, to the third force. As in the case of the pendulum in the first applet, we can find the equilibrium point. (Note that we cannot as yet describe the process by which the system settles to that equilibrium.) In still more complex systems, we may need several equations to determine the positions of several masses in three-dimensional space. In such cases, we have several algebraic equations for several unknowns. By solving the equations and finding those unknowns, we can determine the equilibrium configuration of the system.
The Catenary Curve

In even more complex cases, not even several algebraic equations are sufficient. A classical example is this question: "What is the shape of a rope suspended by its ends, as illustrated at the right?"
Here, the unknown is not a single number, but a curve. That curve, called a catenary, can be found as the solution of a different type of equation, a differential equation. While Newton was looking for an equation which would have as its solution a curve (the orbit of a planet), Gottfried Wilhelm Leibniz formulated an equation for the catenary, developing differential calculus independently of Newton while solving that problem.
It is not difficult to get an intuitive feel for differential calculus. Imagine the hanging rope replaced by a chain made up of many links. You can then see the question of the shape as a problem of the static equilibrium of many masses. There would be a large number of algebraic equations, which would have as their solution the elevations of the links. The elevations of all the links define the curve. That curve, the catenary, is shown here in blue, compared with a red parabola. (If you enter the search term "catenary" into a search engine, you will find other interesting properties it has.)
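For readers who want to see the comparison numerically, here is a small sketch that tabulates a catenary y = a*cosh(x/a) against the parabola matching it at the vertex and at the endpoints. The parameters a and L are arbitrary illustrative choices, not values from the knol's figure:

```python
import numpy as np

# Catenary y = a*cosh(x/a) versus the parabola agreeing with it
# at x = 0 and x = ±L; in between, the two curves differ slightly.
a, L = 1.0, 2.0
x = np.linspace(-L, L, 9)
catenary = a * np.cosh(x / a)
c = (a * np.cosh(L / a) - a) / L**2      # match heights at x = ±L
parabola = a + c * x**2
for xi, yc, yp in zip(x, catenary, parabola):
    print(f"x={xi:+.2f}  catenary={yc:.4f}  parabola={yp:.4f}")
```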
Thus we see that differential equations can be imagined as a large number of algebraic equations. While Leibniz and his colleagues used the tools of differential calculus to find the static equilibrium of complex mechanical systems, Newton used these same mathematical methods to find the dynamics of a simple system: the motion of a single mass in a field of force.
Dynamic Forces and the Equation of Motion
We mentioned the force of friction when discussing the spring-pendulum applet. It is a dynamic force because it depends on velocity. Friction is zero when velocity is zero; in other words, when an object is not moving, there is no friction. How friction depends on velocity is a complex story. In some cases, friction is proportional to velocity; an example is an object pulled slowly through a fluid (Stokes' law). In other cases it depends on the velocity squared, or just on the sign of the velocity. Fortunately, we do not need to consider friction in our discussion. Newton does not talk about frictional force in his three laws because he was thinking about planets, which move through a vacuum, where friction is so very small that it could be ignored. Nevertheless, planets do slow down slightly over millennia. It is not a coincidence that the moon always faces the Earth with the same side: its rotation around its axis has slowed so that it now makes only one rotation per orbit. However, these are other stories.

Newton focused on the other dynamic force, the force of inertia. Recall that weight is a force; it is the force of gravity acting on a mass. Weight is proportional to mass: doubling the mass doubles the weight. In a similar way, the force of inertia is proportional to the mass of a body. It is also proportional to the object's change in velocity, that is, to its acceleration. Acceleration, in the mathematical sense, can increase or decrease the velocity; it can be positive or negative. Force is required to get an object going faster, to get it to slow down, to stop, or to change its direction. An object with greater mass requires more force to perform these actions than does an object with a smaller mass: doubling the mass requires doubling the force to get a comparable acceleration.

If you encounter the textbook statement that "force is mass multiplied by acceleration," remember that this refers only to this kind of force, the force needed to overcome inertia. There are many forces, as we have already discussed. The force of inertia is just one additional force, a dynamic force, which Newton added to the collection of known forces. It was known qualitatively before Newton; Newton gave it numerical values. (Since force, like velocity and acceleration, is a vector, it has both direction and magnitude, and so is described by several numbers.)

We can calculate the motion of any moving body with this general rule: the sum of all forces (dynamic and static) acting on each mass is zero. Each moving mass in a system has its own equation of motion. This is not an algebraic equation, but a differential equation. For the simple spring-pendulum system discussed above, that equation looks like this:
K * y - m * y'' = 0    (force of the spring - force of inertia = 0)

Here y, the extension of the spring, describes the position of the mass. The symbol y'' describes the acceleration (the change of velocity with time). The symbol y', not used in this equation, would describe the velocity (the change of position with time). (Applet: damped spring pendulum.)
Instead of focusing on the static condition of equilibrium, we are now interested in the motion of the suspended bob before it reaches equilibrium. Let us look at it in the simplest possible terms. The bob moves up and down with a certain velocity before coming to rest. The velocity is changing, and the force of inertia is resisting that change. The sum of the force of inertia and the force of the spring equals zero. That equation can be solved, of course. The damped spring applet here illustrates the solution which these explanations and equations describe. When the frictional, or damping, parameter is zero, the solution is a periodic curve called a sinusoid.
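The bobbing motion itself can be imitated numerically by taking many small time steps. The following sketch uses illustrative parameter values and a simple stepping scheme; it is not the applet's actual code:

```python
# Step the damped spring-pendulum equation  m*y'' = -K*y - b*y'
# forward in small increments of time. Parameters are illustrative.
m, K, b = 0.5, 25.0, 0.0       # set b = 0 for the frictionless, periodic case
dt, steps = 0.001, 5000
y, v = 0.1, 0.0                # start 0.1 m from equilibrium, at rest

for n in range(steps):
    a = (-K * y - b * v) / m   # acceleration from the force balance
    v += a * dt
    y += v * dt
    if n % 1000 == 0:
        print(f"t={n*dt:.1f}s  y={y:+.4f} m")
# With b = 0 the printed values trace a sinusoid; with b > 0 they decay.
```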
The Motion of the Planets

Newton was interested in a slightly more complex motion than the oscillation of a pendulum, namely the motion of a planet orbiting the sun. It was clear to Newton, as to many others before him, that the moon, the closest of all heavenly bodies, is orbiting the Earth. The Earth attracts the moon, as it does everything else, and the moon is kept in its circular orbit by centrifugal force. Here we have a dynamic balance of two forces: the force of gravity and the centrifugal force, which is a special case of the force of inertia. [5-8] Newton was able to verify that Earth's gravity at the moon's orbit is weaker than on the surface of the Earth. That calculation is easy for a circular orbit, but the orbits of the planets were known not to be circular.
(Applet: select a date to see the positions of the planets, choose animation, or single-step with the << and >> buttons. This is how planetary orbits look from Earth.) When observed from the "firmament" (the imaginary sphere to which the stars are attached), the orbits are ellipses, with the sun at one focus. When Halley and others approximated the orbits as circles with the sun at the center, they saw that the centrifugal force was just right to balance gravity, provided that gravity was getting weaker with the inverse square of the distance from the sun. The tantalizing problem of calculating that motion exactly was the problem Halley challenged Newton to solve. Newton described his solution in his Principia. These are the basic conceptual underpinnings of classical, a.k.a. Newtonian, mechanics.
Solving the Equations of Motion

You can use the two applets described below to see how Newton's equations are solved to obtain the trajectory of a moving body. You can now be a prime mover: create a planet with a stroke of your mouse. The direction and speed of that stroke define the velocity of the new planet; position and velocity determine the orbit. This applet demonstrates how an orbit is generated by a differential equation, once the position and velocity of an object are given. If the initial speed is too small, the planet will crash into the star; if it is too large, exceeding escape velocity, the planet will fly away, never to return.

The "law of universal gravitation" states that gravity gets weaker as the "inverse square" of the distance, as the second applet illustrates. Simply stated, if the distance between bodies is doubled, the attraction is reduced 4 times. For the more mathematically inclined: if the distance is r and gravity is g, then at distance 2*r gravity will be g/4. In more complex mathematical terms, gravity changes as 1/(r*r), or r^(-2), r to the power of -2; here -2 is the exponent (the minus sign means inverse and the 2 means square). The inverse square law applies to the attenuation of many other physical quantities, such as the field of electric charges and the intensity of light or sound.
You can effortlessly check Newton's calculations by means of this applet.
Only the exponent -2 will produce ellipses.
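You can also check this with a short numerical experiment. The toy integrator below (written with units chosen so that G*M = 1, and arbitrary initial conditions) steps a planet forward under an inverse-square pull; changing the exponent in the force law opens up the closed orbit:

```python
import math

# Toy orbit: a planet pulled toward a star at the origin with
# force magnitude 1/r**2 (units with G*M = 1). Initial conditions
# are illustrative; v < circular speed gives an ellipse.
x, y = 1.0, 0.0
vx, vy = 0.0, 0.8
dt = 0.001
for n in range(20000):
    r = math.hypot(x, y)
    ax, ay = -x / r**3, -y / r**3   # 1/r**2 magnitude times unit vector
    vx += ax * dt; vy += ay * dt
    x += vx * dt;  y += vy * dt
    if n % 5000 == 0:
        print(f"t={n*dt:5.1f}  r={r:.3f}")
```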
You now have an outline of the concepts and a taste of the mathematics in Newton's discoveries. What remains is to be aware of the different, and sometimes confusing, terminology some textbooks use and of the limitations of our new knowledge.
Newton's Three, plus One, Laws

We presented Newton's laws by explaining his thinking and his accomplishments, while avoiding attempts to analyze the words with which he tried to explain his discoveries. Newton wrote an equation for calculating the motion of a planet, invented the mathematics for solving it, and showed that the solution was an ellipse, with the sun at one focus. Wikipedia gives us a glimpse of what Newton himself wrote: you will find there the literal text of Newton's three laws in the original Latin and in an English translation. How does what Newton wrote, and what textbooks paraphrase, relate to what we have explained? We summarize this here, with the laws in reverse order, for ease of understanding.
Inertial Systems and the Limitations of Newton's Mechanics

The ideal which Einstein emphasized is to formulate the laws of physics so that they are valid in all frames of reference, moving or not. When laws are the same in all frames, they are said to be "invariant" (unchanging). We can then do our calculations in any frame of reference. We will get different curves in different frames of reference, different orbits, but when we transform them by the rules of geometry, they will all agree. It is still important to choose the frame of reference carefully, since the calculations and the results are simpler in some frames.
Newton's equations, as he formulated them, are not invariant. They are valid only in special frames of reference called inertial frames. In an inertial system, inertia obeys the second of Newton's laws; in non-inertial systems it does not. Newton and his contemporaries considered a frame fixed to the stars as representing absolute space. As we see things now, such a frame is just one inertial system. All frames which are moving uniformly with respect to the stars are inertial systems, and we can use Newton's laws in any of them. For example, we know that in the frame of reference attached to the stars, the orbits of the planets are simple ellipses. Those simple orbits are obtained by Newton's laws in any inertial system.

A simple example of a non-inertial frame is your own experience on a bus which suddenly changes direction: you are pulled to one side. It is inertia, of course. However, in the frame of reference attached to the bus, this force does not follow the second law. Instead of leaving you at rest, inertia pushes you around. For many practical problems we can treat the Earth as an inertial system. In the experiment described by the spring-pendulum applet, we used a frame of reference attached to the floor, that is, to the Earth. The fact that the Earth rotates with respect to the stars does not appreciably change the results of most Earthly experiments. It is observable in some experiments which last a long time and involve rotating masses; for example, hurricanes, large rotating masses persisting for days, are affected by the rotation of the Earth.
The challenging question, "With which stars does the axis of a gyroscope remain aligned?" has been asked, and the answer is complex and not quite final. For our purposes, we consider them to be the stars that we can see, that is, the stars of our galaxy. In situations of high velocities or of gravitational fields of high intensity, Newton's theory does not work; Einstein's theory of general relativity must be used. Newton's mechanics, today called classical mechanics, contains both dynamics and statics. His concepts ruled physics up until the 20th century. They were generalized further and are now part of the disciplines of "modern physics," which includes quantum mechanics and Einstein's theory of relativity. Fundamental aspects of these theories are still based on principles first formulated by Newton in his Principia. The theory of relativity is explained in the follow-up knols called the "Relativity Triptych."
References

- Principia, full text.
- Halley demonstrated the universal validity of Newton's new theory by applying it to a comet.
Astrophysicists simulate the dark matter that cradles a galaxy.
In the early 1930s, the eminent Swiss astronomer Fritz Zwicky noticed something very odd as he was looking to the skies: Galaxies seemed to move around each other too fast.
Zwicky was scrutinizing a group of eight galaxies orbiting one another more than 350 million light years away in the Coma Galaxy Cluster. Drawing from early work by Isaac Newton and Albert Einstein, he understood the balance of forces necessary to keep the galaxies in this dance. Like a yo-yo swung by a child, they need both the centrifugal force pushing them outward and the string—in this case gravity—pulling them back in. Too much force inward and the system collapses; too much outward and the galaxies fly apart.
From a yo-yo to a galaxy, every object in the universe with mass exerts a gravitational pull on other objects. To Zwicky, the galaxy cluster he was observing appeared to have too little mass, and therefore too little gravity, to keep the galaxies from flying off into space. He and his colleagues theorized that these and all galaxies must be dominated by matter invisible to the eye. They called it dark matter.
Equipped with an understanding of how gravity works and extensive observations of planets, stars and galaxies, scientists have in fact concluded that less than one-fifth of the matter in the universe is visible. The remainder, dark matter, has no interaction with regular matter except through the force of gravity. Nevertheless, so much dark matter exists in the universe that its gravitational force controls the lives of stars and galaxies.
What, then, would dark matter look like if we could see it? A team led by astrophysicist Piero Madau of the University of California–Santa Cruz has taken a substantial step toward answering this question. Using the power of Oak Ridge National Laboratory's Jaguar supercomputer, Madau's team has run the largest simulation ever of dark matter evolving over billions of years to envelop a galaxy such as our own Milky Way. The envelope is known as a dark matter halo.
Madau and his collaborators—including Juerg Diemand and Marcel Zemp, both of UCSC, and Michael Kuhlen of the Institute for Advanced Study in Princeton, New Jersey—reviewed the simulation and their findings in the journal Nature. The simulation followed a galaxy's worth of dark matter through nearly the entire history of the universe, dividing the dark matter into more than a billion separate parcels. The effort was staggering and involved tracking, over 13 billion years, the evolution of 9,000 trillion trillion trillion tons of invisible material spread across 176 trillion trillion trillion cubic miles. Each parcel of dark matter was 4,000 times as massive as the sun.
Hypothetical particles with real gravity
Scientists are still trying to determine exactly what dark matter is. Candidates include hypothetical particles such as the neutralino, the sterile neutrino, the axion, or some other weakly interacting massive particle. Fortunately, researchers do not need to fully understand dark matter in order to simulate it. All they need to know is that dark matter interacts with other matter only through gravity and is cold, meaning it is made up of particles that were moving slowly when galaxies and clusters began to form. Using initial conditions provided by observations of the cosmic microwave background, Madau and his team simulated dark matter with a computer application called PKDGRAV2, developed by a group of numerical astrophysicists at the University of Zurich. The simulation ignored visible matter and focused entirely on the gravitational interaction among a billion dark matter particles. The project had a major allocation of supercomputer time through the Department of Energy's Innovative and Novel Computational Impact on Theory and Experiment program. The simulation used about 1 million processor hours on the Jaguar system, located at ORNL's National Center for Computational Sciences.
"The computer was basically just computing gravity," Madau explained. "We have to compute the gravitational force among 1 billion particles, and to do that is very tricky. We are following the orbits of these particles in a gravitational potential that is varying all the time. The code allows us to compute with very high precision the gravitational force due to the particles that are next to us and with increasingly less precision the gravitational force due to the particles that are very far away because the gravity becomes weaker and weaker with distance."
Dark matter is not evenly spread, although researchers speculate it was nearly homogeneously distributed immediately after the Big Bang. Over time, however, gravity pulled the matter together, first into tiny "clumps" having more or less the mass of Earth. Over billions of years these clumps were drawn together, a process that continued until they combined to form halos of dark matter massive enough to host galaxies.
One lingering question was whether the smaller clumps would remain identifiable or would smooth out within the larger galactic halos. The answer required a state-of-the-art supercomputer such as Jaguar, which at the time of the simulations in November 2007 was capable of nearly 120 trillion calculations a second. Because earlier simulations did not have the resolution to resolve any unevenness, the results appeared to show the dark matter smoothing out, especially in the galaxy's dense inner reaches. Madau's billion-cell simulation, however, provided enough resolution to verify that the earliest forms of dark matter do indeed survive and retain their identity, even in the very inner regions, where our solar system is located.
"We expected a hierarchy of structure in cold dark matter," Madau explained. "What we did not know is what sort of structure would survive the assembly because as these subclumps come together they are subject to tidal forces and can be stripped and destroyed. Their existence in the field had been predicted. The issue was whether they would survive as assembled together to bigger and bigger structures."
"What we find," he continued, "is the survival fraction is quite high."
Madau's team will be able to verify its simulation results using the National Aeronautics and Space Administration's Gamma-Ray Large Area Space Telescope. Launched on June 11, 2008, the telescope will scan the heavens to study some of the universe's most extreme and puzzling phenomena: gamma-ray bursts, neutron stars, supernovas and dark matter, just to name a few. While dark matter particles cannot themselves be detected (direct detection of dark matter is being pursued by large underground detectors), researchers believe that dark matter particles and antiparticles may annihilate when they bump into each other, producing gamma rays that can be observed from space. The clumps of dark matter predicted by Madau's team should bring more particles together and thereby produce an increased level of gamma rays.
A second verification comes from an effect known as gravitational lensing, in which the gravity exerted by a galaxy along the line of sight bends the light traveling from faraway quasars in the background. If the dark matter halos of galaxies are as clumpy as this simulation suggests, the light from a distant quasar should be broken up, like a light shining through frosted glass.
"We already have some data there," Madau noted, "which seems to imply that the inner regions of galaxies are rather clumpy. The flux ratios of multiply imaged quasars are not as you would predict with a smooth intervening lens potential. Instead of a smooth lens, there is substructure that appears to be affecting the lensing process. Our simulation seems to produce the right amount of lumpiness."
Madau's simulations in less than two years have reshaped the discussion about how our universe is held together. As researchers have access to increasingly powerful supercomputers, their findings could enable them to join their predecessors Newton and Einstein in unlocking the door to some of humankind's most fundamental questions.—Leo Williams
Abstract: The experimenters conducted a total of four mini labs. In each lab they took measurements with different instruments: a ruler, a caliper, a stopwatch, and two spring scales with different newton capacities. The objective in each experiment was to measure and record different objects and to note the advantages and uncertainties of each instrument. The experimenters found that every instrument comes with an uncertainty. A ruler can be quite accurate but is not as precise as a caliper; the caliper was the most precise instrument the experimenters used. When recording with the stopwatch, the experimenters found that reaction time played an important role. The two spring scales came with different problems because each was limited in the force, in newtons, that it could measure. The main advantage of each instrument was its accuracy.
Introduction and Background: The experimenters hypothesized that if more instruments are used to measure or record an object, then the measurements should be more precise and accurate. The main question they asked was, “How do we know if our measurements are precise or not?” The experimenters knew that they needed the right instruments and to use more than one for each experiment. Their main concern was the precision of each measurement; therefore, multiple instruments were used in three of the four labs. Method: In the first part of the lab, Rulers vs. Calipers, the experimenters were asked to compare the precision and accuracy of measuring the same objects with a ruler and a caliper.
Materials: One marble
One 8 oz styrofoam cup
Procedures: Step one: The experimenters used the ruler to measure the diameter of the marble, recorded the measurement along with an uncertainty, and then did the same with the caliper. Step two: They used the ruler to measure the outer and inner diameters of the washer and recorded the measurements, along with an uncertainty. The experimenters used the caliper to double-check their measurements. Step three: They repeated these steps to find the thickness of the washer, the height of the cup, and the length of the string. In the second experiment, The Spring Force Scale, the experimenters learned how to calibrate a spring scale and how to read mass in grams and force in newtons. To calibrate the spring scale, the experimenters adjusted the knobs on top of the scale until the plastic piece in the center reached zero.
Materials: Roll of masking Tape
Box of Modeling Clay
Single Hole Punch
5 N Spring Scale
10 N Spring Scale
Procedures: Step one: The experimenters adjusted each scale as needed until it read zero. Step two: The experimenters hung each item freely on the 5 N spring scale and observed how many grams it was equivalent to by watching the plunger against the numbers on the scale. Step three: They wrote down the readings, and the uncertainties they encountered, while performing this experiment. Step four: The experimenters repeated the first few steps with the 10 N spring scale. In the third experiment, The Stopwatch, the experimenters used a stopwatch to record the time a marble took to fall from a constant height to the ground. The main objective of this experiment was to learn the errors and uncertainties involved in using a stopwatch.
Materials: One Marble
Constant Drop Height (which the experimenters provided)
Procedures: Step one: The experimenters chose a constant height from which to drop the marble. They decided on a flat desktop so they could reduce mistakes. Step two: One of the experimenters became familiar with the stopwatch so that it would be easier to record the time it took the marble to hit the ground. Step three: The other experimenter recorded each time in a table, along with the uncertainties. Step four: They dropped the marble from the constant height five times. In the final experiment, Density of the Mass Set, the experimenters used the vernier caliper to find the volume of some of the figures in the mass set, which eventually led them to the density.
The experimenters used the equation density = mass/volume, which helped them figure out the density of the 100 g mass. Discussion: The experimenters concluded that with everything you measure there will always be an uncertainty, and each experiment has different uncertainties. In the first lab, the uncertainties included being off by a millimeter or so with the ruler; the caliper was very precise but also had a minor uncertainty at the millimeter scale. In the second experiment, the experimenters faced several uncertainties; for example, the two spring scales gave different results. The experimenters also had uncertainties in the third experiment when measuring the time the marble fell: the biggest uncertainty was the reaction time of the experimenter. In the final experiment, the main uncertainty came from the quantities entering the equation.
The experimenters were unsure of some measurements; therefore, they were not able to get fully precise results. Their hypothesis was that if more instruments are used to measure or record an object, then the measurements should be more precise and accurate. After all of the research and labs, the hypothesis turned out to be correct. The experimenters experienced error when measuring with the caliper; in the future they can avoid this by making sure they know how to use it before starting the experiments. What they learned throughout the whole experience is that you will always encounter problems, and the only way to fix them is to keep trying until you get the best possible answer.
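The density calculation from the final experiment, together with a standard propagated uncertainty for a quotient, can be sketched as follows. The mass and volume values are illustrative stand-ins, since the report does not list the actual caliper readings:

```python
# Density from mass and volume, with a simple propagated uncertainty.
# Illustrative values; the lab's actual readings are not given above.
m, dm = 100.0, 0.1        # mass in g and its uncertainty
V, dV = 12.6, 0.2         # volume in cm^3 and its uncertainty

rho = m / V
# Relative uncertainties add in quadrature for a quotient:
drho = rho * ((dm / m)**2 + (dV / V)**2) ** 0.5
print(f"density = {rho:.2f} ± {drho:.2f} g/cm^3")
```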
Reference: eScience Labs (2011). Introductory Physics (Vol. 3.3). Sheridan, CO: eScience Labs, LLC. Retrieved from http://ecampus.wtc.edu/pluginfile.php/9210/mod_resource/content/6/escience_Lab_Manual.pdf
In mathematics and computational science, the Euler method is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who treated it in his book Institutionum calculi integralis (published 1768–70).
The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. The Euler method often serves as the basis to construct more complex methods, e.g., predictor–corrector method.
Informal geometrical description
Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated.
The idea is that while the curve is initially unknown, its starting point, which we denote by $A_0$, is known (see the picture at top right). Then, from the differential equation, the slope of the curve at $A_0$ can be computed, and so can the tangent line.
Take a small step along that tangent line up to a point $A_1$. Along this small step, the slope does not change too much, so $A_1$ will be close to the curve. If we pretend that $A_1$ is still on the curve, the same reasoning as for the point $A_0$ above can be used. After several steps, a polygonal curve $A_0 A_1 A_2 A_3 \dots$ is computed. In general, this curve does not diverge too far from the original unknown curve, and the error between the two curves can be made small if the step size is small enough and the interval of computation is finite.
Suppose we want to approximate the solution of the initial value problem
$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0.$$
Choose a value $h$ for the size of every step and set $t_n = t_0 + nh$. Now, one step of the Euler method from $t_n$ to $t_{n+1} = t_n + h$ is
$$y_{n+1} = y_n + h f(t_n, y_n).$$
The value of $y_n$ is an approximation of the solution to the ODE at time $t_n$: $y_n \approx y(t_n)$. The Euler method is explicit, i.e. the solution $y_{n+1}$ is an explicit function of $y_i$ for $i \leq n$.
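The update rule translates directly into a short loop. This is a generic sketch following the formula's notation; the function name is my own:

```python
def euler(f, t0, y0, h, n):
    """Advance y' = f(t, y) from (t0, y0) by n Euler steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # y_{n+1} = y_n + h * f(t_n, y_n)
        t = t + h
    return y
```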
While the Euler method integrates a first-order ODE, any ODE of order $N$ can be represented as a first-order system: to treat the equation
$$y^{(N)}(t) = f\bigl(t, y(t), y'(t), \ldots, y^{(N-1)}(t)\bigr),$$
we introduce auxiliary variables $z_1(t) = y(t),\ z_2(t) = y'(t),\ \ldots,\ z_N(t) = y^{(N-1)}(t)$ and obtain the equivalent equation
$$\mathbf{z}'(t) = \begin{pmatrix} z_1'(t) \\ \vdots \\ z_{N-1}'(t) \\ z_N'(t) \end{pmatrix} = \begin{pmatrix} z_2(t) \\ \vdots \\ z_N(t) \\ f\bigl(t, z_1(t), \ldots, z_N(t)\bigr) \end{pmatrix}.$$
This is a first-order system in the variable $\mathbf{z}(t)$ and can be handled by Euler's method or, in fact, by any other scheme for first-order systems.
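As a small illustration of this reduction, the second-order equation y'' = -y (chosen arbitrarily) becomes the system z1' = z2, z2' = -z1, which a single Euler step then advances componentwise:

```python
# Reduce y'' = -y to a first-order system with z1 = y, z2 = y'.
def f(t, z):
    z1, z2 = z
    return (z2, -z1)          # (z1', z2')

# One Euler step on the system; h and the initial state are illustrative.
h, (z1, z2) = 0.1, (1.0, 0.0)
dz1, dz2 = f(0.0, (z1, z2))
z1, z2 = z1 + h * dz1, z2 + h * dz2
print(z1, z2)
```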
Given the initial value problem
$$y' = y, \qquad y(0) = 1,$$
we would like to use the Euler method to approximate $y(4)$.
Using a step size equal to 1 ($h = 1$)
The Euler method is
$$y_{n+1} = y_n + h f(t_n, y_n),$$
so first we must compute $f(t_0, y_0)$. In this simple differential equation, the function $f$ is defined by $f(t, y) = y$. We have
$$f(t_0, y_0) = f(0, 1) = 1.$$
By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point $(0, 1)$. Recall that the slope is defined as the change in $y$ divided by the change in $t$, i.e. $\Delta y / \Delta t$.
The next step is to multiply the above value by the step size $h$, which we take equal to one here:
$$h \cdot f(y_0) = 1 \cdot 1 = 1.$$
Since the step size is the change in $t$, when we multiply the step size and the slope of the tangent, we get a change in the $y$ value. This value is then added to the initial $y$ value to obtain the next value to be used for computations:
$$y_1 = y_0 + h f(y_0) = 1 + 1 \cdot 1 = 2.$$
The above steps should be repeated to find $y_2$, $y_3$ and $y_4$.
Due to the repetitive nature of this algorithm, it can be helpful to organize computations in a chart form, as seen below, to avoid making errors.
n   y_n   t_n   f(t_n, y_n)   h   Δy = h·f(t_n, y_n)   y_{n+1} = y_n + Δy
0   1     0     1             1   1                    2
1   2     1     2             1   2                    4
2   4     2     4             1   4                    8
3   8     3     8             1   8                    16
The conclusion of this computation is that $y_4 = 16$. The exact solution of the differential equation is $y(t) = e^t$, so $y(4) = e^4 \approx 54.598$. Thus, the approximation of the Euler method is not very good in this case. However, as the figure shows, its behaviour is qualitatively right.
Using other step sizes
As suggested in the introduction, the Euler method is more accurate if the step size is smaller. The table below shows the result with different step sizes. The top row corresponds to the example in the previous section, and the second row is illustrated in the figure.
step size   result of Euler's method   error
1.0         16.00                      38.60
0.25        35.53                      19.07
0.1         45.26                       9.34
0.05        49.56                       5.04
0.025       51.98                       2.62
0.0125      53.26                       1.34
The error recorded in the last column of the table is the difference between the exact solution at $t = 4$ and the Euler approximation. Toward the bottom of the table, the step size in each row is half the step size in the previous row, and the error is also approximately half the error in the previous row. This suggests that the error is roughly proportional to the step size, at least for fairly small values of the step size. This is true in general, also for other equations; see the section Global truncation error for more details.
Other methods, such as the midpoint method also illustrated in the figures, behave more favourably: the error of the midpoint method is roughly proportional to the square of the step size. For this reason, the Euler method is said to be a first-order method, while the midpoint method is second order.
We can extrapolate from the above table that the step size needed to get an answer correct to three decimal places is approximately 0.00001, meaning that we need 400,000 steps. This large number of steps entails a high computational cost. For this reason, people usually employ alternative, higher-order methods such as Runge–Kutta methods or linear multistep methods, especially if high accuracy is desired.
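The whole table can be reproduced in a few lines; this sketch uses the exact value $e^4$ for the error column:

```python
import math

# Step-size study for y' = y, y(0) = 1, approximating y(4).
for h in (1, 0.25, 0.1, 0.05, 0.025, 0.0125):
    n = round(4 / h)               # number of steps to reach t = 4
    y = 1.0
    for _ in range(n):
        y += h * y                 # Euler step for f(t, y) = y
    err = math.exp(4) - y          # exact solution is e^4
    print(f"h={h:<7} result={y:8.2f}  error={err:7.2f}")
```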
Derivation

The Euler method can be derived in a number of ways. Firstly, there is the geometrical description mentioned above.
Another possibility is to consider the Taylor expansion of the function $y$ around $t_0$:
$$y(t_0 + h) = y(t_0) + h y'(t_0) + \tfrac{1}{2} h^2 y''(t_0) + O(h^3).$$
The differential equation states that $y' = f(t, y)$. If this is substituted into the Taylor expansion and the quadratic and higher-order terms are ignored, the Euler method arises. The Taylor expansion is used below to analyze the error committed by the Euler method, and it can be extended to produce Runge–Kutta methods.
A closely related derivation is to substitute the forward finite difference formula for the derivative,
$$y'(t_0) \approx \frac{y(t_0 + h) - y(t_0)}{h},$$
into the differential equation $y' = f(t, y)$; this again yields the Euler method.
Finally, one can integrate the differential equation from $t_0$ to $t_0 + h$ and apply the fundamental theorem of calculus to get
$$y(t_0 + h) - y(t_0) = \int_{t_0}^{t_0 + h} f\bigl(t, y(t)\bigr) \, \mathrm{d}t.$$
Now approximate the integral by the left-hand rectangle method (with only one rectangle):
$$\int_{t_0}^{t_0 + h} f\bigl(t, y(t)\bigr) \, \mathrm{d}t \approx h f\bigl(t_0, y(t_0)\bigr).$$
Combining both equations, one again finds the Euler method.
Local truncation error
The local truncation error of the Euler method is the error made in a single step. It is the difference between the numerical solution after one step, $y_1$, and the exact solution at time $t_1 = t_0 + h$. The numerical solution is given by
$$y_1 = y_0 + h f(t_0, y_0).$$
For the exact solution, we use the Taylor expansion mentioned in the section Derivation above:
$$y(t_1) = y(t_0) + h y'(t_0) + \tfrac{1}{2} h^2 y''(t_0) + O(h^3).$$
The local truncation error (LTE) introduced by the Euler method is given by the difference between these equations:
$$\mathrm{LTE} = y(t_1) - y_1 = \tfrac{1}{2} h^2 y''(t_0) + O(h^3).$$
This result is valid if $y$ has a bounded third derivative.
This shows that for small $h$, the local truncation error is approximately proportional to $h^2$. This makes the Euler method less accurate (for small $h$) than other higher-order techniques such as Runge–Kutta methods and linear multistep methods, for which the local truncation error is proportional to a higher power of the step size.
A slightly different formulation for the local truncation error can be obtained by using the Lagrange form for the remainder term in Taylor's theorem. If $y$ has a continuous second derivative, then there exists a $\xi \in [t_0, t_0 + h]$ such that
$$\mathrm{LTE} = \tfrac{1}{2} h^2 y''(\xi).$$
In the above expressions for the error, the second derivative of the unknown exact solution $y$ can be replaced by an expression involving the right-hand side of the differential equation. Indeed, it follows from the equation $y' = f(t, y)$ that
$$y''(t) = \frac{\partial f}{\partial t}\bigl(t, y(t)\bigr) + \frac{\partial f}{\partial y}\bigl(t, y(t)\bigr) \, f\bigl(t, y(t)\bigr).$$
Global truncation error
The global truncation error is the error at a fixed time $t$, after however many steps the method needs to take to reach that time from the initial time. The global truncation error is the cumulative effect of the local truncation errors committed in each step. The number of steps is easily determined to be $(t - t_0)/h$, which is proportional to $1/h$, and the error committed in each step is proportional to $h^2$ (see the previous section). Thus, it is to be expected that the global truncation error will be proportional to $h$.
This intuitive reasoning can be made precise. If the solution $y$ has a bounded second derivative and $f$ is Lipschitz continuous in its second argument, then the global truncation error (GTE) is bounded by
$$|\mathrm{GTE}| \leq \frac{hM}{2L} \left( e^{L(t - t_0)} - 1 \right),$$
where $M$ is an upper bound on the second derivative of $y$ on the given interval and $L$ is the Lipschitz constant of $f$.
The precise form of this bound is of little practical importance, as in most cases the bound vastly overestimates the actual error committed by the Euler method. What is important is that it shows that the global truncation error is (approximately) proportional to $h$. For this reason, the Euler method is said to be first order.
The Euler method can also be numerically unstable, especially for stiff equations, meaning that the numerical solution grows very large for equations where the exact solution does not. This can be illustrated using the linear equation
$$y' = -2.3\, y, \qquad y(0) = 1.$$
The exact solution is $y(t) = e^{-2.3t}$, which decays to zero as $t \to \infty$. However, if the Euler method is applied to this equation with step size $h = 1$, then the numerical solution is qualitatively wrong: it oscillates and grows (see the figure). This is what it means to be unstable. If a smaller step size is used, for instance $h = 0.7$, then the numerical solution does decay to zero.
If the Euler method is applied to the linear equation $y' = ky$, then the numerical solution is unstable if the product $hk$ is outside the region
$$\{\, z \in \mathbb{C} : |z + 1| \leq 1 \,\},$$
illustrated on the right. This region is called the (linear) stability region. In the example, $k$ equals $-2.3$, so if $h = 1$ then $hk = -2.3$, which is outside the stability region, and thus the numerical solution is unstable.
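A few lines of code make the threshold visible; this sketch simply iterates the explicit update $y \leftarrow (1 + hk)y$ for the two step sizes discussed above:

```python
# Instability demo for y' = -2.3*y, y(0) = 1.
# h = 1 gives h*k = -2.3, outside |z + 1| <= 1: iterates oscillate and grow.
# h = 0.7 gives h*k = -1.61, inside the region: iterates decay.
for h in (1.0, 0.7):
    y, seq = 1.0, []
    for _ in range(8):
        y += h * (-2.3) * y        # Euler step: y <- (1 + h*k) * y
        seq.append(round(y, 3))
    print(f"h={h}: {seq}")
```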
This limitation, along with the slow convergence of the error with $h$, means that the Euler method is not often used, except as a simple example of numerical integration.
The discussion up to now has ignored the consequences of rounding error. In step $n$ of the Euler method, the rounding error is roughly of the magnitude $\varepsilon y_n$, where $\varepsilon$ is the machine epsilon. Assuming that the rounding errors are all of approximately the same size, the combined rounding error in $N$ steps is roughly $N \varepsilon y_0$ if all errors point in the same direction. Since the number of steps is inversely proportional to the step size $h$, the total rounding error is proportional to $\varepsilon / h$. In reality, however, it is extremely unlikely that all rounding errors point in the same direction. If instead it is assumed that the rounding errors are independent random variables, then the expected total rounding error is proportional to $\varepsilon / \sqrt{h}$.
Thus, for extremely small values of the step size, the truncation error will be small, but the effect of rounding error may be large. Most of the effect of rounding error can easily be avoided if compensated summation is used in the formula for the Euler method.
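A sketch of what compensated (Kahan) summation looks like inside an Euler loop follows; the function name and structure are my own, not a reference implementation:

```python
def euler_kahan(f, t0, y0, h, n):
    """Euler's method with Kahan (compensated) summation of the updates,
    reducing the accumulation of rounding error for very small h."""
    t, y, c = t0, y0, 0.0        # c carries the lost low-order bits
    for _ in range(n):
        term = h * f(t, y) - c   # subtract the previous compensation
        s = y + term             # big + small: low bits of term are lost
        c = (s - y) - term       # recover exactly what was lost
        y = s
        t += h
    return y
```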
Modifications and extensions
A simple modification of the Euler method which eliminates the stability problems noted in the previous section is the backward Euler method:
$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).$$
This differs from the (standard, or forward) Euler method in that the function $f$ is evaluated at the end point of the step, instead of the starting point. The backward Euler method is an implicit method, meaning that the formula has $y_{n+1}$ on both sides, so when applying the backward Euler method we have to solve an equation. This makes the implementation more costly.
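For the linear test equation $y' = ky$, the implicit step can be solved in closed form, which makes for a compact demonstration; for a nonlinear $f$ one would need a root-finding step instead. The values here are illustrative:

```python
# Backward Euler on y' = k*y: the implicit equation
#   y_{n+1} = y_n + h*k*y_{n+1}
# rearranges to y_{n+1} = y_n / (1 - h*k), solvable exactly when f is linear.
k, h, y = -2.3, 1.0, 1.0
for _ in range(5):
    y = y / (1 - h * k)          # solve the implicit step in closed form
    print(round(y, 5))
# Unlike forward Euler with h = 1, these iterates decay toward zero.
```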
More complicated methods can achieve a higher order (and more accuracy). One possibility is to use more function evaluations. This is illustrated by the midpoint method, which was already mentioned in this article:
$$y_{n+1} = y_n + h f\!\left(t_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2} h f(t_n, y_n)\right).$$
This leads to the family of Runge–Kutta methods.
The other possibility is to use more past values, as illustrated by the two-step Adams–Bashforth method:
$$y_{n+1} = y_n + \tfrac{3}{2} h f(t_n, y_n) - \tfrac{1}{2} h f(t_{n-1}, y_{n-1}).$$
This leads to the family of linear multistep methods.
See also

- Crank–Nicolson method
- Dynamic errors of numerical methods of ODE discretization
- Gradient descent similarly uses finite steps, here to find minima of functions
- List of Runge-Kutta methods
- Linear multistep method
- Numerical integration (for calculating definite integrals)
- Numerical methods for ordinary differential equations
References

- Atkinson, Kendall A. (1989), An Introduction to Numerical Analysis (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-50023-0.
- Ascher, Uri M.; Petzold, Linda R. (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-412-8.
- Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, ISBN 978-0-471-96758-3.
- Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0.
- Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2
- Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-95452-3.
- Lakoba, Taras I. (2012), Simple Euler method and its modifications (PDF) (Lecture notes for MATH334, University of Vermont), retrieved 29 February 2012.
- In popular culture: In the film Hidden Figures, the mathematician Katherine Johnson turns to "old" math, Euler's method, to work out how to bring John Glenn back down from orbit.
- Media related to Euler method at Wikimedia Commons
- Euler's Method for O.D.E.'s, by John H. Matthews, California State University at Fullerton.
- Euler method implementations in different languages by Rosetta Code |
- Course type: Self-paced
- Available Lessons: 33
- Average Lesson Length: 8 min
Eligible for Certificate:
Certificates show that you have completed the course. They do not provide credit.
Watch a preview: Chapter 1, Lesson 1: Comparing and Ordering Fractions
Course Summary

Using easy-to-understand text and video lessons, this math fundamentals course helps you identify and understand different number sets and how to use them. The course provides a clear understanding through examples of formulas, calculations, and applications.
Course Practice Test

Check your knowledge of this course with a 50-question practice test.
- Comprehensive test covering all topics
- Detailed video explanations for wrong answers
About This Course
Broken down into easy-to-digest sections, this course offers short 5-10 minute lessons covering algebra, data, and geometry skills. Within this course, you'll learn fraction comparison, decimal place values, systems of equations, and types of angles. The course also provides a deeper understanding of the properties of shapes, the Pythagorean theorem, and algebraic expressions in a way that is easy to understand.
How It Works
Comprehensive, short lessons can be viewed on a variety of devices: laptops, desktop computers, tablets, and even your smartphone. Presented sequentially to help you steadily build your knowledge, these flexible lessons can be completed whenever you have a minute to spare on your commute or during a little downtime.
The course makes concepts easy to follow through:
- Easy to follow lessons clearly and simply break down difficult math concepts
- Examples demonstrate calculations like square roots and coefficients clearly
- Lesson quizzes help to ensure your understanding of theorems and calculations is clear
- Chapter exams verify your understanding of the data, algebraic expressions, and geometric theorems
How It Helps
Data Awareness: Individuals need a clear and comprehensive understanding of the numbers, formulas, and measurements used to create tables, analyze data sets, and make predictions.
Algebraic Equations: It is important to have a clear understanding of variables and algebraic expressions along with the order of operations. Individuals also need to be able to understand how equations or number systems relate to one another.
Graphing Skills: Teaches individuals to understand the use of intercepts and graphing principles along with how ordered pairs of numbers fit on the graph.
Shapes & Lines: Understand the functions, purpose, and dimensions that make shapes and lines fall into a specific category, and explore how these systems work together.
After completion of this course, individuals should have a clear understanding of the following:
- How numbers and data sets, like fractions and decimals, work together along with conversions
- Relationships that exist between algebraic expressions and variable usage within different equations
- Geometric and graphing skills including shape and line properties along with the parts of a graph
In chemistry, a nonmetal (or non-metal) is a chemical element that mostly lacks the characteristics of a metal. Physically, a nonmetal tends to have a relatively low melting point, boiling point, and density. A nonmetal is typically brittle when solid and usually has poor thermal conductivity and electrical conductivity. Chemically, nonmetals tend to have relatively high ionization energy, electron affinity, and electronegativity. They gain or share electrons when they react with other elements and chemical compounds. Seventeen elements are generally classified as nonmetals: most are gases (hydrogen, helium, nitrogen, oxygen, fluorine, neon, chlorine, argon, krypton, xenon and radon); one is a liquid (bromine); and a few are solids (carbon, phosphorus, sulfur, selenium, and iodine). Metalloids such as boron, silicon, and germanium are sometimes counted as nonmetals.
The nonmetals are divided into two categories reflecting their relative propensity to form chemical compounds: reactive nonmetals and noble gases. The reactive nonmetals vary in their nonmetallic character. The less electronegative of them, such as carbon and sulfur, mostly have weak to moderately strong nonmetallic properties and tend to form covalent compounds with metals. The more electronegative of the reactive nonmetals, such as oxygen and fluorine, are characterised by stronger nonmetallic properties and a tendency to form predominantly ionic compounds with metals. The noble gases are distinguished by their great reluctance to form compounds with other elements.
The distinction between categories is not absolute. Boundary overlaps, including with the metalloids, occur as outlying elements in each category show or begin to show less-distinct, hybrid-like, or atypical properties.
Although there are about five times as many metals as nonmetals, two of the nonmetals—hydrogen and helium—make up over 99 per cent of the observable universe. Another nonmetal, oxygen, makes up almost half of the Earth's crust, oceans, and atmosphere. Living organisms are composed almost entirely of nonmetals: hydrogen, oxygen, carbon, and nitrogen. Nonmetals form many more compounds than metals.
Definition and applicable elements
There is no rigorous definition of a nonmetal. Broadly, any element lacking a preponderance of metallic properties can be regarded as a nonmetal.
The elements generally classified as nonmetals include one element in group 1 (hydrogen); one in group 14 (carbon); two in group 15 (nitrogen and phosphorus); three in group 16 (oxygen, sulfur and selenium); most of group 17 (fluorine, chlorine, bromine and iodine); and all of group 18 (with the possible exception of oganesson).
As there is no widely agreed definition of a nonmetal, elements in the periodic table vicinity of where the metals meet the nonmetals are inconsistently classified by different authors. Elements sometimes also classified as nonmetals are the metalloids boron (B), silicon (Si), germanium (Ge), arsenic (As), antimony (Sb), tellurium (Te), and astatine (At). The nonmetal selenium (Se) is sometimes instead classified as a metalloid, particularly in environmental chemistry.
Nonmetals show more variability in their properties than do metals. These properties are largely determined by the interatomic bonding strengths and molecular structures of the nonmetals involved, both of which are subject to variation as the number of valence electrons in each nonmetal varies. Metals, in contrast, have more homogeneous structures and their properties are more easily reconciled.
Physically, they largely exist as diatomic or monatomic gases, with the remainder having more substantial (open-packed) forms, unlike metals, which are nearly all solid and close-packed. If solid, they have a submetallic appearance (with the exception of sulfur) and are mostly brittle, as opposed to metals, which are lustrous, and generally ductile or malleable; they usually have lower densities than metals; are mostly poorer conductors of heat and electricity; and tend to have significantly lower melting points and boiling points than those of metals.
Chemically, the nonmetals mostly have high ionisation energies, high electron affinities (nitrogen and the noble gases have negative electron affinities), and high electronegativity values; in general, the higher an element's ionisation energy, electron affinity, and electronegativity, the more nonmetallic that element is. Nonmetals (including – to a limited extent – xenon and probably radon) usually exist as anions or oxyanions in aqueous solution; they generally form ionic or covalent compounds when combined with metals (unlike metals, which mostly form alloys with other metals); and have acidic oxides, whereas the common oxides of nearly all metals are basic.
Complicating the chemistry of the nonmetals is the first row anomaly seen particularly in hydrogen, (boron), carbon, nitrogen, oxygen and fluorine; and the alternation effect seen in (arsenic), selenium and bromine. The first row anomaly largely arises from the electron configurations of the elements concerned.
Hydrogen is noted for the different ways it forms bonds. It most commonly forms covalent bonds. It can lose its single valence electron in aqueous solution, leaving behind a bare proton with tremendous polarising power. This subsequently attaches itself to the lone electron pair of an oxygen atom in a water molecule, thereby forming the basis of acid-base chemistry. Under certain conditions a hydrogen atom in a molecule can form a second, weaker, bond with an atom or group of atoms in another molecule. Such bonding, "helps give snowflakes their hexagonal symmetry, binds DNA into a double helix; shapes the three-dimensional forms of proteins; and even raises water’s boiling point high enough to make a decent cup of tea."
From (boron) to neon, the 2p subshell has no inner analogue and experiences no electron repulsion effects; it consequently has a relatively small radius, unlike the 3p, 4p and 5p subshells of heavier elements (a similar effect is seen in the 1s elements, hydrogen and helium). Ionisation energies and electronegativities among these elements are consequently higher than would otherwise be expected, having regard to periodic trends. The small atomic radii of carbon, nitrogen, and oxygen facilitate the formation of triple or double bonds. The larger atomic radii of the heavier group 15–18 nonmetals, which enable higher coordination numbers, and their lower electronegativities, which better tolerate higher positive charges, mean they are able to exhibit valences other than the lowest for their group (that is, 3, 2, 1, or 0), for example in PCl5, SF6, IF7, and XeF2. Period four elements immediately after the first row of the transition metals, such as selenium and bromine, have unusually small atomic radii because the 3d electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity.
Immediately to the left of most nonmetals on the periodic table are metalloids such as boron, silicon, and germanium, which generally behave chemically like nonmetals, and are included here for comparative purposes. In this sense they can be regarded as the most metallic of nonmetallic elements.
Based on shared attributes, the nonmetals can be divided into the two categories of reactive nonmetal and noble gas. The metalloids and the two nonmetal categories then span a progression in chemical nature from weakly nonmetallic, to moderately nonmetallic, to strongly nonmetallic (oxygen and the four nonmetallic halogens), to almost inert. Analogous categories occur among the metals in the form of the weakly metallic (the post-transition metals), the moderately metallic (most of the transition metals), the strongly metallic (the alkali and alkaline earth metals, and the lanthanides and actinides), and the relatively inert (the noble transition metals).
As with categorisation schemes generally, there is some variation and overlapping of properties within and across each category. One or more of the metalloids are sometimes classified as nonmetals. Among the reactive nonmetals, carbon, phosphorus, selenium, and iodine—which border the metalloids—show some metallic character, as does hydrogen. Among the noble gases, radon is the most metallic and begins to show some cationic behaviour, which is unusual for a nonmetal.
The seven metalloids are boron (B), silicon (Si), germanium (Ge), arsenic (As), antimony (Sb), tellurium (Te), and astatine (At). On a standard periodic table, they occupy a diagonal area in the p-block extending from boron at the upper left to astatine at lower right, along the dividing line between metals and nonmetals shown on some periodic tables. They are called metalloids mainly in light of their physical resemblance to metals.
While they each have a metallic appearance, they are brittle and only fair conductors of electricity. Boron, silicon, germanium, and tellurium are semiconductors. Arsenic and antimony have the electronic band structures of semimetals, although both have less stable semiconducting allotropes. Astatine has been predicted to have a metallic crystalline structure.
Electronegativity (EN) gives some indication of nonmetallic character. The metalloids have uniformly moderate values (1.8–2.2). Among the reactive nonmetals, hydrogen (2.2) and phosphorus (2.19) have moderate values, but they each have higher ionisation energies than the metalloids and are very rarely classed as such. Oxygen and the nonmetallic halogens have uniformly high EN values; nitrogen has a high EN but a marginally negative electron affinity that makes it a reluctant anion former. The noble gases have some of the highest ENs but their complete valence shells and sizeably negative electron affinities render them chemically inert to a large degree.
Chemically the metalloids generally behave like (weak) nonmetals. They have moderate ionisation energies, low to high electron affinities, moderate electronegativity values, are poor to moderately strong oxidising agents, and demonstrate a tendency to form alloys with metals.
The reactive nonmetals have a diverse range of individual physical and chemical properties. In periodic table terms they largely occupy a position between the weakly nonmetallic metalloids to the left and the noble gases to the right.
Physically, five are solids, one is a liquid (bromine), and five are gases. Of the solids, graphitic carbon, selenium, and iodine are metallic-looking, whereas S8 sulfur has a pale-yellow appearance. Ordinary white phosphorus has a yellowish-white appearance but the black allotrope, which is the most stable form of phosphorus, has a metallic-looking appearance. Bromine is a reddish-brown liquid. Of the gases, fluorine and chlorine are coloured pale yellow and yellowish-green, respectively. Electrically, most are insulators, whereas graphite is a semimetal and black phosphorus, selenium, and iodine are semiconductors.
Chemically, they tend to have moderate to high ionisation energies, electron affinities, and electronegativity values, and be relatively strong oxidising agents. Collectively, the highest values of these properties are found among oxygen and the nonmetallic halogens. Manifestations of this status include oxygen's major association with the ubiquitous processes of corrosion and combustion, and the intrinsically corrosive nature of the nonmetallic halogens. All five of these nonmetals exhibit a tendency to form predominantly ionic compounds with metals, whereas the remaining nonmetals tend to form predominantly covalent compounds with metals.
Six nonmetals are categorised as noble gases: helium (He), neon (Ne), argon (Ar), krypton (Kr), xenon (Xe), and the radioactive radon (Rn). In periodic table terms they occupy the outermost right column. They are called noble gases in light of their characteristically very low chemical reactivity.
They have very similar properties, all being colorless, odorless, and nonflammable. With their closed valence shells the noble gases have feeble interatomic forces of attraction resulting in very low melting and boiling points. That is why they are all gases under standard conditions, even those with atomic masses larger than many normally solid elements.
Chemically, the noble gases have relatively high ionization energies, negative electron affinities, and relatively high electronegativities. Compounds of the noble gases number less than half a thousand, with most of these occurring via oxygen or fluorine combining with either krypton, xenon or radon.
The status of the period 7 congener of the noble gases, oganesson (Og), is not known—it may or may not be a noble gas. It was originally predicted to be a noble gas but may instead be a fairly reactive solid with an anomalously low first ionisation potential, and a positive electron affinity, due to relativistic effects. On the other hand, if relativistic effects peak in period 7 at element 112, copernicium, oganesson may turn out to be a noble gas after all, albeit more reactive than either xenon or radon. While oganesson could be expected to be the most metallic of the group 18 elements, credible predictions on its status as either a metal or a nonmetal (or a metalloid) appear to be absent.
Nonmetal categorisation and alternative schemes:
- This article: reactive nonmetals (H, C, N, P, O, S, Se, F, Cl, Br, I) and noble gases (He, Ne, Ar, Kr, Xe, Rn)
- Scheme (1): other nonmetals (H, C, N, P, O, S, (Se)), halogens (F, Cl, Br, I, At), and noble gases (He, Ne, Ar, Kr, Xe, Rn)
- Scheme (2), by physical form: solids (C, P, S, Se, I, At), a liquid (Br), and gases (H, N, O, F, Cl, He, Ne, Ar, Kr, Xe, Rn)
- Scheme (3), by electronegativity: electronegative nonmetals (H, C, P, S, Se, I), very electronegative nonmetals (N, O, F, Cl, Br), and noble gases (He, Ne, Ar, Kr, Xe, Rn)
- Scheme (4), by molecular structure: polyatomic nonmetals (C, P, S, Se), diatomic nonmetals (H, N, O, F, Cl, Br, I), and monatomic elements (noble gases: He, Ne, Ar, Kr, Xe, Rn)
- Scheme (5): hydrogen (H), nonmetals (C, N, P, O, S, Se), halogens (F, Cl, Br, I, At), and noble gases (He, Ne, Ar, Kr, Xe, Rn)
- Scheme (6): hydrogen (H), metalloids (B, Si, Ge, As, Sb, Te, Po), nonmetals (C, N, P, O, S, Se), halogens (F, Cl, Br, I, At), and noble gases (He, Ne, Ar, Kr, Xe, Rn)
The nonmetals are sometimes instead divided according to either (1) the relative homogeneity of the halogens; (2) physical form; (3) electronegativity; (4) molecular structure; (5) the peculiar nature of hydrogen, and the relative homogeneity of the halogens; or (6) the uniqueness of hydrogen and the treatment of the metalloids as nonmetallic analogues of the post-transition metals.
In scheme (1), the halogens are in a category of their own; astatine is classed as a nonmetal, rather than a metalloid; and the remaining nonmetals are referred to as other nonmetals. If selenium is counted as a metalloid rather than another nonmetal, the resulting set of less active nonmetals (H, C, N, P, O, S) is sometimes instead referred to or categorised as the organogens, CHONPS elements, or biogens. Collectively these six nonmetals comprise the bulk of life on Earth; a rough estimate of the composition of the biosphere is C1450H3000O1450N15P1S1.
In scheme (2), the nonmetals can simply be divided based on their physical forms at room temperature and pressure. The fluid nonmetals (bromine and the gaseous nonmetals) have the highest ionisation energy and electronegativity values among the elements, with the exception of hydrogen which tends to be anomalous in whichever category it is placed in. The solid nonmetals are collectively the most metallic of the nonmetallic elements, apart from the metalloids.
In scheme (3), the nonmetals are divided based on a loose correlation between electronegativity and oxidizing power. Very electronegative nonmetals have electronegativity values over 2.8; electronegative nonmetals have values of 1.9 to 2.8.
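To make scheme (3) concrete, here is a minimal sketch (not from the source) that applies the stated thresholds to the Pauling electronegativity values quoted in the element paragraphs later in this article; the helper name `en_category` is ours, not an established API:

```python
# Pauling electronegativities, as quoted in the element paragraphs below.
PAULING_EN = {
    "H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "F": 3.98,
    "P": 2.19, "S": 2.58, "Se": 2.55, "Cl": 3.16, "Br": 2.96, "I": 2.66,
}

def en_category(symbol: str) -> str:
    """Apply scheme (3): EN > 2.8 is very electronegative; 1.9-2.8 is electronegative."""
    en = PAULING_EN[symbol]
    if en > 2.8:
        return "very electronegative nonmetal"
    if en >= 1.9:
        return "electronegative nonmetal"
    return "outside scheme (3)"  # metalloids and noble gases are treated separately

for symbol in sorted(PAULING_EN):
    print(f"{symbol:2} {PAULING_EN[symbol]:.2f} {en_category(symbol)}")
```

Running this reproduces the split shown above: H, C, P, S, Se, and I fall in the electronegative band, while N, O, F, Cl, and Br are very electronegative.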
In scheme (4), the nonmetals are distinguished based on the molecular structures of their most thermodynamically stable forms in ambient conditions. Polyatomic nonmetals form structures or molecules in which each atom has two or three nearest neighbours (Cx, P4, S8, Sex); diatomic nonmetals form molecules in which each atom has one nearest neighbour (H2, N2, O2, F2, Cl2, Br2, I2); and the monatomic noble gases exist as isolated atoms (He, Ne, Ar, Kr, Xe, Rn) with no fixed nearest neighbour. This gradual reduction in the number of nearest neighbours corresponds (approximately) to a reduction in metallic character. A similar progression is seen among the metals. Metallic bonding tends to involve close-packed centrosymmetric structures with a high number of nearest neighbours. Post-transition metals and metalloids, sandwiched between the true metals and the nonmetals, tend to have more complex structures with an intermediate number of nearest neighbours.
In scheme (5), hydrogen is placed by itself on account of it being "so different from all other elements". The remaining nonmetals are divided into nonmetals, halogens, and noble gases, with the unnamed category being distinguished by including nonmetals with relatively strong interatomic bonding, and the metalloids being effectively treated as a third super-category alongside metals and nonmetals.
In scheme (6), hydrogen is again placed by itself on account of its uniqueness. The remaining nonmetals are divided into metalloids, quintessential nonmetals, halogens, and noble gases. Since the metalloids abut the post-transition or "poor" metals, they might be renamed the "poor non-metals".
Comparison of properties
Characteristic and other properties of metalloids, reactive nonmetals, and noble gases are summarized below. Metalloids have been included in light of their generally nonmetallic chemistry. Physical properties are listed in loose order of ease of determination; chemical properties run from general to specific, and then to descriptive.

Physical properties:
- Form: metalloids are solid; reactive nonmetals are solid (C, P, S, Se, I), liquid (Br), or gaseous (H, N, O, F, Cl); noble gases are gaseous
- Appearance: metalloids are metallic; reactive nonmetals are metallic, coloured, or translucent; noble gases are translucent
- Elasticity: metalloids are brittle; reactive nonmetals are brittle if solid; noble gases are soft and easily crushed when frozen
- Atomic structure: metalloids are close-packed* or polyatomic; reactive nonmetals are polyatomic (C, P, S, Se) or diatomic (H, N, O, F, Cl, Br, I); noble gases are monatomic
- Bulk coordination number: metalloids 12*, 6, 4, 3, or 2; reactive nonmetals 3, 2, or 1; noble gases 0
- Allotropes: most metalloids form them; among the reactive nonmetals they are known for C, P, O, S, Se; noble gases form none
- Electrical conductivity: metalloids moderate; reactive nonmetals poor to moderate; noble gases poor
- Volatility: metalloids low (B, Si, Ge, Sb, Te) or moderate (As, At?); reactive nonmetals low (C), moderate (P, S, Se, Br, I), or high (H, N, O, F, Cl); noble gases high
- Electronic structure: metalloids metallic* to semiconductor; reactive nonmetals semimetallic, semiconductor, or insulator; noble gases insulator
- Outer s and p electrons: metalloids 3–7; reactive nonmetals 1, 4–7; noble gases 2, 8
- Crystal structure: metalloids rhombohedral (B, As, Sb), cubic (Si, Ge, At?), or hexagonal (Te); reactive nonmetals cubic (P, O, F), hexagonal (H, C, N, Se), or orthorhombic (S, Cl, Br, I); noble gases cubic (Ne, Ar, Kr, Xe, Rn)

Chemical properties:
- General chemical behaviour: metalloids nonmetallic to incipient metallic; reactive nonmetals moderately to strongly nonmetallic; noble gases inert to nonmetallic, with Rn showing some cationic behaviour
- Ionization energy: metalloids low; reactive nonmetals moderate to high; noble gases high to very high
- Electron affinity: metalloids low to high; reactive nonmetals moderate to high (exception: N is negative); noble gases negative
- Electronegativity: metalloids moderate (Si < Ge ≈ B ≈ Sb < Te < As ≈ At); reactive nonmetals moderate to high (P < Se ≈ C < S < I < Br < N < Cl < O < F); noble gases moderate to very high
- Non-zero oxidation states: for the metalloids and reactive nonmetals, negative oxidation states are known for all (though for H this is an unstable state), positive oxidation states are known for all but F and only exceptionally for O, and they range from −5 for B to +7 for Cl, Br, I, and At; for the noble gases, only positive oxidation states are known, and only for the heavier noble gases, from +2 for Kr, Xe, and Rn to +8 for Xe
- Oxidising power: metalloids low (exception: At is moderate); reactive nonmetals low to high; noble gases n/a
- Catenation: metalloids show a marked tendency; among the reactive nonmetals the tendency is marked for C, P, S, Se and less so for H, N, O, F, Cl, Br, I
- Compounds with metals: metalloids tend to form alloys or intermetallic compounds; reactive nonmetals form mainly covalent compounds (H†, C, N, P, S, Se) or mainly ionic compounds (O, F, Cl, Br, I); noble gases form no simple compounds
- Oxides: metalloid oxides are polymeric in structure, B, Si, Ge, As, Sb, Te are glass formers, and the oxides tend to be amphoteric or weakly acidic; among the reactive nonmetals, C, P, S, Se, and I are known in at least one polymeric oxide form, P, S, Se are glass formers (CO2 forms a glass at 40 GPa), and the oxides are acidic or neutral (H2O, CO, NO, N2O); among the noble gases, XeO2 is polymeric while other noble gas oxides are molecular, there are no glass formers, and the stable xenon oxides (XeO3, XeO4) are acidic
- Sulfates: most metalloids form them; some reactive nonmetals do; none are known for the noble gases

*Bulk astatine has been predicted to have a metallic face-centred cubic structure.
† Hydrogen can also form alloy-like hydrides.
Properties of nonmetals (and metalloids) by Group
- Abbreviations used in this section are: AR Allred-Rochow; CN coordination number; and MH Mohs hardness
Hydrogen is a colourless, odourless, and comparatively unreactive diatomic gas with a density of 0.08988 × 10−3 g/cm3 and is about 14 times lighter than air. It condenses to a colourless liquid at −252.879 °C and freezes into an ice- or snow-like solid at −259.16 °C. The solid form has a hexagonal crystalline structure and is soft and easily crushed. Hydrogen is an insulator in all of its forms. It has a high ionisation energy (1312.0 kJ/mol), moderate electron affinity (73 kJ/mol), and moderate electronegativity (2.2). Hydrogen is a poor oxidising agent (H2 + 2e− → 2H− = −2.25 V at pH 0). Its chemistry, most of which is based around its tendency to acquire the electron configuration of the noble gas helium, is largely covalent in nature, noting it can form ionic hydrides with highly electropositive metals, and alloy-like hydrides with some transition metals. The common oxide of hydrogen (H2O) is a neutral oxide.
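The "about 14 times lighter than air" figure can be double-checked from the quoted densities; a minimal sketch, assuming the air density of 1.225 × 10−3 g/cm3 cited later in the helium paragraph:

```python
# Gas densities in g/cm3, taken from the text (hydrogen) and from the
# helium paragraph's comparison figure for air.
RHO_H2 = 0.08988e-3
RHO_AIR = 1.225e-3

ratio = RHO_AIR / RHO_H2
print(f"Air is roughly {ratio:.1f} times denser than hydrogen gas")  # ~13.6
```

The result, about 13.6, rounds to the "about 14 times" stated above.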
Boron is a lustrous, barely reactive solid with a density of 2.34 g/cm3 (cf. aluminium 2.70), and is hard (MH 9.3) and brittle. It melts at 2076 °C (cf. steel ~1370 °C) and boils at 3927 °C. Boron has a complex rhombohedral crystalline structure (CN 5+). It is a semiconductor with a band gap of about 1.56 eV. Boron has a moderate ionisation energy (800.6 kJ/mol), low electron affinity (27 kJ/mol), and moderate electronegativity (2.04). Being a metalloid, most of its chemistry is nonmetallic in nature. Boron is a poor oxidizing agent (B + 3e → BH3 = −0.15 V at pH 0). While it bonds covalently in nearly all of its compounds, it can form intermetallic compounds and alloys with transition metals of the composition MnB, if n > 2. The common oxide of boron (B2O3) is weakly acidic.
Carbon (as graphite, its most thermodynamically stable form) is a lustrous and comparatively unreactive solid with a density of 2.267 g/cm3, and is soft (MH 0.5) and brittle. It sublimes to vapour at 3642 °C. Carbon has a hexagonal crystalline structure (CN 3). It is a semimetal in the direction of its planes, with an electrical conductivity exceeding that of some metals, and behaves as a semiconductor in the direction perpendicular to its planes. It has a high ionisation energy (1086.5 kJ/mol), moderate electron affinity (122 kJ/mol), and high electronegativity (2.55). Carbon is a poor oxidising agent (C + 4e− → CH4 = 0.13 V at pH 0). Its chemistry is largely covalent in nature, noting it can form salt-like carbides with highly electropositive metals. The common oxide of carbon (CO2) is a medium-strength acidic oxide.
Silicon is a metallic-looking, relatively unreactive solid with a density of 2.3290 g/cm3, and is hard (MH 6.5) and brittle. It melts at 1414 °C (cf. steel ~1370 °C) and boils at 3265 °C. Silicon has a diamond cubic structure (CN 4). It is a semiconductor with a band gap of about 1.11 eV. Silicon has a moderate ionisation energy (786.5 kJ/mol), moderate electron affinity (134 kJ/mol), and moderate electronegativity (1.9). It is a poor oxidising agent (Si + 4e → SiH4 = −0.147 V at pH 0). As a metalloid the chemistry of silicon is largely covalent in nature, noting it can form alloys with metals such as iron and copper. The common oxide of silicon (SiO2) is weakly acidic.
Germanium is a shiny, mostly unreactive grey-white solid with a density of 5.323 g/cm3 (about two-thirds that of iron), and is hard (MH 6.0) and brittle. It melts at 938.25 °C (cf. silver 961.78 °C) and boils at 2833 °C. Germanium has a diamond cubic structure (CN 4). It is a semiconductor with a band gap of about 0.67 eV. Germanium has a moderate ionisation energy (762 kJ/mol), moderate electron affinity (119 kJ/mol), and moderate electronegativity (2.01). It is a poor oxidising agent (Ge + 4e → GeH4 = –0.294 at pH 0). As a metalloid the chemistry of germanium is largely covalent in nature, noting it can form alloys with metals such as aluminium and gold. Most alloys of germanium with metals lack metallic or semimetallic conductivity. The common oxide of germanium (GeO2) is amphoteric.
Nitrogen is a colourless, odourless, and relatively inert diatomic gas with a density of 1.251 × 10−3 g/cm3 (marginally lighter than air). It condenses to a colourless liquid at −195.795 °C and freezes into an ice- or snow-like solid at −210.00 °C. The solid form (density 0.85 g/cm3; cf. lithium 0.534) has a hexagonal crystalline structure and is soft and easily crushed. Nitrogen is an insulator in all of its forms. It has a high ionisation energy (1402.3 kJ/mol), low electron affinity (−6.75 kJ/mol), and high electronegativity (3.04). The latter property manifests in the capacity of nitrogen to form usually strong hydrogen bonds, and its preference for forming complexes with metals having low electronegativities, small cationic radii, and often high charges (+3 or more). Nitrogen is a poor oxidising agent (N2 + 6e− → 2NH3 = −0.057 V at pH 0). Only when it is in a positive oxidation state, that is, in combination with oxygen or fluorine, are its compounds good oxidising agents, for example, 2NO3− → N2 = 1.25 V. Its chemistry is largely covalent in nature; anion formation is energetically unfavourable owing to strong inter-electron repulsions associated with having three unpaired electrons in its outer valence shell, hence its negative electron affinity. The common oxide of nitrogen (NO) is weakly acidic. Many compounds of nitrogen are less stable than diatomic nitrogen, so nitrogen atoms in compounds seek to recombine if possible and release energy and nitrogen gas in the process, which can be leveraged for explosive purposes.
Phosphorus, in its most thermodynamically stable black form, is a lustrous and comparatively unreactive solid with a density of 2.69 g/cm3; it is soft (MH 2.0) and has a flaky habit. It sublimes at 620 °C. Black phosphorus has an orthorhombic crystalline structure (CN 3). It is a semiconductor with a band gap of 0.3 eV. It has a high ionisation energy (1011.8 kJ/mol), moderate electron affinity (72 kJ/mol), and moderate electronegativity (2.19). In comparison to nitrogen, phosphorus usually forms weak hydrogen bonds, and prefers to form complexes with metals having high electronegativities, large cationic radii, and often low charges (usually +1 or +2). Phosphorus is a poor oxidising agent (P + 3e− → PH3 = −0.046 V at pH 0 for the white form, −0.088 V for the red). Its chemistry is largely covalent in nature, noting it can form salt-like phosphides with highly electropositive metals. Compared to nitrogen, electrons have more space on phosphorus, which lowers their mutual repulsion and results in anion formation requiring less energy. The common oxide of phosphorus (P2O5) is a medium-strength acidic oxide.
When assessing periodicity in the properties of the elements it needs to be borne in mind that the quoted properties of phosphorus tend to be those of its least stable white form rather than, as is the case with all other elements, the most stable form. White phosphorus is the most common, industrially important, and easily reproducible allotrope. For those reasons it is the standard state of the element. Paradoxically, it is also thermodynamically the least stable, as well as the most volatile and reactive form. It gradually changes to red phosphorus. This transformation is accelerated by light and heat, and samples of white phosphorus almost always contain some red phosphorus and, accordingly, appear yellow. For this reason, white phosphorus that is aged or otherwise impure is also called yellow phosphorus. When exposed to oxygen, white phosphorus glows in the dark with a very faint tinge of green and blue. It is highly flammable and pyrophoric (self-igniting) upon contact with air. White phosphorus is soft as wax (MH 0.5), pliable, and can be cut with a knife, and has a density of 1.823 g/cm3. It melts at 44.15 °C and, if heated rapidly, boils at 280.5 °C; it otherwise remains solid and transforms to violet phosphorus at 550 °C. It has a body-centred cubic structure, analogous to that of manganese, with a unit cell comprising 58 P4 molecules. It is an insulator with a band gap of about 3.7 eV.
Arsenic is a grey, metallic-looking solid which is stable in dry air but develops a golden bronze patina in moist air, which blackens on further exposure. It has a density of 5.727 g/cm3, and is brittle and moderately hard (MH 3.5; more than aluminium; less than iron). Arsenic sublimes at 615 °C. It has a rhombohedral polyatomic crystalline structure (CN 3). Arsenic is a semimetal, with an electrical conductivity of around 3.9 × 104 S•cm−1 and a band overlap of 0.5 eV. It has a moderate ionisation energy (947 kJ/mol), moderate electron affinity (79 kJ/mol), and moderate electronegativity (2.18). Arsenic is a poor oxidising agent (As + 3e → AsH3 = −0.22 at pH 0). As a metalloid, its chemistry is largely covalent in nature, noting it can form brittle alloys with metals, and has an extensive organometallic chemistry. Most alloys of arsenic with metals lack metallic or semimetallic conductivity. The common oxide of arsenic (As2O3) is acidic but weakly amphoteric.
Antimony is a silver-white solid with a blue tint and a brilliant lustre. It is stable in air and moisture at room temperature. Antimony has a density of 6.697 g/cm3, and is moderately hard (MH 3.0; about the same as copper). It has a rhombohedral crystalline structure (CN 3). Antimony melts at 630.63 °C and boils at 1635 °C. It is a semimetal, with an electrical conductivity of around 3.1 × 104 S•cm−1 and a band overlap of 0.16 eV. Antimony has a moderate ionisation energy (834 kJ/mol), moderate electron affinity (101 kJ/mol), and moderate electronegativity (2.05). It is a poor oxidising agent (Sb + 3e → SbH3 = –0.51 at pH 0). As a metalloid, its chemistry is largely covalent in nature, noting it can form alloys with one or more metals such as aluminium, iron, nickel, copper, zinc, tin, lead and bismuth, and has an extensive organometallic chemistry. Most alloys of antimony with metals have metallic or semimetallic conductivity. The common oxide of antimony (Sb2O3) is amphoteric.
Oxygen is a colourless, odourless, and unpredictably reactive diatomic gas with a gaseous density of 1.429 × 10−3 g/cm3 (marginally heavier than air). It is generally unreactive at room temperature. Thus, sodium metal will "retain its metallic lustre for days in the presence of absolutely dry air and can even be melted (m.p. 97.82 °C) in the presence of dry oxygen without igniting". On the other hand, oxygen can react with many inorganic and organic compounds either spontaneously or under the right conditions (such as a flame, a spark, or ultraviolet light). It condenses to a pale blue liquid at −182.962 °C and freezes into a light blue solid at −218.79 °C. The solid form (density 0.0763 g/cm3) has a cubic crystalline structure and is soft and easily crushed. Oxygen is an insulator in all of its forms. It has a high ionisation energy (1313.9 kJ/mol), high electron affinity (141 kJ/mol), and high electronegativity (3.44). Oxygen is a strong oxidising agent (O2 + 4e → 2H2O = 1.23 V at pH 0). Metal oxides are largely ionic in nature.
Sulfur is a bright-yellow, moderately reactive solid. It has a density of 2.07 g/cm3 and is soft (MH 2.0) and brittle. It melts to a light yellow liquid at 95.3 °C and boils at 444.6 °C. Sulfur has an abundance on Earth one-tenth that of oxygen. It has an orthorhombic polyatomic (CN 2) crystalline structure. Sulfur is an insulator with a band gap of 2.6 eV, and a photoconductor, meaning its electrical conductivity increases a million-fold when illuminated. Sulfur has a moderate ionisation energy (999.6 kJ/mol), high electron affinity (200 kJ/mol), and high electronegativity (2.58). It is a poor oxidising agent (S8 + 2e− → H2S = 0.14 V at pH 0). The chemistry of sulfur is largely covalent in nature, noting it can form ionic sulfides with highly electropositive metals. The common oxide of sulfur (SO3) is strongly acidic.
Selenium is a metallic-looking, moderately reactive solid with a density of 4.81 g/cm3 and is soft (MH 2.0) and brittle. It melts at 221 °C to a black liquid and boils at 685 °C to a dark yellow vapour. Selenium has a hexagonal polyatomic (CN 2) crystalline structure. It is a semiconductor with a band gap of 1.7 eV, and a photoconductor, meaning its electrical conductivity increases a million-fold when illuminated. Selenium has a moderate ionisation energy (941.0 kJ/mol), high electron affinity (195 kJ/mol), and high electronegativity (2.55). It is a poor oxidising agent (Se + 2e− → H2Se = −0.082 V at pH 0). The chemistry of selenium is largely covalent in nature, noting it can form ionic selenides with highly electropositive metals. The common oxide of selenium (SeO3) is strongly acidic.
Tellurium is a silvery-white, moderately reactive, shiny solid, that has a density of 6.24 g/cm3 and is soft (MH 2.25) and brittle. It is the softest of the commonly recognised metalloids. Tellurium reacts with boiling water, or when freshly precipitated even at 50 °C, to give the dioxide and hydrogen: Te + 2 H2O → TeO2 + 2 H2. It has a melting point of 450 °C and a boiling point of 988 °C. Tellurium has a polyatomic (CN 2) hexagonal crystalline structure. It is a semiconductor with a band gap of 0.32 to 0.38 eV. Tellurium has a moderate ionisation energy (869.3 kJ/mol), high electron affinity (190 kJ/mol), and moderate electronegativity (2.1). It is a poor oxidising agent (Te + 2e− → H2Te = −0.45 V at pH 0). The chemistry of tellurium is largely covalent in nature, noting it has an extensive organometallic chemistry and that many tellurides can be regarded as metallic alloys. The common oxide of tellurium (TeO2) is amphoteric.
Fluorine is an extremely toxic and reactive pale yellow diatomic gas that, with a gaseous density of 1.696 × 10−3 g/cm3, is about 40% heavier than air. Its extreme reactivity is such that it was not isolated (via electrolysis) until 1886 and was not isolated chemically until 1986. Its occurrence in an uncombined state in nature was first reported in 2012, but is contentious. Fluorine condenses to a pale yellow liquid at −188.11 °C and freezes into a colourless solid at −219.67 °C. The solid form (density 1.7 g/cm3) has a cubic crystalline structure and is soft and easily crushed. Fluorine is an insulator in all of its forms. It has a high ionisation energy (1681 kJ/mol), high electron affinity (328 kJ/mol), and high electronegativity (3.98). Fluorine is a powerful oxidising agent (F2 + 2e → 2HF = 2.87 V at pH 0); "even water, in the form of steam, will catch fire in an atmosphere of fluorine". Metal fluorides are generally ionic in nature.
Chlorine is an irritating green-yellow diatomic gas that is extremely reactive, and has a gaseous density of 3.2 × 10−3 g/cm3 (about 2.5 times heavier than air). It condenses at −34.04 °C to an amber-coloured liquid and freezes at −101.5 °C into a yellow crystalline solid. The solid form (density 1.9 g/cm3) has an orthorhombic crystalline structure and is soft and easily crushed. Chlorine is an insulator in all of its forms. It has a high ionisation energy (1251.2 kJ/mol), high electron affinity (349 kJ/mol; higher than fluorine), and high electronegativity (3.16). Chlorine is a strong oxidising agent (Cl2 + 2e → 2HCl = 1.36 V at pH 0). Metal chlorides are largely ionic in nature. The common oxide of chlorine (Cl2O7) is strongly acidic.
Bromine is a deep brown diatomic liquid that is quite reactive, and has a liquid density of 3.1028 g/cm3. It boils at 58.8 °C and solidifies at −7.3 °C to an orange crystalline solid (density 4.05 g/cm3). It is the only element, apart from mercury, known to be a liquid at room temperature. The solid form, like chlorine, has an orthorhombic crystalline structure and is soft and easily crushed. Bromine is an insulator in all of its forms. It has a high ionisation energy (1139.9 kJ/mol), high electron affinity (324 kJ/mol), and high electronegativity (2.96). Bromine is a strong oxidising agent (Br2 + 2e → 2HBr = 1.07 V at pH 0). Metal bromides are largely ionic in nature. The unstable common oxide of bromine (Br2O5) is strongly acidic.
Iodine, the rarest of the nonmetallic halogens, is a metallic-looking solid that is moderately reactive, and has a density of 4.933 g/cm3. It melts at 113.7 °C to a brown liquid and boils at 184.3 °C to a violet-coloured vapour. It has an orthorhombic crystalline structure with a flaky habit. Iodine is a semiconductor in the direction of its planes, with a band gap of about 1.3 eV and a conductivity of 1.7 × 10−8 S•cm−1 at room temperature. This is higher than selenium but lower than boron, the least electrically conducting of the recognised metalloids. Iodine is an insulator in the direction perpendicular to its planes. It has a high ionisation energy (1008.4 kJ/mol), high electron affinity (295 kJ/mol), and high electronegativity (2.66). Iodine is a moderately strong oxidising agent (I2 + 2e → 2I− = 0.53 V at pH 0). Metal iodides are predominantly ionic in nature. The only stable oxide of iodine (I2O5) is strongly acidic.
Astatine is expected to have properties intermediate between iodine, a nonmetal with incident metallic properties, and tennessine, which is predicted to be a metal. Astatine has not so far been synthesised in sufficient quantities to enable a determination of its bulk properties. A macro-sized sample of astatine would vaporise itself due to radioactive heating; it is not known if such a phenomenon could be prevented with sufficient cooling. Many of the properties of astatine have nevertheless been predicted. It is expected to have a metallic appearance, a density of 6.35±0.15 g/cm3, a melting point of 302 °C, a boiling point of 337 °C(?), and a face-centred cubic crystalline structure. It has a moderate ionisation energy (899.003 kJ/mol), and is expected to have a high electron affinity (222 kJ/mol), and moderate electronegativity (2.2). Astatine is a weak oxidizing agent (At + e → At− = 0.3 V at pH 0).
Helium has a density of 0.1785 × 10−3 g/cm3 (cf. air 1.225 × 10−3 g/cm3), liquefies at −268.928 °C, and cannot be solidified at normal pressure. It has the lowest boiling point of all of the elements. Liquid helium exhibits superfluidity and near-zero viscosity; its thermal conductivity is greater than that of any other known substance (more than 1,000 times that of copper). Helium can only be solidified at −272.20 °C under a pressure of 2.5 MPa. It has a very high ionisation energy (2372.3 kJ/mol), low electron affinity (estimated at −50 kJ/mol), and very high electronegativity (5.5 AR). No normal compounds of helium have so far been synthesised.
Neon has a density of 0.9002 × 10−3 g/cm3, liquefies at −245.95 °C, and solidifies at −248.45 °C. It has the narrowest liquid range of any element and, in liquid form, has over 40 times the refrigerating capacity of liquid helium and three times that of liquid hydrogen. Neon has a very high ionisation energy (2080.7 kJ/mol), low electron affinity (estimated at −120 kJ/mol), and very high electronegativity (4.84 AR). It is the least reactive of the noble gases; no normal compounds of neon have so far been synthesised.
Argon has a density of 1.784 × 10−3 g/cm3, liquefies at −185.848 °C, and solidifies at −189.34 °C. Although non-toxic, it is 38% denser than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because (like all the noble gases) it is colourless, odourless, and tasteless. Argon has a high ionisation energy (1520.6 kJ/mol), low electron affinity (estimated at −96 kJ/mol), and high electronegativity (3.2 AR). One interstitial compound of argon, Ar1C60, is a stable solid at room temperature.
Krypton has a density of 3.749 × 10−3 g/cm3, liquefies at −153.415 °C, and solidifies at −157.37 °C. It has a high ionisation energy (1350.8 kJ/mol), low electron affinity (estimated at −60 kJ/mol), and high electronegativity (2.94 AR). Krypton can be reacted with fluorine to form the difluoride, KrF2. The reaction of KrF2 with B(OTeF5)3 produces an unstable compound, Kr(OTeF5)2, that contains a krypton–oxygen bond.
Xenon has a density of 5.894 × 10−3 g/cm3, liquefies at −108.1 °C, and solidifies at −111.75 °C. It is non-toxic, and belongs to a select group of substances that penetrate the blood–brain barrier, causing mild to full surgical anesthesia when inhaled in high concentrations with oxygen. Xenon has a high ionisation energy (1170.4 kJ/mol), low electron affinity (estimated at −80 kJ/mol), and high electronegativity (2.4 AR). It forms a relatively large number of compounds, mostly containing fluorine or oxygen. An unusual ion containing xenon is the tetraxenonogold(II) cation, AuXe42+, which contains Xe–Au bonds. This ion occurs in the compound AuXe4(Sb2F11)2, and is remarkable in having direct chemical bonds between two notoriously unreactive atoms, xenon and gold, with xenon acting as a transition metal ligand. The compound Xe2Sb4F21 contains a Xe–Xe bond, the longest element-element bond known (308.71 pm = 3.0871 Å). The most common oxide of xenon (XeO3) is strongly acidic.
Radon, which is radioactive, has a density of 9.73 × 10−3 g/cm3, liquefies at −61.7 °C, and solidifies at −71 °C. It has a high ionisation energy (1037 kJ/mol), low electron affinity (estimated at −70 kJ/mol), and moderate electronegativity (2.06 AR). The only confirmed compounds of radon, which is the rarest of the naturally occurring noble gases, are the difluoride RnF2, and trioxide, RnO3. It has been reported that radon is capable of forming a simple Rn2+ cation in halogen fluoride solution, which is highly unusual behaviour for a nonmetal, and a noble gas at that. Radon trioxide (RnO3) is expected to be acidic.
Oganesson, the heaviest element on the periodic table, has only recently been synthesized. Owing to its short half-life, its chemical properties have not yet been investigated. Due to the significant relativistic destabilisation of the 7p3/2 orbitals, it is expected to be significantly reactive and behave more similarly to the group 14 elements, as it effectively has four valence electrons outside a pseudo-noble gas core. Its boiling point is expected to be about 80±30 °C, so that it is probably neither noble nor a gas; as a liquid it is expected to have a density of about 5 g/cm3. It is expected to have a barely positive electron affinity (estimated as 5 kJ/mol) and a moderate ionisation energy of about 860 kJ/mol, which is rather low for a nonmetal and close to those of the metalloids tellurium and astatine. The oganesson fluorides OgF2 and OgF4 are expected to show significant ionic character, suggesting that oganesson may have at least incipient metallic properties. The oxides of oganesson, OgO and OgO2, are predicted to be amphoteric.
Some pairs of nonmetals show additional relationships, beyond those associated with group membership.
Hydrogen in group 1, and carbon in group 14, show some out-of-group similarities. These include proximity in ionization energies, electron affinities and electronegativity values; half-filled valence shells; and correlations between the chemistry of H–H and C–H bonds.
Just as the metalloids cluster along a diagonal path, similar diagonal relationships occur between carbon and phosphorus, and between nitrogen and sulfur.
Carbon and phosphorus represent an example of a less-well known diagonal relationship, especially in organic chemistry. "Spectacular" evidence of this relationship was provided in 1987 with the synthesis of a ferrocene-like molecule in which six of the carbon atoms were replaced by phosphorus atoms. Further illustrating the theme is the "extraordinary" similarity between low coordinate phosphorus compounds and unsaturated carbon compounds, and related research into organophosphorus chemistry.
Nitrogen and sulfur have a less-well known diagonal relationship, manifested in like charge densities and electronegativities (the latter are identical if only the p electrons are counted; see Hinze and Jaffe 1962) especially when sulfur is bonded to an electron-withdrawing group. They are able to form an extensive series of seemingly interchangeable sulfur nitrides, the most famous of which, polymeric sulfur nitride, is metallic, and a superconductor below 0.26 K. The aromatic nature of the S3N22+ ion, in particular, serves as an "exemplar" of the similarity of electronic energies between the two nonmetals.
Fluorine and oxygen share the ability to often bring out the highest oxidation states among the elements.
"Chlorination reactions have many similarities to oxidation reactions. They tend not to be limited to thermodynamic equilibrium and often go to complete chlorination. The reactions are often highly exothermic. Chlorine, like oxygen, forms flammable mixtures with organic compounds."
The chemistry of iodine in its oxidation states of +1, +3, +5, and +7 is analogous to that of xenon in an immediately higher oxidation state.
Many nonmetals have less stable allotropes, with either nonmetallic or metallic properties. Graphite, the standard state of carbon, has a lustrous appearance and is a fairly good electrical conductor. The diamond allotrope of carbon is clearly nonmetallic, however, being translucent and having a relatively poor electrical conductivity. Carbon is also known in several other allotropic forms, including semiconducting buckminsterfullerene (C60). Nitrogen can form gaseous tetranitrogen (N4), an unstable polyatomic molecule with a lifetime of about one microsecond. Oxygen is a diatomic molecule in its standard state; it also exists as ozone (O3), an unstable nonmetallic allotrope with a half-life of around half an hour. Phosphorus, uniquely, exists in several allotropic forms that are more stable than its standard state, white phosphorus (P4). The red and black allotropes are probably the best known; both are semiconductors. Phosphorus is also known as diphosphorus (P2), an unstable diatomic allotrope. Sulfur has more allotropes than any other element; all of these, except plastic sulfur (a metastable ductile mixture of allotropes), have nonmetallic properties. Selenium has several nonmetallic allotropes, all of which are much less electrically conducting than its standard state of grey "metallic" selenium. Iodine is also known in a semiconducting amorphous form. Under sufficiently high pressures, just over half of the nonmetals, starting with phosphorus at 1.7 GPa, have been observed to form metallic allotropes.
Most metalloids, like the less electronegative nonmetals, form allotropes. Boron is known in several crystalline and amorphous forms. The discovery of a quasispherical allotropic molecule, borospherene (B40), was announced in July 2014. Silicon was until recently known only in its crystalline and amorphous forms. Silicene, a two-dimensional allotrope of silicon with a hexagonal honeycomb structure similar to that of graphene, was observed in 2010. The synthesis of an orthorhombic allotrope, Si24, was subsequently reported in 2014. At pressures of ~10–11 GPa, germanium transforms to a metallic phase with the same tetragonal structure as tin; when decompressed—and depending on the speed of pressure release—metallic germanium forms a series of allotropes that are metastable at ambient conditions. Germanium also forms a graphene analogue, germanene. Arsenic and antimony form several well-known allotropes (yellow, grey, and black). Tellurium is known only in its crystalline and amorphous forms; astatine is not known to have any allotropes.
Abundance and extraction
Hydrogen and helium are estimated to make up approximately 99 per cent of all ordinary matter in the universe. Less than five per cent of the Universe is believed to be made of ordinary matter, represented by stars, planets and living beings. The balance is made of dark energy and dark matter, both of which are poorly understood at present.
Hydrogen, carbon, nitrogen, and oxygen constitute the great bulk of the Earth's atmosphere, oceans, crust, and biosphere; the remaining nonmetals have abundances of 0.5 per cent or less. In comparison, 35 per cent of the crust is made up of the metals sodium, magnesium, aluminium, potassium and iron; together with a metalloid, silicon. All other metals and metalloids have abundances within the crust, oceans or biosphere of 0.2 per cent or less.
Nonmetals and metalloids in their elemental forms are extracted from:
- brine: Cl, Br, I
- liquid air: N, O, Ne, Ar, Kr, Xe
- minerals: B (borate minerals); C (coal, diamond, graphite); F (fluorite); Si (silica); P (phosphates); Sb (stibnite, tetrahedrite); I (in sodium iodate NaIO3 and sodium iodide NaI)
- natural gas: H, He, S
- ores, as processing byproducts: Ge (zinc ores); As (copper and lead ores); Se, Te (copper ores); Rn (uranium-bearing ores)

Astatine is produced in minute quantities by irradiating bismuth.
Applications in common
- For prevalent and speciality applications of individual nonmetals see the main article for each element.
Nonmetals do not have any universal or near-universal applications. This is not the case with metals, most of which have structural uses; nor the metalloids, the typical uses of which extend to (for example) oxide glasses, alloying components, and semiconductors.
Shared applications of different subsets of the nonmetals instead encompass their presence in, or specific uses in, the following fields:
- cryogenics and refrigerants: H, He, N, O, F and Ne
- fertilisers: H, N, P, S, Cl (as a micronutrient) and Se
- household accoutrements: H (primary constituent of water), He (party balloons), C (in pencils, as graphite), N (beer widgets), O (as peroxide, in detergents), F (as fluoride, in toothpaste), Ne (lighting), P (matches), S (garden treatments), Cl (bleach constituent), Ar (insulated windows), Se (glass; solar cells), Br (as bromide, for purification of spa water), Kr (energy-saving fluorescent lamps), I (in antiseptic solutions), Xe (in plasma TV display cells); Rn also sometimes occurs, but then as an unwanted, potentially hazardous indoor pollutant
- industrial acids: C, N, F, P, S and Cl
- inert air replacements: N, Ne, S (in sulfur hexafluoride, SF6), Ar, Kr and Xe
- lasers and lighting: He, C (in carbon dioxide lasers, CO2), N, O (in a chemical oxygen iodine laser), F (in a hydrogen fluoride laser, HF), Ne, S (in a sulfur lamp), Ar, Kr and Xe
- medicine and pharmaceuticals: He, O, F, Cl, Br, I, Xe and Rn
The number of compounds formed by nonmetals is vast. The first nine places in a "top 20" table of elements most frequently encountered in 8,427,300 compounds, as listed in the Chemical Abstracts Service register for July 1987, were occupied by nonmetals. Hydrogen, carbon, oxygen and nitrogen were found in the majority (greater than 64 per cent) of compounds. Silicon, a metalloid, was in 10th place. The highest rated metal, with an occurrence frequency of 2.3 per cent, was iron, in 11th place.
Antiquity: C, S, (Sb)
Carbon, sulfur, and antimony were known in antiquity. The earliest known use of charcoal dates to around 3750 BCE. The Egyptians and Sumerians employed it for the reduction of copper, zinc, and tin ores in the manufacture of bronze. Diamonds were probably known from as early as 2500 BCE. The first true chemical analyses were made in the 18th century; Lavoisier recognized carbon as an element in 1789. Sulfur usage dates from before 2500 BCE; it was recognized as an element by Antoine Lavoisier in 1777. Antimony usage was concurrent with that of sulfur; the Louvre holds a 5,000-year-old vase made of almost pure antimony.
13th century: (As)
Albertus Magnus (Albert the Great, 1193–1280) is believed to have been the first to isolate arsenic from a compound, in 1250, by heating soap together with arsenic trisulfide. If so, arsenic was the first element to be chemically discovered.
17th century: P
Phosphorus was prepared from urine, by Hennig Brand, in 1669.
18th century: H, O, N, (Te), Cl
Hydrogen: Cavendish, in 1766, was the first to distinguish hydrogen from other gases, although Paracelsus around 1500, Robert Boyle (1670), and Joseph Priestley (?) had observed its production by reacting strong acids with metals. Lavoisier named it in 1793. Oxygen: Carl Wilhelm Scheele obtained oxygen by heating mercuric oxide and nitrates in 1771, but did not publish his findings until 1777. Priestley also prepared this new "air" by 1774, but only Lavoisier recognized it as a true element; he named it in 1777. Nitrogen: Rutherford discovered nitrogen while he was studying at the University of Edinburgh. He showed that the air in which animals breathed, after removal of exhaled carbon dioxide, was no longer able to burn a candle. Scheele, Henry Cavendish, and Priestley also studied this element at about the same time; Lavoisier named it in 1775 or 1776. Tellurium: In 1783, Franz-Joseph Müller von Reichenstein, who was then serving as the Austrian chief inspector of mines in Transylvania, concluded that a new element was present in a gold ore from the mines in Zlatna, near today's city of Alba Iulia, Romania. In 1789, a Hungarian scientist, Pál Kitaibel, discovered the element independently in an ore from Deutsch-Pilsen that had been regarded as argentiferous molybdenite, but later he gave the credit to Müller. In 1798, it was named by Martin Heinrich Klaproth, who had earlier isolated it from the mineral calaverite. Chlorine: In 1774, Scheele obtained chlorine from hydrochloric acid but thought it was an oxide. Only in 1808 did Humphry Davy recognize it as an element.
Early 19th century: (B) I, Se, (Si), Br
Boron was identified by Sir Humphry Davy in 1808 but not isolated in a pure form until 1909, by the American chemist Ezekiel Weintraub. Iodine was discovered in 1811 by Courtois from the ashes of seaweed. Selenium: In 1817, when Berzelius and Johan Gottlieb Gahn were working with lead they discovered a substance that was similar to tellurium. After more investigation Berzelius concluded that it was a new element, related to sulfur and tellurium. Because tellurium had been named for the Earth, Berzelius named the new element "selenium", after the moon. Silicon: In 1823, Berzelius prepared amorphous silicon by reducing potassium fluorosilicate with molten potassium metal. Bromine: Balard and Gmelin both discovered bromine in the autumn of 1825 and published their results in the following year.
Late 19th century: He, F, (Ge), Ar, Kr, Ne, Xe
Helium: In 1868, Janssen and Lockyer independently observed a yellow line in the solar spectrum that did not match that of any other element. In 1895, in each case at around the same time, Ramsay, Cleve, and Langlet independently observed helium trapped in cleveite. Fluorine: André-Marie Ampère predicted an element analogous to chlorine obtainable from hydrofluoric acid, and between 1812 and 1886 many researchers tried to obtain it. Fluorine was eventually isolated in 1886 by Moissan. Germanium: In mid-1885, at a mine near Freiberg, Saxony, a new mineral was discovered and named argyrodite because of its silver content. The chemist Clemens Winkler analyzed this new mineral, which proved to be a combination of silver, sulfur, and a new element, germanium, which he was able to isolate in 1886. Argon: Lord Rayleigh and Ramsay discovered argon in 1894 by comparing the molecular weights of nitrogen prepared by liquefaction from air, and nitrogen prepared by chemical means. It was the first noble gas to be isolated. Krypton, neon, and xenon: In 1898, within a period of three weeks, Ramsay and Travers successively separated krypton, neon and xenon from liquid argon by exploiting differences in their boiling points.
20th century: Rn, (At)
In 1898, Friedrich Ernst Dorn discovered a radioactive gas resulting from the radioactive decay of radium; Ramsay and Robert Whytlaw-Gray subsequently isolated radon in 1910. Astatine was synthesised in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè. They bombarded bismuth-209 with alpha particles in a cyclotron to produce, after emission of two neutrons, astatine-211.
- An ionisation energy of less than 750 kJ/mol is taken to be low, 750–1000 is moderate, and > 1000 is high (> 2000 is very high); an electron affinity of less than 70 kJ/mol is taken to be low, 70–140 is moderate, and > 140 is high; an electronegativity of less than 1.8 is taken to be low, 1.8–2.2 is moderate, and > 2.2 is high (> 4.0 is very high). These bands are applied in the sketch following these notes.
- Revised Pauling values are used for the metalloids, and reactive nonmetals; Allred-Rochow values for the noble gases
- The nonmetallic halogens (F, Cl, Br, I) readily form anions including in aqueous solution; the oxide ion O2− is unstable in aqueous solution—its affinity for H+ is so great that it abstracts a proton from a solvent H2O molecule (O2− + H2O → 2 OH−)—but is found in an extensive series of metal oxides
- The common oxide is the most stable oxide for that element
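The qualitative bands in the first note translate directly into simple range checks; a minimal sketch (the function names are ours, and only the thresholds stated in that note are assumed):

```python
def ionisation_band(energy_kj_mol: float) -> str:
    """Band a first ionisation energy (kJ/mol) per the note above."""
    if energy_kj_mol > 2000:
        return "very high"
    if energy_kj_mol > 1000:
        return "high"
    if energy_kj_mol >= 750:
        return "moderate"
    return "low"

def electronegativity_band(value: float) -> str:
    """Band an electronegativity value per the note above."""
    if value > 4.0:
        return "very high"
    if value > 2.2:
        return "high"
    if value >= 1.8:
        return "moderate"
    return "low"

# Helium's ionisation energy (2372.3 kJ/mol) and fluorine's electronegativity
# (3.98), as quoted in the element paragraphs, land in "very high" and "high".
print(ionisation_band(2372.3), electronegativity_band(3.98))
```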
Unless otherwise stated, melting points, boiling points, densities, crystalline structures, ionisation energies, electron affinities, and electronegativity values are from the CRC Handbook of Chemistry and Physics; standard electrode potentials are from the 1989 compilation by Steven Bratsch.
- Sukys 1999, p. 60.
- Bettelheim et al. 2016, p. 33.
- Schulze-Makuch & Irwin 2008, p. 89.
- Steurer 2007, p. 7.
- Cox 2004, p. 26
- Meyer et al. 2005, p. 284; Manahan 2001, p. 911; Szpunar et al. 2004, p. 17
- Brown & Rogers 1987, p. 40
- Kneen, Rogers & Simpson 1972, p. 262
- Greenwood & Earnshaw 2002, p. 434
- Bratsch 1989; Bard, Parsons & Jordan 1985, p. 133
- Yoder, Suydam & Snavely 1975, p. 58
- Kneen, Rogers & Simpson 1972, p. 360
- Lee 1996, p. 240
- Greenwood & Earnshaw 2002, p. 43
- Cressey 2010
- Siekierski & Burgess 2002, pp. 24–25
- Siekierski & Burgess 2002, p. 23
- Cox 2004, p. 146
- Kneen, Rogers & Simpson 1972, p. 362
- Bailar et al. 1989, p. 742
- Stein 1983, p. 165
- Jolly 1966, p. 20
- Clugston & Flemming 2000, pp. 100–1, 104–5, 302
- Seaborg 1969, p. 626
- Nash 2005
- Scerri 2013, pp. 204–8
- Challoner 2014, p. 5; Government of Canada 2015; Gargaud et al. 2006, p. 447
- Ivanenko et al. 2011, p. 784
- Catling 2013, p. 12
- Crawford 1968, p. 540
- Berkowitz 2012, p. 293
- Jørgensen & Mitsch 1983, p. 59
- Wulfsberg 1987, pp. 159–160
- Bettelheim et al. 2016, pp. 33–34
- Field & Gray 2011, p. 12; see also Myers, Oldham & Tocci 2004, pp. 120–121, who categorize nonmetals as hydrogen; semiconductors ("also known as metalloids"); less active nonmetals (C, N, O, P, S, Se); halogens; or noble gases
- Dingle 2017, pp. 101, 179
- Stein 1969; Pitzer 1975; Schrobilgen 2011
- Brasted 1974, p. 814
- Sidorov 1960
- Rochow 1966, p. 4
- Atkins et al. 2006, pp. 8, 122–23
- Ritter 2011, p. 10
- Wiberg 2001, p. 680
- Wiberg 2001, p. 403
- Greenwood & Earnshaw 2002, p. 612
- Moeller 1952, p. 208
- Cotton 2003, p. 205
- Wiberg 2001, p. 403
- Wulfsberg 1987, p. 159
- Cronyn 2003
- Rayner-Canham 2011, p. 126
- Dillon, Mathey & Nixon 1998
- Kent 2007, p. 104
- Cacace, de Petris & Troiani 2002
- Koziel 2002, p. 18
- Piro et al. 2006
- Steudel & Eckert 2003, p. 1
- Greenwood & Earnshaw 2002, pp. 659–660
- Moss 1952, p. 192; Greenwood & Earnshaw 2002, p. 751
- Shanabrook, Lannin & Hisatsune 1981
- Yousuf 1998, p. 425
- Ostriker & Steinhardt 2001
- Nelson 1987, p. 732
- Emsley 2001, p. 428
- Bolin 2012, p. 2-1
- Maroni 1995
- King & Caldwell 1954, p. 17; Brady & Senese 2009, p. 69
- Nelson 1987, p. 735
- Lide 2003
- Bratsch 1989
- Addison WE 1964, The allotropy of the elements, Oldbourne Press, London
- Arunan E, Desiraju GR, Klein RA, Sadlej J, Scheiner S, Alkorta I, Clary DC, Crabtree RH, Dannenberg JJ, Hobza P, Kjaergaard HG, Legon AC, Mennucci B & Nesbitt DJ 2011, "Defining the hydrogen bond: An account (IUPAC Technical Report)", Pure and Applied Chemistry, vol. 83, no. 8, pp. 1619–36, doi:10.1351/PAC-REP-10-01-01
- Ashford TA 1967, The physical sciences: From atoms to stars, 2nd ed., Holt, Rinehart and Winston, New York
- Atkins P & de Paula J 2011, Physical chemistry for the life sciences, 2nd ed., Oxford University Press, Oxford, ISBN 978-1429231145
- Aylward G & Findlay T 2008, SI chemical data, 6th ed., John Wiley & Sons Australia, Milton, Queensland
- Bailar JC, Moeller T, Kleinberg J, Guss CO, Castellion ME & Metz C 1989, Chemistry, 3rd ed., Harcourt Brace Jovanovich, San Diego, ISBN 0-15-506456-8
- Ball P 2013, "The name's bond", Chemistry World, vol. 10, no. 6, p. 41
- Bard AJ, Parsons R & Jordan J 1985, Standard potentials in aqueous solution, Marcel Dekker, New York, ISBN 978-0-8247-7291-8
- Berkowitz J 2012, The stardust revolution: The new story of our origin in the stars, Prometheus Books, Amherst, New York, ISBN 978-1-61614-549-1
- Bettelheim FA, Brown WH, Campbell MK, Farrell SO 2010, Introduction to general, organic, and biochemistry, 9th ed., Brooks/Cole, Belmont California, ISBN 9780495391128
- Bettelheim FA, Brown WH, Campbell MK, Farrell SO & Torres OJ 2016, Introduction to general, organic, and biochemistry, 11th ed., Cengage Learning, Boston, ISBN 978-1-285-86975-9
- Bogoroditskii NP & Pasynkov VV 1967, Radio and electronic materials, Iliffe Books, London
- Bolin P 2000, "Gas-insulated substations", in JD McDonald (ed.), Electric power substations engineering, 3rd ed., CRC Press, Boca Raton, FL, pp. 2-1–2-19, ISBN 9781439856383
- Borg RJ & Dienes GJ 1992, The physical chemistry of solids, Academic Press, San Diego, California, ISBN 9780121184209
- Brady JE & Senese F 2009, Chemistry: The study of matter and its changes, 5th ed., John Wiley & Sons, New York, ISBN 9780470576427
- Bratsch SG 1989, "Standard electrode potentials and temperature coefficients in water at 298.15 K", Journal of Physical and Chemical Reference Data, vol. 18, no. 1, pp. 1–21, doi:10.1063/1.555839
- Brown WH & Rogers EP 1987, General, organic and biochemistry, 3rd ed., Brooks/Cole, Monterey, California, ISBN 0534068707
- Bryson PD 1989, Comprehensive review in toxicology, Aspen Publishers, Rockville, Maryland, ISBN 0871897776
- Bunge AV & Bunge CF 1979, "Electron affinity of helium (1s2s)3S", Physical Review A, vol. 19, no. 2, pp. 452–456, doi:10.1103/PhysRevA.19.452
- Cacace F, de Petris G & Troiani A 2002, "Experimental detection of tetranitrogen", Science, vol. 295, no. 5554, pp. 480–81, doi:10.1126/science.1067681
- Cairns D 2012, Essentials of pharmaceutical chemistry, 4th ed., Pharmaceutical Press, London, ISBN 9780853699798
- Cambridge Enterprise 2013, "Carbon 'candy floss' could help prevent energy blackouts", Cambridge University, viewed 28 August 2013
- Catling DC 2013, Astrobiology: A very short introduction, Oxford University Press, Oxford, ISBN 978-0-19-958645-5
- Challoner J 2014, The elements: The new guide to the building blocks of our universe, Carlton Publishing Group, ISBN 978-0-233-00436-5
- Chapman B & Jarvis A 2003, Organic chemistry, kinetics and equilibrium, rev. ed., Nelson Thornes, Cheltenham, ISBN 978-0-7487-7656-6
- Chung DD 1987, "Review of exfoliated graphite", Journal of Materials Science, vol. 22, pp. 4190–98, doi:10.1007/BF01132008
- Clugston MJ & Flemming R 2000, Advanced chemistry, Oxford University Press, Oxford, ISBN 9780199146338
- Conroy EH 1968, "Sulfur", in CA Hampel (ed.), The encyclopedia of the chemical elements, Reinhold, New York, pp. 665–680
- Cotton FA, Darlington C & Lynch LD 1976, Chemistry: An investigative approach, Houghton Mifflin, Boston, ISBN 978-0-395-21671-2
- Cotton S 2006, Lanthanide and actinide chemistry, 2nd ed., John Wiley & Sons, New York, ISBN 9780470010068
- Cox T 2004, Inorganic chemistry, 2nd ed., BIOS Scientific Publishers, London, ISBN 1-85996-289-0
- Cracolice MS & Peters EI 2011, Basics of introductory chemistry: An active learning approach, 2nd ed., Brooks/Cole, Belmont California, ISBN 9780495558507
- Crawford FH 1968, Introduction to the science of physics, Harcourt, Brace & World, New York
- Cressey 2010, "Chemists re-define hydrogen bond", Nature newsblog, accessed 23 August 2017
- Cronyn MW 2003, "The proper place for hydrogen in the periodic table", Journal of Chemical Education, vol. 80, no. 8, pp. 947–951, doi:10.1021/ed080p947
- Daniel PL & Rapp RA 1976, "Halogen corrosion of metals", in MG Fontana & RW Staehle (eds), Advances in corrosion science and technology, Springer, Boston, pp. 55–172, doi:10.1007/978-1-4615-9062-0_2
- DeKock RL & Gray HB 1989, Chemical structure and bonding, 2nd ed., University Science Books, Mill Valley, California, ISBN 093570261X
- Desch CH 1914, Intermetallic Compounds, Longmans, Green and Co., New York
- Dias RP, Yoo C, Kim M & Tse JS 2011, "Insulator-metal transition of highly compressed carbon disulfide," Physical Review B, vol. 84, pp. 144104–1–6, doi:10.1103/PhysRevB.84.144104
- Dillon KB, Mathey F & Nixon JF 1998, Phosphorus: The carbon copy: From organophosphorus to phospha-organic chemistry, John Wiley & Sons, Chichester
- Dingle A 2017, The elements: An encyclopedic tour of the periodic table, Quad Books, Brighton, ISBN 978-0-85762-505-2
- Donohue J 1982, The structures of the elements, Robert E. Krieger, Malabar, Florida, ISBN 0-89874-230-7
- Eagleson M 1994, Concise encyclopedia chemistry, Walter de Gruyter, Berlin, ISBN 3110114518
- Eastman ED, Brewer L, Bromley LA, Gilles PW, Lofgren NL 1950, "Preparation and properties of refractory cerium sulfides", Journal of the American Chemical Society, vol. 72, no. 5, pp. 2248–50, doi:10.1021/ja01161a102
- Emsley J 1971, The inorganic chemistry of the non-metals, Methuen Educational, London, ISBN 0423861204
- Emsley J 2001, Nature's building blocks: An A–Z guide to the elements, Oxford University Press, Oxford, ISBN 0198503415
- Faraday M 1853, The subject matter of a course of six lectures on the non-metallic elements, (arranged by J Scoffern), Longman, Brown, Green, and Longmans, London
- Field SQ & Gray T 2011, Theodore Gray's elements vault, Black Dog & Leventhal Publishers, New York, ISBN 978-1-57912-880-7
- Finney J 2015, Water: A Very Short Introduction, Oxford University Press, Oxford, ISBN 978-0198708728,
- Fujimori T, Morelos-Gómez A, Zhu Z, Muramatsu H, Futamura R, Urita K, Terrones M, Hayashi T, Endo M, Hong SY, Choi YC, Tománek D & Kaneko K 2013, "Conducting linear chains of sulphur inside carbon nanotubes", Nature Communications, vol. 4, article no. 2162, doi:10.1038/ncomms3162
- Gargaud M, Barbier B, Martin H & Reisse J (eds) 2006, Lectures in astrobiology, vol. 1, part 1: The early Earth and other cosmic habitats for life, Springer, Berlin, ISBN 3-540-29005-2
- Government of Canada 2015, Periodic table of the elements, accessed 30 August 2015
- Godfrin H & Lauter HJ 1995, "Experimental properties of 3He adsorbed on graphite", in WP Halperin (ed.), Progress in low temperature physics, volume 14, pp. 213–320 (216–8), Elsevier Science B.V., Amsterdam, ISBN 9780080539935
- Greenwood NN & Earnshaw A 2002, Chemistry of the elements, 2nd ed., Butterworth-Heinemann, ISBN 0750633654
- Henderson W 2000, Main group chemistry, Royal Society of Chemistry, Cambridge, ISBN 9780854046171
- Holderness A & Berry M 1979, Advanced level inorganic chemistry, 3rd ed., Heinemann Educational Books, London, ISBN 9780435654351
- Irving KE 2005, "Using chime simulations to visualize molecules", in RL Bell & J Garofalo (eds), Science units for Grades 9–12, International Society for Technology in Education, Eugene, Oregon, ISBN 9781564842176
- Ivanenko NB, Ganeev AA, Solovyev ND & Moskvin LN 2011, "Determination of trace elements in biological fluids", Journal of Analytical Chemistry, vol. 66, no. 9, pp. 784–799 (784), doi:10.1134/S1061934811090036
- Jenkins GM & Kawamura K 1976, Polymeric carbons—carbon fibre, glass and char, Cambridge University Press, Cambridge, ISBN 0521206936
- Jolly WL 1966, The chemistry of the non-metals, Prentice-Hall, Englewood Cliffs, New Jersey
- Jones WN 1969, Textbook of general chemistry, C. V. Mosby Company, St Louis, ISBN 978-0-8016-2584-8
- Jorgensen CK 2012, Oxidation numbers and oxidation states, Springer-Verlag, Berlin, ISBN 978-3-642-87760-5
- Jørgensen SE & Mitsch WJ (eds) 1983, Application of ecological modelling in environmental management, part A, Elsevier Science Publishing, Amsterdam, ISBN 0-444-42155-6
- Keith JA & Jacob T 2010, "Computational simulations on the oxygen reduction reaction in electrochemical systems", in PB Balbuena & VR Subramanian, Theory and experiment in electrocatalysis, Modern aspects of electrochemistry, vol. 50, Springer, New York, pp. 89–132, ISBN 978-1-4419-5593-7
- Kent JA 2007, Kent and Riegel's Handbook of industrial chemistry and biotechnology, 11th ed., vol. 1, Spring Science + Business Media, New York, ISBN 978-0-387-27842-1
- King RB 2004, "The metallurgist's periodic table and the Zintl-Klemm concept", in DH Rouvray & BR King (eds), The periodic table: into the 21st century, Research Studies Press, Philadelphia, pp. 189–206, ISBN 0863802923
- King GB & Caldwell WE 1954, The fundamentals of college chemistry, American Book Company, New York
- Kneen WR, Rogers MJW & Simpson P 1972, Chemistry: Facts, patterns, and principles, Addison-Wesley, London, ISBN 0201037793
- Koziel JA 2002, "Sampling and sample preparation for indoor air analysis", in J Pawliszyn (ed.), Comprehensive analytical chemistry, vol. 37, Elsevier Science B.V., Amsterdam, pp. 1–32, ISBN 0444505105
- Krikorian OH & Curtis PG 1988, "Synthesis of CeS and interactions with molten metals", High Temperatures-High Pressures, vol. 20, pp. 9–17, ISSN 0018-1544
- Labes MM, Love P & Nichols LF 1979, "Polysulfur nitride—a metallic, superconducting polymer", Chemical Review, vol. 79, no. 1, pp. 1–15, doi:10.1021/cr60317a002
- Lee JD 1996, Concise inorganic chemistry, 5th ed., Blackwell Science, Oxford, ISBN 978-0-6320-5293-6
- Lide DR (ed.) 2003, CRC handbook of chemistry and physics, 84th ed., CRC Press, Boca Raton, Florida, Section 6, Fluid properties; Vapor pressure, ISBN 0849304849
- Manahan SE 2001, Fundamentals of environmental chemistry, 2nd ed., CRC Press, Boca Raton, Florida, ISBN 156670491X
- Maroni M, Seifert B & Lindvall T (eds) 1995, "Physical pollutants", in Indoor air quality: A comprehensive reference book, Elsevier, Amsterdam, pp. 108–123, ISBN 0444816429
- Martin RM & Lander GD 1946, Systematic inorganic chemistry: From the standpoint of the periodic law, 6th ed., Blackie & Son, London
- McCall BJ & Oka T 2003, "Enigma of H3+ in diffuse interstellar clouds", in SL Guberman (ed.), Dissociative recombination of molecular ions with electrons, Springer Science+Business Media, New York, ISBN 978-1-4613-4915-0
- McMillan PF 2006, "Solid-state chemistry: A glass of carbon dioxide", Nature, vol. 441, p. 823, doi:10.1038/441823a
- Merchant SS & Helmann JD 2012, "Elemental economy: Microbial strategies for optimizing growth in the face of nutrient limitation", in Poole RK (ed), Advances in Microbial Physiology, vol. 60, pp. 91–210, doi:10.1016/B978-0-12-398264-3.00002-4
- Meyer JS, Adams WJ, Brix KV, Luoma SM, Mount DR, Stubblefield WA & Wood CM (eds) 2005, Toxicity of dietborne metals to aquatic organisms, Proceedings from the Pellston Workshop on Toxicity of Dietborne Metals to Aquatic Organisms, 27 July–1 August 2002, Fairmont Hot Springs, British Columbia, Canada, Society of Environmental Toxicology and Chemistry, Pensacola, Florida, ISBN 1880611708
- Miller T 1987, Chemistry: a basic introduction, 4th ed., Wadsworth, Belmont, California, ISBN 0534069126
- Mitchell JBA & McGowan JW 1983, "Experimental studies of electron-ion combination", in F Brouillard & JW McGowan (eds), Physics of ion-ion and electron-ion collisions, Plenum Press, ISBN 978-1-4613-3547-4
- Mitchell SC 2006, "Biology of sulfur", in SC Mitchell (ed.), Biological interactions of sulfur compounds, Taylor & Francis, London, pp. 20–41, ISBN 0203375122
- Moeller T 1952, Inorganic chemistry: An advanced textbook, John Wiley & Sons, New York
- Moss T 1952, Photoconductivity in the elements, Butterworths Scientific Publications, London
- Murray PRS & Dawson PR 1976, Structural and comparative inorganic chemistry: A modern approach for schools and colleges, Heinemann Educational Book, London, ISBN 9780435656447
- Myers RT, Oldham KB & Tocci S 2004, Holt Chemistry, teacher ed., Holt, Rinehart & Winston, Orlando, ISBN 0-03-066463-2
- Nash CS 2005, "Atomic and molecular properties of elements 112, 114, and 118", Journal of Physical Chemistry A, vol. 109, pp. 3493–500, doi:10.1021/jp050736o
- Nelson PG 1987, "Important elements", Journal of Chemical Education, vol. 68, no. 9, pp. 732–737, doi:10.1021/ed068p732
- Nelson PG 1998, "Classifying substances by electrical character: An alternative to classifying by bond type", Journal of Chemical Education, vol. 71, no. 1, pp. 24–6, doi:10.1021/ed071p24
- Novak A 1979, "Vibrational spectroscopy of hydrogen bonded systems", in TM Theophanides (ed.), Infrared and Raman spectroscopy of biological molecules, proceedings of the NATO Advanced Study Institute held at Athens, Greece, August 22–31, 1978, D. Reidel Publishing Company, Dordrecht, Holland, pp. 279–304, ISBN 9027709661
- Oka T 2006, "Interstellar H3+", PNAS, vol. 103, no. 33, doi:10.1073/pnas.0601242103
- Ostriker JP & Steinhardt PJ 2001, "The quintessential universe", Scientific American, January, pp. 46–53
- Oxtoby DW, Gillis HP & Campion A 2008, Principles of modern chemistry, 6th ed., Thomson Brooks/Cole, Belmont, California, ISBN 0534493661
- Partington JR 1944, A text-book of inorganic chemistry, 5th ed., Macmillan & Co., London
- Patil UN, Dhumal NR & Gejji SP 2004, "Theoretical studies on the molecular electron densities and electrostatic potentials in azacubanes", Theoretica Chimica Acta, vol. 112, no. 1, pp 27–32, doi:10.1007/s00214-004-0551-2
- Patten MN 1989, Other metals and some related materials, in MN Patten (ed.), Information sources in metallic materials, Bowker-Saur, London, ISBN 0408014911
- Patterson CS, Kuper HS & Nanney TR 1967, Principles of chemistry, Appleton Century Crofts, New York
- Pearson WB 1972, The crystal chemistry and physics of metals and alloys, Wiley-Interscience, New York, ISBN 0-471-67540-7
- Pearson RG & Mawby RJ 1967, "The nature of metal–halogen bonds", in V Gutmann (ed.), Halogen chemistry, Academic Press, pp. 55–84
- Phifer C 2000, "Ceramics, glass structure and properties", in Kirk-Othmer Encyclopedia of Chemical Technology, doi:10.1002/0471238961.0712011916080906.a01
- Phillips CSG & Williams RJP 1965, Inorganic chemistry, I: Principles and non-metals, Clarendon Press, Oxford
- Piro NA, Figueroa JS, McKellar JT & Troiani CC 2006, "Triple-bond reactivity of diphosphorus molecules", Science, vol. 313, no. 5791, pp. 1276–9, doi:10.1126/science.1129630
- Pitzer K 1975, "Fluorides of radon and elements 118", Journal of the Chemical Society, Chemical Communications, no. 18, pp. 760–1, doi:10.1039/C3975000760B
- Raju GG 2005, Gaseous Electronics: Theory and Practice, CRC Press, Boca Raton, Florida, ISBN 978-0-203-02526-0
- Rao KY 2002, Structural chemistry of glasses, Elsevier, Oxford, ISBN 0080439586
- Rayner-Canham G 2011, "Isodiagonality in the periodic table", Foundations of Chemistry, vol. 13, no. 2, pp. 121–129, doi:10.1007/s10698-011-9108-y
- Rayner-Canham G & Overton T 2006, Descriptive inorganic chemistry, 4th ed., WH Freeman, New York, ISBN 0716789639
- Regnault MV 1853, Elements of chemistry, vol. 1, 2nd ed., Clark & Hesser, Philadelphia
- Ritter SK 2011, "The case of the missing xenon", Chemical & Engineering News, vol. 89, no. 9, ISSN 0009-2347
- Rochow EG 1966, The Metalloids, DC Heath and Company, Boston
- Rodgers GE 2012, Descriptive inorganic, coordination, & solid-state chemistry, 3rd ed., Brooks/Cole, Belmont, California, ISBN 9780840068460
- Russell AM & Lee KL 2005, Structure-property relations in nonferrous metals, Wiley-Interscience, New York, ISBN 047164952X
- Scerri E 2013, A tale of seven elements, Oxford University Press, Oxford, ISBN 9780195391312
- Schaefer JC 1968, "Boron" in CA Hampel (ed.), The encyclopedia of the chemical elements, Reinhold, New York, pp. 73–81
- Scharfe ME & Schmidlin FW 1975, "Charged pigment xerography", in L Marton (ed.), Advances in Electronics and Electron Physics, vol. 38, Academic Press, New York, ISBN 0-12-014538-3, pp. 93–147
- Schrobilgen GJ 2011, "radon (Rn)", in Encyclopædia Britannica, accessed 7 Aug 2011
- Schulze-Makuch D & Irwin LN 2008, Life in the Universe: Expectations and constraints, 2nd ed., Springer-Verlag, Berlin, ISBN 9783540768166
- Seaborg GT 1969, "Prospects for further considerable extension of the periodic table", Journal of Chemical Education, vol. 46, no. 10, pp. 626–34, doi:10.1021/ed046p626
- Shanabrook BV, Lannin JS & Hisatsune IC 1981, "Inelastic light scattering in a onefold-coordinated amorphous semiconductor", Physical Review Letters, vol. 46, no. 2, 12 January, pp. 130–133
- Sherwin E & Weston GJ 1966, Chemistry of the non-metallic elements, Pergamon Press, Oxford
- Shipman JT, Wilson JD & Todd AW 2009, An introduction to physical science, 12th ed., Houghton Mifflin Company, Boston, ISBN 9780618935963
- Siebring BR & Schaff ME 1980, General chemistry, Wadsworth Publishing, Belmont, California
- Siekierski S & Burgess J 2002, Concise chemistry of the elements, Horwood Press, Chichester, ISBN 1-898563-71-3
- Silvera I & Walraven JTM 1981, "Monatomic hydrogen – a new stable gas", New Scientist, 22 January
- Smith MB 2011, Organic Chemistry: An Acid—Base Approach, CRC Press, Boca Raton, Florida, ISBN 978-1-4200-7921-0
- Stein L 1969, "Oxidized radon in halogen fluoride solutions", Journal of the American Chemical Society, vol. 91, no. 19, pp. 5396–7, doi:10.1021/ja01047a042
- Stein L 1983, "The chemistry of radon", Radiochimica Acta, vol. 32, pp. 163–71
- Steudel R 1977, Chemistry of the non-metals: With an introduction to atomic structure and chemical bonding, Walter de Gruyter, Berlin, ISBN 3110048825
- Steudel R 2003, "Liquid sulfur", in R Steudel (ed.), Elemental sulfur and sulfur-rich compounds I, Springer-Verlag, Berlin, pp. 81–116, ISBN 9783540401919
- Steudel R & Eckert B 2003, "Solid sulfur allotropes", in R Steudel (ed.), Elemental sulfur and sulfur-rich compounds I, Springer-Verlag, Berlin, pp. 1–80, ISBN 9783540401919
- Steudel R & Strauss E 1984, "Homcyclic selenium molecules and related cations", in HJ Emeleus (ed.), Advances in inorganic chemistry and radiochemistry, vol. 28, Academic Press, Orlando, Florida, pp. 135–167, ISBN 9780080578774
- Steurer W 2007, "Crystal structures of the elements" in JW Marin (ed.), Concise encyclopedia of the structure of materials, Elsevier, Oxford, pp. 127–45, ISBN 0080451276
- Stwertka A 2012, A guide to the elements, 3rd ed., Oxford University Press, Oxford, ISBN 9780199832521
- Sukys P 1999, Lifting the scientific veil: Science appreciation for the nonscientist, Rowman & Littlefield, Oxford, ISBN 0847696006
- Szpunar J, Bouyssiere B & Lobinski R 2004, "Advances in analytical methods for speciation of trace elements in the environment", in AV Hirner & H Emons (eds), Organic metal and metalloid species in the environment: Analysis, distribution processes and toxicological evaluation, Springer-Verlag, Berlin, pp. 17–40, ISBN 3540208291
- Taylor MD 1960, First principles of chemistry, Van Nostrand, Princeton, New Jersey
- Townes CH & Dailey BP 1952, "Nuclear quadrupole effects and electronic structure of molecules in the solid state", Journal of Chemical Physics, vol. 20, pp. 35–40, doi:10.1063/1.1700192
- Van Setten MJ, Uijttewaal MA, de Wijs GA & Groot RA 2007, "Thermodynamic stability of boron: The role of defects and zero point motion", Journal of the American Chemical Society, vol. 129, no. 9, pp. 2458–65, doi:10.1021/ja0631246
- Wells AF 1984, Structural inorganic chemistry, 5th ed., Clarendon Press, Oxfordshire, ISBN 0198553706
- Wiberg N 2001, Inorganic chemistry, Academic Press, San Diego, ISBN 0123526515
- Winkler MT 2009, "Non-equilibrium chalcogen concentrations in silicon: Physical structure, electronic transport, and photovoltaic potential", PhD thesis, Harvard University, Cambridge, Massachusetts
- Winkler MT, Recht D, Sher M, Said AJ, Mazur E & Aziz MJ 2011, "Insulator-to-metal transition in sulfur-doped silicon", Physical Review Letters, vol. 106, pp. 178701–4
- Wulfsberg G 1987, Principles of descriptive inorganic chemistry, Brooks/Cole Publishing Company, Monterey, California, ISBN 0-534-07494-4
- Yoder CH, Suydam FH & Snavely FA 1975, Chemistry, 2nd ed, Harcourt Brace Jovanovich, New York, ISBN 978-0-15-506470-6
- Yousuf M 1998, "Diamond anvil cells in high-pressure studies of semiconductors", in T Suski & W Paul (eds), High pressure in semiconductor physics II, Semiconductors and semimetals, vol. 55, Academic Press, San Diego, pp. 382–436, ISBN 9780080864532
- Yu PY & Cardona M 2010, Fundamentals of semiconductors: Physics and materials properties, 4th ed., Springer, Heidelberg, ISBN 9783642007101
- Zumdahl SS & DeCoste DJ 2013, Chemical principles, 7th ed., Brooks/Cole, Belmont, California, ISBN 9781111580650
- Johnson RC 1966, Introductory descriptive chemistry: selected nonmetals, their properties, and behavior, WA Benjamin, New York
- Powell P & Timms PL 1974, The chemistry of the non-metals, Chapman & Hall, London, ISBN 0470695706
- Steudel R 1977, Chemistry of the non-metals: with an introduction to atomic structure and chemical bonding, English edition by FC Nachod & JJ Zuckerman, Berlin, Walter de Gruyter, ISBN 3110048825
Viewing a patch of deep space with a highly advanced X-ray telescope, scientists have detected a star-like object that they say is more than 10 million times brighter than our sun. The problem? Experts say the object’s brightness means it shouldn’t even exist according to the Eddington Limit, a rule that states an object can be only so bright before it begins pushing away the matter falling into it. Theories are already circulating about the possible cause of the object’s extreme brightness, but if it proves truly unexplainable, the discovery could permanently change scientific theory in the field of astronomy.
Insider: A mysterious object has been spotted that’s 10 million times as bright as the sun. Scientists can’t work out why it hasn’t exploded.
By Marianne Guenot; May 10, 2023
Scientists have been left baffled by a mysterious celestial object so bright that physics dictates it should have exploded.
NASA has been tracking so-called ultraluminous X-ray sources, objects that can be 10 million times as bright as the sun, to understand how they work.
These objects are impossible in theory because they break the Eddington limit, a rule of astrophysics that dictates an object can be only so bright before it breaks apart.
A new study categorically confirms that M82 X-2, a ULX 12 million light-years away, is as bright as previous observations suggested.
But the question remains: How can it exist?
The principle behind Arthur Eddington’s rule is simple. Brightness on this scale comes only from material — like stardust or remnants of disintegrating planets — that falls inward toward a massive object, such as a black hole or a dead star.
As it’s pulled by the object’s intense gravity, the material heats up and radiates light. The more matter that falls toward the object, the brighter it is. But there’s a catch.
At a certain point, so much matter is being pulled in that the radiation it’s emitting should be able to overwhelm the power of the gravity from the massive object. That means at some point, the radiation from the matter should push it away, and it should stop falling in.
But if it’s not falling in, the matter shouldn’t be radiating, which means the object shouldn’t be that bright. Hence the Eddington limit.
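In quantitative terms (a standard result sketched here for reference; it is not quoted in the article), balancing radiation pressure on electrons against gravity for ionized hydrogen gives the Eddington luminosity

\[
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.26 \times 10^{31} \left( \frac{M}{M_\odot} \right) \ \mathrm{W},
\]

where \(M\) is the mass of the accreting object, \(m_p\) the proton mass, and \(\sigma_T\) the Thomson scattering cross-section. A source brighter than this for its mass should, in theory, blow away its own fuel.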
Because of the Eddington limit, scientists have questioned whether the ULX’s brightness was indeed caused by enormous amounts of material falling into it.
One theory, for instance, is that strong cosmic winds concentrated all the material into a cone. In this theory, the cone would be pointed toward Earth, which would create a beam of light that would look much brighter to us than if the material was scattered evenly around the ULX.
But a new study looking at M82 X-2, a ULX caused by a pulsating neutron star in the Messier 82 galaxy, put the cone theory to rest.
(A neutron star is a superdense object left behind when a star has run out of energy and dies.)
The analysis, published in The Astrophysical Journal in April, found that M82 X-2 pulled in about 9 billion trillion tons of material per year from a neighboring star, or about 1.5 times the mass of Earth, a NASA statement said.
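The two quoted figures are mutually consistent: taking one Earth mass as roughly \(5.97 \times 10^{21}\) metric tons,

\[
\frac{9 \times 10^{21}\ \mathrm{t/yr}}{5.97 \times 10^{21}\ \mathrm{t}} \approx 1.5 \ \text{Earth masses per year}.
\]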
That means the brightness of this ULX is caused by limit-breaking amounts of material.
Given this information, another explanation has become the leading theory to explain ULXs. And it’s even more bizarre.
In this theory, superstrong magnetic fields shoot out of the neutron star. These would be so strong that they would squish the atoms of the matter falling into the star, turning the shape of these atoms from a sphere into an elongated string, NASA’s statement said.
In this case, the radiation coming from these squished atoms would have a harder time pushing the matter away, explaining why so much matter could fall into the star without breaking apart.
The problem is that we’ll never be able to test this theory on Earth. These theoretical magnetic fields would have to be so strong that no magnet on Earth could reproduce them.
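For a sense of scale (figures from standard references, not from the article): the critical field strength at which such quantum effects on atoms become dominant is

\[
B_{\mathrm{crit}} = \frac{m_e^2 c^2}{e \hbar} \approx 4.4 \times 10^{9}\ \mathrm{T},
\]

while the strongest man-made fields, even in destructive pulsed experiments, are on the order of \(10^{3}\) T.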
“This is the beauty of astronomy. Observing the sky, we expand our ability to investigate how the universe works. On the other hand, we cannot really set up experiments to get quick answers,” Matteo Bachetti, an author on the study and astrophysicist with Italy’s National Institute for Astrophysics’ Cagliari Observatory, said in NASA’s statement.
“We have to wait for the universe to show us its secrets,” he said.
Jury instructions are the set of legal rules governing how jurors should behave when deciding a case, often addressing with whom jurors may discuss the case and how they are to reach a verdict. They are a type of jury control procedure, intended to mitigate actions by jurors that might prevent a fair trial; the judge provides these instructions to ensure that all parties' interests are represented and that nothing prejudicial is said.
Under the American judicial system, juries are often the trier of fact when they serve in a trial. In other words, it is their job to sort through disputed accounts presented in evidence. The judge decides questions of law, meaning he or she decides how the law applies to a given set of facts. Jury instructions are given to the jury by the judge, who usually reads them aloud to the jury. The judge issues a judge's charge to inform the jury how to act in deciding a case. The jury instructions provide something of a flow chart on what verdict jurors should deliver based on what they determine to be true. Put another way, "If you believe A (set of facts), you must find X (verdict). If you believe B (set of facts), you must find Y (verdict)." Jury instructions can also serve an important role in guiding the jury how to consider certain evidence.
All 50 states have a model set of instructions, usually called "pattern jury instructions", which provide the framework for the charge to the jury; sometimes, only names and circumstances have to be filled in for a particular case. Often they are much more complex, although certain elements frequently recur. For instance, if a criminal defendant chooses not to testify, the jury will often be instructed not to draw any negative conclusions from that decision. Many jurisdictions are now instructing jurors not to communicate about the case through social networking services like Facebook and Twitter.
Comprehending jury instructions
A significant issue with standard jury instructions is the language comprehension difficulties for the average juror. The purpose of jury instructions is to inform jurors of relevant laws and their application in the process of coming to a verdict. However, studies have shown that juries consistently run into problems understanding the instructions given to them. Poor comprehension is noted across juror demographics, as well as across legal contexts. Various linguistic features of legalese or legal English, such as complex sentence structures and technical jargon, have been pinpointed as major factors contributing to low comprehension.
Simplifying jury instructions through the use of plain English has been shown to markedly increase juror comprehension. In one study of California’s jury instructions in cases involving the death penalty, approximately 200 university students participated in a research experiment. Half of the participants heard the original standard instructions written in legal English, and half heard revised instructions in plain English. Instructions were read twice to each group, and the participants then answered questions for researchers to gauge their understanding. The results showed a notable disparity in comprehension between the two groups: the group that received the revised instructions demonstrated a stronger understanding of key concepts and a better ability to differentiate between legal terms.
In another California study, jury instructions were again simplified to make them easier for jurors to understand. The courts moved cautiously because, although verdicts are rarely overturned due to jury instructions in civil court, this is not the case in criminal court. For example, the old instructions on burden of proof in civil cases read:
Preponderance of the evidence means evidence that has more convincing force than that opposed to it. If the evidence is so evenly balanced that you are unable to say that the evidence on either side of an issue preponderates, your finding on that issue must be against the party who had the burden of proving it.
The new instructions read:
When I tell you that a party must prove something, I mean that the party must persuade you, by the evidence presented in court, that what he or she is trying to prove is more likely to be true than not true. This is sometimes referred to as 'the burden of proof.'
Resistance to the movement towards revising standard jury instructions exists as well, due to the concern that moving away from legal English will make jury instructions imprecise. There is also the belief that jurors prefer judges to speak in legal language, so that judges come across as educated and respectable.
Jury nullification instructions
There is also debate over whether juries that are to judge a criminal case should be informed of the possibility of jury nullification during jury instructions. One argument states that if juries have the power of nullification, they should be informed of it, and that neglecting to do so is an act of intervention. Another argument states that defendants should be judged according to the law, and that jury nullification interferes with this process. Instructions permitting jury nullification have also been criticized as promoting chaos, since they trade a structured set of rules for broader juror discretion, which critics argue could invite anarchy or tyranny.
Studies have indicated that being informed of jury nullification is likely to affect juries' judgement when they decide on verdicts. One study of 144 juries showed that, when briefed on jury nullification, they were less harsh on sympathetic defendants and harsher on unsympathetic defendants. Another study of 45 juries showed that, when explicit jury nullification details were included in jury instructions, juries were more likely to reach a guilty verdict in drunk-driving cases and less likely in euthanasia cases, with no reported difference in murder cases.
The judge presents directions to the jury in court, after overlapping instructions have been provided by a DVD and a jury manager.
- "Crown Court Compendium Part I" (PDF). May 2016. pp. 3-1–3-3.
- "How Courts Work".
- "Overview - Federal Jury Instructions & Federal Evidence". Archived from the original on 2011-10-04. Retrieved 2011-06-26.
- Ensuring An Impartial Jury In The Age Of Social Media, Duke Law and Technology Review (2012), http://dukedltr.files.wordpress.com/2012/03/stevefinal_31.pdf
- Bornstein, Brian H.; Hamm, Joseph A. (2012). "Jury Instructions on Witness Identification". Court Review. 48: 48–53 – via EBSCO.
- Smith, Amy E.; Haney, Craig (2011). "Getting to the point: Attempting to improve juror comprehension of capital penalty phase instructions". Law and Human Behavior. 35 (5): 339–350. doi:10.1007/s10979-010-9246-0. ISSN 1573-661X. PMID 20936335.
- Spelling It Out in Plain English
- Tiersma, Peter M. (2010), "Instructions to jurors", The Routledge Handbook of Forensic Linguistics, Routledge, pp. 251–265, doi:10.4324/9780203855607.ch17, ISBN 9780203855607
- Hreno, Travis (2008). "The Rule of Law and Jury Nullification". Commonwealth Law Bulletin. 34 (2): 297–312. doi:10.1080/03050710802038353. ISSN 0305-0718.
- Dorfman, David N. (1995-01-01). Fictions, Fault, and Forgiveness: Jury Nullification in a New Context. DigitalCommons@Pace. OCLC 857357756.
- Horowitz, Irwin A. (1988). "Jury nullification: The impact of judicial instructions, arguments, and challenges on jury decision making". Law and Human Behavior. 12 (4): 439–453. doi:10.1007/bf01044627. ISSN 1573-661X.
- Horowitz, Irwin A. (1985). "The effect of jury nullification instruction on verdicts and jury functioning in criminal trials". Law and Human Behavior. 9 (1): 25–36. doi:10.1007/bf01044287. ISSN 1573-661X.
- Federal Jury Instruction Resource Page Collecting model or pattern federal civil and criminal jury instructions for trial courts by jurisdiction (where available) and subject matter.
- Jury Instructions in Insurance-Coverage and Insurance Bad-Faith Cases
- Sample Eighth Circuit Civil Jury Instructions
- Criminal Pattern Jury Instructions, 10th Circuit U.S. Court of Appeals.
Mensural notation is the musical notation system which was used from the later part of the 13th century until about 1600. "Mensural" refers to the ability of this system to notate complex rhythms with great exactness and flexibility. Mensural notation was the first system in the development of European music that systematically used individual note shapes to denote temporal durations. In this, it differed from its predecessor, Modal notation, which was the first system to introduce a limited way of notating rhythms. Mensural notation is most closely associated with the successive periods of the late medieval Ars nova and the Franco-Flemish or Dutch school of Renaissance music. Its name was coined by 19th-century scholars with reference to the usage of medieval theory, going back to the treatise Ars cantus mensurabilis ("The art of measured chant") by Franco of Cologne (c. 1280).
A shorter summary of the principles of Mensural notation can be found in the article on Renaissance music.
The basic note values of mensural notation are essentially identical to the modern ones. Mensural notation uses the Breve, nominally the ancestor of the modern double whole note; the Semibreve (modern whole note), the Minim (half note), Semiminim (quarter note / crotchet), Fusa (eighth note / quaver), Semifusa (sixteenth note / semiquaver), and very rarely smaller ones. There were also two larger values, the Longa and the Maxima (or Duplex longa).
Differences between Mensural and modern notation are partly superficial, but partly quite fundamental:
- Notes were written diamond- rather than oval-shaped, and they had their stems perched directly on top (or bottom, very occasionally) rather than to one side. Before the mid-15th century, all notes were written in solid, filled-in form (Black Notation), but after that the larger note values were written hollow, like today (White Notation).
- Each note had a much shorter temporal value than its nominal modern counterpart. This is because in the course of time, composers invented new note shapes for ever smaller temporal divisions of rhythm, and the older, longer notes were slowed down in proportion. Thus, the basic metrical relationship of a long to a short beat shifted from longa–breve in the 13th century, to breve–semibreve in the 14th and 15th, to semibreve–minim by the end of the 16th, and finally to minim–semiminim (i.e. half and quarter notes) in modern notation. What was originally the shortest of all note values used, the semibreve, has today evolved into the longest note used routinely, the whole note.
- While the relation of each note value to the next smaller one in modern notation is invariably 2:1, the mensural system was more flexible. The principal members of the system (maxima, longa, breve, and semibreve) could all contain either two or three of the next smaller units. Whether a note was to be read as triplex (perfecta) or duplex (imperfecta) was a matter partly of context (see below) and partly of mensuration signs, a system comparable to modern time signatures (see below).
- Sequences consisting of the larger members of the system (maxima, longa, breve, and semibreve) could optionally be written together as ligatures.
- Bar lines and ties were not used.
Context-dependent note values
In order to understand the principles by which notes had their triplex (perfect) or duplex (imperfect) value determined by context, it is necessary to look at the evolution of the notational system in the context of the rhythmic nature of the medieval music it was first used for. Most music in the 13th and 14th centuries followed the basic pattern of a fairly swift 6/4 meter (in modern notation). Melodies therefore consisted mainly of (in modern notation) dotted half notes, or alternating sequences of half notes and quarter notes, or groups of three quarter notes. Beginning with Franco of Cologne in the late 13th century, all these were notated using the longa and breve notes. Simplifying somewhat, a longa was automatically understood to fill a whole triplex metric group (be perfect) whenever it was in the neighborhood of other notes that did the same, i.e. whenever it was followed by another longa, or by a full group of three breves. When, however, the longa was preceded or followed by a single short note, the two were understood to form one of the characteristic sequences of a half note plus a quarter note. Thus, the longa had to be reduced to a value of two (be made imperfect). When, finally, there were only two breves between two longae, the two breves had to fill up a triplex metrical group together. This was done by lengthening (alterating) the second breve to a value of two, resulting in a syncopated short-long rhythm as opposed to the otherwise dominating long-short one.
This basic principle, of inherently perfect long notes being imperfected by adjacent short notes, or alternatively of short notes being alterated into longer ones, was elaborated into an intricate set of precedence rules by notation theorists. In order to avoid remaining ambiguities, a separator dot (tractulus) was introduced to make clear which notes were supposed to form a triplex group together. It could be placed between a long and a breve to enforce perfect (triplex) value on the former when the latter would otherwise have imperfected it (signum perfectionis, historically the origin of the modern lengthening dot). It could also be used to disambiguate the readings of sequences of more than three breves in a row (divisio modi). Some of the resulting possibilities (adapted from "Notation" in the New Grove Dictionary of Music and Musicians), with durations counted in breve units: longa–longa reads 3+3; longa–brevis–longa reads 2+1+3; longa–brevis–brevis–longa reads 3+1+2+3; longa–brevis–brevis–brevis–longa reads 3+1+1+1+3.
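As a rough illustration of these defaults, here is a toy sketch in Python (our own construction, not a historical algorithm: it ignores the dot of division and the finer precedence rules, and the function name is invented for this example):

```python
# Toy sketch of the simplified Franconian defaults described above.
# Durations are in breve units: perfect longa = 3, imperfect longa = 2,
# brevis = 1, altered brevis = 2. Dots of division and the finer
# precedence rules are deliberately ignored.

def read_franconian(notes):
    """Assign durations to a sequence of 'L' (longa) / 'B' (brevis)."""
    durations = []
    i = 0
    while i < len(notes):
        if notes[i] != 'L':              # a brevis not governed by a longa
            durations.append(1)
            i += 1
            continue
        j = i + 1                        # count breves up to the next longa
        while j < len(notes) and notes[j] == 'B':
            j += 1
        n_breves = j - i - 1
        if n_breves == 1:                # a single brevis imperfects the longa
            durations += [2, 1]
        elif n_breves == 2:              # second brevis is altered: short-long
            durations += [3, 1, 2]
        else:                            # longa stays perfect; e.g. three
            durations += [3] + [1] * n_breves  # breves fill a perfection
        i = j
    return durations

print(read_franconian(list("LL")))    # [3, 3]
print(read_franconian(list("LBL")))   # [2, 1, 3]
print(read_franconian(list("LBBL")))  # [3, 1, 2, 3]
```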
At the earliest stage, the rules of perfection and imperfection were applied only to the relation between longa and brevis. Beginning from the mid-14th century (with Philippe de Vitry's theory of the Ars nova), the same principles were also applied to the next smaller note values, the semibreves and minims. All subdivisions further down remained inherently and invariably imperfect.
Just like the notes, the rest symbols already had the same shapes that were later to develop into the modern symbols (with the smaller values being successively introduced in the course of the period of Mensural notation). Unlike the notes themselves, rests had a fixed, invariable duration and could not be perfected, imperfected or alterated; however, they could in turn induce imperfection on a neighbouring larger note. For longa rests, there were two separate forms for the perfect (triplex) and for the imperfect (duplex) longa. As a consequence of their invariant duration, a sequence of rests could be used as an indication of the prevailing meter of a composition (in the absence of modern bar notation). This is often found at the beginning of the tenor voice of a composition.
Ligatures are groups of notes written together. They were a holdover from the modal rhythmic system which preceded mensural notation, and they retained some of the original rhythmic meaning they had had there.
The origins of ligature semantics can be found in a rhythmical re-interpretation of the ligature neumes used since much earlier in the notation of Gregorian plainchant. In modal notation, ligatures had been used to represent stereotyped sequences of short and long notes, grouping notes together in much the same way as metric feet are used to group short and long syllables in Latin poetry. The most basic rhythmical unit was felt to be a group of one short and one long note (brevis-longa), like an iamb in poetry, filling an upbeat pattern in the typical 6/4 meter mentioned above. All other two-note groups were classified in terms of deviation from this basic pattern. In medieval terminology, a two-note ligature possessed "propriety" (proprietas) if and only if its first note was short; and it possessed "perfection" (perfectio) if and only if its second note was long. (Note that this sense of perfection is unrelated to the issue of perfect vs. imperfect in the sense of triplex vs. duplex duration of the long note as discussed above.)
- Accordingly, a note pair cum proprietate et cum perfectione could be written with the most basic of ligature shapes, those inherited from plainchant, namely the descending clivis and the ascending podatus.
- If, by way of exception, the first note was to be long (sine proprietate), this was signaled by a reversal of the use of stems: leave out the stem of the descending clivis; add a stem to the ascending podatus.
- If, conversely, the second note was to be short (sine perfectione), this was signaled by a change in the noteheads themselves: replace the descending sequence of square heads with a single diagonal beam; fold out the second note of the ascending form to the right.
- If both exceptions co-occurred (sine proprietate et sine perfectione), both graphical alterations were combined accordingly.
- In addition to sequences of a longa and a breve, ligatures could also contain a pair of semibreves (but never a single one). These were called cum opposita proprietate, and consistently marked by an upward-pointing stem at the left of the note pair.
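Taken together, the two-note semantics above reduce to a small lookup. Here is a hedged sketch in Python (the function and names are ours; per the definitions, propriety fixes the first note and perfection the second, while cum opposita proprietate always yields a semibreve pair):

```python
# Sketch: the reading of a two-note ligature from its propriety and
# perfection, per the definitions above (B = brevis, L = longa,
# S = semibrevis).

def two_note_ligature(proprietas, perfectio, opposita=False):
    """proprietas: first note is short; perfectio: second note is long;
    opposita: upward stem at the left -> always a pair of semibreves."""
    if opposita:
        return ('S', 'S')
    first = 'B' if proprietas else 'L'
    second = 'L' if perfectio else 'B'
    return (first, second)

print(two_note_ligature(True, True))                 # ('B', 'L'): the basic shapes
print(two_note_ligature(False, False))               # ('L', 'B')
print(two_note_ligature(None, None, opposita=True))  # ('S', 'S')
```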
In the course of time, some alternate versions of the ascending ligatures were developed (last column). Thus, the basic ascending podatus shape was replaced by one where the second note was both folded out to the right, and marked with an extra stem (two alterations cancelling each other out, as it were). The ascending L-L (sine proprietate) was modified accordingly. Some confusion consequently arose about how to write an ascending L-B or B-B (sine perfectione). This, in the end, was the only area of ligature notation that was controversial among contemporary theoreticians, with some authors prescribing one set of values to two ligature shapes, and other authors just the reverse.
For ligatures of more than two notes, the following rules hold:
- Any notehead with an upward stem to its left is the first of a pair of semibreves (cum opposita proprietate).
- Any medial notehead with a downward stem to its right is a longa.
- A prolonged, double-wide notehead with a downward stem to its right is a maxima.
- Any other notehead not covered by any of the rules above is a brevis.
- The perfect or imperfect duration of each note within a ligature is determined according to the same principles as for the standalone notes.
By the late 15th century, the most common ligatures by far were those cum opposita proprietate (S-S), but all were still in routine use.
Modes and mensuration signs
Unlike the original system of Franco of Cologne, which was geared towards the invariant metric pattern of 6/4 (with inherently triplex longa), later compositions from the 14th-century Ars nova onwards could display a greater variety of basic metric patterns. They can be defined as different combinations of duplex (imperfect) and triplex (perfect) subdivisions on successive hierarchical levels: maximodus (the division of the maxima into longae), modus (longa into breves), tempus (breve into semibreves), and prolatio (semibreve into minims).
The perfect modus and maximodus were rare in practice. Of most practical importance were the subdivisions from the brevis downwards (by that time, the semibreves and no longer the breves had taken over the function of the basic counting unit). The four possible combinations of tempus and prolatio could be signaled by a set of mensuration signs at the beginning of a composition: a circle for tempus perfectum, a semicircle for tempus imperfectum, each combined with a dot for prolatio maior, or no dot for prolatio minor. These correspond to modern 9/8, 3/4, 6/8, and 2/4 meters respectively.
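As a compact restatement of the correspondence just given (a sketch; the variable names are ours):

```python
# The four combinations of tempus (breve = 3 or 2 semibreves) and
# prolatio (semibreve = 3 or 2 minims), with their signs and the modern
# meters the text equates them with.
MENSURATIONS = {
    ("tempus perfectum",   "prolatio maior"): ("circle with dot",     "9/8"),
    ("tempus perfectum",   "prolatio minor"): ("circle",              "3/4"),
    ("tempus imperfectum", "prolatio maior"): ("semicircle with dot", "6/8"),
    ("tempus imperfectum", "prolatio minor"): ("semicircle",          "2/4"),
}

sign, meter = MENSURATIONS[("tempus perfectum", "prolatio maior")]
print(sign, meter)  # circle with dot 9/8  (3 x 3 = 9 minim-level beats)
```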
Proportions and colorations
An individual composition was not limited to a single set of tempus and prolation. Meters could be shifted in the course of a piece, either by inserting a new mensuration sign, or by using numeric proportions. A "3" indicates that all notes will be reduced to one-third of their value; a "2" indicates double tempo; a fraction "3/2" indicates three in the time of two, etc. The proportion "2" could also be expressed by a vertical stroke through the mensuration sign (the root of the modern "alla breve" signature).
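In general terms (our formulation of the rule just described, not period terminology), a proportion \(m/n\) (with an integer \(m\) standing for \(m/1\)) scales each written duration \(d\) to

\[
d' = \frac{n}{m}\, d,
\]

so "3" gives \(d' = d/3\), "2" gives \(d' = d/2\) (double tempo), and "3/2" gives \(d' = \tfrac{2}{3}d\), i.e. three notes in the time of two.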
The use of numeric proportions can interact with the use of different basic mensurations in fairly complex ways. This has led to a certain amount of uncertainty and controversy over the correct interpretation of these notation devices, both in contemporary theory and in modern scholarship.
Another way of altering the metrical value of notes was coloration. This refers literally to the device of writing a note in a different color. In (earlier) Black Notation, colored notes were written in red. In (later) White Notation, coloring involved a switch between hollow and filled-in shapes. Colored notes are understood to have 2/3 of their normal duration, and are always imperfect with respect to their next smaller sub-divisions. Coloration was variously used to notate shorter passages of triplet or hemiolic rhythms. Coloration of single notes could also be used to override rules of perfection/imperfection that would otherwise have been called for.
Whereas the rules of notating rhythm in Mensural notation were in many ways different from the modern system, the notation of pitch already followed much the same principles. Notes were written on staves of five (sometimes six) lines, prefixed with clefs, and could be altered by accidentals.
Mensural notation generally uses C-clefs and F-clefs, on various lines; G-clefs, while used infrequently throughout the period, did not come into completely routine use until the later 16th century. Clefs originally bore shapes more or less closely resembling the letter they represented, but in the course of time they developed more ornamental shapes, as in various 15th-century examples.
Accidentals and musica ficta
Accidentals in mensural notation look essentially identical to those of today, and include both sharps and flats, of which flats are somewhat more common. Key signatures appear from the 14th century on, with one flat (always B-flat) the most common, and two flats (with the addition of E-flat) becoming increasingly common through subsequent decades; these are, in fact, the only flat key signatures that appeared prior to the mid-16th century. Much rarer are sharp key signatures, which never move beyond F-sharp and C-sharp. Occasionally, flats appeared without the presence of a clef; in these cases, the flats essentially serve as a clef since, as we have seen, they are always B-flat and E-flat, respectively.
The most significant difference between Mensural and modern notation in the area of pitch is the use of musica ficta: while some accidentals were written out, most routine chromatic alterations were not notated and left to be supplied by the performer.
The most important early stages in the historical development of Mensural notation are found in the works of Franco of Cologne (c. 1260), Petrus de Cruce (c. 1300), and Philippe de Vitry (1322). Franco, in his Ars cantus mensurabilis, was the first to describe the relations between maxima, longa and breve in terms that were independent of the fixed patterns of earlier Modal notation. He also refined the use of semibreves: while in earlier music, one brevis could occasionally be replaced by two semibreves, Franco described the subdivision of the brevis as triplex (perfect), dividing it either into three equal or two unequal semibreves (resulting in predominantly triplet rhythmic micro-patterns).
Petrus de Cruce introduced subdivisions of the brevis into even more short notes. However, he did not yet notate these as separate smaller hierarchy levels (minima, semiminima etc.), but simply as variable numbers of semibreves. The exact rhythmical interpretation of these groups is partly uncertain. The technique of notating complex groups of short notes by sequences of semibreves was later used more systematically in the notation of Italian Trecento music.
The decisive refinements that made possible the notation of even extremely complex rhythmic patterns on multiple hierarchical metrical levels were introduced in France during the time of the Ars nova, with Philippe de Vitry as the most important theoretician. The Ars nova introduced the shorter note values below the semibreve; it systematized the relations of perfection/imperfection across all levels down to the minima; and it introduced the devices of proportions and coloration.
During the time of the Franco-Flemish or Dutch school in Renaissance music, use of the French notation system gradually spread throughout Europe. This period brought the replacement of Black with White Notation (due at least in part to the more widespread use of paper, rather than vellum, for music). It also brought a further slowing down of the duration of the larger note values while introducing even more new small ones (fusa, semifusa etc.). Towards the end of this period, the original rules of perfection/imperfection (as they dealt primarily with the larger members of the system) became gradually obsolescent together with the use of these note values themselves, as did the use of ligatures. During the 17th century, the system of mensuration signs and proportions gradually developed into the modern time signatures, and new notation devices for time measurements, such as bar lines and ties, were introduced, thus ultimately leading towards the modern notation system.
The following example shows the use of Mensural notation in the mid-15th century. It is a three-part English carol, Hail Mary full of grace, as contained in the manuscript Ms. Selden B.26, f.23, c.1450. The example illustrates the use of perfect and imperfect breves and alterated semibreves within a tempus perfectum cum prolatione minore (6/8 time), as well as the use of some ligatures cum opposita proprietate, and the occasional use of coloration for the notation of hemiolic (3/4 instead of 6/8) patterns.
Example in Mensural notation (reset after original)
Example transcribed into modern notation
- Raybro's Reading White Mensural Notation - an excellent compendium of information on mensural notation, with images.
- In the early music newsgroups, there have been some very interesting discussions in the past ten years of musica ficta in performance, featuring such illustrious scholars as Margaret Bent.
- Peter Urquhart's ficta page
References and further reading
- Willi Apel, The Notation of Polyphonic Music, 900-1600, 5th edition, Cambridge, MA: The Medieval Academy of America, 1961.
- Roger Bowers, 'Proportional notation,' The New Grove Dictionary of Music and Musicians Online, accessed 4 June 2005. (subscription access)
- David Hiley, Thomas B. Payne, Margaret Bent, Geoffrey Chew and Richard Rastall, 'Notation: III and IV,' The New Grove Dictionary of Music and Musicians Online, accessed 4 June 2005. (subscription access)
Average High and Low Temperatures in Augusta, GA – A Line Graph Activity
In this math worksheet, learners graph the average high and low temperatures for Augusta, Georgia. They create a line graph by connecting the dots.
6th - 8th Math
Relationships Between Quantities and Reasoning with Equations and Their Graphs
Graphing all kinds of situations in one and two variables is the focus of this detailed unit of daily lessons, teaching notes, and assessments. Learners start with piece-wise functions and work their way through setting up and solving...
6th - 10th Math CCSS: Designed
Representations of a Line
Line up to learn about lines! Scholars discover how to express patterns as linear functions. The workbook then covers how to graph and write linear equations in slope-intercept form, as well as how to write equations of parallel and...
7th - 10th Math CCSS: Designed
When Will We Ever Use This? Predicting Using Graphs
Here is a set of graphing lessons with a real-world business focus. Math skills include creating a scatter plot or line graph, fitting a line to a scatter plot, and making predictions. These lessons are aimed at an algebra 1 level...
7th - 12th Math
Predicting the Future with Best-fit Lines
What career will actually use this math? The activity has pupils determine a best-fit line for given data. Using their best-fit line, they then make predictions before discussing as a class what careers might use scatter plots and...
8th - 10th Math CCSS: Designed |
This week in Pre-Calc, I had some trouble understanding how to put equations into piecewise form and how to solve absolute value equations. So, today I will explain how to do them.
When you are putting an equation into piecewise form, you first take the equation out of the absolute value signs. Once you have done this, you set the equation equal to zero. After this, you move all the numbers without variables to the other side of the equation. Next, you isolate "x" by dividing both sides by the number next to it (the coefficient), and then you will have your answer for what "x" equals. Once you have solved for "x", you put the point or points on a number line and choose two (for one point) or three (for two points) test values on the number line. Based on where the expression is negative, you then write the formula in piecewise form. To do this, you write the original formula as it is, and then write that "x" is < or > depending on where the expression is positive or negative for all your points (one or two). Next, you write out the original equation again, this time for the negative region, and since it is negative, you have to multiply a negative sign into the front of the original equation to make it positive. Once you have written that, you then write that "x" is < or > the point or points where the expression is negative on your number line.
To solve an absolute value equation, you separate it into two different equations. One will be the original equation (without the absolute value symbols) and the other will be the equation with a negative sign multiplied in (to make the negative part of the number line positive). To solve the first one (the original), you move any numbers without variables to the other side. Once you have done this, you divide both sides by the number in front of "x" (the coefficient), and this gives you your first answer. For the second one (the original with the negative multiplied in), you start by multiplying the negative sign through. Once you have done this, you move all the numbers without variables to the other side. Next, you divide both sides by the number in front of "x" (the coefficient), and this gives you the other answer.
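Here is a minimal Python sketch of this two-case method; the function name and the sample equation |2x - 6| = 4 are illustrative choices of mine, not from the original post:

```python
# Sketch of the two-case method described above for solving |a*x + b| = c.
# The function name and sample equation are illustrative, not from the post.

def solve_abs_equation(a, b, c):
    """Solve |a*x + b| = c by splitting into the two cases
    a*x + b = c  and  -(a*x + b) = c."""
    if a == 0:
        raise ValueError("coefficient 'a' must be nonzero")
    if c < 0:
        return []            # an absolute value can never equal a negative
    # Case 1: the expression inside the bars is already positive.
    x1 = (c - b) / a
    # Case 2: the expression is negative, so multiply through by -1.
    x2 = (-c - b) / a
    return sorted({x1, x2})  # the set drops the duplicate when c == 0

# Example: |2x - 6| = 4  ->  x = 1 or x = 5
print(solve_abs_equation(2, -6, 4))   # [1.0, 5.0]
```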
All seniors take an introduction to calculus thinking at ICE. This one-semester course will provide a culmination for students’ mathematical studies at ICE, and a bridge to possible future studies in college.
The course is structured around weekly math labs that investigate specific questions. For example, there is a lab about finding a relationship between the speed of a projectile and its acceleration. We will build on basic concepts in high school algebra and geometry (like slope and area) and look at how they apply to complex, non-linear systems. For example: How can you predict the speed of a car at any given moment in the future if the driver keeps pressing down harder and harder on the gas pedal? And how can you predict how far the car will go in a given amount of time? Ultimately, we will draw on the central concepts of calculus to answer these types of questions. The ideas of calculus will both tie together the big ideas of high school math, and push students to think about them in a more sophisticated way.
To meet the graduation requirement for math at I.C.E., students must present a roundtable, where each student provides the context for their investigation and presents their solution, demonstrating his or her conceptual understanding, procedural fluency, logical reasoning skills, and ability to solve problems using multiple solution-strategies. Students will incorporate technology into their presentation and use mathematical models and terminology appropriately. There is a question and answer period. Roundtable panelists include parents, 11th grade students, mathematicians/scientists in our community, as well as I.C.E. teachers.
Students investigate the idea of limits and infinite sums through a cake-eating problem.
Students investigate and analyze speed by measuring time and distance of an actual moving vehicle and graph their results.
Students investigate the instantaneous speed of a quadratic function, such as that of a penny dropped from the George Washington Bridge.
Students investigate the instantaneous speeds of various functions by calculating the slope of a tangent line using Geometer's Sketchpad. Students also find patterns between the distance function and its corresponding velocity function (the derivative) and derive the power rule based on their conclusions.
Students prove the validity of the power rule for the derivative by constructing and solving an algebraic proof.
Students investigate the second derivative by dropping a basketball and measuring its velocity and acceleration, and analyzing the slopes of these functions.
Students derive the distance function for a given velocity function by using the anti-derivative or integral, and investigate the relationship between the area under a curve and the integral. |
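The tangent-slope labs above lend themselves to a quick numerical check. Here is a minimal Python sketch (an illustration of mine, not part of the course materials) that estimates an instantaneous speed from shrinking secant lines and compares the result with the power rule:

```python
# Estimating instantaneous speed as the slope of a shrinking secant line,
# then checking the power rule d/dt(t^n) = n * t^(n-1). Illustrative only.

def secant_slope(f, t, h):
    """Average speed of f over the interval [t, t + h]."""
    return (f(t + h) - f(t)) / h

# Distance fallen by a dropped penny (free fall, metres): d(t) = 4.9 * t^2
d = lambda t: 4.9 * t**2

for h in (1.0, 0.1, 0.001, 0.00001):
    print(h, secant_slope(d, 2.0, h))
# The slopes approach 19.6 m/s, matching the power rule:
# d'(t) = 2 * 4.9 * t, so d'(2) = 19.6
```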
If humankind successfully lands people on the surface of Mars, we could discover an important clue about the origins of life on Earth — one of the greatest scientific mysteries in human history.
A theory called panspermia, which dates back to the 5th century BC, posits that certain life forms can hop between planets, and even star systems, to fertilize them with life.
Following this theory, some scientists suspect that the first life on Earth never formed on our planet at all, but instead, hitched a ride inside planetary fragments from Mars that were flung into space after a powerful impact and eventually fell to Earth. We could be the aliens!
While some write the theory off as outrageous, others think it could harbor some potential. If true, it could deeply impact how we identify ourselves as a species.
Why a manned mission to Mars is necessary
None of the landers or satellites we've sent to the Red Planet thus far have uncovered evidence of past or present life of any kind.
It's possible that a robot simply cannot dig deep enough or collect enough of the right kind of sample. In the end, it might take a human to explore what robotic rovers cannot.
Plus, what it takes NASA's best Mars rovers a week to do, a well-equipped human could complete in 15 minutes, according to mechanical engineer and popular science communicator Bill Nye in his latest book "Unstoppable: Harnessing Science to Change the World."
"If we found microbes on Mars that are clearly related to those on Earth, such a discovery would change the course of human history ... everyone everywhere would soon come to feel differently about what it means to be a living thing in the cosmos," Nye writes.
It won't be a surprise
This kind of discovery, however, won't come suddenly, according to Linda Billings, a consultant to NASA's Astrobiology and Near Earth Object programs.
"As is the case with most scientific discoveries, the discovery of extraterrestrial life will likely be a prolonged process," Billings told Business Insider. "Claims of evidence of extraterrestrial life will be subjected to peer review, and other scientists will continue to look for further evidence."
One example of this prolonged process took place in the mid-'90s, when a team of scientists announced that they had found convincing evidence for extraterrestrial life inside a Martian meteorite — a rock that formed on Mars, was ejected into space after a powerful impact by an asteroid or comet, and eventually landed on Earth.
To date, scientists have identified 132 Martian meteorites.
In 1996, the NASA-led team published a paper in the prestigious journal Science reporting that they'd identified grooves and organic compounds in the "ALH84001" Martian meteorite, discovered in Antarctica, that could be fossilized evidence of extraterrestrial nanobacteria.
"The astrobiology community spent months into years investigating those claims," Billings said. "Eventually a consensus emerged in the science community that the original claim of fossil evidence of martian life did not stand up to scrutiny."
If astrobiologists do eventually discover that life came from Mars, NASA will be ready for what happens next.
NASA explores the repercussions
In 2011, NASA and the Library of Congress established the Baruch S. Blumberg NASA/Library of Congress Astrobiology Program, which explores the philosophical, religious, ethical, legal, and cultural impact related to the possible discovery of extraterrestrial life.
The current chair of the program, Nathaniel Comfort, a science historian and professor at the Institute of the History of Medicine at The Johns Hopkins University, shared his thoughts with Business Insider about what the notion that we all have a little Martian in us might mean:
"It wouldn't alter the views of those who hold literal interpretations of Scripture. And the rest of evolution would follow as before," Comfort said. "The tabloids would have a field day of course. But once the headlines faded and the conferences ended, I think life would continue on much as before."
As for the people who dedicate their lives to the scientific process, Comfort said:
"Academics would debate questions of human identity afresh ... in short, it might throw an existential monkey wrench into the works, but the principles of moral behavior would remain the same."
The probability of panspermia
The idea that life came from Mars is a highly debated topic. Both Comfort and Billings agree that the possibility is unlikely.
"It seems to me extremely unlikely that life on Earth came from Mars (or anywhere else)," Comfort said. "The logic and data I find most persuasive dismisses the idea of life coming from a 'seed' at all, whether terrestrial or not."
Yet other scientists, like Steven Benner, a chemist and one of the world's leading experts on the origins of life, argue otherwise.
In 2013, Benner said during a talk at the Goldschmidt conference for geochemists that Mars might have been a better place for life to begin than Earth.
That's because ancient Martian meteorites contain more boron and molybdenum — important precursors to the formation of RNA — than early Earth.
Moreover, Christopher Adcock and Elizabeth Hausrath, both researchers at the University of Nevada, discovered in 2013 that phosphates — another important chemical in the formation of RNA, DNA, and essential proteins — in Martian meteorites are more water-soluble than those on early Earth.
And since life is suspected to have begun in the presence of water, their research suggests that Mars could have formed life more readily than Earth.
However, studying Martian meteorites for signs of life has been ongoing for over two decades without success. Perhaps the only way to know for sure if we are the true aliens is to head to Mars ourselves and dig up the potential proof. |
Simple Guitar Physics
Construction of the Guitar
In order to achieve the specific sounds required for music, guitars have various components that enable them to produce these specialized sounds. The narrow end of the guitar is called the headstock, and it is attached to the neck of the guitar. On the headstock there are machine heads, also known as tuning keys, around which the strings are wound. At the point where the headstock meets the neck, there is a small piece of material (plastic, bone, etc.) called the nut, in which small grooves are carved to guide the strings up to the machine heads. The neck runs down to meet the body of the guitar at the upper bout, and it carries the fretboard, with frets embedded at points along its length that divide it mathematically. The body of the guitar is a resonating chamber that projects the vibrations of the body through the hole cut in its top, called the sound hole. The strings run from the machine heads, over the nut, down the neck, over the body and the sound hole, and are anchored at a piece of hardware attached to the body, called the bridge.
It is these components of the guitar that allow it to produce the specific sounds required to create music. To understand music and how guitars produce it, it is first necessary to understand the physics of sound. Sound is created when a wave motion is set up in the air by the vibration of material bodies: when material bodies vibrate, they create vibrational energy that travels as pressure waves through a medium. All instruments create vibrations in order to produce the sound waves that make up music, which is essentially organized sound, and the guitar is a string instrument, meaning that it creates its sound through the vibrations of a string. On the guitar, the string that vibrates to produce the sound is fixed at both ends and is elastic, and therefore can vibrate. When the guitar string is strummed or plucked, it begins to vibrate, and since these vibrations are waves, they travel in both directions along the string and are reflected back at each fixed end. These waves do not cancel each other out as they reflect back upon themselves, but instead form a standing wave: a situation where crests and troughs remain at fixed positions in the medium while the wave as a whole increases and decreases together. The guitar strings satisfy the relationship between wavelength and frequency, represented by the equation v = fλ. This equation can be rearranged to f = v/λ, meaning that the frequency of a wave (f) depends on both the speed of the wave (v) and the wavelength (λ). The speed of the wave traveling on the guitar string depends in turn on the tension of the string (T) and the linear mass density of the string (µ); in fact, "the root frequency for a string is proportional to the square root of the tension, inversely proportional to its length, and inversely proportional to the square root of its linear mass density". What this means is that waves travel faster when the tension of the string is higher, which in turn means that the frequency will be higher as the tension is increased (in f = v/λ, the v is increasing).
This also means that waves travel more slowly on a more massive string, since if the mass density is increased, the v decreases. This relationship between speed, tension, and mass density can be arranged into a new equation: v = √(T/µ).
When a standing wave vibrates, a combination of reflection and interference occurs in such a way that the reflected waves interfere constructively with the incident waves, because the waves changed phase when they reflected from one of the fixed ends. When this is happening, the medium appears to vibrate in segments, and it is not apparent that the whole wave is traveling. Since a guitar string has two fixed ends, it acts like a standing wave, and therefore when agitated by being plucked or strummed, the wavelength that the string produces is twice the length of the string. Since all the strings are the same length, all six strings on the guitar use the same range of wavelengths; however, in order to produce the different sound waves required to create music, different amounts of air must be displaced at different frequencies, meaning the guitar strings must be able to vibrate at different frequencies. To create different frequencies on the guitar, one of the factors in the equation f = v/λ must be changed: either the speed or the wavelength. Since the strings are attached to the nut and bridge, and when played open have a fixed wavelength, the only other factor that can be changed to produce a different frequency is the speed of the wave, v. Since the speed of the wave is set by the tension of the string and the mass density (v = √(T/µ)), either the tension or the mass density must be changed in order to create a different frequency. However, if the frequency of the guitar string's vibration were changed only by varying the tension, then the high strings (needing a higher frequency) would have to be wound very tight, while the lower strings (needing a lower frequency) would require much less tension and consequently be very loose. Since it would be very difficult to play a guitar where the high strings are tight and the low strings loose, guitars are constructed so that the tension of the strings is roughly equal. Since the only other factor that can be changed while playing all the strings open is the mass density, guitars are strung so that tension and mass density are balanced together. As a result, the higher the frequency required from an open string, the lower its mass density, since higher frequencies require higher tension, and the less mass a string has, the less tension is needed to achieve the same frequency. Conversely, the lower the required frequency of a string, the higher its mass density, since lower tension produces lower frequencies, and the more mass a string has, the more tension is required. Since in standard tuning the strings of the guitar are a perfect fourth apart in pitch (frequency), except between G and B, the amount by which the mass density must increase so that the tension remains constant can be calculated.
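Combining f = v/λ with λ = 2L and v = √(T/µ) gives f = √(T/µ)/(2L), which the following Python sketch evaluates; the tension, length, and density values are illustrative assumptions, not measurements from the text:

```python
import math

# Fundamental frequency of a string fixed at both ends, from the relations
# above: v = sqrt(T/mu), lambda = 2L, so f = sqrt(T/mu) / (2L).
# The string parameters below are illustrative assumptions.

def fundamental_frequency(tension_N, mass_density_kg_per_m, length_m):
    speed = math.sqrt(tension_N / mass_density_kg_per_m)  # wave speed v
    wavelength = 2 * length_m          # open-string standing wave, lambda = 2L
    return speed / wavelength          # f = v / lambda

# A 65 cm string at 70 N tension with linear density 0.4 g/m:
print(fundamental_frequency(70, 0.0004, 0.65))  # ~322 Hz

# Doubling the mass density at the same tension lowers the pitch,
# which is why the low strings on a guitar are thicker or wound:
print(fundamental_frequency(70, 0.0008, 0.65))  # ~228 Hz
```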
Frets and Intonation
However, music is complex, and many frequencies are required to create the correct sound waves that produce the music. This poses a problem, because although the six strings of the guitar are set up in a playing-friendly manner, at this point each individual string can produce only one frequency, since no part of the equation f = v/λ is being changed when an open string is played; this is not nearly enough variation to produce complex music. Therefore, one part of the equation f = v/λ must be changed while playing the guitar in order to produce a different frequency. The speed of the wave cannot be changed, since its two factors (v = √(T/µ)), the tension of the string and the mass density, are not changed significantly enough while playing to affect the speed of the wave enough to change the frequency. As a result, on the neck of the guitar there are little strips of metal called frets, whose function is to decrease the vibrating length of the string, which causes a higher frequency. When a string is pressed down near a fret, the resonant length of the string is decreased, as it no longer stretches from the bridge to the nut but from the bridge to the fret where the string is being held down. This decreases the wavelength (λ) by decreasing the length of the medium (the string), which consequently increases the frequency of the string. Thus, on every string, the guitar player has the option of decreasing the length of the string in about 24 different ways, which will produce about 24 different frequencies on each string. Since a guitar has six strings, and each string can have up to 24 frets, the number of notes available from which to choose is greatly increased. As multiple strings can be played together, the guitarist has many frequencies from which to choose in order to create music on the instrument.
Frets on the fingerboard serve to fix the positions of notes and scales, which gives them equal temperament. Consequently, the ratio of the widths of two consecutive frets is the twelfth root of two, whose numeric value is about 1.059. The twelfth fret divides the string into two exact halves, and the 24th fret (if present) divides the string in half yet again. Every twelve frets represent one octave. The position of the bridge saddles, upon which the strings rest, determines the distance to the nut (at the top of the fingerboard). This distance defines the positions of the harmonic nodes for the strings over the fretboard, and is the basis of intonation. Intonation refers to the property that the actual frequency of each string at each fret matches what those frequencies should be according to music theory. Because of the physical limitations of fretted instruments, intonation is at best approximate; thus, the guitar's intonation is said to be tempered. The twelfth, or octave, fret resides directly under the first harmonic node (the half-length of the string), and in the tempered fretboard, the ratio of distances between consecutive frets is approximately 1.06, as derived above.
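The fret spacing described above follows directly from the twelfth root of two. Here is a minimal Python sketch; the 650 mm scale length is an illustrative assumption:

```python
# Fret positions from the twelfth root of two, as derived above.
# The 650 mm scale length is an illustrative assumption.

SCALE_LENGTH = 650.0  # nut-to-saddle distance in mm (assumed value)

def distance_from_nut(fret):
    """Each fret shortens the vibrating length by a factor of 2**(1/12),
    so the remaining length at fret n is L / 2**(n/12)."""
    remaining = SCALE_LENGTH / 2 ** (fret / 12)
    return SCALE_LENGTH - remaining

for n in (1, 5, 12, 24):
    print(n, round(distance_from_nut(n), 1))
# Fret 12 lands at 325.0 mm, exactly half the string (one octave up);
# fret 24 halves the remainder again (487.5 mm from the nut).
```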
However, if a guitar string vibrated at only one single frequency, the guitar would sound quite boring, and there would not be much difference between the guitar and other stringed instruments. Guitars sound different from other stringed instruments because of the different overtones, or harmonics, dominant on a guitar. When a guitar string is strummed or plucked, the string begins to vibrate, and these vibrations take the form of waves. The waves created by the vibrations of the string travel in both directions along the string, and continue until they are reflected off the fixed ends. When the waves are reflected, they change direction and travel back the other way through the medium (the string). When the waves are traveling back through the string, they interfere with the other waves traveling along the string that were also caused by the vibration. The standing wave pattern is formed when there is perfectly timed interference of two waves passing through the same medium, creating a situation where the crests and troughs remain at fixed positions. On a guitar string, the reflected waves traveling in the opposite direction to the other waves on the string create a standing wave. Because of the interfering vibrations on a guitar string, standing wave patterns are created, meaning that there are some points along the string that appear to be standing still; these points of no displacement are referred to as nodes. There are other points along the medium that undergo vibrations between a large positive and a large negative displacement; these are the points that undergo the maximum displacement during each vibrational cycle of the standing wave, and they are called antinodes. On the guitar string, a number of different patterns of standing waves may be produced, and each pattern will have a different number of nodes and antinodes. Standing wave patterns can only be produced within the string when it is vibrated at certain frequencies; however, there are several frequencies at which the string can vibrate to produce the different patterns of standing waves, each with a different number of nodes and antinodes. Each different frequency is associated with a different standing wave pattern, and these are referred to as harmonics. The simplest pattern of standing wave that can be produced is one in which the two nodes are at the fixed ends, which gives the longest wavelength; it is called the first harmonic, or fundamental harmonic. Since on a guitar string the waves keep being reflected off the fixed ends and interfering with each other, there are many different frequencies, but with any medium fixed at both ends, only certain sized waves can stand. This means that on a guitar string only certain frequencies can stand, so we say that such a medium is tuned. The second pattern of the standing wave, or second harmonic, has half the wavelength and twice the frequency of the first harmonic. The second harmonic is also referred to as the first overtone, and it is these multiple overtones that we hear from the guitar string that make the guitar sound different from other instruments. Similarly, the third harmonic, or the third possible pattern for the standing wave on a guitar, has one third the wavelength and three times the frequency compared to the first harmonic and is called the second overtone.
The rest of the harmonics follow the same pattern: the nth harmonic has 1/n of the wavelength and n times the frequency of the first harmonic. It is the fundamental frequency (first harmonic) that determines the note we hear, and the higher harmonics determine the timbre. This means that the simplest standing wave pattern on the guitar string, containing only two nodes and one antinode, determines what musical note we hear, while the more complex standing wave patterns, the other harmonics, determine how that note sounds.
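The 1/n-wavelength, n-times-frequency pattern can be tabulated directly. In this short Python sketch, the 110 Hz fundamental and 65 cm string length are illustrative assumptions:

```python
# Harmonic series of an open string, following the 1/n wavelength and
# n-times-frequency pattern stated above. The values are assumptions.

FUNDAMENTAL_HZ = 110.0   # open A string (assumed)
STRING_LENGTH_M = 0.65   # so the fundamental wavelength is 2 * 0.65 m

for n in range(1, 6):
    frequency = n * FUNDAMENTAL_HZ            # nth harmonic: n times f1
    wavelength = (2 * STRING_LENGTH_M) / n    # nth harmonic: 1/n of lambda1
    print(f"harmonic {n}: {frequency:6.1f} Hz, wavelength {wavelength:.3f} m")
# The first harmonic sets the pitch we hear; the relative strengths of the
# higher harmonics shape the timbre.
```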
Sound is created when material vibrations cause changes in air pressure and create pressure waves. However, guitar strings are not large enough to move enough air to create a sound loud enough to be easily heard by the human ear. Therefore, the body of an acoustic guitar is used to amplify the sounds the strings produce, and it is made up of different components that allow it to do so. The body of the guitar is basically a large hollow space specially constructed to amplify the sound of the strings. The top plate of the body, the piece of wood located on the front of the body, is constructed so that it can vibrate up and down relatively easily, and is usually made of light, springy wood, about 2.5 mm thick. Inside the body there is a series of braces that strengthen the plate and keep it flat, despite the movement of the strings, which tends to make the bridge move, since it is attached to the top plate. On the opposite side of the guitar there is the back plate, which does not play as big a role in amplifying the sound, since it is held against the player's body and cannot vibrate much. The sides of the guitar also do not vibrate much in the direction perpendicular to their surface, so they do not radiate much sound either.
When the strings are plucked or strummed, they begin to vibrate, and these vibrations, in the form of waves, are transmitted to the bridge of the guitar. Since the bridge is attached to the top plate, the top plate also begins to vibrate as a result of the string's vibrations, via the bridge. If the string is vibrating at a high frequency, and consequently the bridge is vibrating at a high frequency, most of the sound is radiated by the vibrations of the top plate. Since the top plate has a much larger surface area than the string, when the top plate vibrates as a result of the string's vibrations, the volume of air it displaces is much larger than that displaced by the string alone. Therefore, the pressure waves produced by the top plate are bigger, and the sound is louder. For lower frequencies, the string's vibrations are transmitted via the bridge to the top plate, then to the back plate, and then reflected through the sound hole, constantly increasing the volume of the pressure waves being produced. In fact, it is not the vibration of the guitar string itself that we hear when listening to a guitar, but rather the amplification of the vibrations it produces through the body of the guitar.
Capacitors are manufactured in many forms, styles, lengths, girths, and from many materials. They all contain at least two electrical conductors (called "plates") separated by an insulating layer (called the dielectric). Capacitors are widely used as parts of electrical circuits in many common electrical devices.
Capacitors, together with resistors and inductors, belong to the group of "passive components" used in electronic equipment. Although, in absolute figures, the most common capacitors are integrated capacitors (e.g. in DRAMs or flash memory structures), this article concentrates on the various styles of capacitors as discrete components.
Small capacitors are used in electronic devices to couple signals between stages of amplifiers, as components of electric filters and tuned circuits, or as parts of power supply systems to smooth rectified current. Larger capacitors are used for energy storage in such applications as strobe lights, as parts of some types of electric motors, or for power factor correction in AC power distribution systems. Standard capacitors have a fixed value of capacitance, but adjustable capacitors are frequently used in tuned circuits. Different types are used depending on required capacitance, working voltage, current handling capacity, and other properties.
Theory of conventional construction
In a conventional capacitor, the electric energy is stored statically by charge separation, typically electrons, in an electric field between two electrode plates. The amount of charge stored per unit voltage is essentially a function of the size of the plates, the plate material's properties, the properties of the dielectric material placed between the plates, and the separation distance (i.e. dielectric thickness). The potential between the plates is limited by the properties of the dielectric material and the separation distance.
Nearly all conventional industrial capacitors, except some special styles such as "feed-through capacitors", are constructed as "plate capacitors", even if their electrodes and the dielectric between them are wound or rolled. The capacitance formula for plate capacitors is C = ε·A/d.
The capacitance C increases with the area A of the plates and with the permittivity ε of the dielectric material and decreases with the plate separation distance d. The capacitance is therefore greatest in devices made from materials with a high permittivity, large plate area, and small distance between plates.
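As a concrete check of this relation, here is a small Python sketch of C = ε·A/d; the film-capacitor-like numbers (relative permittivity 2.2, 0.1 m² plate area, 5 µm separation) are illustrative assumptions:

```python
# Parallel-plate capacitance C = epsilon * A / d, as given above.
# The sample numbers below are illustrative assumptions.

EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(relative_permittivity, area_m2, separation_m):
    return EPSILON_0 * relative_permittivity * area_m2 / separation_m

# 0.1 m^2 of polypropylene-like film (eps_r ~ 2.2) at 5 um thickness:
print(plate_capacitance(2.2, 0.1, 5e-6))   # ~3.9e-7 F, i.e. ~0.39 uF

# Halving the dielectric thickness doubles the capacitance:
print(plate_capacitance(2.2, 0.1, 2.5e-6)) # ~0.78 uF
```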
Theory of electrochemical construction
Another type – the electrochemical capacitor – makes use of two other storage principles to store electric energy. In contrast to ceramic, film, and electrolytic capacitors, supercapacitors (also known as electrical double-layer capacitors (EDLC) or ultracapacitors) do not have a conventional dielectric. The capacitance value of an electrochemical capacitor is determined by two high-capacity storage principles. These principles are:
- electrostatic storage within Helmholtz double layers achieved on the phase interface between the surface of the electrodes and the electrolyte (double-layer capacitance); and
- electrochemical storage achieved by a faradaic electron charge-transfer by specifically adsorbed ions with redox reactions (pseudocapacitance). Unlike in batteries, in these reactions the ions simply cling to the atomic structure of an electrode without making or breaking chemical bonds, and no or negligibly small chemical modifications are involved in charge/discharge.
The ratio of the storage resulting from each principle can vary greatly, depending on electrode design and electrolyte composition. Pseudocapacitance can increase the capacitance value by as much as an order of magnitude over that of the double-layer by itself.
Common capacitors and their names
Capacitors are divided into two mechanical groups: Fixed capacitors with fixed capacitance values and variable capacitors with variable (trimmer) or adjustable (tunable) capacitance values.
The most important group is the fixed capacitors. Many got their names from their dielectric. For a systematic classification these characteristics can't be used, because one of the oldest types, the electrolytic capacitor, is named instead for its cathode construction. So the most-used names are simply historical.
The most common kinds of capacitors are:
- Ceramic capacitors have a ceramic dielectric.
- Film and paper capacitors are named for their dielectrics.
- Aluminum, tantalum and niobium electrolytic capacitors are named after the material used as the anode and the construction of the cathode (the electrolyte).
- Polymer capacitors are aluminum, tantalum or niobium electrolytic capacitors with conductive polymer as electrolyte
- Supercapacitor is the family name for:
- Double-layer capacitors were named for the physical phenomenon of the Helmholtz double-layer
- Pseudocapacitors were named for their ability to store electric energy electro-chemically with reversible faradaic charge-transfer
- Hybrid capacitors combine double-layer and pseudocapacitors to increase power density
- Silver mica, glass, silicon, air-gap and vacuum capacitors are named for their dielectric.
In addition to the above shown capacitor types, which derived their name from historical development, there are many individual capacitors that have been named based on their application. They include:
- Power capacitors, motor capacitors, DC-link capacitors, suppression capacitors, audio crossover capacitors, lighting ballast capacitors, snubber capacitors, coupling, decoupling or bypassing capacitors.
Other kinds of capacitors are discussed in the section on special capacitors below.
The most common dielectrics are:
- Plastic films
- Oxide layer on metal (aluminum, tantalum, niobium)
- Natural materials like mica, glass, paper, air, SF6, vacuum
All of them store their electrical charge statically within an electric field between two (parallel) electrodes.
Besides these conventional capacitors, a family of electrochemical capacitors called supercapacitors was developed. Supercapacitors do not have a conventional dielectric. They store their electrical charge statically in Helmholtz double-layers and faradaically at the surface of electrodes:
- with static double-layer capacitance in a double-layer capacitor and
- with pseudocapacitance (faradaic charge transfer) in a pseudocapacitor
- or with both storage principles together in hybrid capacitors.
The most important material parameters of the different dielectrics used and the approximate Helmholtz-layer thickness are given in the table below.
| Capacitor style | Dielectric | Relative permittivity at 1 kHz | Dielectric strength (V/µm) | Minimum thickness of the dielectric (µm) |
|---|---|---|---|---|
| Ceramic capacitors, Class 1 | paraelectric | 12 to 40 | < 100(?) | 1 |
| Ceramic capacitors, Class 2 | ferroelectric | 200 to 14,000 | < 35 | 0.5 |
| Film capacitors | Polypropylene (PP) | 2.2 | 650 / 450 | 1.9 to 3.0 |
| Film capacitors | Polyethylene terephthalate (PET) | 3.3 | 580 / 280 | 0.7 to 0.9 |
| Film capacitors | Polyphenylene sulfide (PPS) | 3.0 | 470 / 220 | 1.2 |
| Film capacitors | Polyethylene naphthalate (PEN) | 3.0 | 500 / 300 | 0.9 to 1.4 |
| Film capacitors | Polytetrafluoroethylene (PTFE) | 2.0 | 450(?) / 250 | 5.5 |
| Paper capacitors | Paper | 3.5 to 5.5 | 60 | 5 to 10 |
| Aluminum electrolytic capacitors | Aluminium oxide | 9.6 | 710 | < 0.01 (6.3 V), < 0.8 (450 V) |
| Tantalum electrolytic capacitors | Tantalum pentoxide | 26 | 625 | < 0.01 (6.3 V), < 0.08 (40 V) |
| Niobium electrolytic capacitors | Niobium pentoxide | 42 | 455 | < 0.01 (6.3 V), < 0.10 (40 V) |
| Supercapacitors | Helmholtz double-layer | – | 5,000 | < 0.001 (2.7 V) |
| Air gap capacitors | Air | 1 | 3.3 | – |
| Glass capacitors | Glass | 5 to 10 | 450 | – |
| Mica capacitors | Mica | 5 to 8 | 118 | 4 to 50 |
The capacitor's plate area can be adapted to the desired capacitance value. The permittivity and the dielectric thickness are the determining parameters for capacitors. Ease of processing is also crucial. Thin, mechanically flexible sheets can be wrapped or stacked easily, yielding large designs with high capacitance values. Razor-thin metallized sintered ceramic layers covered with metallized electrodes, however, offer the best conditions for the miniaturization of circuits with SMD styles.
A quick look at the figures in the table above explains some simple facts:
- Supercapacitors have the highest capacitance density because of their special charge storage principles
- Electrolytic capacitors have a lower capacitance density than supercapacitors, but the highest capacitance density of the conventional capacitors, due to their thin dielectric.
- Ceramic capacitors class 2 have much higher capacitance values in a given case than class 1 capacitors because of their much higher permittivity.
- Film capacitors with their different plastic film materials show only a small spread in dimensions for a given capacitance/voltage value, because the minimum dielectric film thickness differs between the different film materials.
Capacitance and voltage range
Capacitance ranges from picofarads to more than hundreds of farads. Voltage ratings can reach 100 kilovolts. In general, capacitance and voltage correlate with physical size and cost.
As in other areas of electronics, volumetric efficiency measures the performance of electronic function per unit volume. For capacitors, the volumetric efficiency is measured with the "CV product", calculated by multiplying the capacitance (C) by the maximum voltage rating (V), divided by the volume. From 1970 to 2005, volumetric efficiencies have improved dramatically.
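As a rough illustration of the CV product, here is a small Python sketch; both sample parts and their volumes are illustrative assumptions, not catalogue data:

```python
# Volumetric efficiency as the CV product per unit volume, following the
# definition above. Both sample capacitors are illustrative assumptions.

def cv_per_volume(capacitance_F, rated_voltage_V, volume_mm3):
    return capacitance_F * rated_voltage_V / volume_mm3  # F*V per mm^3

# A 100 uF / 25 V aluminum electrolytic in roughly 500 mm^3:
print(cv_per_volume(100e-6, 25, 500))    # 5e-6 F*V/mm^3

# A 1 uF / 50 V ceramic chip capacitor in roughly 10 mm^3:
print(cv_per_volume(1e-6, 50, 10))       # 5e-6 F*V/mm^3 -- comparable
```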
Wound metallized paper capacitor from the early 1930s in hardpaper case, capacitance value specified in "cm" in the cgs system; 5,000 cm corresponds to 0.0056 µF.
Overlapping range of the applications
These individual capacitors can perform their function regardless of which of the capacitor types shown above they belong to, so an overlapping range of applications exists between the different capacitor types.
Types and styles
A ceramic capacitor is a non-polarized fixed capacitor made out of two or more alternating layers of ceramic and metal in which the ceramic material acts as the dielectric and the metal acts as the electrodes. The ceramic material is a mixture of finely ground granules of paraelectric or ferroelectric materials, modified by mixed oxides that are necessary to achieve the capacitor's desired characteristics. The electrical behavior of the ceramic material is divided into two stability classes:
- Class 1 ceramic capacitors offer high stability and low losses, compensating the influence of temperature in resonant circuit applications. Common EIA/IEC code abbreviations are C0G/NP0, P2G/N150, R2G/N220, U2J/N750, etc.
- Class 2 ceramic capacitors offer high volumetric efficiency for buffer, by-pass and coupling applications. Common EIA/IEC code abbreviations are X7R/2XI, Z5U/E26, Y5V/2F4, X7S/2C1, etc.
The great plasticity of ceramic raw material works well for many special applications and enables an enormous diversity of styles, shapes and great dimensional spread of ceramic capacitors. The smallest discrete capacitor, for instance, is a "01005" chip capacitor with the dimension of only 0.4 mm × 0.2 mm.
The construction of ceramic multilayer capacitors with mostly alternating layers results in single capacitors connected in parallel. This configuration increases capacitance and decreases all losses and parasitic inductances. Ceramic capacitors are well-suited for high frequencies and high current pulse loads.
Because the thickness of the ceramic dielectric layer can be easily controlled and produced to match the desired application voltage, ceramic capacitors are available with rated voltages up to the 30 kV range.
Some ceramic capacitors of special shapes and styles are used as capacitors for special applications, including RFI/EMI suppression capacitors for connection to supply mains, also known as safety capacitors, X2Y® and three-terminal capacitors for bypassing and decoupling applications, feed-through capacitors for noise suppression by low-pass filters and ceramic power capacitors for transmitters and HF applications.
Film capacitors or plastic film capacitors are non-polarized capacitors with an insulating plastic film as the dielectric. The dielectric films are drawn to a thin layer, provided with metallic electrodes and wound into a cylindrical winding. The electrodes of film capacitors may be metallized aluminum or zinc, applied on one or both sides of the plastic film, resulting in metallized film capacitors or a separate metallic foil overlying the film, called film/foil capacitors.
Metallized film capacitors offer self-healing properties. Dielectric breakdowns or shorts between the electrodes do not destroy the component. The metallized construction makes it possible to produce wound capacitors with larger capacitance values (up to 100 µF and larger) in smaller cases than within film/foil construction.
Film/foil capacitors or metal foil capacitors use two plastic films as the dielectric. Each film is covered with a thin metal foil, mostly aluminium, to form the electrodes. The advantage of this construction is the ease of connecting the metal foil electrodes, along with an excellent current pulse strength.
A key advantage of every film capacitor's internal construction is direct contact to the electrodes on both ends of the winding. This contact keeps all current paths very short. The design behaves like a large number of individual capacitors connected in parallel, thus reducing the internal ohmic losses (ESR) and ESL. The inherent geometry of film capacitor structure results in low ohmic losses and a low parasitic inductance, which makes them suitable for applications with high surge currents (snubbers) and for AC power applications, or for applications at higher frequencies.
The plastic films used as the dielectric for film capacitors are polypropylene (PP), polyester (PET), polyphenylene sulfide (PPS), polyethylene naphthalate (PEN), and polytetrafluoroethylene or Teflon (PTFE). Polypropylene film (with a market share of about 50%) and polyester film (about 40%) are the most used film materials. The remaining roughly 10% is split among all other materials, including PPS and paper at roughly 3% each.
| Film material, abbreviated codes | PET | PEN | PPS | PP |
|---|---|---|---|---|
| Relative permittivity at 1 kHz | 3.3 | 3.0 | 3.0 | 2.2 |
| Minimum film thickness (µm) | 0.7–0.9 | 0.9–1.4 | 1.2 | 2.4–3.0 |
| Moisture absorption (%) | low | 0.4 | 0.05 | <0.1 |
| Dielectric strength (V/µm) | 580 | 500 | 470 | 650 |
| Commercial realized voltage proof (V/µm) | 280 | 300 | 220 | 450 |
| DC voltage range (V) | 50–1,000 | 16–250 | 16–100 | 40–2,000 |
| Capacitance range | 100 pF–22 µF | 100 pF–1 µF | 100 pF–0.47 µF | 100 pF–10 µF |
| Application temperature range (°C) | −55 to +125/+150 | −55 to +150 | −55 to +150 | −55 to +105 |
| ΔC/C0 versus temperature range (%) | ±5 | ±5 | ±1.5 | ±2.5 |
| Dissipation factor (·10⁻⁴) at 1 kHz | 50–200 | 42–80 | 2–15 | 0.5–5 |
| Dissipation factor (·10⁻⁴) at 10 kHz | 110–150 | 54–150 | 2.5–25 | 2–8 |
| Dissipation factor (·10⁻⁴) at 100 kHz | 170–300 | 120–300 | 12–60 | 2–25 |
| Dissipation factor (·10⁻⁴) at 1 MHz | 200–350 | – | 18–70 | 4–40 |
| Time constant R_insul·C (s) at 25 °C | ≥10,000 | ≥10,000 | ≥10,000 | ≥100,000 |
| Time constant R_insul·C (s) at 85 °C | 1,000 | 1,000 | 1,000 | 10,000 |
| Dielectric absorption (%) | 0.2–0.5 | 1–1.2 | 0.05–0.1 | 0.01–0.1 |
| Specific capacitance (nF·V/mm³) | 400 | 250 | 140 | 50 |
Some film capacitors of special shapes and styles are used as capacitors for special applications, including RFI/EMI suppression capacitors for connection to the supply mains (also known as safety capacitors), snubber capacitors for very high surge currents, and motor run capacitors (AC capacitors for motor-run applications).
Power film capacitors
A related type is the power film capacitor. The materials and construction techniques used for large power film capacitors mostly are similar to those of ordinary film capacitors. However, capacitors with high to very high power ratings for applications in power systems and electrical installations are often classified separately, for historical reasons. The standardization of ordinary film capacitors is oriented on electrical and mechanical parameters. The standardization of power capacitors by contrast emphasizes the safety of personnel and equipment, as given by the local regulating authority.
As modern electronic equipment gained the capacity to handle power levels that were previously the exclusive domain of "electrical power" components, the distinction between the "electronic" and "electrical" power ratings blurred. Historically, the boundary between these two families was approximately at a reactive power of 200 volt-amperes.
Film power capacitors mostly use polypropylene film as the dielectric. Other types include metallized paper capacitors (MP capacitors) and mixed dielectric film capacitors with polypropylene dielectrics. MP capacitors serve for low-cost applications and as field-free carrier electrodes (soggy foil capacitors) for high AC or high current pulse loads. Windings can be filled with an insulating oil or with epoxy resin to reduce air bubbles, thereby preventing short circuits.
They find use as converters to change voltage, current or frequency, to store electric energy or deliver it abruptly, or to improve the power factor. The rated voltage range of these capacitors is from approximately 120 V AC (capacitive lighting ballasts) to 100 kV.
Power film capacitor for AC Power factor correction (PFC), packaged in a cylindrical metal can
Electrolytic capacitors have a metallic anode covered with an oxidized layer used as dielectric. The second electrode is a non-solid (wet) or solid electrolyte. Electrolytic capacitors are polarized. Three families are available, categorized according to their dielectric.
- Aluminum electrolytic capacitors with aluminum oxide as dielectric
- Tantalum electrolytic capacitors with tantalum pentoxide as dielectric
- Niobium electrolytic capacitors with niobium pentoxide as dielectric.
The anode is highly roughened to increase the surface area. This and the relatively high permittivity of the oxide layer gives these capacitors very high capacitance per unit volume compared with film- or ceramic capacitors.
The permittivity of tantalum pentoxide is approximately three times higher than that of aluminium oxide, producing significantly smaller components. However, permittivity determines only the dimensions. Electrical parameters, especially conductivity, are established by the electrolyte's material and composition. Three general types of electrolytes are used:
- non solid (wet, liquid)—conductivity approximately 10 mS/cm and are the lowest cost
- solid manganese oxide—conductivity approximately 100 mS/cm offer high quality and stability
- solid conductive polymer (Polypyrrole or PEDOT:PSS)—conductivity approximately 100...500 S/cm, offer ESR values as low as <10 mΩ
Internal losses of electrolytic capacitors, which are prevailingly used for decoupling and buffering applications, are determined by the kind of electrolyte. Non-solid (wet) electrolytes are based on solvents such as ethylene glycol, DMF, DMA, or GBL.
The large capacitance per unit volume of electrolytic capacitors makes them valuable in relatively high-current and low-frequency electrical circuits, e.g. in power supply filters for decoupling unwanted AC components from DC power connections, or as coupling capacitors in audio amplifiers for passing or bypassing low-frequency signals and storing large amounts of energy. The relatively high capacitance value of an electrolytic capacitor, combined with the very low ESR of the polymer electrolyte of polymer capacitors, especially in SMD styles, makes them a competitor to MLC chip capacitors in personal computer power supplies.
Bipolar aluminum electrolytic capacitors (also called Non-Polarized capacitors) contain two anodized aluminium foils, behaving like two capacitors connected in series opposition.
Supercapacitors (SC) comprise a family of electrochemical capacitors. Supercapacitor, sometimes called ultracapacitor, is a generic term for electric double-layer capacitors (EDLC), pseudocapacitors and hybrid capacitors. They don't have a conventional solid dielectric. The capacitance value of an electrochemical capacitor is determined by two storage principles, both of which contribute to the total capacitance of the capacitor:
- Double-layer capacitance – Storage is achieved by separation of charge in a Helmholtz double layer at the interface between the surface of a conductor and an electrolytic solution. The distance of separation of charge in a double-layer is on the order of a few Angstroms (0.3–0.8 nm). This storage is electrostatic in origin.
- Pseudocapacitance – Storage is achieved by redox reactions, electrosorption or intercalation on the surface of the electrode, or by specifically adsorbed ions, resulting in a reversible faradaic charge-transfer. The pseudocapacitance is faradaic in origin.
Supercapacitors are divided into three families, based on the design of the electrodes:
- Double-layer capacitors – with carbon electrodes or derivatives, with much higher static double-layer capacitance than faradaic pseudocapacitance
- Pseudocapacitors – with electrodes made of metal oxides or conducting polymers, with a high amount of faradaic pseudocapacitance
- Hybrid capacitors – capacitors with special and asymmetric electrodes that exhibit both significant double-layer capacitance and pseudocapacitance, such as lithium-ion capacitors
Supercapacitors bridge the gap between conventional capacitors and rechargeable batteries. They have the highest available capacitance values per unit volume and the greatest energy density of all capacitors. They support up to 12,000 farads at 1.2 volts, with capacitance values up to 10,000 times that of electrolytic capacitors. While existing supercapacitors have energy densities that are approximately 10% of a conventional battery's, their power density is generally 10 to 100 times greater. Power density is defined as the product of energy density and the speed at which the energy is delivered to the load. The greater power density results in much shorter charge/discharge cycles than a battery is capable of, and a greater tolerance for numerous charge/discharge cycles. This makes them well-suited for parallel connection with batteries, and may improve battery performance in terms of power density.
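For a sense of scale, the ideal-capacitor energy formula E = ½CV² can be applied to the figure quoted above; supercapacitors are only approximately ideal, so the following Python sketch is an order-of-magnitude illustration:

```python
# Stored energy of an ideal capacitor, E = 1/2 * C * V^2, applied to the
# 12,000 F / 1.2 V figure quoted above. Supercapacitors deviate from the
# ideal model, so treat the result as an order-of-magnitude estimate.

def stored_energy_J(capacitance_F, voltage_V):
    return 0.5 * capacitance_F * voltage_V ** 2

print(stored_energy_J(12_000, 1.2))   # 8640 J, about 2.4 Wh
print(stored_energy_J(100e-6, 25))    # ~0.03 J for a typical electrolytic
```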
Within electrochemical capacitors, the electrolyte is the conductive connection between the two electrodes, distinguishing them from electrolytic capacitors, in which the electrolyte only forms the cathode, the second electrode.
Supercapacitors are polarized and must operate with correct polarity. Polarity is controlled by design with asymmetric electrodes, or, for symmetric electrodes, by a potential applied during the manufacturing process.
Supercapacitors support a broad spectrum of applications for power and energy requirements, including:
- Low supply current over long periods, for memory backup in SRAMs in electronic equipment
- Power electronics that require very short, high current, as in the KERS system in Formula 1 cars
- Recovery of braking energy for vehicles such as buses and trains
Supercapacitors are rarely interchangeable, especially those with higher energy densities. IEC standard 62391-1, Fixed electric double layer capacitors for use in electronic equipment, identifies four application classes, each defined by its maximum discharge current (a short calculation sketch follows the list):
- Class 1, Memory backup, discharge current in mA = 1 • C (F)
- Class 2, Energy storage, discharge current in mA = 0.4 • C (F) • V (V)
- Class 3, Power, discharge current in mA = 4 • C (F) • V (V)
- Class 4, Instantaneous power, discharge current in mA = 40 • C (F) • V (V)
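Following the class definitions above, here is a minimal Python sketch of the discharge-current formulas; the 100 F / 2.7 V part used in the example is an assumption of mine:

```python
# Maximum discharge currents for the four IEC 62391-1 application classes,
# using the formulas listed above. The 100 F / 2.7 V part is an assumption.

def discharge_current_mA(iec_class, capacitance_F, voltage_V=None):
    if iec_class == 1:                      # memory backup
        return 1 * capacitance_F
    if iec_class == 2:                      # energy storage
        return 0.4 * capacitance_F * voltage_V
    if iec_class == 3:                      # power
        return 4 * capacitance_F * voltage_V
    if iec_class == 4:                      # instantaneous power
        return 40 * capacitance_F * voltage_V
    raise ValueError("IEC 62391-1 defines classes 1-4 only")

for cls in (1, 2, 3, 4):
    print(cls, discharge_current_mA(cls, 100, 2.7), "mA")
# Class 1: 100 mA ... Class 4: 10,800 mA for a 100 F, 2.7 V supercapacitor.
```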
Exceptionally for electronic components like capacitors, there are manifold different trade or series names used for supercapacitors, such as APowerCap, BestCap, BoostCap, CAP-XX, DLCAP, EneCapTen, EVerCAP, DynaCap, Faradcap, GreenCap, Goldcap, HY-CAP, Kapton capacitor, Super capacitor, SuperCap, PAS Capacitor, PowerStor, PseudoCap and Ultracapacitor, making it difficult for users to classify these capacitors.
Class X and Class Y capacitors
Many safety regulations mandate that Class X or Class Y capacitors must be used whenever a "fail-to-short-circuit" could put humans in danger, to guarantee galvanic isolation even when the capacitor fails.
In principle, any dielectric could be used to build Class X and Class Y capacitors; perhaps by including an internal fuse to improve safety. In practice, capacitors that meet Class X and Class Y specifications are typically ceramic RFI/EMI suppression capacitors or plastic film RFI/EMI suppression capacitors.
Besides the capacitors described above, which cover more or less the entire market of discrete capacitors, some new developments and very special capacitor types, as well as older types, can be found in electronics.
- Integrated capacitors—in integrated circuits, nano-scale capacitors can be formed by appropriate patterns of metallization on an isolating substrate. They may be packaged in multiple capacitor arrays with no other semiconductive parts as discrete components.
- Glass capacitors—the first Leyden jar capacitor was made of glass; as of 2012, glass capacitors were in use as SMD versions for applications requiring ultra-reliable and ultra-stable service.
- Vacuum capacitors—used in high power RF transmitters
- SF6 gas filled capacitors—used as capacitance standard in measuring bridge circuits
- Printed circuit boards—metal conductive areas in different layers of a multi-layer printed circuit board can act as a highly stable capacitor in Distributed-element filters. It is common industry practice to fill unused areas of one PCB layer with the ground conductor and another layer with the power conductor, forming a large distributed capacitor between the layers.
- Wire—2 pieces of insulated wire twisted together. Capacitance values usually range from 3 pF to 15 pF. Used in homemade VHF circuits for oscillation feedback.
Specialized devices such as built-in capacitors with metal conductive areas in different layers of a multi-layer printed circuit board and kludges such as twisting together two pieces of insulated wire also exist.
- Leyden jars—the earliest known capacitors
- Clamped mica capacitors—the first capacitors with stable frequency behavior and low losses, used for military radio applications during World War II
- Air-gap capacitors—used by the first spark-gap transmitters
Some 1 nF × 500 VDC rated silver mica capacitors
Variable capacitors may have their capacitance changed by mechanical motion. Generally, two versions of variable capacitors have to be distinguished:
- Tuning capacitor – variable capacitor for intentionally and repeatedly tuning an oscillator circuit in a radio or another tuned circuit
- Trimmer capacitor – small variable capacitor usually for one-time oscillator circuit internal adjustment
Variable capacitors include capacitors that use a mechanical construction to change the distance between the plates, or the amount of plate surface area that overlaps. They mostly use air as the dielectric medium.
Semiconductive variable capacitance diodes are not capacitors in the sense of passive components, but they can change their capacitance as a function of the applied reverse bias voltage and are used like a variable capacitor. They have replaced many of the tuning and trimmer capacitors.
Comparison of types
| Capacitor type | Dielectric | Features and applications | Disadvantages |
|---|---|---|---|
| Ceramic Class 1 capacitors | Paraelectric ceramic mixture of titanium dioxide modified by additives | Predictable linear and low capacitance change with operating temperature. Excellent high-frequency characteristics with low losses. For temperature compensation in resonant circuit applications. Available in voltages up to 15,000 V | Low-permittivity ceramic, so low volumetric efficiency and larger dimensions than Class 2 capacitors |
| Ceramic Class 2 capacitors | Ferroelectric ceramic mixture of barium titanate and suitable additives | High permittivity, high volumetric efficiency, smaller dimensions than Class 1 capacitors. For buffer, by-pass and coupling applications. Available in voltages up to 50,000 V. | Lower stability and higher losses than Class 1. Capacitance changes with applied voltage, frequency and aging. Slightly microphonic |
| Metallized film capacitors | PP, PET, PEN, PPS, (PTFE) | Significantly smaller in size than film/foil versions; self-healing properties. | Thin metallized electrodes limit the maximum current-carrying capability and thus the maximum possible pulse voltage. |
| Film/foil film capacitors | PP, PET, PTFE | Highest surge ratings/pulse voltage. Peak currents higher than for metallized types. | No self-healing properties: an internal short may be disabling. Larger dimensions than metallized alternative. |
| Polypropylene (PP) film capacitors | Polypropylene | Most popular film capacitor dielectric. Predictable linear and low capacitance change with operating temperature. Suitable for applications in Class-1 frequency-determining circuits and precision analog applications. Very narrow capacitance tolerances. Extremely low dissipation factor. Low moisture absorption, therefore suitable for "naked" designs with no coating. High insulation resistance. Usable in high-power applications such as snubbers or IGBTs. Also used in AC power applications, such as motors or power factor correction. Very low dielectric losses. High-frequency and high-power applications such as induction heating. Widely used for safety/EMI suppression, including connection to the power supply mains. | Maximum operating temperature of 105 °C. Relatively low permittivity of 2.2. PP film capacitors tend to be larger than other film capacitors. More susceptible to damage from transient over-voltages or voltage reversals than oil-impregnated MKV capacitors in pulsed power applications. |
| Polyester (PET) film capacitors | Polyethylene terephthalate, polyester (Hostaphan®, Mylar®) | Smaller in size than functionally comparable polypropylene film capacitors. Low moisture absorption. Have almost completely replaced metallized paper and polystyrene film for most DC applications. Mainly used for general-purpose applications or semi-critical circuits with operating temperatures up to 125 °C. Operating voltages up to 60,000 V DC. | Usable only at low (AC power) frequencies. Limited use in power electronics due to higher losses with increasing temperature and frequency. |
| Polyethylene naphthalate (PEN) film capacitors | Polyethylene naphthalate (Kaladex®) | Better stability at high temperatures than PET. More suitable for high-temperature applications and for SMD packaging. Mainly used for non-critical filtering, coupling and decoupling, where temperature dependence is not significant. | Lower relative permittivity and lower dielectric strength imply larger dimensions for a given capacitance and rated voltage than PET. |
| Polyphenylene sulfide (PPS) film capacitors | Polyphenylene sulfide (Torelina®) | Small temperature dependence over the entire temperature range and narrow frequency dependence over a wide frequency range. Dissipation factor is small and stable. Operating temperatures up to 270 °C. Suitable for SMD. Tolerate the increased reflow soldering temperatures for lead-free soldering mandated by the RoHS 2002/95/EU directive | Above 100 °C the dissipation factor increases, raising component temperature, but operation without degradation is possible. Cost usually higher than PP. |
| Polytetrafluoroethylene (PTFE, Teflon film) capacitors | Polytetrafluoroethylene (Teflon®) | Lowest-loss solid dielectric. Operating temperatures up to 250 °C. Extremely high insulation resistance. Good stability. Used in mission-critical applications. | Large size (due to low dielectric constant). Higher cost than other film capacitors. |
| Polycarbonate film capacitors | Polycarbonate | Almost completely replaced by PP | Limited manufacturers |
| Polystyrene film capacitors | Polystyrene (Styroflex) | Good thermal stability, high insulation, low distortion, but unsuited to SMT; now almost completely replaced by PET | Limited manufacturers |
| Polysulfone film capacitors | Polysulfone | Similar to polycarbonate. Withstand full voltage at comparatively higher temperatures. | In development only; no series available (2012) |
| Polyamide film capacitors | Polyamide | Operating temperatures of up to 200 °C. High insulation resistance. Good stability. Low dissipation factor. | In development only; no series available (2012) |
| Polyimide film capacitors | Polyimide (Kapton®) | Highest dielectric strength of any known plastic film dielectric. | In development only; no series available (2012) |
| Film-based power capacitors | | | |
| Metallized paper power capacitors | Paper impregnated with insulating oil or epoxy resin | Self-healing properties. Originally impregnated with wax, oil or epoxy. Oil-kraft-paper versions used in certain high-voltage applications. Mostly replaced by PP. | Large size. Highly hygroscopic, absorbing moisture from the atmosphere despite plastic enclosures and impregnation. Moisture increases dielectric losses and decreases insulation resistance. |
| Paper film/foil power capacitors | Kraft paper impregnated with oil | Paper covered with metal foils as electrodes. Low cost. Intermittent-duty, high-discharge applications. | Physically large and heavy. Significantly lower energy density than PP dielectric. Not self-healing. Potential catastrophic failure due to high stored energy. |
| Metallized paper/PP power capacitors (MKV power capacitors) | Double-sided (field-free) metallized paper as electrode carrier; PP as dielectric, impregnated with insulating oil, epoxy resin or insulating gas | Self-healing. Very low losses. High insulation resistance. High inrush current strength. High thermal stability. Heavy-duty applications such as commutation with high reactive power, high frequencies and high peak current loads, and other AC applications. | Physically larger than PP power capacitors. |
| Single- or double-sided metallized PP power capacitors | PP as dielectric, impregnated with insulating oil, epoxy resin or insulating gas | Highest capacitance per volume among power capacitors. Self-healing. Broad range of applications such as general-purpose, AC capacitors, motor capacitors, smoothing or filtering, DC links, snubbing or clamping, damping AC, series-resonant DC circuits, DC discharge, AC commutation, AC power factor correction. | Critical for reliable high-voltage operation and very high inrush current loads; limited heat resistance (105 °C) |
| PP film/foil power capacitors | PP, impregnated with insulating oil, epoxy resin or insulating gas | Highest inrush current strength | Larger than the metallized PP versions. Not self-healing. |
| Aluminum electrolytic capacitors with non-solid (wet) electrolyte | Aluminum oxide (Al2O3) | Very large capacitance-to-volume ratio. Capacitance values up to 2,700,000 µF/6.3 V. Voltages up to 550 V. Lowest cost per capacitance/voltage value. Used where low losses and high capacitance stability are not of major importance, especially at lower frequencies, e.g. by-pass, coupling, smoothing and buffer applications in power supplies and DC links. | Polarized. Significant leakage. Relatively high ESR and ESL values, limiting high-ripple-current and high-frequency applications. Lifetime calculation required because of the drying-out phenomenon. Vents or bursts when overloaded, overheated or connected with wrong polarity. Water-based electrolytes may vent at end of life, showing failures like the "capacitor plague" |
| Wet tantalum electrolytic capacitors (wet slug) | Tantalum pentoxide (Ta2O5) | Lowest leakage among electrolytics. Voltages up to 630 V (tantalum film) or 125 V (tantalum sinter body). Hermetically sealed. Stable and reliable. Military and space applications. | Polarized. Violent explosion when voltage, ripple current or slew rates are exceeded, or under reverse voltage. Expensive. |
| Tantalum/niobium electrolytic capacitors with solid manganese dioxide electrolyte | Tantalum pentoxide (Ta2O5) or niobium pentoxide | Tantalum and niobium versions have smaller dimensions for a given capacitance/voltage than aluminum. Stable electrical parameters. Good long-term high-temperature performance. ESR lower than non-solid (wet) electrolytics. | Polarized. Limited to about 125 V. Limited tolerance of transient, reverse or surge voltages. Possible combustion upon failure. ESR much higher than conductive polymer electrolytics. Manganese dioxide expected to be replaced by polymer. |
| Electrolytic capacitors with solid polymer electrolyte | Oxide layer (aluminum, tantalum or niobium) | Greatly reduced ESR compared with manganese dioxide or non-solid (wet) electrolytics. Higher ripple current ratings. Extended operational life. Stable electrical parameters. Self-healing. Used for smoothing and buffering in smaller power supplies, especially in SMD. | Polarized. Highest leakage current among electrolytics. Higher price than non-solid or manganese dioxide types. Voltage limited to about 100 V. Explodes when voltage, current or slew rates are exceeded, or under reverse voltage. |
| Supercapacitors (double-layer capacitors) | Helmholtz double layer plus faradaic pseudo-capacitance | Energy density typically tens to hundreds of times greater than conventional electrolytics. More comparable to batteries than to other capacitors. Large capacitance-to-volume ratio. Relatively low ESR. Capacitances of thousands of farads. RAM memory backup. Temporary power during battery replacement. Rapidly absorbs/delivers much larger currents than batteries. Hundreds of thousands of charge/discharge cycles. Hybrid vehicles. Recuperation. | Polarized. Low operating voltage per cell (stacked cells provide higher operating voltage). Relatively high cost. |
| Lithium-ion capacitors | Helmholtz double layer plus faradaic pseudo-capacitance; anode doped with lithium ions | Higher operating voltage. Higher energy density than common EDLCs, but smaller than lithium-ion batteries (LIB). No thermal runaway reactions. | Polarized. Low operating voltage per cell (stacked cells provide higher operating voltage). Relatively high cost. |
| Air-gap capacitors | Air | Low dielectric losses. Used for resonating HF circuits for high-power HF welding. | Physically large. Relatively low capacitance. |
| Vacuum capacitors | Vacuum | Extremely low losses. Used for high-voltage, high-power RF applications, such as transmitters and induction heating. Self-healing if arc-over current is limited. | Very high cost. Fragile. Large. Relatively low capacitance. |
| SF6-gas-filled capacitors | SF6 gas | High precision. Extremely low losses. Very high stability. Rated voltages up to 1,600 kV. Used as capacitance standard in measuring bridge circuits. | Very high cost |
| Metallized mica (silver mica) capacitors | Mica | Very high stability. No aging. Low losses. Used for HF and low-VHF RF circuits and as capacitance standard in measuring bridge circuits. Mostly replaced by Class 1 ceramic capacitors | Higher cost than Class 1 ceramic capacitors |
| Glass capacitors | Glass | Better stability and frequency behavior than silver mica. Ultra-reliable, ultra-stable, resistant to nuclear radiation. Operating temperature −75 °C to +200 °C, even short overexposure to +250 °C. | Higher cost than Class 1 ceramic |
| Integrated capacitors | Oxide-nitride-oxide (ONO) | Thin (down to 100 µm). Smaller footprint than most MLCCs. Low ESL. Very high stability up to 200 °C. High reliability | Customized production |
| Air-gap tuning capacitors | Air | Circular or various logarithmic cuts of the rotor electrode for different capacitance curves. Split rotor or stator cut for symmetric adjustment. Ball-bearing axis for noise-reduced adjustment. For high-end professional devices. | Large dimensions. High cost. |
| Vacuum tuning capacitors | Vacuum | Extremely low losses. Used for high-voltage, high-power RF applications, such as transmitters and induction heating. Self-healing if arc-over current is limited. | Very high cost. Fragile. Large dimensions. |
| SF6-gas-filled tuning capacitors | SF6 gas | Extremely low losses. Used for very high-voltage, high-power RF applications. | Very high cost, fragile, large dimensions |
| Air-gap trimmer capacitors | Air | Mostly replaced by semiconductive variable capacitance diodes | High cost |
| Ceramic trimmer capacitors | Class 1 ceramic | Linear and stable frequency behavior over a wide temperature range | High cost |
Electrical characteristics
Discrete capacitors deviate from the ideal capacitor. An ideal capacitor only stores and releases electrical energy, with no dissipation. Capacitor components have losses and parasitic inductive parts. These imperfections in material and construction can have positive implications, such as the linear frequency and temperature behavior of Class 1 ceramic capacitors. Negative implications include the non-linear, voltage-dependent capacitance of Class 2 ceramic capacitors or the insufficient dielectric insulation of capacitors, leading to leakage currents.
All properties can be defined and specified by a series-equivalent circuit composed of an idealized capacitance and additional electrical components that model all losses and inductive parameters of a capacitor. In this series-equivalent circuit the electrical characteristics are defined by:
- C, the capacitance of the capacitor
- Rinsul, the insulation resistance of the dielectric, not to be confused with the insulation of the housing
- Rleak, the resistance representing the leakage current of the capacitor
- RESR, the equivalent series resistance which summarizes all ohmic losses of the capacitor, usually abbreviated as "ESR"
- LESL, the equivalent series inductance which is the effective self-inductance of the capacitor, usually abbreviated as "ESL".
Using a series equivalent circuit instead of a parallel equivalent circuit is specified by IEC/EN 60384-1.
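As a minimal sketch of how these elements combine, the following Python snippet (all component values assumed purely for illustration) lumps the insulation/leakage resistance into a single resistor in parallel with the ideal capacitance, with ESR and ESL in series:

```python
import cmath
import math

def series_equivalent_z(f, c, r_esr, l_esl, r_leak):
    """Complex impedance of a simplified series-equivalent circuit.

    The insulation/leakage resistance is lumped into one resistor in
    parallel with the ideal capacitance; ESR and ESL are in series.
    f in Hz, c in farads, resistances in ohms, inductance in henries.
    """
    w = 2 * math.pi * f
    z_c = 1 / (1j * w * c)                  # ideal capacitor
    z_cp = (z_c * r_leak) / (z_c + r_leak)  # capacitor in parallel with R_leak
    return r_esr + 1j * w * l_esl + z_cp    # plus series ESR and ESL

# Example: a hypothetical 100 uF capacitor at 120 Hz mains ripple
z = series_equivalent_z(f=120, c=100e-6, r_esr=0.5, l_esl=20e-9, r_leak=1e6)
print(abs(z), math.degrees(cmath.phase(z)))  # ~13.27 ohm at ~-87.8 degrees
```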
Standard capacitance values and tolerances
The rated capacitance CR or nominal capacitance CN is the value for which the capacitor has been designed. Actual capacitance depends on the measured frequency and ambient temperature. Standard measuring conditions are a low-voltage AC measuring method at a temperature of 20 °C with frequencies of
- 100 kHz, 1 MHz (preferred) or 10 MHz for non-electrolytic capacitors with CR ≤ 1 nF
- 1 kHz or 10 kHz for non-electrolytic capacitors with 1 nF < CR ≤ 10 μF
- 100/120 Hz for electrolytic capacitors
- 50/60 Hz or 100/120 Hz for non-electrolytic capacitors with CR > 10 μF
For supercapacitors, a voltage drop method is applied to measure the capacitance value.
Capacitors are available in geometrically increasing preferred values (E series standards) specified in IEC/EN 60063. According to the number of values per decade, these are called the E3, E6, E12, E24, etc. series. The range of units used to specify capacitor values spans from picofarad (pF) through nanofarad (nF) and microfarad (µF) to farad (F). Millifarad and kilofarad are uncommon.
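To illustrate the geometric spacing, a short Python snippet can generate the ideal values behind an E series. Note that the standardized IEC 60063 tables round these numbers and deviate in a few places (for example 2.7 and 8.2 in E12/E24), so the published tables remain authoritative:

```python
def e_series_geometric(n):
    """Ideal geometrically spaced preferred values, n per decade.

    This is the basis of the E3/E6/E12/E24... series; the standardized
    values are rounded and occasionally deviate for historical reasons.
    """
    return [round(10 ** (k / n), 2) for k in range(n)]

print(e_series_geometric(12))
# [1.0, 1.21, 1.47, 1.78, 2.15, 2.61, 3.16, 3.83, 4.64, 5.62, 6.81, 8.25]
```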
The percentage of allowed deviation from the rated value is called tolerance. The actual capacitance value should be within its tolerance limits, or it is out of specification. IEC/EN 60062 specifies a letter code for each tolerance.
| E series | Tolerance (CR > 10 pF) | Letter code | Tolerance (CR < 10 pF) | Letter code |
|---|---|---|---|---|
| E96 | 1% | F | 0.1 pF | B |
| E48 | 2% | G | 0.25 pF | C |
| E24 | 5% | J | 0.5 pF | D |
| E12 | 10% | K | 1 pF | F |
| E6 | 20% | M | 2 pF | G |
The required tolerance is determined by the particular application. The narrow tolerances of E24 to E96 are used for high-quality circuits such as precision oscillators and timers. General applications such as non-critical filtering or coupling circuits employ E12 or E6. Electrolytic capacitors, which are often used for filtering and bypassing, mostly have a tolerance of ±20% and need to conform to E6 (or E3) series values.
Capacitance typically varies with temperature, and the different dielectrics show great differences in temperature sensitivity. The temperature coefficient is expressed in parts per million (ppm) per degree Celsius for Class 1 ceramic capacitors, or in % over the total temperature range for all others.
| Type of capacitor, dielectric | Temperature coefficient | Application temperature range |
|---|---|---|
| Ceramic capacitor Class 1 | ±30 ppm/K (±0.5%) | −55 to +125 °C |
| Ceramic capacitor Class 2 (e.g. X7R) | ±15% | −55 to +125 °C |
| Ceramic capacitor Class 2 (e.g. Y5V) | +22% / −82% | −30 to +85 °C |
| Film capacitor, polypropylene (PP) | ±2.5% | −55 to +85/105 °C |
| Film capacitor, polyester (PET) | +5% | −55 to +125/150 °C |
| Film capacitor, polyphenylene sulfide (PPS) | ±1.5% | −55 to +150 °C |
| Film capacitor, polyethylene naphthalate (PEN) | ±5% | −40 to +125/150 °C |
| Film capacitor, polytetrafluoroethylene (PTFE) | ? | −40 to +130 °C |
| Metallized paper capacitor (impregnated) | ±10% | −25 to +85 °C |
| Aluminum electrolytic capacitor | ±20% | −40 to +85/105/125 °C |
| Tantalum electrolytic capacitor | ±20% | −40 to +125 °C |
Most discrete capacitor types show greater or lesser capacitance changes with increasing frequency. The permittivity of class 2 ceramic and plastic film dielectrics diminishes with rising frequency, so their capacitance value decreases with increasing frequency. This phenomenon is related to dielectric relaxation, in which the time constant of the electrical dipoles is the reason for the frequency dependence of permittivity.
For electrolytic capacitors with non-solid electrolyte, mechanical motion of the ions occurs. Their movability is limited, so at higher frequencies not all areas of the roughened anode structure are covered with charge-carrying ions. The more roughened the anode structure, the more the capacitance value decreases with increasing frequency. Low-voltage types with highly roughened anodes display a capacitance at 100 kHz of approximately 10 to 20% of the value measured at 100 Hz.
Capacitance may also change with applied voltage. This effect is most prevalent in class 2 ceramic capacitors: the permittivity of their ferroelectric material depends on the applied voltage, and higher applied voltage lowers the permittivity. The capacitance can drop to 80% of the value measured with the standardized measuring voltage of 0.5 or 1.0 V. This behavior is a small source of non-linearity in low-distortion filters and other analog applications; in audio applications it can cause harmonic distortion (measured as THD).
Film capacitors and electrolytic capacitors have no significant voltage dependence.
Rated and category voltage
The voltage at which the dielectric becomes conductive is called the breakdown voltage, and is given by the product of the dielectric strength and the separation between the electrodes. The dielectric strength depends on temperature, frequency, shape of the electrodes, etc. Because a breakdown in a capacitor normally is a short circuit and destroys the component, the operating voltage is lower than the breakdown voltage. The operating voltage is specified such that the voltage may be applied continuously throughout the life of the capacitor.
In IEC/EN 60384-1 the allowed operating voltage is called "rated voltage" or "nominal voltage". The rated voltage (UR) is the maximum DC voltage or peak pulse voltage that may be applied continuously at any temperature within the rated temperature range.
The voltage proof of nearly all capacitors decreases with increasing temperature. Some applications require a higher temperature range; lowering the voltage applied at higher temperatures maintains safety margins. For some capacitor types, the IEC standard therefore specifies a second, temperature-derated voltage for a higher temperature range, the "category voltage". The category voltage (UC) is the maximum DC voltage or peak pulse voltage that may be applied continuously to a capacitor at any temperature within the category temperature range.
Impedance
In general, a capacitor is seen as a storage component for electric energy. But this is only one capacitor function. A capacitor can also act as an AC resistor. In many cases the capacitor is used as a decoupling capacitor to filter or bypass undesired AC frequencies to ground. Other applications use capacitors for capacitive coupling of AC signals, where the dielectric serves only to block DC. For such applications the AC resistance is as important as the capacitance value.
The frequency dependent AC resistance is called impedance and is the complex ratio of the voltage to the current in an AC circuit. Impedance extends the concept of resistance to AC circuits and possesses both magnitude and phase at a particular frequency. This is unlike resistance, which has only magnitude.
In polar form, $Z = |Z| e^{j\theta}$: the magnitude $|Z|$ represents the ratio of the voltage amplitude to the current amplitude, $j$ is the imaginary unit, and the argument $\theta$ gives the phase difference between voltage and current.
In capacitor data sheets, only the impedance magnitude |Z| is specified, simply written as "Z". In Cartesian form the impedance is
$$Z = R_{\mathrm{ESR}} + jX.$$
As shown in the capacitor's series-equivalent circuit, the component comprises an ideal capacitor $C$, an inductance $L_{\mathrm{ESL}}$ and a resistor $R_{\mathrm{ESR}}$. The total reactance at the angular frequency $\omega = 2\pi f$ is given by the addition of the capacitive reactance $X_C = -\tfrac{1}{\omega C}$ and the inductive reactance $X_L = \omega L_{\mathrm{ESL}}$:
$$X = X_C + X_L = \omega L_{\mathrm{ESL}} - \frac{1}{\omega C}.$$
To calculate the impedance magnitude, the resistance is added geometrically:
$$|Z| = \sqrt{R_{\mathrm{ESR}}^2 + X^2}.$$
The impedance is a measure of the capacitor's ability to pass alternating currents. In this sense it can be used like Ohm's law,
$$U = |Z| \cdot I,$$
to calculate either the peak or the effective value of the current or the voltage.
In the special case of resonance, in which the two reactances have the same magnitude ($|X_C| = X_L$), the impedance is determined only by $R_{\mathrm{ESR}}$.
Datasheets often show typical impedance curves for the different capacitance values. With increasing frequency the impedance decreases to a minimum; the lower the impedance, the more easily alternating currents pass through the capacitor. At the point of resonance, where |XC| equals XL, the capacitor has its lowest impedance value, determined only by the ESR. At frequencies above resonance the impedance increases again due to the ESL, and the capacitor behaves as an inductance. Higher capacitance values fit lower frequencies better, while lower capacitance values fit higher frequencies better.
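A small Python sketch with assumed values for a hypothetical 10 µF capacitor illustrates this impedance minimum at the self-resonant frequency:

```python
import math

c, esr, esl = 10e-6, 0.05, 15e-9  # assumed: 10 uF, 50 mOhm ESR, 15 nH ESL

f0 = 1 / (2 * math.pi * math.sqrt(esl * c))  # self-resonant frequency
print(f"self-resonant frequency: {f0 / 1e3:.0f} kHz")  # ~411 kHz

for f in (f0 / 100, f0 / 10, f0, f0 * 10, f0 * 100):
    w = 2 * math.pi * f
    x = w * esl - 1 / (w * c)   # net reactance X_L + X_C
    z = math.hypot(esr, x)      # |Z| = sqrt(ESR^2 + X^2)
    print(f"{f / 1e3:10.1f} kHz  |Z| = {z * 1000:10.2f} mOhm")
```

Below the self-resonant frequency the capacitive term dominates and |Z| falls with frequency; at resonance only the ESR remains; above it the ESL term takes over and |Z| rises again.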
Aluminum electrolytic capacitors have relatively good decoupling properties in the lower frequency range, up to about 1 MHz, due to their large capacitance values. This is the reason for using electrolytic capacitors in standard or switched-mode power supplies behind the rectifier for smoothing applications.
Ceramic and film capacitors, with their smaller capacitance values, are suitable for higher frequencies up to several hundred MHz. They also have significantly lower parasitic inductance due to their construction with end-surface contacting of the electrodes, making them suitable for higher-frequency applications. To cover a broad range of frequencies, an electrolytic capacitor is often connected in parallel with a ceramic or film capacitor.
Many new developments are targeted at reducing parasitic inductance (ESL). This increases the resonance frequency of the capacitor and, for example, can follow the constantly increasing switching speed of digital circuits. Miniaturization, especially in the SMD multilayer ceramic chip capacitors (MLCC), increases the resonance frequency. Parasitic inductance is further lowered by placing the electrodes on the longitudinal side of the chip instead of the lateral side. The "face-down" construction associated with multi-anode technology in tantalum electrolytic capacitors further reduced ESL. Capacitor families such as the so-called MOS capacitor or silicon capacitors offer solutions when capacitors at frequencies up to the GHz range are needed.
Inductance (ESL) and self-resonant frequency
ESL in industrial capacitors is mainly caused by the leads and internal connections used to connect the capacitor plates to the outside world. Large capacitors tend to have higher ESL than small ones, because the distances to the plates are longer and every millimetre of conductor adds inductance.
For any discrete capacitor, there is a frequency above DC at which it ceases to behave as a pure capacitor. The frequency at which the capacitive and inductive reactances are equal in magnitude, $f_0 = \frac{1}{2\pi\sqrt{L_{\mathrm{ESL}} C}}$, is called the self-resonant frequency. It is the lowest frequency at which the impedance passes through a minimum, and for any AC application it is the highest frequency at which the capacitor can be used as a capacitive component.
This is critically important for decoupling high-speed logic circuits from the power supply. The decoupling capacitor supplies transient current to the chip. Without decouplers, the IC demands current faster than the connection to the power supply can supply it, as parts of the circuit rapidly switch on and off. To counter this potential problem, circuits frequently use multiple bypass capacitors—small (100 nF or less) capacitors rated for high frequencies, a large electrolytic capacitor rated for lower frequencies and occasionally, an intermediate value capacitor.
Ohmic losses, ESR, dissipation factor, and quality factor
The summarized losses in discrete capacitors are ohmic AC losses. DC losses are specified as "leakage current" or "insulating resistance" and are negligible for an AC specification. AC losses are non-linear, possibly depending on frequency, temperature, age or humidity. The losses result from two physical conditions:
- line losses including internal supply line resistances, the contact resistance of the electrode contact, line resistance of the electrodes, and in "wet" aluminum electrolytic capacitors and especially supercapacitors, the limited conductivity of liquid electrolytes and
- dielectric losses from dielectric polarization.
The largest share of these losses in larger capacitors is usually the frequency-dependent ohmic dielectric losses. For smaller components, especially for wet electrolytic capacitors, the conductivity of liquid electrolytes may exceed dielectric losses. To measure these losses, the measurement frequency must be set. Since commercially available components offer capacitance values covering 15 orders of magnitude, ranging from pF (10⁻¹² F) to some 1,000 F in supercapacitors, it is not possible to capture the entire range with only one frequency. IEC 60384-1 states that ohmic losses should be measured at the same frequency used to measure capacitance. These are:
- 100 kHz, 1 MHz (preferred) or 10 MHz for non-electrolytic capacitors with CR ≤ 1 nF
- 1 kHz or 10 kHz for non-electrolytic capacitors with 1 nF < CR ≤ 10 μF
- 100/120 Hz for electrolytic capacitors
- 50/60 Hz or 100/120 Hz for non-electrolytic capacitors with CR > 10 μF
Capacitors with higher ripple current loads, such as electrolytic capacitors, are specified with equivalent series resistance ESR. ESR can be shown as an ohmic part in the above vector diagram. ESR values are specified in datasheets per individual type.
The losses of film capacitors and some class 2 ceramic capacitors are mostly specified with the dissipation factor tan δ. These capacitors have smaller losses than electrolytic capacitors and are mostly used at higher frequencies, up to some hundred MHz. The numeric value of the dissipation factor, measured at the same frequency, is independent of the capacitance value and can be specified for a capacitor series with a range of capacitances. The dissipation factor is the ratio of the ESR to the magnitude of the reactance, and corresponds to the angle δ between the imaginary axis and the impedance vector:
$$\tan\delta = \frac{R_{\mathrm{ESR}}}{|X|}.$$
If the inductance $L_{\mathrm{ESL}}$ is small, the dissipation factor can be approximated as
$$\tan\delta \approx \omega C \cdot R_{\mathrm{ESR}}.$$
Capacitors with very low losses, such as ceramic Class 1 and Class 2 capacitors, specify resistive losses with a quality factor (Q). Ceramic Class 1 capacitors are especially suitable for LC resonant circuits with frequencies up to the GHz range and for precise high-pass and low-pass filters. For an electrically resonant system, Q represents the effect of electrical resistance and characterizes a resonator's bandwidth $B$ relative to its center or resonant frequency $f_0$. Q is defined as the reciprocal of the dissipation factor:
$$Q = \frac{1}{\tan\delta} = \frac{f_0}{B}.$$
For resonant circuits, a high Q value is a mark of the quality of the resonance.
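As a quick numeric illustration of the relation Q = 1/tan δ (both figures assumed, not taken from any datasheet):

```python
f0 = 10e6           # assumed resonant frequency of an LC circuit, in Hz
tan_delta = 1e-4    # assumed dissipation factor of a Class 1 ceramic capacitor
q = 1 / tan_delta   # quality factor: Q = 1 / tan(delta)
bandwidth = f0 / q  # resonator bandwidth: B = f0 / Q
print(q, bandwidth)  # 10000.0 1000.0 -> a 1 kHz-wide resonance at 10 MHz
```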
Limiting current loads
A capacitor can act as an AC resistor, coupling AC voltage and AC current between two points. Every AC current flowing through a capacitor generates heat inside the capacitor body. The dissipated power is caused by the ESR and is proportional to the square of the effective (RMS) current:
$$P = I_{\mathrm{RMS}}^2 \cdot R_{\mathrm{ESR}}.$$
The same power loss can be written with the dissipation factor as
$$P = U^2 \cdot \omega C \cdot \tan\delta,$$
where $U$ is the AC voltage across the capacitor.
The internally generated heat must be dissipated to the ambient. The temperature of the capacitor, established by the balance between heat produced and heat dissipated, must not exceed the capacitor's maximum specified temperature. Hence, the ESR or dissipation factor sets a limit on the maximum power (AC load, ripple current, pulse load, etc.) for which a capacitor is specified.
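A minimal example of this heating estimate, with assumed values:

```python
i_rms = 1.5  # A, assumed ripple current
esr = 0.08   # ohm, assumed equivalent series resistance
p_loss = i_rms ** 2 * esr  # P = I_RMS^2 * ESR
print(f"{p_loss:.3f} W")   # 0.180 W of heat to be dissipated to the ambient
```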
AC currents may be:
- a ripple current—an effective (RMS) AC current coming from an AC voltage superimposed on a DC bias,
- a pulse current—an AC peak current coming from a voltage peak, or
- an AC current—an effective (RMS) sinusoidal current
Ripple and AC currents mainly warm the capacitor body. The internal temperature generated by these currents influences the breakdown voltage of the dielectric: higher temperatures lower the voltage proof of all capacitors. In wet electrolytic capacitors, higher temperatures force the evaporation of the electrolyte, shortening the lifetime of the capacitor. In film capacitors, higher temperatures may shrink the plastic film, changing the capacitor's properties.
Pulse currents, especially in metallized film capacitors, heat the contact areas between the end spray (schoopage) and the metallized electrodes. This may degrade the contact to the electrodes, increasing the dissipation factor.
For safe operation, the maximal temperature generated by any AC current flow through the capacitor is a limiting factor, which in turn limits AC load, ripple current, pulse load, etc.
A "ripple current" is the RMS value of a superimposed AC current of any frequency and any waveform of the current curve for continuous operation at a specified temperature. It arises mainly in power supplies (including switched-mode power supplies) after rectifying an AC voltage and flows as charge and discharge current through the decoupling or smoothing capacitor. The "rated ripple current" shall not exceed a temperature rise of 3, 5 or 10 °C, depending on the capacitor type, at the specified maximum ambient temperature.
Ripple current generates heat within the capacitor body due to the ESR of the capacitor. The ESR is composed of the dielectric losses caused by the changing field strength in the dielectric and the losses resulting from the slightly resistive supply lines or the electrolyte, and it depends on frequency and temperature. For ceramic and film capacitors, ESR generally decreases with increasing temperature but rises at higher frequencies due to increasing dielectric losses. For electrolytic capacitors, up to roughly 1 MHz, ESR decreases with increasing frequency and temperature.
The types of capacitors used for power applications have a specified rated value for maximum ripple current. These are primarily aluminum electrolytic capacitors, and tantalum as well as some film capacitors and Class 2 ceramic capacitors.
Aluminium electrolytic capacitors, the most common type for power supplies, experience shorter life expectancy at higher ripple currents. Exceeding the limit tends to result in explosive failure.
Tantalum electrolytic capacitors with solid manganese dioxide electrolyte are also limited by ripple current. Exceeding their ripple limits tends to cause shorts and burning components.
For film and ceramic capacitors, normally specified with a loss factor tan δ, the ripple current limit is determined by temperature rise in the body of approximately 10 °C. Exceeding this limit may destroy the internal structure and cause shorts.
The rated pulse load for a given capacitor is limited by the rated voltage, the pulse repetition frequency, the temperature range and the pulse rise time. The pulse rise time $dv/dt$ represents the steepest voltage gradient of the pulse (rise or fall time) and is expressed in volts per µs (V/µs).
The rated pulse rise time also indirectly defines the maximum applicable peak current $I_p$. The peak current is defined as
$$I_p = C \cdot \frac{dv}{dt},$$
where $I_p$ is in A, $C$ in µF, and $dv/dt$ in V/µs.
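A worked example with hypothetical values, following the unit convention above (C in µF, dv/dt in V/µs, Ip in A):

```python
c_uf = 0.1             # assumed capacitance in uF
dv_dt = 100            # assumed pulse steepness in V/us
i_peak = c_uf * dv_dt  # I_p = C * dv/dt
print(i_peak, "A")     # 10.0 A for this hypothetical pulse
```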
The permissible pulse current capacity of a metallized film capacitor generally allows an internal temperature rise of 8 to 10 K.
In the case of metallized film capacitors, pulse load depends on the properties of the dielectric material, the thickness of the metallization and the capacitor's construction, especially the construction of the contact areas between the end spray and metallized electrodes. High peak currents may lead to selective overheating of local contacts between end spray and metallized electrodes which may destroy some of the contacts, leading to increasing ESR.
For metallized film capacitors, so-called pulse tests simulate the pulse load that might occur during an application, according to a standard specification. IEC 60384 part 1 specifies that the test circuit is charged and discharged intermittently. The test voltage corresponds to the rated DC voltage, and the test comprises 10,000 pulses with a repetition frequency of 1 Hz. The pulse stress capacity is expressed as the pulse rise time; the rated pulse rise time is specified as 1/10 of the test pulse rise time.
The pulse load must be calculated for each application. A general rule for calculating the power handling of film capacitors is not available because of vendor-related internal construction details. To prevent the capacitor from overheating the following operating parameters have to be considered:
- peak current per µF
- pulse rise or fall time dv/dt in V/µs
- relative duration of charge and discharge periods (pulse shape)
- maximum pulse voltage (peak voltage)
- peak reverse voltage
- repetition frequency of the pulse
- ambient temperature
- heat dissipation (cooling)
Higher pulse rise times are permitted for pulse voltage lower than the rated voltage.
An AC load can only be applied to a non-polarized capacitor. Capacitors for AC applications are primarily film capacitors, metallized paper capacitors, ceramic capacitors and bipolar electrolytic capacitors.
The rated AC load for an AC capacitor is the maximum sinusoidal effective AC current (rms) which may be applied continuously to a capacitor within the specified temperature range. In the datasheets the AC load may be expressed as
- rated AC voltage at low frequencies,
- rated reactive power at intermediate frequencies,
- reduced AC voltage or rated AC current at high frequencies.
The rated AC voltage for film capacitors is generally calculated so that an internal temperature rise of 8 to 10 K is the allowed limit for safe operation. Because dielectric losses increase with increasing frequency, the specified AC voltage has to be derated at higher frequencies. Datasheets for film capacitors specify special curves for derating AC voltages at higher frequencies.
If film capacitors or ceramic capacitors only have a DC specification, the peak value of the AC voltage applied has to be lower than the specified DC voltage.
AC loads can occur in AC motor-run capacitors, in voltage doublers, in snubbers, in lighting ballasts and in power factor correction (PFC) for phase shifting to improve transmission network stability and efficiency, which is one of the most important applications for large power capacitors. These mostly large PP film or metallized paper capacitors are limited by their rated reactive power VAr.
Bipolar electrolytic capacitors, which may be used with an AC voltage, are specified with a rated ripple current.
Insulation resistance and self-discharge constant
The resistance of the dielectric is finite, leading to some level of DC "leakage current" that causes a charged capacitor to lose charge over time. For ceramic and film capacitors, this resistance is called "insulation resistance Rins". This resistance is represented by the resistor Rins in parallel with the capacitor in the series-equivalent circuit of capacitors. Insulation resistance must not be confused with the outer isolation of the component with respect to the environment.
The time curve of self-discharge over the insulation resistance, with decreasing capacitor voltage, follows the formula
$$u(t) = U_0 \cdot e^{-t/\tau_s},$$
with the stored DC voltage $U_0$ and the self-discharge constant
$$\tau_s = R_{\mathrm{ins}} \cdot C.$$
Thus, after $t = \tau_s$ the voltage drops to 37% of the initial value.
The self-discharge constant is an important parameter for the insulation of the dielectric between the electrodes of ceramic and film capacitors; for example, a capacitor can be used as the time-determining component of a time relay or for storing a voltage value, as in sample-and-hold circuits or operational amplifiers.
Class 1 ceramic capacitors have an insulation resistance of at least 10 GΩ, while Class 2 capacitors have at least 4 GΩ or a self-discharge constant of at least 100 s. Plastic film capacitors typically have an insulation resistance of 6 to 12 GΩ; for capacitors in the µF range, this corresponds to a self-discharge constant of about 2,000–4,000 s.
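A short sketch using the Class 1 lower-bound figure above shows the size of the resulting time constant:

```python
import math

r_ins = 10e9     # ohm, insulation resistance (Class 1 ceramic lower bound)
c = 1e-6         # F, assumed capacitance
tau = r_ins * c  # self-discharge constant in seconds
print(tau)       # 10000.0 s, roughly 2.8 hours

# Voltage remaining after one hour from an initial 10 V: u(t) = U0*exp(-t/tau)
print(round(10 * math.exp(-3600 / tau), 2))  # ~6.98 V
```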
The insulation resistance, and thus the self-discharge constant, can be reduced if humidity penetrates into the winding. Both are strongly temperature dependent and decrease with increasing temperature.
For electrolytic capacitors, the insulation resistance of the dielectric is instead specified as a "leakage current". This DC current is represented by the resistor Rleak in parallel with the capacitor in the series-equivalent circuit of electrolytic capacitors. This resistance between the terminals of a capacitor is also finite; Rleak is lower for electrolytics than for ceramic or film capacitors.
The leakage current includes all weak imperfections of the dielectric caused by unwanted chemical processes and mechanical damage. It is the DC current that can pass through the dielectric after a voltage is applied. It depends on the interval without voltage applied (storage time), the thermal stress from soldering, the voltage applied, the temperature of the capacitor, and the measuring time.
The leakage current drops in the first minutes after applying DC voltage. In this period the dielectric oxide layer can self-repair weaknesses by building up new layers. The time required depends generally on the electrolyte. Solid electrolytes drop faster than non-solid electrolytes but remain at a slightly higher level.
The leakage current in non-solid electrolytic capacitors, as well as in manganese dioxide solid tantalum capacitors, decreases with voltage-connected time due to self-healing effects. Although the leakage current of electrolytics is higher than the current flow over the insulation resistance in ceramic or film capacitors, the self-discharge of modern non-solid electrolytic capacitors takes several weeks.
A particular problem with electrolytic capacitors is storage time. Higher leakage current can be the result of longer storage times. These behaviors are limited to electrolytes with a high percentage of water. Organic solvents such as GBL do not have high leakage with longer storage times.
Leakage current is normally measured 2 or 5 minutes after applying rated voltage.
Microphonics
All ferroelectric materials exhibit a piezoelectric effect. Because Class 2 ceramic capacitors use ferroelectric ceramic dielectrics, these types may exhibit an electrical effect called microphonics. Microphonics (microphony) describes how electronic components transform mechanical vibrations into an undesired electrical signal (noise). The dielectric may absorb mechanical forces from shock or vibration by changing thickness and thus the electrode separation, affecting the capacitance, which in turn induces an AC current. The resulting interference is especially problematic in audio applications, potentially causing feedback or unintended recording.
In the reverse microphonic effect, varying the electric field between the capacitor plates exerts a physical force, turning them into an audio speaker. High current impulse loads or high ripple currents can generate audible sound from the capacitor itself, draining energy and stressing the dielectric.
Dielectric absorption (soakage)
Dielectric absorption occurs when a capacitor that has remained charged for a long time discharges only incompletely when briefly discharged. Although an ideal capacitor would reach zero volts after discharge, real capacitors develop a small voltage from time-delayed dipole discharging, a phenomenon that is also called dielectric relaxation, "soakage" or "battery action".
| Type of capacitor | Dielectric absorption |
|---|---|
| Air and vacuum capacitors | Not measurable |
| Class-1 ceramic capacitors, NP0 | 0.6% |
| Class-2 ceramic capacitors, X7R | 2.5% |
| Polypropylene film capacitors (PP) | 0.05 to 0.1% |
| Polyester film capacitors (PET) | 0.2 to 0.5% |
| Polyphenylene sulfide film capacitors (PPS) | 0.05 to 0.1% |
| Polyethylene naphthalate film capacitors (PEN) | 1.0 to 1.2% |
| Tantalum electrolytic capacitors with solid electrolyte | 2 to 3%, up to 10% |
| Aluminum electrolytic capacitors with non-solid electrolyte | 10 to 15% |
| Double-layer capacitors or supercapacitors | data not available |
In many applications of capacitors dielectric absorption is not a problem, but in some applications, such as long-time-constant integrators, sample-and-hold circuits, switched-capacitor analog-to-digital converters and very low-distortion filters, the capacitor must not recover a residual charge after full discharge, so capacitors with low absorption are specified. The voltage generated at the terminals by dielectric absorption may in some cases cause problems in the function of an electronic circuit, or can be a safety risk to personnel. To prevent shocks, most very large capacitors are shipped with shorting wires that need to be removed before use.
Energy density
The capacitance value depends on the dielectric material (ε), the surface area of the electrodes (A) and the distance (d) separating the electrodes, and is given by the formula of a plate capacitor:
$$C = \varepsilon \cdot \frac{A}{d}.$$
The separation of the electrodes and the voltage proof of the dielectric material defines the breakdown voltage of the capacitor. The breakdown voltage is proportional to the thickness of the dielectric.
Consider, theoretically, two capacitors with the same mechanical dimensions and dielectric, but one with half the dielectric thickness. Within the same dimensions, the thinner-dielectric capacitor can accommodate twice the parallel-plate area. It theoretically has 4 times the capacitance of the first capacitor (twice from the halved thickness and twice from the doubled area) but half the voltage proof.
Since the energy $W$ stored in a capacitor is given by
$$W = \frac{1}{2} C U^2,$$
a capacitor having a dielectric half as thick as another has 4 times the capacitance but half the voltage proof, yielding an equal maximum energy density: $\tfrac{1}{2}(4C)(U/2)^2 = \tfrac{1}{2}CU^2$.
Therefore, dielectric thickness does not affect energy density within a capacitor of fixed overall dimensions. Using a few thick layers of dielectric can support a high voltage, but low capacitance, while thin layers of dielectric produce a low breakdown voltage, but a higher capacitance.
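The stored-energy figures in the comparison table below can be reproduced directly from W = ½CU²:

```python
def stored_energy_mws(c_uf, u_volts):
    """W = 1/2 * C * U^2, returned in mW*s to match the table units."""
    return 0.5 * (c_uf * 1e-6) * u_volts ** 2 * 1e3

print(stored_energy_mws(4700, 10))  # 235.0  (electrolytic, 4700 uF / 10 V)
print(stored_energy_mws(1.2, 250))  # 37.5   (PP film, 1.2 uF / 250 V)
```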
This assumes that neither the electrode surfaces nor the permittivity of the dielectric change with the voltage proof. A simple comparison with two existing capacitor series can show whether reality matches theory. The comparison is easy, because the manufacturers use standardized case sizes or boxes for different capacitance/voltage values within a series.
Electrolytic capacitors: NCC, KME series, Ø D × H = 16.5 mm × 25 mm. Metallized PP film capacitors: KEMET PHE 450 series, W × H × L = 10.5 mm × 20.5 mm × 31.5 mm.

| Electrolytic: capacitance/voltage | Stored energy | PP film: capacitance/voltage | Stored energy |
|---|---|---|---|
| 4700 µF / 10 V | 235 mW·s | 1.2 µF / 250 V | 37.5 mW·s |
| 2200 µF / 25 V | 688 mW·s | 0.68 µF / 400 V | 54.4 mW·s |
| 220 µF / 100 V | 1100 mW·s | 0.39 µF / 630 V | 77.4 mW·s |
| 22 µF / 400 V | 1760 mW·s | 0.27 µF / 1000 V | 135 mW·s |
In reality, modern capacitor series do not fit the theory. For electrolytic capacitors, the sponge-like rough surface of the anode foil gets smoother with higher voltages, decreasing the surface area of the anode. But because the stored energy increases with the square of the voltage, and the anode surface decreases less than proportionally to the increase in voltage proof, the energy density clearly increases. For film capacitors, the permittivity changes with dielectric thickness and other mechanical parameters, so the deviation from the theory has other reasons.
The capacitors from the table can also be compared with a supercapacitor, the capacitor family with the highest energy density. For this, a 25 F/2.3 V capacitor with dimensions D × H = 16 mm × 26 mm from the Maxwell HC series is compared with the electrolytic capacitor of approximately equal size in the table. This supercapacitor has roughly 5000 times the capacitance of the 4700 µF/10 V electrolytic capacitor, but ¼ of the voltage, and about 66,000 mW·s (0.018 Wh) of stored electrical energy, an energy density approximately 100 times higher (40 to 280 times) than that of the electrolytic capacitor.
Long-time behavior, aging
Electrical parameters of capacitors may change over time, during storage and during application. The reasons for parameter changes differ: they may be a property of the dielectric, environmental influences, chemical processes, or drying-out effects for non-solid materials.
In ferroelectric Class 2 ceramic capacitors, capacitance decreases over time. This behavior is called "aging". Aging occurs in ferroelectric dielectrics, where domains of polarization in the dielectric contribute to the total polarization. Degradation of polarized domains decreases permittivity and therefore capacitance over time. Aging follows a logarithmic law, which defines the decrease of capacitance as a constant percentage per time decade after the soldering recovery time at a defined temperature, for example in the period from 1 to 10 hours at 20 °C. As the law is logarithmic, the percentage loss of capacitance between 1 h and 100 h is twice the per-decade loss, between 1 h and 1,000 h three times, and so on. Aging is fastest near the beginning, and the absolute capacitance value stabilizes over time.
The rate of aging of Class 2 ceramic capacitors depends mainly on its materials. Generally, the higher the temperature dependence of the ceramic, the higher the aging percentage. The typical aging of X7R ceramic capacitors is about 2.5% per decade. The aging rate of Z5U ceramic capacitors is significantly higher and can be up to 7% per decade.
The aging process of Class 2 ceramic capacitors may be reversed by heating the component above the Curie point.
Class 1 ceramic capacitors and film capacitors do not have ferroelectric-related aging. Environmental influences such as higher temperature, high humidity and mechanical stress can, over a longer period, lead to a small irreversible change in the capacitance value, sometimes also called aging.
The change of capacitance for P 100 and N 470 Class 1 ceramic capacitors is lower than 1%; for capacitors with N 750 to N 1500 ceramics it is ≤ 2%. Film capacitors may lose capacitance due to self-healing processes or gain it due to humidity influences. Typical changes over 2 years at 40 °C are, for example, ±3% for PE film capacitors and ±1% for PP film capacitors.
Electrolytic capacitors with non-solid electrolyte age as the electrolyte evaporates. This evaporation depends on the temperature and the current load the capacitors experience. Electrolyte escape influences capacitance and ESR: capacitance decreases and ESR increases over time. In contrast to ceramic, film and electrolytic capacitors with solid electrolytes, "wet" electrolytic capacitors reach a specified "end of life" when a specified maximum change of capacitance or ESR is exceeded. End of life, "load life" or "lifetime" can be estimated either by formula, by diagrams, or roughly by the so-called "10-degree law". A typical specification for an electrolytic capacitor states a lifetime of 2,000 hours at 85 °C, doubling for every 10 degrees lower temperature, which gives a lifespan of approximately 15 years at room temperature.
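A sketch of this 10-degree rule reproduces the roughly 15-year figure from the example above:

```python
def lifetime_hours(l0, t_rated, t_actual):
    """Lifetime doubles for every 10 degrees C below the rated temperature."""
    return l0 * 2 ** ((t_rated - t_actual) / 10)

hours = lifetime_hours(2000, 85, 25)  # 2000 h at 85 C, operated at 25 C
print(hours, hours / 8760)            # 128000.0 h, about 14.6 years
```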
Supercapacitors also experience electrolyte evaporation over time; lifetime estimation is similar to that of wet electrolytic capacitors. In addition to temperature, voltage and current load influence lifetime. Lower voltage than rated voltage, lower current loads and lower temperatures extend the lifetime.
Capacitors are reliable components with low failure rates, achieving life expectancies of decades under normal conditions. Most capacitors pass a test at the end of production similar to a "burn in", so that early failures are found during production, reducing the number of post-shipment failures.
Reliability for capacitors is usually specified in numbers of Failures In Time (FIT) during the period of constant random failures. FIT is the number of failures that can be expected in one billion (10⁹) component-hours of operation at fixed working conditions (e.g. 1,000 devices for 1 million hours, or 1 million devices for 1,000 hours each, at 40 °C and 0.5 UR). For other conditions of applied voltage, current load, temperature, mechanical influences and humidity, the FIT can be recalculated with terms standardized for industrial or military contexts.
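A minimal example of how a FIT figure relates to observed component-hours (numbers assumed purely for illustration):

```python
failures = 3                         # assumed observed failures
device_hours = 1000 * 1_000_000      # e.g. 1000 devices for 1e6 hours each
fit = failures / device_hours * 1e9  # failures per 10^9 component-hours
mtbf_hours = 1e9 / fit               # corresponding mean time between failures
print(fit, mtbf_hours)               # 3.0 FIT, ~3.3e8 hours
```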
Capacitors may experience changes to electrical parameters due to environmental influences like soldering, mechanical stress (vibration, shock) and humidity. The greatest stress factor is soldering. The heat of the solder bath, especially for SMD capacitors, can change the contact resistance between terminals and electrodes in ceramic capacitors; in film capacitors, the film may shrink, and in wet electrolytic capacitors the electrolyte may boil. A recovery period enables characteristics to stabilize after soldering; some types may require up to 24 hours. Some properties may change irreversibly by a few percent from soldering.
Electrolytic behavior from storage or disuse
Electrolytic capacitors with non-solid electrolyte are "aged" during manufacturing by applying rated voltage at high temperature for a sufficient time to repair all cracks and weaknesses that may have occurred during production. Some electrolytes with a high water content react quite aggressively or even violently with unprotected aluminum. This led to a "storage" or "disuse" problem with electrolytic capacitors manufactured before the 1980s. Chemical processes weaken the oxide layer when these capacitors are not used for too long. New electrolytes with "inhibitors" or "passivators" were developed during the 1980s to solve this problem. As of 2012, the standard storage time of two years at room temperature for electronic components, substantiated by the oxidation of the terminals, is specified for electrolytic capacitors with non-solid electrolytes as well. Special series for 125 °C with organic solvents such as GBL are specified for up to 10 years of storage time while still ensuring, without pre-conditioning, proper electrical behavior.
For antique radio equipment, "pre-conditioning" of older electrolytic capacitors may be recommended. This involves applying the operating voltage for some 10 minutes over a current limiting resistor to the terminals of the capacitor. Applying a voltage through a safety resistor repairs the oxide layers.
Standardization
The tests and requirements to be met by capacitors for use in electronic equipment, for approval as standardized types, are set out in the generic specification IEC/EN 60384-1 and in the following sectional specifications.
- IEC/EN 60384-1 - Fixed capacitors for use in electronic equipment
- IEC/EN 60384-8—Fixed capacitors of ceramic dielectric, Class 1
- IEC/EN 60384-9—Fixed capacitors of ceramic dielectric, Class 2
- IEC/EN 60384-21—Fixed surface mount multilayer capacitors of ceramic dielectric, Class 1
- IEC/EN 60384-22—Fixed surface mount multilayer capacitors of ceramic dielectric, Class 2
- IEC/EN 60384-2—Fixed metallized polyethylene-terephthalate film dielectric d.c. capacitors
- IEC/EN 60384-11—Fixed polyethylene-terephthalate film dielectric metal foil d.c. capacitors
- IEC/EN 60384-13—Fixed polypropylene film dielectric metal foil d.c. capacitors
- IEC/EN 60384-16—Fixed metallized polypropylene film dielectric d.c. capacitors
- IEC/EN 60384-17—Fixed metallized polypropylene film dielectric a.c. and pulse capacitors
- IEC/EN 60384-19—Fixed metallized polyethylene-terephthalate film dielectric surface mount d.c. capacitors
- IEC/EN 60384-20—Fixed metalized polyphenylene sulfide film dielectric surface mount d.c. capacitors
- IEC/EN 60384-23—Fixed metallized polyethylene naphthalate film dielectric chip d.c. capacitors
- IEC/EN 60384-3—Surface mount fixed tantalum electrolytic capacitors with manganese dioxide solid electrolyte
- IEC/EN 60384-4—Aluminium electrolytic capacitors with solid (MnO2) and non-solid electrolyte
- IEC/EN 60384-15—fixed tantalum capacitors with non-solid and solid electrolyte
- IEC/EN 60384-18—Fixed aluminium electrolytic surface mount capacitors with solid (MnO2) and non-solid electrolyte
- IEC/EN 60384-24—Surface mount fixed tantalum electrolytic capacitors with conductive polymer solid electrolyte
- IEC/EN 60384-25—Surface mount fixed aluminium electrolytic capacitors with conductive polymer solid electrolyte
- IEC/EN 60384-26—Fixed aluminium electrolytic capacitors with conductive polymer solid electrolyte
- IEC/EN 62391-1—Fixed electric double-layer capacitors for use in electric and electronic equipment - Part 1: Generic specification
- IEC/EN 62391-2—Fixed electric double-layer capacitors for use in electronic equipment - Part 2: Sectional specification - Electric double-layer capacitors for power application
Capacitor markings
Capacitors, like most other electronic components, carry imprinted markings if enough space is available, indicating the manufacturer, the type, electrical and thermal characteristics, and the date of manufacture. If they are large enough, the capacitor is marked with:
- manufacturer's name or trademark;
- manufacturer's type designation;
- polarity of the terminations (for polarized capacitors);
- rated capacitance;
- tolerance on rated capacitance;
- rated voltage and nature of supply (AC or DC);
- climatic category or rated temperature;
- year and month (or week) of manufacture;
- certification marks of safety standards (for safety EMI/RFI suppression capacitors).
Polarized capacitors have polarity markings: usually a "−" (minus) sign on the side of the negative electrode for electrolytic capacitors, or a stripe or a "+" (plus) sign; see the polarity marking notes below. Also, the negative lead for leaded "wet" e-caps is usually shorter.
Smaller capacitors use a shorthand notation. The most commonly used format is: XYZ J/K/M VOLTS V, where XYZ represents the capacitance (calculated as XY × 10Z pF), the letters J, K or M indicate the tolerance (±5%, ±10% and ±20% respectively) and VOLTS V represents the working voltage.
- 105K 330V implies a capacitance of 10 × 10⁵ pF = 1 µF (K = ±10%) with a working voltage of 330 V.
- 473M 100V implies a capacitance of 47 × 10³ pF = 47 nF (M = ±20%) with a working voltage of 100 V.
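A small decoder for this shorthand (covering only the three tolerance letters mentioned above) might look like this:

```python
import re

def decode_marking(code):
    """Decode the XYZ-plus-tolerance shorthand, e.g. '105K' or '473M'.

    XY are significant digits, Z is the power-of-ten multiplier (in pF),
    and the letter is the tolerance (J = 5%, K = 10%, M = 20%).
    """
    m = re.fullmatch(r"(\d\d)(\d)([JKM])", code)
    if not m:
        raise ValueError(f"unrecognized marking: {code}")
    digits, exponent, letter = m.groups()
    picofarads = int(digits) * 10 ** int(exponent)
    tolerance = {"J": "±5%", "K": "±10%", "M": "±20%"}[letter]
    return picofarads, tolerance

print(decode_marking("105K"))  # (1000000, '±10%') -> 1 uF, ±10%
print(decode_marking("473M"))  # (47000, '±20%')   -> 47 nF, ±20%
```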
Capacitance, tolerance and date of manufacture can be indicated with a short code specified in IEC/EN 60062. Examples of short-marking of the rated capacitance (microfarads): µ47 = 0.47 µF, 4µ7 = 4.7 µF, 47µ = 47 µF
The date of manufacture is often printed in accordance with international standards.
- Version 1: coding with year/week numeral code, "1208" is "2012, week number 8".
- Version 2: coding with year code/month code. The year codes are: "R" = 2003, "S"= 2004, "T" = 2005, "U" = 2006, "V" = 2007, "W" = 2008, "X" = 2009, "A" = 2010, "B" = 2011, "C" = 2012, "D" = 2013, etc. Month codes are: "1" to "9" = Jan. to Sept., "O" = October, "N" = November, "D" = December. "X5" is then "2009, May"
For very small capacitors like MLCC chips no marking is possible. Here only the traceability of the manufacturers can ensure the identification of a type.
As of 2013, capacitors do not use color coding.
Aluminum e-caps with non-solid electrolyte have a polarity marking at the cathode (minus) side. Aluminum, tantalum, and niobium e-caps with solid electrolyte have a polarity marking at the anode (plus) side. Supercapacitors are marked at the minus side.
Rectangular polymer capacitors, tantalum as well as aluminum, have a polarity marking at the anode (plus) side.
Market
Discrete capacitors today are industrial products produced in very large quantities for use in electronic and electrical equipment. Globally, the market for fixed capacitors was estimated at approximately US$18 billion in 2008, for 1,400 billion (1.4 × 10¹²) pieces. This market is dominated by ceramic capacitors, with an estimated one trillion (1 × 10¹²) items per year.
Detailed estimated figures in value for the main capacitor families are:
- Ceramic capacitors—US$8.3 billion (46%);
- Aluminum electrolytic capacitors—US$3.9 billion (22%);
- Film capacitors and Paper capacitors—US$2.6 billion, (15%);
- Tantalum electrolytic capacitors—US$2.2 billion (12%);
- Super capacitors (Double-layer capacitors)—US$0.3 billion (2%); and
- Others like silver mica and vacuum capacitors—US$0.7 billion (3%).
All other capacitor types are negligible in terms of value and quantity compared with the above types.
Apartheid (lit. “aparthood”) was a system of institutionalised racial segregation that existed in South Africa and South-West Africa (now Namibia) from 1948 until the early 1990s. Apartheid was characterised by an authoritarian political culture based on baasskap (or white supremacy), which ensured that South Africa was dominated politically, socially, and economically by the nation’s minority white population. According to this system of social stratification, white citizens had the highest status, followed by Asians and Coloureds, then black Africans. The economic legacy and social effects of apartheid continue to the present day.
Broadly speaking, apartheid was delineated into petty apartheid, which entailed the segregation of public facilities and social events, and grand apartheid, which dictated housing and employment opportunities by race. Prior to the 1940s, some aspects of apartheid had already emerged in the form of minority rule by white South Africans and the socially enforced separation of black Africans from other races, which later extended to pass laws and land apportionment. Apartheid was adopted as a formal policy by the South African government after the ascension of the National Party (NP) during the 1948 general elections.
A codified system of racial stratification began to take form in South Africa under the Dutch Empire in the eighteenth century, although informal segregation was present much earlier due to social cleavages between Dutch colonists and a creolised, ethnically diverse slave population. With the rapid growth and industrialisation of the British Cape Colony, racial policies and laws which had previously been relatively relaxed became increasingly rigid, discriminating specifically against black Africans, in the last decade of the 19th century. The policies of the Boer republics were also racially exclusive; for instance, the Transvaal’s constitution barred black African and Coloured participation in church and state.
The first apartheid law was the Prohibition of Mixed Marriages Act, 1949, followed closely by the Immorality Amendment Act of 1950, which made it illegal for most South African citizens to marry or pursue sexual relationships across racial lines. The Population Registration Act, 1950 classified all South Africans into one of four racial groups based on appearance, known ancestry, socioeconomic status, and cultural lifestyle: “Black”, “White”, “Coloured”, and “Indian”, the last two of which included several sub-classifications. Places of residence were determined by racial classification. Between 1960 and 1983, 3.5 million black Africans were removed from their homes and forced into segregated neighbourhoods as a result of apartheid legislation, in some of the largest mass evictions in modern history. Most of these targeted removals were intended to restrict the black population to ten designated “tribal homelands”, also known as bantustans, four of which became nominally independent states. The government announced that relocated persons would lose their South African citizenship as they were absorbed into the bantustans.
Apartheid sparked significant international and domestic opposition, resulting in some of the most influential global social movements of the twentieth century. It was the target of frequent condemnation in the United Nations and brought about an extensive arms and trade embargo on South Africa. During the 1970s and 1980s, internal resistance to apartheid became increasingly militant, prompting brutal crackdowns by the National Party government and protracted sectarian violence that left thousands dead or in detention. Some reforms of the apartheid system were undertaken, including allowing for Indian and Coloured political representation in parliament, but these measures failed to appease most activist groups.
Between 1987 and 1993, the National Party entered into bilateral negotiations with the African National Congress (ANC), the leading anti-apartheid political movement, for ending segregation and introducing majority rule. In 1990, prominent ANC figures such as Nelson Mandela were released from prison. Apartheid legislation was repealed on 17 June 1991, pending multiracial elections held under a universal suffrage set for April 1994.
Apartheid is an Afrikaans word meaning “separateness”, or “the state of being apart”, literally “apart-hood” (from Afrikaans “-heid”). Its first recorded use was in 1929.
Under the 1806 Cape Articles of Capitulation the new British colonial rulers were required to respect previous legislation enacted under Roman Dutch law and this led to a separation of the law in South Africa from English Common Law and a high degree of legislative autonomy. The governors and assemblies that governed the legal process in the various colonies of South Africa were launched on a different and independent legislative path from the rest of the British Empire.
In the days of slavery, slaves required passes to travel away from their masters. In 1797 the Landdrost and Heemraden of Swellendam and Graaff-Reinet extended pass laws beyond slaves and ordained that all Khoikhoi (designated as Hottentots) moving about the country for any purpose should carry passes. This was confirmed by the British Colonial government in 1809 by the Hottentot Proclamation, which decreed that if a Khoikhoi were to move they would need a pass from their master or a local official. Ordinance No. 49 of 1828 decreed that prospective black immigrants were to be granted passes for the sole purpose of seeking work. These passes were to be issued for Coloureds and Khoikhoi, but not for other Africans, who were still forced to carry passes.
The United Kingdom’s Slavery Abolition Act 1833 (3 & 4 Will. IV c. 73) abolished slavery throughout the British Empire and overrode the Cape Articles of Capitulation. To comply with the act the South African legislation was expanded to include Ordinance 1 in 1835, which effectively changed the status of slaves to indentured labourers. This was followed by Ordinance 3 in 1848, which introduced an indenture system for Xhosa that was little different from slavery. The various South African colonies passed legislation throughout the rest of the nineteenth century to limit the freedom of unskilled workers, to increase the restrictions on indentured workers and to regulate the relations between the races.
In the Cape Colony, which previously had a liberal and multi-racial constitution and a system of franchise open to men of all races, the Franchise and Ballot Act of 1892 raised the property franchise qualification and added an educational element, disenfranchising a disproportionate number of the Cape’s non-white voters. The Glen Grey Act of 1894, instigated by the government of Prime Minister Cecil John Rhodes, limited the amount of land Africans could hold. Similarly, in Natal, the Natal Legislative Assembly Bill of 1894 deprived Indians of the right to vote.
In 1896 the South African Republic brought in two pass laws requiring Africans to carry a badge. Only those employed by a master were permitted to remain on the Rand and those entering a “labour district” needed a special pass.
In 1905 the General Pass Regulations Act denied blacks the vote and limited them to fixed areas, and in 1906 the Asiatic Registration Act of the Transvaal Colony required all Indians to register and carry passes. The latter was repealed by the British government but re-enacted again in 1908.
In 1910, the Union of South Africa was created as a self-governing dominion, which continued the legislative programme: the South Africa Act (1910) enfranchised whites, giving them complete political control over all other racial groups while removing the right of blacks to sit in parliament; the Native Land Act (1913) prevented blacks, except those in the Cape, from buying land outside “reserves”; the Natives in Urban Areas Bill (1918) was designed to force blacks into “locations”; the Urban Areas Act (1923) introduced residential segregation and provided cheap labour for industry led by white people; the Colour Bar Act (1926) prevented black mine workers from practising skilled trades; the Native Administration Act (1927) made the British Crown, rather than paramount chiefs, the supreme head over all African affairs; the Native Land and Trust Act (1936) complemented the 1913 Native Land Act; and, in the same year, the Representation of Natives Act removed previous black voters from the Cape voters’ roll and allowed them to elect three whites to Parliament. One of the first pieces of segregating legislation enacted by Jan Smuts’ United Party government was the Asiatic Land Tenure Bill (1946), which banned land sales to Indians.
The United Party government began to move away from the rigid enforcement of segregationist laws during World War II. Amid fears integration would eventually lead to racial assimilation, the National Party established the Sauer Commission to investigate the effects of the United Party’s policies. The commission concluded that integration would bring about a “loss of personality” for all racial groups.
Election of 1948
South Africa had allowed social custom and law to govern the consideration of multiracial affairs and of the allocation, in racial terms, of access to economic, social, and political status. Most white South Africans, regardless of their own differences, accepted the prevailing pattern. Nevertheless, by 1948 it remained apparent that there were gaps in the social structure, whether legislated or otherwise, concerning the rights and opportunities of nonwhites. The rapid economic development of World War II attracted black migrant workers in large numbers to chief industrial centres, where they compensated for the wartime shortage of white labour. However, this escalated rate of black urbanisation went unrecognised by the South African government, which failed to accommodate the influx with parallel expansion in housing or social services. Overcrowding, increasing crime rates, and disillusionment resulted; urban blacks came to support a new generation of leaders influenced by the principles of self-determination and popular freedoms enshrined in such statements as the Atlantic Charter. Whites reacted negatively to the changes, allowing the Herenigde Nasionale Party (or simply the National Party) to convince a large segment of the voting bloc that the impotence of the United Party in curtailing the evolving position of nonwhites indicated that the organisation had fallen under the influence of Western liberals. Many Afrikaners, white South Africans chiefly of Dutch descent but with early infusions of Germans and French Huguenots who were soon assimilated, also resented what they perceived as disempowerment by an underpaid black workforce and the superior economic power and prosperity of white English speakers. In addition, Jan Smuts, as a strong advocate of the United Nations, lost domestic support when South Africa was criticised for its colour bar and the continued mandate of South West Africa by other UN member states.
Afrikaner nationalists proclaimed that they offered the voters a new policy to ensure continued white domination. This policy was initially expounded from a theory drafted by Hendrik Verwoerd and was presented to the National Party by the Sauer Commission. It called for a systematic effort to organise the relations, rights, and privileges of the races as officially defined through a series of parliamentary acts and administrative decrees. Segregation had thus far been pursued only in major matters, such as separate schools, and local society rather than law had been relied upon to enforce most separation; it was now to be extended to everything. The party gave this policy a name – apartheid (apartness). Apartheid was to be the basic ideological and practical foundation of Afrikaner politics for the next quarter of a century.
The National Party’s election platform stressed that apartheid would preserve a market for white employment in which nonwhites could not compete. On the issues of black urbanisation, the regulation of nonwhite labour, influx control, social security, farm tariffs, and nonwhite taxation, the United Party’s policy remained contradictory and confused. Its traditional bases of support not only took mutually exclusive positions but found themselves increasingly at odds with each other. Smuts’ reluctance to consider South African foreign policy against the mounting tensions of the Cold War also stirred up discontent, while the nationalists promised to purge the state and public service of communist sympathisers.
First to desert the United Party were Afrikaner farmers, who wished to see a change in influx control due to problems with squatters, as well as higher prices for their maize and other produce in the face of the mine-owners’ demand for cheap food policies. Always identified with the affluent and capitalist, the party also failed to appeal to its working-class constituents. Populist rhetoric allowed the National Party to sweep eight constituencies in the mining and industrial centres of the Witwatersrand and five more in Pretoria. Barring the predominantly English-speaking landowner electorate of Natal, the United Party was defeated in almost every rural district. Its urban losses in the nation’s most populous province, the Transvaal, proved equally devastating. As the voting system was disproportionately weighted in favour of rural constituencies and the Transvaal in particular, the 1948 election catapulted the Herenigde Nasionale Party from a small minority party to a commanding position with an eight-vote parliamentary lead. Daniel François Malan became the first nationalist prime minister, with the aim of implementing the apartheid philosophy and silencing liberal opposition.
When the National Party came to power in 1948, there were factional differences in the party about the implementation of systemic racial segregation. The “baasskap” (white domination or supremacist) faction, which was the dominant faction in the NP, and state institutions, favoured systematic segregation, but also favoured the participation of black Africans in the economy with black labour controlled to advance the economic gains of Afrikaners. A second faction was the “purists”, who believed in “vertical segregation”, in which blacks and whites would be entirely separated, with blacks living in native reserves, with separate political and economic structures, which, they believed, would entail severe short-term pain, but would also lead to the independence of white South Africa from black labour in the long-term. A third faction, which included Hendrik Verwoerd, sympathised with the purists but allowed for the use of black labour while implementing the purist goal of vertical separation.
Apartheid legislation in South Africa
Precursors (before 1948)
Franchise and Ballot Act (1892)
Glen Grey Act (1894)
Natal Legislative Assembly Bill (1894)
Transvaal Asiatic Registration Act (1906)
South Africa Act (1909)
Mines and Works Act (1911)
Natives Land Act (1913)
Natives (Urban Areas) Act (1923)
Immorality Act (1927)
Native Administration Act (1927)
Women’s Enfranchisement Act (1930)
Franchise Laws Amendment Act (1931)
Representation of Natives Act (1936)
Native Trust and Land Act (1936)
Native (Urban Areas) Consolidation Act (1945)
Asiatic Land Tenure Act (1946)
Malan to Verwoerd (1948–66)
Prohibition of Mixed Marriages Act (1949)
Immorality Amendment Act (1950)
Population Registration Act (1950)
Group Areas Act (1950)
Suppression of Communism Act (1950)
Native Building Workers Act (1951)
Separate Representation of Voters Act (1951)
Prevention of Illegal Squatting Act (1951)
Bantu Authorities Act (1951)
Native Laws Amendment Act (1952)
Pass Laws Act (1952)
Public Safety Act (1953)
Native Labour (Settlement of Disputes) Act (1953)
Bantu Education Act (1953)
Reservation of Separate Amenities Act (1953)
Natives Resettlement Act (1954)
Group Areas Development Act (1955)
Riotous Assemblies Act (1956)
Industrial Conciliation Act (1956)
Natives (Prohibition of Interdicts) Act (1956)
Immorality Act (1957)
Bantu Investment Corporation Act (1959)
Extension of University Education Act (1959)
Promotion of Bantu Self-government Act (1959)
Unlawful Organizations Act (1960)
Indemnity Act (1961)
Coloured Persons Communal Reserves Act (1961)
Republic of South Africa Constitution Act (1961)
Urban Bantu Councils Act (1961)
General Law Amendment Act (1963)
Coloured Persons Representative Council Act (1964)
Vorster to Botha (1966–90)
Terrorism Act (1967)
Separate Representation of Voters Amendment Act (1968)
Prohibition of Political Interference Act (1968)
Bantu Homelands Citizenship Act (1970)
Bantu Homelands Constitution Act (1971)
Aliens Control Act (1973)
Indemnity Act (1977)
National Key Points Act (1980)
Internal Security Act (1982)
Black Local Authorities Act (1982)
Republic of South Africa Constitution Act (1983)
Negotiations to end Apartheid (1990–93)
Interim Constitution (1993)
Promotion of National Unity and Reconciliation Act (1995)
NP leaders argued that South Africa did not comprise a single nation, but was made up of four distinct racial groups: white, black, Coloured and Indian. Such groups were split into 13 nations or racial federations. White people encompassed the English and Afrikaans language groups; the black populace was divided into ten such groups.
The state passed laws that paved the way for “grand apartheid”, which was centred on separating races on a large scale by compelling people to live in separate places defined by race. This strategy was in part adopted from “left-over” British rule that separated different racial groups after Britain took control of the Boer republics in the Anglo-Boer War. This created the black-only “townships” or “locations”, where blacks were relocated to their own towns. In addition, “petty apartheid” laws were passed. The principal apartheid laws were as follows.
The first grand apartheid law was the Population Registration Act of 1950, which formalised racial classification and introduced an identity card for all persons over the age of 18, specifying their racial group. Official teams or boards were established to rule on those people whose race was unclear. This caused difficulty, especially for Coloured people, whose families were separated when members were allocated different races.
The second pillar of grand apartheid was the Group Areas Act of 1950. Until then, most settlements had people of different races living side by side. This Act put an end to diverse areas and determined where one lived according to race. Each race was allotted its own area, which was used in later years as a basis of forced removal. The Prevention of Illegal Squatting Act of 1951 allowed the government to demolish black shanty town slums and forced white employers to pay for the construction of housing for those black workers who were permitted to reside in cities otherwise reserved for whites.
The Prohibition of Mixed Marriages Act of 1949 prohibited marriage between persons of different races, and the Immorality Act of 1950 made sexual relations with a person of a different race a criminal offence.
Under the Reservation of Separate Amenities Act of 1953, municipal grounds could be reserved for a particular race, creating, among other things, separate beaches, buses, hospitals, schools and universities. Signboards such as “whites only” applied to public areas, even including park benches. Black South Africans were provided with services greatly inferior to those of whites, and, to a lesser extent, to those of Indian and Coloured people.
Further laws had the aim of suppressing resistance, especially armed resistance, to apartheid. The Suppression of Communism Act of 1950 banned any party subscribing to Communism. The act defined Communism and its aims so sweepingly that anyone who opposed government policy risked being labelled as a Communist. Since the law specifically stated that Communism aimed to disrupt racial harmony, it was frequently used to gag opposition to apartheid. Disorderly gatherings were banned, as were certain organisations that were deemed threatening to the government.
The Bantu Authorities Act of 1951 created separate government structures for blacks and whites and was the first piece of legislation to support the government’s plan of separate development in the bantustans. The Promotion of Black Self-Government Act of 1959 entrenched the NP policy of nominally independent “homelands” for blacks. So-called “self-governing Bantu units” were proposed, which would have devolved administrative powers, with the promise later of autonomy and self-government. It also abolished the seats of white representatives of black South Africans and removed from the rolls the few blacks still qualified to vote. The Bantu Investment Corporation Act of 1959 set up a mechanism to transfer capital to the homelands to create employment there. Legislation of 1967 allowed the government to stop industrial development in “white” cities and redirect such development to the “homelands”. The Black Homeland Citizenship Act of 1970 marked a new phase in the Bantustan strategy. It changed the status of blacks to citizens of one of the ten autonomous territories. The aim was to ensure a demographic majority of white people within South Africa by having all ten Bantustans achieve full independence.
Interracial contact in sport was frowned upon, but there were no segregatory sports laws.
The government tightened pass laws compelling blacks to carry identity documents, to prevent the immigration of blacks from other countries. To reside in a city, blacks had to be in employment there. Until 1956 women were for the most part excluded from these pass requirements, as attempts to introduce pass laws for women were met with fierce resistance.
Disenfranchisement of Coloured voters
In 1950, D. F. Malan announced the NP’s intention to create a Coloured Affairs Department. J. G. Strijdom, Malan’s successor as Prime Minister, moved to strip voting rights from black and Coloured residents of the Cape Province. The previous government had introduced the Separate Representation of Voters Bill into Parliament in 1951, and it became an Act on 18 June 1951; however, four voters, G. Harris, W. D. Franklin, W. D. Collins and Edgar Deane, challenged its validity in court with support from the United Party. The Cape Supreme Court upheld the act, but this was reversed by the Appeal Court, which found the act invalid because a two-thirds majority in a joint sitting of both Houses of Parliament was needed to change the entrenched clauses of the Constitution. The government then introduced the High Court of Parliament Bill (1952), which gave Parliament the power to overrule decisions of the court. The Cape Supreme Court and the Appeal Court declared this invalid too.
In 1955 the Strijdom government increased the number of judges in the Appeal Court from five to 11, and appointed pro-Nationalist judges to fill the new places. In the same year, they introduced the Senate Act, which increased the Senate from 49 seats to 89. Adjustments were made such that the NP controlled 77 of these seats. The parliament met in a joint sitting and passed the Separate Representation of Voters Act in 1956, which transferred Coloured voters from the common voters’ roll in the Cape to a new Coloured voters’ roll. Immediately after the vote, the Senate was restored to its original size. The Senate Act was contested in the Supreme Court, but the recently enlarged Appeal Court, packed with government-supporting judges, upheld the act, and also the Act to remove Coloured voters.
The 1956 law allowed Coloureds to elect four people to Parliament, but a 1969 law abolished those seats and stripped Coloureds of their right to vote. Since Asians had never been allowed to vote, this resulted in whites being the sole enfranchised group.
A 2016 study in the Journal of Politics suggests that disenfranchisement in South Africa had a significant negative impact on basic service delivery to the disenfranchised.
Division among whites
Before South Africa became a republic in 1961, politics among white South Africans was typified by the division between the mainly Afrikaner pro-republic conservative and the largely English anti-republican liberal sentiments, with the legacy of the Boer War still a factor for some people. Once South Africa became a republic, Prime Minister Hendrik Verwoerd called for improved relations and greater accord between people of British descent and the Afrikaners. He claimed that the only difference was between those in favour of apartheid and those against it. The ethnic division would no longer be between Afrikaans and English speakers, but between blacks and whites.
Most Afrikaners supported the notion of unanimity of white people to ensure their safety. White voters of British descent were divided. Many had opposed a republic, leading to a majority “no” vote in Natal. Later, some of them recognised the perceived need for white unity, convinced by the growing trend of decolonisation elsewhere in Africa, which concerned them. British Prime Minister Harold Macmillan’s “Wind of Change” speech left the British faction feeling that the United Kingdom had abandoned them. The more conservative English speakers supported Verwoerd; others were troubled by the severing of ties with the UK and remained loyal to the Crown. They were displeased by having to choose between British and South African nationalities. Although Verwoerd tried to bond these different blocs, the subsequent voting illustrated only a minor swell of support, indicating that a great many English speakers remained apathetic and that Verwoerd had not succeeded in uniting the white population.
Under the homeland system, the government attempted to divide South Africa and South-West Africa into a number of separate states, each of which was supposed to develop into a separate nation-state for a different ethnic group.
Territorial separation was hardly a new institution. There were, for example, the “reserves” created under the British government in the nineteenth century. Under apartheid, 13 percent of the land was reserved for black homelands, a small amount relative to its total population, and generally in economically unproductive areas of the country. The Tomlinson Commission of 1954 justified apartheid and the homeland system but stated that additional land ought to be given to the homelands, a recommendation that was not carried out.
When Verwoerd became Prime Minister in 1958, the policy of “separate development” came into being, with the homeland structure as one of its cornerstones. Verwoerd came to believe in the granting of independence to these homelands. The government justified its plans on the ostensible basis that “(the) government’s policy is, therefore, not a policy of discrimination on the grounds of race or colour, but a policy of differentiation on the ground of nationhood, of different nations, granting to each self-determination within the borders of their homelands – hence this policy of separate development”. Under the homelands system, blacks would no longer be citizens of South Africa but citizens of the independent homelands, working in South Africa as foreign migrant labourers on temporary work permits. In 1959 the Promotion of Black Self-Government Act was passed, and border industries and the Bantu Investment Corporation were established to promote economic development and the provision of employment in or near the homelands. Many black South Africans who had never resided in their identified homeland were forcibly removed from the cities to the homelands.
The vision of a South Africa divided into multiple ethnostates appealed to the reform-minded Afrikaner intelligentsia, and it provided a more coherent philosophical and moral framework for the National Party’s policies, while also providing a veneer of intellectual respectability to the controversial policy of so-called baasskap.
In total, 20 homelands were allocated to ethnic groups, ten in South Africa proper and ten in South West Africa. Of these 20 homelands, 19 were classified as black, while one, Basterland, was set aside for a sub-group of Coloureds known as Basters, who are closely related to Afrikaners. Four of the homelands were declared independent by the South African government: Transkei in 1976, Bophuthatswana in 1977, Venda in 1979, and Ciskei in 1981 (known as the TBVC states). Once a homeland was granted its nominal independence, its designated citizens had their South African citizenship revoked and replaced with citizenship in their homeland. These people were then issued passports instead of passbooks. Citizens of the nominally autonomous homelands also had their South African citizenship circumscribed, meaning they were no longer legally considered South African. The South African government attempted to draw an equivalence between their view of black citizens of the homelands and the problems which other countries faced through the entry of illegal immigrants.
International recognition of the Bantustans
Bantustans within the borders of South Africa and South West Africa were classified by degree of nominal self-rule: 6 were “non-self-governing”, 10 were “self-governing”, and 4 were “independent”. In theory, self-governing Bantustans had control over many aspects of their internal functioning but were not yet sovereign nations. Independent Bantustans (Transkei, Bophuthatswana, Venda and Ciskei; also known as the TBVC states) were intended to be fully sovereign. In reality, they had no significant economic infrastructure and, with few exceptions, encompassed swaths of disconnected territory. This meant all the Bantustans were little more than puppet states controlled by South Africa.
Throughout the existence of the independent Bantustans, South Africa remained the only country to recognise their independence. Nevertheless, internal organisations of many countries, as well as the South African government, lobbied for their recognition. For example, upon the foundation of Transkei, the Swiss-South African Association encouraged the Swiss government to recognise the new state. In 1976, leading up to a United States House of Representatives resolution urging the President to not recognise Transkei, the South African government intensely lobbied lawmakers to oppose the bill. Each TBVC state extended recognition to the other independent Bantustans while South Africa showed its commitment to the notion of TBVC sovereignty by building embassies in the TBVC capitals.
During the 1960s, 1970s and early 1980s, the government implemented a policy of “resettlement”, to force people to move to their designated “group areas”. Millions of people were forced to relocate. These removals included people relocated due to slum clearance programmes, labour tenants on white-owned farms, the inhabitants of the so-called “black spots” (black-owned land surrounded by white farms), the families of workers living in townships close to the homelands, and “surplus people” from urban areas, including thousands of people from the Western Cape (which was declared a “Coloured Labour Preference Area”) who were moved to the Transkei and Ciskei homelands. The best-publicised forced removals of the 1950s occurred in Johannesburg when 60,000 people were moved to the new township of Soweto (an abbreviation for South Western Townships).
Until 1955, Sophiatown had been one of the few urban areas where black people were allowed to own land and was slowly developing into a multiracial slum. As industry in Johannesburg grew, Sophiatown became the home of a rapidly expanding black workforce, as it was convenient and close to town. It had the only swimming pool for black children in Johannesburg. As one of the oldest black settlements in Johannesburg, it held almost symbolic importance for the 50,000 black people it contained. Despite a vigorous ANC protest campaign and worldwide publicity, the removal of Sophiatown began on 9 February 1955 under the Western Areas Removal Scheme. In the early hours, heavily armed police forced residents out of their homes and loaded their belongings onto government trucks. The residents were taken to a large tract of land 19 kilometres (12 mi) from the city centre, known as Meadowlands, which the government had purchased in 1953. Meadowlands became part of a new planned black city called Soweto. Sophiatown was destroyed by bulldozers, and a new white suburb named Triomf (Triumph) was built in its place. This pattern of forced removal and destruction was to repeat itself over the next few years and was not limited to black South Africans alone. Forced removals from areas like Cato Manor (Mkhumbane) in Durban, and District Six in Cape Town, where 55,000 Coloured and Indian people were forced to move to new townships on the Cape Flats, were carried out under the Group Areas Act of 1950. Nearly 600,000 Coloured, Indian and Chinese people were moved under the Group Areas Act. Some 40,000 whites were also forced to move when the land was transferred from “white South Africa” into the black homelands.
Society during apartheid
The NP passed a string of legislation that became known as petty apartheid. The first of these was the Prohibition of Mixed Marriages Act 55 of 1949, prohibiting marriage between whites and people of other races. The Immorality Amendment Act 21 of 1950 (as amended in 1957 by Act 23) forbade “unlawful racial intercourse” and “any immoral or indecent act” between a white and a black, Indian or Coloured person.
Blacks were not allowed to run businesses or professional practices in areas designated as “white South Africa” unless they had a permit – such being granted only exceptionally. They were required to move to the black “homelands” and set up businesses and practices there. Trains, hospitals and ambulances were segregated. Because of the smaller numbers of white patients and the fact that white doctors preferred to work in white hospitals, conditions in white hospitals were much better than those in often overcrowded and understaffed, significantly underfunded black hospitals. Residential areas were segregated and blacks were allowed to live in white areas only if employed as a servant and even then only in servants’ quarters. Blacks were excluded from working in white areas, unless they had a pass, nicknamed the dompas, also spelt dompass or dom pass. The most likely origin of this name is from the Afrikaans “verdomde pas” (meaning accursed pass), although some commentators ascribe it to the Afrikaans words meaning “dumb pass”. Only blacks with “Section 10” rights (those who had migrated to the cities before World War II) were excluded from this provision. A pass was issued only to a black with approved work. Spouses and children had to be left behind in black homelands. A pass was issued for one magisterial district (usually one town) confining the holder to that area only. Being without a valid pass made a person subject to arrest and trial for being an illegal migrant. This was often followed by deportation to the person’s homeland and prosecution of the employer for employing an illegal migrant. Police vans patrolled white areas to round up blacks without passes. Blacks were not allowed to employ whites in white South Africa.
Although trade unions for black and Coloured workers had existed since the early 20th century, it was not until the 1980s reforms that a mass black trade union movement developed. Trade unions under apartheid were racially segregated, with 54 unions being white only, 38 for Indian and Coloured and 19 for black people. The Industrial Conciliation Act (1956) legislated against the creation of multi-racial trade unions and attempted to split existing multi-racial unions into separate branches or organisations along racial lines.
Each black homeland controlled its own education, health and police systems. Blacks were not allowed to buy hard liquor. They were able to buy only state-produced poor quality beer (although this law was relaxed later). Public beaches, swimming pools, some pedestrian bridges, drive-in cinema parking spaces, graveyards, parks, and public toilets were segregated. Cinemas and theatres in white areas were not allowed to admit blacks. There were practically no cinemas in black areas. Most restaurants and hotels in white areas were not allowed to admit blacks except as staff. Blacks were prohibited from attending white churches under the Churches Native Laws Amendment Act of 1957, but this was never rigidly enforced, and churches were one of the few places races could mix without the interference of the law. Blacks earning 360 rand a year or more had to pay taxes while the white threshold was more than twice as high, at 750 rand a year. On the other hand, the taxation rate for whites was considerably higher than that for blacks.
Blacks could never acquire land in white areas. In the homelands, much of the land belonged to a “tribe”, where the local chieftain would decide how the land had to be used. This resulted in whites owning almost all the industrial and agricultural lands and much of the prized residential land. Most blacks were stripped of their South African citizenship when the “homelands” became “independent”, and they were no longer able to apply for South African passports. Eligibility requirements for a passport had been difficult for blacks to meet, the government contending that a passport was a privilege, not a right, and the government did not grant many passports to blacks. Apartheid pervaded culture as well as the law, and was entrenched by most of the mainstream media.
The population was classified into four groups: African, White, Indian and Coloured (capitalised to denote their legal definitions in South African law). The Coloured group included people regarded as being of mixed descent, including of Bantu, Khoisan, European and Malay ancestry. Many were descended from people brought to South Africa from other parts of the world, such as India, Sri Lanka, Madagascar and China as slaves and indentured workers.
The Population Registration Act (Act 30 of 1950) defined South Africans as belonging to one of three races: White, Black or Coloured. People of Indian ancestry were considered Coloured under this act. Appearance, social acceptance and descent were used to determine the qualification of an individual into one of the three categories. A white person was described by the act as one whose parents were both white and who possessed the “habits, speech, education, deportment and demeanour” of a white person. Blacks were defined by the act as belonging to an African race or tribe. Lastly, Coloureds were those who could not be classified as black or white.
The apartheid bureaucracy devised complex (and often arbitrary) criteria at the time the Population Registration Act was implemented to determine who was Coloured. Minor officials would administer tests to determine whether someone should be categorised as Coloured or White, or as Coloured or Black. The tests included the pencil test, in which a pencil was pushed into the subject’s curly hair and the subject made to shake their head. If the pencil stuck, they were deemed Black; if dislodged, they were pronounced Coloured. Other tests involved examining the shapes of jawlines and buttocks and pinching people to see what language they would say “Ouch” in. As a result of these tests, different members of the same family found themselves in different race groups. Further tests determined the membership of the various sub-racial groups of the Coloureds.
Discriminated against by apartheid, Coloureds were as a matter of state policy forced to live in separate townships, as defined in the Group Areas Act (1950), in some cases leaving homes their families had occupied for generations, and received an inferior education, though better than that provided to Africans. They played an important role in the anti-apartheid movement: for example, the African Political Organization established in 1902 had an exclusively Coloured membership.
Voting rights were denied to Coloureds in the same way that they were denied to Blacks from 1950 to 1983. However, in 1977 the NP caucus approved proposals to bring Coloureds and Indians into central government. In 1982, final constitutional proposals produced a referendum among Whites, and the Tricameral Parliament was approved. The Constitution was reformed the following year to allow the Coloured and Asian minorities participation in separate Houses in a Tricameral Parliament, and Botha became the first Executive State President. The idea was that the Coloured minority could be granted voting rights, but the Black majority were to become citizens of independent homelands. These separate arrangements continued until the abolition of apartheid. The Tricameral reforms led to the formation of the (anti-apartheid) United Democratic Front as a vehicle to try to prevent the co-option of Coloureds and Indians into an alliance with Whites. The battles between the UDF and the NP government from 1983 to 1989 were to become the most intense period of struggle between left-wing and right-wing South Africans.
Education was segregated by the 1953 Bantu Education Act, which crafted a separate system of education for black South African students and was designed to prepare black people for lives as a labouring class. In 1959 separate universities were created for black, Coloured and Indian people. Existing universities were not permitted to enrol new black students. The Afrikaans Medium Decree of 1974 required the use of Afrikaans and English on an equal basis in high schools outside the homelands.
In the 1970s, the state spent ten times more per child on the education of white children than on black children within the Bantu Education system (the education system in black schools within white South Africa). Higher education was provided in separate universities and colleges after 1959. Eight black universities were created in the homelands. Fort Hare University in Ciskei (now Eastern Cape) was to register only Xhosa-speaking students. Sotho, Tswana, Pedi and Venda speakers were placed at the newly founded University College of the North at Turfloop, while the University College of Zululand was launched to serve Zulu students. Coloureds and Indians were to have their own establishments in the Cape and Natal respectively.
By 1948, before formal apartheid, ten universities existed in South Africa: four Afrikaans-medium, four English-medium, one for Blacks, and a correspondence university open to all ethnic groups. By 1981, under the apartheid government, 11 new universities had been built: seven for Blacks, one for Coloureds, one for Indians, one for Afrikaans speakers, and one dual-medium Afrikaans and English.
Women under apartheid
Colonialism and apartheid had a major impact on Black and Coloured women, since they suffered both racial and gender discrimination. Judith Nolde argues that, in general, South African women were “deprive[d] […] of their human rights as individuals” under the apartheid system. Jobs were often hard to find. Many Black and Coloured women worked as agricultural or domestic workers, but wages were extremely low, if paid at all. Children suffered from diseases caused by malnutrition and sanitation problems, and mortality rates were therefore high. The controlled movement of black and Coloured workers within the country through the Natives Urban Areas Act of 1923 and the pass laws separated family members from one another, because men could prove their employment in urban centres while most women were merely dependents; consequently, they risked being deported to rural areas. Even in rural areas there were legal hurdles for women to own land, and outside the cities jobs were scarce.
Sport under apartheid
By the 1930s, association football mirrored the balkanised society of South Africa; football was divided into numerous institutions based on race: the (White) South African Football Association, the South African Indian Football Association (SAIFA), the South African African Football Association (SAAFA) and its rival the South African Bantu Football Association, and the South African Coloured Football Association (SACFA). A lack of funds for proper equipment was noticeable in black amateur football matches, revealing the unequal lives black South Africans were subject to in contrast to Whites, who were much better off financially. Apartheid’s social engineering made it more difficult to compete across racial lines. Thus, in an effort to centralise finances, the federations merged in 1951, creating the South African Soccer Federation (SASF), which brought Black, Indian, and Coloured national associations into one body that opposed apartheid. This was increasingly opposed by the apartheid government, and – with urban segregation being reinforced by ongoing racist policies – it became harder to play football along these racial lines. In 1956, the regime in Pretoria – South Africa’s administrative capital – passed the first apartheid sports policy; by doing so, it emphasised the White-led government’s opposition to inter-racialism.
While football was plagued by racism, it also played a role in protesting apartheid and its policies. With the international bans from FIFA and other major sporting events, South Africa would be in the spotlight internationally. In a 1977 survey, white South Africans ranked the lack of international sport as one of the three most damaging consequences of apartheid. By the mid-1950s, Black South Africans were also using media to challenge the “racialisation” of sports in South Africa; anti-apartheid forces had begun to pinpoint sport as the “weakness” of white national morale. Black journalists for the Johannesburg Drum magazine were the first to give the issue public exposure, with an intrepid special issue in 1955 that asked, “Why shouldn’t our blacks be allowed in the SA team?” As time progressed, international relations with South Africa continued to be strained. In the 1980s, as the oppressive system was slowly collapsing, the ANC and National Party started negotiations on the end of apartheid. Football associations also discussed the formation of a single, non-racial controlling body. This unity process accelerated in the late 1980s and led to the creation, in December 1991, of an incorporated South African Football Association. On 3 July 1992, FIFA finally welcomed South Africa back into international football.
Sport has long been an important part of life in South Africa, and the boycotting of games by international teams had a profound effect on the white population, perhaps more so than the trade embargoes did. After the re-acceptance of South Africa’s sports teams by the international community, sport played a major unifying role between the country’s diverse ethnic groups. Mandela’s open support of the predominantly white rugby fraternity during the 1995 Rugby World Cup was considered instrumental in bringing together South African sports fans of all races.
Asians during apartheid
Defining its Asian population, a minority that did not appear to belong to any of the initial three designated non-white groups, was a constant dilemma for the apartheid government.
The classification of “honorary white” was granted to immigrants from Japan, South Korea and Taiwan – countries with which South Africa maintained diplomatic and economic relations – and to their descendants.
Indian South Africans during apartheid were classified under a range of categories, from “Asian” to “black” to “Coloured” and even the mono-ethnic category of “Indian”, but never as white, having been considered “nonwhite” throughout South Africa’s history. The group faced severe discrimination during the apartheid regime and was subject to numerous racialist policies.
In a study, Josephine C. Naidoo and Devi Moodley Rajab interviewed a series of Indian South Africans about their experience in South Africa, highlighting education, the workplace, and general day-to-day living. One participant, a doctor, said that it was considered the norm for Non-White and White doctors to mingle while working at the hospital, but during any downtime or breaks they were to return to their segregated quarters. Not only were Non-White doctors, and Indians more specifically, severely segregated; they were also paid three to four times less than their White counterparts. Many described being treated as “third-class citizens” owing to the humiliating standard of treatment for Non-White employees across many professions. Many Indians described a sense of justified superiority from Whites due to the apartheid laws that, in the minds of White South Africans, legitimised those feelings. Another finding of the study was the psychological damage done to Indians living in South Africa during apartheid. One of the biggest long-term effects was inter-racial mistrust, an emotional hatred towards Whites; the degree of alienation was so strong that it left damaging psychological feelings of inferiority.
Chinese South Africans – descendants of migrant workers who came to work in the gold mines around Johannesburg in the late 19th century – were initially classified either as “Coloured” or as “Other Asian” and were subject to numerous forms of discrimination and restriction. It was not until 1984 that South African Chinese, whose numbers had increased to about 10,000, were given the same official rights as the Japanese: to be treated as whites in terms of the Group Areas Act. They still faced discrimination, however, and did not receive all the benefits of their newly obtained honorary white status, such as the right to vote.
Indonesians arrived at the Cape of Good Hope as slaves until the abolition of slavery during the 19th century. Predominantly Muslim, they were allowed religious freedom and formed their own ethnic community, known as the Cape Malays; they were classified as part of the Coloured racial group. The same applied to South Africans of Malaysian descent, who were also classified as Coloured and thus considered “non-white”. South Africans of Filipino descent were classified as “black” owing to the historical outlook of White South Africans on Filipinos, and many of them lived in Bantustans.
The Lebanese population was somewhat of an anomaly during the apartheid era. Lebanese immigration to South Africa was chiefly Christian, and the group was originally classified as non-white; however, a court case in 1913 ruled that because the Lebanese and Syrians originated from the Canaan region (the birthplace of Christianity and Judaism), they could not be discriminated against by race laws which targeted non-believers, and the group was thus classified as white. The Lebanese community maintained its white status after the Population Registration Act came into effect; however, immigration from the Middle East was restricted.
Alongside apartheid, the National Party implemented a programme of social conservatism. Pornography and gambling were banned. Cinemas, shops selling alcohol and most other businesses were forbidden from opening on Sundays. Abortion, homosexuality and sex education were also restricted; abortion was legal only in cases of rape or if the mother’s life was threatened.
Television was not introduced until 1976 because the government viewed English programming as a threat to the Afrikaans language. Television was run on apartheid lines – TV1 broadcast in Afrikaans and English (geared to a White audience), TV2 in Zulu and Xhosa, TV3 in Sotho, Tswana and Pedi (both geared to a Black audience), and TV4 mostly showed programmes for an urban Black audience.
Apartheid sparked significant internal resistance. The government responded to a series of popular uprisings and protests with police brutality, which in turn increased local support for the armed resistance struggle. Internal resistance to the apartheid system in South Africa came from several sectors of society and saw the creation of organisations dedicated variously to peaceful protests, passive resistance and armed insurrection.
In 1949, the youth wing of the African National Congress (ANC) took control of the organisation and started advocating a radical black nationalist programme. The new young leaders proposed that white authority could only be overthrown through mass campaigns. In 1950 that philosophy saw the launch of the Programme of Action, a series of strikes, boycotts and civil disobedience actions that led to occasional violent clashes with the authorities.
In 1959, a group of disenchanted ANC members formed the Pan Africanist Congress (PAC), which organised demonstrations against pass books on 21 March 1960. One of those protests was held in the township of Sharpeville, where 69 people were killed by police in the Sharpeville massacre.
In the wake of Sharpeville, the government declared a state of emergency. More than 18,000 people were arrested, including leaders of the ANC and PAC, and both organisations were banned. The resistance went underground, with some leaders in exile abroad and others engaged in campaigns of domestic sabotage and terrorism.
In May 1961, before the declaration of South Africa as a Republic, an assembly representing the banned ANC called for negotiations between the members of the different ethnic groupings, threatening demonstrations and strikes during the inauguration of the Republic if their calls were ignored.
When the government overlooked them, the strikers (among the main organisers was a 42-year-old, Thembu-origin Nelson Mandela) carried out their threats. The government countered swiftly by giving police the authority to arrest people for up to twelve days and detaining many strike leaders amid numerous cases of police brutality. Defeated, the protesters called off their strike. The ANC then chose to launch an armed struggle through a newly formed military wing, Umkhonto we Sizwe (MK), which would perform acts of sabotage on tactical state structures. Its first sabotage plans were carried out on 16 December 1961, the anniversary of the Battle of Blood River.
In the 1970s, the Black Consciousness Movement (BCM) was created by tertiary students influenced by the Black Power movement in the US. BCM endorsed black pride and African customs and did much to alter the feelings of inadequacy instilled among black people by the apartheid system. The leader of the movement, Steve Biko, was taken into custody on 18 August 1977 and was beaten to death in detention.
In 1976, secondary students in Soweto took to the streets in the Soweto uprising to protest against the imposition of Afrikaans as the only language of instruction. On 16 June, police opened fire on students protesting peacefully. According to official reports, 23 people were killed, but the number of people who died is usually given as 176, with estimates of up to 700. In the following years, several student organisations were formed to protest against apartheid, and these organisations were central to urban school boycotts in 1980 and 1983 and rural boycotts in 1985 and 1986.
In parallel with student protests, labour unions started protest action in 1973 and 1974. After 1976 unions and workers are considered to have played an important role in the struggle against apartheid, filling the gap left by the banning of political parties. In 1979 black trade unions were legalised and could engage in collective bargaining, although strikes were still illegal. Economist Thomas Sowell wrote that basic supply and demand led to violations of Apartheid “on a massive scale” throughout the nation, simply because there were not enough white South African business owners to meet the demand for various goods and services. Large portions of the garment industry and construction of new homes, for example, were effectively owned and operated by blacks, who either worked surreptitiously or who circumvented the law with a white person as a nominal, figurehead manager.
In 1983, anti-apartheid leaders determined to resist the tricameral parliament assembled to form the United Democratic Front (UDF) in order to coordinate anti-apartheid activism inside South Africa. The first presidents of the UDF were Archie Gumede, Oscar Mpetha and Albertina Sisulu; patrons were Archbishop Desmond Tutu, Dr Allan Boesak, Helen Joseph, and Nelson Mandela. Basing its platform on abolishing apartheid and creating a nonracial democratic South Africa, the UDF provided a legal way for domestic human rights groups and individuals of all races to organise demonstrations and campaign against apartheid inside the country. Churches and church groups also emerged as pivotal points of resistance. Church leaders were not immune to prosecution, and certain faith-based organisations were banned, but the clergy generally had more freedom to criticise the government than militant groups did. The UDF, coupled with the protection of the church, accordingly permitted a major role for Archbishop Desmond Tutu, who served both as a prominent domestic voice and international spokesperson denouncing apartheid and urging the creation of a shared nonracial state.
Although the majority of whites supported apartheid, some 20 percent did not. Parliamentary opposition was galvanised by Helen Suzman, Colin Eglin and Harry Schwarz, who formed the Progressive Federal Party. Extra-parliamentary resistance was largely centred in the South African Communist Party and the women’s organisation the Black Sash. Women were also notable in their involvement in trade union organisations and banned political parties. Public intellectuals, too, such as the eminent author Nadine Gordimer, winner of the 1991 Nobel Prize in Literature, vehemently opposed the apartheid regime and thereby bolstered the movement against it.
International relations during apartheid
South Africa’s policies were subject to international scrutiny in 1960 when UK Prime Minister Harold Macmillan criticised them during his Wind of Change speech in Cape Town. Weeks later, tensions came to a head in the Sharpeville massacre, resulting in more international condemnation. Soon afterwards, Prime Minister Hendrik Verwoerd announced a referendum on whether the country should become a republic. Verwoerd lowered the voting age for Whites to eighteen and included Whites in South West Africa on the roll. The referendum, held on 5 October that year, asked Whites: “Are you in favour of a Republic for the Union?”, and 52 percent voted “Yes”.
As a consequence of this change of status, South Africa needed to reapply for continued membership of the Commonwealth, with which it had privileged trade links. India had become a republic within the Commonwealth in 1950, but it became clear that African and Asian member states would oppose South Africa due to its apartheid policies. As a result, South Africa withdrew from the Commonwealth on 31 May 1961, the day that the Republic came into existence.
The apartheid system was first formally brought to the attention of the United Nations in order to advocate for the Indians residing in South Africa. On 22 June 1946, the Indian government requested that the discriminatory treatment of Indians living in South Africa be included on the agenda of the first General Assembly session. In 1952, apartheid was discussed again in the aftermath of the Defiance Campaign, and the UN set up a task team to monitor the progress of apartheid and the racial state of affairs in South Africa. Although South Africa’s racial policies were a cause for concern, most countries in the UN concurred that this was a domestic affair which fell outside the UN’s jurisdiction.
In April 1960, the UN’s conservative stance on apartheid changed following the Sharpeville massacre, and the Security Council for the first time agreed on concerted action against the apartheid regime, with Resolution 134 calling upon South Africa to abandon its policies of racial discrimination. The newly founded United Nations Special Committee Against Apartheid drafted and passed Resolution 181 on 7 August 1963, which called upon all states to cease the sale and shipment of all ammunition and military vehicles to South Africa. This voluntary arms embargo became mandatory on 4 November 1977 with the passing of Resolution 418, depriving South Africa of military aid. From 1964 onwards, the US and the UK discontinued their arms trade with South Africa, and the Security Council also condemned the Soweto massacre in Resolution 392. In addition to isolating South Africa militarily, the General Assembly encouraged a boycott of oil sales to South Africa and requested all nations and organisations “to suspend cultural, educational, sporting and other exchanges with the racist regime and with organisations or institutions in South Africa which practise apartheid”. Over a long period, then, the United Nations worked towards isolating South Africa by putting sustained pressure on the apartheid regime.
After much debate, by the late 1980s the United States, the United Kingdom, and 23 other nations had passed laws placing various trade sanctions on South Africa. A movement for disinvestment from South Africa was similarly widespread in many countries, with individual cities and provinces around the world implementing laws and local regulations forbidding corporations registered under their jurisdiction from doing business with South African firms, factories, or banks.
Pope John Paul II was an outspoken opponent of apartheid. In 1985, while visiting the Netherlands, he gave an impassioned speech at the International Court of Justice condemning apartheid, proclaiming that “no system of apartheid or separate development will ever be acceptable as a model for the relations between peoples or races.” In September 1988, he made a pilgrimage to countries bordering South Africa, while demonstratively avoiding South Africa itself. During his visit to Zimbabwe, he called for economic sanctions against the South African government.
Organisation of African Unity
The Organisation of African Unity (OAU) was created in 1963. Its primary objectives were to eradicate colonialism and improve social, political and economic situations in Africa. It censured apartheid and demanded sanctions against South Africa. African states agreed to aid the liberation movements in their fight against apartheid. In 1969, fourteen nations from Central and East Africa gathered in Lusaka, Zambia, and formulated the Lusaka Manifesto, which was signed on 13 April by all of the countries in attendance except Malawi. This manifesto was later taken on by both the OAU and the United Nations.
The Lusaka Manifesto summarised the political situations of self-governing African countries, condemning racism and inequity and calling for Black majority rule in all African nations. It did not rebuff South Africa entirely, though, adopting an appeasing manner towards the apartheid government and even recognising its autonomy. Although African leaders supported the emancipation of Black South Africans, they preferred this to be attained through peaceful means.
South Africa’s negative response to the Lusaka Manifesto and rejection of a change to its policies brought about another OAU announcement in October 1971. The Mogadishu Declaration stated that South Africa’s rebuffing of negotiations meant that its Black people could only be freed through military means, and that no African state should converse with the apartheid government.
In 1966, B. J. Vorster became Prime Minister. He was not prepared to dismantle apartheid, but he did try to redress South Africa’s isolation and revitalise the country’s global reputation by improving relations with other African states, even those under Black majority rule. This he called his “Outward-Looking” policy.
Vorster’s willingness to talk to African leaders stood in contrast to Verwoerd’s refusal to engage with leaders such as Abubakar Tafawa Balewa of Nigeria in 1962 and Kenneth Kaunda of Zambia in 1964. In 1966, he met the heads of the neighbouring states of Lesotho, Swaziland and Botswana. In 1967, he offered technological and financial aid to any African state prepared to receive it, asserting that no political strings were attached, aware that many African states needed financial aid despite their opposition to South Africa’s racial policies. Many were also tied to South Africa economically because of their migrant labour population working down the South African mines. Botswana, Lesotho and Swaziland remained outspoken critics of apartheid but were dependent on South African economic assistance.
Malawi was the first non-neighbouring country to accept South African aid. In 1967, the two states set out their political and economic relations. In 1969, Malawi was the only country at the assembly that did not sign the Lusaka Manifesto condemning South Africa’s apartheid policy. In 1970, Malawian president Hastings Banda made his first and most successful official stopover in South Africa.
Associations with Mozambique followed suit and were sustained after that country won its sovereignty in 1975. Angola was also granted South African loans. Other countries which formed relationships with South Africa were Liberia, Ivory Coast, Madagascar, Mauritius, Gabon, Zaire (now the Democratic Republic of the Congo) and the Central African Republic. Although these states condemned apartheid (more than ever after South Africa’s denunciation of the Lusaka Manifesto), South Africa’s economic and military dominance meant that they remained dependent on South Africa to varying degrees.
Sports and culture
South Africa’s isolation in sport began in the mid-1950s and increased throughout the 1960s. Apartheid forbade multiracial sport, which meant that overseas teams, by virtue of them having players of different races, could not play in South Africa. In 1956, the International Table Tennis Federation severed its ties with the all-White South African Table Tennis Union, preferring the non-racial South African Table Tennis Board. The apartheid government responded by confiscating the passports of the Board’s players so that they were unable to attend international games.
In 1959, the non-racial South African Sports Association (SASA) was formed to secure the rights of all players on the global field. After meeting with no success in its endeavours to gain recognition by collaborating with White establishments, SASA approached the International Olympic Committee (IOC) in 1962, calling for South Africa’s expulsion from the Olympic Games. The IOC sent South Africa a caution to the effect that, if there were no changes, the country would be barred from competing at the 1964 Olympic Games in Tokyo. The changes were initiated, and in January 1963 the South African Non-Racial Olympic Committee (SANROC) was set up. The Anti-Apartheid Movement persisted in its campaign for South Africa’s exclusion, and the IOC acceded, barring the country from the 1964 Games. South Africa selected a multi-racial team for the next Olympic Games, and the IOC opted to readmit the country to the 1968 Mexico City Olympic Games. Because of protests from AAMs and African nations, however, the IOC was forced to retract the invitation.
Foreign complaints about South Africa’s bigoted sports brought more isolation. Racially selected New Zealand sports teams toured South Africa until the 1970 All Blacks rugby tour, when Maori players were allowed to enter the country under the status of “honorary Whites”. Huge and widespread protests occurred in New Zealand in 1981 against the Springbok tour – the government spent $8,000,000 protecting games using the army and police force. A planned All Blacks tour to South Africa in 1985 remobilised the New Zealand protesters, and it was cancelled. A “rebel tour” – not sanctioned by the government – went ahead in 1986, but after that sporting ties were cut, and New Zealand decided not to send an authorised rugby team to South Africa until the end of apartheid.
On 6 September 1966, Verwoerd was fatally stabbed at Parliament House by parliamentary messenger Dimitri Tsafendas. John Vorster took office shortly after, and announced that South Africa would no longer dictate to the international community what their teams should look like. Although this reopened the gate for international sporting meets, it did not signal the end of South Africa’s racist sporting policies. In 1968, Vorster went against his policy by refusing to permit Basil D’Oliveira, a Coloured South African-born cricketer, to join the English cricket team on its tour to South Africa. Vorster said that the side had been chosen only to prove a point, and not on merit. D’Oliveira was eventually included in the team as the first substitute, but the tour was cancelled. Protests against certain tours brought about the cancellation of a number of other visits, including that of an England rugby team touring South Africa in 1969/70.
The first of the “White Bans” occurred in 1971 when the Chairman of the Australian Cricketing Association – Sir Don Bradman – flew to South Africa to meet Vorster. Vorster had expected Bradman to allow the tour of the Australian cricket team to go ahead, but things became heated after Bradman asked why Black sportsmen were not allowed to play cricket. Vorster stated that Blacks were intellectually inferior and had no finesse for the game. Bradman – thinking this ignorant and repugnant – asked Vorster if he had heard of a man named Garry Sobers. On his return to Australia, Bradman released a short statement: “We will not play them until they choose a team on a non-racist basis.” Bradman’s views were in stark contrast to those of Australian tennis great Margaret Court, who had won the grand slam the previous year and commented about apartheid that “South Africans have this thing better organised than any other country, particularly America” and that she would “go back there any time.”
In South Africa, Vorster vented his anger publicly against Bradman, while the African National Congress rejoiced. This was the first time a predominantly White nation had taken the side of multiracial sport, producing an unsettling resonance that more “White” boycotts were coming. Almost twenty years later, on his release from prison, Nelson Mandela asked a visiting Australian statesman if Donald Bradman, his childhood hero, was still alive (Bradman lived until 2001).
In 1971, Vorster altered his policies even further by distinguishing multiracial from multinational sport. Multiracial sport, between teams with players of different races, remained outlawed; multinational sport, however, was now acceptable: international sides would not be subject to South Africa’s racial stipulations.
In 1978, Nigeria boycotted the Commonwealth Games because New Zealand’s sporting contacts with the South African government were not considered to be in accordance with the 1977 Gleneagles Agreement. Nigeria also led the 32-nation boycott of the 1986 Commonwealth Games because of UK Prime Minister Margaret Thatcher’s ambivalent attitude towards sporting links with South Africa, significantly affecting the quality and profitability of the Games and thus thrusting apartheid into the international spotlight.
In the 1960s, the Anti-Apartheid Movements began to campaign for cultural boycotts of apartheid South Africa. Artists were requested not to perform in South Africa or to let their works be hosted there. In 1963, 45 British writers put their signatures to an affirmation approving of the boycott, and in 1964 American actor Marlon Brando called for a similar affirmation for films. In 1965, the Writers’ Guild of Great Britain called for a proscription on the sending of films to South Africa. Over sixty American artists signed a statement against apartheid and against professional links with the state. The presentation of some South African plays in the United Kingdom and the United States was also vetoed. After the arrival of television in South Africa in 1975, the British actors’ union Equity boycotted the service, and no British programme involving its members could be sold to South Africa. Similarly, when home video grew popular in the 1980s, the Australian arm of CBS/Fox Video (now 20th Century Fox Home Entertainment) placed stickers on their VHS and Betamax cassettes labelling the export of such cassettes to South Africa “an infringement of copyright”. Sporting and cultural boycotts did not have the same impact as economic sanctions, but they did much to raise awareness among ordinary South Africans of the global condemnation of apartheid.
While international opposition to apartheid grew, the Nordic countries – and Sweden in particular – provided both moral and financial support for the ANC. On 21 February 1986 – a week before he was murdered – Sweden’s Prime Minister Olof Palme made the keynote address to the Swedish People’s Parliament Against Apartheid held in Stockholm. In addressing the hundreds of anti-apartheid sympathisers as well as leaders and officials from the ANC and the Anti-Apartheid Movement such as Oliver Tambo, Palme declared: “Apartheid cannot be reformed; it has to be eliminated.”
Other Western countries adopted a more ambivalent position. In Switzerland, the Swiss-South African Association lobbied on behalf of the South African government. The Nixon administration implemented a policy known as the Tar Baby Option, pursuant to which the US maintained close relations with the white supremacist South African government. The Reagan administration evaded international sanctions and provided diplomatic support for the South African government in international forums. The United States also increased trade with the white supremacist South African regime, while describing the ANC as “a terrorist organisation”. Like the Reagan administration, the government of Margaret Thatcher pursued a policy of “constructive engagement” with the apartheid government, vetoing the imposition of UN economic sanctions. The US government’s public justification for supporting the apartheid regime was a belief in “free trade” and the perception of the right-wing South African government as a bastion against Marxist forces in Southern Africa – demonstrated, for example, by the South African government’s military intervention in the Mozambican Civil War in support of right-wing insurgents seeking to topple that country’s government. The US and the UK declared the ANC a terrorist organisation, and in 1987 Thatcher’s spokesman, Bernard Ingham, famously said that anyone who believed that the ANC would ever form the government of South Africa was “living in cloud cuckoo land”. The American Legislative Exchange Council (ALEC), a conservative lobbying organisation, actively campaigned against divesting from South Africa throughout the 1980s.
By the late-1980s, with no sign of a political resolution in South Africa, Western patience began to run out. By 1989, a bipartisan Republican/Democratic initiative in the US favoured economic sanctions (realised as the Comprehensive Anti-Apartheid Act of 1986), the release of Nelson Mandela and a negotiated settlement involving the ANC. Thatcher too began to take a similar line but insisted on the suspension of the ANC’s armed struggle.
The UK’s significant economic involvement in South Africa may have provided some leverage with the South African government, with both the UK and the US applying pressure and pushing for negotiations. However, neither the UK nor the US was willing to apply economic pressure upon their multinational interests in South Africa, such as the mining company Anglo American. Although a high-profile compensation claim against these companies was thrown out of court in 2004, the US Supreme Court in May 2008 upheld an appeal court ruling allowing another lawsuit that seeks damages of more than US$400 billion from major international companies which are accused of aiding South Africa’s apartheid system.
Impact of the Cold War
During the 1950s, South African military strategy was decisively shaped by fears of communist espionage and a conventional Soviet threat to the strategic Cape trade route between the South Atlantic and Indian Oceans. The apartheid government supported the US-led North Atlantic Treaty Organization (NATO), as well as its policy of regional containment against Soviet-backed regimes and insurgencies worldwide. By the late-1960s, the rise of Soviet client states on the African continent, as well as Soviet aid for militant anti-apartheid movements, was considered one of the primary external threats to the apartheid system. South African officials frequently accused domestic opposition groups of being communist proxies. For its part, the Soviet Union viewed South Africa as a bastion of neocolonialism and a regional Western ally, which helped fuel its support for various anti-apartheid causes. From 1973 onwards, much of South Africa’s white population increasingly looked upon their country as a bastion of the free world besieged militarily, politically, and culturally by Communism and radical black nationalism. The apartheid government perceived itself as being locked in a proxy struggle with the Warsaw Pact and by implication, armed wings of Black nationalist forces such as Umkhonto we Sizwe (MK) and the People’s Liberation Army of Namibia (PLAN), which often received Soviet arms and training. This was described as “Total Onslaught”.
Israeli arms sales
Soviet support for militant anti-apartheid movements worked in the government’s favour, as its claim to be reacting in opposition to aggressive Communist expansion gained greater plausibility, and helped it justify its own domestic militarisation methods, known as “Total Strategy”. Total Strategy involved building up a formidable conventional military and counter-intelligence capability. It was formulated on counter-revolutionary tactics as espoused by noted French tactician André Beaufre. Considerable effort was devoted towards circumventing international arms sanctions, and the government even went so far as to develop nuclear weapons, allegedly with covert assistance from Israel. In 2010, The Guardian released South African government documents that revealed an Israeli offer to sell the apartheid regime nuclear weapons. Israel denied these allegations and claimed that the documents were minutes from a meeting which did not indicate any concrete offer for a sale of nuclear weapons. Shimon Peres said that The Guardian‘s article was based on “selective interpretation…and not on concrete facts.”
As a result of “Total Strategy”, South African society became increasingly militarised. Many domestic civil organisations were modelled upon military structures, and military virtues such as discipline, patriotism, and loyalty were highly regarded. In 1968, national service for White South African men lasted nine months at a minimum, and they could be called up for reserve duty into late-middle age if necessary. The length of national service was gradually extended to twelve months in 1972 and twenty-four months in 1978. At state schools, white male students were organised into military-style formations and drilled as cadets or as participants in a civil defence or “Youth Preparedness” curriculum. Compulsory military education, and in some cases paramilitary training, was introduced for all older white male students at state schools in three South African provinces. These programmes oversaw the construction of bomb shelters at schools and drills simulating mock insurgent raids.
From the late-1970s to the late-1980s, defence budgets in South Africa were raised exponentially. In 1975, Israeli defence minister Shimon Peres signed a security pact with South African defence minister P.W. Botha that led to $200 million in arms deals. In 1988, Israeli arms sales to South Africa totalled over $1.4 billion. Covert operations focused on espionage and domestic counter-subversion became common, the number of special forces units swelled, and the South African Defence Force (SADF) had amassed enough sophisticated conventional weaponry to pose a serious threat to the “front-line states”, a regional alliance of neighbouring countries opposed to apartheid.
Foreign military operations
Total Strategy was advanced in the context of MK, PLAN, and Azanian People’s Liberation Army (APLA) guerrilla raids into South Africa or against South African targets in South West Africa; frequent South African reprisal attacks on these movements’ external bases in Angola, Zambia, Mozambique, Zimbabwe, Botswana, and elsewhere, often involving collateral damage to foreign infrastructure and civilian populations; and periodic complaints brought before the international community about South African violations of its neighbours’ sovereignty.
The apartheid government made judicious use of extraterritorial operations to eliminate its military and political opponents, arguing that neighbouring states, including their civilian populations, which hosted, tolerated on their soil, or otherwise sheltered anti-apartheid insurgent groups could not evade responsibility for provoking retaliatory strikes. While it did focus on militarizing the borders and sealing up its domestic territory against insurgent raids, it also relied heavily on an aggressive preemptive and counter-strike strategy, which fulfilled a preventive and deterrent purpose. The reprisals which occurred beyond South Africa’s borders involved not only hostile states but neutral and sympathetic governments as well, often forcing them to react against their will and interests.
External South African military operations were aimed at eliminating the training facilities, safehouses, infrastructure, equipment, and manpower of the insurgents. However, their secondary objective was to dissuade neighbouring states from offering sanctuary to MK, PLAN, APLA, and similar organisations. This was accomplished by deterring the supportive foreign population from cooperating with infiltration and thus undermining the insurgents’ external sanctuary areas. It would also send a clear message to the host government that collaborating with insurgent forces involved potentially high costs.
The scale and intensity of foreign operations varied and ranged from small special forces units carrying out raids on locations across the border which served as bases for insurgent infiltration to major conventional offensives involving armour, artillery, and aircraft. Actions such as Operation Protea in 1981 and Operation Askari in 1983 involved both full-scale conventional warfare and a counter-insurgency reprisal operation. The insurgent bases were usually situated near military installations of the host government so that SADF retaliatory strikes hit those facilities as well and attracted international attention and condemnation of what was perceived as aggression against the armed forces of another sovereign state. This would inevitably result in major engagements, in which the SADF’s expeditionary units would have to contend with the firepower of the host government’s forces. Intensive conventional warfare of this nature carried the risk of severe casualties among white soldiers, which had to be kept to a minimum for political reasons. There were also high economic and diplomatic costs associated with openly deploying large numbers of South African troops into another country. Furthermore, military involvement on that scale had the potential to evolve into wider conflict situations, in which South Africa became entangled. For example, South Africa’s activities in Angola, initially limited to containing PLAN, later escalated to direct involvement in the Angolan Civil War.
As it became clearer that full-scale conventional operations could not effectively fulfil the requirements of a regional counter-insurgency effort, South Africa turned to a number of alternative methods. Retributive artillery bombardments were the least sophisticated means of reprisal against insurgent attacks. Between 1978 and 1979 the SADF directed artillery fire against locations in Angola and Zambia from which insurgent rockets were suspected to have been launched. This precipitated several artillery duels with the Zambian Army. Special forces raids were launched to harass PLAN and MK by liquidating prominent members of those movements, destroying their offices and safehouses, and seizing valuable records stored at these sites. One example was the Gaborone Raid, carried out in 1985, during which a South African special forces team crossed the border into Botswana and demolished four suspected MK safe houses, severely damaging another four. Other types of special forces operations included the sabotage of economic infrastructure. The SADF sabotaged infrastructure being used for the insurgents’ war effort; for example, port facilities in southern Angola’s Moçâmedes District, where Soviet arms were frequently offloaded for PLAN, as well as the railway line which facilitated their transport to PLAN headquarters in Lubango, were common targets. Sabotage was also used as a pressure tactic when South Africa was negotiating with a host government to cease providing sanctuary to insurgent forces, as in the case of Operation Argon. Successful sabotage actions of high-profile economic targets undermined a country’s ability to negotiate from a position of strength and made it likelier to accede to South African demands rather than risk the expense of further destruction and war.
Also noteworthy were South African transnational espionage efforts, which included covert assassinations, kidnappings, and attempts to disrupt the overseas influence of anti-apartheid organisations. South African military intelligence agents were known to have abducted and killed anti-apartheid activists and others suspected of having ties to MK in London and Brussels.
During the 1980s the government, led by P.W. Botha, became increasingly preoccupied with security. It set up a powerful state security apparatus to “protect” the state against an anticipated upsurge in political violence that the reforms were expected to trigger. The 1980s became a period of considerable political unrest, with the government becoming increasingly dominated by Botha’s circle of generals and police chiefs (known as securocrats), who managed the various States of Emergencies.
Botha’s years in power were marked also by numerous military interventions in the states bordering South Africa, as well as an extensive military and political campaign to eliminate SWAPO in Namibia. Within South Africa, meanwhile, vigorous police action and strict enforcement of security legislation resulted in hundreds of arrests and bans, and an effective end to the African National Congress’ sabotage campaign.
The government punished political offenders brutally. Each year, 40,000 people were subjected to whipping as a form of punishment. The vast majority had committed political offences and were lashed ten times for their crime. If convicted of treason, a person could be hanged, and the government executed numerous political offenders in this way.
As the 1980s progressed, more and more anti-apartheid organisations were formed and affiliated with the UDF. Led by the Reverend Allan Boesak and Albertina Sisulu, the UDF called for the government to abandon its reforms and instead abolish the apartheid system and eliminate the homelands completely.
State of emergency
Serious political violence was a prominent feature from 1985 to 1989, as Black townships became the focus of the struggle between anti-apartheid organisations and the Botha government. Throughout the 1980s, township people resisted apartheid by acting against the local issues that faced their particular communities. The focus of much of this resistance was against the local authorities and their leaders, who were seen as supporting the government. By 1985, it had become the ANC’s aim to make Black townships “ungovernable” (a term later replaced by “people’s power”) by means of rent boycotts and other militant action. Numerous township councils were overthrown or collapsed, to be replaced by unofficial popular organisations, often led by militant youth. People’s courts were set up, and residents accused of being government agents were dealt extreme and occasionally lethal punishments. Black town councillors and policemen, and sometimes their families, were attacked with petrol bombs, beaten, and murdered by necklacing, in which a burning tyre was placed around the victim’s neck after the victim’s wrists had been bound with barbed wire. This signature act of torture and murder was embraced by the ANC and its leaders.
On 20 July 1985, Botha declared a State of Emergency in 36 magisterial districts. The areas affected were the Eastern Cape and the PWV region (“Pretoria, Witwatersrand, Vereeniging”); three months later, the Western Cape was included. An increasing number of organisations were banned or listed (restricted in some way), and many individuals had restrictions such as house arrest imposed on them. During this state of emergency, about 2,436 people were detained under the Internal Security Act, which gave the police and the military sweeping powers. The government could implement curfews controlling the movement of people, and the president could rule by decree without referring to the constitution or to parliament. It became a criminal offence to threaten someone verbally, to possess documents that the government perceived to be threatening, to advise anyone to stay away from work or to oppose the government, or to disclose the name of anyone arrested under the State of Emergency before the government itself released that name, with up to ten years’ imprisonment for these offences. Detention without trial became a common feature of the government’s reaction to growing civil unrest, and by 1988, 30,000 people had been detained. The media was censored, thousands were arrested, and many were interrogated and tortured.
On 12 June 1986, four days before the tenth anniversary of the Soweto uprising, the state of emergency was extended to cover the whole country. The government amended the Public Security Act, including the right to declare “unrest” areas, allowing extraordinary measures to crush protests in these areas. Severe censorship of the press became a dominant tactic in the government’s strategy and television cameras were banned from entering such areas. The state broadcaster, the South African Broadcasting Corporation (SABC), provided propaganda in support of the government. Media opposition to the system increased, supported by the growth of a pro-ANC underground press within South Africa.
In 1987, the State of Emergency was extended for another two years. Meanwhile, about 200,000 members of the National Union of Mineworkers commenced the longest strike (three weeks) in South African history. The year 1988 saw the banning of the activities of the UDF and other anti-apartheid organisations.
Much of the violence in the late 1980s and early 1990s was directed at the government, but a substantial amount occurred between residents themselves. Many died in violence between members of Inkatha and the UDF-ANC faction. It was later proven that the government manipulated the situation by supporting one side or the other whenever it suited them. Government agents assassinated opponents within South Africa and abroad, and undertook cross-border army and air-force attacks on suspected ANC and PAC bases. The ANC and the PAC in return detonated bombs at restaurants, shopping centres and government buildings such as magistrates’ courts. Between 1960 and 1994, according to statistics from the Truth and Reconciliation Commission, the Inkatha Freedom Party was responsible for 4,500 deaths, South African security forces for 2,700 deaths, and the ANC for 1,300 deaths.
The state of emergency continued until 1990 when it was lifted by State President F.W. de Klerk.
Final years of apartheid
Apartheid developed from the racism of colonial factions and from South Africa’s “unique industrialisation”. The policies of industrialisation led to the segregation and classing of people, which was “specifically developed to nurture early industry such as mining”. Cheap labour was the basis of the economy, drawn from what the state classed as peasant groups and from migrants. Furthermore, Philip Bonner highlights the “contradictory economic effects”: because the economy lacked a manufacturing sector, the system promoted short-term profitability but limited labour productivity and the size of local markets. This also contributed to apartheid’s collapse; as Clarkes emphasises, “the economy could not provide and compete with foreign rivals as they failed to master cheap labour and complex chemistry”.
The contradictions in the traditionally capitalist economy of the apartheid state led to considerable debate about racial policy, and to division and conflict in the central state. To a large extent, the political ideology of apartheid had emerged from the colonisation of Africa by European powers, which institutionalised racial discrimination and exercised a paternal philosophy of “civilising inferior natives”. Some scholars have argued that this can be seen reflected in Afrikaner Calvinism, with its parallel traditions of racialism; as early as 1933, for example, the executive council of the Broederbond formulated a recommendation for mass segregation.
External Western influence, arising from European experiences in colonisation, may be seen as a factor which greatly influenced political attitudes and ideology. Late twentieth-century South Africa was cited as an “unreconstructed example of western civilisation twisted by racism”.
In the 1960s, South Africa experienced economic growth second only to that of Japan. Trade with Western countries grew, and investment from the United States, France, and the United Kingdom poured in.
In 1974, resistance to apartheid was encouraged by Portuguese withdrawal from Mozambique and Angola, after the 1974 Carnation Revolution. South African troops withdrew from Angola early in 1976, failing to prevent the MPLA from gaining power there, and Black students in South Africa celebrated.
The Mahlabatini Declaration of Faith, signed by Mangosuthu Buthelezi and Harry Schwarz in 1974, enshrined the principles of peaceful transition of power and equality for all. Its purpose was to provide a blueprint for government of South Africa by consent and for racial peace in a multi-racial society, stressing opportunity for all, consultation, the federal concept, and a Bill of Rights. It caused a split in the United Party that ultimately realigned oppositional politics in South Africa with the formation of the Progressive Federal Party in 1977. The Declaration was the first of several such joint agreements by acknowledged Black and White political leaders in South Africa.
In 1978, the National Party Defence Minister, Pieter Willem Botha, became Prime Minister. His white minority regime worried about Soviet aid to revolutionaries in South Africa at the same time that South African economic growth had slowed. The South African Government noted that it was spending too much money to maintain segregated homelands created for Blacks, and the homelands were proving to be uneconomical.
Nor was maintaining Blacks as third-class citizens working well. Black labour remained vital to the economy, and illegal Black labour unions were flourishing. Many Blacks remained too poor to contribute significantly to the economy through their purchasing power, although they composed more than 70% of the population. Botha’s regime feared that an antidote was needed to prevent Blacks from being attracted to Communism.
In July 1979, the Nigerian Government alleged that the Shell-BP Petroleum Development Company of Nigeria Limited (SPDC) was selling Nigerian oil to South Africa, though there was little evidence or commercial logic for such sales. The alleged sanctions-breaking was used to justify the seizure of some of BP’s assets in Nigeria including their stake in SPDC, although it appears the real reasons were economic nationalism and domestic politics ahead of the Nigerian elections. Many South Africans attended schools in Nigeria, and Nelson Mandela acknowledged the role of Nigeria in the struggle against apartheid on several occasions.
In the 1980s, anti-apartheid movements in the United States and Europe were gaining support for boycotts against South Africa, for the withdrawal of US companies from South Africa, and for the release of the imprisoned Nelson Mandela. South Africa was becoming an outcast in the international community. Investment in South Africa was ending and an active policy of disinvestment had begun.
In the early 1980s, Botha’s National Party government started to recognise that reform of the apartheid system was inevitable. Early reforms were driven by a combination of internal violence, international condemnation, changes within the National Party’s constituency, and changing demographics – whites constituted only 16% of the total population, compared with 20% fifty years earlier.
In 1983, a new constitution was passed implementing what was called the Tricameral Parliament, giving Coloureds and Indians voting rights and parliamentary representation in separate houses – the House of Assembly (178 members) for Whites, the House of Representatives (85 members) for Coloureds and the House of Delegates (45 members) for Indians. Each House handled laws pertaining to its racial group’s “own affairs”, including health, education and other community issues. All laws relating to “general affairs” (matters such as defence, industry, taxation and Black affairs) were handled by a Cabinet made up of representatives from all three houses. However, the White chamber had a large majority on this cabinet, ensuring that effective control of the country remained in the hands of the White minority. Blacks, although making up the majority of the population, were excluded from representation; they remained nominal citizens of their homelands. The first Tricameral elections were largely boycotted by Coloured and Indian voters, amid widespread rioting.
Reforms and contact with the ANC under Botha
Concerned over Mandela’s popularity, Botha denounced him as an arch-Marxist committed to violent revolution; but to appease Black opinion and nurture Mandela as a benevolent leader of Blacks, the government transferred him from the maximum-security Robben Island to the lower-security Pollsmoor Prison just outside Cape Town, where prison life was more comfortable for him. The government allowed Mandela more visitors, including visits and interviews by foreigners, to let the world know that he was being treated well.
Black homelands were declared nation-states and pass laws were abolished. Black labour unions were legitimised, the government recognised the right of Blacks to live in urban areas permanently, and Blacks were given property rights there. Interest was expressed in rescinding the laws against interracial marriage and against sexual relations between different races, which were under ridicule abroad. Spending on Black schools increased to one-seventh of what was spent per White child, up from one-sixteenth in 1968. At the same time, attention was given to strengthening the effectiveness of the police apparatus.
In January 1985, Botha addressed the government’s House of Assembly and stated that the government was willing to release Mandela on condition that Mandela pledge opposition to acts of violence to further political objectives. Mandela’s reply was read in public by his daughter Zinzi – his first words distributed publicly since his sentence to prison twenty-one years before. Mandela described violence as the responsibility of the apartheid regime and said that with democracy there would be no need for violence. The crowd listening to the reading of his speech erupted in cheers and chants. This response helped to further elevate Mandela’s status in the eyes of those, both internationally and domestically, who opposed apartheid.
Between 1986 and 1988, some petty apartheid laws were repealed, along with the pass laws. Botha told White South Africans to “adapt or die”, and twice he wavered on the eve of what were billed as “Rubicon” announcements of substantial reforms, on both occasions backing away from substantial changes. Ironically, these reforms served only to trigger intensified political violence through the remainder of the 1980s, as more communities and political groups across the country joined the resistance movement. Botha’s government stopped short of substantial reforms, such as lifting the ban on the ANC, PAC, SACP and other liberation organisations, releasing political prisoners, or repealing the foundation laws of grand apartheid. The government’s stance was that it would not contemplate negotiating until those organisations “renounced violence”.
By 1987, South Africa’s economy was growing at one of the lowest rates in the world, and the ban on South African participation in international sporting events was frustrating many whites in South Africa. Examples of African states with Black leaders and White minorities existed in Kenya and Zimbabwe. Whispers that South Africa would one day have a Black president drove more hardline whites towards right-wing political parties. Mandela was moved to a four-bedroom house of his own, with a swimming pool and shaded by fir trees, on a prison farm just outside Cape Town. He had an unpublicised meeting with Botha, who impressed Mandela by walking forward, extending his hand and pouring Mandela’s tea. The two had a friendly discussion, with Mandela comparing the African National Congress’ rebellion with that of the Afrikaner rebellion and talking about everyone being brothers.
A number of clandestine meetings were held between the ANC-in-exile and various sectors of the internal struggle, such as women and educationalists. More overtly, a group of White intellectuals met the ANC in Senegal for talks known as the Dakar Conference.
Presidency of F. W. de Klerk
Early in 1989, Botha suffered a stroke; he was prevailed upon to resign in February 1989. He was succeeded as president later that year by F.W. de Klerk. Despite his initial reputation as a conservative, de Klerk moved decisively towards negotiations to end the political stalemate in the country. Prior to his term in office, de Klerk had already experienced political success as a result of the power base he had built in the Transvaal, where he served as chairman of the provincial National Party, which supported the apartheid regime. The shift in de Klerk’s ideology regarding apartheid is seen clearly in his opening address to parliament on 2 February 1990, in which he announced that he would repeal discriminatory laws and lift the 30-year ban on leading anti-apartheid groups such as the African National Congress, the Pan Africanist Congress, the South African Communist Party (SACP) and the United Democratic Front. The Land Act was brought to an end. De Klerk also made his first public commitment to release Nelson Mandela, to restore press freedom and to suspend the death penalty. Media restrictions were lifted and political prisoners not guilty of common-law crimes were released.
On 11 February 1990, Nelson Mandela was released from Victor Verster Prison after more than 27 years behind bars.
Having been instructed by the UN Security Council to end its long-standing involvement in South West Africa/Namibia, and facing military stalemate in southern Angola and the escalating size and cost of the combat against Cuban, Angolan, and SWAPO forces in the border war, South Africa negotiated a change of control; Namibia became independent on 21 March 1990.
Apartheid was dismantled in a series of negotiations from 1990 to 1991, culminating in a transitional period which resulted in the country’s 1994 general election, the first in South Africa held with universal suffrage.
In 1990, negotiations began in earnest, with two meetings between the government and the ANC. The purpose of the negotiations was to pave the way for talks towards a peaceful transition to majority rule. These meetings succeeded in laying down the preconditions for negotiations, despite the considerable tensions still abounding within the country. Apartheid legislation was abolished in 1991.
At the first meeting, the NP and ANC discussed the conditions for negotiations to begin. The meeting was held at Groote Schuur, the President’s official residence. They released the Groote Schuur Minute, which said that before negotiations commenced political prisoners would be freed and all exiles allowed to return.
There were fears that the change of power would be violent. To avoid this, it was essential that a peaceful resolution between all parties be reached. In December 1991, the Convention for a Democratic South Africa (CODESA) began negotiations on the formation of a multiracial transitional government and a new constitution extending political rights to all groups. CODESA adopted a Declaration of Intent and committed itself to an “undivided South Africa”.
Reforms and negotiations to end apartheid led to a backlash among the right-wing White opposition, leading to the Conservative Party winning a number of by-elections against NP candidates. De Klerk responded by calling a Whites-only referendum in March 1992 to decide whether negotiations should continue. 68% voted in favour, and the victory instilled in de Klerk and the government a lot more confidence, giving the NP a stronger position in negotiations.
When negotiations resumed in May 1992, under the tag of CODESA II, stronger demands were made. The ANC and the government could not reach a compromise on how power should be shared during the transition to democracy. The NP wanted to retain a strong position in a transitional government, and the power to change decisions made by parliament.
Persistent violence added to the tension during the negotiations. This was due mostly to the intense rivalry between the Inkatha Freedom Party (IFP) and the ANC and the eruption of some traditional tribal and local rivalries between the Zulu and Xhosa historical tribal affinities, especially in the Southern Natal provinces. Although Mandela and Buthelezi met to settle their differences, they could not stem the violence. One of the worst cases of ANC-IFP violence was the Boipatong massacre of 17 June 1992, when 200 IFP militants attacked the Gauteng township of Boipatong, killing 45. Witnesses said that the men had arrived in police vehicles, supporting claims that elements within the police and army contributed to the ongoing violence. Subsequent judicial inquiries found the evidence of the witnesses to be unreliable or discredited, and that there was no evidence of National Party or police involvement in the massacre. When de Klerk visited the scene of the incident he was initially warmly welcomed, but he was suddenly confronted by a crowd of protesters brandishing stones and placards. The motorcade sped from the scene as police tried to hold back the crowd. Shots were fired by the police, and the PAC stated that three of its supporters had been gunned down. Nonetheless, the Boipatong massacre offered the ANC a pretext to engage in brinkmanship. Mandela argued that de Klerk, as head of state, was responsible for bringing an end to the bloodshed. He also accused the South African police of inciting the ANC-IFP violence. This formed the basis for ANC’s withdrawal from the negotiations, and the CODESA forum broke down completely at this stage.
The Bisho massacre on 7 September 1992 brought matters to a head. The Ciskei Defence Force killed 29 people and injured 200 when they opened fire on ANC marchers demanding the reincorporation of the Ciskei homeland into South Africa. In the aftermath, Mandela and de Klerk agreed to meet to find ways to end the spiralling violence. This led to a resumption of negotiations.
Right-wing violence also added to the hostilities of this period. The assassination of Chris Hani on 10 April 1993 threatened to plunge the country into chaos. Hani, the popular General Secretary of the South African Communist Party (SACP), was assassinated in 1993 in Dawn Park in Johannesburg by Janusz Waluś, an anti-Communist Polish refugee who had close links to the White nationalist Afrikaner Weerstandsbeweging (AWB). Hani enjoyed widespread support beyond his constituency in the SACP and ANC and had been recognised as a potential successor to Mandela; his death brought forth protests throughout the country and across the international community, but ultimately proved a turning point, after which the main parties pushed for a settlement with increased determination. On 25 June 1993, the AWB used an armoured vehicle to crash through the doors of the Kempton Park World Trade Centre where talks were still going ahead under the Negotiating Council, though this did not derail the process.
In addition to the continuing “black-on-black” violence, there were a number of attacks on white civilians by the PAC’s military wing, the Azanian People’s Liberation Army (APLA). The PAC was hoping to strengthen their standing by attracting the support of the angry, impatient youth. In the St James Church massacre on 25 July 1993, members of the APLA opened fire in a church in Cape Town, killing 11 members of the congregation and wounding 58.
In 1993, de Klerk and Mandela were jointly awarded the Nobel Peace Prize “for their work for the peaceful termination of the apartheid regime, and for laying the foundations for a new democratic South Africa”.
Violence persisted right up to the 1994 general election. Lucas Mangope, leader of the Bophuthatswana homeland, declared that it would not take part in the elections. It had been decided that, once the temporary constitution had come into effect, the homelands would be incorporated into South Africa, but Mangope did not want this to happen. There were strong protests against his decision, leading to a coup d’état in Bophuthatswana on 10 March that deposed Mangope, despite the intervention of white right-wingers hoping to maintain him in power. Three AWB militants were killed during this intervention, and harrowing images were shown on national television and in newspapers across the world.
Two days before the election, a car bomb exploded in Johannesburg, killing nine people. The day before the elections, another one went off, injuring 13. At midnight on 26–27 April 1994 the old flag was lowered, and the old (now co-official) national anthem Die Stem (“The Call”) was sung, followed by the raising of the new rainbow flag and singing of the other co-official anthem, Nkosi Sikelel’ iAfrika (“God Bless Africa”).
The election was held on 27 April 1994 and went off peacefully throughout the country as 20,000,000 South Africans cast their votes. There was some difficulty in organising the voting in rural areas, but people waited patiently for many hours to vote amidst a palpable feeling of goodwill. An extra day was added to give everyone the chance. International observers agreed that the elections were free and fair. The European Union’s report on the election compiled at the end of May 1994, published two years after the election, criticised the Independent Electoral Commission’s lack of preparedness for the polls, the shortages of voting materials at many voting stations, and the absence of effective safeguards against fraud in the counting process. In particular, it expressed disquiet that “no international observers had been allowed to be present at the crucial stage of the count when party representatives negotiated over disputed ballots.” This meant that both the electorate and the world were “simply left to guess at the way the final result was achieved.”
The ANC won 62.65% of the vote, less than the 66.7 percent that would have allowed it to rewrite the constitution. 252 of the 400 seats went to members of the African National Congress. The NP captured most of the White and Coloured votes and became the official opposition party. As well as deciding the national government, the election decided the provincial governments, and the ANC won in seven of the nine provinces, with the NP winning in the Western Cape and the IFP in KwaZulu-Natal. On 10 May 1994, Mandela was sworn in as the new President of South Africa. The Government of National Unity was established, its cabinet made up of 12 ANC representatives, six from the NP, and three from the IFP. Thabo Mbeki and de Klerk were made deputy presidents.
The anniversary of the elections, 27 April, is celebrated as a public holiday known as Freedom Day.
The following individuals, who had previously supported apartheid, made public apologies:
- F. W. de Klerk: “I apologise in my capacity as leader of the NP to the millions who suffered wrenching disruption of forced removals; who suffered the shame of being arrested for pass law offences; who over the decades suffered the indignities and humiliation of racial discrimination.”
- Marthinus van Schalkwyk: “The National Party brought development to a section of South Africa, but also brought suffering through a system grounded on injustice”, in a statement shortly after the National Party voted to disband.
- Adriaan Vlok washed the feet of apartheid victim Frank Chikane in an act of apology for the wrongs of the Apartheid regime.
- Leon Wessels: “I am now more convinced than ever that apartheid was a terrible mistake that blighted our land. South Africans did not listen to the laughing and the crying of each other. I am sorry that I had been so hard of hearing for so long”.
International legal, political and social uses of the term
The South African experience has given rise to the term “apartheid” being used in a number of contexts other than the South African system of racial segregation. For example: The “crime of apartheid” is defined in international law, including in the 2002 Rome Statute that created the International Criminal Court (ICC), which names it as a crime against humanity. Even before the creation of the ICC, the International Convention on the Suppression and Punishment of the Crime of Apartheid of the United Nations, which came into force in 1976, enshrined into law the “crime of apartheid.”
The term apartheid has been co-opted by Palestinian and Anti-Zionist organisations, referring to occupation in the West Bank, legal treatment of illegal settlements and the West Bank barrier.
Social apartheid is segregation on the basis of class or economic status. For example, social apartheid in Brazil refers to the various aspects of economic inequality in Brazil. Social apartheid may fall into various categories. Economic and social discrimination because of gender is sometimes referred to as gender apartheid. Separation of people according to their religion, whether pursuant to official laws or pursuant to social expectations, is sometimes referred to as religious apartheid. The treatment of non-Muslims and women by the Saudi rulers has been called apartheid.
The concept in occupational therapy that individuals, groups and communities can be deprived of meaningful and purposeful activity through segregation due to social, political, economic factors and for social status reasons, such as race, disability, age, gender, sexuality, religious preference, political preference, or creed, or due to war conditions, is sometimes known as occupational apartheid.
A 2007 book by Harriet A. Washington on the history of medical experimentation on African Americans is entitled Medical Apartheid.
The disproportionate management and control of the world’s economy and resources by countries and companies of the Global North is referred to as global apartheid. A related phenomenon is technological apartheid, a term used to describe the denial of modern technologies to Third World or developing nations. These last two examples use the term “apartheid” less literally, since they are centred on relations between countries, not on disparate treatment of social populations within a country or political jurisdiction.
The pre-Columbian history of the territory now comprising contemporary Mexico is known through the work of archaeologists and epigraphers, and through the accounts of the conquistadors, clergymen, and indigenous chroniclers of the immediate post-conquest period. While relatively few documents (or codices) of the Mixtec and Aztec cultures of the Post-Classic period survived the Spanish conquest, more progress has been made in the area of Mayan archaeology and epigraphy.
Human presence in the Mexican region was once thought to date back 40,000 years based upon what were believed to be ancient human footprints discovered in the Valley of Mexico, but after further investigation using radioactive dating, it appears this is untrue. It is currently unclear whether 21,000-year-old campfire remains found in the Valley of Mexico are the earliest human remains in Mexico. Indigenous peoples of Mexico began to selectively breed maize plants around 8000 BC. Evidence shows a marked increase in pottery working by 2300 BC and the beginning of intensive corn farming between 1800 and 1500 BC.
Between 1800 and 300 BC, complex cultures began to form. Many matured into advanced pre-Columbian Mesoamerican civilizations such as the Olmec, Izapa, Teotihuacan, Maya, Zapotec, Mixtec, Huastec, Purépecha, Totonac, Toltec and Aztec, which flourished for nearly 4,000 years before the first contact with Europeans.
Archaic inscriptions on rocks and rock walls all over northern Mexico (especially in the state of Nuevo León) demonstrate an early propensity for counting in Mexico. These very early and ancient count-markings were associated with astronomical events and underscore the influence that astronomical activities had upon Mexican natives, even before they possessed urbanization.
In fact, many of the later Mexican-based civilizations would carefully build their cities and ceremonial centers according to specific astronomical events. Astronomy and the notion of human observation of celestial events would become central factors in the development of religious systems, writing systems, fine arts, and architecture.
Prehistoric Mexican astronomers began a tradition of precise observing, recording, and commemorating astronomical events that later became a hallmark of Mexican civilized achievements. Cities would be founded and built on astronomical principles, leaders would be appointed on celestial events, wars would be fought according to solar calendars, and a complex theology using astronomical metaphors would organize the daily lives of millions of people.
At different points in time, three Mexican cities (Teotihuacan, Tenochtitlan, and Cholula) were among the largest cities in the world. These cities and several others blossomed as centers of commerce, ideas, ceremonies, and theology. In turn, they radiated influence outward into neighboring cultures in central Mexico.
At its height, Oasisamerica covered part of the present-day Mexican states of Chihuahua, Sonora and Baja California, as well as the U.S. states of Arizona, Utah, New Mexico, Colorado, Nevada, and parts of California. Cultural groups that flourished partially within the borders of modern-day Mexico include the Mogollon, Patayan, and Hohokam. These Oasisamerica civilizations maintained close ties with those of Mesoamerica, evidenced by turquoise trade, macaws, copper, cacao, and cultural exchange. For example, in Paquimé, a site connected to the Mogollon culture, there have been found ceremonial structures related to Mesoamerican religion, similar to the juego de pelota.
While many city-states, kingdoms, and empires competed with one another for power and prestige, Mexico can be said to have had five major civilizations: The Olmec, Teotihuacan, the Toltec, the Aztec and the Maya. These civilizations (with the exception of the politically fragmented Maya) extended their reach across Mexico, and beyond, like no others. They consolidated power and distributed influence in matters of trade, art, politics, technology, and theology. Other regional power players made economic and political alliances with these five civilizations over the span of 3,000 years. Many made war with them. But almost all found themselves within these five spheres of influence.
The Olmec were an ancient Pre-Columbian people living in the tropical lowlands of south-central Mexico, roughly in what are the modern-day states of Veracruz and Tabasco on the Isthmus of Tehuantepec. Their immediate cultural influence, however, extends far beyond this region. The Olmec flourished during the Formative (or Preclassic) period, dating from 1400 BCE to about 400 BCE, and are believed to have been the progenitor civilization of later Mesoamerican civilizations.
The decline of the Olmec resulted in a power vacuum in Mexico. Emerging from that vacuum was Teotihuacan, first settled in 300 B.C. By AD 150, it had grown to become the first true metropolis of what is now called North America. Teotihuacan established a new economic and political order never before seen in Mexico. Its influence stretched across Mexico into Central America, founding new dynasties in the Mayan cities of Tikal, Copan, and Kaminaljuyú. Teotihuacan's influence over the Maya civilization cannot be overstated; it transformed political power, artistic depictions, and the nature of economics. Within the city of Teotihuacan was a diverse and cosmopolitan population.
Most of the regional ethnicities of Mexico were represented in the city, such as Zapotecs from the Oaxaca region. They lived in rural apartment communities where they worked their trades and contributed to the city's economic and cultural prowess. By AD 500, Teotihuacan had become one of the largest cities in the world. Teotihuacan's economic pull impacted areas in northern Mexico as well. It was a city whose monumental architecture reflected a new era in Mexican civilization, declining in political power about AD 650, but lasting in cultural influence for the better part of a millennium, to around AD 950.
Contemporary with Teotihuacan's greatness was the greatness of the Mayan civilization. The period between 250 and 650 saw an intense flourishing of Maya civilized accomplishments. While the many Maya city-states never achieved political unity on the order of the central Mexican civilizations, they exerted a tremendous intellectual influence upon Mexico. The Maya built some of the most elaborate cities on the continent, and made innovations in mathematics, astronomy, and writing that became the pinnacle of Mexico's scientific achievements.
Just as Teotihuacan had emerged from a power vacuum, so too did the Toltec civilization, which took the reins of cultural and political power in Mexico from about 700. The Toltec empire established contact as far south as Central America, and as far north as the Anasazi corn culture in the Southwestern United States. The Toltec established a prosperous turquoise trade route with the northern civilization of Pueblo Bonito, in modern-day New Mexico. Toltec traders would trade prized bird feathers with Pueblo Bonito, while circulating all the finest wares that Mexico had to offer among their immediate neighbors. The Mayan city of Chichen Itza was also in contact with the Toltec civilization and was powerfully influenced by central Mexicans. The Toltec political system was so influential that many future Mesoamerican dynasties would later claim to be of Toltec descent. In fact, it was this prized Toltec lineage that would set the stage for Mesoamerica's last great indigenous civilization.
With the decline of the Toltec civilization came political fragmentation in the Valley of Mexico, and into this new game of political contenders for the Toltec throne stepped outsiders: the Aztec. Newcomers to the Valley of Mexico, they were seen as crude and unrefined in the eyes of the existing Mesoamerican civilizations, such as the fallen Toltec empire.
Latecomers to Mexico's central plateau, the Aztecs thought of themselves as heirs to the prestigious civilizations that had preceded them, much as Charlemagne did with respect to the fallen Roman Empire. What the Aztecs lacked in political power, they made up for with ambition and military skill.
In 1428, the Aztecs led a war of liberation against their rulers from the city of Azcapotzalco, which had subjugated most of the Valley of Mexico's peoples. The revolt was successful, and the Aztecs, through cunning political maneuvers and ferocious fighting skills, managed to pull off a true "rags-to-riches" story: they became the rulers of central Mexico as the leaders of the Triple Alliance.
This Alliance was composed of the city-states of Tenochtitlan, Texcoco, and Tlacopan. At their peak, 350,000 Aztecs presided over a wealthy tribute-empire comprising around 10 million people, almost half of Mexico's then-estimated population of 24 million. This empire stretched from ocean to ocean, and extended into Central America. The westward expansion of the empire was stopped cold by a devastating military defeat at the hands of the Purépecha (who possessed state-of-the-art copper-metal weapons). The empire relied upon a system of taxation (of goods and services) which was collected through an elaborate bureaucracy of tax collectors, courts, civil servants, and local officials who were installed as loyalists to the Triple Alliance (led by Tenochtitlan).
The empire was primarily economic in nature, and the Triple Alliance grew very rich: libraries were built, monumental architecture was constructed, and a highly prestigious artistic and priestly class was cultivated. All of this created a "First World" aura of invincibility around the island-city of Tenochtitlan. Unlike the later Spanish, the Aztecs did not seek to "convert" or destroy the cultures they conquered. Quite the opposite: the engines of warfare and empire in Central Mexico required that all participants understand and accept common cultural "rules" in order to make the flow of imperial wealth as smooth as possible. The rules of empire in Mexico were old rules, understood by all the power players and "contenders to the throne," as had been shown many times before (the kingdom of Tlaxcala would attempt its own power grab in 1519 by using the Spanish as mercenary-allies).
By 1519, the Aztec capital, Mexico-Tenochtitlan, was among the largest cities in the world with a population of around 350,000 (although some estimates range as high as 500,000). Beijing at the same time had a population variously estimated to be 670,000 up to one million people. By comparison, the population of London in 1519 was 80,000 people. Tenochtitlan is the site of modern-day Mexico City.
Allies of the Aztecs
In the formation of the Triple Alliance empire, the Aztecs established several ally states. Among them were Cholula (the site of an early massacre by Spaniards), Texcoco (the site of a major library, subsequently burned by the Spanish), Tlacopan, and Matatlan. Also, many of the kingdoms conquered by the Aztecs provided soldiers for further imperial campaigns, such as Culhuacan, Xochimilco, Tepeacac, Amecameca, Coaixtlahuacan, Cuetlachtlan, and Ahuilizipan. The Aztec war machine would become multi-ethnic, comprising soldiers from conquered areas, led by a large core of Aztec warriors and officers. This same strategy would later be employed by the Spaniards.
Legacy of the Aztecs
The Aztecs left a durable stamp upon modern Mexican culture. Much of what is considered modern Mexican culture derives from the Aztec civilization: place-names, words, food, art, dress, symbols, and even the name "Mexican". (See also Origin and history of the name "Mexico-Tenochtitlan").
Mexico City as the capital
Today, the Aztec's capital city of Mexico-Tenochtitlan survives in modern times as Mexico City, the capital of the modern nation of Mexico. Mexico City is the largest metropolitan area in the Western Hemisphere (and second-largest in the world following Tokyo, Japan).
In their haste to colonize, the Spanish initially retained much of the original layout of the city of Tenochtitlan, reflected today in the various city districts (barrios) and in the central precinct of the Zócalo (formerly the ceremonial center of Tenochtitlan). Many streets and boulevards lay along the same paths as the previous water canals of Tenochtitlan. Several pyramids and ruins even remain unexcavated within the urban sprawl of the city. Over the two centuries following the conquest, the lakes of the valley were drained, drastically changing the landscape. The former island city was now able to spread over a dry plain. Only small remnants of the old canal city remain, such as in the celebrated flower district of Xochimilco. Today, Mexico City incorporates over 25 million people, whereas in 1519, that number was 500 thousand.
Food and cuisine
Foods originating from Mexico
Mexico is a megadiverse country. As such, many ingredients commonly consumed by today's people worldwide originate from Mexico, and the names of the various foods come originally from Nahuatl. Examples of such ingredients are: chocolate, tomato, maize (corn), vanilla, avocado, guava, chayote, epazote, camote, jícama, tejocote, nopal, huitlacoche, zapote, mamey zapote, and many varieties of modern beans.
The majority of Mexico's cuisine is of indigenous origin and is based on the ingredients listed above:
- corn enters into the composition of tortillas, tamales, pozole, and enchiladas
- avocado is the principal ingredient of guacamole
- chocolate is used in mole and atole
These foods continue to make up the core of Mexican cuisine today.
Because the Mexica spoke Nahuatl (the most common language at the time of Spanish arrival) their terms and names were widespread as descriptors of cities, regions, valleys, rivers, mountains, and many cultural objects. The Spanish used Nahuatl translators as they waged wars of conquest throughout Mexico and beyond. As a result, Nahuatl names were used as geographic identifiers as far away as Guatemala and the northern state of Coahuila on the southern Texas border. Numerous words from the Nahuatl language are today interspersed within Mexican Spanish. These words are used to describe geography, foods, colloquialisms, and first names for people (e.g., Xochitl, "flower," for females and Tenoch for males).
Today, approximately 1.5 million people continue to speak the Nahuatl language. Recent years have seen a resurgence of interest in learning Nahuatl by Spanish-speaking and English-speaking Mexicans at-large. Some Mexican-American activists have portrayed Nahuatl language as a path to claiming an identity that is not European-based or Anglo-derivative (i.e. "Hispanic", "Latino", or "American").
Modern flag of Mexico
According to the official story of Mexico, the coat of arms of Mexico was inspired by an Aztec legend about the founding of Tenochtitlan. The Aztecs, then a nomadic tribe, were wandering throughout Mexico in search of a sign that would indicate the precise spot on which they should build their capital. Their god Huitzilopochtli had commanded them to find an eagle devouring a snake, perched on top of a cactus that grew on a rock submerged in a lake. After two hundred years of wandering, they found the promised sign on a small island in the swampy lake of Texcoco. It was there that they founded their new capital, Tenochtitlan, also known as Mexico.
Art and symbols
Mexican art has inspired generations of Mexican-descent artists, both inside and outside of Mexico's modern borders. Images of pyramids, the "Aztec calendar", and armed indigenous warriors have been popular themes. Also popular have been zig-zag motifs (found on indigenous buildings and pottery) and the theological notion of The Four Directions (found among indigenous cultures across the Western Hemisphere). In recent years, there has been a resurgence of interest in the ceremonies and art of the Day of the Dead. The art, architecture, and symbols of the Mexica civilization exert such a unique identity that they are commonly used in advertisements for tourism to Mexico.
For much of its history, the majority of Mexico's population lived an urban lifestyle: cities, towns, and villages. Only a fraction of the population was tribal and wandering. Most people were permanently settled, agriculturally based, and identified with an urban identity, as opposed to a tribal identity. Mexico has long been an urbanized land, which was graphically reflected in the writings of the Spaniards who encountered them. |
Python offers several ways to add elements to an existing list: the append(), extend(), and insert() methods, the + and * operators, and slice assignment. This part of the tutorial covers each of them and the differences between them.

The append() method adds a single item to the end of a list. It takes exactly one argument, modifies the list in place, and returns None; it does not create a new list. For example:

my_list = [1, 2, 3, 4]
my_list.append(5)    # my_list is now [1, 2, 3, 4, 5]

The new element is always added at the end (the length+1 position), and the length of the list increases by exactly one. The argument can be of any data type: an integer, a string, a float, or even another list. Note that if the incoming element is itself a list, it is appended as one single nested element, so the count of the original list still increases by only one. Note also that append() is a list method: dictionaries, tuples, sets, strings, and NumPy arrays have their own ways of adding elements.

The extend() method, by contrast, extends the list by appending each element from an iterable individually. The shorthand a += b behaves like extend() for lists. With the + operator, a new list is returned instead: space is allocated for the combined result, all the elements from the old list must be copied to the new list, and the new elements are added at the end. The * operator repeats a list a given number of times.

The insert() method adds a single item at a given position: list_name.insert(i, x) inserts x before index i. The first position is 0, and negative values count from the end, so -1 means one before the end; insert(len(a), x) is equivalent to append(x). You can also add items at a specific position with slices: if you specify a range using a slice and assign another list or tuple to it, all of its items are placed there, replacing whatever was in the specified range.

A very common pattern is to create an empty list and then append elements to it in a for loop. Two tips here: do not add or remove list elements while iterating over the same list, and if you want to keep the contents of the original list unchanged, copy the list first and then extend the copy:

list1 = [6, 52, 74, 62]
list2 = [85, 17, 81, 92]
result = list1.copy()   # list1 itself stays unchanged
result.extend(list2)    # result is [6, 52, 74, 62, 85, 17, 81, 92]

Two more things are worth knowing. First, if you append a dictionary to a list, Python stores a reference to that dictionary rather than a copy, so later changes to the dictionary will be visible through the list. Second, append() is the wrong tool for keeping a sorted list sorted, because it can only place elements at the end; instead, use binary search to find the correct position i and then list.insert(i, x), as shown in the sketch below. Finally, on performance: for large inputs, one call to extend() is generally faster than calling append() in a loop, because the elements are added in a batch rather than by calling the same method again and again.
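To make the differences concrete, here is a minimal, self-contained sketch (standard library only; the variable names are just illustrative):

# append() vs extend(): a list argument behaves very differently.
animals = ["cat", "dog"]
animals.append(["dolphin", "kangaroo"])   # added as ONE nested element
print(animals)            # ['cat', 'dog', ['dolphin', 'kangaroo']]

animals = ["cat", "dog"]
animals.extend(["dolphin", "kangaroo"])   # each element appended individually
print(animals)            # ['cat', 'dog', 'dolphin', 'kangaroo']

animals.insert(0, "mouse")                # insert at position 0
print(animals)            # ['mouse', 'cat', 'dog', 'dolphin', 'kangaroo']

# Keeping a sorted list sorted: binary search + insert, via the bisect module.
import bisect
numbers = [10, 20, 40]
i = bisect.bisect(numbers, 30)            # position where 30 belongs
numbers.insert(i, 30)
print(numbers)            # [10, 20, 30, 40]

As a shortcut, bisect.insort(numbers, 30) performs the search and the insert in a single call.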
Vector Details with Diagram
Today, I am going to explain some of the topics from the Higher Secondary Physics chapter Vector. We are going to cover some of the main questions and significant sections of the chapter Vectors.
There are two types of physical quantities in the world: vector quantities and scalar quantities. You might ask, what are those?
Physical quantities that have only magnitude but no direction are called scalar quantities. For example, mass, time, length, work, density, stress, etc. are scalar quantities.
Physical quantities that have both magnitude and direction are called vector quantities. For example, velocity, acceleration, displacement, weight, etc. are vector quantities.
Difference between Scalar Quantities and Vector Quantities:
| Sl. No | Scalar quantity | Vector quantity |
|---|---|---|
| 1. | Physical quantities that have only magnitude but no direction are called scalar quantities. | Physical quantities that have both magnitude and direction are called vector quantities. |
| 2. | Mass, time, length, work, density, stress, etc. are examples of scalar quantities. | Velocity, acceleration, displacement, weight, etc. are examples of vector quantities. |
| 3. | Scalar quantities can be added by the general algebraic rule. | Vector quantities cannot be added by the general algebraic rule. |
| 4. | The dot product of two vectors is a scalar quantity. | The cross product of two vectors is a vector quantity. |
| 5. | If neither of two scalar quantities is zero, their product can never be zero. | Even if neither of two vector quantities is zero, their product may be zero. |
| 6. | The resultant of two scalar quantities cannot be determined by the Parallelogram Law. | The resultant of two vector quantities acting at a point at an angle can be determined by the Parallelogram Law. |
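Rows 4 and 5 can be checked numerically. Below is a minimal sketch in plain Python (the dot and cross helpers are written out here for illustration, not taken from any library):

def dot(a, b):
    # Dot product of two 3D vectors: the result is a single number (a scalar).
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # Cross product of two 3D vectors: the result is another vector.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

u = [1, 0, 0]
v = [2, 0, 0]            # non-zero and parallel to u
w = [0, 1, 0]            # non-zero and perpendicular to u

print(dot(u, v))         # 2 -- a scalar
print(cross(u, v))       # [0, 0, 0] -- two non-zero vectors, yet a zero product
print(dot(u, w))         # 0 -- the dot product can also vanish for non-zero vectors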
To fully cover the chapter, we also need to know some other types of vectors.
Unit Vector:
If the modulus (magnitude) of a vector is one unit, the vector is called a unit vector. In other words, a vector having unit magnitude is called a unit vector. Any non-zero vector gives rise to a unit vector directed along the same direction as the vector: when a non-zero vector quantity is divided by its magnitude, the unit vector is obtained.
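As a quick worked example, the sketch below (plain Python, with made-up values) normalises a vector by dividing each component by the magnitude:

import math

def unit_vector(v):
    # Magnitude |v| = sqrt(vx^2 + vy^2 + vz^2); dividing by it gives a unit vector.
    magnitude = math.sqrt(sum(c * c for c in v))
    if magnitude == 0:
        raise ValueError("the zero (null) vector has no unit vector")
    return [c / magnitude for c in v]

v = [3.0, 4.0, 0.0]
u = unit_vector(v)
print(u)                                   # [0.6, 0.8, 0.0]
print(math.sqrt(sum(c * c for c in u)))    # 1.0 -- a unit vector's magnitude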
Zero or Null Vector:
If the magnitude of a vector is zero, it is called a zero or null vector. In other words, a vector whose directed line segment has coincident endpoints is known as a null or zero vector. For example, if A = B, then A – B = 0 is a null or zero vector.
Like Vector or parallel vector:
Two or more vectors of the same nature that are parallel to one another and directed in the same direction are known as like vectors.
Unlike Vector:
Two or more vectors of the same nature that are parallel to one another but directed in opposite directions are known as unlike vectors.
Co-linear Vector:
If two or more vectors are directed along the same line or parallel to one another, then the vectors are called Co-linear Vectors.
Position Vector or Radius vector:
Any vector representing the position of a particle with respect to a reference point of a reference frame is known as a position vector or radius vector.
Rectangular Unit Vector:
A set of unit vectors in a three-dimensional rectangular coordinate system directed along the positive X, Y, and Z axes, conventionally denoted by i, j, and k, are called rectangular unit vectors.
Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge about cats, for example, that they have fur, tails, whiskers and cat-like faces. Instead, they automatically generate identifying characteristics from the learning material that they process.
An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.
In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called 'edges'. Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.
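As a minimal sketch of this computation (the logistic sigmoid is used here as one common choice of non-linear function; the weights and inputs are made-up values):

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, passed through a non-linear function (sigmoid).
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Signals travel from the input layer, through a hidden layer, to the output.
x = [0.5, -1.2, 3.0]                                  # input layer
hidden = [neuron(x, [0.1, 0.4, -0.2], 0.0),           # hidden layer, 2 neurons
          neuron(x, [-0.3, 0.2, 0.5], 0.1)]
output = neuron(hidden, [0.7, -0.6], 0.0)             # output layer, 1 neuron
print(output)                                         # a real number in (0, 1)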
The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.
Warren McCulloch and Walter Pitts (1943) created a computational model for neural networks based on mathematics and algorithms called threshold logic. This model paved the way for neural network research to split into two approaches. One approach focused on biological processes in the brain while the other focused on the application of neural networks to artificial intelligence. This work led to work on nerve networks and their link to finite automata.
In the late 1940s, D. O. Hebb created a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. Hebbian learning is unsupervised learning. This evolved into models for long-term potentiation. Researchers started applying these ideas to computational models in 1948 with Turing's B-type machines. Farley and Clark (1954) first used computational machines, then called "calculators", to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956). Rosenblatt (1958) created the perceptron, an algorithm for pattern recognition. With mathematical notation, Rosenblatt described circuitry not in the basic perceptron, such as the exclusive-or circuit that could not be processed by neural networks at the time. In 1959, a biological model proposed by Nobel laureates Hubel and Wiesel was based on their discovery of two types of cells in the primary visual cortex: simple cells and complex cells. The first functional networks with many layers were published by Ivakhnenko and Lapa in 1965, becoming the Group Method of Data Handling.
Neural network research stagnated after machine learning research by Minsky and Papert (1969), who discovered two key issues with the computational machines that processed neural networks. The first was that basic perceptrons were incapable of processing the exclusive-or circuit. The second was that computers didn't have enough processing power to effectively handle the work required by large neural networks. Neural network research slowed until computers achieved far greater processing power. Much of artificial intelligence had focused on high-level (symbolic) models that are processed by using algorithms, characterized for example by expert systems with knowledge embodied in if-then rules, until in the late 1980s research expanded to low-level (sub-symbolic) machine learning, characterized by knowledge embodied in the parameters of a cognitive model.
A key trigger for renewed interest in neural networks and learning was Werbos's (1975) backpropagation algorithm that effectively solved the exclusive-or problem by making the training of multi-layer networks feasible and efficient. Backpropagation distributed the error term back up through the layers, by modifying the weights at each node.
Support vector machines and other, much simpler methods such as linear classifiers gradually overtook neural networks in machine learning popularity. However, using neural networks transformed some domains, such as the prediction of protein structures.
In 1992, max-pooling was introduced to provide invariance to small shifts and tolerance to deformation, aiding 3D object recognition. In 2010, backpropagation training through max-pooling was accelerated by GPUs and shown to perform better than other pooling variants.
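To make the max-pooling operation concrete, the following is a small illustrative sketch in Python with NumPy; the 2x2 window and the example feature map are assumptions made for the demonstration, not values from the text.

import numpy as np

def max_pool_2x2(feature_map):
    # Downsample a 2D feature map by taking the maximum over
    # non-overlapping 2x2 windows, giving tolerance to small shifts.
    h, w = feature_map.shape
    h2, w2 = h // 2, w // 2
    trimmed = feature_map[:h2 * 2, :w2 * 2]
    return trimmed.reshape(h2, 2, w2, 2).max(axis=(1, 3))

fm = np.array([[1, 3, 2, 0],
               [4, 2, 1, 1],
               [0, 1, 5, 6],
               [2, 2, 7, 8]], dtype=float)
print(max_pool_2x2(fm))   # [[4. 2.] [2. 8.]]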
The vanishing gradient problem affects many-layered feedforward networks that use backpropagation and also recurrent neural networks (RNNs). As errors propagate from layer to layer, they shrink exponentially with the number of layers, impeding the tuning of neuron weights that is based on those errors, particularly affecting deep networks.
To overcome this problem, Schmidhuber adopted a multi-level hierarchy of networks (1992) pre-trained one level at a time by unsupervised learning and fine-tuned by backpropagation. Behnke (2003) relied only on the sign of the gradient (Rprop) on problems such as image reconstruction and face localization.
Hinton et al. (2006) proposed learning a high-level representation using successive layers of binary or real-valued latent variables with a restricted Boltzmann machine to model each layer. Once sufficiently many layers have been learned, the deep architecture may be used as a generative model by reproducing the data when sampling down the model (an "ancestral pass") from the top level feature activations. In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos.
Earlier challenges in training deep neural networks were successfully addressed with methods such as unsupervised pre-training, while available computing power increased through the use of GPUs and distributed computing. Neural networks were deployed on a large scale, particularly in image and visual recognition problems. This became known as "deep learning".
Computational devices were created in CMOS, for both biophysical simulation and neuromorphic computing. Nanodevices for very large scale principal components analyses and convolution may create a new class of neural computing because they are fundamentally analog rather than digital (even though the first implementations may use digital devices). Ciresan and colleagues (2010) in Schmidhuber's group showed that despite the vanishing gradient problem, GPUs make backpropagation feasible for many-layered feedforward neural networks.
Between 2009 and 2012, recurrent neural networks and deep feedforward neural networks developed in Schmidhuber's research group won eight international competitions in pattern recognition and machine learning. For example, the bi-directional and multi-dimensional long short-term memory (LSTM) of Graves et al. won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three languages to be learned.
Ciresan and colleagues won pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition, the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge and others. Their neural networks were the first pattern recognizers to achieve human-competitive or even superhuman performance on benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem.
Researchers demonstrated (2010) that deep neural networks interfaced to a hidden Markov model with context-dependent states that define the neural network output layer can drastically reduce errors in large-vocabulary speech recognition tasks such as voice search.
GPU-based implementations of this approach won many pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition, the ISBI 2012 Segmentation of neuronal structures in EM stacks challenge, the ImageNet Competition and others.
Deep, highly nonlinear neural architectures similar to the neocognitron and the "standard architecture of vision", inspired by simple and complex cells, were pre-trained by unsupervised methods by Hinton. A team from his lab won a 2012 contest sponsored by Merck to design software to help find molecules that might identify new drugs.
As of 2011, the state of the art in deep learning feedforward networks alternated between convolutional layers and max-pooling layers, topped by several fully or sparsely connected layers followed by a final classification layer. Learning is usually done without unsupervised pre-training. In the convolutional layer, there are filters that are convolved with the input. Each filter is equivalent to a weights vector that has to be trained.
Such supervised deep learning methods were the first to achieve human-competitive performance on certain tasks.
Artificial neural networks were able to guarantee shift invariance to deal with small and large natural objects in large cluttered scenes, only when invariance extended beyond shift, to all ANN-learned concepts, such as location, type (object class label), scale, lighting and others. This was realized in Developmental Networks (DNs) whose embodiments are Where-What Networks, WWN-1 (2008) through WWN-7 (2013).
An artificial neural network is a network of simple elements called artificial neurons, which receive input, change their internal state (activation) according to that input, and produce output depending on the input and activation.
The network forms by connecting the output of certain neurons to the input of other neurons forming a directed, weighted graph. The weights as well as the functions that compute the activation can be modified by a process called learning which is governed by a learning rule.
A neuron with label j receiving an input p_j(t) from predecessor neurons consists of the following components: an activation a_j(t), which may depend on a time parameter; possibly a threshold θ_j, which stays fixed unless changed by learning; an activation function f that computes the new activation from the current activation, the threshold and the net input p_j(t); and an output function f_out that computes the output o_j(t) from the activation.
Often the output function is simply the identity function.
An input neuron has no predecessor but serves as input interface for the whole network. Similarly an output neuron has no successor and thus serves as output interface of the whole network.
The network consists of connections, each connection transferring the output of a neuron i to the input of a neuron j. In this sense, i is the predecessor of j and j is the successor of i. Each connection is assigned a weight w_ij. Sometimes a bias term is added to the total weighted sum of inputs to serve as a threshold that shifts the activation function.
The propagation function computes the input p_j(t) to the neuron j from the outputs o_i(t) of predecessor neurons and typically has the form p_j(t) = Σ_i o_i(t) w_ij.
When a bias value b is added, the form changes to p_j(t) = Σ_i o_i(t) w_ij + b.
The learning rule is a rule or an algorithm which modifies the parameters of the neural network, in order for a given input to the network to produce a favored output. This learning process typically amounts to modifying the weights and thresholds of the variables within the network.
Neural network models can be viewed as simple mathematical models defining a function f : X → Y, or a distribution over X, or over both X and Y. Sometimes models are intimately associated with a particular learning rule. A common use of the phrase "ANN model" is really the definition of a class of such functions (where members of the class are obtained by varying parameters, connection weights, or specifics of the architecture such as the number of neurons or their connectivity).
Mathematically, a neuron's network function f(x) is defined as a composition of other functions g_i(x), which can further be decomposed into other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between functions. A widely used type of composition is the nonlinear weighted sum f(x) = K(Σ_i w_i g_i(x)), where K (commonly referred to as the activation function) is some predefined function, such as the hyperbolic tangent, sigmoid, softmax or rectifier function. The important characteristic of the activation function is that it provides a smooth transition as input values change, i.e. a small change in input produces a small change in output. The following refers to a collection of functions g_i as a vector g = (g_1, g_2, ..., g_n).
Such a decomposition of f, with dependencies between variables indicated by arrows, can be interpreted in two ways.
The first view is the functional view: the input x is transformed into a 3-dimensional vector h, which is then transformed into a 2-dimensional vector g, which is finally transformed into f. This view is most commonly encountered in the context of optimization.
The second view is the probabilistic view: the random variable F = f(G) depends upon the random variable G = g(H), which depends upon H = h(X), which depends upon the random variable X. This view is most commonly encountered in the context of graphical models.
The two views are largely equivalent. In either case, for this particular architecture, the components of individual layers are independent of each other (e.g., the components of g are independent of each other given their input h). This naturally enables a degree of parallelism in the implementation.
Networks such as the previous one are commonly called feedforward, because their graph is a directed acyclic graph. Networks with cycles are commonly called recurrent. Such networks are often drawn with f depicted as dependent upon itself; however, the implied temporal dependence is not shown.
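The functional view described above can be sketched directly in code. Below is a minimal illustration of the composition x → h → g → f; the layer sizes, random weights and tanh activation are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

def layer(W, v):
    # One nonlinear weighted sum: K(sum_i w_i g_i) with K = tanh
    return np.tanh(W @ v)

W1 = rng.normal(size=(3, 4))   # the input x is 4-dimensional here
W2 = rng.normal(size=(2, 3))
W3 = rng.normal(size=(1, 2))

x = rng.normal(size=4)
h = layer(W1, x)   # 3-dimensional intermediate vector
g = layer(W2, h)   # 2-dimensional intermediate vector
f = layer(W3, g)   # final output
print(f)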
The possibility of learning has attracted the most interest in neural networks. Given a specific task to solve and a class of functions F, learning means using a set of observations to find f* ∈ F which solves the task in some optimal sense.
The cost function is an important concept in learning, as it is a measure of how far away a particular solution is from an optimal solution to the problem to be solved. Learning algorithms search through the solution space to find a function that has the smallest possible cost.
For applications where the solution is data dependent, the cost must necessarily be a function of the observations, otherwise the model would not relate to the data. It is frequently defined as a statistic to which only approximations can be made. As a simple example, consider the problem of finding the model f which minimizes the cost C = E[(f(x) − y)²] for data pairs (x, y) drawn from some distribution D. In practical situations we would only have N samples from D and thus, for the above example, we would only minimize Ĉ = (1/N) Σ_i (f(x_i) − y_i)². Thus, the cost is minimized over a sample of the data rather than the entire distribution.
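As a sketch of minimizing the cost over a sample rather than the entire distribution, the following fits a one-parameter model f(x) = a·x to N sampled pairs; the data-generating process is an assumption made up for the illustration.

import numpy as np

rng = np.random.default_rng(1)

# N sample pairs (x_i, y_i) standing in for draws from the distribution D
N = 200
x = rng.normal(size=N)
y = 3.0 * x + rng.normal(scale=0.5, size=N)   # assumed data-generating process

def empirical_cost(a):
    # C-hat = (1/N) * sum_i (f(x_i) - y_i)^2 for the model f(x) = a*x
    return np.mean((a * x - y) ** 2)

# The empirical cost for this one-parameter model has a closed-form minimizer
a_hat = np.sum(x * y) / np.sum(x * x)
print(a_hat, empirical_cost(a_hat))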
When N → ∞, some form of online machine learning must be used, where the cost is partially minimized as each new example is seen. While online machine learning is often used when D is fixed, it is most useful in the case where the distribution changes slowly over time. In neural network methods, some form of online machine learning is frequently used for finite datasets.
While it is possible to define an ad hoc cost function, frequently a particular cost function is used, either because it has desirable properties (such as convexity) or because it arises naturally from a particular formulation of the problem (e.g., in a probabilistic formulation the posterior probability of the model can be used as an inverse cost). Ultimately, the cost function depends on the task.
A DNN can be discriminatively trained with the standard backpropagation algorithm. Backpropagation is a method to calculate the gradient of the loss function (produces the cost associated with a given state) with respect to the weights in an ANN.
The basics of continuous backpropagation were derived in the context of control theory by Kelley in 1960 and by Bryson in 1961, using principles of dynamic programming. In 1962, Dreyfus published a simpler derivation based only on the chain rule. Bryson and Ho described it as a multi-stage dynamic system optimization method in 1969. In 1970, Linnainmaa finally published the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions. This corresponds to the modern version of backpropagation which is efficient even when the networks are sparse. In 1973, Dreyfus used backpropagation to adapt parameters of controllers in proportion to error gradients. In 1974, Werbos mentioned the possibility of applying this principle to artificial neural networks, and in 1982, he applied Linnainmaa's AD method to neural networks in the way that is widely used today. In 1986, Rumelhart, Hinton and Williams noted that this method can generate useful internal representations of incoming data in hidden layers of neural networks. In 1993, Wan was the first to win an international pattern recognition contest through backpropagation.
The weight updates of backpropagation can be done via stochastic gradient descent using the following equation:
w_ij(t + 1) = w_ij(t) − η ∂C/∂w_ij + ξ(t),
where η is the learning rate, C is the cost (loss) function and ξ(t) a stochastic term. The choice of the cost function depends on factors such as the learning type (supervised, unsupervised, reinforcement, etc.) and the activation function. For example, when performing supervised learning on a multiclass classification problem, common choices for the activation function and cost function are the softmax function and cross-entropy function, respectively. The softmax function is defined as p_j = exp(x_j) / Σ_k exp(x_k), where p_j represents the class probability (output of the unit j) and x_j and x_k represent the total input to units j and k of the same level respectively. Cross-entropy is defined as C = −Σ_j d_j log(p_j), where d_j represents the target probability for output unit j and p_j is the probability output for j after applying the activation function.
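The pieces above can be sketched together in Python: a softmax output, the cross-entropy cost and one stochastic gradient descent weight update. The network shape, data and learning rate are invented for the illustration.

import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max()               # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, d):
    # C = -sum_j d_j log(p_j) against the target distribution d
    return -np.sum(d * np.log(p + 1e-12))

# One-layer classifier, 4 inputs -> 3 classes (shapes are assumptions)
W = rng.normal(scale=0.1, size=(3, 4))
x = rng.normal(size=4)
d = np.array([0.0, 1.0, 0.0])     # one-hot target

p = softmax(W @ x)
print("cost before:", cross_entropy(p, d))

# For softmax with cross-entropy, dC/dz = p - d, hence dC/dW = (p - d) x^T
grad_W = np.outer(p - d, x)
eta = 0.5                         # learning rate (illustrative)
W = W - eta * grad_W              # w(t+1) = w(t) - eta * dC/dw (no noise term)
print("cost after: ", cross_entropy(softmax(W @ x), d))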
These can be used to output object bounding boxes in the form of a binary mask. They are also used for multi-scale regression to increase localization precision. DNN-based regression can learn features that capture geometric information in addition to serving as a good classifier. They remove the requirement to explicitly model parts and their relations. This helps to broaden the variety of objects that can be learned. The model consists of multiple layers, each of which has a rectified linear unit as its activation function for non-linear transformation. Some layers are convolutional, while others are fully connected. Every convolutional layer has an additional max pooling. The network is trained to minimize L2 error for predicting the mask ranging over the entire training set containing bounding boxes represented as masks.
Supervised learning uses a set of example pairs (x, y), x ∈ X, y ∈ Y, and the aim is to find a function f : X → Y in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data, and it implicitly contains prior knowledge about the problem domain.
A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output, f(x), and the target value y over all the example pairs. Minimizing this cost using gradient descent for the class of neural networks called multilayer perceptrons (MLP) produces the backpropagation algorithm for training neural networks.
Tasks that fall within the paradigm of supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). The supervised learning paradigm is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
In unsupervised learning, some data x is given together with a cost function to be minimized, which can be any function of the data x and the network's output, f(x).
The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables).
As a trivial example, consider the model f(x) = a, where a is a constant, and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples those quantities would be maximized rather than minimized).
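A quick numerical check of that trivial example (with random data invented for the demonstration): sweeping the constant a over a grid confirms that the cost is minimized when a equals the sample mean.

import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=5.0, scale=2.0, size=1000)

# Cost C(a) = mean((x - a)^2) for the constant model f(x) = a
grid = np.linspace(0.0, 10.0, 10001)
costs = [np.mean((data - a) ** 2) for a in grid]
best_a = grid[int(np.argmin(costs))]

print(best_a, data.mean())  # the two agree (up to the grid resolution)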
Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
In reinforcement learning, data are usually not given, but generated by an agent's interactions with the environment. At each point in time t, the agent performs an action y_t and the environment generates an observation x_t and an instantaneous cost c_t, according to some (usually unknown) dynamics. The aim is to discover a policy for selecting actions that minimizes some measure of a long-term cost, e.g., the expected cumulative cost. The environment's dynamics and the long-term cost for each policy are usually unknown, but can be estimated.
More formally, the environment is modeled as a Markov decision process (MDP) with states s_1, ..., s_n ∈ S and actions a_1, ..., a_m ∈ A, with the following probability distributions: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition P(s_{t+1} | s_t, a_t); a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the policy (i.e., the MC) that minimizes the cost.
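To ground the MDP formulation, here is a tiny value-iteration sketch for a made-up two-state, two-action MDP; every transition probability, cost and the discount factor are assumptions chosen only to show the structure of minimizing the long-term cost.

import numpy as np

# P[s, a, s'] = transition probability; c[s, a] = expected instantaneous cost
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 2.0],
              [0.5, 0.0]])
gamma = 0.9                       # discount factor (illustrative)

V = np.zeros(2)                   # long-term cost-to-go per state
for _ in range(200):
    # Q[s, a] = immediate cost + discounted expected future cost
    Q = c + gamma * P @ V
    V = Q.min(axis=1)             # act so as to minimize long-term cost

policy = Q.argmin(axis=1)
print("cost-to-go:", V, "policy:", policy)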
Artificial neural networks are frequently used in reinforcement learning as part of the overall algorithm. Dynamic programming was coupled with artificial neural networks (giving neurodynamic programming) by Bertsekas and Tsitsiklis and applied to multi-dimensional nonlinear problems such as those involved in vehicle routing, natural resources management or medicine, because of the ability of artificial neural networks to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of the original control problems.
Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost. Numerous algorithms are available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation.
Most employ some form of gradient descent, using backpropagation to compute the actual gradients. This is done by simply taking the derivative of the cost function with respect to the network parameters and then changing those parameters in a gradient-related direction. Backpropagation training algorithms fall into three categories: steepest descent (with variable learning rate and momentum, resilient backpropagation); quasi-Newton (Broyden-Fletcher-Goldfarb-Shanno, one-step secant); and Levenberg-Marquardt and conjugate gradient (Fletcher-Reeves update, Polak-Ribiere update, Powell-Beale restart, scaled conjugate gradient).
Evolutionary methods, gene expression programming, simulated annealing, expectation-maximization, non-parametric methods and particle swarm optimization are other methods for training neural networks.
This is a learning method specially designed for cerebellar model articulation controller (CMAC) neural networks. In 2004, a recursive least squares algorithm was introduced to train CMAC neural networks online. This algorithm can converge in one step and update all weights in one step with any new input data. Initially, this algorithm had computational complexity of O(N^3). Based on QR decomposition, this recursive learning algorithm was simplified to O(N).
The optimization algorithm repeats a two phase cycle, propagation and weight update. When an input vector is presented to the network, it is propagated forward through the network, layer by layer, until it reaches the output layer. The output of the network is then compared to the desired output, using a loss function. The resulting error value is calculated for each of the neurons in the output layer. The error values are then propagated from the output back through the network, until each neuron has an associated error value that reflects its contribution to the original output.
Backpropagation uses these error values to calculate the gradient of the loss function. In the second phase, this gradient is fed to the optimization method, which in turn uses it to update the weights, in an attempt to minimize the loss function.
Let N be a neural network with e connections, m inputs, and n outputs.
Below, x_1, x_2, ... will denote vectors in R^m, y_1, y_2, ... vectors in R^n, and w_0, w_1, w_2, ... vectors in R^e. These are called inputs, outputs and weights respectively.
The neural network corresponds to a function y = f_N(w, x) which, given a weight w, maps an input x to an output y.
The optimization takes as input a sequence of training examples (x_1, y_1), ..., (x_p, y_p) and produces a sequence of weights w_0, w_1, ..., w_p starting from some initial weight w_0, usually chosen at random.
These weights are computed in turn: first compute w_i using only (x_i, y_i, w_{i−1}) for i = 1, ..., p. The output of the algorithm is then w_p, giving us a new function x ↦ f_N(w_p, x). The computation is the same in each step, hence only the case i = 1 is described.
Calculating w_1 from (x_1, y_1, w_0) is done by considering a variable weight w and applying gradient descent to the function w ↦ E(f_N(w, x_1), y_1) to find a local minimum, starting at w = w_0.
This makes w_1 the minimizing weight found by gradient descent.
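A compact sketch of that per-example step, under stated assumptions: a single linear unit stands in for f_N and the squared error for E, so the gradient can be written analytically and descended from the initial weight w0.

import numpy as np

rng = np.random.default_rng(4)

def f_N(w, x):
    # Stand-in "network": a single linear unit (an assumption for the sketch)
    return w @ x

def E(y_pred, y):
    # Squared error between prediction and target
    return (y_pred - y) ** 2

x1, y1 = np.array([1.0, 2.0]), 3.0
w = rng.normal(size=2)                    # w0: initial weight, chosen at random

for _ in range(100):                      # gradient descent starting at w0
    grad = 2.0 * (f_N(w, x1) - y1) * x1   # d/dw of E(f_N(w, x1), y1)
    w = w - 0.05 * grad                   # step toward a local minimum

print(w, E(f_N(w, x1), y1))               # w1 and its (near-zero) error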
To implement the algorithm above, explicit formulas are required for the gradient of the function w ↦ E(f_N(w, x), y); a common choice for the error function is the squared error E(y, y') = |y − y'|².
The learning algorithm can be divided into two phases: propagation and weight update.
Each propagation involves the following steps: forward propagation of a training pattern's input through the network in order to generate the output activations, followed by backward propagation of the output activations through the network, using the training pattern's target, in order to generate the deltas (the differences between the targeted and actual output values) of all output and hidden neurons.
For each weight, the following steps must be followed: the weight's output delta and input activation are multiplied to find the gradient of the weight, and a ratio (percentage) of the weight's gradient is subtracted from the weight.
This ratio (percentage) influences the speed and quality of learning; it is called the learning rate. The greater the ratio, the faster the neuron trains, but the lower the ratio, the more accurate the training is. The sign of the gradient of a weight indicates whether the error varies directly with, or inversely to, the weight. Therefore, the weight must be updated in the opposite direction, "descending" the gradient.
Learning is repeated (on new batches) until the network performs adequately.
initialize network weights (often small random values)
do
   forEach training example named ex
      prediction = neural-net-output(network, ex)  // forward pass
      actual = teacher-output(ex)
      compute error (prediction - actual) at the output units
      compute Δw_h for all weights from hidden layer to output layer  // backward pass
      compute Δw_i for all weights from input layer to hidden layer  // backward pass continued
      update network weights  // input layer not modified by error estimate
until all examples classified correctly or another stopping criterion satisfied
return the network
The lines labeled "backward pass" can be implemented using the backpropagation algorithm, which calculates the gradient of the error of the network with respect to the network's modifiable weights.
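For concreteness, a minimal runnable version of that loop is sketched below, training a one-hidden-layer network on XOR; the architecture, sigmoid units, learning rate and stopping rule are assumptions for the illustration, not prescriptions from the text.

import numpy as np

rng = np.random.default_rng(5)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(scale=0.5, size=(2, 4))           # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))           # hidden -> output weights
eta = 0.5

for epoch in range(20000):
    H = sig(X @ W1)                               # forward pass
    Y = sig(H @ W2)
    err = Y - T                                   # error at the output units
    # backward pass: gradients for hidden->output, then input->hidden weights
    dW2 = H.T @ (err * Y * (1 - Y))
    dW1 = X.T @ (((err * Y * (1 - Y)) @ W2.T) * H * (1 - H))
    W2 -= eta * dW2                               # update network weights
    W1 -= eta * dW1
    if epoch > 100 and np.all((Y > 0.5) == (T > 0.5)):
        break                                     # stopping criterion satisfied

print(np.round(sig(sig(X @ W1) @ W2), 2))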
The choice of learning rate is important, since a high value can cause too strong a change, causing the minimum to be missed, while a too low learning rate slows the training unnecessarily.
Optimizations such as Quickprop are primarily aimed at speeding up error minimization; other improvements mainly try to increase reliability.
In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements of this algorithm use an adaptive learning rate.
By using a variable inertia term (momentum), the gradient and the last change can be weighted such that the weight adjustment additionally depends on the previous change. If the momentum is equal to 0, the change depends solely on the gradient, while a value of 1 makes it depend only on the last change.
Similar to a ball rolling down a mountain, whose current speed is determined not only by the current slope of the mountain but also by its own inertia, inertia can be added:
Inertia makes the current weight change depend both on the current gradient of the error function (slope of the mountain, 1st summand), as well as on the weight change from the previous point in time (inertia, 2nd summand).
With inertia, the problems of getting stuck (in steep ravines and flat plateaus) are avoided. Since, for example, the gradient of the error function becomes very small in flat plateaus, a plateau would immediately lead to a "deceleration" of the gradient descent. This deceleration is delayed by the addition of the inertia term so that a flat plateau can be escaped more quickly.
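A short sketch of an inertia (momentum) update; the learning rate, momentum coefficient and the toy objective are illustrative choices.

def momentum_step(w, grad, prev_change, eta=0.1, mu=0.9):
    # New change = learning rate * current gradient (slope, 1st summand)
    # plus momentum * previous change (inertia, 2nd summand).
    change = -eta * grad + mu * prev_change
    return w + change, change

# Example: descending the gradient of f(w) = w^2 (gradient = 2w)
w, prev = 5.0, 0.0
for _ in range(100):
    w, prev = momentum_step(w, grad=2.0 * w, prev_change=prev)
print(w)  # approaches the minimum at 0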
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning, weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the gradient descent process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the average error of the batch. A common compromise is to use "mini-batches": small batches, with the samples in each batch selected stochastically from the entire data set.
The Group Method of Data Handling (GMDH) features fully automatic structural and parametric model optimization. The node activation functions are Kolmogorov-Gabor polynomials that permit additions and multiplications. It used a deep feedforward multilayer perceptron with eight layers. It is a supervised learning network that grows layer by layer, where each layer is trained by regression analysis. Useless items are detected using a validation set, and pruned through regularization. The size and depth of the resulting network depends on the task.
A convolutional neural network (CNN) is a class of deep, feed-forward networks, composed of one or more convolutional layers with fully connected layers (matching those in typical Artificial neural networks) on top. It uses tied weights and pooling layers. In particular, max-pooling is often structured via Fukushima's convolutional architecture. This architecture allows CNNs to take advantage of the 2D structure of input data.
CNNs are suitable for processing visual and other two-dimensional data. They have shown superior results in both image and speech applications. They can be trained with standard backpropagation. CNNs are easier to train than other regular, deep, feed-forward neural networks and have many fewer parameters to estimate. Examples of applications in computer vision include DeepDream and robot navigation.
A recent development has been that of Capsule Neural Network (CapsNet), the idea behind which is to add structures called capsules to a CNN and to reuse output from several of those capsules to form more stable (with respect to various perturbations) representations for higher order capsules.
Long short-term memory (LSTM) networks are RNNs that avoid the vanishing gradient problem. LSTM is normally augmented by recurrent gates called forget gates. LSTM networks prevent backpropagated errors from vanishing or exploding. Instead errors can flow backwards through unlimited numbers of virtual layers in space-unfolded LSTM. That is, LSTM can learn "very deep learning" tasks that require memories of events that happened thousands or even millions of discrete time steps ago. Problem-specific LSTM-like topologies can be evolved. LSTM can handle long delays and signals that have a mix of low and high frequency components.
Stacks of LSTM RNNs trained by Connectionist Temporal Classification (CTC) can find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
In 2003, LSTM started to become competitive with traditional speech recognizers. In 2007, the combination with CTC achieved the first good results on speech data. In 2009, a CTC-trained LSTM was the first RNN to win pattern recognition contests, when it won several competitions in connected handwriting recognition. In 2014, Baidu used CTC-trained RNNs to break the Switchboard Hub5'00 speech recognition benchmark, without traditional speech processing methods. LSTM also improved large-vocabulary speech recognition, text-to-speech synthesis (e.g., for Google Android) and photo-real talking heads. In 2015, Google's speech recognition experienced a 49% improvement through CTC-trained LSTM.
LSTM became popular in natural language processing. Unlike previous models based on HMMs and similar concepts, LSTM can learn to recognise context-sensitive languages. LSTM improved machine translation, language modeling and multilingual language processing. LSTM combined with CNNs improved automatic image captioning.
Deep Reservoir Computing and Deep Echo State Networks (deepESNs) provide a framework for efficiently trained models for hierarchical processing of temporal data, while enabling the investigation of the inherent role of RNN layered composition.
A deep belief network (DBN) is a probabilistic, generative model made up of multiple layers of hidden units. It can be considered a composition of simple learning modules that make up each layer.
A DBN can be used to generatively pre-train a DNN by using the learned DBN weights as the initial DNN weights. Backpropagation or other discriminative algorithms can then tune these weights. This is particularly helpful when training data are limited, because poorly initialized weights can significantly hinder model performance. These pre-trained weights are in a region of the weight space that is closer to the optimal weights than if they had been chosen randomly. This allows for both improved modeling and faster convergence of the fine-tuning phase.
Large memory storage and retrieval neural networks (LAMSTAR) are fast deep learning neural networks of many layers that can use many filters simultaneously. These filters may be nonlinear, stochastic, logic, non-stationary, or even non-analytical. They are biologically motivated and learn continuously.
A LAMSTAR neural network may serve as a dynamic neural network in spatial or time domains or both. Its speed is provided by Hebbian link-weights that integrate the various and usually different filters (preprocessing functions) into its many layers and to dynamically rank the significance of the various layers and functions relative to a given learning task. This grossly imitates biological learning which integrates various preprocessors (cochlea, retina, etc.) and cortexes (auditory, visual, etc.) and their various regions. Its deep learning capability is further enhanced by using inhibition, correlation and its ability to cope with incomplete data, or "lost" neurons or layers even amidst a task. It is fully transparent due to its link weights. The link-weights allow dynamic determination of innovation and redundancy, and facilitate the ranking of layers, of filters or of individual neurons relative to a task.
LAMSTAR has been applied to many domains, including medical and financial predictions, adaptive filtering of noisy speech in unknown noise, still-image recognition, video image recognition, software security and adaptive control of non-linear systems. LAMSTAR had a much faster learning speed and somewhat lower error rate than a CNN based on ReLU-function filters and max pooling, in 20 comparative studies.
These applications demonstrate delving into aspects of the data that are hidden from shallow learning networks and the human senses, such as in the cases of predicting onset of sleep apnea events, of an electrocardiogram of a fetus as recorded from skin-surface electrodes placed on the mother's abdomen early in pregnancy, of financial prediction or in blind filtering of noisy speech.
LAMSTAR was proposed in 1996 (U.S. Patent 5,920,852) and was further developed by Graupe and Kordylewski from 1997 to 2002. A modified version, known as LAMSTAR 2, was developed by Schneider and Graupe in 2008.
An encoder is a deterministic mapping f_θ that transforms an input vector x into a hidden representation y = f_θ(x) = s(Wx + b), where W is the weight matrix and b is an offset vector (bias). A decoder maps the hidden representation y back to the reconstructed input z via g_θ, typically z = g_θ(y) = s(W'y + b'). The whole process of autoencoding is to compare this reconstructed input to the original and try to minimize the error, to make the reconstructed value as close as possible to the original.
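A minimal sketch of the encoder/decoder pair and its reconstruction error; the sigmoid choice of s and the tied weights W' = W^T are common but assumed simplifications here.

import numpy as np

rng = np.random.default_rng(6)

def s(z):
    # Element-wise logistic sigmoid
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden = 8, 3
W = rng.normal(scale=0.1, size=(n_hidden, n_in))
b = np.zeros(n_hidden)        # encoder offset (bias)
b_prime = np.zeros(n_in)      # decoder offset (bias)

def encode(x):
    return s(W @ x + b)             # y = s(Wx + b)

def decode(y):
    return s(W.T @ y + b_prime)     # z = s(W'y + b'), tied weights W' = W^T

x = rng.random(n_in)
z = decode(encode(x))
print("reconstruction error:", np.mean((x - z) ** 2))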
In stacked denoising autoencoders, the partially corrupted output is cleaned (de-noised). This idea was introduced in 2010 by Vincent et al. with a specific approach to good representation: a good representation is one that can be obtained robustly from a corrupted input and that will be useful for recovering the corresponding clean input. Implicit in this definition are the following ideas:
The algorithm starts by a stochastic mapping of x to x̃ through q_D(x̃ | x); this is the corrupting step. Then the corrupted input x̃ passes through a basic autoencoder process and is mapped to a hidden representation y = f_θ(x̃). From this hidden representation, we can reconstruct z = g_θ(y). In the last stage, a minimization algorithm runs in order to have z as close as possible to the uncorrupted input x. The reconstruction error L_H(x, z) might be either the cross-entropy loss with an affine-sigmoid decoder, or the squared error loss with an affine decoder.
In order to make a deep architecture, autoencoders are stacked. Once the encoding function f_θ of the first denoising autoencoder is learned and used to uncorrupt the input (corrupted input), the second level can be trained.
A deep stacking network (DSN) (deep convex network) is based on a hierarchy of blocks of simplified neural network modules. It was introduced in 2011 by Deng and Dong. It formulates the learning as a convex optimization problem with a closed-form solution, emphasizing the mechanism's similarity to stacked generalization. Each DSN block is a simple module that is easy to train by itself in a supervised fashion without backpropagation for the entire blocks.
Each block consists of a simplified multi-layer perceptron (MLP) with a single hidden layer. The hidden layer h has logistic sigmoidal units, and the output layer has linear units. Connections between these layers are represented by weight matrix U; input-to-hidden-layer connections have weight matrix W. Target vectors t form the columns of matrix T, and the input data vectors x form the columns of matrix X. The matrix of hidden units is H = σ(W^T X), where σ performs the element-wise logistic sigmoid operation. Modules are trained in order, so lower-layer weights W are known at each stage. Each block estimates the same final label class y, and its estimate is concatenated with the original input X to form the expanded input for the next block. Thus, the input to the first block contains the original data only, while downstream blocks' input adds the output of preceding blocks. Learning the upper-layer weight matrix U given the other weights in the network can then be formulated as the convex optimization problem min over U^T of f = ||U^T H − T||_F²,
which has a closed-form solution U = (HH^T)⁻¹HT^T.
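Because that sub-problem is ordinary least squares, the upper-layer weights can be computed in closed form. The sketch below uses invented shapes and random data, and adds a small ridge term for numerical stability (an assumption, not part of the formulation above).

import numpy as np

rng = np.random.default_rng(7)

def sigma(z):
    # Element-wise logistic sigmoid
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out, n_samples = 5, 10, 3, 100
X = rng.normal(size=(n_in, n_samples))    # input vectors as columns of X
T = rng.normal(size=(n_out, n_samples))   # target vectors as columns of T
W = rng.normal(size=(n_in, n_hidden))     # lower-layer weights, already known

H = sigma(W.T @ X)                        # hidden-unit matrix H = sigma(W^T X)

# Closed-form solution of min_U ||U^T H - T||_F^2, i.e. U = (H H^T)^{-1} H T^T,
# with a tiny ridge term added for numerical stability
U = np.linalg.solve(H @ H.T + 1e-6 * np.eye(n_hidden), H @ T.T)
print("training error:", np.mean((U.T @ H - T) ** 2))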
Unlike other deep architectures, such as DBNs, the goal is not to discover the transformed feature representation. The structure of the hierarchy of this kind of architecture makes parallel learning straightforward, as a batch-mode optimization problem. In purely discriminative tasks, DSNs perform better than conventional DBNs.
This architecture is a DSN extension. It offers two important improvements: it uses higher-order information from covariance statistics, and it transforms the non-convex problem of a lower-layer to a convex sub-problem of an upper-layer. TDSNs use covariance statistics in a bilinear mapping from each of two distinct sets of hidden units in the same layer to predictions, via a third-order tensor.
While parallelization and scalability are not considered seriously in conventional DNNs, all learning for DSNs and TDSNs is done in batch mode to allow parallelization. Parallelization allows scaling the design to larger (deeper) architectures and data sets.
The need for deep learning with real-valued inputs, as in Gaussian restricted Boltzmann machines, led to the spike-and-slab RBM (ssRBM), which models continuous-valued inputs with strictly binary latent variables. Like basic RBMs and their variants, a spike-and-slab RBM is a bipartite graph, while, like GRBMs, the visible units (input) are real-valued. The difference is in the hidden layer, where each hidden unit has a binary spike variable and a real-valued slab variable. A spike is a discrete probability mass at zero, while a slab is a density over a continuous domain; their mixture forms a prior.
An extension of ssRBM called µ-ssRBM provides extra modeling capacity using additional terms in the energy function. One of these terms enables the model to form a conditional distribution of the spike variables by marginalizing out the slab variables given an observation.
Compound hierarchical-deep models compose deep networks with non-parametric Bayesian models. Features can be learned using deep architectures such as DBNs, DBMs, deep auto encoders, convolutional variants, ssRBMs, deep coding networks, DBNs with sparse feature learning, RNNs, conditional DBNs, de-noising auto encoders. This provides a better representation, allowing faster learning and more accurate classification with high-dimensional data. However, these architectures are poor at learning novel classes with few examples, because all network units are involved in representing the input (a distributed representation) and must be adjusted together (high degree of freedom). Limiting the degree of freedom reduces the number of parameters to learn, facilitating learning of new classes from few examples. Hierarchical Bayesian (HB) models allow learning from few examples, for example for computer vision, statistics and cognitive science.
Compound HD architectures aim to integrate characteristics of both HB and deep networks. The compound HDP-DBM architecture is a hierarchical Dirichlet process (HDP) as a hierarchical model, incorporated with DBM architecture. It is a full generative model, generalized from abstract concepts flowing through the layers of the model, which is able to synthesize new examples in novel classes that look "reasonably" natural. All the levels are learned jointly by maximizing a joint log-probability score.
In a DBM with three hidden layers, the probability of a visible input ν is:
p(ν, ψ) = (1/Z) Σ_h exp( Σ_ij W_ij^(1) ν_i h_j^1 + Σ_jl W_jl^(2) h_j^1 h_l^2 + Σ_lm W_lm^(3) h_l^2 h_m^3 ),
where h = {h^1, h^2, h^3} is the set of hidden units, and ψ = {W^(1), W^(2), W^(3)} are the model parameters, representing visible-hidden and hidden-hidden symmetric interaction terms.
A learned DBM model is an undirected model that defines the joint distribution P(ν, h^1, h^2, h^3). One way to express what has been learned is the conditional model P(ν, h^1, h^2 | h^3) and a prior term P(h^3).
Here P(ν, h^1, h^2 | h^3) represents a conditional DBM model, which can be viewed as a two-layer DBM but with bias terms given by the states of h^3.
A deep predictive coding network (DPCN) is a predictive coding scheme that uses top-down information to empirically adjust the priors needed for a bottom-up inference procedure by means of a deep, locally connected, generative model. This works by extracting sparse features from time-varying observations using a linear dynamical model. Then, a pooling strategy is used to learn invariant feature representations. These units compose to form a deep architecture and are trained by greedy layer-wise unsupervised learning. The layers constitute a kind of Markov chain such that the states at any layer depend only on the preceding and succeeding layers.
DPCNs predict the representation of the layer by using a top-down approach, combining the information in the upper layer with temporal dependencies from previous states.
Integrating external memory with artificial neural networks dates to early research in distributed representations and Kohonen's self-organizing maps. For example, in sparse distributed memory or hierarchical temporal memory, the patterns encoded by neural networks are used as addresses for content-addressable memory, with "neurons" essentially serving as address encoders and decoders. However, the early controllers of such memories were not differentiable.
Apart from long short-term memory (LSTM), other approaches also added differentiable memory to recurrent functions. For example:
Neural Turing machines couple LSTM networks to external memory resources, with which they can interact by attentional processes. The combined system is analogous to a Turing machine but is differentiable end-to-end, allowing it to be efficiently trained by gradient descent. Preliminary results demonstrate that neural Turing machines can infer simple algorithms such as copying, sorting and associative recall from input and output examples.
Differentiable neural computers (DNC) are an NTM extension. They outperformed neural Turing machines, long short-term memory systems and memory networks on sequence-processing tasks.
Approaches that represent previous experiences directly and use a similar experience to form a local model are often called nearest neighbour or k-nearest neighbors methods. Deep learning is useful in semantic hashing, where a deep graphical model models the word-count vectors obtained from a large set of documents. Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by accessing all the addresses that differ by only a few bits from the address of the query document. Unlike sparse distributed memory that operates on 1000-bit addresses, semantic hashing works on 32- or 64-bit addresses found in a conventional computer architecture.
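A toy sketch of the address-based lookup described above; the random-projection sign hash is purely an illustrative stand-in for the deep graphical model, and all sizes and data are invented.

import numpy as np

rng = np.random.default_rng(8)

def hash_code(vec, projection):
    # Map a word-count vector to a compact binary address
    return tuple((projection @ vec > 0).astype(int))

n_terms, n_bits = 50, 16
projection = rng.normal(size=(n_bits, n_terms))
docs = rng.poisson(1.0, size=(200, n_terms))   # fake word-count vectors

table = {}
for i, d in enumerate(docs):
    table.setdefault(hash_code(d, projection), []).append(i)

# Query: gather documents at the query address and at all addresses that
# differ from it by a single bit
addr = np.array(hash_code(docs[0], projection))
hits = list(table.get(tuple(addr), []))
for bit in range(n_bits):
    flipped = addr.copy()
    flipped[bit] ^= 1
    hits += table.get(tuple(flipped), [])
print("candidate neighbours:", sorted(set(hits))[:10])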
Memory networks are another extension to neural networks incorporating long-term memory. The long-term memory can be read and written to, with the goal of using it for prediction. These models have been applied in the context of question answering (QA), where the long-term memory effectively acts as a (dynamic) knowledge base and the output is a textual response. A team of electrical and computer engineers from the UCLA Samueli School of Engineering has created a physical artificial neural network that can analyze large volumes of data and identify objects at the actual speed of light.
Deep neural networks can be potentially improved by deepening and parameter reduction, while maintaining trainability. While training extremely deep (e.g., 1 million layers) neural networks might not be practical, CPU-like architectures such as pointer networks and neural random-access machines overcome this limitation by using external random-access memory and other components that typically belong to a computer architecture such as registers, ALU and pointers. Such systems operate on probability distribution vectors stored in memory cells and registers. Thus, the model is fully differentiable and trains end-to-end. The key characteristic of these models is that their depth, the size of their short-term memory, and the number of parameters can be altered independently - unlike models like LSTM, whose number of parameters grows quadratically with memory size.
Encoder-decoder frameworks are based on neural networks that map highly structured input to highly structured output. The approach arose in the context of machine translation, where the input and output are written sentences in two natural languages. In that work, an LSTM RNN or CNN was used as an encoder to summarize a source sentence, and the summary was decoded using a conditional RNN language model to produce the translation. These systems share building blocks: gated RNNs and CNNs and trained attention mechanisms.
Multilayer kernel machines (MKM) are a way of learning highly nonlinear functions by iterative application of weakly nonlinear kernels. They use the kernel principal component analysis (KPCA), as a method for the unsupervised greedy layer-wise pre-training step of deep learning.
Layer l + 1 learns the representation of the previous layer l, extracting the principal component (PC) of the projection layer l output in the feature domain induced by the kernel. For the sake of dimensionality reduction of the updated representation in each layer, a supervised strategy selects the best informative features among the features extracted by KPCA. The process is: rank the features according to their mutual information with the class labels; for different values of K and of the number of retained features, compute the classification error rate of a K-nearest neighbor classifier on a validation set; the number of features with which the classifier reaches the lowest error rate determines the number of features to retain.
Some drawbacks accompany the KPCA method as the building cells of an MKM.
A more straightforward way to use kernel machines for deep learning was developed for spoken language understanding. The main idea is to use a kernel machine to approximate a shallow neural net with an infinite number of hidden units, then use stacking to splice the output of the kernel machine and the raw input in building the next, higher level of the kernel machine. The number of levels in the deep convex network is a hyper-parameter of the overall system, to be determined by cross validation.
Neural architecture search (NAS) uses machine learning to automate the design of artificial neural networks. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset and use the results as feedback to teach the NAS network.
Using artificial neural networks requires an understanding of their characteristics.
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found many applications in a wide range of disciplines.
Application areas include system identification and control (vehicle control, trajectory prediction, process control, natural resource management), quantum chemistry, game-playing and decision making (backgammon, chess, poker), pattern recognition (radar systems, face identification, signal classification, object recognition and more), sequence recognition (gesture, speech, handwritten and printed text recognition), medical diagnosis, finance (e.g. automated trading systems), data mining, visualization, machine translation, social network filtering and e-mail spam filtering.
Artificial neural networks have been used to diagnose cancers, including lung cancer, prostate cancer and colorectal cancer, and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
Many types of models are used, defined at different levels of abstraction and modeling different aspects of neural systems. They range from models of the short-term behavior of individual neurons, through models of how the dynamics of neural circuitry arise from interactions between individual neurons, to models of how behavior can arise from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational valued weights (as opposed to full precision real number-valued weights) has the full power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
Models' "capacity" property roughly corresponds to their ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Models may not consistently converge on a single solution, firstly because many local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical. However, for the CMAC neural network, a recursive least squares algorithm was introduced to train it, and this algorithm can be guaranteed to converge in one step.
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the capacity of the network significantly exceeds the needed free parameters. Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters that minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error in unseen data due to overfitting.
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of the output of the network, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based neural network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is very useful in classification as it gives a certainty measure on classifications.
The softmax activation function is: y_i = exp(x_i) / Σ_j exp(x_j).
A common criticism of neural networks, particularly in robotics, is that they require too much training for real-world operation. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too large steps when changing the network connections following an example, and grouping examples in so-called mini-batches. Improving the training efficiency and convergence capability has always been an ongoing research area for neural networks. For example, by introducing a recursive least squares algorithm for the CMAC neural network, the training process only takes one step to converge.
A fundamental objection is that they do not reflect how real neurons function. Backpropagation is a critical part of most artificial neural networks, although no such mechanism exists in biological neural networks. How information is coded by real neurons is not known. Sensory neurons fire action potentials more frequently with sensor activation, and muscle cells pull more strongly when their associated motor neurons receive action potentials more frequently. Other than the case of relaying information from a sensory neuron to a motor neuron, almost nothing of the principles of how information is handled by biological neural networks is known. This is a subject of active research in neural coding.
The motivation behind artificial neural networks is not necessarily to strictly replicate neural function, but to use biological neural networks as an inspiration. A central claim of artificial neural networks is therefore that they embody some new and powerful general principle for processing information. Unfortunately, these general principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. Alexander Dewdney commented that, as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything".
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may compel a neural network designer to fill many millions of database rows for its connections – which can consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which must often be matched with enormous CPU processing power and time.
Schmidhuber notes that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
Arguments against Dewdney's position are that neural networks have been successfully used to solve many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman commented:
Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource".
In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful, such as local versus non-local learning and shallow versus deep architectures.
Artificial neural networks have many variations. The simplest, static types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to change during the learning process. The latter are much more complicated, but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
This "see also" section may contain an excessive number of suggestions. Please ensure that only the most relevant links are given, that they are not red links, and that any links are not already in this article. (March 2018) (Learn how and when to remove this template message)
The most popular method for learning in multilayer networks is called backpropagation.
Now suppose you have chosen the best possible model for a particular problem and are working to further improve its accuracy. In this case, you will need to apply more advanced machine learning techniques which are collectively referred to as ensemble learning.
An ensemble is a collection of elements that collectively contribute to a whole. A familiar example is a musical ensemble, which blends the sounds of several musical instruments to create harmony, or an architectural ensemble, which is a collection of buildings designed as a unit. In ensembles, the harmonious result of the whole is more important than the execution of any individual part.
Condorcet's Jury Theorem (1784) describes an ensemble in some sense. It states that, if each member of the jury makes an independent judgment and the probability of each juror's correct decision is greater than 0.5, then the probability of a correct decision by the entire jury increases with the total number of jurors and tends to one. On the other hand, if the probability of being right is less than 0.5 for each juror, then the probability of a correct decision by the jury as a whole decreases with the number of jurors and tends towards zero.
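A quick Monte Carlo check of the theorem is easy to write. The sketch below, with purely illustrative parameter values, simulates independent jurors who are each correct with probability 0.6 and shows the majority vote's accuracy climbing toward one as the jury grows.

import numpy as np

# Simulate Condorcet's Jury Theorem: each juror votes correctly with
# probability p; the jury decides by simple majority.
rng = np.random.default_rng(42)

def majority_accuracy(p, n_jurors, n_trials=100_000):
    votes = rng.random((n_trials, n_jurors)) < p  # True = correct vote
    return (votes.sum(axis=1) > n_jurors / 2).mean()

for n in (1, 11, 101):
    print(n, majority_accuracy(0.6, n))  # accuracy rises with jury size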
Consider another example of an ensemble: the observation known as the Wisdom of the Crowd. In 1906, Francis Galton visited a rural fair in Plymouth where he saw a competition held for farmers: 800 participants tried to estimate the weight of a slaughtered bull. The actual weight of the bull was 1198 pounds. Although none of the farmers guessed the exact weight of the animal, the average of their estimates was 1197 pounds.
A similar idea for error reduction has been adopted in the field of machine learning.
Bootstrapping and bagging
Bagging (also known as Bootstrap aggregation) is one of the earliest and most basic ensemble techniques. It was proposed by Leo Breiman in 1994. Bagging is based on the bootstrap statistical method, which makes it possible to evaluate many statistics of complex models.
The bootstrap method proceeds as follows. Consider a sample X of size N. A new sample can be made from the original sample by drawing N elements from the latter in a random and uniform manner, with replacement. In other words, we select a random element from the original sample of size N and do it N times. All elements are equally likely to be selected, so each element is drawn with equal probability 1/N.
Let's say we draw balls from a bag one at a time. At each stage, the selected ball is put back in the bag so that the next selection is made in an equiprobable manner, that is to say, from the same number of balls N. Note that, since the balls are put back, there may be duplicates in the new sample. Let's call this new sample X1.
By repeating this procedure M times, we create M bootstrap samples X1, …, XM. In the end, we have a sufficient number of samples and can calculate various statistics from the original distribution.
For our example, we'll use the familiar telecom_churn dataset. Previously, when we discussed the importance of features, we saw that one of the most important features in this dataset is the number of customer service calls. Let's visualize the data and look at the distribution of this feature.
import pandas as pd
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = 10, 6
import seaborn as sns

telecom_data = pd.read_csv('../../data/telecom_churn.csv')

fig = sns.kdeplot(telecom_data[telecom_data['churn'] == False]['Customer service calls'],
                  label='Loyal')
fig = sns.kdeplot(telecom_data[telecom_data['churn'] == True]['Customer service calls'],
                  label='Churn')
fig.set(xlabel='Number of calls', ylabel='Density')

import numpy as np

def get_bootstrap_samples(data, n_samples):
    """Generate bootstrap samples using the bootstrap method."""
    indices = np.random.randint(0, len(data), (n_samples, len(data)))
    samples = data[indices]
    return samples

def stat_intervals(stat, alpha):
    """Produce an interval estimate."""
    boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
    return boundaries

# Save the data about the loyal and churn customers to split the dataset
loyal_calls = telecom_data[telecom_data['churn'] == False]['Customer service calls'].values
churn_calls = telecom_data[telecom_data['churn'] == True]['Customer service calls'].values

# Set the seed for reproducibility of the results
np.random.seed(0)

# Generate the samples using bootstrapping and calculate the mean for each of them
loyal_mean_scores = [np.mean(sample)
                     for sample in get_bootstrap_samples(loyal_calls, 1000)]
churn_mean_scores = [np.mean(sample)
                     for sample in get_bootstrap_samples(churn_calls, 1000)]

# Print the resulting interval estimates (95% intervals, i.e. alpha = 0.05)
print("Service calls from loyal: mean interval",
      stat_intervals(loyal_mean_scores, 0.05))
print("Service calls from churn: mean interval",
      stat_intervals(churn_mean_scores, 0.05))
Now that you've grasped the idea of the bootstrap, we can move on to bagging. In a regression problem, averaging the individual responses reduces the mean squared error by a factor of M, the number of regressors, provided the individual errors are uncorrelated.
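To make the 1/M claim concrete, here is a small numerical check (values are illustrative): averaging M predictors whose errors are independent and zero-mean divides the error variance by roughly M.

import numpy as np

# Numerical check: averaging M predictors with independent, zero-mean
# errors of variance sigma^2 gives an averaged error variance of
# approximately sigma^2 / M.
rng = np.random.default_rng(0)
sigma, M, n_points = 1.0, 10, 200_000

errors = rng.normal(scale=sigma, size=(M, n_points))  # one row per model
print(errors[0].var())            # ~1.0 : variance of a single model
print(errors.mean(axis=0).var())  # ~0.1 : variance after averaging M=10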
From our previous lesson, let's recall the components that make up the total out-of-sample error: Error = Bias² + Variance + σ² (irreducible noise).
Bagging reduces the variance of a classifier by decreasing the difference in error when we train the model on different datasets. In other words, bagging prevents overfitting. The effectiveness of bagging comes from the fact that the individual models are quite different due to their different training data, so their errors cancel each other out during voting. Additionally, outliers are likely to be omitted from some of the training bootstrap samples.
The scikit-learn library supports bagging with the BaggingRegressor and BaggingClassifier meta-estimators. You can use most algorithms as a base.
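As a minimal sketch of how these meta-estimators are used (the dataset below is a synthetic stand-in generated with make_classification, and all parameter values are illustrative):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in dataset; any base classifier could be substituted.
X, y = make_classification(n_samples=1000, n_features=20, random_state=17)

tree = DecisionTreeClassifier(random_state=17)
bagged = BaggingClassifier(tree, n_estimators=100, random_state=17)

print(cross_val_score(tree, X, y, cv=5).mean())    # single tree
print(cross_val_score(bagged, X, y, cv=5).mean())  # bagged ensemble, usually higher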
Let's take a look at how bagging works in practice and compare it with the decision tree. For this we will use an example from the sklearn documentation.
The error for the decision tree:
0.0255 = 0.0003 (bias²)+ 0.0152 (variance) + 0.0098 (σ²)
The error when using bagging:
0.0196 = 0.0004 (bias²) + 0.0092 (variance) + 0.0098 (σ²)
As you can see from the graph above, the error variance is much lower for bagging. Remember that we have already proven this theoretically.
Bagging is efficient on small datasets. Removing even a small part of the training data leads to the construction of significantly different base classifiers. If you have a large data set, you will generate bootstrap samples of a much smaller size.
The example above is unlikely to apply to real work. This is because we made the strong assumption that our individual errors are uncorrelated. More often than not, this is far too optimistic for real-world applications. When this assumption is false, the reduction in error will not be as great. In subsequent lectures, we will discuss some more sophisticated ensemble methods, which allow for more accurate predictions in real-world problems.
Looking ahead to random forests: with them, it is not necessary to use cross-validation or a held-out sample to obtain an unbiased error estimate. Why? Because, in ensemble techniques, error estimation takes place internally.
Random trees are constructed using different bootstrap samples from the original dataset. About 37 % of the inputs are excluded from a particular bootstrap sample and are not used in the construction of the K-th tree.
Let's see how Out-of-Bag (OOB) error estimation works:
The upper part of the figure above represents our original dataset. We've split it into training (left) and test (right) sets. In the left image, we draw a grid that neatly divides our dataset by classes. Now we use the same grid to estimate the share of correct answers on our test set. We can see that our classifier gave incorrect answers in the 4 cases that were not used during training (left). Therefore, the accuracy of our classifier is 11/15 * 100 % = 73.33 %.
To sum up, each base algorithm is trained on ~63 % of the original examples. It can be validated on the remaining ~37 %. The Out-of-Bag estimate is nothing more than the average estimate of the base algorithms on the ~37 % of inputs that were left out of training.
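In scikit-learn this internal estimate is exposed through the oob_score option; a minimal sketch on a synthetic dataset (parameter values are illustrative):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# oob_score=True asks the forest to evaluate each tree on the ~37% of
# rows left out of that tree's bootstrap sample.
X, y = make_classification(n_samples=1000, n_features=20, random_state=17)

forest = RandomForestClassifier(n_estimators=100, oob_score=True,
                                random_state=17)
forest.fit(X, y)
print(forest.oob_score_)  # out-of-bag accuracy, no separate holdout needed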
Leo Breiman succeeded in applying the bootstrap not only in statistics but also in machine learning. Together with Adel Cutler, he extended and improved the Random Forest algorithm proposed by Tin Kam Ho. They combined the construction of uncorrelated trees using CART, bagging and the random subspace method.
Decision trees are a good choice for the base classifier in bagging because they are quite complex and can achieve zero classification error on any sample. The random subspace method reduces the correlation between the trees and thus avoids overfitting. In the random subspace method, the base algorithms are trained on different random subsets of the original feature set.
The following algorithm builds a set of models using the random subspace method:
- Suppose the number of instances is equal to n and the number of dimensions of the entity is equal to d.
- Choose M as the number of individual models in the set.
- For each model m, choose the number of features dm < d. As a general rule, the same dm value is used for all models.
- For each model m, create a training set by selecting dm features at random from the feature set d.
- Train each model.
- Apply the resulting ensemble model to a new input by combining the results of all M models. You can use either majority voting or aggregation of the posterior probabilities.
The algorithm is as follows: each tree is built on its own bootstrap sample, and at each split the best feature is chosen from a random subset of the features. The final classifier is the average of the trees.
For classification problems, it is advisable to set m equal to the square root of d. For regression problems, we usually take m = d/3, where d is the number of features. It is recommended to build each tree until all its leaves contain only 1 instance for classification and 5 instances for regression.
You can think of a Random Forest as bagging of decision trees, with a random subset of features chosen at each split.
Here are the results for the three algorithms:
As we can see from our graphs and the MSE values above, a Random Forest of 10 trees achieves a better result than a single decision tree and is comparable to bagging with 10 trees. The main difference between Random Forests and bagging is that, in a Random Forest, the best feature for a split is selected from a random subset of the available features while, in bagging, all features are considered for the next best split.
We can also look at the benefits of random forests on classification problems.
The figures above show that the decision boundary of the decision tree is quite irregular and has many sharp angles that suggest overfitting and poor ability to generalize. We would struggle to make reliable predictions on new test data. In contrast, the bagging algorithm has a rather smooth boundary and shows no obvious signs of overfitting.
Now let's look at some parameters that can help us increase the accuracy of the model.
Parameters to increase accuracy
The scikit-learn library implements random forests by providing two estimators: RandomForestClassifier and RandomForestRegressor.
Below are the parameters we need to pay attention to when building a new model:
- n_estimators is the number of trees in the forest;
- criterion is the function used to measure the quality of a split;
- max_features is the number of features to consider when searching for the best split;
- min_samples_leaf is the minimum number of samples required to be at a leaf node;
- max_depth is the maximum depth of the tree.
The most important fact about random forests is that its accuracy does not decrease when we add trees, so the number of trees is not a hyperparameter of complexity unlike max_depth and min_samples_leaf. This means you can tune the hyperparameters with, say, 10 trees, then increase the number of trees up to 500 and be sure that the accuracy will only improve.
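A hedged sketch of that workflow (the parameter grid and dataset here are illustrative, not prescriptive): tune the complexity hyperparameters cheaply with a small forest, then rebuild with many more trees.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=17)

# Tune the complexity hyperparameters with a cheap 10-tree forest...
search = GridSearchCV(
    RandomForestClassifier(n_estimators=10, random_state=17),
    param_grid={'max_depth': [3, 5, None],
                'min_samples_leaf': [1, 3, 5]},
    cv=5)
search.fit(X, y)

# ...then rebuild with many more trees; accuracy should only improve.
final = RandomForestClassifier(n_estimators=500, random_state=17,
                               **search.best_params_)
final.fit(X, y)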
Extremely randomized trees apply a greater degree of randomization to the choice of cut point when splitting a tree node. As in Random Forests, a random subset of features is used. But, instead of searching for optimal thresholds, a threshold is drawn at random for each candidate feature, and the best among these randomly generated thresholds is used as the rule to split the node. This usually reduces the model's variance a little further, at the expense of a slight increase in bias.
In the scikit-learn library, there are 2 extremely random tree implementations: ExtraTreesClassifier and ExtraTreesRegressor.
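A minimal side-by-side sketch of the two estimator families on a synthetic dataset (illustrative values only):

from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=17)

# Same interface as RandomForestClassifier; only the split rule differs.
for Model in (RandomForestClassifier, ExtraTreesClassifier):
    score = cross_val_score(Model(n_estimators=100, random_state=17),
                            X, y, cv=5).mean()
    print(Model.__name__, score)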
This method is worth trying if you have badly overfit with random forests or gradient boosting.
Conclusion on the random forest
- High prediction accuracy; will perform better than linear algorithms in most problems; the precision is comparable to that of boosting;
- Robust to outliers, thanks to random sampling;
- Insensitive to feature scaling as well as any other monotonic transformation due to random subspace selection;
- Doesn't require careful tuning, and works quite well out of the box. With tuning, an accuracy gain of 0.5 to 3 % can be achieved, depending on the problem and the data;
- Effective for datasets with a large number of features and classes;
- Handles both continuous and discrete variables;
- Rarely overfits. In practice, an increase in the number of trees almost always improves the ensemble. But, after reaching a certain number of trees, the learning curve comes very close to its asymptote;
- There are methods developed to estimate the importance of features;
- Works well with missing data and maintains good levels of precision even when a large portion of the data is missing;
- Provides means to weight classes for the whole dataset as well as for each tree's sample;
- Under the hood, calculates proximities between pairs of instances which can then be used in clustering, outlier detection, or interesting data representations;
- The above functionality and properties can be extended to unlabeled data to allow unsupervised grouping, data visualization and detection of outliers;
- Easily parallelized and highly scalable.
- Compared to a single decision tree, the output of Random Forest is more difficult to interpret.
- There are no formal p-values for estimating feature importance.
- Performs less well than linear methods in the case of sparse data: text inputs, bags of words, etc.;
- Unlike linear regression, Random Forest is unable to extrapolate. But this can also be seen as an advantage, because outliers do not produce extreme predictions in random forests;
- Prone to overfitting in some problems, especially when dealing with noisy data;
- In the case of categorical variables with varying numbers of levels, random forests favor variables with a larger number of levels. The tree structure will adapt more to features with many levels, as splits on them can be more precise;
- If a dataset contains groups of correlated features with similar importance for the predicted classes, preference will be given to smaller groups;
- The resulting model is large and requires a lot of RAM. |
Developing critical thinking and problem solving techniques begins at the early stages of development. The students are encouraged to use their thought process to solve everyday occurrences in the classroom. Teachers give students the opportunity to elaborate and explain their ideas, opinions, thoughts, predictions, needs, works of art, and actions. They are also constantly presenting them with questions and situations that enable them to develop their critical thinking and problem solving skills.
Emphasis on Hands on Learning Approach:
Children learn best through play. That is why we enable the students to experience the learning process through the use of manipulative, educational toys, art, music, discovery, blocks, drama, and puzzles. The point is children learn best by being actively involved in the lesson. This involves actually experiencing what is being taught by doing it, seeing it, smelling it, tasting it, touching it, or hearing it.
Daily Exposure to Learning Programs
Art, Circle Time, Music, Discovery, Blocks, Drama, and Quiet Area
The children rotate around the learning centers that include art, music, discovery, blocks, drama, and quiet area. Each day the children also participate in circle time. In a typical day, children are exposed to:
Here students are exposed to clay, finger paint, crayons, markers, color pencils, painting on easels, glitter, crafts, glue, scissors, and encouraged to develop their creative abilities.
Here students are exposed to different musical instruments, toys that make noise, encouraged to develop their musical abilities, and enjoy the different sounds of music.
Here students are exposed to sand and water tables that give them the opportunity to experiment touching and feeling different objects dipped in sand and water, as well as exposure to other science related activities in order to develop their sense of touch, critical thinking, problem solving, and sense of curiosity.
Here the students are exposed to a variety of manipulatives such as wooden blocks, Mega blocks, cars, string beads, Mr. Potato Head, a Lego table, and are encouraged to develop and use their eye-hand coordination, imagination, visual discrimination, mathematical skills, and analytical skills.
Here students are exposed to costumes, hats, props, kitchen furniture & supplies, doctor kits, dolls, cash register & store supplies, puppets, puppet theatre, and encouraged to develop their creativity, role-playing techniques, acting skills, and imagination.
Here students are exposed to books, magnetic letters & numbers, puzzles, thinking games & activities, and encouraged to develop their critical thinking, problem solving, cognitive, and reading skills.
Here students are exposed to literature, songs, hand & body movements, stretching exercises, dancing, acting, and encouraged to tell stories, listen to stories, read aloud, create stories, predict what will happen in a story, and follow directions in order to use and develop their language, listening, cognitive, critical thinking & problem solving, and cooperative group skills, as well as develop their large-motor muscles.
Outdoor & Indoor Play Time:
Here students are exposed to outdoor and indoor play equipment and encouraged to run, play, use their imagination, and get physical exercise in order to develop their large-motor muscles.
Emphasis on Critical Thinking and Problem Solving
Incorporation of Character Education
We believe that character education is equally as important as academics. That is why we incorporate lessons and readings that assist in the development of the students’ character as well as model and encourage proper behaviors such as respect, self-esteem, good manners, self-discipline, responsibility, and moral values.
Outdoor Playground
Children need to develop their gross motor skills and coordination through movement. We provide the students with both an outdoor and indoor play area in order to enable them to use and develop their gross motor skills and coordination. Students use the playground equipment and play areas to practice balancing, jumping, stretching, hopping, riding, walking, running, swinging, dancing, sliding, climbing, throwing, catching, crawling, and most importantly having plain old FUN!
In this article, we continue to explore Big-O, which is a notation we use to describe algorithmic efficiency. If you haven’t read part one, you might like to read that part first.
In the last article we talked about where Big-O notation came from, and what notation we might expect to discover for an efficient or inefficient algorithm. For example the difference between O(1) constant time, and O(N) linear time when looking at time-complexity.
However, now we want to be able to look at an algorithm or a bit of code and work out which Big-O notation for time and space complexity it will conform to as input size increases. To do this, let's look at one commonly cited example of complexity in algorithms: search algorithms.
O(N) Linear Time Algorithms
O(N) refers to cases where a given operation's complexity scales linearly with the size of the input. The algorithm below is an example where we want to search for a given string in a given unordered array of strings; we would call this a Linear Search Algorithm, and we could use the same approach to search for anything comparable in an unordered array.
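The original code sample did not survive here, so the following is a minimal Python stand-in (names are illustrative):

from typing import Optional, Sequence

def linear_search(items: Sequence[str], target: str) -> Optional[int]:
    """Return the index of target in an unordered sequence, or None."""
    for i, item in enumerate(items):  # one comparison per element: O(N)
        if item == target:
            return i
    return None                        # worst case: checked every element

print(linear_search(["kiwi", "pear", "fig"], "pear"))  # 1
print(linear_search(["kiwi", "pear", "fig"], "plum"))  # None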
So, of course, the reason that this is O(N) is that we are taking our worst-case scenario. Whilst the item we are looking for could be found quickly in a given array, it could equally not be present in the array at all. This means that as the size of the input increases, the search time could take proportionally longer (as each checking operation takes the same time per entry in the array).
O(log n) — Logarithmic Time Algorithms
Moving on now to algorithms we can apply to ordered data-sets: Binary Search is an example of an algorithm whose Big-O time-complexity tends towards O(log n) as input size increases. Binary Search also has O(1) space-complexity, meaning that it doesn't take up any more memory, no matter the size of the input.
Let’s look at one way to implement Binary Search in Swift using a recursive algorithm (we could alternatively implement this iteratively).
SOURCE: Swift Algorithm Club
Binary Search aims to find the position of a target value within a sorted (or ordered) array (so in the above example it finds 17), and it does this by comparing “the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues on the remaining half until it is successful. If the search ends with the remaining half being empty, the target is not in the array” (Ref#: D).
Here is a python example:
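The Python snippet referenced here went missing in formatting; below is a hedged reconstruction of a recursive binary search (an iterative version would give the O(1) space bound mentioned above):

from typing import Optional, Sequence

def binary_search(a: Sequence[int], key: int, lo: int = 0,
                  hi: Optional[int] = None) -> Optional[int]:
    """Recursively search a sorted sequence; return an index or None."""
    if hi is None:
        hi = len(a)
    if lo >= hi:                       # empty range: key is absent
        return None
    mid = (lo + hi) // 2
    if a[mid] == key:
        return mid
    elif a[mid] < key:                 # discard the lower half
        return binary_search(a, key, mid + 1, hi)
    else:                              # discard the upper half
        return binary_search(a, key, lo, mid)

print(binary_search([1, 3, 5, 7, 9, 11, 13], 7))  # 3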
And here is an illustration of how the search happens for a target of 7 in a sorted array:
So we see that binary search tries to get a better search time through a divide & conquer strategy. You can think of a binary search as recursive in nature because it looks to apply the same logic recursively to the smaller and smaller subarrays. To explore then the reason why binary search has a time-complexity of O(log n) we can use Master Theorem for algorithmic analysis, or see the formula worked out here (but intuitively, if we remember that on the graph O(log n) levels out as n increases, we understand that this is because we start to break the problem down into smaller sub-problems which are each faster to solve).
O(n²) — Quadratic Time Algorithms
When we see this notation, we know it is for cases where a given operation's complexity grows with the square of the input size (so we want to avoid these if possible). One example of a quadratic-time algorithm is traversing a 2D array. Another is printing all possible pairs in an array composed of pairs of Ints, like:
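The original listing is missing, so here is a hypothetical Python version of the pairs example:

pairs = [(1, 2), (3, 4), (5, 6)]

# For each pair we loop over every pair again, so the number of print
# calls is len(pairs) squared: O(n^2).
for x, _ in pairs:
    for _, y in pairs:
        print(x, y)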
You can probably see fairly easily here that as soon as we add another pair of integers to our input, we multiply the number of calculations, since we work out pairs of x and y between all of the pairs. This means the curve on the graph rises quadratically as we add more inputs.
The Four Rules of Big-O-ifying
Often we may want to work out the Big-O of a bit of code we are shown (when we are trying to evaluate the best solution to a given problem, for example). When working out the Big-O of some code, there are four key rules that one should follow when deriving the correct complexity for an algorithm (Ref#: G).
Rule One: Different Steps Get Added
Here a particular function contains two separate processes, each of which has its own complexity; these could be the same or different, but we add them together, such that with one operation being O(a) and the other being O(b), the total becomes O(a+b) as a first step towards working out the overall complexity.
Rule Two: Drop Constants
So, for example, if you have two operations in a function which both have complexity O(N), then we might naively think that the complexity would be O(2N). However, the collective complexity is not O(2N), as the constant 2 is not important; we drop the constant and focus on the dominant element, meaning that the overall complexity is indeed O(N). This also means that if we have a complexity of O(N² + N²), then that just becomes O(N²), since O(N² + N²) = O(2N²) => O(N²).
Rule Three: Associate Different Inputs with Different Complexity Variables
This rule says that where we have a function taking in two inputs we want to take this into account when we describe the overall efficiency. This is easier to demonstrate with an example:
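The example that followed has been lost, so here is a hypothetical stand-in with two distinct inputs whose lengths both matter:

def intersect(array1, array2):
    """Collect every element that appears in both input lists."""
    common = []
    for x in array1:          # runs len(array1) times
        for y in array2:      # runs len(array2) times per outer pass
            if x == y:
                common.append(x)
    return common             # total work: O(a * b), not O(N^2)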
One might intuitively think that the complexity of the above should be described as O(N²); however, this rule suggests that, because we don't know what N would be describing in this case, but we do know that the lengths of both arrays are factors, a better way to describe it is as O(a * b), where a is linked to the length of array1 and b to the length of array2.
Rule Four: Drop Non-Dominant Terms
So if you had a function containing an operation of complexity O(N) and another of O(N²), we might be tempted to say that the complexity of the function is something like O(N + N²). But the N term becomes less and less important as input size grows; it is basically "non-dominant", so we drop it and, in fact, we just end up with O(N²).
So with these four rules, we should be able to start to think about the time and space complexities of a range of different simple algorithms, and even of novel algorithms we are encountering for the first time, and to come up with the correct Big-O notation describing the time-complexity of each. We have also seen in this article some typical examples of algorithms that have a particular time-complexity and the reasons for this. Clearly there are some more complex cases where it can be more challenging to work this out straightforwardly, but the rules for these can also be learnt through more in-depth study of the mathematics involved.
A: Know Thy Complexities! (n.d.). Retrieved February 18, 2018, from http://bigocheatsheet.com/
B: Webb-Orenstein, C. (2017, May 11). Complexity and Big-O Notation In Swift – Journey Of One Thousand Apps – Medium. Retrieved February 18, 2018, from https://medium.com/journey-of-one-thousand-apps/complexity-and-big-o-notation-in-swift-478a67ba20e7
C: Hollemans, M. et al. (n.d.). Binary Search. Retrieved February 18, 2018, from https://github.com/raywenderlich/swift-algorithm-club/tree/master/Binary Search
Post Updated: 6th May 2018 |
The cognitive perspective is mainly concerned with mental functions such as memory, perception, attention, etc. It views people as being similar to computers in the way we process information (e.g., input-process-output). For example, both human brains and computers process information, store data and have input and output procedures.
This has led cognitive psychologists to explain that memory consists of three stages: encoding (the process in which information is received and attended to), storage (the process in which information is retained) and retrieval (the process in which information is recalled).
It is a scientific approach and typically uses lab experiments to study human behavior. The cognitive approach has many applications including cognitive therapy and eyewitness testimony.
What is cognitive psychology?
Cognitive psychology is the scientific study of the mind as an information processor. Cognitive psychology focuses on the way people process information. It works on how we process and receive the information and how the received information leads to our responses.
Wilhelm Wundt was the scientist who founded the first psychological laboratory.
His initiative was soon followed by other European and American Universities. These early laboratories, through experiments, explored areas such as memory and sensory perception, both of which Wundt believed to be closely related to physiological processes in the brain. The whole movement had evolved from the early philosophers, such as Aristotle and Plato. Today this approach is known as cognitive psychology.
Cognitive Psychology revolves around the notion that if you want to know what makes people tick then the way to do it is to figure out what processes are actually going on in their minds.
Human Experimental Psychology
Human experimental psychology can be defined as the scientific and empirical approach to the study of the mind. The experimental approach means that tests are administered to participants, with both control and experimental conditions.
This means that a group of participants are exposed to a stimulus (or stimuli), and their behavior in response is recorded. This behavior is compared to some kind of control condition, which could be either a neutral stimulus, the absence of a stimulus, or against a control group.
Human experimental psychology is concerned with testing theories of human thoughts, feelings, actions, and beyond any aspect of being human that involves the mind.
Human experimental psychology is further categorized into memory, attention, problem-solving, and language.
Cognitive psychologist Margaret W. Matlin has described memory as the "process of retaining information over time." Others have defined it as the ability to use past experiences to determine a future path.
How do we form memories?
The process of encoding a memory begins when we are born and occurs continuously. For something to become a memory, it must first be picked up by one or more of our senses. A memory starts off in short-term storage. We learn how to tie our shoe, for example. Once we have the process down, it goes into our long-term memory and we can do it without consciously thinking about the steps involved.
Motivation is also a consideration, in that information relating to something that we have a keen interest in is more likely to be stored in our long-term memory. That is why some people might be able to recall the stats of a favorite baseball player years after they have retired or where a favorite pair of shoes was purchased.
Memory loss is often associated with aging, but there are a number of things that can trigger short- and long-term memory loss, including injury, medications and witnessing a traumatic event.
What are the types of memory?
The following are the types of memory:
Short term memory is generally described as the recollection of things that happened immediately up to a few days. It is generally believed that five to nine items can be stored in active short-term memory and can be readily recalled.
Patients who suffer from short-term memory loss cannot remember who walked into the room ten minutes ago, but can remember a childhood friend from 50 years ago.
Implicit memory is sometimes referred to as unconscious memory or automatic memory. It uses past experiences to remember things without thinking about them. Musicians and professional athletes are said to have superior ability to form procedural memories.
Procedural memory is a subset of implicit memory: it is the part of long-term memory responsible for knowing how to do things, also known as motor skills. We do not have to delve into our memory to recall how to walk each time we take a step.
Some examples of procedural memory:
Explicit memory is sometimes referred to as declarative memory. It requires a more concerted effort to bring to the surface. Declarative memory involves both semantic and episodic memory.
Semantic memory is not connected to personal experience. It includes things that are common knowledge, such as the names of states, the sounds of letters, the capitals of countries and other basic facts that are not in question. Some examples of semantic memory include:
Episodic memory is a person’s unique recollections of a specific event or an episode. People are usually able to associate particular details with an episodic memory, such as how they felt, the time and place, and other particulars. It is not clear as to why some memories of events in our lives are committed to memory, while others do not get recorded, but researchers believe that emotions play a critical role in what we remember.
Some examples of episodic memory:
- Where we were and the people we were with when we found out about the Challenger space shuttle disaster
- Your beach vacation last summer
- The first time we traveled by plane
- Our first day at a new job
How to improve memory?
There are many fun, simple and even delicious ways to improve your memory.
Exercising your mind and body, enjoying a quality piece of chocolate and reducing the amount of added sugar in your diet are all excellent techniques.
Try adding a few of the science-backed tips to your daily routine to boost your brain health and keep your memory in top condition.
There are three main stages of memory, which are encoding, storage, and retrieval. Problems can occur at any of these stages. The three main forms of memory storage are sensory memory, short-term memory, and long-term memory.
Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.
What are the types of attention?
Many aspects of attention have been studied in the field of psychology. In some respects, we define different types of attention by the nature of the task used to study it.
Divided attention tasks allow us to determine how well individuals can attend to many sources of information at once.
Spatial attention refers specifically to how we focus on one part of our environment and how we move attention to other locations in the environment.
In selective attention some information is attended to while other information is intentionally blocked out. Selective attention is the ability to select certain stimuli in the environment to process, while ignoring distracting information.
It may be useful to think of attention as a mental resource, one that is needed to focus on and fully process important information, especially when there is a lot of distracting “noise” threatening to obscure the message. Our selective attention system allows us to find or track an object or conversation in the midst of distractions. Whether the selection process occurs early or late in the analysis of those events has been the focus of considerable research, and in fact how selection occurs may very well depend on the specific conditions. With respect to divided attention, in general we can only perform one cognitively demanding task at a time, and we may not even be aware of unattended events even though they might seem too obvious to miss.
What is problem solving?
A problem arises when we need to overcome some obstacle in order to get from our current state to a desired state. Problem solving is the process that an organism implements in order to try to get from the current state to the desired state.
An historical review of approaches to problem solving
The behaviourist approach
Behaviourist researchers argued that problem solving was a reproductive process; that is, organisms faced with a problem applied behaviour that had been successful on a previous occasion. Successful behaviour was itself believed to have been arrived at through a process of trial-and-error. In 1911, Edward Thorndike developed his law of effect after observing cats discover how to escape from the cage into which he had placed them.
Gestalt psychologists argued that problem solving was a productive process. In particular, in the process of thinking about a problem individuals sometimes “restructured” their representation of the problem, leading to a flash of insight that enabled them to reach a solution.
Language can be defined as the complex method which is used by humans in order to communicate with each other.
It is important to note that language should be structured and based on the words which are stored in the people’s minds as dictionary. From this point, if language is a system and structure to use words while speaking and writing appropriately, lexicon is the complex of those words preserving in the human mind.
Cognitive psychology studies the aspects of human cognition. Many processes of human cognition depend on language and its qualities. Thus, to realize the cognitive processes, people perceive, understand, learn, and remember a lot of information presented in the form of language. From this perspective, language is the stable structure which develops within the society, but it is perceived by persons individually.
Communication is the specific ability of humans to exchange their thoughts and ideas with the help of certain spoken signals and written signs which contain the definite meaning and can be successfully recognized by the participants of the communication process. Meaningful signals, signs, and symbols are combined in a complex known as language.
Computer Analogies: The Information Processing Approach
Information processing theory is a cognitive theory that uses computer processing as a metaphor for the workings of the human brain. Initially proposed by George A. Miller and other American psychologists in the 1950s, the theory describes how people focus on information and encode it into their memories.
The information-processing approach focuses on how the human memory system acquires, transforms, compacts, elaborates, encodes, retrieves, and uses information. The memory system is divided into three main storage structures: sensory registers, short-term memory, and long-term memory. Each structure is synonymous with a type of processing.
To understand how an individual is able to interpret information, the researcher must first focus on decisions made at each memory storage structure. Within the information-processing model, attention and pattern recognition determine the environmental factors that are processed. A large amount of information impinges on the sensory registers, but it is quickly lost if not attended to. Attention, therefore, plays an important role in selecting sensory information.
The information processing approach is further categorized into artificial intelligence and computer simulation.
Artificial intelligence is intelligence exhibited by machines. In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
Cognitive scientists work collectively in hope of understanding the mind and its interactions with the surrounding world much like other sciences do.
Computer simulation is the use of a computer to represent the dynamic responses of one system by the behaviour of another system modeled after it. A simulation uses a mathematical description, or model, of a real system in the form of a computer program. This model is composed of equations that duplicate the functional relationships within the real system. When the program is run, the resulting mathematical dynamics form an analog of the behaviour of the real system, with the results presented in the form of data. A simulation can also take the form of a computer-graphics image that represents dynamic processes in an animated sequence.
Computer simulations are used to study the dynamic behaviour of objects or systems in response to conditions that cannot be easily or safely applied in real life.
Cognitive neuroscience is the study of how the brain enables the mind. Brain science explores how individual neurons operate and communicate to form complex neuronal architectures that comprise the human brain. Cognitive science uses the experimental methods of cognitive psychology and artificial intelligence to create and test models of higher-level cognition such as thought and language. Cognitive neuroscience bridges these two domains. It maps higher-level cognitive functions to known brain architectures and known modes of neuronal processing.
Cognitive Neuroscience is an interdisciplinary area of research that combines measurement of brain activity (mostly by means of neuroimaging) with a simultaneous performance of cognitive tasks by human subjects.
Brain Damage and effect on cognition
Brain damage occurs when a person’s brain is injured due to traumatic injury, such as a fall or car accident, or nontraumatic injury, such as a stroke.
Doctors more commonly refer to brain damage as brain injury because this term better describes what’s happening in the brain.
Brain damage can affect cognition in many ways. After brain damage, it is common for people to have problems with concentration, memory, speech, and even with problem-solving.
Problems with reasoning, problem-solving and judgment
- Individuals with brain damage may have difficulty recognizing when there is a problem, which is the first step in problem-solving.
- They may have trouble analyzing information or changing the way they are thinking (being flexible).
- When solving problems, they may have difficulty deciding the best solution, or get stuck on one solution and not consider other, better options.
- They may make quick decisions without thinking about the consequences, or not use the best judgment.
What can be done to improve reasoning and problem-solving?
- A speech therapist or psychologist experienced in cognitive rehabilitation can teach an organized approach for daily problem-solving.
- Work through a step-by-step problem-solving strategy in writing: define the problem; brain-storm possible solutions; list the pros and cons of each solution; pick a solution to try; evaluate the success of the solution; and try another solution if the first one doesn’t work.
After a brain damage it is common for people to have problems with attention, concentration, speech and language, learning and memory, reasoning, planning and problem-solving.
Therefore, in conclusion, there are so many different perspectives in psychology to explain the different types of behavior and give different angles. No one perspective has explanatory powers over the rest.
Frequently Asked Questions
What is cognitive perspective?
The cognitive perspective is concerned with “mental” functions such as memory, perception, attention, etc. It views people as being similar to computers in the way we process information (e.g., input-process-output). The cognitive approach has many applications including cognitive therapy and eyewitness testimony.
What is an example of cognitive perspective?
Learning is an example of cognition: the way our brain makes connections as we learn concepts in different ways in order to remember what we have learned. Our ability to reason through logic is another prime example of cognition. People do reason in different ways; think, for instance, about why people buy certain things when they shop.
Why is the cognitive perspective important?
The cognitive perspective operates on the belief that the brain is the most important aspect in relation to the way that an individual behaves or thinks. This perspective states that to understand someone, you must first be able to understand what is happening in their mind.
What is the focus of the cognitive perspective?
The cognitive approach in psychology is a relatively modern approach to human behaviour that focuses on how we think. It assumes that our thought processes affect the way in which we behave.
How does the brain work psychology?
Cognitive psychology explores our mental processes. Cognitive psychologists, sometimes called brain scientists, study how the human brain works: how we think, remember and learn. They apply psychological science to understand how we perceive events and make decisions.
There is nothing better than deductive reasoning for winning an argument or testing a belief. This type of logical argument produces rock-solid conclusions, yet not everyone can use it correctly. Deductive reasoning, also known as deductive logic, is used in both academia and everyday life.
Let’s explore what deductive reasoning is.
What is deductive reasoning?
Deductive reasoning is the method of reasoning from one or more factual statements (i.e., premises), starting with a general idea to reach a logical conclusion. It is based on a hypothesis or a general statement that is believed to be true. The premise is then used to reach a logical and specific conclusion.
Below are some examples which showcase deductive reasoning:
If all A are B
and all B are C,
then all A are C. (Deductive reasoning)
All men are mortal.
Theo is a man.
Therefore, Theo is mortal. (Deductive reasoning)
Deductive Reasoning vs Inductive Reasoning
- Inductive reasoning starts with specific observations that are used to reach a broad conclusion. Deductive reasoning, on the other hand, begins with a general statement that is used to reach a specific conclusion.
- An instance of inductive reasoning: a café owner observes that a few customers wait to enter the café before it opens every day, and hence the owner decides to open the café an hour earlier five days a week.
- An instance of deductive reasoning – A sales professional might use deductive reasoning to test selling strategies and formulate new goals.
Advantages of Deductive Reasoning
Deductive reasoning helps you apply logic to your work-related discussions. Even when a decision doesn't work out, you can explain why you decided to do what you did. Employers value deductive reasoning, and being able to use it shows that you are decisive and proactive.
Make sure you highlight your deductive reasoning skills, which are particularly essential for managerial positions in which you will be required to make crucial decisions that affect the organization. Remember not to include the phrase 'deductive reasoning' unless it's asked for in the job requirements.
Examples of Deductive Reasoning
- A retail outlet has recently identified that customers are purchasing fresh food items instead of frozen food items. The store owner then reduced the number of frozen items in the outlet.
- An IT department identified that employees were facing issues with a specific brand of keyboard. They decided to stop issuing keyboards from that brand and to order them from another brand.
- It was discussed in a meeting yesterday that whoever will generate the highest sales, will get a promotion at the end of the year. I generated high sales, and so I am looking for a promotion.
- The HR department announced that personality development sessions would take place every week and that it would be compulsory for everyone to attend. The candidate who shows the most participation in the sessions will be rewarded and appreciated. My manager then gave instructions about the sessions and directed every team member to participate.
Kinetics, or rates of chemical reactions, represents one of the most complex topics faced by high-school and college chemistry students. The rate of a chemical reaction describes how the concentrations of products and reactants change with time. As a reaction proceeds, the rate tends to decrease because the chance of a collision between reactants becomes progressively lower. Chemists therefore tend to describe reactions by their "initial" rate, which refers to the rate of reaction during the first few seconds or minutes. In general, chemists represent chemical reactions in the form aA + bB ---> cC + dD, where A and B represent reactants, C and D represent products, and a, b, c and d represent their respective coefficients in the balanced chemical equation. The rate equation for this reaction is then rate = (-1/a) d[A]/dt = (-1/b) d[B]/dt = (1/c) d[C]/dt = (1/d) d[D]/dt, where square brackets denote the concentration of the reactant or product; a, b, c and d represent the coefficients from the balanced chemical equation; and t represents time.
Write a balanced chemical equation for the reaction under investigation. As an example, consider the reaction of hydrogen peroxide, H2O2, decomposing to water, H2O, and oxygen, O2: 2 H2O2 ---> 2 H2O + O2. A "balanced" reaction contains the same number of each type of atom on both the left and right sides of the arrow. In this case, both sides contain four hydrogen atoms and two oxygen atoms.
Construct the rate equation based on the equation given in the Introduction. Continuing the example from step 1: rate = -(1/2) d[H2O2]/dt = (1/2) d[H2O]/dt = (1/1) d[O2]/dt.
Substitute the concentration and time data into the equation from step 2, based on the information available in the problem or obtained during an experiment. For example, for the reaction described above, assume the following data was obtained:

time (s)    [H2O2] (M)
0           0.250
10          0.226

This data indicates that after 10 seconds, the concentration of hydrogen peroxide decreased from 0.250 moles per liter to 0.226 moles per liter. The rate equation then becomes rate = -(1/2) d[H2O2]/dt = -(1/2) (0.226 - 0.250)/10 = 0.0012 M/s. This value represents the initial rate of the reaction.
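For readers who prefer to check the arithmetic programmatically, here is a short sketch of the same calculation (the variable names are illustrative):

# Worked version of the numbers above: initial rate of H2O2 decomposition.
a = 2                   # stoichiometric coefficient of H2O2
c0, c1 = 0.250, 0.226   # [H2O2] in mol/L at t = 0 s and t = 10 s
dt = 10.0               # elapsed time in seconds

rate = -(1 / a) * (c1 - c0) / dt
print(rate)             # 0.0012 M/s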
Hearing & perception
The operation of the ear has two facets: the behavior of the mechanical apparatus and the neurological processing of the information acquired. The mechanics of hearing are straightforward and well understood, but the action of the brain in interpreting sounds is still a matter of dispute among researchers.
THE EAR MECHANISM
The ear contains three sections, the outer, middle, and inner ears. The outer ear consists of the lobe and ear canal, structures which serve to protect the more delicate parts inside.
The outer boundary of the middle ear is the eardrum, a thin membrane which vibrates in sympathy with any entering sound. The motion of the eardrum is transferred across the middle ear via three small bones named the hammer, anvil, and stirrup. These bones are supported by muscles which normally allow free motion but can tighten up and inhibit the bones' action when the sound gets too loud. The leverages of these bones are such that rather small motions of the ear drum are very efficiently transmitted.
The boundary of the inner ear is the oval window, another thin membrane which is almost totally covered by the end of the stirrup. The inner ear is not a chamber like the middle ear, but consists of several tubes which wind in various ways within the skull. Most of these tubes, the ones called the semicircular canals, are part of our orientation apparatus. (They contain fine particles of dust; the location of the dust tells us which way is up.) The tube involved in the hearing process is wound tightly like a snail shell and is called the cochlea.
Schematic of the ear
This is a diagram of the ear with the cochlea unwound. The cochlea is filled with fluid and is divided in two the long way by the basilar membrane. The basilar membrane is supported by the sides of the cochlea but is not tightly stretched. Sound introduced into the cochlea via the oval window flexes the basilar membrane and sets up traveling waves along its length. The taper of the membrane is such that these traveling waves are not of even amplitude the entire distance, but grow in amplitude to a certain point and then quickly fade out. The point of maximum amplitude depends on the frequency of the sound wave.
The basilar membrane is covered with tiny hairs, and each hair follicle is connected to a bundle of nerves. Motion of the basilar membrane bends the hairs which in turn excite the associated nerve fibers. These fibers carry the sound information to the brain. This information has two components. First, even though a single nerve cell cannot react fast enough to follow audio frequencies, enough cells are involved that the aggregate of all the firing patterns is a fair replica of the waveform. Second, and probably most importantly, the location of the hair cells associated with the firing nerves is highly correlated with the frequency of the sound. A complex sound will produce a series of active loci along the basilar membrane that accurately matches the spectral plot of the sound.
The amplitude of a sound determines how many nerves associated with the appropriate location fire, and to a slight extent the rate of firing. The main effect is that a loud sound excites nerves along a fairly wide region of the basilar membrane, whereas a soft one excites only a few nerves at each locus.
The mechanical process described so far is only the beginning of our perception of sounds. The mechanisms of sound interpretation are poorly understood, in fact is not yet clear whether all people interpret sounds in the same way. Until recently, there has been no way to trace the wiring of the brain, no way to apply simple stimuli and see which parts of the nervous system respond, at least not in any detail. The only research method available was to have people listen to sounds and describe what they heard. The variability of listening skills and the imprecision of the language combined to make psycho-acoustics a rather frustrating field of study. Some of the newest research tools show promise of improving the situation, so research that is happening now will likely clear up several of the mysteries. The current best guess as to the neural operation of hearing goes like this:
We have seen that sound of a particular waveform and frequency sets up a characteristic pattern of active locations on the basilar membrane. (We might assume that the brain deals with these patterns in the same way it deals with visual patterns on the retina.) If a pattern is repeated enough we learn to recognize that pattern as belonging to a certain sound, much as we learn a particular visual pattern belongs to a certain face. (This learning is accomplished most easily during the early years of life.) The absolute position of the pattern is not very important; it is the pattern itself that is learned. We do possess an ability to interpret the location of the pattern to some degree, but that ability is quite variable from one person to the next. (It is not clear whether that ability is innate or learned.) What use the brain makes of the fact that the aggregate firing of the nerves more or less approximates the waveform of the sound is not known. The processing of impulse sounds (which do not last long enough to set up basilar patterns) is also not well explored.

INTERPRETATION OF SOUNDS
Most studies in psycho-acoustics deal with the sensitivity and accuracy of hearing. This data was intended for use in medicine and telecommunications, so it reflects the abilities of the average untrained listener. It seems to be traditional to weed out musicians from such studies, so the capabilities of trained ears are not documented. I suspect such capabilities are much better than the classic studies suggest.
The ear can respond to a remarkable range of sound amplitude. (Amplitude corresponds to the quality known as loudness.) The ratio between the threshold of pain and the threshold of sensation is on the order of 130 dB, or ten trillion to one. The judgment of relative sounds is more or less logarithmic, such that a tenfold increase in sound power is described as "twice as loud". The just noticeable difference in loudness varies from 3 dB at the threshold of hearing to an impressive 0.5 dB for loud sounds.
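As a quick check on those figures (an editorial aside; the decibel formula below is the standard definition, not something stated in the original text): a level difference of \(L\) decibels corresponds to a power ratio of
\[
\frac{P_1}{P_0} = 10^{L/10}, \qquad 10^{130/10} = 10^{13},
\]
which is exactly the ten-trillion-to-one ratio quoted above.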
Perceived loudness of sounds
The sensation of loudness is affected by the frequency of the sound. A series of tests using sine waves produces the curves shown. At the low end of the frequency range of hearing, the ear becomes less sensitive to soft sounds, although the pain threshold as well as judgments of relatively loud sounds are not affected much. Sounds of intermediate softness show some but not all of the sensitivity loss indicated for the threshold of hearing. At high frequencies the change in the sensitivity is more abrupt, with sensation ceasing entirely around 20 kHz. The threshold of pain increases in the top octave also.
The ability to make loudness judgments is compromised for sounds of less than 200ms duration. Below that limit, the loudness is affected by the length of the sound; shorter is softer. Durations longer than 200ms do not affect loudness judgment, beyond the fact that we tend to stop paying attention to long unchanging tones.
The threshold of hearing for a particular tone can be raised by the presence of another noise or another tone. White noise reduces the loudness of all tones, regardless of absolute level. If the bandwidth of the masking noise is reduced, the effect of masking loud tones is reduced, but the threshold of hearing for those tones remains high. If the masking sound is narrow band noise or a tone, masking depends on the frequency relationship of the masked and masking tones. At low loudness levels, a band of noise will mask tones of higher frequency than the noise more than those of lower frequency. At high levels, a band of noise will also mask tones of lower frequency than itself.
People's ability to judge pitch is quite variable. (Pitch is the quality of sound associated with frequency.) Most subjects studied could match pitches very well, usually getting the frequencies of two sine waves within 3%. (Musicians can match frequencies to 1%, or should be able to.) Better results are obtained if the stimuli are similar complex tones, which makes sense since there are more active points along the basilar membrane to give clues. Dissimilar complex tones are apparently fairly difficult to match for pitch (judging from experience with ear training students; I haven't seen any studies on the matter to compare them with sine tone results).
Judgment of relative pitch intervals is extremely variable. The notion of the two to one frequency ratio for the octave is probably learned, although it is easily learned given access to a musical instrument. An untrained subject, asked to set the frequency of a tone to twice that of a reference, is quite likely to set them a twelfth or two octaves apart or find some arbitrary and inconsistent ratio. The tendency to land on "proper" intervals increases if complex tones are used instead of sine tones. Trained musicians often produce octaves slightly wider than two to one, although the practical aspects of their instrument strongly influence their sense of interval. (As a bassoonist who has played the same instrument for twenty years, I have a very strong tendency to place G below middle C a bit high.)
Identification of intervals is even more variable, even among musicians. It does appear to be trainable, suggesting it is a learned ability. Identification of exact pitches is so rare that it has not been properly studied, but there is some anecdotal evidence (such as its relatively more common occurrence among people blind from birth) suggesting it is somehow learned also.
The amplitude of sound does not have a strong effect on the perception of pitch. Such effects seem to hold only for sine tones. At low loudness levels pitch recognition of pure tones becomes difficult, and at high levels increasing loudness seems to shift low and middle register pitches down and high register pitches up.
The assignment of the quality of possessing pitch in the first place depends on the duration and spectral content of the sound. If a sound is shorter than 200ms or so, pitch assignment becomes difficult with decreasing length until a sound of 50ms or less can only be described as a pop. Sounds with waveforms fitting the harmonic pattern are clearly heard as pitched, even if the frequencies are offset by some additive factor. As the spectral plot deviates from the harmonic model, the sense of pitch is reduced, although even noise retains some sense of being high or low.
Recognition of sounds that are similar in aspects other than pitch and loudness is not well studied, but it is an ability that everyone seems to share. We do know that timbre identification depends strongly on two things, waveform of the steady part of the tone, and the way the spectrum changes with time, particularly at the onset or attack. This ability is probably built on pattern matching, a process that is well documented with vision. Once we have learned to identify a particular timbre, recognition is possible even if the pitch is changed or if parts of the spectrum are filtered out. (We are good enough at this that we can tell the pitch of low sounds when played through a sound system that does not reproduce the fundamentals.)
We are also able to perceive the direction of a sound source with some accuracy. Left and right location is determined by perception of the difference of arrival time or difference in phase of sounds at each ear. If there are more than two arrivals, as in a reverberant environment, we choose the direction of the first sound to arrive, even if later ones are louder. Localization is most accurate with high frequency sounds with sharp attacks.
Height information is provided by the shape of our ears. If a sound of fairly high frequency arrives from the front, a small amount of energy is reflected from the back edge of the ear lobe. This reflection is out of phase for one specific frequency, so a notch is produced in the spectrum. The elongated shape of the lobe causes the notch frequency to vary with the vertical angle of incidence, and we can interpret that effect as height. Height detection is not good for sounds originating to the side or back, or lacking high frequency content. |
Warm-up: Number Talk: What's the Sum? (10 minutes)
- Display one expression.
- “Give me a signal when you have an answer and can explain how you got it.”
- 1 minute: quiet think time
- Record answers and strategies.
- Keep expressions and work displayed.
- Repeat with each expression.
Find the value of each expression mentally.
- \(20 + 10 + 10 + 5\)
- \(30 + 25\)
- \(35 + 15\)
- \(15 + 25 + 15\)
- “What patterns did you notice working with these numbers?”
- “How could the second problem help you think about the last one?” (I know that \(15 + 15 = 30\), so it was the same problem.)
Activity 1: Pizza to Share (15 minutes)
The purpose of this activity is for students to learn that when you partition a shape into 2, 3, or 4 equal pieces, the whole shape can be named as 2 halves, 3 thirds, or 4 fourths, respectively. This activity uses the context of pizza to intentionally elicit “whole” from students to describe the situation. They will continue to deepen their understanding of a whole as a mathematical term during their study of fractions in grade 3. Students observe regularity in repeated reasoning (MP8) when they see that however many equal pieces the whole pizza is cut into, that number of pieces makes the whole.
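For teachers who want to verify the underlying arithmetic fact (however many equal pieces the whole is cut into, that many pieces make one whole), here is a minimal illustration using Python's exact fraction arithmetic; it is an editorial aside, not part of the student-facing materials:

```python
from fractions import Fraction

# n equal pieces of size 1/n always make exactly 1 whole.
for n in (2, 3, 4):          # halves, thirds, fourths
    piece = Fraction(1, n)   # one equal piece of the whole
    print(n, "pieces of", piece, "=", n * piece)  # prints 1 each time
```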
Students begin the activity by looking at the problem displayed, rather than in their books. At the end of the launch, students open their books and work on the problem.
This activity uses MLR5 Co-craft Questions. Advances: writing, reading, representing.
- Groups of 2
MLR5 Co-Craft Questions
- Display only the problem stem and image for Clare’s pizza, without revealing the questions.
- “Clare’s friends were going to share a pizza. The image shows how they cut the pizza.”
- “Write a list of mathematical questions that could be asked about this situation.” (How many friends does she have? How much pizza can each friend get? Can they slice the pizza a different way? What would be a fair way to share the pizza with her friends?)
- 2 minutes: independent work time
- 2–3 minutes: partner discussion
- Invite several students to share one question with the class. Record responses.
- “What do these questions have in common? How are they different?”
- Reveal the additional context and questions for Clare’s pizza (students open books), and invite additional connections.
- “Why do you believe her friends are upset?” (The pizza is cut into thirds, so 3 slices would be the whole pizza. 3 thirds is the same as the whole thing.)
- “Now you and your partner will discuss what happened in each problem when students share a pizza.”
- 10 minutes: partner work time
Clare’s friends were going to share a pizza. The image shows how they cut the pizza.
Clare ate 3 slices and her friends got upset with her.
- Why are her friends upset?
- How many thirds did Clare eat?
- How much of the pizza was left?
- Priya will eat ________________________ of the pizza.
- Together they will eat ________________________ of the pizza.
- Each girl will eat ______________________ of the pizza.
- Together they will eat ________________________ of the pizza.
- How much pizza will each child eat? ________________________
- How much pizza will they eat in all? ________________________
Advancing Student Thinking
If students disagree that 2 halves, 3 thirds, or 4 fourths is the same as the whole pizza, consider asking:
- “If Jada ate this piece (one half) and Mai ate this piece (one half), how much of the pizza is left?”
- “How could you show how much of the pizza each student ate?”
- Invite students to share how much of the pizza each group of friends ate using halves, thirds, fourths, and quarters.
- Record responses.
- “Each group had the same-size pizza. Of all the students, which group of students will have the largest slices? Why do you think that is?” (Jada and Mai only have to share the pizza with 2 people, so they have the largest pieces.)
- “What do you notice about the size of the slices and the number of students?” (The slices get smaller if there are more students.)
- Share and record responses.
Activity 2: Equal Shares of the Pie (20 minutes)
The purpose of this activity is for students to recognize and describe pieces of circles using the words half of, a third of, and a quarter of. Students match shapes partitioned into halves and quarters to stories and partition shapes into quarters and halves based on directions. Students can continue to use one fourth when describing a piece, but encourage the use of a quarter as a way to describe the same piece.
Supports accessibility for: Visual-Spatial Processing
Materials to Gather
- Groups of 2
- Give students access to colored pencils.
- “You are going to read some stories with a partner about students sharing pies.”
- “Then you will partition and color shapes on your own.”
- 5 minutes: partner work time
- As students work, encourage them to use precise language when talking with their partners.
- Consider asking: “Is there another way you could say how much of the circle is shaded?”
- 10 minutes: independent work time
- Monitor for students who accurately shade the circles to share in the synthesis.
Write the letter of each image next to the matching story.
- Noah ate most of the pie. He left a quarter of the pie for Diego. __________
- Lin gave away a half of her pie and kept a half of the pie for herself. __________
- Tyler cut a pie into four equal pieces. He ate a quarter of the pie. __________
- Mai sliced the pie to share it equally with Clare and Priya. __________
- How much of the pie will they each get? a _________________
- How much of the pie will they eat in all? _________________
Now you try.
- Partition the circle into four equal pieces.
- Shade in a quarter of the circle red.
- Shade in the rest of the circle blue.
How much of the circle is shaded? _____________________________
- Partition the circle into 2 equal pieces.
- Shade one half of the circle blue.
- Color the other piece yellow.
How much of the circle is yellow? ________________________
How much of the circle is shaded? ________________________
- Invite previously selected students to display their partitioned circles. Share at least one example of a circle partitioned into halves and one example of a circle partitioned into fourths.
- “How are the circles you partitioned and shaded the same? How are they different?” (They are both circles. They are shaded with different colors. They are partitioned differently. The whole circle is shaded in both. All of the pieces are shaded in both.)
- “How much of each circle is shaded?” (4 fourths, 2 halves, the whole circle)
“We have learned a lot about composing and decomposing shapes. Sometimes different-size pieces can make up a whole shape. Sometimes the whole shape is made up of equal-size pieces. We learned that these equal-size pieces of a whole have special names.”
“Each of these shapes has pieces shaded. How would you name each one? Are there any pieces that you are not sure how to name? Explain.” (The first circle shows 2 halves because there are two equal pieces. The first hexagon has some pieces that are not thirds because each piece is a different size. I think the red trapezoid is half because you could use another trapezoid that's the same size to make the whole hexagon, but I'm not sure.)
Cool-down: Partition a Circle (5 minutes)
Student Section Summary
We have learned a lot about composing and decomposing shapes. Sometimes the pieces make up a whole shape, but all of the pieces are not the same size. Sometimes the whole is partitioned into equal pieces and they have special names. We practiced partitioning shapes into halves, thirds, and fourths. We learned that halves, thirds, and fourths of the same shape can look different. We learned that we can say the whole shape is 2 halves, 3 thirds, 4 fourths, or 4 quarters.
How can you use halves, thirds, fourths, or quarters to describe the pieces of these shapes? How can you use halves, thirds, fourths, or quarters to describe the whole shape? |
Rules For Triangle Congruency
Congruent triangles are triangles that have the same size and shape. This means that the corresponding sides are equal and the corresponding angles are equal.
We can tell whether two triangles are congruent without testing all the sides and all the angles of the two triangles. In this lesson, we will consider the four rules to prove triangle congruence. They are called the SSS rule, SAS rule, ASA rule and AAS rule. In another lesson, we will consider a proof used for right triangles called the Hypotenuse Leg rule. As long as one of the rules is true, it is sufficient to prove that the two triangles are congruent.
The following diagrams show the Rules for Triangle Congruency: SSS, SAS, ASA, AAS and RHS. Take note that SSA is not sufficient for Triangle Congruency. Scroll down the page for more examples, solutions and proofs.
What Are Congruent Line Segments
Congruent line segments are 1-dimensional geometrical figures having equal measures. The word “congruent” with respect to congruent lines in geometry is defined as the equality between the two line segments. Two lines are said to be congruent when they have the same length. Congruent segments are superimposable figures, which completely overlap when placed one over the other. On turning, flipping, or rotating the congruent segments, they still remain congruent. The symbol used to depict congruence between any two congruent line segments is ≅.
In brief, “congruent segments” is just another name for congruent line segments or congruent lines in geometry. All three terms are mathematically the same. Now, let us look into some examples of congruent line segments we find in mathematics.
Examples of congruent line segments:
- Sides of an equilateral triangle.
Let’s look into the diagram below showing congruent line segments.
Here line segment PQ ≅ XY, since the double vertical bars on each line segment, PQ and XY, depict their equality.
Congruent Meaning In Geometry
The word ‘congruent’ means ‘exactly equal’ in terms of shape and size. Even when we turn, flip, or rotate the shapes, they remain equal. For example, draw two circles of the same radius, then cut them out and place them on one another. We will notice that they will superimpose each other, that is, they will be placed completely over each other. This shows that the two circles are congruent. The following circles are said to be congruent since they have an equal radius, and they can be placed exactly over one another. The symbol that is used to show the congruence of figures is “≅”. Since circle A is congruent to circle B, we can express this fact as follows: Circle A ≅ Circle B.
Congruent Angle Sample Questions
Here are a few sample questions going over congruent angles.
Angles 1 and 2 are corresponding angles. If the measure of Angle 2 is 67°, what is the measure of Angle 1? (Corresponding angles are congruent, so Angle 1 also measures 67°.)
The city of Seattle is building a walking path that crosses over a pair of railroad tracks. The walking path is represented by the transversal t in the image below. The railroad tracks are represented by the parallel lines l and m. If the city wants to have the walking path cross the tracks at a 135° angle, what will the values of Angles 2, 3, and 4 be?
∠2 = 45°, ∠3 = 135°, ∠4 = 45°
∠2 = 45°, ∠3 = 45°, ∠4 = 135°
∠2 = 145°, ∠3 = 45°, ∠4 = 45°
∠2 = 180°, ∠3 = 45°, ∠4 = 45°
∠1 and ∠4 are congruent because they are vertical angles. If ∠1 equals 135°, then ∠2 must be equal to 45° because their sum needs to be 180° in order to form a straight line. Now that we know ∠2 equals 45°, we also know that ∠3 equals 45° because they are vertical angles. The correct choice is therefore ∠2 = 45°, ∠3 = 45°, ∠4 = 135°.
Kelcy has a rectangular garden that she wants to divide equally into two sections diagonally. One section will be for carrots and the other section will be for kale. She separates the garden into two triangular pieces similar to the image below. If the measure of ∠DCA is 40°, what is the measure of ∠CAB? (DC and AB are parallel sides of the rectangle cut by the diagonal AC, so ∠DCA and ∠CAB are alternate interior angles and ∠CAB = 40°.)
Definition Of Congruence In Analytic Geometry
In a Euclidean system, congruence is fundamental; it is the counterpart of equality for numbers. In analytic geometry, congruence may be defined intuitively thus: two mappings of figures onto one Cartesian coordinate system are congruent if and only if, for any two points in the first mapping, the Euclidean distance between them is equal to the Euclidean distance between the corresponding points in the second mapping.
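Read computationally, this definition yields a direct congruence test for two figures supplied as matched lists of points: check that every pairwise Euclidean distance in one figure equals the distance between the corresponding points in the other. A minimal sketch of that test (the function name and tolerance are my own, not from the source):

```python
import math

def congruent(points_a, points_b, tol=1e-9):
    """Congruence per the analytic definition: every pairwise Euclidean
    distance in one figure equals the corresponding distance in the other."""
    if len(points_a) != len(points_b):
        return False
    n = len(points_a)
    return all(
        abs(math.dist(points_a[i], points_a[j]) -
            math.dist(points_b[i], points_b[j])) <= tol
        for i in range(n) for j in range(i + 1, n)
    )

# A unit square, and the same square rotated 90 degrees and translated:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = [(5, 5), (5, 6), (4, 6), (4, 5)]
print(congruent(square, moved))  # True
```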
Congruent Triangle Theorem And Postulates
Two triangles are said to be congruent if they have the same shape and the same size. When triangles are congruent, corresponding sides and corresponding angles are congruent.
There are two theorems and three postulates that are used to identify congruent triangles.
As per this theorem (AAS), the two triangles are congruent if two angles and a side not between these two angles of one triangle are congruent to two corresponding angles and the corresponding side not between the angles of the other triangle.
If the hypotenuse and one of the legs of a right triangle are congruent to the hypotenuse and corresponding leg of the other right triangle, the two triangles are said to be congruent (the Hypotenuse Leg theorem).
If all three sides of a triangle are congruent to the corresponding three sides of the other triangle, then the two triangles are congruent (SSS).
According to this postulate (ASA), the two triangles are said to be congruent if two angles and the side between these two angles of one triangle are congruent to the corresponding angles and the included side of the other triangle.
If two sides and the included angle of one triangle are congruent to the corresponding two sides and the included angle of a second triangle, then the two triangles are congruent (SAS).
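As a concrete illustration of the SSS rule above, two triangles given by vertex coordinates can be compared by sorting each one's side lengths: if all three lengths agree, the triangles are congruent. A rough sketch (function names and tolerance are mine):

```python
import math

def side_lengths(tri):
    """Sorted lengths of the three sides of a triangle given as (x, y) vertices."""
    a, b, c = tri
    return sorted([math.dist(a, b), math.dist(b, c), math.dist(c, a)])

def congruent_sss(tri1, tri2, tol=1e-9):
    """SSS rule: congruent if the sorted side lengths all agree."""
    return all(abs(s1 - s2) <= tol
               for s1, s2 in zip(side_lengths(tri1), side_lengths(tri2)))

# A 3-4-5 right triangle and a reflected, shifted copy of it:
print(congruent_sss([(0, 0), (3, 0), (0, 4)],
                    [(10, 0), (7, 0), (10, 4)]))  # True
```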
Congruent Triangles On A Sphere
As with plane triangles, on a sphere two triangles sharing the same sequence of angle-side-angle are necessarily congruent. This can be seen as follows: one can situate one of the vertices with a given angle at the south pole and run the side with given length up the prime meridian. Knowing both angles at either end of the segment of fixed length ensures that the other two sides emanate with a uniquely determined trajectory, and thus will meet each other at a uniquely determined point; thus ASA is valid.
The congruence theorems side-angle-side and side-side-side also hold on a sphere; in addition, if two spherical triangles have an identical angle-angle-angle sequence, they are congruent.
The plane-triangle congruence theorem angle-angle-side does not hold for spherical triangles. As in plane geometry, side-side-angle does not imply congruence.
What Are The Properties Of Congruence
The properties of congruence are applicable to lines, angles, and figures. They can be listed as follows: Reflexive property, Symmetric property, and Transitive property.
- The reflexive property of congruence says that a line segment, an angle, or a shape is always congruent to itself. For example, PQ ≅ PQ.
- The symmetric property says that if one figure is congruent to another, then the second one is also congruent to the first. For any two angles P and Q, if ∠P ≅ ∠Q, then ∠Q ≅ ∠P.
- The transitive property of congruence states that if line 1 is congruent to line 2, and line 2 is congruent to line 3, then line 1 is also congruent to line 3.
Congruent Meaning In Maths
The meaning of congruent in Maths is addressed to those figures and shapes that can be repositioned or flipped to coincide with the other shapes. These shapes can be reflected to coincide with similar shapes.
Two shapes are congruent if they have the same shape and size. We can also say if two shapes are congruent, then the mirror image of one shape is the same as the other.
Congruent And Similar Triangles
In mathematics, we say that two objects are similar if they have the same shape, but not necessarily the same size. This means that we can obtain one figure from the other through a process of expansion or contraction, possibly followed by translation, rotation or reflection. If the objects also have the same size, they are congruent.
Three Ways To Prove Triangles Congruent
A video lesson on SAS, ASA and SSS.
Using Two Column Proofs To Prove Triangles Congruent
Triangle Congruence by SSS. How to prove triangles congruent using the Side Side Side Postulate? If three sides of one triangle are congruent to three sides of another triangle, then the two triangles are congruent.
Triangle Congruence by SAS. How to prove triangles congruent using the SAS Postulate? If two sides and the included angle of one triangle are congruent to two sides and the included angle of another triangle, then the two triangles are congruent.
Prove Triangle Congruence with ASA Postulate. How to prove triangles congruent using the Angle Side Angle Postulate? If two angles and the included side of one triangle are congruent to two angles and the included side of another triangle, then the two triangles are congruent.
Prove Triangle Congruence by AAS Postulate. How to prove triangles congruent using the Angle Angle Side Postulate? If two angles and a non-included side of one triangle are congruent to two angles and a non-included side of another triangle, then the two triangles are congruent.
Congruent Figures Lesson & Examples
- Introduction to identifying congruent figures
- Write a congruence statement for the pair of congruent figures
- 00:18:54 Write a congruence statement for the pair of congruent figures
- 00:27:30 Find x and y given pair of congruent quadrilaterals
- 00:31:04 Find x and y given pair of congruent triangles
- 00:33:43 Give the reason for each statement
- Practice Problems with Step-by-Step Solutions
- Chapter Tests with Video Solutions
Get access to all the courses and over 450 HD videos with your subscription
Monthly and Yearly Plans Available
How To Prove Two Line Segments Are Congruent
Given two line segments, the lengths can be measured using a ruler, which helps us compare their equality. If the lengths of two line segments are equal, they are known to be congruent. For example, the sides of an equilateral triangle are congruent as all three sides are of equal measure. When two congruent line segments are superimposed, the distance between them is zero.
When Is A Line Segment Congruent To Itself
When a line segment is compared to itself, the line segment is congruent to itself since they measure exactly the same. This is the reflexive property followed by a congruent line segment. For example, consider a line segment MN of length 7.5 cm. When this line segment is compared to itself, the two are congruent, i.e., MN ≅ MN.
What Are Congruent Lines And Angles
When two line segments exactly measure the same, they are known as congruent lines. For example, two line segments XY and AB have a length of 5 inches and are hence known as congruent lines. When two angles exactly measure the same, they are known as congruent angles. For example, the internal angles of a square are congruent as each angle measures 90º.
Congruent Definition In Geometry
The word “congruent” is an adjective, and it describes these two squares:
These are congruent squares; their corresponding parts are identical, so they have congruency. The word "congruency" is the noun for what these figures have. Congruent figures have congruency. Whether you have just two figures or a whole chessboard of congruent squares, they are all congruent.
Determining Congruence Of Polygons
For two polygons to be congruent, they must have an equal number of sides. Two polygons with n sides are congruent if and only if they each have numerically identical sequences side-angle-side-angle-… for n sides and n angles.
Congruence of polygons can be established graphically as follows:
- First, match and label the corresponding vertices of the two figures.
- Second, draw a vector from one of the vertices of one of the figures to the corresponding vertex of the other figure. Translate the first figure by this vector so that these two vertices match.
- Third, rotate the translated figure about the matched vertex until one pair of corresponding sides matches.
- Fourth, reflect the rotated figure about this matched side until the figures match.
If at any time the step cannot be completed, the polygons are not congruent.
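In code, this procedure amounts to searching for a vertex correspondence (a rotation of the vertex order, optionally reversed to account for the reflection step) under which the figures coincide. A small sketch of that search using pairwise distances; all names are my own and this is an illustration, not an established routine:

```python
import math

def _same_distances(a, b, tol=1e-9):
    # True if every pairwise distance in a matches the corresponding one in b.
    n = len(a)
    return all(abs(math.dist(a[i], a[j]) - math.dist(b[i], b[j])) <= tol
               for i in range(n) for j in range(i + 1, n))

def congruent_polygons(p1, p2, tol=1e-9):
    """Try each rotation of p2's vertex order, in both orientations;
    the polygons are congruent if some matching makes all distances agree."""
    n = len(p1)
    if len(p2) != n:
        return False
    for k in range(n):
        r = p2[k:] + p2[:k]
        if _same_distances(p1, r, tol) or _same_distances(p1, r[::-1], tol):
            return True
    return False

rect = [(0, 0), (4, 0), (4, 2), (0, 2)]
rot = [(1, 1), (1, 5), (-1, 5), (-1, 1)]  # the same rectangle, rotated and shifted
print(congruent_polygons(rect, rot))                               # True
print(congruent_polygons(rect, [(0, 0), (3, 0), (3, 3), (0, 3)]))  # False
```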
Construction Of Two Congruent Angles
Let’s learn the construction of two congruent angles step-wise.
Step 1- Draw two horizontal lines of any suitable length with the help of a pencil and a ruler or a straightedge.
Step 2- Take any arc on your compass, less than the length of the lines drawn in the first step, and keep the compass tip at the endpoint of the line. Draw the arc keeping the lines AB and PQ as the base without changing the width of the compass.
Step 3 – Keep the compass tip on point D and expand the legs of the compass to draw an arc of any suitable length. Draw that arc and repeat the same process with the same arc by keeping the compass tip on point S.
Step 4- Draw lines that will join AC and PR.
This is how we get two congruent angles in geometry: ∠CAB and ∠RPQ.
What Is Congruent Segment Midpoint
The congruent segment midpoint is defined as that point on a line segment that exactly divides the line segment into two parts of equal length and hence the two newly formed segments are congruent to each other. For example, for a line segment of length 10 cm, the midpoint will exactly be at 5 cm and the newly formed segments will be 5 cm each.
What Are Congruent Angles
In mathematics, the definition of congruent angles is “angles that are equal in measure are known as congruent angles”. In other words, equal angles are congruent angles. It is denoted by the symbol “≅”, so if we want to represent that ∠A is congruent to ∠X, we will write it as ∠A ≅ ∠X. Look at the congruent angles example given below.
In the above image, both the angles are equal in measurement. They can completely overlap each other. So, as per the definition, we can say that both the given angles are congruent angles.
Congruent And Similar Figures
There is a difference between congruent and similar figures. Congruent figures have the same corresponding side lengths and the corresponding angles are of equal measure. However, similar figures may have the same shape, but their size may not be the same.
For example, observe the following triangles which show the difference between congruent and similar figures. In the congruent figures, we can see that all the corresponding sides and angles are of equal measure. However, if we notice the similar figures, we see that the corresponding angles are of equal measure, but the sides are not of equal length.
Difference Between Congruent Figures And Similar Figures
The significant difference between congruent figures and similar figures is that:
|Congruent Figures|Similar Figures|
|---|---|
|In two congruent figures, both the corresponding angles and the lengths of the corresponding sides are equal to each other.|In two similar figures, the shapes look the same. This is because the corresponding angles are equal. However, the lengths of the corresponding sides are not equal to each other.|
As per the above diagram, congruent figures are represented by △ABC and △DEF, whereas similar figures are represented by △MNO and △XYZ.
In regards to Congruent Figures,
Side AC = DF, AB = DE and BC = EF,
∠A = ∠D, ∠B = ∠E and ∠C = ∠F.
Therefore, △ABC ≅ △DEF, as both the corresponding angles and the lengths of the corresponding sides are equal to each other.
Whereas, with respect to Similar Figures.
Only the angles are equal to each other: ∠M = ∠X, ∠N = ∠Y and ∠O = ∠Z.
The lengths of the corresponding sides are not equal to each other.
Hence, △MNO and △XYZ are similar to each other.
However, they are not congruent to each other.
1 Working With Graphs
2 Graphs In General: A graph is a visual representation of the relationship between two or more variables. We will deal with just two variables at a time.
3 Graphs In General: 1. Independent variable: This is the variable that influences the dependent variable. (X variable) 2. Dependent variable: Its value is determined by the independent variable. (Y variable)
4 Graphs In General: 3. We say that the dependent variable is a function of the independent variable: Y = f(X)
5 The Axis of a Graph: Dependent Variable (Y-axis); Independent Variable (X-axis)
6 Direct Relationships: A person's weight and height are often related. If we sample 1000 people and measure their weight and height, we would probably find that as weight increases so does height.
7 Direct Relationships: [Graph: height plotted against weight]
8 There is a direct relationship between height and weight. We have a direct relationship when: (1) the independent variable increases and the dependent variable increases; (2) the independent variable decreases and the dependent variable decreases.
9 Inverse Relationships: There is strong evidence indicating that as price rises for a specific commodity, the amount purchased decreases.
10 Inverse Relationships: [Graph: demand curve; price per unit on the vertical axis, quantity purchased per unit time on the horizontal axis]
11 Inverse Relationships: There is an inverse relationship between price per unit and the quantity purchased per unit of time. We have an inverse relationship when: (1) the independent variable increases and the dependent variable decreases; (2) the independent variable decreases and the dependent variable increases.
12 Complex Relationships: Evidence suggests that income from wages increases up to a certain age, and then decreases until death.
13 Complex Relationships: [Graph: income from wages ($) against age]
14 Complex Relationships: There is a direct relationship between wage income and age up to a certain point known as retirement; then an inverse relationship exists from retirement to the individual's expiration date.
15 Complex Relationships: [Graph: income from all sources ($) against age]
16 Complex Relationships: [Graph: income from all sources ($) against age]
17 Complex Relationships: [Graph: income from all sources ($) against age] What should the slope of this line be equal to at the minimum?
18 Social Security Issues: True/False: Social Security was NEVER intended to provide benefits sufficient to be the sole source of retirement income.
19 Social Security Issues: Current “Social Security” tax rate: 7.65 percent. OASDI TAX: Old Age, Survivors, and Disability Insurance. The 2000 rate of tax is 6.2 percent to a taxable wage limit of $76,200. The maximum tax an employee may pay is $4,501.20.
20 Social Security Issues: HI TAX: Federal Hospital Insurance. The 2000 rate of tax is 1.45 percent without a wage limit. The maximum tax is therefore unlimited.
21 Social Security Issues: Social Security covered 58 percent of the work force in 1940, and covered over 90 percent in 1990. Over this 50 yr. period, REAL benefits have increased and coverage has been extended to spouses, widows, and dependents.
22 Social Security Issues: Today, the elderly as a group have lower poverty rates than the general population, and about the same per capita income.
23 Social Security Issues: Greater than 90 percent of all persons 65 or older receive Social Security. On average, SS equals 38 percent of total income received by elderly households. For 25 percent of older households, SS equals 90 percent of family income.
24 Social Security Issues: For 15 percent of older households, SS equals 100 percent of family income. To maintain pre-retirement living standards, middle and upper income households must have additional income from employer pensions or private savings.
25 Historical Social Security Tax Rates: Before 1950, SS rate = 1.0 percent paid by both the employer and employee. This covered retirement only, no disability or Medicare. Maximum earnings taxed prior to 1950 = $3,000. Maximum tax paid prior to 1950 = $30 per employee, $30 per employer.
26 Historical Social Security Tax Rates: In 1970, the maximum retirement tax paid was $___, matched by employer. In 1990, SS retirement tax rate = 5.60 percent. Maximum earnings taxed in 1991 = $51,300. Maximum tax paid in 1991 = $2,872.80 per employee, matched by employer.
27 Historical Social Security Tax Rates: In 2000, SS retirement tax rate = 5.30 percent. Maximum earnings taxed in 2000 = $76,200. Maximum tax paid in 2000 = $4,038.60 per employee, matched by employer.
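As an aside, the slides' own figures translate into a simple employee-side calculation: 6.2 percent OASDI up to the wage base, plus 1.45 percent HI on all wages. (Note that 6.2 percent of the $76,200 base works out to $4,724.40, so the $4,501.20 maximum quoted earlier appears to reflect an earlier year's wage base.) A minimal sketch, with the function name and structure my own:

```python
def employee_social_security_tax(wages, oasdi_rate=0.062, wage_base=76_200,
                                 hi_rate=0.0145):
    """Employee share using the 2000 parameters quoted on these slides:
    OASDI applies only up to the wage base; HI applies to all wages."""
    return oasdi_rate * min(wages, wage_base) + hi_rate * wages

print(employee_social_security_tax(50_000))   # 3825.0
print(employee_social_security_tax(100_000))  # OASDI capped at the wage base
```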
28 Future of Social Security: Funds in the SS trust fund will peak in 2030 at $12 trillion (no cash, all gov. bonds!). I will be 73 years old. You will be ? years old.
29 Future of Social Security: After 2030, these trust fund assets decrease rapidly, and will be equal to zero in 2046 at the current SS tax rates. In 2046, I probably will be in a state of mind such that I won't care! You will be ? years old.
30 Future of Social Security: As the population of our country continues to age, the ratio of (workers / beneficiaries) will decrease. The W/B ratio is expected to remain stable between 1989 and 2010 but will then decrease.
31 Future of Social Security: [Table: worker-to-beneficiary (W/B) ratios by year; the values did not survive extraction.] "Baby Boomers" start hitting 65 in 2011.
32 Future of Social Security: Building up the trust fund NOW will reduce the expected tax burden on future workers (YOU) by making Baby Boomers (ME) pay higher taxes to partially finance their own retirement benefits.
33 Constructing A Graph: We start with a horizontal number line:
34 Constructing A Graph: 1. The points on the line divide the line into segments. 2. All the line segments are equally spaced. 3. Numbers associated with the points increase in value from left to right. 4. Use a distance, so many points, to represent a quantity.
36 Add a Vertical Number Line: 1. Construct a vertical number line. 2. Points divide the line into equal segments. 3. Numbers associated with points increase in value from the bottom to top. 4. The scale can be different from the horizontal number line.
37 Add a Vertical Number Line:
38 To Make A Graph: 1. The vertical and horizontal number lines must intersect at each other's zero point. 2. They must be perpendicular.
39 To Make A Graph: The vertical and horizontal number lines should look like the illustration below:
40 To Make A Graph: 3. Result: We get a set of coordinate axes, or a coordinate number system. e.g. Sighting in a rifle scope on the range.
[Graph: a three-shot group plotted against the X and Y axes] How would you call out the location of this three-shot group?
45 To Make A Graph: 4. With a graph, you need two numbers to specify a single point, OR, when you see a point on a graph, you know that point represents two numbers!
46 BASICS YOU NEED TO KNOW ABOUT GRAPHING AND THE COORDINATE NUMBER SYSTEM. Axis defined: The vertical number line is reserved for the dependent variable and is referred to as the Y AXIS. The horizontal number line is referred to as the X AXIS and is reserved for the independent variable.
47 The origin and points on the graph: The point of intersection of the two number lines is referred to as the ORIGIN.
48 [Graph: point A marked on the coordinate plane] Point A represents two numbers: a value for x and a value for y.
49 The origin and points on the graph: Every point on a graph represents a pair of observations of x and y: (x, y). In this class, y will often represent price and x will often represent quantity.
50 The Slope 1. Slope = change in Y values / change in X values = (y₁ - y₀) / (x₁ - x₀) = RISE / RUN
51 The Slope: [Graph: demand curve with price on the vertical axis and quantity demanded per unit time on the horizontal axis, marking the points (x₀, y₀) = (2, 8) and (x₁, y₁) = (3, 6)]
52 The Slope 2. As X goes from 2 to 3, Y goes from 8 to 6. 3. ΔY = RISE = (TO - FROM) = 6 - 8 = -2; ΔX = RUN = (TO - FROM) = 3 - 2 = 1
53 The Slope 4. SLOPE = ΔY / ΔX = -2 / 1 = -2. 5. The slope of a straight line is CONSTANT. Class Exercise
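The worked example above translates directly into a small computation (a sketch; the function name is mine):

```python
def slope(p0, p1):
    """Slope of the line through p0 = (x0, y0) and p1 = (x1, y1):
    rise over run, (y1 - y0) / (x1 - x0)."""
    (x0, y0), (x1, y1) = p0, p1
    return (y1 - y0) / (x1 - x0)

# The demand-curve example from the slides: from (2, 8) to (3, 6).
print(slope((2, 8), (3, 6)))  # -2.0
```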
26 pages of fun activities giving practice in maths for Easter!
Produce colourful maths-themed decorations for the classroom or to take home!
Easter Egg sequencing. Differentiated.
Complete sequences by drawing and colouring easter eggs
Easter Egg colour by numbers
Practice multiplication and addition and create colourful patterns
My Easter Basket
Practice in using money to purchase flowers and mini-eggs. Create an Easter basket to take home.
A card game giving practice in multiplying two or three numbers
Little Lost Lambs
A game based on “Battleships” to practice the use of coordinates
Easter Egg symmetry
Make attractive patterns while exploring reflective symmetry
Easter Bonnet Listening Exercise
Listen carefully to the instructions to produce a coloured poster.
Easter Egg Algebra
Match the eggs with the cups by performing simple algebra.
Problem solving sheets for KS2
Find the quickest routes for the rabbits to get the food.
Easter Math Activity: Easter CSI Math - Who Stole the Easter Bunnies Eggs? - Great for upper elementary and middle school students.
Students have to use their math skills to eliminate suspects so they can find out who committed the crime and stole all the Easter eggs!
Five clues are given to the students and each clue (worksheet) allows them to eliminate 1-2 suspects.
Clue 1/ Hidden multiplication message
Clue 2/ Wheeling away the eggs - Calculating Volume
Clue 3/ Chocolate Zapper Gun - Adding basic fractions
Clue 4/ Eating the eggs - Long Subtraction
Clue 5/ Travel Time: A time activity.
Pack also includes 3 extension activities for early finishers
- A writing activity - confessions of a thief (why I stole the eggs)
- A math brain problem
- A design your Easter Egg activity.
Answer sheets provided
Great math activity to do before Easter!
Easter Activity Pack - Literacy and Math Fun. Spring into spring with a 27 page packet. Includes an Easter writing prompt and coloring sheet, Easter writing paper, an Easter crossword, an Easter word search, an Easter acrostic poem sheet, an Easter poem writing sheet, Easter Syllables, Easter shopping math activities, Easter Fractions and Easter Decimals sheets, a Bunny Hop Search for Numbers divisible by 4, an Easter math puzzle, an Easter puppet template, an Easter bookmark, and Easter ABC order. Answer Keys included.
Lots of fun for April!- HappyEdugator
A set of 20 Bingo Boards featuring Roman Numerals together with calling cards. Fun way to help children reinforce how Roman Numerals are written. Great group activity, or can be used with whole class. If you find this useful please comment.
The Mathematics Medicine booklet consists of 8 pages of maths problems suitable for Year 3 children.
The booklet uses the idea of a weird ‘medicine’ recipe from Roald Dahl’s George’s Marvellous Medicine.
George’s teacher has designed his own weird medicine and the ingredients are hidden in the book of problems. The children are presented with a series of maths problems, each of which is based on a calculation skill linked to the National Curriculum. Go to mathsticks.com for more information. Please rate!
This fun game uses 'Top Trumps' style cards to enable children to practice addition and multiplication strategies while reinforcing correct mathematical vocabulary.
There are 36 cards; each featuring an image of a mathstick figure and two mathematical statements.
The first statment on each card is an addition and the other a multiplication.
While playing a Top Trumps type game, the children challenge each other as to whose card displays the higher value - product or sum!
However, the educational value does not end there... The statements offer four different mathematical elements (or categories), as well as the product or the sum, players can also use individual factors or addends.
This game is ideal for Key Stage 2 children who have a reasonable understanding of multiplication facts and addition strategies. The download features my teacher notes and a set of 36 printable cards.
Full details at www.mathsticks.com
This decimal bingo game gives children lots of practice adding decimals and doesn't feel like it relies too much on luck. The children can use a host of mental math strategies while playing; so it's not just a 'hunt and cover' game.
The pdf file includes two sets of 6 game boards. One set uses numbers to only one decimal place, the remaining set stretches to two decimal places. There are nine different 'target' cards displaying a 'Bingo number' of either 1, 1.5 or 2. Each child has their own 'Decimals bingo board' and a set of counters. Each 'Decimal board' displays a set of 14 decimal numbers. The game plays like normal bingo except that players cover two numbers on each go - the numbers which give a total of the bingo number drawn.
Full details (and illustrations) are on my www.mathsticks.com website.
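Since each turn asks players to cover two numbers that sum to the drawn target, a quick script can confirm that a board offers such a pair for every possible bingo number (a hypothetical helper of my own, not part of the published game; the sample board is made up):

```python
from itertools import combinations

def pairs_summing_to(board, target, tol=1e-9):
    """All pairs of board values whose sum hits the target (with a float tolerance)."""
    return [(a, b) for a, b in combinations(board, 2)
            if abs((a + b) - target) <= tol]

board = [0.2, 0.3, 0.5, 0.7, 0.8, 1.1, 1.2, 1.3]
for bingo in (1, 1.5, 2):
    print(bingo, pairs_summing_to(board, bingo))
```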
Lesson plan, teaching resources and differentiated work for 3 lessons
1) Partitioning numbers into tens and units (MA hundreds)
2) Adding multiples of 10.
3) Using partitioning strategy for addition.
Check out my other resources at - https://www.tes.com/teaching-resources/shop/jreadshaw
A game that can be played in pairs, threes or independently. Please note that this game focuses on times tables facts from 2 to 10 to ensure children are confident in these tables before progressing to the 11s and 12s, which will need to be taught subsequently.
Children need a 'Table Splat' mat and counters (different coloured counters for each child). Children take turns to turn over a card, revealing a times tables question. The child who is the first to place their counter on the correct answer (whilst shouting 'SPLAT!') can leave their counter on the board. The winner is the player with the most counters on the board once all of the times table cards have been used.
Differentiate by choosing the multiplication cards relevant for that child, depending on which times table they are currently working on.
*Update: 6s and 9s have been underlined, to avoid confusion.*
I couldn't find any materials on this when I needed to teach my year 3 class what the equals sign REALLY meant.
I have included the lesson plan I used, the 3 differentiated worksheets and a Powerpoint I made.
Also have included a homework set of resource sheets (could alternatively be used in class as a reinforcement at a later date.)
UPDATE: Have added a more challenging resource for year 3/4
A Japanese theme game. The children play the game which leads on to - Investigating the frequency score of 2 dice. (What are the best numbers to choose to help you win the game next time?) Lessons include find the odd one out starter activities, success criteria and worksheets. There are also questions to encourage the children’s mathematical thinking. My class find it fun to make up a quick Sumo dance… where they bow to each other before and after each game. |
These printable 1st grade math worksheets help students master basic math skills. The initial focus is on numbers and counting, followed by arithmetic and concepts related to fractions, time, money, measurement and geometry. Simple word problems review all these concepts, alongside word searches, crossword puzzles, and critical thinking.
Printable art worksheets for grade 1. This coloring math worksheet gives your child practice finding 1 more and 1 less than numbers up to 100. From global coloring pages to “how to draw” sheets, first grade arts worksheets make learning enjoyable. These can likewise be used to engage adults too.
Grade 1 worksheets: browse our free, printable worksheets for grade one.
Whole numbers, spelling of basic numbers up to 10 or 100 and first grade math operations, grade 1 addition and subtraction, place value, skip counting, introduction to division and multiplication, first grade geometry and basic shapes, easy picture graphs, and length, volume and mass measurement. These printable worksheets and activities are designed for teachers, parents, tutors, and homeschool families.
Grade 1 language arts worksheets: this page is filled with over 300,000 pages of free printable worksheets for 1st grade, including worksheets, games, and activities that make learning math, language arts, English, grammar, phonics, science, social studies, art, Bible, music, and more fun! Our 1st grade language arts worksheets encompass a vast array of language art concepts and assist students in learning how words are made and pronounced.
Print one of the art appreciation worksheets. Our grade 1 math worksheets cover the topics listed above; see more ideas about art lessons, art handouts, and art worksheets.
Some of the worksheets displayed are vocabulary, Trinity grade 1, English revision grade 1, first grade basic skills, multiple meaning words, term 1 work 1, ELPAC practice test grade 1, and big grammar book. 1 more or 1 less? 1st grade science worksheets and printables.
These worksheets introduce students to parts of speech, punctuation and related concepts which form the building blocks for writing proper sentences. What do students learn in grade 1 language arts?
Help introduce and enhance your child's essential reading, writing and learning skills. As students take on first grade arts worksheets, they expand their knowledge and practice fine motor skills.
These worksheets for grade 1 help your kid sharpen his skill in math using these free and printable 1st grade worksheets. And if you want your kid to practice his English skill, choose the English worksheets with grammar and antonym exercises. Spark your students' creativity with our selection of printable art worksheets!
It's never too early to learn about art styles such as realism, baroque, impressionism, and abstract. Plus see our history lesson plans, free math games, English worksheets, sight words activities, alphabet worksheets, and CVC word games for kids of all ages!
Each flower has a number on it. With activities to challenge and inspire children of all ages, these printable art worksheets help your students discover new talents in drawing, music, creative writing, and more. 1 more or 1 less?
We have over 1,000,000 pages of free pre-K worksheets, kindergarten worksheets, grade 1 worksheets, second grade worksheets and more for K-12. The bundle includes 25 printable art worksheets, but everyone who signs up for the weekly art break email newsletter, full of art inspiration, gets six free art appreciation worksheets.
These are to be used to encourage youngsters as well as to make your work much easier. Watch with joy as your students connect with and interpret art.
Free grade 1 math worksheets. Free printable grade 4 art worksheets are perfect educational resources for teachers and homeschooling parents. Identifying nouns, singular and plural nouns, proper and common nouns & possessive nouns.
Young learners will love tracing and coloring pictures.
Take the kids on a noun hunt with this grade 1 language arts worksheet and get them to read each sentence, identify the nouns or words that refer to things, people, animals, or places and underline them. Every parent wants to provide enrichment for their child's school experience, whether by helping with homework or by doing extracurricular activities around topics their children find interesting or difficult.
You can practice, check answers and upload your sheets for free using SchoolMyKids worksheets for kids.
Instill the love of art in your students.
100s of free language arts worksheets too! The first grade language arts worksheets will prepare students to categorize words and understand why. Free printable first grade worksheets help younger kids learn and practice their concepts related to maths, science, language, social studies, English and art.
Sino-Burmese War (1765–69)
|Sino-Burmese War (1765–1769)|
|Part of the Ten Great Campaigns|
[Map: Burma and China prior to the war (1765)]
|Belligerents|
|Qing Empire|Konbaung Dynasty|
|Commanders and leaders|
|Agui|Pierre de Milard|
|Units involved|
|Eight Banners Army; Tai-Shan militias|Royal Burmese Army; Bamar and Shan levies|
|Strength|
|First invasion: 5,000 foot, 1,000 horse[note 1]|Total: unknown|
|Casualties and losses|
|70,000+|Unknown, but significantly less|
The Sino-Burmese War (Chinese: 中緬戰爭 or 清緬戰爭; Burmese: တရုတ်-မြန်မာ စစ် (၁၇၆၅–၆၉)), also known as the Qing invasions of Burma or the Myanmar campaign of the Qing dynasty, was a war fought between the Qing dynasty of China and the Konbaung dynasty of Burma (Myanmar). China under the Qianlong Emperor launched four invasions of Burma between 1765 and 1769, which are counted among his Ten Great Campaigns. Nonetheless, the war, which claimed the lives of over 70,000 Chinese soldiers and four commanders, is sometimes described as "the most disastrous frontier war that the Qing dynasty had ever waged", and one that "assured Burmese independence". Burma's successful defense laid the foundation for the present-day boundary between the two countries.
At first, the Emperor envisaged an easy war, and sent in only the Green Standard troops stationed in Yunnan. The Qing invasion came as the majority of Burmese forces were deployed in their latest invasion of Siam. Nonetheless, battle-hardened Burmese troops defeated the first two invasions of 1765–1766 and 1766–1767 at the border. The regional conflict now escalated to a major war that involved military maneuvers nationwide in both countries. The third invasion (1767–1768), led by the elite Manchu Bannermen, nearly succeeded, penetrating deep into central Burma within a few days' march from the capital, Ava (Inwa). But the bannermen of northern China could not cope with the unfamiliar tropical terrain and lethal endemic diseases, and were driven back with heavy losses. After this close call, King Hsinbyushin redeployed his armies from Siam to the Chinese front. The fourth and largest invasion got bogged down at the frontier. With the Qing forces completely encircled, a truce was reached between the field commanders of the two sides in December 1769.
The Qing maintained a heavy military presence in the border areas of Yunnan for about a decade in an attempt to wage another war, while imposing a ban on cross-border trade for two decades. The Burmese, too, were preoccupied with the Chinese threat, and kept a series of garrisons along the border. Twenty years later, when Burma and China resumed diplomatic relations in 1790, the Qing unilaterally viewed the act as Burmese submission, and claimed victory. Ultimately the main beneficiaries of this war were the Siamese, who reclaimed most of their territories in the next three years after having lost their capital Ayutthaya to the Burmese in 1767.
- 1 Background
- 2 First invasion (1765–1766)
- 3 Second invasion (1766–1767)
- 4 Third invasion (1767–1768)
- 5 Fourth invasion (1769)
- 6 Aftermath
- 7 Significance
- 8 See also
- 9 Notes
- 10 References
The long border between Burma and China had long been vaguely defined. The Ming dynasty first conquered the Yunnan borderlands between 1380 and 1388, and stamped out local resistance by the mid-1440s. Burmese control of the Shan States (which covered the present-day Kachin State, Shan State and Kayah State) came in 1557 when King Bayinnaung of the Toungoo dynasty conquered all of the Shan States. The border was never demarcated in the modern sense, with local Shan sawbwas (chiefs) in the border regions paying tribute to both sides. The situation turned in China's favor in the 1730s, when the Qing decided to impose tighter control of Yunnan's border regions while Burmese authority largely dissipated with the rapid decline of the Toungoo dynasty.
Qing consolidation of borderlands (1730s)
The Qing attempts at tighter control of the border were initially met with fierce resistance from the local chiefs. In 1732, the Yunnan government's demand for higher taxes led to several Shan revolts at the border. Shan resistance leaders united people by saying: "The lands and water are our properties. We could plow ourselves and eat our own produce. There is no need to pay tribute to a foreign government". In July 1732, a Shan army, consisting mostly of native mountaineers, laid siege to the Qing garrison at Pu'er for ninety days. The Yunnan government responded with an overwhelming force numbering around 5,000 and lifted the siege. The Qing army pursued further west but could not put down persistent local resistance. Finally, the Qing field commanders changed their tactics by allying with neutral sawbwas, granting them Qing titles and powers, including Green Standard captainships and regional commanderships. To seal the agreements, the third-ranking officer of Yunnan traveled to Simao personally and held a ceremony of allegiance. By the mid-1730s, the border sawbwas who used to pay dual tributes were increasingly siding with the more powerful Qing. By 1735, the year in which the Qianlong Emperor ascended the Chinese throne, ten sawbwas had sided with the Qing. The annexed border states ranged from Mogaung and Bhamo in present-day Kachin State to Hsenwi State (Theinni) and Kengtung State (Kyaingtong) in present-day Shan State to Sipsongpanna (Kyaingyun) in present-day Xishuangbanna Dai Autonomous Prefecture, Yunnan.
While the Qing were consolidating their hold at the border, the Toungoo dynasty was faced with multiple external raids and internal rebellions and could not take any reciprocal action. Throughout the 1730s, the dynasty faced Meitei raids that reached increasingly deeper parts of Upper Burma. In 1740, the Mon of Lower Burma revolted and founded the Restored Hanthawaddy Kingdom. By the mid-1740s, the authority of the Burmese king had largely dissipated. In 1752, the Toungoo dynasty was toppled by the forces of Restored Hanthawaddy which captured Ava.
By then, the Qing control of the former borderlands was unquestioned. In 1752, the Emperor issued a manuscript, the Qing Imperial Illustration of Tributaries, directing that all "barbarian" tribes under his rule be studied and their natures and cultures reported back to Beijing.
Burmese reassertion (1750s–1760s)
In 1752, a new dynasty called Konbaung rose to challenge Restored Hanthawaddy, and went on to reunite much of the kingdom by 1758. In 1758–59, King Alaungpaya, the founder of the dynasty, sent an expedition to the farther Shan States (present-day Kachin State and northern and eastern Shan State), which had been annexed by the Qing over two decades earlier, to reestablish Burmese authority. (Nearer Shan States had been reacquired since 1754). Three of the ten farther Shan state sawbwas (Mogaung, Bhamo, Hsenwi) and their militias reportedly ran away into Yunnan and tried to persuade Qing officials to invade Burma. The nephew of Kengtung sawbwa and his followers also fled.
The Yunnan government reported the news to the Emperor in 1759, and the Qing court promptly issued an imperial edict ordering reconquest. At first, the Yunnan officials, who believed that "barbarians must be conquered using barbarians", tried to resolve the matter by supporting the defected sawbwas. But the strategy did not work. In 1764, a Burmese army on its way to Siam tightened the Burmese grip on the borderlands, and the sawbwas complained to China. In response, the Emperor appointed Liu Zao, a respected scholarly minister from the capital, to sort matters out. At Kunming, Liu assessed that the use of Tai-Shan militias alone was not working, and that he needed to commit regular Green Standard Army troops.
First invasion (1765–1766)
In early 1765, a 20,000-strong Burmese army stationed at Kengtung, led by Gen. Ne Myo Thihapate, left Kengtung for yet another Burmese invasion of Siam. With the main Burmese army gone, Liu used a few minor trade disputes between local Chinese and Burmese merchants as the excuse to order an invasion of Kengtung in December 1765. The invasion force, which consisted of 3,500 Green Standard troops along with Tai-Shan militias, laid siege to Kengtung but could not match the battle-hardened Burmese troops of the Kengtung garrison, led by Gen. Ne Myo Sithu. The Burmese broke the siege, pursued the invaders into Pu'er Prefecture, and defeated them there. Ne Myo Sithu left a reinforced garrison, and returned to Ava in April 1766.
Governor Liu, in his embarrassment, first tried to conceal what had happened. When the emperor became suspicious, he ordered Liu's immediate recall and demotion. Instead of complying, Liu committed suicide by slicing his throat with a stationery knife, writing as blood was pouring from his neck: "There is no way to pay back the emperor's favor. I deserve death with my crime". While this kind of suicide in the face of bureaucratic failure apparently was not unusual in Qing China, it reportedly enraged the Emperor nonetheless. Sorting out the Mien (the Chinese word for "Burmese") was now a matter of imperial prestige. The Emperor now appointed Yang Yingju, an experienced frontier officer with long service in Xinjiang and Guangzhou.
Second invasion (1766–1767)
Yang arrived in the summer of 1766 to take command. Unlike Liu's invasion of Kengtung, located far away from the Burmese heartland, Yang was determined to strike Upper Burma directly. He reportedly planned to place a Qing claimant on the Burmese throne. Yang's planned path of invasion was via Bhamo and down the Irrawaddy River to Ava. The Burmese knew the route of invasion in advance, and were prepared. Hsinbyushin's plan was to lure the Chinese into Burmese territory, and then surround them. The Burmese commander in the field Balamindin was ordered to give up Bhamo, and instead stay at the Burmese stockade at Kaungton, a few miles south of Bhamo on the Irrawaddy. The Kaungton fort had been especially equipped with the cannon corps led by the French gunners (captured at the battle of Thanlyin in 1756.) To reinforce them, another army led by Maha Thiha Thura and posted at the easternmost Burmese garrison at Kenghung (present-day Jinghong, Yunnan), was ordered to march to the Bhamo theater across the northern Shan states.
Trap at Bhamo–Kaungton
As planned, the Qing troops easily captured Bhamo in December 1766, and established a supply base. The Chinese then proceeded to lay siege to the Burmese garrison at Kaungton. But Balamindin's defenses held off repeated Chinese assaults. Meanwhile, two Burmese armies, one led by Maha Sithu, and another led by Ne Myo Sithu, surrounded the Chinese. Maha Thiha Thura's army also arrived and took position near Bhamo to block the escape route back to Yunnan.
The impasse did not favor the Chinese troops who were utterly unprepared to fight in the tropical weather of Upper Burma. Thousands of Chinese soldiers reportedly were struck down by cholera, dysentery and malaria. One Qing report stated that 800 out of 1000 soldiers in one garrison had died of disease, and that another hundred were ill.
With the Chinese army greatly weakened, the Burmese then launched their offensive. First, Ne Myo Sithu easily retook the lightly held Bhamo. The main Chinese army was now totally holed up in the Kaungton-Bhamo corridor, cut off from all supplies. The Burmese then proceeded to attack the main Chinese army from two sides— Balamindin's army out of Kaungton fortress, and Ne Myo Sithu's army from the north. The Chinese retreated eastwards and then northwards where another Burmese army led by Maha Thiha Thura was waiting. The two other Burmese armies also followed up, and the Chinese army was destroyed entirely. Maha Sithu's army which had been guarding the western flank of the Irrawaddy, then marched north of Myitkyina and defeated other lightly held Chinese garrisons at the border. The Burmese armies proceeded to occupy eight Chinese Shan States within Yunnan.
Victorious Burmese armies returned to Ava with the captured guns, muskets and prisoners in early May. At Kunming, Yang began resorting to lies. He reported that Bhamo had been occupied; that its inhabitants had begun wearing Manchu-style pigtails; and that the Burmese commander, Ne Myo Sithu, after losing 10,000 men had sued for peace. He recommended that the emperor graciously accept the peace offer to restore the normal trade relations between the two countries. The Qianlong Emperor however realized the falsity of the report, and ordered Yang back to Beijing. On his arrival, Yang committed suicide at the order of the Emperor.
Third invasion (1767–1768)
After the two defeats, the emperor and his court could not comprehend how a relatively small country like Burma could resist the might of the Qing. For the Emperor, it was time for the Manchus themselves to come into the picture. He had always doubted the battle-worthiness of his Chinese Green Standard armies. The Manchus saw themselves as a warlike and conquering race and the Chinese as an occupied people. He commissioned a study of the first two invasions, and the report reinforced his biases—that the low battle-worthiness of the Green Standard armies was the reason for the failures.
In 1767, the Emperor appointed the veteran Manchu commander Ming Rui, a son-in-law of his, as governor-general of Yunnan and Guizhou, and head of the Burma campaign. Ming Rui had seen battle against the Turks in the northwest and was in command of the strategically key post of Ili (in present-day Xinjiang). His appointment meant that this was no longer a border dispute but a full-fledged war. Ming Rui arrived in Yunnan in April. An invasion force consisting of Mongol and elite Manchu troops rushed down from northern China and Manchuria. Thousands of Green Standards from Yunnan and Tai-Shan militias accompanied this force. Provinces throughout China were mobilized to provide supplies. The total strength of the invasion force was 50,000 men, the vast majority being infantry. The mountains and thick jungles of Burma kept the use of cavalry forces to a minimum. The Qing court now seriously considered the threat of illnesses among its troops; as a precaution, the campaign was planned for the winter months when diseases were believed to be less prevalent.
The Burmese now faced the largest Chinese army yet mobilized against them. Yet King Hsinbyushin did not seem to realize the gravity of the situation. Throughout the first two invasions, he had steadfastly refused to recall the main Burmese armies, which had been battling in Laos and Siam since January 1765, and laying siege to the Siamese capital of Ayutthaya since January 1766. Throughout 1767, when the Chinese were mobilizing for their most serious invasion yet, the Burmese were still focused on defeating the Siamese. Even after the Siamese capital was finally captured in April 1767, Hsinbyushin kept part of the troops in Siam during the rainy season months in order to mop up the remaining Siamese resistance during the winter months later that year. He actually allowed many Shan and Laotian battalions to demobilize at the start of the rainy season.
As a result, when the invasion did come in November 1767, the Burmese defenses had not been upgraded to meet a much larger and more determined foe. The Burmese command looked much like that of the second invasion: Hsinbyushin again assigned the same commanders to face the Chinese. Maha Sithu led the main Burmese army, and was the overall commander of the Chinese theater, with Maha Thiha Thura and Ne Myo Sithu commanding two other Burmese armies. Balamindin again commanded the Kaungton fort. (Given that the main Burmese army was only about 7000 strong, the entire Burmese defense at the start of the third invasion was most likely no more than 20,000.)
Ming Rui planned a two-pronged invasion as soon as the rainy season ended. The main Chinese army, led by Ming Rui himself, was to approach Ava through Hsenwi, Lashio and Hsipaw, and down the Namtu river. (The main invasion route was the same route followed by the Manchu forces a century earlier, chasing the Yongli Emperor of the Southern Ming dynasty.) The second army, led by Gen. E'erdeng'e, was to try the Bhamo route again. The ultimate objective was for both armies to clamp themselves in a pincer action on the Burmese capital of Ava. The Burmese plan was to hold the second Chinese army in the north at Kaungton with the army led by Ne Myo Sithu, and meet the main Chinese army in the northeast with two armies led by Maha Sithu and Maha Thiha Thura.
At first, everything went according to plan for the Qing. The third invasion began in November 1767 as the smaller Chinese army attacked and occupied Bhamo. Within eight days, Ming Rui's main army occupied the Shan states of Hsenwi and Hsipaw. Ming Rui made Hsenwi a supply base, and assigned 5000 troops to remain at Hsenwi and guard the rear. He then led a 15,000-strong army in the direction of Ava. In late December, at the Goteik Gorge (south of Hsipaw), the two main armies faced off and the first major battle of the third invasion ensued. Outnumbered two-to-one, Maha Sithu's main Burmese army was thoroughly routed by Ming Rui's Bannermen. Maha Thiha Thura too was repulsed at Hsenwi. The news of the disaster at Goteik reached Ava. Hsinbyushin finally realized the gravity of the situation, and urgently recalled Burmese armies from Siam.
Having smashed through the main Burmese army, Ming Rui pressed on full steam ahead, overrunning one town after another, and reached Singu on the Irrawaddy, 30 miles north of Ava at the beginning of 1768. The only bright spot for the Burmese was that the northern invasion force, which was to come down the Irrawaddy to join up with Ming Rui's main army, had been held off at Kaungton.
At Ava, Hsinbyushin famously did not panic at the prospect of a large Chinese army (about 30,000) at the doorstep. The court urged the king to flee but he scornfully refused, saying he and his brother princes, sons of Alaungpaya, would fight the Chinese single-handed if they had to. Instead of defending the capital, Hsinbyushin calmly sent an army to take up position outside Singu, personally leading his men toward the front line.
It turned out that Ming Rui had overstretched himself, and was in no position to proceed any farther. He was now too far away from his main supply base at Hsenwi, hundreds of miles away in the northern Shan Hills. The Burmese guerrilla attacks on the long supply lines across the jungles of the Shan Hills were seriously hampering the Qing army's ability to proceed. (Burmese guerrilla operations were directed by Gen. Teingya Minkhaung, a deputy of Maha Thiha Thura). Ming Rui now resorted to defensive tactics, playing for time to enable the northern army to come to his relief. But it was not to be. The northern army had suffered heavy casualties in their repeated attacks against the Kaungton fort. Its commander, against the express orders of Ming Rui, retreated back to Yunnan. (The commander was later publicly shamed and executed on the orders of the Emperor.)
The situation turned worse for Ming Rui. By early 1768, battle-hardened Burmese reinforcements from Siam had begun to arrive back. Bolstered by the reinforcements, two Burmese armies led by Maha Thiha Thura and Ne Myo Sithu succeeded in retaking Hsenwi. The Qing commander at Hsenwi committed suicide. The main Qing army was now cut off from all supplies. It was now March 1768. Thousands of Bannermen from the freezing grasslands along the Russian border began dying of malaria, as well as from Burmese attacks, in the furnace-like hot weather of central Burma. Ming Rui gave up all hope of proceeding toward Ava, and instead tried to make it back to Yunnan with as many of his soldiers as possible.
Battle of Maymyo
In March 1768, Ming Rui began his retreat, pursued by a Burmese army of 10,000 men and 2000 cavalry. The Burmese then tried to encircle the Chinese by splitting the army into two. Maha Thiha Thura had now assumed the overall command, replacing Maha Sithu. The smaller army, led by Maha Sithu, continued to pursue Ming Rui while the larger army led by Maha Thiha Thura advanced through the mountainous route to emerge directly behind the Chinese. Through careful maneuvering, the Burmese managed to achieve complete encirclement of the Chinese at modern-day Pyinoolwin (Maymyo), about 50 miles northeast of Ava. Over the course of three days of bloody fighting, the Bannerman army was completely annihilated. The slaughter was such that the Burmese could hardly grip their swords as the hilts were slippery with enemy blood. Of the original 30,000 men of the main army, only 2500 remained alive and were captured. The rest had been killed either on the battlefield, through disease or through execution after their surrender. Ming Rui himself was severely wounded in battle. Only a small group managed to break through and escape the carnage. Ming Rui himself could have escaped with that group. Instead, he cut off his queue and had it carried to the emperor by those who were escaping, as a token of his loyalty. He then hanged himself on a tree. In the end, only a few dozen of the main army returned.
Fourth invasion (1769)
The Qianlong Emperor had sent Ming Rui and his Bannermen assuming an easy victory. He had begun making plans about how he would administer his newest territory. For weeks, the Qing court had heard nothing, and then the news finally came. The Emperor was shocked and ordered an immediate halt to all military actions until he could decide what to do next. Generals returning from the front line cautioned that there was no way Burma could be conquered. But there was no real choice but to press on. Imperial prestige was at stake.
The Emperor turned to one of his most trusted advisers, the chief grand councilor Fuheng, Ming Rui's uncle. Back in the 1750s, Fuheng had been one of the few senior officials who had fully backed the Emperor's decision to eliminate the Dzungars at a time when most believed that war was too risky. On 14 April 1768, the imperial court announced the death of Ming Rui and the appointment of Fuheng as the new chief commander of the Burma campaign. The Manchu generals Agui, Aligun and Suhede were appointed as his deputies. Now, the top rung of the Qing military establishment prepared for a final showdown with the Burmese.
Before any fighting resumed, some on the Chinese side sent out peace feelers to the court of Ava. The Burmese also sent signals that they would like to give diplomacy a chance, given their preoccupations in Siam. But the emperor, with Fuheng's encouragement, made it clear that no compromise with the Burmese could be made. The dignity of the state demanded a full surrender. His aim was to establish direct Qing rule over all Burmese possessions. Emissaries were sent to Siam and Laotian states informing them of the Chinese ambition and seeking an alliance.
Ava now fully expected another major invasion. Hsinbyushin had now brought most of the troops back from Siam to face the Chinese. With the Burmese fully preoccupied with the Chinese threat, the Siamese resistance retook Ayutthaya in 1768 and went on to reconquer all of their territories throughout 1768 and 1769. For the Burmese, their hard-fought gains of the prior three years (1765–1767) in Siam had gone to waste but there was little they could do. The survival of their kingdom was now at stake.
Chinese battle plan
Fuheng arrived in Yunnan in April 1769 to take command of a 60,000-strong force. He studied past Ming and Mongol expeditions to form his battle plan, which called for a three-pronged invasion via Bhamo and the Irrawaddy river. The first army would attack Bhamo and Kaungton head-on, which he knew would be difficult. But two other larger armies would bypass Kaungton and march down the Irrawaddy, one on each bank of the river, to Ava. The twin invading armies on each side of the river would be accompanied by war boats manned by thousands of sailors from the Fujian Navy. To avoid a repeat of Ming Rui's mistake, he was determined to guard his supply and communication lines, and advance at a sustainable pace. He avoided an invasion route through the jungles of the Shan Hills so as to minimize the Burmese guerrilla attacks on his supply lines. He also brought in a full regiment of carpenters who would build fortresses and boats along the invasion route.
Burmese battle plan
For the Burmese, the overall objective was to stop the enemy at the border, and prevent another Chinese penetration into their heartland. Maha Thiha Thura was the overall commander, the role which he had assumed since the second half of the third invasion. As usual, Balamindin commanded the Kaungton fort. In the last week of September, three Burmese armies were dispatched to meet the three Chinese armies head-on. A fourth army was organized with the sole purpose of cutting the enemy supply lines. Hsinbyushin had also organized a flotilla of war boats to meet the Chinese war boats. The Burmese defenses now included French musketeers and gunners under the command of Pierre de Milard, governor of Tabe, who had arrived back from the Siamese theater. Based on their troop movements, the Burmese knew at least the general direction from where the massive invasion force would come. Maha Thiha Thura moved upriver by boat toward Bhamo.
As the Burmese armies marched north, Fuheng, against the advice of his officers, decided not to wait until the end of the monsoon season. It clearly was a calculated gamble; he had wanted to strike before the Burmese arrived but he had also hoped that "miasma would not be everywhere." So in October 1769, towards the end of (but still during) the monsoon season, Fuheng launched the largest invasion yet. The three Chinese armies jointly attacked and captured Bhamo. They proceeded south and built a massive fortress near Shwenyaungbin village, 12 miles east of the Burmese fortress at Kaungton. As planned, the carpenters duly built hundreds of war boats to sail down the Irrawaddy.
But almost nothing went according to plan. One army did cross over to the western bank of the Irrawaddy, as planned. But the commander of that army did not want to march far away from the base. When the Burmese army assigned to guard the west bank approached, the Chinese retreated back to the east bank. Likewise, the army assigned to march down the eastern bank also did not proceed. This left the Chinese flotilla exposed. The Burmese flotilla came up the river and attacked and sank all the Chinese boats. The Chinese armies now converged on attacking Kaungton. But for four consecutive weeks, the Burmese put up a remarkable defense, withstanding gallant charges by the Bannermen to scale the walls.
A little over a month into the invasion, the entire Qing invasion force was bogged down at the border. Predictably, many Chinese soldiers and sailors fell ill, and began to die in large numbers. Fuheng himself was struck down by fever. More ominously for the Chinese, the Burmese army sent to cut the enemy line of communication also achieved its purpose, and closed in on the Chinese armies from the rear. By early December, the Chinese forces were completely encircled. The Burmese armies then attacked the Chinese fort at Shwenyaungbin, which fell after a fierce battle. The fleeing Chinese troops fell back into the pocket near Kaungton where other Chinese forces were stationed. The Chinese armies were now trapped inside the corridor between the Shwenyaungbin and Kaungton forts, completely surrounded by rings of Burmese forces.
The Chinese command, which had already lost 20,000 men, and a quantity of arms and ammunition, now asked for terms. The Burmese staff were averse to granting terms, saying that the Chinese were surrounded like cattle in a pen, they were starving, and in a few days they could be wiped out to a man. But Maha Thiha Thura, who had overseen the annihilation of Ming Rui's army at the battle of Maymyo in 1768, realized that another wipe-out would merely stiffen the resolve of the Chinese government.
Maha Thiha Thura is said to have declared:
- Comrades, unless we make peace, yet another invasion will come. And when we have defeated it, yet another will come. Our nation cannot go on just repelling invasion after invasion of the Chinese for we have other things to do. Let us stop the slaughter, and let their people and our people live in peace.
He pointed out to his commanders that war with the Chinese was quickly becoming a cancer that would finally destroy the nation. Compared to Chinese losses, Burmese losses were light but considered in proportion to the population, they were heavy. The commanders were not convinced but Maha Thiha Thura, on his own responsibility, and without informing the king, demanded that the Chinese agree to the following terms:
- The Chinese would surrender all the sawbwas and other rebels and fugitives from Burmese justice who had taken shelter in Chinese territory;
- The Chinese would undertake to respect Burmese sovereignty over those Shan states that had been historically part of Burma;
- All prisoners of war would be released;
- The emperor of China and the king of Burma would resume friendly relations, regularly exchanging embassies bearing letters of good will and presents.
The Chinese commanders decided to agree to the terms. At Kaungton, on 13 December 1769 (or 22 December 1769), under a 7-roofed pyathat hall, 14 Burmese and 13 Chinese officers signed a peace treaty. The Chinese burned their boats and melted down their cannon. Two days later, as the Burmese stood to arms and looked down, starved Chinese soldiers marched sullenly away up the Taiping valley; they began to perish of hunger by the thousands in the passes.
At Beijing, the Qianlong Emperor was not pleased with the treaty. He did not accept the Chinese commanders' explanation that the fourth stipulation—exchange of embassies bearing presents—amounted to Burmese submission and tribute. He did not permit the surrender of the sawbwas or other fugitives nor the resumption of trade between the two countries.
At Ava, Hsinbyushin was furious that his generals had acted without his knowledge, and tore up his copy of the treaty. Knowing that the king was angry, the Burmese armies were afraid to return to the capital. In January 1770, they marched to Manipur where a rebellion had begun, taking advantage of Burmese troubles with the Chinese. After a three-day battle near Langthabal, the Meiteis were defeated, and their raja fled to Assam. The Burmese raised their nominee to the throne, and returned. The king's anger had subsided; after all, they had won victories and preserved his throne. Still, the king sent a woman's dress to Maha Thiha Thura, the decorated general whose daughter was married to Hsinbyushin's son and heir-apparent Singu, and exiled him and the other generals to the Shan states. He would not allow them to see him. He also exiled ministers who dared to speak on their behalf.
Although hostilities ceased, an uneasy truce ensued. Neither side honored the terms of the treaty. Because the Chinese did not return the sawbwas, the Burmese did not return the 2500 Chinese prisoners of war, who were resettled. The Qing had lost some of the generation's most important frontier experts, including Yang Yingju, Ming Rui, Aligun, and Fuheng (who eventually died of malaria in 1770). The war cost the Qing treasury 9.8 million silver taels. Nonetheless, the Emperor kept a heavy military lineup in the border areas of Yunnan for about one decade in an attempt to wage another war while imposing a ban on inter-border trade for two decades.
The Burmese for years were preoccupied with another impending invasion by the Chinese, and kept a series of garrisons along the border. The high casualties of the war (in terms of the population size) and the ongoing need to guard the northern border seriously hampered the Burmese military's capability to renew warfare in Siam. It would be another five years before the Burmese sent another invasion force to Siam.
It would be another twenty years before Burma and China resumed a diplomatic relationship, in 1790. The resumption was brokered by the Tai-Shan nobles and Yunnan officials who wanted to see trade resume. To the Burmese, then under King Bodawpaya, the resumption was on equal terms, and they considered the exchange of presents as part of diplomatic etiquette, not as tribute. To the Chinese, however, all of these diplomatic missions were considered tributary missions. The Emperor viewed the resumption of relations as Burmese submission, and unilaterally claimed victory and included the Burma campaign in his list of Ten Great Campaigns.
Burma's successful defense laid the foundation for the present-day boundary between the two countries. The border still was not demarcated, and the borderlands were still overlapping spheres of influence. After the war, Burma remained in possession of Koshanpye, the nine states above Bhamo. At least down to the eve of the First Anglo-Burmese War in 1824, the Burmese exerted authority over the southern Yunnan borderlands, as far as Kenghung (present-day Jinghong, Yunnan). Likewise, the Chinese exercised a degree of control over the borderlands, including present-day northeastern Kachin State. Overall, the Burmese were able to push back the line of control up to one that existed before the Qing consolidation drive of the 1730s.
However, the war also forced the Burmese to withdraw from Siam. Their victory over the Qing is described as a moral victory. Historian G.E. Harvey writes: "Their other victories were over states on their own level such as Siam; this was won over an empire. Alaungpaya's crusade against the Mons was stained with treachery; the great siege of Ayuthaya was a magnificent dacoity", though he described the Sino-Burmese war as "a righteous war of defense against the invader".
The main beneficiaries of the war were the Siamese, who took full advantage of the Burmese absence to reclaim their lost territories and independence. By 1770, they had reconquered most of the pre-1765 territories. Only Tenasserim remained in Burmese hands. Preoccupied by the Chinese threat, and recovering from the depletion of manpower from the war, Hsinbyushin left Siam alone even as Siam continued to consolidate its gains. (He was finally forced to send Burmese armies to Siam in 1775 in response to a Siamese-backed rebellion in Lan Na a year earlier). In the following decades, Siam would become a power in its own right, swallowing up Lan Na, Laotian states, and parts of Cambodia.
From a wider geopolitical standpoint, the Qing, and the Qianlong Emperor, who hitherto had never faced defeat, now had to accept, albeit grudgingly, that there were limits to Qing power. Marvin Whiting, a historian of Chinese military history, writes that the Burmese success probably saved the independence of other states in Southeast Asia.
For the Qing, the war highlighted limits to their military power. The Emperor blamed the low battle-worthiness of his Green Standard armies for the first two failed invasions. But he was to concede later that his Manchu Bannermen too were less suited to fighting in Burma than in Xinjiang. Despite sending in 50,000 and 60,000 troops in the last two invasions, the Qing command lacked up-to-date information about invasion routes, and had to consult centuries-old maps to form their battle plan. This unfamiliarity exposed their supply and communication lines to repeated Burmese attacks, and allowed their main armies to be encircled in the last three invasions. The Burmese scorched earth policy meant that the Chinese were vulnerable to supply line cuts. Perhaps most importantly, the Qing soldiers proved ill-suited to fight in the tropical climate of Burma. In the last three invasions, thousands of Chinese troops became ill with malaria and other tropical diseases, and many perished as a result. This neutralized the Chinese advantage of superior numbers, and allowed the Burmese to engage the Chinese armies head-to-head towards the end of the campaigns.
The war is considered the peak of Konbaung military power. Historian Victor Lieberman writes: "These near simultaneous victories over Siam (1767) and China (1765–1769) testified to a truly astonishing elan unmatched since Bayinnaung." The Burmese military proved that they were able and willing to take on a far superior enemy, using their familiarity with the terrain and the weather to their maximum advantage. (The Battle of Maymyo is now a military case study of infantry fighting against a larger army.)
Yet the war also proved that there were limits to Burmese military power. The Burmese learned that they could not fight two simultaneous wars, especially if one of them was against the world's largest military. Hsinbyushin's reckless decision to fight a two-front war nearly cost the kingdom its independence. Moreover, their losses, while smaller than Qing losses, were heavy in proportion to their much smaller population, hampering their military capability elsewhere. Konbaung's military power would plateau in the following decades. It made no progress against Siam. Its later conquests came only against smaller kingdoms to the west: Arakan, Manipur and Assam.
- Mongol invasion of Burma
- Ten Great Campaigns
- Burmese–Siamese War (1765–67)
- First Anglo-Burmese War
- Battle of Ngọc Hồi-Đống Đa
- (Burney 1840, pp. 171–173); from Burmese sources; figures adjusted down by one magnitude per G.E. Harvey's analysis in his History of Burma (1925) in the section Numerical Note (pp. 333–335).
- ~20,000 at the beginning, plus additional 10,000 men and 2000 cavalry towards the end
- (Burney 1840, pp. 180–181) and (Harvey 1925, pp. 333–335). Burney citing Burmese sources gives the Chinese strength as 500,000 foot and 50,000 cavalry and states the Burmese strength to be 64,000 foot and 1200 cavalry. These numbers are certainly exaggerated. Per Harvey (pp. 333–335), the Burmese numbers should be reduced by an order of magnitude, which gives the Chinese strength as about 55,000 which is in line with the 60,000 figure from Chinese sources. Moreover, the Burmese figure of ~65,000 was also exaggerated though probably not by a factor of ten. Per Harvey's analysis, the most the Konbaung kings could have raised was 60,000, even that in early 19th century when they had a larger empire than Hsinbyushin's. Hsinbyushin could not have raised 60,000 since Burma had been at war since 1740 and many able men had already perished. The most he could have raised was no more than 40,000.
- The number is derived from the fact that only a few dozens of the 30,000 strong main army managed to return back to Yunnan. (See e.g. (Myint-U 2006, pp. 102–103).) This figure does not include casualties suffered by the northern army.
- Whiting 2002, pp. 480–481.
- Dai 2004, p. 145.
- Htin Aung 1967, p. 182.
- Qing Chronicles 528.
- Giersch 2006, p. 103.
- Giersch 2006, p. 101.
- Qing Chronicles 327.
- Haskew 2008, pp. 27–31.
- Giersch 2006, p. 102.
- Htin Aung 1967, pp. 180–183.
- George C. Kohn 2006, p. 82.
- Harvey 1925, p. 258.
- Giersch 2006, pp. 101–110.
- Hall 1960, pp. 27–29.
- Harvey 1925, pp. 254–258.
- Fernquest 2006, pp. 61–63.
- Woodside 2002, pp. 256–262.
- Giersch 2006, pp. 99–100.
- Giersch 2006, pp. 59–80.
- Phayre 1884, pp. 191–192, 201.
- Giersch 2006, p. 68.
- Myint-U 2006, pp. 100–101.
- Phayre 1884, pp. 191–192, 201.
- Harvey 1925, p. 250.
- Kyaw Thet 1962, pp. 310–314.
- Phayre 1884, p. 192.
- Phayre 1884, p. 195.
- Htin Aung 1967, pp. 177–178.
- Myint-U 2006, pp. 102–103.
- Harvey 1925, p. 253.
- Kyaw Thet 1962, pp. 314–318.
- Htin Aung 1967, p. 178.
- Hall 1960, p. 28.
- Htin Aung 1967, pp. 178–179.
- Phayre 1884, pp. 196–198.
- Haskew 2008, p. 29.
- Harvey 1925, pp. 255–257.
- Myint-U 2006, pp. 103–104.
- Htin Aung 1967, pp. 181–183.
- Harvey 1925, pp. 257–258.
- Harvey 1925, p. 259.
- Lieberman 2003, p. 32.
- Woodside 2002, pp. 265–266.
- Lieberman 2003, p. 184.
- Burney, Col. Henry (August 1840). Four Years' War between Burmah and China. The Chinese Repository. 9. Canton: Printed for Proprietors.
- Dai, Yingcong (2004). "A Disguised Defeat: The Myanmar Campaign of the Qing Dynasty". Modern Asian Studies. Cambridge University Press. doi:10.1017/s0026749x04001040.
- Fernquest, Jon (Autumn 2006). "Crucible of War: Burma and the Ming in the Tai Frontier Zone (1382–1454)". SOAS Bulletin of Burma Research. 4 (2).
- Giersch, Charles Patterson (2006). Asian borderlands: the transformation of Qing China's Yunnan frontier. Harvard University Press. ISBN 0-674-02171-1.
- Hall, D.G.E. (1960). Burma (3rd ed.). Hutchinson University Library. ISBN 978-1-4067-3503-1.
- Harvey, G. E. (1925). History of Burma: From the Earliest Times to 10 March 1824. London: Frank Cass & Co. Ltd.
- Haskew, Michael E., Christer Joregensen, Eric Niderost, Chris McNab (2008). Fighting techniques of the Oriental world, AD 1200–1860: equipment, combat skills, and tactics (Illustrated ed.). Macmillan. ISBN 978-0-312-38696-2.
- Htin Aung, Maung (1967). A History of Burma. New York and London: Cambridge University Press.
- Jung, Richard J. K. (1971). "The Sino-Burmese War, 1766–1770: War and Peace Under the Tributary System". Papers on China. 24.
- George C. Kohn (2006). Dictionary of wars. Checkmark Books. ISBN 0-8160-6578-0.
- Kyaw Thet (1962). History of Union of Burma (in Burmese). Yangon: Yangon University Press.
- Lieberman, Victor B. (2003). Strange Parallels: Southeast Asia in Global Context, c. 800–1830, volume 1, Integration on the Mainland. Cambridge University Press. ISBN 978-0-521-80496-7.
- Myint-U, Thant (2006). The River of Lost Footsteps—Histories of Burma. Farrar, Straus and Giroux. ISBN 978-0-374-16342-6.
- Sir Arthur Purves Phayre (1884). History of Burma: including Burma proper, Pegu, Taungu, Tenasserim, and Arakan. From the earliest time to the end of the first war with British India. Trübner & co.
- Whiting, Marvin C. (2002). Imperial Chinese Military History: 8000 BC – 1912 AD. iUniverse. pp. 480–481. ISBN 978-0-595-22134-9.
- Woodside, Alexander (2002). Willard J. Peterson, ed. The Cambridge history of China: The Ch'ing Empire to 1800, Volume 9. United Kingdom: Cambridge University Press. ISBN 978-0-521-24334-6.
- Draft History of Qing, Chapter 327, Biographies 114 《清史稿》卷327 列傳一百十四 (in Chinese). China.
- Draft History of Qing, Chapter 528, Biographies 315 《清史稿》卷528 列傳三百十五 (in Chinese). China. |
In Excel, you can quickly change the case of the text in a cell (to lower case, upper case, or proper case) using text functions.
Below is an example of each type of case:
Excel PROPER Function – Overview
The PROPER function is one of the many text functions in Excel.
What Does it Do?
It takes a string as the input and returns a string where the first letter of all the words has been capitalized and all the remaining characters are in lower case.
When to Use it?
Use it when you have a text string and you want to capitalize the first letter of each word and make all the other characters lowercase. This could be the case when you have names in different formats and you want to make them consistent by capitalizing the first letters of the first and last names.
Proper Function Syntax
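The function takes a single argument:

=PROPER(text)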
- text – the text string in which you want to capitalize the first letter of each word.
Examples of using PROPER Function
Here are some practical examples to show you how to use the PROPER function in an Excel worksheet.
Example 1 – Making Names Consistent
Suppose you have the dataset as shown below:
The names in this dataset are all inconsistent.
You can use the PROPER function to make these consistent (where the first letter of each name is capitalized and the rest are lowercase).
The below formula would do this:
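A formula along these lines would do it (the cell references are an assumption here; this presumes the first name is in A2 and the last name in B2, so adjust them to your dataset):

=PROPER(A2&" "&B2)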
In the above formula, I use the ampersand operator to combine the text in columns A and B, and the PROPER function then makes the combined string consistent.
Example 2 – Making Address Consistent
Just like the names, you can also use it to make addresses consistent.
Below is an example dataset where the addresses are in an inconsistent format:
You can use the below formula to make all these addresses in a consistent format:
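A formula like the following would work (assuming an address sits in cell A2; copy it down the column for the rest):

=PROPER(A2)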
Note that this formula works well, but if you want the state code (such as CA, NV, NY) in upper case, the PROPER function alone will not do it.
In that case, you need to use the below formula:
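One possible sketch (assuming the two-letter state code always sits at the very end of the text in A2) is to apply PROPER to everything except the last two characters and UPPER to those two:

=PROPER(LEFT(A2,LEN(A2)-2))&UPPER(RIGHT(A2,2))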
You can get an idea of how this formula works from this tutorial.
Some useful things to know about the PROPER Function:
- The PROPER function only affects the first character of every word in a text string. All the other characters are left unchanged.
- It capitalizes the first letter of any word that follows a non-text character. For example: =PROPER("hello,excel") returns Hello,Excel
- Numbers, special characters, and punctuation are not changed by the PROPER function.
- If you give it an empty string (or a reference to an empty cell), it will return an empty string.
Other Useful Excel Functions:
- Excel FIND Function: Excel FIND function can be used when you want to locate a text string within another text string and find its position. It returns a number that represents the starting position of the string you are finding in another string. It is case-sensitive.
- Excel LOWER Function: Excel LOWER function can be used when you want to convert all uppercase letters in a text string to lowercase. Numbers, special characters, and punctuation are not changed by it.
- Excel UPPER Function: Excel UPPER function can be used when you want to convert all lowercase letters in a text string to uppercase. Numbers, special characters, and punctuation are not changed by it.
- Excel REPLACE Function: Excel REPLACE function can be used when you want to replace a part of the text string with another string. It returns a text string where a part of the text has been replaced by the specified string.
- Excel SEARCH Function: Excel SEARCH function can be used when you want to locate a text string within another text string and find its position. It returns a number that represents the starting position of the string you are finding in another string. It is NOT case-sensitive.
- Excel SUBSTITUTE Function: Excel SUBSTITUTE function can be used when you want to substitute text with new specified text in a string. It returns a text string where an old text has been substituted by the new one.
2 thoughts on “Excel PROPER Function (Useful Examples + Video)”
How do I make PROPER() keep the text it returned rather than the formula? e.g. if I use =PROPER(A2), Excel returns the text of A2. After that, if I delete cell A2, the returned text also disappears. How do I make sure it stays?
Well done. Clear and easily understood.
Number Patterns and Systems of Equations
Students investigate geometric number patterns and systems of equations. They observe and discuss various examples of linear patterns in a teacher demonstration.
9th - 12th Math
Relationships Between Quantities and Reasoning with Equations and Their Graphs
Graphing all kinds of situations in one and two variables is the focus of this detailed unit of daily lessons, teaching notes, and assessments. Learners start with piece-wise functions and work their way through setting up and solving...
6th - 10th Math CCSS: Designed
Geometric Sequences - Bacterial Growth
Bring algebra to life with scientific applications. Math minded individuals calculate and graph the time it takes a bacterium to double. They discuss geometric sequences and use a chart to graph their findings. There are 38 questions all...
8th - 10th Math CCSS: Adaptable
Using Linear Equations to Define Geometric Solids
Making the transition from two-dimensional shapes to three-dimensional solids can be difficult for many geometry students. This comprehensive Common Core lesson starts with writing and graphing linear equations to define a bounded...
9th - 11th Math CCSS: Designed
Solving Systems of Linear Inequalities
One thing that puzzles a lot of young algebra students is the factors in a word problem that are taken as "understood". This presentation on solving systems of linear inequalities does a great job walking the learner through how to tease those...
9th - 10th Math CCSS: Adaptable
Solving Systems of Linear Equations
Solving systems of equations underpins much of advanced algebra, especially linear algebra. Developing an intuition for the kinds and descriptions of solutions is key for success in those later courses. This intuition is exactly what...
8th - 9th Math CCSS: Adaptable
The Complex Geometry of Islamic Design
Discover the prevalence of geometric design in Islamic culture with this wonderful informational video. It begins with an overview of the complexity of designs dating back to the eighth century during early Islam, and then delves into...
5 mins 7th - 12th Math CCSS: Adaptable
Topic 4: Solving Systems of Linear Equations
Linear equations, coordinate planes, and systems of equations are covered in this extremely well-organized lesson. Composed of a series of mini-lessons, the instruction aims at explaining a different facet of solving systems of linear...
8th - 11th Math CCSS: Adaptable |
Systems of linear equations (sometimes simply called linear systems) are collections of linear equations that use the same set of variables. In other words, a system of linear equations contains two or more linear equations in the same variables. This is an example of such a system:
3x – 5y = 16
x – 3y = 8
This example shows a linear system with two equations and two variables. The number of equations in a system, as well as the number of variables, is not limited, but the number of solutions depends on how the counts of equations and variables compare. If there are more variables than equations in a linear system, the system typically has infinitely many solutions (although in special settings a unique sparse solution can sometimes be singled out). A system like that is called an underdetermined system. If a system has the same number of independent equations as variables, it has a single unique solution. But if a system of linear equations has more equations than unknowns, it generally has no solution. Such a system is called an overdetermined system.
Systems of linear equations are an important part of linear algebra and they play an important role in such sciences as engineering, physics, economics, chemistry and computer science, as well as modeling complex systems.
Solving systems of linear equations
There are a few approaches to solving systems of linear equations. The simplest way is to use a method called substitution. This method is appropriate for the simplest kinds of linear systems and we will use this method to solve the system above.
The first step you need to perform when using substitution is to solve one of the equations for one variable, say x, in terms of y. We will do that with the second equation, since its x term has a coefficient of one.
x – 3y = 8
x = 8 + 3y
Now we insert the expression for x into the top equation:
3 * (8 + 3y) – 5y = 16
24 + 9y – 5y = 16
4y = 16 – 24
4y = -8 |: 4
y = -2
After this, we just insert the value of y into either of the given equations and we will have the value of x and the coordinates of the solution. So, let us insert the value of y into the second equation:
x – 3*(-2) = 8
x + 6 = 8
x = 8 – 6
x = 2
The coordinates of the solution are (2, -2). The solution is a single point in which two lines (which are the visual representation of the given equations in a coordinate system) cross each other.
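For anything bigger than a two-by-two system, substitution gets tedious, and a linear-algebra routine can do the work and double-check the result. Below is a minimal sketch in Python (using the NumPy library, which is our choice here, not a requirement):

```python
import numpy as np

# Coefficient matrix and right-hand side for the system:
#   3x - 5y = 16
#    x - 3y = 8
A = np.array([[3.0, -5.0],
              [1.0, -3.0]])
b = np.array([16.0, 8.0])

# A is square with a nonzero determinant, so a unique solution exists.
solution = np.linalg.solve(A, b)
print(solution)  # [ 2. -2.]  i.e. x = 2, y = -2, matching the work above
```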
Graphing a system of linear equations
The solution to a system of linear equations can be determined graphically. The only thing you have to do is to draw the lines based on the given equations and then visually determine the point in which the lines cross paths. You can learn to do that by clicking on a link to our article on graphing linear equations.
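If you would rather sketch this on a computer than on paper, a few lines of Python with Matplotlib (an assumption; any graphing tool works the same way) will draw both lines and mark the crossing point:

```python
import numpy as np
import matplotlib.pyplot as plt

# Rewrite each equation in slope-intercept form:
#   3x - 5y = 16  ->  y = (3x - 16) / 5
#    x - 3y = 8   ->  y = (x - 8) / 3
x = np.linspace(-2, 6, 100)
plt.plot(x, (3 * x - 16) / 5, label="3x - 5y = 16")
plt.plot(x, (x - 8) / 3, label="x - 3y = 8")
plt.scatter([2], [-2], zorder=3, label="solution (2, -2)")
plt.legend()
plt.show()
```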
If you wish to practice solving and graphing systems of linear equations, please feel free to use the math worksheets below.
Graphing systems of linear equations exams for teachers
Graphing systems of linear equations worksheets for students
|Worksheet Name|File Size|Downloads|Upload date|
|---|---|---|---|
|Graphing systems of linear equations – Standard|170.7 kB|2581|October 14, 2012|
|Graphing systems of linear equations – Slope/Intercept|221.8 kB|1822|October 14, 2012|
|Systems of linear equations|666.8 kB|3580|October 14, 2012|
Closest element in BST
You have been given a binary search tree of integers with ‘N’ nodes and a target integer value ‘K’. Your task is to find the closest element to the target ‘K’ in the given binary search tree.
A node in BST is said to be the closest to the target if its absolute difference with the given target value ‘K’ is minimum. In the case of more than one closest element, return the element with a minimum value.
A binary search tree (BST) is a binary tree data structure with the following properties.
• The left subtree of a node contains only nodes with data less than the node’s data. • The right subtree of a node contains only nodes with data greater than the node’s data. • Both the left and right subtrees must also be binary search trees.
For the given BST and target value ‘K’ = 32, the closest element is 30 as the absolute difference between 30 and 32 (|32 - 30|) is the minimum among all other possible node-target pairs.
The first line contains an integer ‘T’, which denotes the number of test cases or queries to be run. Then the test cases follow. The first line of each test case contains the elements of the tree in level order form, separated by a single space. If any node does not have a left or right child, take -1 in its place. Refer to the example below. The second line of each test case contains a single non-negative integer ‘K’ denoting the target value.
Elements are in the level order form. The input consists of values of nodes separated by a single space in a single line. In case a node is null, we take -1 in its place. The input for the tree depicted in the below image would be :
1 2 3 4 -1 5 6 -1 7 -1 -1 -1 -1 -1 -1

Explanation:
Level 1: The root node of the tree is 1
Level 2: Left child of 1 = 2; Right child of 1 = 3
Level 3: Left child of 2 = 4; Right child of 2 = null (-1); Left child of 3 = 5; Right child of 3 = 6
Level 4: Left child of 4 = null (-1); Right child of 4 = 7; Left child of 5 = null (-1); Right child of 5 = null (-1); Left child of 6 = null (-1); Right child of 6 = null (-1)
Level 5: Left child of 7 = null (-1); Right child of 7 = null (-1)

The first non-null node of the previous level is treated as the parent of the first two nodes of the current level. The second non-null node of the previous level is treated as the parent of the next two nodes of the current level, and so on. The input ends when all nodes at the last level are null (-1).
The above format was just to provide clarity on how the input is formed for a given tree. The sequence will be put together in a single line separated by a single space. Hence, for the above-depicted tree, the input will be given as: 1 2 3 4 -1 5 6 -1 7 -1 -1 -1 -1 -1 -1
Output Format :
For each test case, print a single integer representing the closest element to the given target ‘K’. Output for every test case will be printed in a separate line.
You are not required to print the output, it has already been taken care of. Just implement the function.
1 <= T <= 100 1 <= N <= 5 * 10^3 0 <= Node.data <= 10^5 0 <= K <= 10^5 Time Limit: 1 sec
In this approach, we will visit all the nodes of the BST in pre-order fashion (we could visit the nodes in post-order or in-order fashion too) and find the absolute difference between the target ‘K’ and each node value. At the end, we will return the node value whose absolute difference from ‘K’ is minimum. (A code sketch follows the steps below.)
The steps are as follows:
- In the “helper” function
- If ‘NODE’ is null, then return.
- Declare a variable ‘CURR_DIFF’ of type integer to store the absolute value of the difference of target ‘K’ and ‘NODE’ value.
- Initialize a ‘CURR_DIFF’ as abs(‘K’ - ‘NODE’ -> data ).
- If ‘CURR_DIFF’ is less than ‘MIN_DIFF,’ then update ‘MIN_DIFF’ as ‘CURR_DIFF’ and ‘CLOSEST’ as ‘NODE’ -> data.
- Else if ‘CURR_DIFF’ is equal to ‘MIN_DIFF’ then, ‘CLOSEST’ will be a minimum of ‘CLOSEST’ and ‘NODE’ -> data.
- Recursively visit the left and right subtree.
- In the given function
- Declare variables ‘MIN_DIFF’ and ‘CLOSEST’ of type integer.
- Set ‘MIN_DIFF’ initially as INT_MAX.
- Call the “helper” function for the given node, target value ‘K’, ‘MIN_DIFF’, and ‘CLOSEST’.
- Return ‘CLOSEST’. |
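Translating the steps above into Python gives a short sketch (the class and function names here are illustrative, not the required signature on the judge):

```python
import sys

class TreeNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def closest_element(root, k):
    # MIN_DIFF starts at "infinity"; CLOSEST holds the best value seen so far.
    min_diff = sys.maxsize
    closest = -1

    def helper(node):
        nonlocal min_diff, closest
        if node is None:
            return
        curr_diff = abs(k - node.data)
        if curr_diff < min_diff:
            min_diff = curr_diff
            closest = node.data
        elif curr_diff == min_diff:
            # Tie: keep the smaller value, as the problem requires.
            closest = min(closest, node.data)
        helper(node.left)   # pre-order: node, then left subtree, then right
        helper(node.right)

    helper(root)
    return closest

# Example: a BST with root 25 and children 20 and 30, target K = 32 -> 30
root = TreeNode(25)
root.left, root.right = TreeNode(20), TreeNode(30)
print(closest_element(root, 32))  # 30
```

This visits every node once, so it runs in O(N) time with O(H) auxiliary space for the recursion stack, where H is the height of the tree.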
AY 101: Test Two Study Guide
Answering questions from the handouts, with a few bonuses
Chapter Six: Solar System Formation
1. The Nebular Hypothesis states that our solar system formed from a solar nebula with an initial rotation by the process of gravitational collapse.
a. Nebula: a large, often irregular looking type of astronomical object whose material lies between the stars; found in what astronomers call the interstellar medium.
b. Gravitational Collapse: occurs when part of an interstellar cloud is cold enough for gravity to overcome the cloud’s random thermal motions.
2. Planetesimals are small objects that started out as tiny particles and grew over time through condensation. They can collide and merge into larger bodies through the process of accretion.
a. Accretion: when planetesimals have a high enough velocity to collide and merge with others to the point where it gathers a significant gravitational pull to draw in all the other planetesimals in its orbit, thus clearing its orbital path.
3. Terrestrial planets are mostly made of rock and metal, and are located closer to the sun where they were able to form more solid outer layers.
a. Jovian planets are located outside the frost line in the outer solar system, much farther away from the sun. There, they developed thick atmospheres due to their massive sizes, but were not close enough to the Sun for their surfaces to condense into solids, so they are essentially liquid planets.
i. Frost line: the largest distance from the Sun that will allow terrestrial planets to form. Beyond it, gases condense into ices, and these ices get pulled in by the gravity of the massive planets to form liquid surfaces.
4. Radiometric dating means using the number of radioactive particles to determine the age of very old materials. It works by measuring the number of radioactive isotopes in a sample, then working backward from how many half-lives are presumed to have passed since the material was formed. (A worked example follows the definitions below.)
a. Isotopes: atoms of the same element that differ in their number of neutrons; many heavy isotopes are unstable and therefore radioactive.
b. Half-life: the amount of time it takes for one half of a sample of a radioactive isotope to decay into other, more stable elements (a known constant for each isotope).
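To make the working-backward step concrete, here is a small illustrative calculation (a sketch; the function is ours, and the 1.25-billion-year half-life is the commonly cited value for potassium-40):

```python
import math

def age_from_fraction(fraction_remaining, half_life_years):
    # N/N0 = (1/2)^(t / T)  =>  t = T * log2(N0 / N)
    return half_life_years * math.log2(1.0 / fraction_remaining)

# If 1/4 of the parent isotope remains, two half-lives have passed:
print(age_from_fraction(0.25, 1.25e9))  # 2.5e9 years
```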
5. We have determined that the solar system is about 4.6 billion years old by using radiometric dating on unaltered meteorites.
a. Meteorite: a meteoroid (a very small planetesimal) that crashed to the surface of the earth without burning up in the atmosphere.
Chapter Seven: Terrestrial Planets
6. Mercury, being a planet inside the frost line, is a terrestrial planet. However, it is more like the moon than the earth because it has no atmosphere.
7. Venus has a very thick atmosphere made up mostly of CO2 gas, which has caused a runaway greenhouse effect. Besides that, its sulfuric acid clouds make it a wholly uninhabitable planet.
a. Greenhouse effect: when certain gases with more reflective properties act as a sort of filter in the atmosphere of a planet, trapping incoming heat from nearby stars to heat up the exterior of the planet.
b. The effect becomes runaway when there are so many gases in the atmosphere, and so much heat builds up, that more gases are produced, making the effect even worse and eventually superheating the atmosphere of the planet.
8. The Earth’s atmosphere is mostly made up of nitrogen and oxygen, which is striking because free oxygen is rare in planetary atmospheres, and for these gases to show up in such concentration is remarkable.
a. The earth appears blue because the gases in its atmosphere (nitrogen and oxygen) tend to scatter blue light from the sun in the daytime. The oceans also reflect blue, adding to the color.
9. The moon is too small to hold an atmosphere, given its location relative to the Sun. Other bodies of the same size or smaller can be found with an atmosphere, but only because they are located far enough away from the Sun’s solar wind.
a. Surface gravity depends on the size of a world, and indicates its ability to draw in gases from space.
b. Lunar maria are the smooth, dark lowlands that can be seen on the moon.
c. Lunar highlands are the rougher surfaces that show up as the lighter areas seen on the moon.
i. These are considered to be the older areas that are visible on the moon.
10. Mars is red for the same reason that the earth is blue: the elements that are found on its surface and in its atmosphere reflect red waves of light the most, likely because of the abundance of iron oxide that is found on the surface of Mars.
a. Lightning-shaped streaks on the surface of Mars coming down from its mountains are evidence that water once flowed on the planet, but it likely froze or evaporated long ago.
11. Location relative to the sun primarily determines the amount of heat a planet receives.
12. The bigger the planet is, the more geologically active it will be, and the warmer it will be on the interior of the planet. Mass also determines the surface gravity a planet will have.
13. Surface gravity is the main force responsible for keeping an atmosphere over a planet (see the sketch below).
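Since surface gravity is doing the work in items 12 and 13, here is a minimal sketch of the underlying relation g = GM/R². The masses and radii are rounded reference values for Earth and the Moon, not figures from these notes:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg, radius_m):
    """Surface gravity from Newton's law: g = G * M / R^2."""
    return G * mass_kg / radius_m**2

# Earth vs. the Moon (approximate masses and radii)
print(surface_gravity(5.97e24, 6.371e6))  # ~9.8 m/s^2
print(surface_gravity(7.35e22, 1.737e6))  # ~1.6 m/s^2, too weak to hold on to gases
```

The factor-of-six difference is why the Moon cannot keep an atmosphere while Earth can.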
Chapter Eight: Jovian Planets
1. A “dipole magnetic field” refers to an electromagnetic field that develops out of a planet, with magnetic field lines that are able to divert away certain electromagnetic wavelengths; these lines converge at the magnetic poles of that planet. This location is determined by the interior shape of the planet.
a. The earth has a magnetic field because the conductive metals in its inner layers get heated and charged under the earth’s surface, creating a net charge at the top and bottom of the planet that attracts other elements of similar magnetic affiliation.
2. Without a magnetic field, or magnetosphere, the earth would be bombarded by harmful solar particles and radiation that would kill most if not all life on Earth. These particles get sent to Earth by the force of the Sun’s emission (the solar wind), but they do not cross the magnetic field lines. Instead they travel along the lines and are dissipated before entering the atmosphere.
a. There are some places on Earth where these particles do make it to the atmosphere, but only as charged particles that emit light when they interact with the earth’s atmosphere. These locations are the Arctic and Antarctic circles, where this interaction between the atmosphere and the charged particles creates the auroras, such as the aurora borealis.
3. Jovian planets are mostly made of gas and ice, so they have a very thick atmosphere, but no solid surfaces.
a. Jupiter spins so fast that its thick atmosphere streaks in large bands across its visible atmosphere.
b. The “Great Red Spot” is a giant anti-hurricane about the size of the earth that has been raging for hundreds of years.
4. Jupiter’s clouds are made of hydrogen compounds, and its overall atmosphere is mostly hydrogen and helium. The surface below, however, is completely liquefied.
a. Jupiter is unique in that it is the only place in the solar system that has liquid hydrogen.
5. Saturn is a bit smaller than Jupiter, but is still considered a massive gas giant. It has generally the same composition, but due to its distance from the Sun, a cold haze surrounds the planet and masks its bands of atmosphere, like those seen on Jupiter.
6. The rings of Saturn are made of millions of incredibly small planetesimals that are mostly ice and rock. These circle the planet in a disk shape that is extremely thin, only about 1.5 miles thick.
7. The Cassini Division in Saturn’s rings is the result of orbital resonance with Saturn’s moon, Mimas, which has an orbital period that is exactly twice the orbital period of the fragments in the division.
a. The orbital resonance means that the moon pulls on the fragments at the exact same spot, causing them to be pulled slightly out of their normal orbit and to bump into surrounding fragments, thus clearing that orbital path (a quick check of this geometry appears after this list).
b. The shepherd satellites Pandora and Prometheus reign over the F ring of Saturn and hold a number of fragments in a very thin line with their equal but opposite gravities.
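The 2:1 resonance in item 7 can be sanity-checked with Kepler's third law, T² ∝ a³: an object with half of Mimas' orbital period should orbit at a distance of a_Mimas × (1/2)^(2/3). The Mimas distance below is a rounded reference value, not a figure from these notes:

```python
# Kepler's third law: T^2 is proportional to a^3, so halving the period
# scales the orbital distance by (1/2)**(2/3).
mimas_semimajor_axis_km = 185_500  # approximate distance from Saturn's center
gap_distance_km = mimas_semimajor_axis_km * 0.5 ** (2 / 3)
print(round(gap_distance_km))  # ~116,900 km, close to the observed Cassini Division
```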
8. The moons of Jupiter (Io, Europa, Ganymede and Callisto) are all roughly the same size as, or larger than, Earth’s moon. The first three are also in orbital resonance with each other, and thus have some interesting characteristics.
a. Io is volcanically active due to the tidal heating that is caused by the orbital resonance of the moons. A significant tidal bulge causes it to warp, creating friction and therefore geologic activity within the moon.
9. A subsurface ocean is a body of liquid substance (usually water) that forms underneath a layer of ice that coats the entire world. A subsurface ocean can be found on Europa, a moon of Jupiter.
Chapter Nine: Asteroids and Comets
1. Asteroids are large planetesimals that are generally made of metal and rock.
2. Asteroids are mostly found in the Asteroid Belt, located between Mars and Jupiter.
3. The largest asteroids are about 1,000 km wide, and many are oblong in shape, which results in a boomerang-like rotation.
4. A comet nucleus is a condensed mass made up of rock and ice that forms in the Kuiper Belt. A typical comet is only a few miles wide.
5. A comet is very similar to an asteroid in that it is a colorless rock, but only while it is in the far reaches of the system. When the comet’s orbital period takes it close enough to the Sun, the ices that have formed evaporate straight to a gas, skipping a liquid phase—a process called sublimation.
6. The coma of a comet is the temporary atmosphere that forms around the comet’s nucleus as it passes the sun. This atmosphere will soon be blown away by the Sun’s solar winds.
7. Comets develop tails because, as the ices that make up the nucleus sublimate, the solar wind from the Sun blows the coma outwards and away from it, creating two tails.
a. The dust tail is made up of rock particles that become dislodged as the ices sublimate.
b. The gas tail is made up of the sublimated ices.
8. The dust tail curves around towards the orbital path of the comet because it is the heavier of the two tails. The gas tail is much lighter than the dust tail, and so points directly away from the Sun as it is blown away by the solar winds.
9. The tails will always point away from the Sun, so their direction will rotate to accommodate the position of the Sun as the comet passes through its orbit.
10. The orbit of a comet is always very elliptical, with a very long orbital period.
Chapter Ten: Exoplanets
1. An exoplanet is a planetary object that orbits an alien star.
2. We cannot see exoplanets with telescopes, like we can planets in our own system, because of their immense distance from us, but we can use other methods to detect them.
3. A center of mass is the point at which two objects of varying sizes balance, like the point on a pencil where it will balance the heavy eraser and lighter tip. Pluto and its moon Charon orbit around a shared center of mass, and so do not have a perfectly smooth orbit, but a slight wobble.
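A minimal sketch of the center-of-mass idea, using rounded reference figures for Pluto and Charon (assumed values, not from these notes):

```python
def barycenter_offset(m_primary, m_secondary, separation):
    """Distance of the shared center of mass from the primary's center."""
    return separation * m_secondary / (m_primary + m_secondary)

# Pluto (~1.303e22 kg) and Charon (~1.586e21 kg), separated by ~19,570 km
offset_km = barycenter_offset(1.303e22, 1.586e21, 19_570)
print(round(offset_km))  # ~2,100 km: outside Pluto's ~1,188 km radius,
                         # so both bodies visibly wobble around the shared point
```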
4. The radial velocity method is a method of detecting exoplanets by observing stars that have a slight wobble in their orbit, which is measurable through the Doppler effect. The wobble means that the star shares a center of mass with another object, most likely a planet.
a. Since the force of gravity between two objects is determined by their relative masses and the distance between them, we can also determine the mass and orbit size of the planet orbiting the alien star, because the bigger the wobble is, the larger the planet is and the larger the orbit is.
5. The other method for detecting exoplanets is by measuring their transit, or the amount that their star dims when the planet passes in front of it. This only works if the planet’s orbit is edge-on to us observing from Earth.
a. A light curve is a graphical representation of the dip in the brightness of the star caused by the transit. This tells us the size of the planet because larger planets will block more light from reaching us, resulting in a larger dip in brightness.
6. If both of the detection methods are used on the same exoplanet, we can determine the density of that planet because we will have gathered information on the size and mass of that planet.
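Putting the two methods together, a small sketch: the transit depth fixes the planet's radius relative to its star, and combining that radius with a mass from the radial velocity method gives the bulk density. Every number below is a hypothetical illustration value:

```python
import math

def planet_radius(transit_depth, star_radius_m):
    """Transit depth is the blocked fraction of starlight: depth = (Rp/Rs)**2."""
    return star_radius_m * math.sqrt(transit_depth)

def planet_density(mass_kg, radius_m):
    """Bulk density from a radial-velocity mass and a transit radius."""
    volume = (4 / 3) * math.pi * radius_m**3
    return mass_kg / volume

# Hypothetical detection: a 0.0084% dip around a Sun-sized star (~6.96e8 m),
# with a radial-velocity mass of ~5.97e24 kg.
r = planet_radius(8.4e-5, 6.96e8)   # ~6.4e6 m, roughly Earth-sized
print(planet_density(5.97e24, r))   # ~5,500 kg/m^3, dense enough to be rocky
```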
TO BE CONTINUED . . .
Chemical Patterns - Arrangement of Elements
Chemists have arranged the chemical elements into a table called the Periodic Table. This helps us to make sense of the different properties of the elements and their compounds. It also helps us to predict how they will behave in different situations.
There are several different versions of the Periodic Table, but all have a similar appearance. Each element is shown by its symbol, and sometimes also by its name.
In the Periodic Table, the elements are arranged in order of proton number, also called atomic number. This is the number of positive protons in each atom. It is shown as the number written below each element in the table.
Putting elements in this order gives a repeating pattern of their properties. In the Periodic Table each element is placed beneath those with similar properties.
Chemical Patterns - Periodic Table
Chemical Patterns - Data from the Table
The Periodic Table gives us a great deal of information about each of the elements. Firstly, the name and symbol of each element are shown. If you know one of these two facts about an element, you can use the table to find the other.
As well as the proton number shown below each element, another number is shown above it. This is the relative atomic mass of the element. It is a comparative measurement of the mass of one atom of the element. You can use it to see how much heavier an atom of one element is compared with an atom of another element.
For example a magnesium atom has a relative atomic mass of 24. So we know it is twice as heavy as a carbon atom, which has a relative atomic mass of 12.
Chemical Patterns - Rows and Columns
The Periodic Table is divided into horizontal rows and vertical columns. The first row has only two elements: hydrogen and helium. The next row has eight elements, lithium to neon.
Across each row, the elements on the left are metals, while those on the right are non-metals. Most of the elements are metals.
Each column in the table contains elements with similar properties called a group. Each has a group number, shown across the top of the table. So Group 1 contains the elements lithium (Li) to francium (Fr), and Group 7 contains the elements fluorine (F) to astatine (At).
Chemical Patterns - Group 1 Properties
The elements in Group 1 of the Periodic Table are called the alkali metals. They include lithium, sodium and potassium.
Lithium, sodium and potassium are all soft metals that are easily cut with a scalpel or knife. The freshly cut surface is a shiny, silver colour, but this tarnishes quickly to a dull grey as the metal reacts with oxygen and water in the air. Pieces of such metals are stored in oil to prevent these reactions. The shiny surface of sodium tarnishes more quickly than that of lithium. And potassium tarnishes more quickly than sodium. This shows the increasing reactivity of the metals as we go down the group.
Because the alkali metals are so reactive, care has to be taken when using them. They must not be touched because they will react with the water in sweat on the skin. Gloves may be used, and goggles should be worn.
Chemical Patterns - Melting Point, Boiling Point, Density
The alkali metals have low melting and boiling points compared to most other metals. Apart from the other alkali metals, only three metals - indium, gallium and mercury - have lower melting points than lithium. Lithium has the highest melting point in the group; the melting points then decrease as you go down the group.
Boiling points show a very similar pattern to the melting point. Lithium has the highest boiling point and they decrease as you go down the group.
The density of a substance is a measure of how much mass it has for its size. It is measured in grams/cubic centimetre. For example, gold and lead are very dense metals - even a small lump of either of them can still feel heavy. The alkali metals have low densities compared to most other metals. Lithium has the lowest density in the group. The densities then generally increase as you go down the group.
The alkali metals are very soft. Lithium is the hardest alkali metal and they become softer as you go down the group.
Chemical Patterns - Reaction with cold water
All the alkali metals react vigorously with cold water. In each reaction, hydrogen gas is given off and the metal hydroxide is produced. The speed and violence of the reaction increases as you go down the group. This shows that the reactivity of the alkali metals increases as you go down Group 1.
Lithium - When lithium is added to water, lithium floats. It fizzes steadily and becomes smaller, until it eventually disappears.
Lithium + Water = Lithium Hydroxide + Hydrogen
Sodium - When sodium is added to water, the sodium melts to form a ball that moves around on the surface. It fizzes rapidly, and the hydrogen produced may burn with an orange flame before the sodium disappears.
Sodium + Water = Sodium Hydroxide + Hydrogen
This equation applies to all the Group 1 metals when reacting with cold water.
Chemical Patterns - Strong Alkalis
The hydroxides formed in all of the reactions previously mentioned dissolve in water to form alkaline solutions. These solutions turn universal indicator purple, showing they are strongly alkaline. Strong alkalis are corrosive, so care must be taken when they are used - for example, by using goggles and gloves.
Chemical Patterns - Reaction with Chlorine
All of the alkali metals react vigorously with chlorine gas. Each reaction produces a white crystalline salt. The reaction gets more violent as you move down Group 1, showing how reactivity increases down the group.
Lithium - If a piece of hot lithium is lowered into a jar of chlorine, white powder is produced and settles on the sides of the jar. This is the salt lithium chloride.
Lithium + Chlorine = Lithium Chloride
2Li + Cl2 = 2LiCl
Sodium - If a piece of hot sodium is lowered into a jar of chlorine, the sodium burns with a bright yellow flame. Clouds of white smoke are produced and settle on the sides of the jar. This is the salt sodium chloride. The reaction of sodium with chlorine is similar to the reaction with lithium, but more vigorous.
Sodium + Chlorine = Sodium Chloride
2Na + Cl2 = 2NaCl
Chemical Patterns - Group 7
The elements in Group 7 of the Periodic Table are called the halogens. They include chlorine, bromine and iodine. The halogens are diatomic - this means they exist as molecules, each with a pair of atoms. Chlorine molecules have the formula Cl2, bromine Br2 and iodine I2.
The halogens show trends in physical properties down the group.
The halogens have low melting and boiling points. This is a typical property of non-metals. Fluorine has the lowest melting point and boiling point. The melting points and boiling points then increase as you go down the group.
Chemical Patterns - Group 7 Properties
Room temperature is usually taken as being 25 degrees Celsius. At this temperature fluorine and chlorine are gases, bromine is a liquid, and iodine and astatine are solids. There is therefore a trend in state from gas to liquid to solid down the group.
The halogens become darker as you go down the group. Fluorine is very pale yellow, chlorine is yellow-green and bromine is red-brown. Iodine crystals are shiny purple - but easily turn into a dark purple vapour when they are warmed up.
When we can see a trend in the properties of some of the elements in a group, it is possible to predict the properties of other elements in that group. Astatine is below iodine in Group 7. The colour of these elements gets darker as you go down the group. Iodine is purple and, as we would expect, astatine is black.
Chemical Patterns - Reaction of Halogens
The halogens become less reactive as you go down the group. Fluorine, at the top of the group, is the most reactive halogen. It is extremely dangerous, causing severe chemical burns on contact with skin.
The halogens react with metals to make salts called metal halides.
metal + halogen = metal halide
For example sodium reacts with chlorine to make sodium chloride (common salt).
The reaction between sodium and a halogen becomes less vigorous as we move down Group 7. Fluorine reacts violently with sodium at room temperature. Chlorine reacts very vigorously when in contact with hot sodium. Iodine reacts slowly with hot sodium.
Chemical Patterns - Use of Halogens
Halogens are bleaching agents. They will remove the colour of dyes. Chlorine is used to bleach wood pulp to make white paper.
Halogens kill bacteria. Chlorine is added to drinking water at very low concentrations. This kills any harmful bacteria in the water, making it safe to drink. Chlorine is also added to the water in swimming pools.
Because the halogens are very reactive and poisonous, care must be taken when using them. Chlorine is used in a fume cupboard. Iodine should not be handled (it will damage the skin). Gloves may be used, and goggles should be worn.
Chemical Patterns - Displacement Reactions
The reactivity of the halogens decreases as we move down the group. This can be shown by looking at displacement reactions.
When chlorine (as a gas or dissolved in water) is added to sodium bromide solution the chlorine takes the place of the bromine. Because chlorine is more reactive than bromine, it displaces bromine from sodium bromide. The solution turns brown. This brown colour is the displaced bromine. The chlorine has gone to form sodium chloride.
If you look at the equation you can see that the Cl and Br have swapped places.
Chlorine + Sodium Bromide = Sodium Chloride + Bromine
Cl2 + 2NaBr = 2NaCl + Br2
Chemical Patterns - Reactivity Series
This type of reaction happens with all of the halogens. A more reactive halogen displaces a less reactive halogen from a solution of one of its salts.
If you test different combinations of the halogens and their salts, you can work out a reactivity series for Group 7. The most reactive halogen displaces all of the other halogens from solutions of their salts, and is itself displaced by none of the others. The least reactive halogen displaces none of the others, and is itself displaced by all of the others. It works just the same whether you use sodium salts or potassium salts.
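The displacement logic can be captured in a few lines. This sketch simply encodes the Group 7 reactivity order described above; the function name is just for illustration:

```python
# Reactivity decreases down Group 7, so a halogen displaces any halogen
# below it in this list from a solution of one of its salts.
REACTIVITY = ["fluorine", "chlorine", "bromine", "iodine"]  # most to least reactive

def displaces(free_halogen, halogen_in_salt):
    """True if the free halogen pushes the other out of its salt solution."""
    return REACTIVITY.index(free_halogen) < REACTIVITY.index(halogen_in_salt)

print(displaces("chlorine", "bromine"))  # True: chlorine + sodium bromide reacts
print(displaces("iodine", "chlorine"))   # False: iodine cannot displace chlorine
```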
Chemical Patterns - Chemical Equations
When chemicals react with each other, different chemicals are made. One of the best ways to describe what is happening is by writing a chemical equation.
A chemical equation tells you which chemicals reacted together (the reactants) and the new chemicals that were made in the reaction (the products). The simplest equation is a word equation. For example: Sodium + chlorine = sodium chloride. A symbol equation gives more information about what is happening in the reaction. 2Na + Cl2 = 2NaCl - Each of the reactants and products is shown as a formula. This formula shows how many atoms of each element are present. The formula for sodium is Na - the same as its symbol. The formula for chlorine is Cl2, because the halogens exist as molecules of two atoms (diatomic molecules).
Each of the Group 1 halides has a formula with one symbol for the metal and one for the halogen. So, for sodium chloride the formula is NaCl. The numbers in front of the formulae are there to balance the equation. This gives the same number of atoms of each element on each side of the equation.
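One way to see what "balanced" means is to count the atoms of each element on both sides. A small sketch, with the sodium and chlorine formulas written out by hand:

```python
from collections import Counter

def atom_count(formula_units):
    """Total atoms of each element from (coefficient, {element: count}) pairs."""
    total = Counter()
    for coefficient, atoms in formula_units:
        for element, n in atoms.items():
            total[element] += coefficient * n
    return total

# 2Na + Cl2 = 2NaCl
reactants = [(2, {"Na": 1}), (1, {"Cl": 2})]
products = [(2, {"Na": 1, "Cl": 1})]
print(atom_count(reactants) == atom_count(products))  # True: the equation balances
```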
Chemical Patterns - State Symbols
Sometimes it is useful to know whether the reactants and products in a chemical reaction are solids, gases, liquids or dissolved in water. We can add state symbols to a symbol equation to show this.
- (s) - solid
- (l) - liquid
- (g) - gas
- (aq) - aqueous (dissolved in water)
So for the reaction between sodium and water, this is the symbol equation with state symbols :
2Na (s) + 2H2O (l) = 2NaOH (aq) + H2 (g)
Chemical Patterns - Atomic Structure
Atoms are not the smallest particles of matter. Atoms are made up of even smaller, subatomic particles called protons, neutrons and electrons.
At the centre of every atom is a nucleus containing protons and neutrons. All atoms of the same element have the same number of protons. This number is used to arrange the elements in the Periodic Table, beginning with hydrogen, which has just one proton.
Electrons are contained in shells around the nucleus. The total number of electrons is always the same as the number of protons in the nucleus.
These shells are also called energy levels. The number of shells, and the number of electrons in the outer shell, varies from one element to another. For example, a lithium atom has two shells with two electrons in the inner shell and one in the outer shell. A carbon atom also has two shells but with two electrons in the inner shell and four in the outer shell.
Chemical Patterns - Relative masses and charges.
Protons and neutrons have the same mass, which is about 2000 times larger than the mass of an electron. Protons and electrons have an electrical charge. This electrical charge is the same size for both, but protons are positive and electrons are negative.
Neutrons have no electrical charge; they are neutral.
Proton - Relative Mass = 1, Relative Charge = +1
Neutron - Relative Mass = 1, Relative Charge = 0
Electron - Relative Mass = 0.0005, Relative Charge = -1
Chemical Patterns - Spectra
The coloured light given off by fireworks is produced as elements in the fireworks are heated up. By studying the light given out by elements, scientists have found out about the structure of atoms, and even discovered new elements.
When the atoms of some metals are heated, they give off coloured light. The colour given off by each metal is different, and can be used to identify them.
A small piece of metal compound on the end of a piece of Nichrome wire is introduced into a hot Bunsen flame. The Bunsen flame shows a colour that is characteristic of the metal in the compound.
Lithium shows red
Sodium shows yellow
Potassium shows lilac
Chemical Patterns - Line Spectra
All atoms give off light when heated, although sometimes this light is not visible to the human eye. A prism can be used to split this light to form a spectrum, and each element has its own distinctive line spectrum. This technique is known as spectroscopy.
Scientists have used line spectra to discover new elements. In fact the discovery of some elements, such as rubidium and caesium, was not possible until the development of spectroscopy. The element helium was discovered by studying line spectra emitted by the sun.
Chemical Patterns - Electron Arrangement
The number of protons in the atom of an element determines its place in the Periodic Table. The number of electrons in an atom is the same as the number of protons. These electrons are arranged in shells or "energy levels" around the nucleus. The arrangement of electrons determines the chemical properties of an element.
Electrons are arranged in shells at different distances around the nucleus. As we move across each row of the Periodic Table the proton number increases by one for each element. This means the number of electrons also increases by one for each element.
Starting from the simplest element, hydrogen, and moving through the elements in order, we can see how the electrons fill the shells. The innermost shell of electrons is filled first. This shell can contain a maximum of two electrons.
Next the second shell fills with electrons. This can hold a maximum of eight electrons. When this is filled, electrons go into the third shell, which also holds a maximum of eight electrons. Then the fourth shell begins to fill.
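The filling rule above (2, then 8, then 8) can be sketched in a few lines. This toy function is only reliable up to calcium, where the simple rule stops working:

```python
def electronic_structure(proton_number):
    """Fill shells in order with capacities 2, 8, 8 (valid up to calcium)."""
    shells, remaining = [], proton_number
    for capacity in (2, 8, 8, 18):
        if remaining <= 0:
            break
        filled = min(capacity, remaining)
        shells.append(filled)
        remaining -= filled
    return ".".join(str(n) for n in shells)

print(electronic_structure(3))   # 2.1     (lithium)
print(electronic_structure(10))  # 2.8     (neon)
print(electronic_structure(20))  # 2.8.8.2 (calcium)
```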
Chemical Patterns - Dot & Cross Diagrams
The electronic structure of each element can be shown simply as the number of electrons in each shell. For example, lithium is 2.1, neon is 2.8, and calcium is 2.8.8.2.
The arrangement of electrons can also be shown using a dot and cross diagram. Electron shells are drawn as circles, with the electrons on each shown as dots or crosses. Here is an example:
Lithium atom: The black dot represents the nucleus, while the red dots represent the electrons.
Chemical Patterns - Electron Arrangement & Group Number
The way electrons are arranged in an atom is called the "electronic structure". As you have seen there is a link between an element's electronic structure and its place in the Periodic Table. You can work out an element's electronic structure from its place in the Periodic Table.
Moving across each period, the number of shells is the same as the period number. As you go across each period from left to right the outer shell gradually becomes filled with electrons. The outer shell contains just one electron on the left hand side of the table, but is filled by the time you get to the right hand side.
Moving down each group, the number of electrons in the outermost shell is the same as the group number. Each element in a group therefore has the same number of electrons in its outer shell.
Group 0 is a partial exception to this rule: although it comes after Group 7, it is not called Group 8, and it contains helium, which has only two electrons in its outer shell.
Designed to SUPPLEMENT and add FUN to an existing moon & tides unit!
Includes great activities, extra practice, and an engaging cumulative project.
SAVE 30% when you bundle these 5 great resources!
Supports NGSS standards MS-ESS1-1 and MS-ESS1-2.
Moon Phase Sorting Activity
Relationship Between Tides and the Phases of the Moon Activity
Use a Tide Chart to Graph Tides Activity
Planets in Motion: Seasons & Tides Cumulative Project
Moon Phases & Tides Review Activity
INDIVIDUAL RESOURCE DESCRIPTIONS:
Moon Phase Sorting Activity:
Get kids thinking and engaged with the content! This activity directs middle school students to cut out the names, definitions, and pictures of each moon phase and paste them on a 3-column chart under the appropriate heading of "Word", "Picture" and "Definition".
Relationship Between Tides and the Phases of the Moon Activity:
Take a deeper look at the relationship between the moon and tides over time! In this activity worksheet middle school students are provided the dates of extreme tides and full moons over a 5-month period, and fill this information into a series of small calendar boxes. From this information, they must predict the dates for the neap tides during these months. They must also make a small diagram of how the moon would appear on each calendar day. A quick analysis section is included at the end, but the bulk of the assignment is to complete the calendars with moon phases and tides to observe the relationship between the two.
Use a Tide Chart to Graph Tides Activity:
Through CLEAR INSTRUCTIONS and a STEP-BY-STEP process, students generate a graph to discover the patterns in tides and how they relate to the moon. Lesson Outline: First, students use a Daily Tide Chart to complete the worksheet “Using a Tide Chart”. This familiarizes them with reading and working with the tide chart. Second, students graph the tide chart data. At the end, students complete the “Tide Graph Analysis” half worksheet. This paper instructs them on how to make some simple additions to their graph (labeling neap and spring tides, moon phases, etc.) and asks some analysis questions.
Moon Phases & Tides Review Activity:
Fun! Students cut out pictures of the sun, Earth and moon. On their desk, they manipulate the positions of the 3 pictures in relation to each other according to 10 different cards that say things like "spring tide", "full moon", and "solar eclipse". Students love this activity!
Also check out our best-selling MOON & TIDES TEST.
Find lots more related resources in the MOON & TIDES Section of Our Store!
We especially recommend this GRAPHING GRAVITY ACTIVITY!
Need something different?
Search all of the NGSS-Aligned middle school materials in Our Store!
Always effective, easy to follow, and standards-based!
Our last forum will look at social development. Please answer the following three questions in your initial posting.
1-How is social learning linked to academic learning?
2-How are schools providing for social development for children?
3-What are notable issues on gender-role development in society today and how are we as a family and society reacting?
Emotional and Social Development in Early Childhood
The focus of this lesson is the emotional and social development in early childhood. It is critical that, during a child’s early years, he or she is exposed to a great variety of experiences that contribute to healthy social and emotional growth. Furthermore, this lesson will focus on the ways in which children develop a sense of self. When children interact with peers, they also advance in their social skills and social development. Finally, being aware of the different roles that genetic and environmental influences play on gender-role development will lead to greater understanding of gender expectations for these young children.
TOPICS TO BE COVERED INCLUDE:
· The development of the aspects of the self
· Peer sociability
· Moral development
· Gender-role development
Development of Aspects of the Self
As children learn to talk and their language skills improve, they become more self-aware as seen in the ways in which they subjectively talk about themselves. As children become able to understand their self-concept ‒ their attributes, attitudes, abilities, and qualities that make them unique ‒ they truly begin to develop a sense of self-awareness. This self-awareness has a profound impact on a child’s emotional and social life. Additionally, self-esteem is also affected by children’s awareness of self.
· RECOGNIZING SELF AS SEPARATE
· SELF-AWARENESS GROWS
· REFERRING TO SELF BY NAME
· PREFERENCES AND EMOTIONS
In infancy children develop an awareness of their body. As children continue to age, they begin to understand that they are separate beings from others. For example, during late toddlerhood, children learn that they have different emotional states, different characteristics (physical and emotional) and different actions or responses from others.
Psychosocial Developmental Stages
This self-awareness development corresponds to the second stage of Erik Erikson’s Psychosocial Development. Click on the icons to read about the milestones for each stage.
1 ½ to 3
Autonomy versus Shame and Doubt.
3 to 4
Initiative versus Guilt.
PRIDE AND HAPPINESS
IF SUPEREGO IS OVERLY STRICT
SOME SHAME AND GUILT IS NEEDED
Self-concept is the image that we hold about ourselves. These ideas or images stem from the beliefs that a child has about him or herself as well as how other individuals view that particular child. Self-concept is what children think about themselves, how they evaluate themselves, and how they perceive themselves.
· The child’s self-concept, or the ideas that a child has about himself or herself has a direct impact on emotional and social well-being. The categorical self emerges when a child becomes aware of himself or herself as a separate being from others, and that they are an object in the world. It is here that children continue to develop their self-concept.
Self-Esteem in Early Years
· PREOPERATIONAL STAGE
· EASY-GOING TEMPERAMENT
· DIFFICULT TEMPERAMENT
Self-esteem, the judgements we make about our own worth and the emotions that are associated with such judgements, is another aspect of self-concept. Self-esteem directly affects emotional experiences, future behaviors, and long-term psychological adjustments.
Self Esteem in Older Preschoolers
By the age of four, preschoolers have developed self-awareness and even self-judgements in several areas of their life, like learning, relationships, and play.
NO ASSIMILATION OF JUDGEMENTS FROM DIFFERENT SOURCES
COMPETENCIES INCORRECTLY APPRAISED
· As children’s self-awareness matures, so does their autobiographical memory, which is their remembered self. The remembered self includes accounts of experiences as a child as well as memories that are shared with the children by adults. The autobiographical memory greatly influences a child’s self-concept and self-esteem.
· PEER SOCIABILITY
· PROSOCIAL EVENTS
· COMMUNICATION ABILITY AND PEER RELATIONSHIPS
· TEACHING SOCIAL SKILLS
There are several areas in a child’s life that greatly affect the ways in which they interact socially with their peers. As children age, their relationships with their peers and their sociability advance. Peers play a critically important role in children’s well-being, because as their sociability develops, so does the children’s understanding of self and of others. Peer sociability is the interactions and friendships with others. Peer relationships in early childhood have a long-term impact on children. Positive peer relationships especially impact children because they serve as a protective factor against later psychological issues. On the other hand, negative peer relationships, such as peer rejection, are connected to poorer psychological and educational outcomes for children.
Levels of Peer Sociability
Peer sociability in the context of play affects children’s emotional and social development. Since play is the major activity of young children, much of what is known about children is in this context. For example, Mildred Parten was one of the first to study children in the context of play, in the 1930s. She identified that peer sociability proceeds in four levels.
CHILDREN ENGAGE IN DIFFERENT LEVELS OF PLAY
· Sociodramatic play, which is a type of Parten’s cooperative level of play, is a more cognitively advanced form of play. This play becomes more common in preschool years. This type of play supports cognitive, social, and emotional development.
Gender and Cultural Differences in Play
· GENDER DIFFERENCES
· PLAY IN INDIA
· PLAY IN CHINA
· RURAL AND URBAN DIFFERENCES
Girls engage in more sociodramatic play and boys engage in more rough and tumble types of interactions. Regardless of the type, play requires children to understand the emotions of themselves and others, exercise self-control, and respond to others’ verbal and nonverbal cues.
Friendships for toddlers and preschoolers differ greatly from the components that make up a friendship for adults. Older toddlers and preschoolers have friendships, but these do not have the long-term, enduring quality based on mutual trust that adult friendships and relationships do. Children’s friendships are primarily based on pleasurable play and sharing toys, which lasts approximately until the age of seven, which is also the end of the psychosexual stage that Freud identified. Friendships are typically related to proximity. Children form friendships with other children at their daycare or preschool.
INFLUENCE OF ADULTS AND PEERS
INFLUENCE OF TEACHERS
· MORALITY IN THE YOUNG CHILD
· STANDARDS OF MORALITY
· MILESTONES OF MORALITY
Morality in the young child is centered around the development of the conscience, which is part of the superego addressed earlier in this lesson.
Psychoanalytic Theory, developed by Sigmund Freud, stresses the emotional side of conscience development, especially identification and guilt as motivators of good conduct. This occurs in stages.
· A child obeys superego to avoid guilt, a painful emotion experienced when a child is tempted to misbehave.
Social Learning Theory
Social Learning Theory focuses on how moral behavior is learned through reinforcement and modeling. Unlike the psychoanalytic theory, it does not have unique stages; rather, morality is acquired gradually like other sets of responses.
CHARACTERISTICS OF A GOOD MODEL
CONSEQUENCES OF INADEQUATE MODELS
Cognitive Developmental Theory
· THINKING AND REASONING
· SOME RULES ARE MORE IMPORTANT
· SOME CHOICES ARE NEITHER GOOD NOR BAD
· RIGID MORAL REASONING
Cognitive Development Theory emphasizes thinking and a child’s ability to reason about justice and fairness and other social rules. By preschool, children make moral judgments about what is right or wrong. Sometimes, children have well-developed ideas, like whether a person intentionally wants to hurt or frighten or embarrass another. They understand that this individual is more deserving of punishment, compared to the child that unintentionally does one of those things (hurt or embarrass). Children approve of telling the truth and disapprove of lying.
Gender role development revolves around the child’s perception of the characteristics and behaviors identified with being a female or male. Children identify with a specific gender role based on both biological and environmental factors and influences.
· Gender Identity: whether a child identifies as being a male or female; most children identify with their biological sex but a small percentage do not or gender identity is not clear to them.
Similar to mannerisms, religious beliefs, and racism that stem from the home environment, attitudes that drive gender role are learned at home also. They are reinforced by peers, school, and the media. Children as young as two have been known to have a fairly well-developed understanding of gender roles.
RIGID PERCEPTION OF GENDER ROLES
Biological Influences on Gender Role Development
· ANIMAL STUDIES
· EVOLUTIONARY BASIS
· PRENATAL HORMONE EXPOSURE
· REDUCED ANDROGEN LEVELS
Sex differences in play and personality have been discussed and viewed in cultures across the world. Studies of mammals show that males tend to have higher amounts of physical aggression, females tend to be more emotionally sensitive, and at young ages, children prefer same-sex playmates.
Environmental Influences on Gender Role Development
Noticeable gender-typed behavior arises from ages two to thirteen with the sharpest increase in young preschoolers. Experiences at home build on genetic influences, leading to stronger gender typing in early childhood. From birth, children have different experiences based on their gender.
· For example, parents create a different environment by their choice of the color of the room, toys, clothes and how they interact which continues throughout childhood. Boys tend to get toys that involve action and/or competition.
The study of child development began in the 20th century, and many of the original theories and ideas of the 20th century continue to influence the study of child development today. Nature via genetics shapes many aspects of children’s lives and development, such as appearance, physical health, personality, intelligence and more. Nurture, or environmental factors, also plays a key role in the intellectual, emotional and physical development of children. The first two to three years of life are a time of rapid growth and development for children emotionally, physically, and cognitively. These years provide the basis for future learning. Physical or emotional harm during this time can cause lifelong issues with cognition, emotional control, impulse control, and motor skills. Both heredity and environment impact the cognitive ability of growing children.
Emotional and social development begins at birth and continues through infancy and toddlerhood. Basic emotions such as happiness and fear are found early in infancy. These are related to survival. Complex or higher-order emotions like shame and pride emerge once the child has a sense of self. Between birth and three years of age, children grow and develop rapidly. Growth is driven by genetic, hormonal, and environmental factors. In order for children’s language skill, development, and acquisition to grow, they must be exposed to opportunities to communicate with themselves, other children, and adults that use rich vocabulary. Based on research, there are several different stages (ages) at which we can expect children to start participating in make-believe play, understanding metacognition, communicating with others, and understanding grammar. Exposure to these practices will improve language skills and practices.
SOCIAL LEARNING THEORY
COGNITIVE DEVELOPMENTAL THEORY
I need help with the following assignment. Please let me know if you can help. Thank you!
Explain the relevance of assessing for psychopathy or antisocial personality disorder in an adult forensic population, as well as the reasons for assessing for psychopathy or antisocial personality disorder. Describe when and where in the adjudicative process assessment for psychopathy or antisocial personality disorder may be used, using specific examples. Explain how assessing for psychopathy or antisocial personality disorder may influence a case outcome, using specific examples.
Support your Application Assignment with specific references to all resources used in its preparation
Assignment 2: Outside Group Observation 1
For this assignment, you will attend a group session. The group may be a community-based group, a school-related group, a private practice group, an area support group, or other appropriate counseling group. You are responsible for making arrangements to attend the group. Exercise judgment in selecting the group to attend. For example, one would need permission from the group leader to attend a meeting of a suicide survivors’ group, but attending an open 12 step meeting would be much easier. You may find such open groups at schools, hospitals, community centers, or agencies.
Select the group and submit your choice for instructor approval by email, specifying the type of group, location of meeting, and anticipated date and time of attendance. No identifying information about group participants should appear in the paper. Be prepared to make changes if the instructor offers suggestions.
Once the group is approved, attend the session and write a report on the following:
- Analyze the current stage of the group and what you believe would help the group to function more effectively.
- Characterize the type of group session. (What issue or process occurred as the focus of the session?)
- Detail the group counseling theory that was being applied in the group session.
- Describe the techniques that were employed by the group leader during the session.
- Determine whether the desired group outcomes were achieved.
- Explain how the group leader’s behavior influenced these outcomes.
- Reflect on future directions for this group (i.e., possible next sessions).
Your paper should be a 3-page Microsoft Word document, citing a minimum of two scholarly sources.
Create an 8- to 10-slide Microsoft® PowerPoint® presentation on spatial organization.
Describe the following:
- The concept of spatial organization
- How spatial organization affects visual perception
- How perception influences behavior
Format your presentation consistent with APA guidelines.
Click the Assignment Files tab to submit your assignment.
Case Vignette 1
You have just started your new job as a counselor at a Native American reservation in Arizona. You are new to the area and the population. Discuss the ethical guidelines you will want to consider in your work with this client.
Case Vignette 2
You are working at a community mental health center in New York City. Your newest client is a 40-year-old Mexican woman, married, living with her extended family in New York. She is unemployed and has two children, ages 8 and 10. She is experiencing severe depression and was referred by her family physician. Discuss the ethical guidelines you will want to consider in your work with this client. In the case analysis, please include the following information:
- Description of at least 3 ethical and/or legal issues in the vignette
- Identification of relevant ethical codes
- Explanation of 3 courses of action to resolve the issue
- Description of the decision-making process for each course of action
- Assessment of the option that best upholds the ethical standards of the profession
The paper should be 5 to 6 pages, and include a minimum of 3 scholarly resources. APA format.
These questions need to be answered with a short essay answer for each question, with the citation right below the answer, not on a reference page. 200 words minimum for each question. No plagiarism.
1. Chapter 1: What is cognitive psychology?
2. Chapter 1: How did cognitive psychology emerge as a major force in psychology?
3. Chapter 1: What is a cognitive model, and how have cognitive models been used to understand the mind?
4. Chapter 3: Why are sensation and perception important topics to cognitive psychologists?
5. Chapter 3: What are the major theories of attention and the experimental support for them?
6. Chapter 3: What have cerebral imaging techniques told us about attention?
7. Chapter 4: What are the main issues regarding object recognition?
8. Chapter 4: What is Gestalt psychology, and how does the theory account for perception?
9. Chapter 4: What are the main features of the following ideas regarding pattern recognition: template matching, geon theory, feature analysis, and prototype formation?
10. Chapter 5: How much information can you hold in short-term memory?
11. Chapter 5: What is “chunking” of information, and how does it increase our capacity for storing knowledge?
12. Chapter 5: What type of memories are the easiest to remember? Why?
13. Chapter 6: What is meant by level of recall, levels of processing, and self reference effect?
14. Chapter 6: What is episodic and semantic memory?
15. Chapter 6: Discuss evidence for the existence of two memory stores.
16. Chapter 7: Discuss the link between mnemonics and college success.
17. Chapter 7: Discuss the link between expertise and brain function.
18. Chapter 7: What are the three factors that make a mnemonic, a mnemonic?
19. Chapter 8: What important historical events led to the contemporary studies of consciousness?
20. Chapter 8: How can consciousness be studied scientifically?
21. Chapter 8: Describe the stages of sleep.
22. Chapter 9: Why has the study of words and language been a favorite topic of psychologists interested in knowledge and its representation?
23. Chapter 9: What features identify the following: set-theoretical model, semantic feature-comparison model, network model, prepositional networks, neurocognitive model?
24. Chapter 9: What have studies of amnesic patients told us about the structure of memory?
25. Chapter 10: How were the early studies of mental imagery and the testing of mental attributes related?
26. Chapter 10: What are the main features of (a) the dual-coding hypothesis, (b) the conceptual-propositional hypothesis, and (c) the functional-equivalency hypothesis?
27. Chapter 10: How does a person’s bias influence the type of mental map he or she might form?
28. Chapter 11: How do psychologists differ from linguists in the study of language?
29. Chapter 11: What are the basic features of transformational grammar?
30. Chapter 11: What is the linguistic-relativity hypothesis? What support has been given for this hypothesis? And what evidence is against the hypothesis?
31. Chapter 13: How do cognitive psychologists define “thinking,” and how does thinking differ from concept formation? From logic?
32. Chapter 13: What are the major components of a syllogism?
33. Chapter 13: What are Venn diagrams? Take a basic argument and illustrate it in a Venn diagram.
34. Chapter 14: List some famous people you consider to be creative. What are the features that define creativity in them?
35. Chapter 14: How does functional fixity make creative solutions difficult?
36. Chapter 14: What recent experiments in genetics portend a new way of looking at intelligence?
Imagine that you are scheduled to interview a practicing psychologist about what his/her job is like. This is known as an “information interview.” If you could only ask 5 ethics related questions what would they be? Give this careful thought.
You only have 5 questions–what key aspects of the profession, with ethics particularly in mind, will you focus on?
Write your 5 questions and below each construct a hypothetical response. How would your interviewee answer each?
Note: You do not have to be an expert on professional ethics to complete this assignment. Base the responses on what you have learned about professional ethics in the course.
Development of 5 interview questions to ask psychologist
Hypothetical responses to the 5 interview questions demonstrate knowledge of ethical guidelines (250-300 words per response)
Compliance with APA paper source crediting and formatting standards.
Minimum of three scholarly resources are used
Minimal to no grammar, spelling or basic writing errors
Assignment is 5-6 pages, not including title page or references page
Identifying Relevant Theories and Models
To complete this assignment, use the required APA “Identifying Relevant Theories and Models Template,” linked in the Resources. Address the following sections: Theory Identification: Review three theories that you feel might be appropriate for addressing the client’s sexual problem. Include a section on how neuroscience has facilitated our understanding of the client’s problem. Pick the theory you believe best represents the client’s situation, and provide a rationale for your selection. In addition, describe a systems perspective that provides an understanding of family and other systems theories and major models of family and related interventions as it pertains to human sexuality. Reference: Continue to build the Reference section by adding the references you utilized to complete this section of your treatment plan. THIS IS THE CASE THAT I NEED TO WRITE ABOUT
CLIENT 3 CASE
Client is a 22 year old female. She states she has come to counseling because she wants to do a better job knowing herself and be brave enough to let others know her too. Client reports that she came out as bisexual several months ago and has had mixed reactions to this announcement. She states that she has told a few friends and some of them have disowned her. She also states that she has not told her parents or her best friend because she is worried that the same thing might happen (being disowned). She reports being conflicted about her sexuality for the past 4 or 5 years and states that she has known that she was different for a long time. She reports that she did not do anything about this in high school and ignored how she felt because she was worried about what others would say. She states now that she is in college and on her own that she feels it is safer to be able to explore her sexuality. She reports three relationships with men and two relationships with women but the relationships with women were a secret. She states that the last woman broke up with her because the client insisted on secrecy.
Client reports a stable family environment growing up in a two-parent home. She reports that her mother was very controlling and tried to make her do ‘all sorts of things’ while she was growing up. She states she mostly complied because she did not want to get into trouble. She reports that her dad was somewhat dismissive and allowed her mom to ‘control the house’. She states that she received good grades throughout high school and was basically ‘a stellar child’. She reports that she played basketball and tennis and was interested in sports her junior and senior year. She reports one older brother and one younger sister and strong relationships with both siblings. She reports that she has lived away from home for the past three years with the first year on campus and the last two years living in a house with friends. She has been part of a sorority since her freshman year and feels very connected to the women who also are part of the sorority. She is worried that if they find out about her secret, they might kick her out of the sorority. Client states that she is studying environmental sciences and is happy with this choice.
Client reports little income and is primarily supported by her parents. She states she works part-time on campus in the dean’s office and she likes this job, but she knows she needs to find something different that is more in her field of study. She reports no drug use, but she does report some alcohol use. She states she primarily drinks when she is at parties with her sorority sisters. She reports no past or current legal problems. She reports no medical problems. She states that she has grown up Catholic, but she is currently not practicing. She states that she believes there is a God but she does not know how to reconcile this belief with her thoughts and feelings about herself. She states that her faith only makes her feel guilt. She states that she is worried that she is bisexual and that she is never going to ‘pick a side’. Her gay and lesbian friends are always joking around with her because she just ‘falls in the middle’. She reports this is her only real problem of concern. Her grades are As and Bs and she is on track to graduate in the fall.
DUE DATE IS [email protected]
Natalie was growing concerned about her daughter Brandi’s school performance. Her grades had dropped since the beginning of the school year, and she seemed reluctant to go to school. On some days, she complained of vague symptoms, such as stomachache or headache. On other days, she simply did not get out of bed. Natalie took her to the doctor, but there was no definitive diagnosis. She questioned Brandi about any problems at school, but Brandi was uncommonly quiet. Natalie then looked at Brandi’s Facebook page and saw a series of comments from Brandi’s friends about a school bully. When Natalie confronted Brandi, the child broke down crying and told the whole story. Another girl, who was two years ahead of her in school, was bullying her. She would tease Brandi in school, leave nasty messages on her Facebook page, and even threatened her on several occasions.
Natalie was furious and immediately arranged a meeting with the teacher and school principal. The school officials attempted to address the problem by speaking to the girl and her parents. The parents placed their daughter in treatment; she was diagnosed with a behavior disorder and put on medication, which seemed to work. Both the girl’s parents and the school officials explained to Natalie that the girl had an underlying medical condition that caused her to become angry and lack impulse control. The school officials were reluctant to suspend the girl because it was “not her fault” but rather a “biological factor” causing the behavior.
Natalie was still upset. She did not understand why her own daughter should suffer. She had a nagging suspicion that the bully’s parents were using the biological cause as an excuse for their daughter’s bad behavior.
Research the biological causes of crime and the eugenics movement using the textbook, the Argosy University online library resources, and the Internet.
Based on the scenario, and drawing on your readings and research, respond to the following:
- Why do you think some people are troubled by the idea that crime has a biological cause? Support your response using an article from the popular media presenting the biological argument for criminal behavior.
- In what way may views of biological causes of crime be related to the eugenics movement? Give reasons using a scholarly, peer-reviewed article either for or against the eugenics movement.
Write your initial response in 4–6 paragraphs.
Compare Erikson and Freud’s theoretical frameworks. Make sure to identify and explain key differences (and similarities) between these two theoretical frameworks. Conclude with your perception of which theoretical framework not only fits most with your own view of human nature but may be most applicable or useful in your future career aspirations.
VOCABULARY LIST UNIT TEST ON PROPORTIONAL REASONING
RATIO: A COMPARISON OF 2 NUMBERS, OFTEN WRITTEN IN FRACTION FORM. WHAT IS THE RATIO OF GIRLS TO BOYS IN THE CLASSROOM?
RATE: A COMPARISON OF 2 DIFFERENT KINDS OF UNITS (MILES PER HOUR). WRITE THE TIME PER CLASS PERIOD AS A RATE.
RATE OF CHANGE: DESCRIBES HOW ONE QUANTITY CHANGES IN RELATION TO ANOTHER. CAN EASILY BE SEEN BY THE CHANGES ON A GRAPH.
Slope: rate of change between 2 points on a line (change in y / change in x).
Proportion: an equation that shows 2 equivalent ratios. Example: If there were 50 boys in this class, how many girls would there be? (Use the ratio from the first slide.)
Directly proportional: having a constant ratio. Example: Pizza Hut is offering pizzas for $10 each (any size, topping, crust). If you order 4 pizzas, how much will it cost? Is the relationship between number of pizzas and total price directly proportional?
Nonproportional: no constant ratio between quantities. Example: the number of items ordered at an online store compared with the total amount paid.
Inversely proportional: as one quantity becomes smaller, the other becomes larger. An example is the relationship between the speed and time it takes to travel a fixed distance. If you drive 60 mph, you can drive 60 miles in 1 hour. If you drive 30 mph, it will take you 2 hours to drive the same 60 miles. Inversely proportional relationships have a constant of proportionality; it can be found from a combination of speed and time that works for all pairs of speed and time. What is the constant of proportionality for the above relationship? How does this relate to the graph of the data? How long will it take you to drive 60 miles if you drive at 2 mph? 25 mph? 65 mph? How fast must you drive to cover the 60 miles in 5 hours? 3 hours?
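A worked solution to the questions on this slide, using its fixed distance of 60 miles (every number follows from speed times time equalling 60):

```latex
v \cdot t = 60 \quad \text{(the constant of proportionality is } k = 60\text{)} \\
t = \frac{60}{v}: \quad t(2\,\text{mph}) = 30\,\text{h}, \quad t(25\,\text{mph}) = 2.4\,\text{h}, \quad t(65\,\text{mph}) \approx 0.92\,\text{h} \\
v = \frac{60}{t}: \quad v(5\,\text{h}) = 12\,\text{mph}, \quad v(3\,\text{h}) = 20\,\text{mph}
```

Because the product v · t is fixed, the graph of time against speed is a falling curve (a hyperbola), not a straight line through the origin.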
Scale factor: ratio of the lengths of 2 corresponding sides of 2 similar polygons.
Ratio of areas = scale factor squared (times itself). Ratio of volumes = scale factor cubed (times itself twice). Examples: a scale factor of 4 gives a ratio of areas of 16 and a ratio of volumes of 64; a scale factor of 3/4 gives a ratio of areas of 9/16 and a ratio of volumes of 27/64.
The world of superconductivity is in an extraordinary state. In 2015, a team in Germany discovered that hydrogen sulphide can superconduct at 203 kelvin (-70 degrees Centigrade). That’s the highest temperature ever recorded for a superconductor. But nobody is quite sure how it does it, although there are one or two theories.
Today, the field is set to be turned on its head once again by the discovery that a simple organic molecule, more usually found in sunscreen, can be made to superconduct at 123 K at ambient pressure. That’s almost 100 degrees higher than the current record for an organic superconductor and a similar temperature to the very best ceramic superconductors.
If confirmed, the discovery raises the prospect of a new focus for materials scientists hoping to achieve ever higher superconducting temperatures. It should also set theorists scrambling for an explanation—for the moment nobody is quite sure why this molecule should superconduct at all at such temperatures.
First some background. Ordinary metallic superconductors operate at relatively low temperatures of up to 30 K (-243 degrees Centigrade). When cooled, electrons within the metallic lattice join up to form Cooper pairs that interact with coherent vibrations in the lattice. When the temperature is low enough, these vibrations conspire to ease the passage of the Cooper pairs through the lattice with zero resistance.
This phenomenon of zero resistance is a delicate state. Raise the temperature just slightly and it breaks down and the resistance increases dramatically. This phase transition in conductivity at a critical temperature is one of the hallmarks of superconductivity.
Another is the so-called Meissner effect, in which a superconductor expels the magnetic field within it. Physicists demand evidence of the Meissner effect in any claim of superconductivity.
A final test of superconductivity is the isotope effect. Because superconductivity depends on lattice vibrations, it is tremendously sensitive to the mass of the atoms in the lattice. Change their mass by replacing them with lighter or heavier isotopes and the critical temperature changes too.
This change in critical temperature is yet another crucial sign of superconductivity. Physicists usually demand all three of these signatures before they accept any claim of superconductivity.
So what of the new claim? The molecule in question is an aromatic hydrocarbon called para-terphenyl or sometimes diphenylbenzene. As a material used in laser dyes and sunscreen, it is unremarkable.
That looks set to change now that Ren-Shu Wang and pals at Hubei University in China say they have made it superconduct at 123 K by doping it with potassium.
Their method is relatively straightforward. In a high vacuum, they mixed pure para-terphenyl with pure potassium cut into small pieces at a ratio of three to one. They then packed the mixture into quartz tubes and heated it to 260 degrees Centigrade for up to seven days.
Finally, they put the mixture into non-magnetic capsules and measured the magnetic and conducting properties of the material over a temperature range of 1 to 300 K.
The results make for interesting reading. Ren-Shu and co say the magnetic properties of the material change dramatically at a temperature of 123 K. “This shape of the magnetization susceptibility curve is consistent with the well-defined Meissner effect,” they say. “The superconducting transition at temperatures higher than 120 K in this molecule was unambiguously confirmed from these measurements.”
That’s an interesting result but it is far from a slam dunk. It’s not hard to imagine physicists asking whether they observed a phase transition in conductivity at the same temperature and a change in the critical temperature when ordinary atoms were replaced with isotopes.
Unfortunately, Ren-Shu and co have nothing to say on these issues. That makes their announcement tentative at best.
Physicists well know that the field of superconductivity is littered with claims of high-temperature phenomena that have turned out to be impossible to reproduce. So more work is clearly needed here.
Nevertheless, it raises some interesting ideas. If para-terphenyl is superconducting at these temperatures, how does it do it?
Ren-Shu and co have a tentative answer and some evidence for it. Conducting polymers are relatively new to physics, having first been developed only a few decades ago. To explain how they work, physicists think that electrons move across a molecule by interacting with the way it vibrates.
This combination of a vibration and an electron is called a polaron and the way polarons move through an organic molecule explains their conductivity. In some molecules polarons can pair up to form bipolarons.
This is what Ren-Shu and co think is happening in para-terphenyl when it is doped with potassium. The doping allows bipolarons to form, and when the structure is cooled, the bipolarons travel resistance-free in exactly the same way as Cooper pairs in conventional superconductors.
They say they have gathered evidence of this by studying the vibrations at work in the superconducting molecules using a technique known as Raman scattering.
That’s an interesting suggestion. If bipolarons can cause organic materials to superconduct, para-terphenyl is unlikely to be the only example. And maybe this same process will work at higher temperatures in other materials.
That’s a lot of “ifs,” and all this should be taken with a pinch of salt until the exotic behavior of doped para-terphenyl is confirmed. One thing is for certain though—para-terphenyl is about to become one of the most closely studied molecules on the planet. We’ll be watching to see what it reveals.
Ref: arxiv.org/abs/1703.06641: Superconductivity Above 120 Kelvin in a Chain Link Molecule
When we have studied equilibria so far, it has always been so-called partial equilibria. (A partial equilibrium is one where we assume that “everything else is unchanged.”) However, we have also seen that a change in one variable can lead to changes in many other variables, so the restriction that everything else is unchanged may not be very realistic.
For example, a price change can affect the price of close substitutes and complementary goods. We will now study how interactions between two individuals in a very simple economy lead to general equilibrium, i.e. a simultaneous equilibrium in all markets.
A "Robinson Crusoe" Economy
Consider an economy with only two agents on a desert island: Robinson and Friday. Those two are then the only consumers and the only producers. Let us assume that they also produce only two goods: coconuts and fish. The question is then how many coconuts and fish to produce, and how to allocate them between themselves.
We have already mentioned efficiency several times in these posts (for instance in The Deadweight Loss of a Monopoly and First Degree Price Discrimination ). Efficiency is about how much waste there is in an economy; the less waste, the more efficient the economy. There are several related ways to define efficiency. An often-used measure of efficiency regarding allocations is Pareto efficiency. The definitions are:
Pareto improvement. A change in allocation such that
- No one is worse off; and
- At least one, possibly several, is better off.
Pareto efficient or Pareto optimal. An allocation such that
- No Pareto improvements are possible.
The Edgeworth Box
In Chapter 3, we discussed the basics of consumer theory. We described, for instance, indifference curves and the budget line. If we consider an exchange economy in which the quantities of the goods are fixed, we have a zero-sum game. This means that one individual can only get quantities that the others do not get; there is, for instance, no growth in the economy. The name “zero-sum game” comes from the fact that the sum of what some people gain and what others lose is always zero.
Figure 18.1: Two Indifference Maps
Suppose that Robinson and Friday have different preferences over coconuts and fish, and that these look like in Figure 18.1. Since the quantities of the two goods are fixed, we can combine these two indifference maps into one by taking one of them, for instance Friday’s, turning it upside-down and putting it over Robinson’s. We then get a picture such as the one in Figure 18.2. (The additional information in the figure will be explained below.) Such a diagram is called an Edgeworth box.
Figure 18.2: An Edgeworth Box for Consumption
Note that the scales in the Edgeworth box are in opposite direction for the two agents. Upwards along the Y-axis, Robinson gets more fish while Friday gets less. To the right along the X-axis, Robinson gets more coconuts and Friday gets less. Also, be aware of which preference curves belong to which individual:
The full lines belong to Robinson while the broken lines belong to Friday.
Efficient Consumption in an Exchange Economy
The clever thing with the construction in Figure 18.2 is that we can see directly which allocations of goods are Pareto efficient.
- Consider, for instance, point a. Can it correspond to an efficient allocation? Compare it to point b. In b, both Robinson and Friday are better off (as both are on a higher indifference curve). Consequently, b is a Pareto improvement as compared to a, and then a cannot be an efficient allocation. Note that all points within the grey area are Pareto improvements as compared to a.
- Is then point b efficient? Compare b to c. In c, Robinson is better off while Friday is indifferent between b and c. Consequently, c is a Pareto improvement as compared to b, and then b cannot be efficient either.
- In point c, one of Robinson’s indifference curves just barely touches one of Friday’s indifference curves. That is the criterion for a Pareto efficient allocation of goods. Compared to c, every other allocation makes either Robinson or Friday (or both) worse off. c is consequently a Pareto efficient allocation.
Remember that the slope of an indifference curve is the same thing as the marginal rate of substitution; MRS. The criterion for an efficient allocation of goods can then be written
MRS_R = MRS_F
where the subscripts refer to Robinson and Friday, respectively. In other words, for the allocation to be efficient, both agents are required to have the same marginal valuation of the goods.
Point c is, however, not the only Pareto efficient allocation in the diagram. We could repeat the procedure above for every possible indifference curve, and find all points of tangency. If we would do that, and then connect all points to a curve, we would get the so-called contract curve.
If we assume that the initial allocation of coconuts and fish is as in point a and that we have free trade, then we would expect Robinson and Friday to start trading until they end up somewhere on the contract curve. Moreover, since only points in the grey area are Pareto improvements compared to a, we would expect them to end up on the part of the contract curve that lies within that area. Exactly where they end up is, however, a question of negotiations between Robinson and Friday.
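To make the contract curve concrete, here is a minimal sketch in Python. It assumes Cobb-Douglas preferences and hypothetical totals of the two goods (neither the functional forms nor the numbers come from the text); it traces the allocations where the two agents’ MRS coincide.

```python
import numpy as np

# Hypothetical endowments and Cobb-Douglas preference parameters (assumptions).
X, Y = 10.0, 8.0   # total coconuts and fish in the economy
a, b = 0.3, 0.6    # Robinson's and Friday's utility shares for coconuts

def mrs(share, x, y):
    """|slope| of a Cobb-Douglas indifference curve u = x**share * y**(1-share)."""
    return share / (1.0 - share) * y / x

def contract_curve(x_r):
    """Fish allocation y_R that equates MRS_R and MRS_F for a given x_R.
    Derived by solving k_r*y_R/x_R = k_f*(Y - y_R)/(X - x_R) for y_R."""
    k_r, k_f = a / (1 - a), b / (1 - b)
    return k_f * Y * x_r / (k_r * (X - x_r) + k_f * x_r)

for x_r in np.linspace(1.0, 9.0, 5):
    y_r = contract_curve(x_r)
    # On the contract curve the two marginal valuations coincide.
    assert np.isclose(mrs(a, x_r, y_r), mrs(b, X - x_r, Y - y_r))
    print(f"x_R = {x_r:.1f} -> y_R = {y_r:.2f}")
```

Every printed allocation is Pareto efficient; which one Robinson and Friday actually reach depends on the initial endowment and their negotiation.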
The Two Theorems of Welfare Economics
There are two important theorems regarding efficiency and competitive markets: the two welfare theorems:
- 1st theorem of welfare economics: If all trade occurs in perfectly competitive markets, the allocation that arises in equilibrium is efficient.
- 2nd theorem of welfare economics: Each point along the contract curve is a competitive equilibrium for some initial allocation of goods.
The first theorem is a variation of “the invisible hand” (see: Properties of the Equilibrium of a Perfectly Competitive Market ). Perfect competition is enough to produce an efficient allocation. The second theorem states that any point on the contract curve can be reached as a competitive equilibrium, given a suitable initial allocation of goods; society can therefore redistribute initial endowments and still rely on the market to find an efficient allocation.
Regarding production, we can perform an analysis that is very similar to the one we did for consumption. Instead of two indifference maps, we put two isoquant maps (see Production in the Long Run ) together. We imagine that Robinson and Friday have two firms, one that produces coconuts and one that produces fish. In the production, they use labor and capital, and their access to these input factors is fixed.
They have a certain number of fishing tools, tools to pick coconuts with, and a maximum number of working hours. This allows us to construct an Edgeworth box for production (see Figure 18.3).
Let us start by assuming that Robinson and Friday have chosen point a. They consequently invest quite a large number of working hours, but not so much capital, in fishing and, vice versa, a small number of working hours but quite a lot of capital in picking coconuts.
Point a is not efficient. If they instead choose point b, they get, employing the same total number of working hours and the same total amount of capital, more coconuts and just as much fish as they did in a. If they choose point c instead of a, they get more fish and just as many coconuts as before. Consequently, both points b and c constitute efficiency gains compared to point a.
Figure 18.3: Edgeworth Box for Production
What is “wrong” with point a? The isoquant for fishing (the full line) has a slope that is smaller than the isoquant for coconuts (the broken line). In Production in the Long Run , we defined the marginal rate of technical substitution, MRTS, as the slope of an isoquant. Remember what MRTS means: If we use one unit less of labor, how much more capital must we use in order to produce the same quantity of goods?
The fact that the curves have different slopes implies that we can reduce the work in the fishing firm by a small amount and increase the capital in the same firm by a small amount to keep the quantity the same.
This will free up labor that we can put into the coconut firm instead while we have to reduce the capital employed in that firm by a small amount. However, since the isoquant for coconuts is steeper than the one for fishing, the change means that they can now increase the production of coconuts.
In Figure 18.3, we see that the criterion for having an efficient production is that the isoquant for fishing just barely touches the isoquant for coconuts. In such a point, the two curves have the same slope and the criterion can be expressed as
MRTS_1 = MRTS_2
If we, similarly to before, find all such efficient combinations of labor and capital for the two goods and join them into a curve, we get the production contract curve.
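The same tangency logic can be checked numerically. A minimal sketch, assuming hypothetical Cobb-Douglas technologies for the two goods (not taken from the text):

```python
# Hypothetical totals of labour hours and capital on the island (assumptions).
L_TOTAL, K_TOTAL = 12.0, 6.0

def mrts(p, labour, capital):
    """|slope| of an isoquant for the technology q = labour**p * capital**(1-p)."""
    return p / (1.0 - p) * capital / labour

def production_efficient(l_fish, k_fish, tol=1e-9):
    """Efficient when the fishing isoquant is tangent to the coconut isoquant,
    i.e. MRTS is equal across the two activities."""
    return abs(mrts(0.5, l_fish, k_fish)
               - mrts(0.7, L_TOTAL - l_fish, K_TOTAL - k_fish)) < tol

print(production_efficient(6.0, 3.0))   # False: the isoquants cross, like point a
print(production_efficient(6.0, 4.2))   # True: a point on the contract curve
```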
The Transformation Curve
In the last section, we derived the production contract curve. That curve is a collection of all efficient combinations of the two input factors. Those combinations also define how much we can maximally produce of one good, given a certain production of the other. Say, for instance, that we produce 50 fish. What is the maximum number of coconuts that we are then able to produce? For us to achieve the maximum, we have to produce at an efficient point, i.e. at some point on the production contract curve. At some point on this curve, we produce 50 fish. The maximum number of coconuts we are able to produce is then the number produced at that same point, say 100 coconuts. If we take the points on the production contract curve, note which combinations of goods they each correspond to, and then use that information in a new graph, we can derive the so-called transformation curve (also called the production-possibility frontier).
The point from the example above, 50 fish and 100 coconuts will then be one point on the transformation curve. On the curve, we have all efficient production possibilities and underneath it, we have all other possible, but inefficient, production possibilities.
Figure 18.4: The Transformation Curve
What is the opportunity cost of producing one more unit of either good 1 or good 2? Since point a is not efficient, we do not have to give up anything to move to, for instance, point b. We just have to decrease the degree of waste in the economy.
Consequently, the opportunity cost is zero! However, when we have reached point b, we cannot increase the production of good 2 more without reducing the production of good 1. If we want to move from point b to point c, we, instead, have to give up a certain quantity of good 2 to compensate for the increase in good 1.
The quantity we have to give up is the opportunity cost. In other words, on the transformation curve the two goods have a price in terms of the other good, a relative price.
Note that the slope of the transformation curve is the same thing as what we in Baskets of Goods and the Budget Line defined as the marginal rate of transformation, MRT. To find MRT , we then used prices:
MRT = -p_1 / p_2
However, in the transformation curve there are no prices. Here, instead, we directly get the relative price of the goods.
However, note that this is what we got in Baskets of Goods and the Budget Line as well. The example we used there was that one ice cream costs 10 units (of the appropriate currency) and a pizza 20 units. If we insert those prices into the formula, keeping all units, we get MRT = -10/20 = -1/2: giving up half a pizza buys one more ice cream. MRT is consequently the relative price of one good, expressed in units of the other good. Note that this means that the slope of the budget line in Baskets of Goods and the Budget Line is directly related to which point on the transformation curve one has chosen.
Pareto Optimal Welfare
We will now put production and consumption together in one diagram. We start from the transformation curve, and assume that society has come up with an efficient mix of goods 1 and 2, i.e. of coconuts and fish in our example.
The production will then lie on a point on the transformation curve, for instance, point a in Figure 18.5. The marginal rate of transformation, MRT , is the slope in point a, and we produce the quantity q1 of good 1 and q2 of good 2. Robinson and Friday now have to allocate the goods between themselves. We therefore add an Edgeworth box under the transformation curve, with one corner at point a and the opposite one at the origin.
An efficient allocation then requires that their respective relative valuations of the goods are equal, i.e. that MRS_R = MRS_F (where the subscripts refer to Robinson and Friday, respectively). In the figure, two such allocations are indicated: point b and point c, which both lie on the contract curve. (Also, compare to Figure 18.2.) We now have efficiency in production (since the total production is on the transformation curve) and efficiency in consumption (since the allocation is on the contract curve). Is that enough for us to have general efficiency?
No, it is not. It is also required that we produce what the consumers demand. There is one big difference between points b and c. In point b, the slope of the indifference curves, MRS, is the same as the slope of the transformation curve, MRT, in the point at which we have chosen to produce. That is not the case at point c. At point c, MRS is smaller in magnitude than MRT.
To see what the problem with that is, think of what MRS and MRT are. MRS is the price the consumers are willing to pay for one good in terms of the other, i.e. how many coconuts Robinson and Friday are willing to trade for one fish. Equilibrium in consumption demands that they have the same valuation. MRT, on the other hand, is the price the producers have to pay (given that the production is efficient) to produce one more unit of one good, again in terms of the other good.
If the consumers are willing to pay more for one good than they have to, there are unexploited opportunities and the situation cannot constitute a general equilibrium. If we change the production such that we produce more of the good of which the consumers have a high valuation, then at least one consumer will be better off without anyone else being worse off.
Figure 18.5: Pareto Optimal Welfare
The criterion for an efficient output mix is then that
MRS = MRT
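For a worked example of this criterion, the sketch below assumes a circular transformation curve and a simple representative utility function (both hypothetical, not from the text) and solves MRS = MRT together with the transformation curve using sympy:

```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2', positive=True)

# Hypothetical transformation curve q1**2 + q2**2 = 100, so |MRT| = q1/q2.
transformation = sp.Eq(q1**2 + q2**2, 100)
mrt = q1 / q2
# Hypothetical representative utility u = q1*q2, so |MRS| = q2/q1.
mrs = q2 / q1

# The efficient output mix lies on the transformation curve where MRS = MRT.
print(sp.solve([transformation, sp.Eq(mrs, mrt)], [q1, q2], dict=True))
# [{q1: 5*sqrt(2), q2: 5*sqrt(2)}]
```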
A Definition of Pareto Optimal Welfare
We have now discussed three different types of efficiency:
- MRS_R = MRS_F: Efficient consumption. Robinson and Friday have the same marginal valuation of the goods. Neither of them can be made better off by a reallocation without making the other one worse off.
- MRTS_1 = MRTS_2: Efficient production. The production of one of the goods cannot be increased without a reduction in the production of the other.
- MRS = MRT: Efficient output mix. The cost of transforming one good into the other, the relative price, equals the consumers' relative valuation of the goods. No consumer can be made better off by another output mix without making another worse off.
If all three of these criteria are fulfilled, we talk of Pareto optimal welfare.
In the remaining chapters, we will look at a few cases of market failures. A market failure is a situation in which the market fails to achieve an efficient allocation. We have already seen a few such cases: both monopolies and oligopolies are, for instance, examples of market failures. In the following, we will briefly discuss externalities, public goods, and asymmetric information.
Our consumption of goods does not occur in a social vacuum.
Much of our consumption, perhaps all of it, indirectly affects other people. The most classical example is pollution. It does not have to be a big factory; it could be your neighbors having a barbecue party. You do not participate, but you still get a share of the smell. You might take your car to work; you pay for the fuel but not for the pollution or the congestion to which you expose others. Other examples include the use of penicillin (you are cured, but contribute to making bacteria penicillin resistant), vaccinations, and a well-kept garden that your neighbors also enjoy looking at.
An externality is a situation in which the consumption or the production of goods has positive or negative effects on other people’s utility where these effects are not reflected in the price. It is common to distinguish between positive and negative externalities:
- Positive externalities. One person’s consumption of goods also increases other people’s utility without them having to pay for it.
- Negative externalities. One person’s consumption of a good decreases other people’s utility without them receiving any compensation.
Note that positive externalities are also a problem. Typically, we get too few goods with positive externalities and too many goods with negative externalities.
The Effect of a Negative Externality
Let us study the classical example of a negative externality: A firm produces a good, but in doing so they also pollute the environment. First, we need to define a few concepts:
- The marginal cost of the externality, ME. The change in the cost of the external effect when production is increased by one unit. This is similar to the concept of MC, but instead of concerning the cost for the firm, it concerns the (uncompensated) cost of the externality.
- Social cost. The sum of the cost of producing the good and the cost of the external effect.
- Marginal social cost, MSC. The sum of the firm’s marginal cost and the marginal cost of the externality, i.e. MC + ME.
We can analyze this situation in a way that is similar to the one in Short-Run Equilibrium and Long-Run Production. Look at Figure 19.1 (and compare, for instance, to Figure 9.2). The firm operates in a perfectly competitive market, so MR = p. To maximize its profit, the firm chooses to produce the quantity where MC = MR, i.e. the quantity qC.
However, this firm also emits pollution. The pollution does not cost the firm anything, but there is a cost to society. The more the firm produces, the more it pollutes. In the figure, we have drawn the marginal cost of the external effect, ME, and the marginal social cost, MSC = MC + ME. We see that for society, the optimal quantity to produce is qS.
The effect of the firm being able to ignore the cost of polluting is that it produces too much of the good. As an indirect effect, there will also be more pollution than at the optimum.
Figure 19.1: The Effect of a Negative Externality
Regulations of Markets with Externalities
One way to correct the situation in The Effect of a Negative Externality is to put a tax on each unit of the product. If one knows the size of ME then the tax should be that same amount. Thereby, the marginal cost curve of the firm will coincide with MSC, and the firm will automatically correct its production to the optimal quantity, qS. An obvious problem with this solution is that one rarely knows ME.
Other strategies to regulate the market include quantity regulations and the creation of transferable emissions permits. |
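As a numerical recap of the externality argument above, here is a minimal sketch with assumed linear curves and an assumed market price (none of the numbers come from the text):

```python
P = 10.0                                  # market price (assumption)

def mc(q):  return 2.0 + 0.5 * q          # firm's private marginal cost
def me(q):  return 0.25 * q               # marginal cost of the externality
def msc(q): return mc(q) + me(q)          # marginal social cost, MC + ME

q_c = (P - 2.0) / 0.5    # competitive output: MC(q) = P  ->  q_C = 16
q_s = (P - 2.0) / 0.75   # social optimum:    MSC(q) = P  ->  q_S = 10.67

tax = me(q_s)            # per-unit tax equal to ME at the social optimum
# With the tax, the firm's marginal cost at q_S is mc(q_S) + tax = P,
# so it voluntarily produces the socially optimal quantity.
print(f"q_C = {q_c:.2f}, q_S = {q_s:.2f}, corrective tax = {tax:.2f}")
```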
An item is the basic element. Usually it will be a character. A sequence is an ordered set of items. Sequential items in a sequence are said to be logically contiguous. The sequence data structure will keep the items of the sequence in buffers. A buffer is a set of sequentially addressed memory locations. A buffer contains items from the sequence but not necessarily in the same order as they appear logically in the sequence. Sequentially addressed items in a buffer are physically contiguous. When a string of items is physically contiguous in a buffer and is also logically contiguous in the sequence we call them a span. A descriptor is a pointer to a span. In some cases the buffer is actually part of the descriptor and so no pointer is necessary. This variation is not important to the design of the data structures but is more a memory management issue.
Sequence data structures keep spans in buffers and keep enough information (in terms of descriptors and sequences of descriptors) to piece together the spans to form the sequence. Buffers can be kept in memory but most sequence data structures allow buffers to get as large as necessary or allow an unlimited number of buffers. Thus it is necessary to keep the buffers on disk in disk files. Many sequence data structures use buffers of unlimited size, that is, their size is determined by the file contents. This requires the buffer to be a disk file. With enough disk block caching this can be made as fast as necessary.
The concepts of buffers, spans and descriptors can be found in almost every sequence data structure. Sequence data structures vary in terms of how these concepts are used.
If a sequence data structure uses a variable number of descriptors, it requires a recursive sequence data structure to keep track of the sequence of descriptors. In section 5 we will look at three sequence data structures that use a fixed number of descriptors and in section 6 we will look at three sequence data structures that use a variable number of descriptors. Section 7 will present a general model of a sequence data structure that encompasses all these data structures.
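As an illustration of how buffers, spans and descriptors fit together, here is a minimal Python sketch (illustrative only, not code from any of the surveyed data structures). The sequence is an ordered list of descriptors; inserting never moves existing items, it only appends to a buffer and splices descriptors:

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    buffer: str   # which buffer the span lives in ('original' or 'add')
    start: int    # offset of the span within that buffer
    length: int   # number of physically contiguous items in the span

class Sequence:
    def __init__(self, text: str):
        self.buffers = {"original": text, "add": ""}
        # Initially the whole sequence is one span over the original buffer.
        self.pieces = [Descriptor("original", 0, len(text))]

    def text(self) -> str:
        """Piece the spans together, in descriptor order, to form the sequence."""
        return "".join(self.buffers[d.buffer][d.start:d.start + d.length]
                       for d in self.pieces)

    def insert(self, pos: int, s: str) -> None:
        """Append the new items to the 'add' buffer, then split whichever
        span contains pos and splice a new descriptor between the halves."""
        new = Descriptor("add", len(self.buffers["add"]), len(s))
        self.buffers["add"] += s
        offset = 0
        for i, d in enumerate(self.pieces):
            if offset <= pos <= offset + d.length:
                left = Descriptor(d.buffer, d.start, pos - offset)
                right = Descriptor(d.buffer, d.start + left.length,
                                   d.length - left.length)
                self.pieces[i:i + 1] = [p for p in (left, new, right) if p.length]
                return
            offset += d.length

seq = Sequence("hello world")
seq.insert(5, ",")      # logically contiguous with its neighbours,
print(seq.text())       # physically in another buffer: "hello, world"
```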
Metals combine with other substances to form compounds. There is a diversity of metal compounds known; one can make an endless list of the compounds of the metals. In this chapter, we are going to concentrate our efforts on the following compounds of the metals: oxides, hydroxides, carbonates and hydrogen carbonates, nitrates, chlorides and sulphates.

Oxides

Preparation of Oxides of Some Metals by Direct and Indirect Methods
Prepare oxides of some metals by direct and indirect methods

All elements except helium, neon and argon form compounds with oxygen. This is because oxygen is quite reactive. Binary compounds of oxygen are known as oxides. Therefore, a metal oxide is a binary compound of oxygen and a metal.

Metal Oxides
Classify metal oxides

Chemical properties of metal oxides
- Reaction with water: The oxides of potassium, sodium and calcium are very soluble in water. They react vigorously with cold water to produce the corresponding hydroxides. The oxides of metals below calcium in the reactivity series are all insoluble in water.
- Reaction with acids: The oxides of metals above hydrogen in the reactivity series react with dilute acids to produce salt and water.
- Reaction with alkalis: Some oxides react with alkalis to produce salt and water. The oxides of this nature include ZnO, Al2O3, PbO and SnO.
The Reactions of Metal Oxides with Water and Dilute Acids
Demonstrate the reactions of metal oxides with water and dilute acids

Preparation of metal oxides

Metal oxides can be prepared by:
- direct methods. This involves heating metals directly in air.
- indirect methods. This involves such methods as heating carbonates and hydrogencarbonates of certain metals in air, and reacting certain metals with certain acids.
Preparation of metal oxides by direct combination (direct method)

In this method, oxides are prepared by direct combination of metals with oxygen. This involves heating a metal in air. When some metals are burned in air, they react with the oxygen of the air to form metal oxides. However, this method is not used intensively because some metals tend to form a protective layer of oxide on the surface of the metal that prevents further attack by oxygen. The best example of such metals is aluminium which, when heated in air, forms a protective layer of aluminium (III) oxide (Al2O3) on the surface of the metal. Table 9.1 shows the products formed when certain metals are burned in air.

Table 9.1: The reaction of metals with oxygen
| Metal | How it reacts | Product |
| --- | --- | --- |
| Barium | burns with a green flame | white solid (barium (II) oxide, BaO) |
| Calcium | burns with a brick-red flame | white solid (calcium oxide, CaO) |
| Sodium | burns with a yellow flame | white solid (sodium oxide, Na2O) |
| Potassium | burns with a purple flame | white solid (potassium oxide, K2O) |
| Magnesium | burns with a white flame | white solid (magnesium oxide, MgO) |
| Iron | burns with yellow sparks | blue-black solid (iron oxide, Fe3O4) |
| Copper | does not burn, turns black | black solid (copper (II) oxide, CuO) |
Activity 1: Preparation of oxides by direct combination

Aim: to prepare oxides of metals
Materials: magnesium ribbon, aluminium foil, iron filings, gas jar, Bunsen burner and a pair of tongs

Procedure:
- Lower a piece of burning magnesium ribbon, by means of tongs, into a gas jar of oxygen. Observe and record what happens.
- Heat the aluminium foil strongly in a Bunsen flame. Observe and record what happens.
- Perform the same experiment with iron filings. Also, record what happens.

Discussion:
- What changes did you observe when each of the metals above was heated in a Bunsen flame?
- What was the function of oxygen in the reaction?
- Write well-balanced equations for the reactions that took place.
- Write and balance the equations for the reactions when the following metals burn in oxygen: (i) sodium (ii) potassium (iii) iron
Preparation of metal oxides by indirect methods

This method involves thermal decomposition of salts. When some salts are heated, they decompose into oxides and other products. If the anion part of the salt heated contains some oxygen, a portion of this oxygen may remain bonded to the central metal atom. However, this method is limited to compounds of metals below sodium in the electrochemical series. For instance, potassium oxide or sodium oxide cannot be prepared by the action of heat on their carbonates. Thermal decomposition of some metal carbonates is shown by the following equations:
- CuCO3(s) → CuO(s) + CO2(g)
- CaCO3(s) → CaO(s) + CO2(g)
- ZnCO3(s) → ZnO(s) + CO2(g)
Hydroxides also behave in a similar manner. When hydroxides of metals below sodium in the electrochemical series are heated, they decompose into the respective oxides, giving off water in the form of steam.
- Ca(OH)2(s) → CaO(s) + H2O(g)
- Mg(OH)2(s) → MgO(s) + H2O(g)
The oxides can also be prepared by heating some nitrates and sulphates:
- 2Pb(NO3)2(s) → 2PbO(s) + 4NO2(g) + O2(g)
- 2Cu(NO3)2(s) → 2CuO(s) + 4NO2(g) + O2(g)

The nitrates of silver and mercury are not suitable for the preparation of oxides by thermal decomposition because they decompose directly to the metals when heated. The bond between oxygen and the metal atom is not strong enough to withstand the thermal energy:
- 2AgNO3(s) → 2Ag(s) + 2NO2(g) + O2(g)
Activity 2: Preparation of oxides by thermal decomposition of carbonates

Aim: to prepare metal oxides by thermal decomposition of carbonates

Procedure:
- Put a sample of copper carbonate in a test tube.
- Place the test tube in a Bunsen flame and heat slowly, then strongly.
- Observe and record any changes, including testing the gases evolved.
- When no further changes take place in the test tube, cool down the contents.
- Perform the same experiment with lead carbonate.

Discussion:
- (a) What was the colour of the copper carbonate before heating? (b) What was the colour of the residue in the test tube after cooling? What substance is this? (c) Which gas was evolved on heating the carbonate?
- (a) What happened to lead carbonate when it was heated? (b) What gas was evolved? (c) What colour was the residue after cooling?
- Write well-balanced chemical equations for the two experiments performed.
- Explain why metal oxides cannot be prepared by thermal decomposition of either potassium or sodium carbonate.
The Uses of Metal Oxides
Explain the uses of metal oxides

Metal oxides find a wide range of uses. The following are the uses of the most common oxides.

Uses of calcium oxide (CaO)
- Making mortar: Calcium oxide reacts with water to form the hydroxide, Ca(OH)2, known as slaked lime. Mortar is made by mixing slaked lime, sand and water, and is used in sticking bricks together and in forming smooth surfaces on walls of buildings.
- Calcium oxide is used for making whitewash, which is used in marking sports fields and roads, and is brushed on the walls of buildings to give them a white colour. Whitewash is a suspension of slaked lime in water.
- Cement and concrete: Cement is made by heating together lime or limestone and clay. The product is a mixture of calcium silicates and aluminates. Clay is hydrated aluminium silicate. A mixture of cement, sand, stones and water gives concrete, which on setting becomes extremely hard. It is the material used for making foundations of buildings, pillars, roads, paths, bridges, etc.
- Soil treatment: In agriculture, quicklime is used to neutralize soil acidity, and it also adds mineral nutrients (Ca2+) to the soil.
- Calcium oxide is dissolved in water to make slaked lime, which is used in the softening of water.
- Drying agent: Calcium oxide is used for drying ammonia and ethanol.
- It is used in the manufacture of bleaching powder, CaOCl2.
- Manufacture of glass: heating a mixture of sand, sodium carbonate and lime or limestone gives glass.
- Lining of furnaces: It is mixed with magnesium oxide to form the basic lining of furnaces, which removes acidic impurities in the form of slag.
- It is used in the blast furnace to remove impurities from iron ore; the impurities are removed in the form of slag.
- Preparation of calcium carbide: Calcium carbide (CaC2) is manufactured in an electric furnace at 2000°C. CaO(s) + 3C(s) → CaC2(s) + CO(g)
Uses of magnesium oxide (MgO)
- The oxide is used as a lining material in refractory furnaces, owing to its high melting point, which is around 2900°C. It is also used as a refractory agent in the construction of crucibles.
- Magnesium oxide in its solution form (magnesium hydroxide) is commonly used as an antacid. This works because magnesium hydroxide is a basic substance, which means that it will neutralize excess acidity and relieve the indigestion caused by too much hydrochloric acid in the stomach.
- It is used to manufacture common laboratory reagents such as magnesium chloride, magnesium sulphate and magnesium hydroxide.
- Magnesium oxide is a popular drying agent. In its powder form, it is hygroscopic in nature. This makes it suitable for drying different substances.
- Insulation: Due to its heat-resistance properties, magnesium oxide powder makes an excellent insulator.
- Dietary supplement: Since it is a good source of magnesium, the oxide is used as or in dietary supplements for humans and animals.
Uses of aluminium (III) oxide (Al2O3)
- Aluminium (III) oxide, in the form of bauxite, is used as a source of aluminium.
- Owing to its hardness, the oxide is used as an abrasive, i.e. it is used to rub and clean other surfaces.
- It is used as an adsorbent in chromatography.
- It is used in the lining of furnaces as a refractory material because it has a high melting point (2040°C).
Uses of zinc oxide (ZnO)
- Zinc oxide is chiefly used in the manufacture of paints and pigments. In addition, the oxide is used to manufacture anti-corrosive coatings, lubricants, adhesives, batteries, fire retardants, plastic, cement, glass and ceramics (as a component of glazes).
- Manufacture of rubber: It is mainly used to activate vulcanization, which aims at improving the strength and elasticity of rubber.
- Manufacture of cigarette filters: As a cigarette filter, zinc oxide helps to remove certain harmful compounds from the tobacco smoke without altering its flavour.
- Making concrete: It helps to make the concrete more resistant to water, besides improving the processing time required.
- Medicinal uses: Zinc oxide has anti-bacterial properties, for which it is extensively used to treat a number of skin conditions. It is applied topically to provide relief from skin irritation, diaper rash, minor burns and cuts, and for dry and chapped skin. It is added to baby powder and anti-dandruff shampoos, as well as antiseptic creams and surgical tapes, due to its medicinal properties. In addition, together with iron oxide, it is used to make calamine solution.
- Cosmetic uses: The most important use of zinc oxide in the cosmetic industry is in the preparation of sunscreen lotions and creams. Zinc oxide can absorb ultraviolet (UV) radiation from the sun and thereby protect the skin from sunburn and other damaging effects of UV radiation.
Hydroxides

Preparation of Hydroxides of Some Metals by Direct and Indirect Methods
Prepare hydroxides of some metals by direct and indirect methods

Metal hydroxides are electrovalent compounds, composed of metallic ions, which are positively charged, and hydroxyl ions, OH–, which are negatively charged. The nature of the hydroxides of the metals varies according to the position of the metal in the reactivity series.
Metal hydroxides can be prepared in the laboratory by two methods:
- Direct method
- Indirect method
Direct method

This method involves processes such as adding metals directly to water. The method is suitable for the preparation of soluble hydroxides (alkalis):
- 2Na(s)+ 2H2O(l)→ 2NaOH(aq)+ H2(g)
- 2K(s)+ 2H2O(l)→ 2KOH(aq)+ H2(g)
- Ca(s)+ 2H2O(l)→ Ca(OH)2(aq)+ H2(g)
Indirect method

This method involves the preparation of insoluble hydroxides by reacting aqueous solutions of sodium or potassium hydroxide with aqueous salts of metals. These are called precipitation reactions. Only NH4OH, KOH and NaOH are completely soluble in water. Calcium hydroxide is sparingly soluble in water (0.173 g/100 ml at 20°C). All other hydroxides are insoluble in water and can be prepared by this method:
- CuCl2(aq)+ 2NaOH(aq)→ Cu(OH)2(s)+ 2NaCl(aq)
- Ionically:Cu2+(aq)+ 2OH–(aq)→ Cu(OH)2(s)
- FeSO4(aq)+ 2KOH(aq)→ Fe(OH)2(s)+ K2SO4(aq)
- Ionically:Fe2+(aq)+ 2OH–(aq)→ Fe(OH)2(s)
- FeCl3(aq)+ 3NaOH(aq)→ Fe(OH)3(s)+ 3NaCl(aq)
- Ionically:Fe3+(aq)+ 3OH–(aq)→ Fe(OH)3(s)
Metal Hydroxides
Classify metal hydroxides

Just like metal oxides, metal hydroxides can be classified based on their solubility in water as either soluble or insoluble hydroxides. They can also be classified, based on their reaction with acids and bases, as basic or amphoteric hydroxides.

A basic hydroxide is a metal hydroxide that contains hydroxyl ions, OH–, and will react with an acid to form a salt and water only.

An amphoteric hydroxide is a hydroxide that shows both acidic and basic properties; that is, it will react with both an acid and a base. Examples of amphoteric hydroxides are Al(OH)3, Zn(OH)2 and Pb(OH)2. Amphoteric hydroxides behave as bases and as acids under different conditions. These hydroxides are only weakly basic, but like other bases, they still combine with acids to form salt and water:
- Zn(OH)2(s)+ H2SO4(aq)→ ZnSO4(aq)+ 2H2O(l)
- Pb(OH)2(s)+ 2HNO3(aq)→ Pb(NO3)2(aq)+ 2H2O(l)
- Al(OH)3(s)+ 3HCl(aq)→ AlCl3(aq)+ 3H2O(l)
However, in the presence of strong alkalis, these hydroxides behave like acids and combine with them to yield a salt and water; for example, zinc hydroxide dissolves in sodium hydroxide to form sodium zincate: Zn(OH)2(s) + 2NaOH(aq) → Na2ZnO2(aq) + 2H2O(l)
The Chemical Properties of Metal Hydroxides
Explain the chemical properties of metal hydroxides

In this particular case, the chemical properties of only a few common hydroxides will be discussed.

Chemical properties of sodium hydroxide (NaOH) and potassium hydroxide (KOH)

When heated together with aluminium, zinc and lead metals, concentrated sodium hydroxide dissolves these metals to form sodium aluminate, zincate and plumbate respectively.

Caustic alkalis react with soluble salts of certain metals like copper, lead, zinc and iron to form their insoluble hydroxides by double decomposition:
- 3NaOH(aq)+ FeCl3(aq)→ Fe(OH)3(s)+ 3NaCl(aq)
- 2NaOH(aq)+ ZnSO4(aq)→ Zn(OH)2(s)+ Na2SO4(aq)
Sodium hydroxide liberates ammonia gas from ammonium salts when the two are warmed together. This is the standard test for any ammonium salt.
- NH4Cl(aq)+ NaOH(aq)→ NaCl(aq)+ H2O(l)+ NH3(g)
- NH4NO3(aq)+ NaOH(aq)→ NaNO3(aq)+ H2O(l)+ NH3(g)
When carbon dioxide is bubbled through aqueous solutions of the caustic alkalis, the carbonates are formed. With excess of the gas, the hydrogencarbonates are formed.
- 2NaOH(aq)+ CO2(g)→ Na2CO3(aq)+ H2O(l)
- Na2CO3(aq)+ H2O(l)+ CO2(g)→ 2NaHCO3(aq)
Chlorine reacts with excess cold dilute caustic alkalis to form the hypochlorite (NaClO or KClO).
- 2NaOH(aq)+ Cl2(g)→ NaCl(aq)+ NaClO(aq)+ H2O(l)
- 2KOH(aq)+ Cl2(g)→ KCl(aq)+ KClO(aq)+ H2O(l)
If excess chlorine is bubbled through hot concentrated solutions of caustic alkalis, the chlorates are formed (NaClO3 or KClO3).
- 6NaOH(aq)+ 3Cl2(g)→ 5NaCl(aq)+ NaClO3(aq)+ 3H2O(l)
- 6KOH(aq)+ 3Cl2(g)→ 5KCl(aq)+ KClO3(aq)+ 3H2O(l)
Caustic alkalis are strong bases, which react with acids by neutralization reactions:
- NaOH(aq)+ HCl(aq)→ NaCl(aq)+ H2O(l)
- 2KOH(aq)+ H2SO4(aq)→ K2SO4(aq)+ 2H2O(l)
Chemical properties of Ca(OH)2

Action of heat: All hydroxides except potassium hydroxide (KOH) and sodium hydroxide (NaOH) decompose under the action of heat into the oxide and steam. Calcium hydroxide undergoes thermal decomposition to calcium oxide and steam: Ca(OH)2(s) → CaO(s) + H2O(g)

Reaction with acids: Calcium hydroxide dissolves readily in dilute hydrochloric acid and nitric acid to form the corresponding calcium salts: Ca(OH)2(aq) + 2HCl(aq) → CaCl2(aq) + 2H2O(l); Ca(OH)2(aq) + 2HNO3(aq) → Ca(NO3)2(aq) + 2H2O(l). Its reaction with dilute sulphuric acid is unsatisfactory due to the formation of calcium sulphate, which tends to precipitate on any undissolved lime.

Reaction with carbon dioxide: A solution of calcium hydroxide in water is called limewater. Carbon dioxide turns limewater “milky” due to the precipitation of white particles of calcium carbonate: Ca(OH)2(aq) + CO2(g) → CaCO3(s) + H2O(l)

Reaction with ammonium salts: Any ammonium salt will release ammonia gas when heated with calcium hydroxide solution. This is the common laboratory method for the preparation of ammonia gas: Ca(OH)2(aq) + 2NH4Cl(s) → 2NH3(g) + CaCl2(aq) + 2H2O(l); ionically: NH4+(aq) + OH–(aq) → NH3(g) + H2O(l)

Reaction with chlorine: When chlorine gas is passed over moist, cold, solid calcium hydroxide, bleaching powder is formed. The formula of bleaching powder is complex and is not known with certainty, but it probably contains Ca2+, Cl– and OH– ions and water. The reaction equation for its formation is usually written in “approximate” form: Ca(OH)2(s) + Cl2(g) → CaOCl2(s) + H2O(l). The accepted but highly approximate formula of bleaching powder is CaOCl2. It is known to contain a mixture of calcium hypochlorite, Ca(OCl)2, and basic calcium chloride, CaCl2.Ca(OH)2.H2O.
- When chlorine gas is bubbled into cold milk of lime, a mixture of calcium chloride and calcium hypochlorite, Ca(OCl)2, is formed: 2Ca(OH)2(aq) + 2Cl2(g) → CaCl2(aq) + Ca(OCl)2(aq) + 2H2O(l)
- When chlorine is bubbled through hot milk of lime, calcium chloride and calcium chlorate are formed: 6Ca(OH)2(aq) + 6Cl2(g) → 5CaCl2(aq) + Ca(ClO3)2(aq) + 6H2O(l)
Reaction with hydrogencarbonates: Calcium hydroxide removes temporary hardness of water by precipitating the carbonate from the hydrogencarbonate: Ca(HCO3)2(aq) + Ca(OH)2(aq) → 2CaCO3(s) + 2H2O(l)

The Uses of Metal Hydroxides
Describe the uses of metal hydroxides

Uses of NaOH
- Sodium hydroxide is used in the industrial manufacture of sodium salts and soaps, and in the extraction of aluminium from bauxite.
- It is used in the manufacture of paper, dyes and bleach.
- Sodium hydroxide is a very common laboratory chemical. It is used in the qualitative as well as quantitative analysis.
- It is used in the preparation of insoluble metal hydroxides.
- It is used in the manufacture of artificial textile fibres.
- It is used to produce mineral salts by neutralization reactions with mineral acids.
Uses of KOH
- Potassium hydroxide is used in the manufacture of soft soap.
- Use as an electrolyte: aqueous potassium hydroxide is used as an electrolyte in alkaline batteries.
- Manufacture of biodiesel: potassium hydroxide works well in the manufacture of biodiesel from the fats in vegetable oil.
- Preparation of salts: many potassium salts are prepared by neutralization reactions involving potassium hydroxide.
Uses of Ca(OH)2
- Treatment of acid soils: Calcium hydroxide is a cheap alkali which can be used in large quantities to treat acid soils. The alkali neutralizes the excessive soil acidity, just as any alkali reacts with an acid, to form salt and water.
- Softening of hard water: Calcium hydroxide, in precisely calculated quantities, is used in the softening of temporary hard water, as discussed earlier in chapter three. Ca(OH)2(aq) + Ca(HCO3)2(aq) → 2CaCO3(s) + 2H2O(l)
- Preparation of mortar and whitewash: Mortar is prepared by mixing 1 part of slaked lime with 3 parts of sand into a paste with water. Slaked lime mixed with sand and water is used to stick bricks together. Whitewash is a thick suspension of slaked lime in water. It is smeared on the walls of buildings to give them a smooth, white surface.
- Manufacture of bleaching powder: Bleaching powder is manufactured by passing chlorine gas over moist calcium hydroxide, as discussed previously in this chapter.
- Manufacture of paper: First, a suspension of calcium hydroxide in water (milk of lime) is treated with sulphur dioxide gas to form calcium hydrogensulphite: Ca(OH)2(aq) + 2SO2(g) → Ca(HSO3)2(aq). Then, a solution of calcium hydrogensulphite is used to remove the lignin from wood, leaving cellulose, which is used in paper manufacture. The action of removing lignin from wood is called bleaching of pulp.
- Manufacture of paints: Calcium hydroxide is used in the manufacture of undercoat paints, which are applied as the first coat on plaster walls or on wood before applying the final gloss paint.
- Extraction of metals: Sodium hydroxide is used in the extraction of aluminium from bauxite ore.
- Liberation of ammonia: Calcium hydroxide is used in the Solvay process to produce ammonia from ammonium chloride.
- Calcium hydroxide is a very important reagent in qualitative and quantitative analysis. It is used to determine the concentrations of acids in volumetric analysis, and in the detection of ions present in unknown solutions in qualitative analysis.
Uses of Mg(OH)2
- Magnesium hydroxide is used as an antacid to neutralize stomach acid, and as a laxative. It is sold for medical use as chewable tablets, capsules, and as liquids with various added flavours. It is primarily used to alleviate constipation, but also relieves indigestion and heartburn.
- Magnesium hydroxide is also used as an antiperspirant and as an armpit deodorant.
- Milk of magnesia (a suspension of magnesium hydroxide in water) is also applied and massaged onto the scalp, a few minutes before washing, to relieve symptoms of seborrhoea and dandruff. An additional use is for the treatment of acne or oily skin by applying topically, allowing it to dry, and then washing it off the face (or other body part). It is also said to be used for seborrhoeic dermatitis, which is a drying and flaking of the skin similar to dandruff but often occurring on the face.
- Magnesium hydroxide powder is used industrially as a non-hazardous alkali to neutralize acidic waste waters.
- Solid magnesium hydroxide is used as a fire and smoke retardant.
Carbonates and Hydrogen Carbonates

Preparation of Metal Carbonates and Hydrogen Carbonates by Different Methods
Prepare metal carbonates and hydrogen carbonates by different methods

There are many types and forms of metal carbonates in the earth’s crust. The most common and important carbonate is calcium carbonate, which occurs naturally in the form of chalk, limestone or marble. Carbonates of other metals such as iron, copper, manganese, lead and zinc also occur naturally. The shells of snails, tortoises, fish and eggs contain a great deal of carbonate.

There are four solid hydrogencarbonates: potassium, sodium, lithium and ammonium hydrogencarbonates. The hydrogencarbonates of calcium and magnesium occur only in solution form. Hydrogencarbonates are bases just like other ordinary carbonates. All metal hydrogencarbonates are soluble in water. Aluminium, zinc, iron, lead and copper hydrogencarbonates do not exist.
The method used to prepare a carbonate depends on whether the carbonate is soluble in water or not. Sodium, potassium and ammonium carbonates are the only soluble metal carbonates.

Soluble carbonates are prepared by passing carbon dioxide gas into the alkali. For example, sodium carbonate is formed when carbon dioxide gas is blown through sodium hydroxide solution: CO2(g) + 2NaOH(aq) → Na2CO3(aq) + H2O(l). If more carbon dioxide is bubbled through the solution, a second reaction occurs and sodium hydrogencarbonate is formed: Na2CO3(aq) + H2O(l) + CO2(g) → 2NaHCO3(aq). This is the easiest and most convenient way of preparing hydrogencarbonates.

Insoluble carbonates can be prepared by precipitation reactions. This involves adding a soluble carbonate to a solution of a salt of a heavy metal. For example, when a solution of zinc sulphate is mixed with sodium carbonate solution, a precipitate of zinc carbonate is formed: ZnSO4(aq) + Na2CO3(aq) → ZnCO3(s) + Na2SO4(aq). Likewise, copper carbonate can be precipitated by mixing copper (II) chloride solution with potassium carbonate solution: CuCl2(aq) + K2CO3(aq) → CuCO3(s) + 2KCl(aq)

Classification of Metal Carbonates
Classify metal carbonates

Metal carbonates are classified based on their solubility in water. Classified on this basis, we have soluble and insoluble carbonates. Potassium, sodium and ammonium carbonates are soluble in water. All other carbonates are insoluble in water.

The Chemical Properties of Metal Carbonates
Analyse the chemical properties of metal carbonates

Reaction with acids: Carbonates and hydrogencarbonates react with dilute acids to produce carbon dioxide, salt and water. For example: CaCO3(s) + 2HCl(aq) → CO2(g) + CaCl2(aq) + H2O(l); 2NaHCO3(aq) + H2SO4(aq) → 2CO2(g) + Na2SO4(aq) + 2H2O(l)

Action of heat: Excluding sodium and potassium carbonates, all other carbonates decompose on heating to form oxides of the corresponding metals and carbon dioxide. For example: PbCO3(s) → PbO(s) + CO2(g). A hydrogencarbonate decomposes to give a carbonate, carbon dioxide and water: 2NaHCO3(s) → Na2CO3(s) + H2O(l) + CO2(g)

Distinction between carbonates and hydrogencarbonates

We can easily distinguish a carbonate from a hydrogencarbonate by the use of magnesium sulphate. When a carbonate solution is added to a magnesium sulphate solution, a white precipitate of magnesium carbonate is formed: Na2CO3(aq) + MgSO4(aq) → MgCO3(s) + Na2SO4(aq). When a similar reaction is performed with a hydrogencarbonate solution, no white precipitate is formed: 2NaHCO3(aq) + MgSO4(aq) → Mg(HCO3)2(aq) + Na2SO4(aq). This is because magnesium hydrogencarbonate is soluble in water. However, when the solution is boiled, magnesium hydrogencarbonate decomposes to give a white precipitate of magnesium carbonate: Mg(HCO3)2(aq) → MgCO3(s) + H2O(l) + CO2(g)

The Uses of Carbonates and Hydrogen Carbonates
Describe the uses of carbonates and hydrogen carbonates

For easy understanding of the uses of carbonates and hydrogencarbonates, each of the most common of these compounds will be dealt with separately.

Uses of sodium carbonate, Na2CO3
- Sodium carbonate is widely used in the manufacture of sodium silicate, which is used to manufacture glass (for more details, and the reaction equations involved, refer to the uses of calcium carbonate).
- It is used to manufacture soap and paper.
- Manufacture of sodium hydroxide: sodium hydroxide is prepared by adding slaked lime (calcium hydroxide) to sodium carbonate solution while the mixture is continuously stirred. A precipitate of calcium carbonate forms and is filtered off: Na2CO3(aq) + Ca(OH)2(s) → 2NaOH(aq) + CaCO3(s). The filtrate, which is a solution of sodium hydroxide, is then evaporated to obtain crystals of sodium hydroxide.
- It is used in volumetric analysis to determine the concentration of acids (standardize acids).
- Sodium carbonate is used for softening hard water.
- It is used in medicine as an antacid.
- It is also important in photography and the textile industry (where it is used to facilitate the chemical bonding between the dye and the fibre).
- Manufacture of ‘water glass’: When sodium carbonate is heated together with silicon dioxide (SiO2), sodium silicate (Na2SiO3) and carbon dioxide are produced: Na2CO3(s) + SiO2(s) → Na2SiO3(s) + CO2(g). A concentrated solution of sodium silicate in water, known as water glass, is used as a preservative for eggs. It is also used as an adhesive in paper making and in television tubes.
Uses of sodium hydrogencarbonate, NaHCO3

This is the most important hydrogencarbonate. The following are some of its uses:
- Health salt: The salt finds a variety of medicinal uses, particularly in stopping diarrhoea and, as an antacid, in treating heartburn, indigestion, and other stomach disorders. It is also used to treat various kidney disorders and to increase the effectiveness of sulphonamides. A health salt is a mixture of sodium hydrogencarbonate, tartaric acid and sodium potassium tartrate. Acting as an antacid, sodium hydrogencarbonate neutralizes the hydrochloric acid produced in the stomach. Because it is very soluble, it acts very fast, thus providing quick relief from symptoms caused by excess acid in the stomach.
- It is used in the
removal of grease from clothes and other articles. Most of the cleaning
agents that are used for the removal of stains tend to contain a small
amount of sodium bicarbonate as a very important ingredient.
- Uses in industry: sodium bicarbonate is used in the textile industry for dyeing
and printing operations; in leather industry as a neutralizer of dyeing
agents in tanning processes; and in rubber and plastic industry as a
blowing agent, as it releases carbon dioxide which is used to shape the
object that is made from the rubber and plastic.
- Baking: sodium
bicarbonate is widely used to bake breads and cakes. The baking soda
contains a large quantity of sodium hydrogencarbonate and is always
accompanied by a small quantity of acid phosphate. When baking soda
mixed with dough is heated, the sodium hydrogencarbonate decomposes to
give bubbles of carbon dioxide which make the dough rise: 2NaHCO3(s) → Na2CO3(s) + CO2(g) + H2O(g). Commercial baking powder is a mixture of baking soda and organic acids such as citric or tartaric acid.
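As a quick arithmetic check on this decomposition, here is a minimal sketch in Python (the 5 g mass and the 24 dm³/mol room-temperature molar gas volume are illustrative assumptions, not figures from the text):

```python
# CO2 released when baking soda decomposes: 2 NaHCO3(s) -> Na2CO3(s) + CO2(g) + H2O(g)
M_NAHCO3 = 23 + 1 + 12 + 3 * 16        # molar mass of NaHCO3 = 84 g/mol
MOLAR_GAS_VOLUME = 24.0                # dm^3/mol at room temperature (approximate)

mass_g = 5.0                           # grams of baking soda (illustrative)
moles_nahco3 = mass_g / M_NAHCO3       # about 0.060 mol
moles_co2 = moles_nahco3 / 2           # 2 mol NaHCO3 give 1 mol CO2
volume_co2 = moles_co2 * MOLAR_GAS_VOLUME

print(f"{mass_g} g NaHCO3 releases about {volume_co2:.2f} dm^3 of CO2")  # ~0.71 dm^3
```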
Uses of calcium carbonate, CaCO3

Remember we learned earlier that calcium carbonate exists in three forms: chalk, limestone and marble. So, in this context, the uses of the three forms will be treated together.
- Limestone (calcium carbonate) is used in the blast furnace for the extraction of iron from its ore.
- It is used in the manufacture of glass. Heating a mixture of calcium carbonate (CaCO3), sodium carbonate (Na2CO3) and sand (SiO2)
very strongly, at a temperature of 1300 to 1400°C, produces a clear
melt. When this melt is cooled down, it hardens to form glass. Upon
heating these compounds, carbon dioxide is given off, leaving a mixture
of sodium silicate and calcium silicate with excess silicon dioxide:
CaCO3(s) + SiO2(s) → CaSiO3(s) + CO2(g); Na2CO3(s) + SiO2(s) → Na2SiO3(s) + CO2(g). Glass is a mixture of sodium and calcium silicates.
- Limestone is used to make cement. Limestone powder and clay are roasted and the product is ground to form a fine powder called cement.
- Calcium carbonate is used in the manufacture of lime, which is used as a fertilizer.
- Marble is used on walls of houses and other buildings to make them look attractive.
- Marble is used in road construction. Small pieces of marble are stuck together and then covered with tar to give a smooth, hard surface which can last for several years.
- A mixture of limestone and linseed oil is used as putty, which helps to hold windows, doors and other wooden structures in position in houses and buildings.
- Calcium carbonate, in the fine precipitate form, can be used in the manufacture of toothpaste.
Nitrates

Preparation of Metal Nitrates

Prepare metal nitrates

Nitrates
are important compounds widely known for industrial purposes. Sodium
nitrate has been known for quite a long time. It occurs in nature as
Chile saltpetre (NaNO3). Nitrates are usually prepared by methods which involve crystallization. This can be done by reacting nitric acid with metals, oxides, hydroxides or carbonates.
- In the laboratory, sodium nitrate may be prepared by neutralizing sodium hydroxide solution with nitric acid: NaOH(aq) + HNO3(aq) → NaNO3(aq) + H2O(l)
- Calcium nitrate may be made in the laboratory by the action of nitric acid upon calcium carbonate: CaCO3(s) + 2HNO3(aq) → Ca(NO3)2(aq) + H2O(l) + CO2(g)
- Ammonium nitrate can be made by neutralization of ammonia solution with nitric acid: NH3(aq) + HNO3(aq) → NH4NO3(aq)
- Lead nitrate can be obtained by the reaction between lead oxide and dilute nitric acid: PbO(s) + 2HNO3(aq) → Pb(NO3)2(aq) + H2O(l)
- Copper nitrate can be prepared by dissolving copper oxide in dilute nitric acid: CuO(s) + 2HNO3(aq) → Cu(NO3)2(aq) + H2O(l)
- Nitrates of certain metals can be prepared by reacting the metals with dilute or concentrated nitric acid: 3Pb(s) + 8HNO3(aq) → 3Pb(NO3)2(aq) + 4H2O(l) + 2NO(g); 3Cu(s) + 8HNO3(aq) → 3Cu(NO3)2(aq) + 4H2O(l) + 2NO(g)
All the nitrates of the common heavy metals, except lead nitrate, are very soluble in water and they are deliquescent. This makes it a matter of great difficulty to prepare their crystals. All the crystals are white in colour except those of copper (II) nitrate, which are blue. All nitrates are soluble in water.

The Chemical Properties of Metal Nitrates

Explain the chemical properties of metal nitrates

Action of heat

The
chemical properties of metal nitrates vary according to the position of
the metal in the reactivity series. Metal nitrates give a variety of
products when thermally decomposed. This fact is summarized below:
- Nitrates of the most reactive metals (potassium and sodium) decompose on heating to the nitrite and oxygen: 2KNO3(s) → 2KNO2(s) + O2(g)
- Nitrates of the moderately reactive metals decompose to the metal oxide, nitrogen dioxide and oxygen: 2Pb(NO3)2(s) → 2PbO(s) + 4NO2(g) + O2(g)
- Nitrates of the least reactive metals (silver and mercury) decompose to the metal, nitrogen dioxide and oxygen: 2AgNO3(s) → 2Ag(s) + 2NO2(g) + O2(g)

Ammonium nitrate is decomposed by heat into dinitrogen oxide and water: NH4NO3(s) → N2O(g) + 2H2O(g)

The brown ring test

All nitrates undergo the same reaction with iron (II) sulphate and concentrated sulphuric acid, and this reaction becomes a test for the soluble nitrates, the brown ring test.
This test is carried out by crushing a little potassium nitrate,
putting it in a test tube, adding water to a depth of about 2 cm and
then shaking the test tube to dissolve the potassium nitrate (note that
any metal nitrate could have been used instead of potassium nitrate).
This is followed by adding a little sulphuric acid and then two or three
crystals of iron (II) sulphate, which have also been crushed. The
contents are shaken to dissolve them. Finally, the test tube is held in a
slanting position and a slow continuous stream of concentrated
sulphuric acid is poured down the side of the test tube. The acid forms a
separate layer underneath the aqueous layer and, at the junction of the
two, a brown ring will be seen. This brown ring is the characteristic
test for all soluble nitrates.

Explanation
- The concentrated sulphuric acid and the nitrate react to produce nitric acid: KNO3(s) + H2SO4(aq) → KHSO4(aq) + HNO3(aq)
- The nitric acid formed is reduced by some of the iron (II) sulphate to nitrogen monoxide, NO: 6FeSO4(s) + 2HNO3(aq) + 3H2SO4(aq) → 3Fe2(SO4)3(aq) + 4H2O(l) + 2NO(g)
- The nitrogen monoxide produced then reacts with some of the remaining iron (II) sulphate to form a dark brown complex, FeSO4.NO, which appears as a ring: FeSO4(aq) + NO(g) → FeSO4.NO(aq)
The Uses of Metal Nitrates

Explain the uses of metal nitrates

Uses of metal nitrates include:
- Potassium nitrate is used in the preparation of gunpowder (a mixture of charcoal, sulphur and potassium nitrate) and other explosives. When gunpowder is ignited, it explodes. In addition, ammonium nitrate is also used in making explosives and blasting agents which are used in mines and quarries.
- Food preservation: nitrates and nitrites are used in
curing (salting and pickling) meats and fish. Not only do they kill
bacteria but they also produce a characteristic flavour, and give meat a
red or pink colour. Sodium nitrate or potassium nitrate is used as a
source of nitrite, NO2−. The nitrite breaks down in the meat into nitric oxide (nitrogen monoxide), NO, which helps to prevent oxidation.
- Manufacture of fertilizers: nitrogenous fertilizers are mainly nitrates. They include ammonium nitrate, potassium nitrate, sodium nitrate and calcium nitrate. Nitrogenous fertilizers are manufactured from nitric acid.
- Silver nitrate is used in photography, silvering mirrors, making marking ink, etc.
- Antiseptics: antiseptics are chemical agents that are used on the skin and mucous membranes to kill germs. Silver compounds such as silver nitrate and silver sulphadiazine have been used to prevent the infection of burns and some eye infections, and to destroy warts.
Chlorides

Preparation of Metal Chlorides by Direct and Indirect Methods

Prepare metal chlorides by direct and indirect methods

In
everyday life, we come across many chlorides. Sodium chloride,
literally known as table salt, is the commonest chloride that we use in
every walk of our lives. It is added to some foods to make them taste
better. In Tanzania, the salt is found in large deposits at Uvinza
in Kigoma region. The salty taste of sea water is mainly due to dissolved sodium and potassium chlorides.

Metal
chlorides can be prepared by direct and indirect methods. All metals
are attacked by chlorine to form chlorides. Metals above hydrogen in the
reactivity series can displace hydrogen of the hydrochloric acid to
form metal chlorides. Hydroxides, oxides or carbonates of potassium,
sodium or calcium react with dilute hydrochloric acid to produce
chlorides of respective metals. When dilute hydrochloric acid is added
to the aqueous salts of lead and silver, they produce lead and silver
chlorides respectively. These are the only two common insoluble
chlorides. The rest of the chlorides are all soluble in water.

Preparation of chlorides by direct method

Metal
chlorides can be prepared by direct action of chlorine gas on metals.
For example, iron (III) chloride is made by the action of chlorine on
iron. Figure 9.2 shows how iron (III) chloride can be prepared by
passing chlorine directly over a heated metal. An iron wire is placed in
a hard glass tube as shown and a stream of pure chlorine gas is passed
over it. When the wire is heated, by means of a burner, the wire glows
and the source of heat is removed. The reaction continues even without
further application of heat. Black crystals of iron (III) chloride are
collected in the small bottle, which acts as a condenser: 2Fe(s) + 3Cl2(g) → 2FeCl3(s). Because anhydrous iron (III) chloride is deliquescent, it should be removed and placed in a desiccator. Note: when hydrogen chloride gas is used instead of chlorine, iron (II) chloride is produced: Fe(s) + 2HCl(g) → FeCl2(s) + H2(g)

A similar procedure can be used for the preparation of aluminium chloride: 2Al(s) + 3Cl2(g) → 2AlCl3(s)

Preparation of soluble chlorides

Soluble
chlorides may be prepared by the action of dilute hydrochloric acid on
(i) oxides, (ii) hydroxides, (iii) carbonates, or (iv) metals:
- MgO(s) + 2HCl(aq) → MgCl2(aq) + H2O(l)
- KOH(aq) + HCl(aq) → KCl(aq) + H2O(l)
- CaCO3(s) + 2HCl(aq) → CaCl2(aq) + H2O(l) + CO2(g)
- Zn(s) + 2HCl(aq) → ZnCl2(aq) + H2(g)
Methods of preparation of soluble chlorides can be summarized by the four reactions above.

Preparation of insoluble chlorides
- Lead (II) chloride, PbCl2: This is a white insoluble substance prepared by the reaction of a solution of any soluble lead (II) salt with a solution of any soluble chloride, e.g. Pb(NO3)2(aq) + 2HCl(aq) → PbCl2(s) + 2HNO3(aq)
- Silver chloride, AgCl: This is a white insoluble compound prepared by adding a solution of any soluble silver salt to a solution of any soluble chloride, e.g. AgNO3(aq) + NaCl(aq) → AgCl(s) + NaNO3(aq). This is a double decomposition reaction (a short worked mass calculation follows below).
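As promised above, a minimal sketch of the mass arithmetic for this precipitation (the 17 g starting mass is an illustrative assumption; the atomic masses are rounded school values):

```python
# Mass of silver chloride precipitated from a given mass of silver nitrate:
# AgNO3(aq) + NaCl(aq) -> AgCl(s) + NaNO3(aq)   (1:1 mole ratio)
M_AGNO3 = 108 + 14 + 3 * 16   # 170 g/mol
M_AGCL = 108 + 35.5           # 143.5 g/mol

mass_agno3 = 17.0                        # grams dissolved (illustrative value)
moles = mass_agno3 / M_AGNO3             # 0.10 mol of AgNO3
mass_agcl = moles * M_AGCL               # 14.35 g of AgCl precipitate

print(f"{mass_agno3} g AgNO3 gives {mass_agcl:.2f} g AgCl (excess NaCl assumed)")
```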
The Chemical Properties of Metal Chlorides

Explain the chemical properties of metal chlorides

Chemical properties of metal chlorides include:
- Metal chlorides liberate hydrogen chloride gas when warmed together with concentrated sulphuric acid, e.g. 2NaCl(s) + H2SO4(aq) → Na2SO4(aq) + 2HCl(g)
- When concentrated sulphuric acid is added to a mixture of a chloride and an oxidizing agent, and the mixture is then warmed, a greenish gas, chlorine, is evolved: 2NaCl(s) + 2H2SO4(aq) + MnO2(s) → MnSO4(aq) + Na2SO4(aq) + 2H2O(l) + Cl2(g)
- Some chlorides are easily hydrolyzed by water, e.g. magnesium, zinc and iron chlorides. If solutions of these chlorides are evaporated, basic salts or oxides of the metals remain.
- Most chlorides are more volatile than most other salts, a property which makes them suitable for use in the "flame test", in which certain metals can be detected by the colour their vapour imparts to the Bunsen flame.
- When heated, ammonium chloride sublimes, forming ammonia gas and hydrogen chloride gas: NH4Cl(s) ⇌ NH3(g) + HCl(g)
- Hydrated chlorides give off their water of crystallization when heated: MgCl2.6H2O(s) → MgCl2(s) + 6H2O(g)
- Soluble chlorides react with silver nitrate or lead nitrate solutions to form insoluble white precipitates of silver chloride and lead chloride respectively.
The Uses of Metal Chlorides

Explain the uses of metal chlorides

Sodium chloride, NaCl
- Sodium chloride is used as a precursor, or starting point, in the manufacture of other important chemicals such as sodium hydroxide, sodium carbonate, sodium hydrogencarbonate, sodium sulphate, sodium carbonate decahydrate and other sodium compounds.
- It is used in the manufacture of
soaps: In the industrial manufacture of soap, tallow (fat from animals
such as cattle and sheep) or vegetable fat is heated with sodium hydroxide
through a process called saponification. Once the saponification
reaction is over, sodium chloride is added to precipitate the soap.
- As a chloride, it yields hydrochloric acid and chlorine gas; the latter is used as a bleaching agent and in the manufacture of hypochlorite solutions.
- Common salt is used to add flavour to our food at home.
- It is used as a food preservative, acting through its dehydrating effect on bacteria. It is an important preservative used in the preservation of
cheese, dairy products, meat, pickles and sauces.
- Brine (concentrated sodium chloride solution) is used for the extraction of sodium metal by the electrolytic method. Chlorine gas and sodium hydroxide are the
major by-products from the electrolysis of brine.
- Melting ice:
sodium chloride has a property of lowering the melting point of ice.
Therefore, it is spread on icy roads during winter to quicken the
melting of ice.
- It is used in glazing (shiny and smooth finishing) pottery
- Cleansing agent: sodium chloride has been used as a cleansing agent for a long time. In ancient times, it was used for household cleaning simply
by rubbing it against surfaces. It is also an ingredient of soaps,
detergents and shampoos.
Ammonium chloride, NH4Cl
- Ammonium chloride is used as a constituent of the Leclanché voltaic cell.
- Ammonium chloride is sold in blocks for use in cleaning the tip of a soldering iron, and can also be included in solder as a flux.
- Other uses of ammonium chloride include use as a feed supplement for cattle,
in hair shampoo, in textile printing, in the glue that binds plywood, as
an ingredient in nutritive media for yeast, in cleaning products, and
as a cough medicine – its expectorant action is caused by irritative
action on the bronchial mucosa. This causes the production of excess
respiratory tract fluid, which presumably is easier to cough up.
- Its biological applications include using it as an energy source for microbiological growth of organisms.
Aluminium chloride, AlCl3
- Aluminium chloride is used for petroleum refining. It is also used in the manufacture of synthetic lubricating oils.
- It is used for manufacturing of paints, detergents and has also been used as an antiperspirant.
- It is used for production of synthetic rubber.
Calcium chloride, CaCl2
- Calcium chloride is used mainly as a drying agent for most gases (except
ammonia, with which it forms a compound). This is because calcium
chloride is a deliquescent salt. It tends to absorb moisture from gases.
- Like sodium chloride, it is spread on icy roads in winter to help melt the ice. Calcium chloride melts the ice faster than any other chemical.
- Calcium chloride has a salty taste and is used as the main ingredient in many types of food items including snacks.
- Calcium chloride prevents spoilage of food and is popularly used as a
preservative in packed foods. It also helps to keep the food healthy and
fresh for a longer duration.
- Being strongly hygroscopic, a layer of calcium chloride is applied on roads and in mines to minimize dust problems.
Magnesium chloride, MgCl2
- Magnesium chloride is used to lubricate cotton threads in the spinning industry.
- Dentistry: magnesium chloride is a constituent of the cement used to fill cavities in teeth.
Potassium chloride, KCl
- The majority of potassium chloride is used for making fertilizers (muriate of potash), since the growth of many plants is limited by their potassium intake.
- As a chemical feedstock, it is used for the manufacture of potassium hydroxide and potassium metal.
- It is also used in medicine, scientific applications and in food processing.
- Along with sodium chloride and lithium chloride, potassium chloride is used as a flux for the gas welding of aluminium.
- It is widely used as an additive in the paper industry and in the manufacture of dyes.
Sulphates

Preparation of Soluble and Insoluble Sulphates

Prepare soluble and insoluble sulphates

Metal sulphates are generally soluble in water, except for the commonly known insoluble sulphates of lead (PbSO4) and barium (BaSO4). Calcium sulphate is sparingly soluble. The sulphates of the alkali metals and those of the alkaline earth metals (Mg, Ca, Sr and Ba) are very stable.

Preparation of soluble sulphates

Soluble sulphates can be prepared in the laboratory by reacting metals, oxides,
hydroxides or carbonates with dilute sulphuric acid and then isolating
the crystals. The sulphates are isolated by the usual methods of
preparing soluble salts.
- Fe(s) + H2SO4(aq) → FeSO4(aq) + H2(g)
- MgO(s) + H2SO4(aq) → MgSO4(aq) + H2O(l)
- 2NaOH(aq) + H2SO4(aq) → Na2SO4(aq) + 2H2O(l)
- CaCO3(s) + H2SO4(aq) → CaSO4(aq) + H2O(l) + CO2(g)
Preparation of insoluble sulphates

The best method to prepare insoluble metal sulphates is by precipitation reactions. This is achieved by reacting their soluble salts with dilute sulphuric acid:
- BaCl2(aq) + H2SO4(aq) → BaSO4(s) + 2HCl(aq)
- Pb(NO3)2(aq) + H2SO4(aq) → PbSO4(s) + 2HNO3(aq)
The sulphates are isolated by the usual methods of preparing insoluble salts.

Chemical Properties of Sulphates

Explain chemical properties of sulphates

Sulphates of all metals are normal salts and have the following chemical properties:
- All sulphates give a white precipitate when treated with aqueous salts of lead and barium, e.g. BaCl2(aq) + K2SO4(aq) → BaSO4(s) + 2KCl(aq); Na2SO4(aq) + Pb(NO3)2(aq) → PbSO4(s) + 2NaNO3(aq)
- Barium and lead sulphates are the only two common insoluble sulphates. Calcium sulphate is sparingly soluble. The rest of the sulphates are soluble in water.
- Sulphates of the metals in groups I and II of the periodic table are very stable to heat. Strong heating decomposes some of the sulphates of the heavier metals. Iron (II) sulphate disproportionates when heated: 2FeSO4(s) → Fe2O3(s) + SO2(g) + SO3(g). Iron (III) sulphate gives a good yield of sulphur trioxide gas: Fe2(SO4)3(s) → Fe2O3(s) + 3SO3(g)
- Hydrated sulphates decompose on heating to form oxides, water and sulphur trioxide: CuSO4.5H2O(s) → CuO(s) + 5H2O(g) + SO3(g); FeSO4.7H2O(s) → FeO(s) + 7H2O(g) + SO3(g). But hydrated sodium sulphate is stable to heat. It only loses its water of crystallization when heated: Na2SO4.10H2O(s) → Na2SO4(s) + 10H2O(g)

Uses of Sulphates

Describe uses of sulphates

Sulphates are salts of considerable importance in our everyday lives. The following are the uses of some sulphates:

Sodium sulphate (Na2SO4.10H2O), also known as Glauber's salt: It is used as a mild purgative in medicine. The anhydrous salt, Na2SO4, is used as a laxative. It also finds use in the manufacture of glass.

Calcium sulphate: In the form of Plaster of Paris (CaSO4.½H2O), it is used to make plaster casts that are used in hospitals for the repair of broken limbs. When in the form of gypsum (CaSO4.2H2O),
it is used in the manufacture of cement, moulds, wall plasters and
wallboards, and inexpensive art objects. Among the many other uses of
calcium sulphate are its uses as a pigment in white paints, coating
agent in papers, in the manufacture of sulphuric acid and sulphur, and
as a drying agent for many laboratory and commercial purposes.

Some uses of aluminium sulphate are as follows:
- It is used in paper making where it binds the paper fibres together.
- It is also used in the manufacture of aluminium hydroxide, which is used
for mordant dyeing. The aluminium hydroxide formed by the hydrolysis of
the sulphate is deposited on the cloth fibres, where it helps the dye to
stick well on the fibre. The salt is also used in paper sizing (i.e.
giving it body and strength), and waterproofing the cloth.
- Aluminium sulphate is an important chemical in the treatment of urban water. It
precipitates colloidal matter from water. Microorganisms e.g. bacteria
and algae are also captured during the coagulation process and
precipitated with mud.
- Aluminium sulphate is used in the “foam”
type fire extinguishers. The sulphate is mixed with sodium carbonate or
hydrogencarbonate to produce carbon dioxide and aluminium (III)
hydroxide, Al(OH)3, which mix together to form the foam. The foam is effective in excluding air from oil fires, hence helping to put the fire out.
- Aluminium sulphate is used in the tanning of leather. It is also used as a fertilizer.
Iron (II) sulphate is used:
- in the manufacture of ink and dye
- in tanning leather (iron-tanning)
- to make tablets prescribed to patients with iron deficiency.
- to prepare a reddish-brown iron (III) oxide (‘red oxide’) which is used as a pigment.
- as a weed killer (herbicide) and as a fungicide.
- for treating sewage and water.
- to coagulate (bind together) blood in slaughterhouses.
Copper (II) sulphate finds many uses which include:
- Manufacture of copper fungicides (CuSO4.5H2O) such as red and blue copper, which are sprayed on crops to control certain species of fungi.
- Manufacture of certain green pigments.
- Copper (II) sulphate is used for making washes such as the "Bordeaux mixture", used in spraying vines and potatoes to kill moulds which would otherwise
injure the plants.
- Manufacture of insecticides such as copper arsenite and Paris green for control of fungus diseases.
- Correction of copper deficiency in soils and in animals
- It is used as a growth stimulant in pigs and broiler chickens.
- It is also used as a molluscide for the destruction of slugs and snails, particularly the snail, which is a host of liver fluke.
- Copper sulphate is used as a timber preservative for the prevention of wood rot.
- It is used as an electrolyte in copper-plating and as a catalyst in preparation of ethanol.
Zinc sulphate has the following uses:
- It is used as an emetic and for the treatment of certain skin diseases.
- Hydrated zinc sulphate (ZnSO4.7H2O)
is used as an agent in printing and textile dyeing, as an antiseptic
and in preserving wood and hides. It is also used in zinc-plating by electrolysis.
- It is used as an antibacterial treatment for sewage, a miticide and an herbicide.
- It is used as a component of cosmetics (such as skin fresheners) and an ingredient in some deodorants.
- Diluted "White Vitriol" (ZnSO4.7H2O) is used in medicine in the preparation of eye lotions and mouth washes. It is also used to assist the healing of wounds.
- Ammonium sulphate is used as a fertilizer.
- Magnesium sulphate, in the form of Epsom salt (MgSO4.7H2O), is used as a mild purgative.
- Barium sulphate (BaSO4) is used in the manufacture of white pigments, in white paints.
- The alums are used in dyes and in the leather industry. Alums are double salts of general formula X2SO4.Y2(SO4)3.24H2O, where X is Na, K or NH4 and Y is Fe(III), Al or Cr. The two commonest alums are: potash alum, K2SO4.Al2(SO4)3.24H2O (colourless); and iron (III) alum, (NH4)2SO4.Fe2(SO4)3.24H2O (purple).
If you plot galaxies by the estimated number of stars they have and the calculated rate at which stars are forming, then you find that most galaxies lie along a line.
One popular model of galaxy formation has been that stars form in the central region of a galaxy first, and then later stars further out form.
Astronomers have used a lens bigger than a galaxy to observe the faintest and youngest galaxies ever found.
A candle that burns twice as bright burns half as long, so the saying goes. We’ve thought of galaxies in the same way, in that the brightest galaxies (ones with high rates of star production) are likely in a cosmologically brief period of high activity. But new computer models suggest that might not be the case.
When you think of a galaxy, you likely think of a spiral galaxy. More specifically, you likely imagine a spiral galaxy with two large sweeping arms of stars, such as the image of the Whirlpool Galaxy above. Such a galaxy is known as a grand design spiral, and while it’s an iconic style, it isn’t particularly common among galaxies.
When galaxies collide, the diffuse material surrounding them can collide to produce a radio phoenix.
There’s a new record for the most distant galaxy, or the youngest galaxy depending on your point of view.
There are more nearby dwarf galaxies than we thought, and that may answer a mystery of dark matter.
An ultracompact dwarf galaxy has only about 100 million stars, but they are packed into a region only 200 light years across. In such a galaxy you might see a million stars with the naked eye.
2 step math word problems
This 2-step math word problems app helps you solve math problems quickly and easily. We will give you answers to your homework.
The Best 2 step math word problems
2-step math word problems can be found online or in mathematical textbooks. Intermediate algebra involves solving a variety of problems built on addition, subtraction, multiplication, and division, and it typically requires applying previous knowledge to new situations. Examples include finding the dimensions of a rectangular plot using a right triangle with unknown lengths, or finding the value of an unknown quantity in a series by summing certain terms. Intermediate algebra problems tend to be more complex than elementary algebra problems because they require skills beyond basic arithmetic, so students may need additional practice before they are ready to tackle them. To solve an intermediate algebra problem effectively, students must be comfortable adding and subtracting integers, and they need to understand multiplication and division. To practice intermediate algebra, students can use online resources such as Khan Academy or their math textbooks. By practicing intermediate algebra problems at home, students will develop better math skills that will help them tackle any future algebra problem that comes their way.
There's a lot of math to be done in medical school. In addition to the courses you take, you'll also need to master complicated calculations used in anatomy and physiology. And you'll need to do it quickly and accurately if you want to keep up with your classmates. Even those who excel at math have trouble staying on top of all the formulas they're expected to know by the end of their first year. One way many students manage is by keeping a notebook handy. This allows them to jot down important formulas as well as any mistakes they make along the way. Other students find it helpful to memorize key formulas that are useful in most classes. They can then use these formulas in other classes whenever they come across them (even if they don't understand the math behind it). One downside of this method is that it requires frequent review.
Math word problems are one of the most difficult things to learn for children. This is because it requires a lot of concentration and dedication. But there is no need to stress about these problems anymore because there is an app that can help solve these math word problems for free. This app, called Math Solver, is designed to help solve math word problems in an easy way. The main feature of this app is that it uses algorithms to solve math word problems for free. This means that you do not have to deal with complex calculations when trying to solve math word problems. All you need to do is write down your problem, and the app will do the rest. So if you want an easy way to solve math word problems, then download this app and start using it today!
Linear inequalities compare two quantities, stating that one is less than, greater than, or at most or at least the other. To solve a linear inequality, you isolate the variable just as you would for the corresponding equation: add or subtract the same quantity on both sides, and multiply or divide both sides by the same nonzero number. The one extra rule is that multiplying or dividing both sides by a negative number reverses the direction of the inequality sign. For example, if a person earns $6 per day for 7 days, she earns $42 for the week, so the condition "earnings are under $80 per week" is satisfied. When solving a linear inequality with two variables, keep track of which operation is applied to which term, and remember that any operation applied to one side must be applied to the whole of the other side.
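A minimal sketch of this procedure in Python (the inequalities used are illustrative examples, not ones from the text): isolate x on one side, flipping the inequality sign when dividing by a negative coefficient.

```python
# Solve a*x + b < c for x by doing the same operation to both sides.
def solve_linear_inequality(a: float, b: float, c: float) -> str:
    """Return the solution set of a*x + b < c as a string."""
    if a == 0:
        return "all x" if b < c else "no solution"
    bound = (c - b) / a
    # Dividing both sides by a negative number reverses the inequality sign.
    return f"x < {bound}" if a > 0 else f"x > {bound}"

print(solve_linear_inequality(3, 5, 20))   # 3x + 5 < 20  ->  x < 5.0
print(solve_linear_inequality(-2, 1, 7))   # -2x + 1 < 7  ->  x > -3.0
```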
Linear equations describe straight lines. A line through two points (x1, y1) and (x2, y2) has slope m = (y2 − y1) / (x2 − x1), that is, the change in y divided by the change in x. The slope tells you how fast one quantity rises or falls as the other increases. For example, if y rises 1 cm for every 1 cm that x increases, the slope is 1; if y falls 1 cm for every 1 cm that x increases, the slope is −1. If both points lie at the same height, the line is horizontal and the slope is 0, while steeper lines have slopes of larger magnitude. Once you know the slope and one point on the line, you can write its equation in point-slope form, y − y1 = m(x − x1), and plug in any value of x to get the corresponding y.
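A small worked example of the slope formula in Python (the points used are illustrative):

```python
# Slope of the line through two points: m = (y2 - y1) / (x2 - x1)
def slope(p1: tuple[float, float], p2: tuple[float, float]) -> float:
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

m = slope((1, 2), (4, 11))   # rises 9 over a run of 3
print(m)                     # 3.0
# Point-slope form through (1, 2): y - 2 = 3 * (x - 1), i.e. y = 3x - 1
```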
Great app! The camera quality is great and it gives you multiple solutions to a problem, shows it to you on a graph, and gives you the steps necessary to solve your problem. It also has a history feature where you can see all your past problems.
It's been many years since math class, so helping my kids with math homework can be challenging (and I was actually pretty good at math in school!). My daughter's teacher told her class about this app, since it explains how to arrive at the solution. They're obviously not allowed to use it for tests, but it helps with homework questions that just need some extra help. My mom, a former math teacher, was also impressed by this app.
Seismic magnitude scales
Seismic magnitude scales are used to describe the overall strength or "size" of an earthquake. These are distinguished from seismic intensity scales that categorize the intensity or severity of ground shaking (quaking) caused by an earthquake at a given location. Magnitudes are usually determined from measurements of an earthquake's seismic waves as recorded on a seismogram. Magnitude scales vary on what aspect of the seismic waves are measured and how they are measured. Different magnitude scales are necessary because of differences in earthquakes, the information available, and the purposes for which the magnitudes are used.
Earthquake magnitude and ground-shaking intensity
The Earth's crust is stressed by tectonic forces. When this stress becomes great enough to rupture the crust, or to overcome the friction that prevents one block of crust from slipping past another, energy is released, some of it in the form of various kinds of seismic waves that cause ground-shaking, or quaking.
Magnitude is an estimate of the relative "size" or strength of an earthquake, and thus its potential for causing ground-shaking. It is "approximately related to the released seismic energy."
Intensity refers to the strength or force of shaking at a given location, and can be related to the peak ground velocity. With an isoseismal map of the observed intensities (see illustration) an earthquake's magnitude can be estimated from both the maximum intensity observed (usually but not always near the epicenter), and from the extent of the area where the earthquake was felt.
The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil (such as fill) can amplify seismic waves, often at a considerable distance from the source, while sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was nearly 100 km from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland. A similar effect channeled seismic waves between the other major faults in the area.
An earthquake radiates energy in the form of different kinds of seismic waves, whose characteristics reflect the nature of both the rupture and the earth's crust the waves travel through. Determination of an earthquake's magnitude generally involves identifying specific kinds of these waves on a seismogram, and then measuring one or more characteristics of a wave, such as its timing, orientation, amplitude, frequency, or duration. Additional adjustments are made for distance, kind of crust, and the characteristics of the seismograph that recorded the seismogram.
The various magnitude scales represent different ways of deriving magnitude from such information as is available. All magnitude scales retain the logarithmic scale as devised by Charles Richter, and are adjusted so the mid-range approximately correlates with the original "Richter" scale.
Most magnitude scales are based on measurements of only part of an earthquake's seismic wave-train, and therefore are incomplete. This results in systematic underestimation of magnitude in certain cases, a condition called saturation.
Since 2005 the International Association of Seismology and Physics of the Earth's Interior (IASPEI) has standardized the measurement procedures and equations for the principal magnitude scales: ML, Ms, mb, mB and mbLg.
"Richter" magnitude scale
The first scale for measuring earthquake magnitudes, developed in 1935 by Charles F. Richter and popularly known as the "Richter" scale, is actually the Local magnitude scale, labeled ML. Richter established two features now common to all magnitude scales.
- First, the scale is logarithmic, so that each unit represents a ten-fold increase in the amplitude of the seismic waves. As the energy of a wave is proportional to A^1.5, where A denotes the amplitude, each unit of magnitude represents a 10^1.5 ≈ 32-fold increase in the seismic energy (strength) of an earthquake (see the short calculation after this list).
- Second, Richter arbitrarily defined the zero point of the scale to be where an earthquake at a distance of 100 km makes a maximum horizontal displacement of 0.001 millimeters (1 µm, or 0.00004 in.) on a seismogram recorded with a Wood-Anderson torsion seismograph. Subsequent magnitude scales are calibrated to be approximately in accord with the original "Richter" (local) scale around magnitude 6.
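To make the arithmetic concrete, here is a minimal sketch based only on the two scaling rules just stated (ten-fold amplitude and 10^1.5 energy per unit of magnitude); the magnitudes compared are illustrative:

```python
# Relative amplitude and radiated-energy ratios implied by a magnitude difference.
# One whole unit of magnitude = 10x the wave amplitude and 10^1.5 (~32x) the energy.
def amplitude_ratio(m1: float, m2: float) -> float:
    return 10.0 ** (m1 - m2)

def energy_ratio(m1: float, m2: float) -> float:
    return 10.0 ** (1.5 * (m1 - m2))

print(amplitude_ratio(7.0, 5.0))  # 100.0 -> 100x larger ground motion
print(energy_ratio(7.0, 5.0))     # ~1000  -> roughly 1000x more radiated energy
```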
All "Local" (ML) magnitudes are based on the maximum amplitude of the ground shaking, without distinguishing the different seismic waves. They underestimate the strength:
- of distant earthquakes (over ~600 km) because of attenuation of the S-waves,
- of deep earthquakes because the surface waves are smaller, and
- of strong earthquakes (over M ~7) because they do not take into account the duration of shaking.
The original "Richter" scale, developed in the geological context of Southern California and Nevada, was later found to be inaccurate for earthquakes in the central and eastern parts of the continent (everywhere east of the Rocky Mountains) because of differences in the continental crust. All these problems prompted the development of other scales.
Other "Local" magnitude scales
Richter's original "local" scale has been adapted for other localities. These may be labelled "ML" or, with a lowercase "l", "Ml". (Not to be confused with the Russian surface-wave MLH scale.) Whether the values are comparable depends on whether the local conditions have been adequately determined and the formula suitably adjusted.
Japan Meteorological Agency magnitude scale
In Japan, for shallow (depth < 60 km) earthquakes within 600 km, the Japanese Meteorological Agency calculates a magnitude labeled MJMA or MJ. (These should not be confused with the moment magnitudes JMA calculates, which are labeled Mw(JMA) or M(JMA), nor with the Shindo intensity scale.) JMA magnitudes are based (as typical with local scales) on the maximum amplitude of the ground motion; they agree "rather well" with the seismic moment magnitude Mw in the range of 4.5 to 7.5, but underestimate larger magnitudes.
Body-wave magnitude scales
The original "body-wave magnitude" – mB (uppercase "B") – was developed by Gutenberg (1945b, 1945c) and Gutenberg & Richter (1956) to overcome the distance and magnitude limitations of the ML scale inherent in the use of surface waves. mB is based on the P- and S-waves, measured over a longer period, and does not saturate until around M 8. However, it is not sensitive to events smaller than about M 5.5. Use of mB as originally defined has been largely abandoned, now replaced by the standardized mB(BB) scale.

The mb scale (lowercase "m" and "b") is similar to mB, but uses only P-waves measured in the first few seconds on a specific model of short-period seismograph. It was introduced in the 1960s with the establishment of the World-Wide Standardized Seismograph Network (WWSSN); the short period improves detection of smaller events, and better discriminates between tectonic earthquakes and underground nuclear explosions.

Measurement of mb has changed several times. As originally defined by Gutenberg (1945c), mb was based on the maximum amplitude of waves in the first 10 seconds or more. However, the length of the period influences the magnitude obtained. Early USGS/NEIC practice was to measure mb on the first second (just the first few P-waves), but since 1978 they measure the first twenty seconds. The modern practice is to measure the short-period mb scale at less than three seconds, while the broadband mB(BB) scale is measured at periods of up to 30 seconds.
mbLg scale
The regional mbLg scale – also denoted mb_Lg, mbLg, MLg (USGS), Mn, and mN – was developed by Nuttli (1973) for a problem the original ML scale could not handle: all of North America east of the Rocky Mountains. The ML scale was developed in southern California, which lies on blocks of oceanic crust, typically basalt or sedimentary rock, which have been accreted to the continent. East of the Rockies the continent is a craton, a thick and largely stable mass of continental crust that is largely granite, a harder rock with different seismic characteristics. In this area the ML scale gives anomalous results for earthquakes which by other measures seemed equivalent to quakes in California.
Nuttli resolved this by measuring the amplitude of short-period (~1 sec.) Lg waves, a complex form of the Love wave which, although a surface wave, he found provided a result more closely related to the mb scale than the Ms scale. Lg waves attenuate quickly along any oceanic path, but propagate well through the granitic continental crust, and MbLg is often used in areas of stable continental crust; it is especially useful for detecting underground nuclear explosions.
Surface-wave magnitude scales
Surface waves propagate along the Earth's surface, and are principally either Rayleigh waves or Love waves. For shallow earthquakes the surface waves carry most of the energy of the earthquake, and are the most destructive. Deeper earthquakes, having less interaction with the surface, produce weaker surface waves.
The surface-wave magnitude scale, variously denoted as Ms or MS, is based on a procedure developed by Beno Gutenberg in 1942 for measuring shallow earthquakes stronger or more distant than Richter's original scale could handle. Notably, it measured the amplitude of surface waves (which generally produce the largest amplitudes) for a period of "about 20 seconds". The Ms scale approximately agrees with ML at ~6, then diverges by as much as half a magnitude. A revision by Nuttli (1983), sometimes labeled MSn, measures only waves of the first second.
A modification – the "Moscow-Prague formula" – was proposed in 1962, and recommended by the IASPEI in 1967; this is the basis of the standardized Ms20 scale (Ms_20, Ms(20)). A "broad-band" variant (Ms_BB, Ms(BB)) measures the largest velocity amplitude in the Rayleigh-wave train for periods up to 60 seconds. The MS7 scale used in China is a variant of Ms calibrated for use with the Chinese-made "type 763" long-period seismograph.
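For reference, the "Moscow–Prague formula" underlying the standardized Ms_20 scale is commonly quoted in the form below; this is supplied from standard seismological practice (the IASPEI 1967 recommendation) rather than from the text above. Here A is the maximum ground displacement in micrometres, T the wave period in seconds (about 20 s), and Δ the epicentral distance in degrees:

```latex
M_s = \log_{10}\left(\frac{A}{T}\right)_{\max} + 1.66\,\log_{10}\Delta + 3.3
```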
The MLH scale used in some parts of Russia is actually a surface-wave magnitude.
Moment magnitude and energy magnitude scales
Other magnitude scales are based on aspects of seismic waves that only indirectly and incompletely reflect the force of an earthquake, involve other factors, and are generally limited in some respect of magnitude, focal depth, or distance. The moment magnitude scale – Mw – developed by Kanamori (1977) and Hanks & Kanamori (1979), is based on an earthquake's seismic moment, M0, a measure of how much work an earthquake does in sliding one patch of rock past another patch of rock. Seismic moment is measured in Newton-meters (Nm or N·m) in the SI system of measurement, or dyne-centimeters (dyn-cm; 1 dyn-cm = 10^-7 Nm) in the older CGS system. In the simplest case the moment can be calculated knowing only the amount of slip, the area of the surface ruptured or slipped, and a factor for the resistance or friction encountered. These factors can be estimated for an existing fault to determine the magnitude of past earthquakes, or what might be anticipated for the future.
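The notes below record the IASPEI standard relation Mw = (2/3)(log10 M0 − 9.1), with M0 in newton-metres. A minimal sketch of the "simplest case" calculation just described; the fault dimensions, the 1 m slip, and the crustal rigidity of about 3×10^10 Pa are illustrative assumptions, not values from the text:

```python
import math

# Seismic moment: M0 = rigidity x ruptured area x average slip (in N*m), then
# Mw = (2/3) * (log10(M0) - 9.1)  -- the IASPEI standard form cited in the notes.
def moment_magnitude(rigidity_pa: float, area_m2: float, slip_m: float) -> float:
    m0 = rigidity_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative rupture: a 30 km x 15 km fault patch slipping 1 m,
# with a typical crustal rigidity of ~3e10 Pa (assumed value).
print(round(moment_magnitude(3e10, 30e3 * 15e3, 1.0), 2))  # 6.69
```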
An earthquake's seismic moment can be estimated in various ways, which are the bases of the Mwb, Mwr, Mwc, Mww, Mwp, Mi, and Mwpd scales, all subtypes of the generic Mw scale. See Moment magnitude scale § Subtypes for details.
Seismic moment is considered the most objective measure of an earthquake's "size" in regard of total energy. However, it is based on a simple model of rupture, and on certain simplifying assumptions; it does not account for the fact that the proportion of energy radiated as seismic waves varies among earthquakes.
Much of an earthquake's total energy as measured by Mw is dissipated as friction (resulting in heating of the crust). An earthquake's potential to cause strong ground shaking depends on the comparatively small fraction of energy radiated as seismic waves, and is better measured on the energy magnitude scale, Me. The proportion of total energy radiated as seismic waves varies greatly depending on focal mechanism and tectonic environment; Me and Mw for very similar earthquakes can differ by as much as 1.4 units.
Despite the usefulness of the Me scale, it is not generally used due to difficulties in estimating the radiated seismic energy.
Energy class (K-class) scale
K (from the Russian word класс, "class", in the sense of a category) is a measure of earthquake magnitude in the energy class or K-class system, developed in 1955 by Soviet seismologists in the remote Garm (Tajikistan) region of Central Asia; in revised form it is still used for local and regional quakes in many states formerly aligned with the Soviet Union (including Cuba). Based on seismic energy (K = log10 ES, with ES in joules), difficulty in implementing it using the technology of the time led to revisions in 1958 and 1960. Adaptation to local conditions has led to various regional K scales, such as KF and KS.

K values are logarithmic, similar to Richter-style magnitudes, but have a different scaling and zero point. K values in the range of 12 to 15 correspond approximately to M 4.5 to 6. M(K) or MK indicates a magnitude M calculated from an energy class K.
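As a rough orientation only: the correspondence quoted above (K ≈ 12–15 for M ≈ 4.5–6) is consistent with a linear regression of roughly K ≈ 4 + 1.8M reported in the regional literature (e.g. Rautian et al. 2007, cited in the notes). The sketch below assumes that form; the coefficients vary by region and are not a standard:

```python
# Approximate energy-class <-> magnitude conversion, assuming K = 4 + 1.8*M
# (a regional regression; treat the coefficients as illustrative, not standard).
def k_from_m(m: float) -> float:
    return 4.0 + 1.8 * m

def m_from_k(k: float) -> float:
    return (k - 4.0) / 1.8

print(k_from_m(4.5), k_from_m(6.0))  # 12.1 14.8 -- matches K 12-15 <-> M 4.5-6
print(m_from_k(13.0))                # 5.0
```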
Tsunami magnitude scales
Earthquakes that generate tsunamis generally rupture relatively slowly, delivering more energy at longer periods (lower frequencies) than generally used for measuring magnitudes. Any skew in the spectral distribution can result in larger, or smaller, tsunamis than expected for a nominal magnitude. The tsunami magnitude scale, Mt, is based on a correlation by Katsuyuki Abe of earthquake seismic moment (M0) with the amplitude of tsunami waves as measured by tidal gauges. Originally intended for estimating the magnitude of historic earthquakes where seismic data is lacking but tidal data exist, the correlation can be reversed to predict tidal height from earthquake magnitude. (Not to be confused with the height of a tidal wave, or run-up, which is an intensity effect controlled by local topography.) Under low-noise conditions, tsunami waves as little as 5 cm can be predicted, corresponding to an earthquake of M ~6.5.
Another scale of particular importance for tsunami warnings is the mantle magnitude scale, Mm. This is based on Rayleigh waves that penetrate into the Earth's mantle, and can be determined quickly, and without complete knowledge of other parameters such as the earthquake's depth.
Duration and Coda magnitude scales
Md designates various scales that estimate magnitude from the duration or length of some part of the seismic wave-train. This is especially useful for measuring local or regional earthquakes, both powerful earthquakes that might drive the seismometer off-scale (a problem with the analog instruments formerly used), preventing measurement of the maximum wave amplitude, and weak earthquakes, whose maximum amplitude is not accurately measured. Even for distant earthquakes, measuring the duration of the shaking (as well as the amplitude) provides a better measure of the earthquake's total energy. Measurement of duration is incorporated in some modern scales, such as Mwpd and mBc.
Mc scales usually measure the duration or amplitude of a part of the seismic wave, the coda. For short distances (less than ~100 km) these can provide a quick estimate of magnitude before the quake's exact location is known.
Macroseismic magnitude scales
Magnitude scales generally are based on instrumental measurement of some aspect of the seismic wave as recorded on a seismogram. Where such records do not exist, magnitudes can be estimated from reports of the macroseismic events such as described by intensity scales.
One approach for doing this (developed by Beno Gutenberg and Charles Richter in 1942) relates the maximum intensity observed (presumably this is over the epicenter), denoted I0 (capital I with a subscripted zero), to the magnitude. It has been recommended that magnitudes calculated on this basis be labeled Mw(I0), but they are sometimes labeled with a more generic Mms.
Another approach is to make an isoseismal map showing the area over which a given level of intensity was felt. The size of the "felt area" can also be related to the magnitude (based on the work of Frankel 1994 and Johnston 1996). While the recommended label for magnitudes derived in this way is M0(An), the more commonly seen label is Mfa. A variant, MLa, adapted to California and Hawaii, derives the Local magnitude (ML) from the size of the area affected by a given intensity. MI (upper-case letter "I", distinguished from the lower-case letter in Mi) has been used for moment magnitudes estimated from isoseismal intensities calculated per Johnston 1996.
Peak ground velocity (PGV) and Peak ground acceleration (PGA) are measures of the force that causes destructive ground shaking. In Japan, a network of strong-motion accelerometers provides PGA data that permits site-specific correlation with different magnitude earthquakes. This correlation can be inverted to estimate the ground shaking at that site due to an earthquake of a given magnitude at a given distance. From this a map showing areas of likely damage can be prepared within minutes of an actual earthquake.
Other magnitude scales
Many earthquake magnitude scales have been developed or proposed, with some never gaining broad acceptance and remaining only as obscure references in historical catalogs of earthquakes. Other scales have been used without a definite name, often referred to as "the method of Smith (1965)" (or similar language), with the authors often revising their method. On top of this, seismological networks vary on how they measure seismograms. Where the details of how a magnitude has been determined are unknown, catalogs will specify the scale as unknown (variously Unk, Ukn, or UK). In such cases, the magnitude is considered generic and approximate.
An Mh ("magnitude determined by hand") label has been used where the magnitude is too small or the data too poor (typically from analog equipment) to determine a Local magnitude, or multiple shocks or cultural noise complicates the records. The Southern California Seismic Network uses this "magnitude" where the data fail the quality criteria.
A special case is the Seismicity of the Earth catalog of Gutenberg & Richter (1954). Hailed as a milestone as a comprehensive global catalog of earthquakes with uniformly calculated magnitudes, they never published the full details of how they determined those magnitudes. Consequently, while some catalogs identify these magnitudes as MGR, others use UK (meaning "computational method unknown"). Subsequent study found many of the Ms values to be "considerably overestimated." Further study has found that most of the MGR magnitudes "are basically Ms for large shocks shallower than 40 km, but are basically mB for large shocks at depths of 40–60 km." Gutenberg and Richter also used an italic, non-bold "M without subscript" – also used as a generic magnitude, and not to be confused with the bold, non-italic M used for moment magnitude – and a "unified magnitude" m (bolding added). While these terms (with various adjustments) were used in scientific articles into the 1970s, they are now only of historical interest. An ordinary (non-italic, non-bold) capital "M" without subscript is often used to refer to magnitude generically, where an exact value or the specific scale used is not important.
- Bormann, Wendt & Di Giacomo 2013, p. 37. The relationship between magnitude and the energy released is complicated. See §3.3.3 for details.
- Bormann, Wendt & Di Giacomo 2013.
- Bolt 1993, p. 164 et seq..
- Bolt 1993, pp. 170–171.
- Bolt 1993, p. 170.
- See Bolt 1993, Chapters 2 and 3, for a very readable explanation of these waves and their interpretation. J. R. Kayal's excellent description of seismic waves can be found here.
- See Havskov & Ottemöller 2009, §1.4, pp. 20–21, for a short explanation, or MNSOP-2 EX 3.1 2012 for a technical description.
- Chung & Bernreuter 1980, p. 1.
- Bormann, Wendt & Di Giacomo 2013, p. 18.
- IASPEI IS 3.3 2014, pp. 2–3.
- Kanamori 1983, p. 187.
- Richter 1935, p. 7.
- Spence, Sipkin & Choy 1989, p. 61.
- Richter 1935, pp. 5; Chung & Bernreuter 1980, p. 10. Subsequently redefined by Hutton & Boore 1987 as 10 mm of motion by an ML 3 quake at 17 km.
- Chung & Bernreuter 1980, p. 1; Kanamori 1983, p. 187, figure 2.
- Chung & Bernreuter 1980, p. ix.
- The "USGS Earthquake Magnitude Policy" for reporting earthquake magnitudes to the public as formulated by the USGS Earthquake Magnitude Working Group was implemented January 18, 2002, and posted at https://earthquake.usgs.gov/aboutus/docs/020204mag_policy.php. It has since been removed; a copy is archived at the Wayback Machine, and the essential part can be found here.
- Bormann, Wendt & Di Giacomo 2013, §3.2.4, p. 59.
- Rautian & Leith 2002, pp. 158, 162.
- See Datasheet 3.1 in NMSOP-2 for a partial compilation and references.
- Katsumata 1996; Bormann, Wendt & Di Giacomo 2013, p. 78; Doi 2010.
- Bormann & Saul 2009, p. 2478.
- See also figure 3.70 in NMSOP-2.
- Havskov & Ottemöller 2009, p. 17.
- Bormann, Wendt & Di Giacomo 2013, p. 37; Havskov & Ottemöller 2009, §6.5. See also Abe 1981.
- Havskov & Ottemöller 2009, p. 191.
- Bormann & Saul 2009, p. 2482.
- MNSOP-2/IASPEI IS 3.3 2014, §4.2, pp. 15–16.
- Kanamori 1983, pp. 189, 196; Chung & Bernreuter 1980, p. 5.
- Bormann, Wendt & Di Giacomo 2013, pp. 37, 39; Bolt (1993, pp. 88–93) examines this at length.
- Bormann, Wendt & Di Giacomo 2013, p. 103.
- IASPEI IS 3.3 2014, p. 18.
- Nuttli 1983, p. 104; Bormann, Wendt & Di Giacomo 2013, p. 103.
- IASPEI/NMSOP-2 IS 3.2 2013, p. 8.
- Bormann, Wendt & Di Giacomo 2013. The "g" subscript refers to the granitic layer through which Lg waves propagate. Chen & Pomeroy 1980, p. 4. See also J. R. Kayal, "Seismic Waves and Earthquake Location", here, page 5.
- Nuttli 1973, p. 881.
- Bormann, Wendt & Di Giacomo 2013.
- Havskov & Ottemöller 2009, pp. 17–19. See especially figure 1-10.
- Gutenberg 1945a; based on work by Gutenberg & Richter 1936.
- Gutenberg 1945a.
- Kanamori 1983, p. 187.
- Stover & Coffman 1993, p. 3.
- Bormann, Wendt & Di Giacomo 2013, pp. 81–84.
- MNSOP-2 DS 3.1 2012, p. 8.
- Bormann et al. 2007, p. 118.
- Rautian & Leith 2002, pp. 162, 164.
- The IASPEI standard formula for deriving moment magnitude from seismic moment is Mw = (2/3)(log10 M0 − 9.1). Formula 3.68 in Bormann, Wendt & Di Giacomo 2013, p. 125.
- Anderson 2003, p. 944.
- Havskov & Ottemöller 2009, p. 198
- Havskov & Ottemöller 2009, p. 198; Bormann, Wendt & Di Giacomo 2013, p. 22.
- Bormann, Wendt & Di Giacomo 2013, p. 23
- NMSOP-2 IS 3.6 2012, §7.
- See Bormann, Wendt & Di Giacomo 2013 for an extended discussion.
- NMSOP-2 IS 3.6 2012, §5.
- Bormann, Wendt & Di Giacomo 2013, p. 131.
- Rautian et al. 2007, p. 581.
- Rautian et al. 2007; NMSOP-2 IS 3.7 2012; Bormann, Wendt & Di Giacomo 2013.
- Bindi et al. 2011, p. 330. Additional regression formulas for various regions can be found in Rautian et al. 2007, Tables 1 and 2. See also IS 3.7 2012, p. 17.
- Rautian & Leith 2002, p. 164.
- Bormann, Wendt & Di Giacomo 2013, p. 124.
- Abe 1979; Abe 1989, p. 28. More precisely, Mt is based on far-field tsunami wave amplitudes in order to avoid some complications that happen near the source. Abe 1979, p. 1566.
- Blackford 1984, p. 29.
- Abe 1989, p. 28.
- Bormann, Wendt & Di Giacomo 2013.
- Bormann, Wendt & Di Giacomo 2013.
- Havskov & Ottemöller 2009, §6.3.
- Bormann, Wendt & Di Giacomo 2013, pp. 71–72.
- Musson & Cecić 2012, p. 2.
- Gutenberg & Richter 1942.
- Grünthal 2011, p. 240.
- Grünthal 2011, p. 240.
- Stover & Coffman 1993, p. 3.
- Engdahl & Villaseñor 2002.
- Makris & Black 2004, p. 1032.
- Doi 2010.
- Hutton, Woessner & Haukson 2010, pp. 431, 433.
- NMSOP-2 IS 3.2, pp. 1–2.
- Abe 1981, p. 74; Engdahl & Villaseñor 2002, p. 667.
- Engdahl & Villaseñor 2002, p. 688.
- Abe & Noguchi 1983.
- Abe 1981, p. 72.
- Defined as "a weighted mean between MB and MS." Gutenberg & Richter 1956a, p. 1.
- "At Pasadena, a weighted mean is taken between mB as found directly from body waves, and mS, the corresponding value derived from MS ...." Gutenberg & Richter 1956a, p. 2.
- E.g., Kanamori 1977.
- Abe, K. (April 1979), "Size of great earthquakes of 1837 – 1874 inferred from tsunami data", Journal of Geophysical Research, 84 (B4): 1561–1568, Bibcode:1979JGR....84.1561A, doi:10.1029/JB084iB04p01561.
- Abe, K. (October 1981), "Magnitudes of large shallow earthquakes from 1904 to 1980", Physics of the Earth and Planetary Interiors, 27 (1): 72–92, Bibcode:1981PEPI...27...72A, doi:10.1016/0031-9201(81)90088-1.
- Abe, K. (September 1989), "Quantification of tsunamigenic earthquakes by the Mt scale", Tectonophysics, 166 (1–3): 27–34, Bibcode:1989Tectp.166...27A, doi:10.1016/0040-1951(89)90202-3.
- Abe, K; Noguchi, S. (August 1983), "Revision of magnitudes of large shallow earthquakes, 1897-1912", Physics of the Earth and Planetary Interiors, 33 (1): 1–11, Bibcode:1983PEPI...33....1A, doi:10.1016/0031-9201(83)90002-X.
- Anderson, J. G. (2003), "Chapter 57: Strong-Motion Seismology", International Handbook of Earthquake & Engineering Seismology, Part B, pp. 937–966, ISBN 0-12-440658-0.
- Bindi, D.; Parolai, S.; Oth, K.; Abdrakhmatov, A.; Muraliev, A.; Zschau, J. (October 2011), "Intensity prediction equations for Central Asia", Geophysical Journal International, 187: 327–337, Bibcode:2011GeoJI.187..327B, doi:10.1111/j.1365-246X.2011.05142.x.
- Blackford, M. E. (1984), "Use of the Abe magnitude scale by the Tsunami Warning System." (PDF), Science of Tsunami Hazards, 2 (1): 27–30.
- Bolt, B. A. (1993), Earthquakes and geological discovery, Scientific American Library, ISBN 0-7167-5040-6.
- Bormann, P., ed. (2012), New Manual of Seismological Observatory Practice 2 (NMSOP-2), Potsdam: IASPEI/GFZ German Research Centre for Geosciences, doi:10.2312/GFZ.NMSOP-2.
- Bormann, P. (2012), "Data Sheet 3.1: Magnitude calibration formulas and tables, comments on their use and complementary data." (PDF), in Bormann (ed.), New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_DS_3.1.
- Bormann, P. (2012), "Exercise 3.1: Magnitude determinations" (PDF), in Bormann (ed.), New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_EX_3.
- Bormann, P. (2013), "Information Sheet 3.2: Proposal for unique magnitude and amplitude nomenclature" (PDF), in Bormann (ed.), New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.3.
- Bormann, P.; Dewey, J. W. (2014), "Information Sheet 3.3: The new IASPEI standards for determining magnitudes from digital data and their relation to classical magnitudes." (PDF), in Bormann (ed.), New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.3.
- Bormann, P.; Fugita, K.; MacKey, K. G.; Gusev, A. (July 2012), "Information Sheet 3.7: The Russian K-class system, its relationships to magnitudes and its potential for future development and application" (PDF), in Bormann (ed.), New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.7.
- Bormann, P.; Saul, J. (2009), "Earthquake Magnitude" (PDF), Encyclopedia of Complexity and Applied Systems Science, vol. 3, pp. 2473–2496.
- Bormann, P.; Wendt, S.; Di Giacomo, D. (2013), "Chapter 3: Seismic Sources and Source Parameters" (PDF), in Bormann (ed.), New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_ch3.
- Choy, G. L.; Boatwright, J. L. (2012), "Information Sheet 3.6: Radiated seismic energy and energy magnitude" (PDF), in Bormann (ed.), New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_IS_3.6.
- Choy, G. L.; Boatwright, J. L.; Kirby, S. (2001), "The Radiated Seismic Energy and Apparent Stress of Interplate and Intraslab Earthquakes at Subduction Zone Environments: Implications for Seismic Hazard Estimation" (PDF), U.S. Geological Survey, Open-File Report 01-0005.
- Chung, D. H.; Bernreuter, D. L. (1980), Regional Relationships Among Earthquake Magnitude Scales., OSTI 5073993, NUREG/CR-1457.
- Doi, K. (2010), "Operational Procedures of Contributing Agencies" (PDF), Bulletin of the International Seismological Centre, 47 (7–12): 25, ISSN 2309-236X.
- Engdahl, E. R.; Villaseñor, A. (2002), "Chapter 41: Global Seismicity: 1900–1999", in Lee, W.H.K.; Kanamori, H.; Jennings, P.C.; Kisslinger, C. (eds.), International Handbook of Earthquake and Engineering Seismology (PDF), vol. Part A, Academic Press, pp. 665–690, ISBN 0-12-440652-1.
- Frankel, A. (1994), "Implications of felt area-magnitude relations for earthquake scaling and the average frequency of perceptible ground motion", Bulletin of the Seismological Society of America, 84 (2): 462–465.
- Grünthal, G. (2011), "Earthquakes, Intensity", in Gupta, H. (ed.), Encyclopedia of Solid Earth Geophysics, pp. 237–242, ISBN 978-90-481-8701-0.
- Gutenberg, B. (January 1945a), "Amplitudes of surface Waves and magnitudes of shallow earthquakes" (PDF), Bulletin of the Seismological Society of America, 35 (1): 3–12.
- Gutenberg, B. (1 April 1945c), "Magnitude determination for deep-focus earthquakes" (PDF), Bulletin of the Seismological Society of America, 35 (3): 117–130.
- Gutenberg, B.; Richter, C. F. (1936), "On seismic waves (third paper)", Gerlands Beiträge zur Geophysik, 47: 73–131.
- Gutenberg, B.; Richter, C. F. (1942), "Earthquake magnitude, intensity, energy, and acceleration", Bulletin of the Seismological Society of America: 163–191, ISSN 0037-1106.
- Gutenberg, B.; Richter, C. F. (1954), Seismicity of the Earth and Associated Phenomena (2nd ed.), Princeton University Press, 310p.
- Gutenberg, B.; Richter, C. F. (1956a), "Magnitude and energy of earthquakes" (PDF), Annali di Geofisica, 9: 1–15.
- Havskov, J.; Ottemöller, L. (October 2009), Processing Earthquake Data (PDF).
- Hough, S.E. (2007), Richter's scale: measure of an earthquake, measure of a man, Princeton University Press, ISBN 978-0-691-12807-8, retrieved 10 December 2011.
- Hutton, L. K.; Boore, D. M. (December 1987), "The ML scale in Southern California" (PDF), Bulletin of the Seismological Society of America, 77 (6).
- Hutton, Kate; Woessner, Jochen; Haukson, Egill (April 2010), "Earthquake Monitoring in Southern California for Seventy-Seven Years (1932—2008)" (PDF), Bulletin of the Seismological Society of America, 100 (1): 423–446, doi:10.1785/0120090130.
- Johnston, A. (1996), "Seismic moment assessment of earthquakes in stable continental regions – II. Historical seismicity", Geophysical Journal International, 125 (3): 639–678, Bibcode:1996GeoJI.125..639J, doi:10.1111/j.1365-246x.1996.tb06015.x.
- Kanamori, H. (July 10, 1977), "The energy release in great earthquakes" (PDF), Journal of Geophysical Research, 82 (20): 2981–2987, Bibcode:1977JGR....82.2981K, doi:10.1029/JB082i020p02981.
- Kanamori, H. (April 1983), "Magnitude Scale and Quantification of Earthquake" (PDF), Tectonophysics, 93 (3–4): 185–199, Bibcode:1983Tectp..93..185K, doi:10.1016/0040-1951(83)90273-1.
- Katsumata, A. (June 1996), "Comparison of magnitudes estimated by the Japan Meteorological Agency with moment magnitudes for intermediate and deep earthquakes.", Bulletin of the Seismological Society of America, 86 (3): 832–842.
- Makris, N.; Black, C. J. (September 2004), "Evaluation of Peak Ground Velocity as a "Good" Intensity Measure for Near-Source Ground Motions", Journal of Engineering Mechanics, 130 (9): 1032–1044, doi:10.1061/(asce)0733-9399(2004)130:9(1032).
- Musson, R. M.; Cecić, I. (2012), "Chapter 12: Intensity and Intensity Scales" (PDF), in Bormann (ed.), New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_ch12.
- Nuttli, O. W. (10 February 1973), "Seismic wave attenuation and magnitude relations for eastern North America", Journal of Geophysical Research, 78 (5): 876–885, Bibcode:1973JGR....78..876N, doi:10.1029/JB078i005p00876.
- Nuttli, O. W. (April 1983), "Average seismic source-parameter relations for mid-plate earthquakes", Bulletin of the Seismological Society of America, 73 (2): 519–535.
- Rautian, T. G.; Khalturin, V. I.; Fujita, K.; Mackey, K. G.; Kendall, A. D. (November–December 2007), "Origins and Methodology of the Russian Energy K-Class System and Its Relationship to Magnitude Scales" (PDF), Seismological Research Letters, 78 (6): 579–590, doi:10.1785/gssrl.78.6.579.
- Rautian, T.; Leith, W. S. (September 2002), "Developing Composite Regional Catalogs of the Seismicity of the Former Soviet Union." (PDF), 24th Seismic Research Review – Nuclear Explosion Monitoring: Innovation and Integration, Ponte Vedra Beach, Florida.
- Richter, C. F. (January 1935), "An Instrumental Earthquake Magnitude Scale" (PDF), Bulletin of the Seismological Society of America, 25 (1): 1–32.
- Spence, W.; Sipkin, S. A.; Choy, G. L. (1989), "Measuring the size of an Earthquake" (PDF), Earthquakes and Volcanoes, 21 (1): 58–63.
- Stover, C. W.; Coffman, J. L. (1993), Seismicity of the United States, 1568–1989 (Revised) (PDF), U.S. Geological Survey Professional Paper 1527.
- Perspective: a graphical comparison of earthquake energy release – Pacific Tsunami Warning Center
- USGS ShakeMap: provides near-real-time maps of ground motion and shaking intensity following significant earthquakes.
In the upper latitudes of the Northern Hemisphere, North America, Europe, and Asia hold significant expanses of land. The boreal forests ring the regions immediately south of the Arctic Circle in a vast expanse that easily rivals the rainforest regions of the world. The northern boreal ecoregion accounts for about one third of the planet's total forest area. This broad circumpolar band runs through most of Canada, Russia and Scandinavia.
The circumpolar range of the boreal forest. About two-thirds of the area is in Eurasia. The sector in Eastern Canada lies farthest from the North Pole. Map source: Hare and Ritchie (1972).

In North America, the boreal eco-region extends from Alaska to Newfoundland, bordering the tundra to the north and touching the Great Lakes to the south. Known in Russia as the taiga, the boreal forest constitutes one of the largest biomes in the world, covering some 12 million square kilometres.
Overlying formerly glaciated areas and areas of patchy permafrost on both continents, the forest is a mosaic of successional and subclimax plant communities sensitive to varying environmental conditions. It has relatively few species, being composed mainly of spruces, firs, and other conifers, with a smattering of deciduous trees, mostly along waterways. The boreal forest seems associated with the location of the summertime arctic air mass: it begins roughly where that air mass reaches its southern limit in summer, and it extends to the air mass's southernmost position in winter.
Thus, it lies between the summer and winter positions of the arctic front. The boreal forest corresponds with regions of subarctic and cold continental climate. Long, severe winters (up to six months with mean temperatures below freezing) and short summers (50 to 100 frost-free days) are characteristic, as is a wide range of temperatures between the lows of winter and highs of summer. For example, Verkhoyansk, Russia, has recorded extremes of −90 °F and +90 °F (about −68 °C and +32 °C).
Mean annual precipitation is 15 to 20 inches, but low evaporation rates make this a humid climate. Also characteristic of the boreal forest are innumerable water bodies: bogs, fens, marshes, shallow lakes, rivers and wetlands, mixed in among the forest and holding a vast amount of water. The winters are long and severe, while summers are short though often warm.

Forests cover approximately 19.2 million square miles (49.8 million square kilometres), about 33% of the world's land surface area. They are broken down as follows:

| | mil. sq. mi. | mil. sq. km. |
|---|---|---|
| Boreal forests | 6.4 | 16.6 |
| Other forests | 12.8 | 33.2 |

Source: The World Bank 1996

Forest area in selected countries:

| Country | Total forest area (millions of ha) | Percentage of global forested area |
|---|---|---|
| Russia | 764 | 22 |
| Brazil | 566 | 16 |
| Canada | 247 | 7 |
| U.S.A. | 210 | 6 |
| China | 134 | 4 |
| Indonesia | 116 | 3 |
| Zaire | 113 | 3 |
| Nordic countries | 53 | 2 |
| All other | 1239 | 36 |

There are latitudinal zones within the boreal forest.
Running north to south, one finds: the tundra/taiga ecotone; an open coniferous forest (the section most properly called taiga); the characteristic closed-canopy needleleaf evergreen boreal forest; and a mixed needleleaf evergreen/broadleaf deciduous forest, the ecotone with the temperate broadleaf deciduous forest. In the US, this southern ecotone is dominated by white pine (Pinus strobus), sugar maple (Acer saccharum), and American beech (Fagus grandifolia).
Extensions of the boreal forest occur down the spines of mountains at high elevations. In eastern North America, such high-elevation extensions reach south to New Jersey, then West Virginia, and again into the southern Appalachians. The trees are red spruce and balsam fir in the north, and Fraser fir in the south. Fir tends to grow at the highest elevations. Yellow birch becomes prominent also, with a smattering of eastern hemlock.
In the southern Appalachians, these forests start at about 4,500 feet; in the north, where it is cooler, they can be found at sea level (Maine and Canada). The boreal forest in the southern Appalachians is disjunct and, due to its relatively small areal coverage, is regarded as a highly endangered ecosystem.

Boreal forest soils

Soils in this forest are called podzols, from the Russian word for ash (the colour of these soils), and their development is termed podzolization.
Podzolization occurs as a result of the acid soil solution produced under needleleaf trees. This means that iron and aluminum are leached from the A horizon and deposited in the B horizon. Clays and other minerals migrate to lower layers, leaving the upper one sandy in texture. Because of the low temperatures, decomposition is fairly slow, and soil microorganism activity is limited. The highly lignified needles of the dominant trees decompose slowly, creating a mat over the soil.
Tannins and other acids cause the upper soil layers to become very acidic, and the permanent shade from the evergreen trees keeps evaporation to a minimum, so the soils are often wet. In some cases they are waterlogged nearly all year. This tends to limit nutrient cycling, compared to more southerly forests.

Major plant species

By far the most dominant tree species are conifers, which are well adapted to the harsh climate and thin, acidic soils.
Black and white spruce are characteristic species of this region, along with tamarack, jack pine and balsam fir. Needleleaf, coniferous (gymnosperm) trees, the dominant plants of the boreal biome, comprise a very few species in four main genera: the evergreen spruce (Picea), fir (Abies), and pine (Pinus), and the deciduous larch or tamarack (Larix). In North America, one or two species of fir and one or two species of spruce are dominant.
Across Scandinavia and western Russia the Scots pine is a common component of the taiga. Broadleaf deciduous trees and shrubs are members of early successional stages of both primary and secondary succession. Most common are alder (Alnus), birch (Betula), and aspen (Populus). It is now recognized that so-called climax communities in the boreal undergo an approximately 200-year cycle between nitrogen-depleting spruce-fir forests and nitrogen-accumulating aspen forests.
The conical or spire-shaped needleleaf trees common to the boreal forest are adapted to the cold, to the physiological drought of winter, and to the short growing season:

- Conical shape: promotes shedding of snow and prevents loss of branches.
- Needleleaf: narrowness reduces the surface area through which water may be lost (transpired), especially during winter when the frozen ground prevents plants from replenishing their water supply. The needles of boreal conifers also have thick waxy coatings (a waterproof cuticle) in which stomata are sunken and protected from drying winds.
- Evergreen habit: retention of foliage allows plants to photosynthesize as soon as temperatures permit in spring, rather than having to spend part of the short growing season merely growing leaves. (Note: deciduous larch are dominant in areas underlain by nearly continuous permafrost, with a climate too dry and cold even for the waxy needles of spruce and fir.)
- Dark colour: the dark green of spruce and fir needles helps the foliage absorb maximum heat from the sun and begin photosynthesis as early as possible.

In European and Asian boreal forests, the spruces are replaced by two other species, the Norway spruce and the Siberian spruce. Throughout the vast Siberian section of Russia, and in wet areas, larches predominate. Larches are deciduous conifers, and are more abundant along the northern extremes.
The severe winters and short growing season favour evergreen species. These trees are also able to shed snow in the winter, which keeps them from breaking under the load, and to begin photosynthesis early in the spring, when the weather becomes favourable. Muskegs - low-lying, water-filled depressions or bogs - are common throughout the boreal forest, occurring in poorly drained glacial depressions.
Sphagnum moss forms a spongy mat over ponded water. Growing on this mat are tundra species such as cotton grass and shrubs of the heath family. Black spruce and larch ring the edge. Sphagnum moss may enhance the waterlogging: once established, it can hold up to 4000% of its dry weight in water. It often limits which species can establish once it gains a foothold. Some of the trees can reproduce by layering, since the probability of seeds germinating is low.
Pine forests, in North America dominated by the jack pine (Pinus banksiana), occur on sandy outwash plains and former dune areas. These are low-nutrient, droughty substrates not tolerated by spruce and fir. Larch forests claim the thin, waterlogged substrate in level areas underlain with permafrost. These forests are open, with understories of shrubs, mosses and lichens. In Alaska, stands of Larix laricina are localized phenomena, but in Siberia east of the Yenisei River the extreme continentality and nearly continuous permafrost give rise to vast areas dominated by Larix dahurica.
Major animal species

The North American boreal forest offers breeding grounds to over 200 bird species, as well as being home to species such as caribou, lynx, black bear, moose, coyote, timber wolf and recovering populations of wood bison. Since most of the trees bear cones, there are animals that have evolved adaptations to obtain seeds from the cones; conversely, the trees have adaptations to deter them, usually spines on the cones.
Crossbills (which have crossed beaks) are highly efficient seed extractors. Herbivores have to cope with highly lignified food, which is hard to digest. Moose are common large herbivores in the boreal. Caribou use the forest for shelter in the worst parts of the winter. Moose (Alces alces, known as elk in Europe) generally prefer deciduous browse and herbaceous plants, while caribou forage for lichens and can eat conifer needles.
Thus, the two large herbivores have different food requirements - moose being an early successional (young forest) species, and caribou a late successional (older forest) species. The beaver (Castor canadensis), on which the early North American fur trade was based, is also a creature of early successional communities; indeed, its dams along streams create such habitats. Bears are abundant in the boreal, along with wolves (where they haven't been exterminated).
Snowshoe hares and lynx, which have unusually large feet to walk across snow, are common throughout the eco-region. Fur-bearing predators like the lynx (Felis lynx) and various members of the weasel family (e.g., wolverine, fisher, pine marten, mink, ermine, and sable) are perhaps most characteristic of the boreal forest proper. The mammalian herbivores on which they feed include the snowshoe or varying hare, red squirrel, lemmings, and voles.
Among birds, insect-eaters like the wood warblers are migratory and leave after the breeding season. Seed-eaters (e.g., finches and sparrows) and omnivores (e.g., ravens) tend to be year-round residents. During poor cone years, normal residents like the evening grosbeak, pine siskin, and red crossbill leave the taiga in winter and may be seen at residential bird feeders.

Role of forest fire

Fire is a crucial disturbance factor in the boreal ecoregion.
It facilitates the destruction of old, diseased trees along with the pests that are associated with those trees. Many animals are able to escape natural fires and some trees such as aspen and jack pine actually require fires to stimulate their reproductive cycles. Furthermore, the nutrient-rich ash left behind helps fuel plant growth. A patchy mosaic of plant communities left in the wake of fire action provides the variety required to sustain different species of wildlife.
Fire, which removes the lichen from the ground, can severely impact caribou but favours moose, which browse on the advance growth (new saplings) that emerges after the fire. As human populations encroach on this remote forest area, they increase the frequency of fires, and caribou populations decline.

Human Activity

Although the boreal forest conjures up images of vast pristine wilderness - an unending expanse of conifers in an area that has been left untouched by human interference and industrial development - it is increasingly threatened by a range of resource extraction and other activities.
Although the population in this ecozone is relatively sparse, there are many small communities which rely on various resource extraction industries such as forestry and mining. Unless they diversify, their existence is extremely tenuous, often relying on one mill or mine as their economic mainstay. For generations, the boreal forest has also been home to First Nations people including, in North America, the Cree, Innu, Métis, Dene, Gwich'in and Athabascan.
Traditional Aboriginal lifestyles are also deeply tied to the continued existence of wildlife.

Major industrial developments in the boreal ecoregion include logging, mining, and hydroelectric development. These activities have had severe impacts on many areas, and these areas will face increasing pressure for resource exploitation in the coming years. Approximately 90% of all logging that occurs in this region is by clear-cutting, using heavy, capital-intensive machinery.
As wood shortages become more and more prevalent in the southern regions of Canada, timber that was once considered unprofitable to log in the north is now being targeted to sustain "fibre supply". Vast regions of Canada's boreal forests are under lease to forestry companies, mostly for the production of pulp and paper.

The "high mineral potential" in this region is also very problematic. Specific concerns include the disposal of acidic effluent from tailings, containment of radioactivity and the effects of emissions from processing plants.
The construction of most hydroelectric facilities (dams) in Canada has taken place in the boreal ecoregion. Massive hydroelectric development has changed stream-flow patterns and flooded large areas, dramatically altering the landscape and causing the production of methylmercury. Acid rain also continues to be a serious problem for the lakes and shallow soils of the boreal region, despite legislation curbing acid-precipitation-producing emissions in both the US and Canada.
Furthermore, organochlorine and heavy-metal contamination, especially by mercury and cadmium, continues to be a source of concern.

Threats to the Boreal Regions

With these facts at hand, is the situation in the boreal regions alarming? All in all there are problems, many of which are easy to overlook because the boreal regions do not yet attract much public concern. Remember, at these extreme polar latitudes the forests, once cut down, take much longer to regenerate than forests that are logged in tropical regions of the planet.
Some of the problems that the boreal regions face are:

- air pollution from smelters and power plants
- radioactivity from atomic power and weapons testing
- water pollution and disruption of habitats if commercialization of northern shipping routes becomes a reality
- adverse impacts of new mineral and oil/gas extraction
- new threats to endangered species

Conservation and environmental groups believe that to protect this ecosystem, human industrial activity both inside and outside the boreal forest must be carefully regulated.
Large reserves able to maintain their ecological integrity must be adequately set aside, and thorough environmental assessments must be carried out before governments decide to allow any sort of large-scale industrial activity.

The boreal forest's role in global climate control

Locked up in the boreal forests are vast amounts of carbon; their biomass is so huge and so vital that when they are in their maximum growth phase during the northern spring and summer, worldwide levels of carbon dioxide fall and worldwide levels of oxygen rise.
The boreal forests are just as important to the global ecosystem as the tropical forests, and they should be given equal attention by all concerned with forestry and the environment. Global environmental changes, and the social, economic, and political processes of globalization that help drive them, are now influencing local forest conditions and management practices. At the same time, political changes and alliances are facilitating the evolution of novel institutions and the interplay between institutions from different governmental levels.
Some of these are clearly aimed at facilitating further exploitation of forest resources and promoting economic development, whereas others are aimed more at controlling or mitigating some of the environmental and social impacts of these transformations. At the international level, a number of environmental regimes, like the Kyoto Protocol and the Convention on Biological Diversity, are evolving in ways that could potentially have a major influence on the forest land development strategies of nations.
At more local levels, decentralization is facilitating what is, in some cases, a return to more community-based rather than state-centered forms of forest management. However, scientific understanding of the boreal forest's significance in the carbon cycle, its role in the control of greenhouse gases, and its impact on global climate change is incomplete. Research efforts - few and far between prior to the last decade - are increasing, particularly the Canadian-based BOREAS Project.
Canadian Boreal Forest Map. Created by the Canadian Model Forest Project.

The BOREAS Project

The Boreal Ecosystem-Atmosphere Study (BOREAS) is a large-scale international interdisciplinary experiment in the northern boreal forests of Canada. Its goal is to improve our understanding of the boreal forests -- how they interact with the atmosphere, how much CO2 they can store, and how climate change will affect them.
BOREAS aims to learn to use satellite data to monitor the forests, and to improve computer simulation and weather models so scientists can anticipate the effects of global change.

Summary of Results

The first BOREAS field year was completed in 1993-1994. Surface flux data were collected throughout the growing season from the towers and by other techniques. Over 350 research flights (remote sensing and airborne eddy correlation) were flown in support of the operation.
A surprising picture of the energy, water and carbon dynamics of the boreal ecosystem is emerging, even at this early stage in the experiment. In simple terms, the lowland forests of the boreal ecosystem in Saskatchewan and Manitoba grow on flat terrain, with a mineral soil base overlain by a very thin layer of live and decomposed moss. Observations show that the root zone of the conifers, which comprise the bulk of these forested lowlands, is very thin (less than 40 cm deep) and is contained entirely within the live/decomposed moss (moss/humus) layer.
In short, the boreal lowland soils behave hydrologically much like a gently rolling semi-impermeable floor, with a thin layer of cotton on top.

In terms of the water and energy balance, we have seen that the boreal ecosystem often behaves like an arid landscape, particularly early in the growing season. This is because even though the moss layer is wet for most of the summer, the poor soils and harsh climatic conditions lead to low photosynthetic rates, which in turn lead to low evapotranspiration rates.
Much of the precipitation simply penetrates through the moss and sand to the underlying semi-impermeable layer and runs off. Most of the incoming solar radiation is intercepted by the vegetation canopies, which exert strong control over transpiration water losses, rather than by the moist underlying moss/soil surface. As a result, much of the available surface energy is dissipated as sensible heat which often leads to the development of a deep (3000 m) and turbulent atmospheric boundary layer.
These insights into the partitioning of the surface energy should have a significant impact on the development of climate and weather models, most of which currently characterize the boreal landscape as a freely evaporating surface.Importantly, it has been reported that the moisture level in the moss/humus layer never gets low enough to induce moisture stress in the overlying vegetation. If this finding holds up under further analysis, it would imply that root zone moisture, a difficult variable to quantify over large spatial scales, does not exert significant control on the surface energy balance.
Rather, the important variables controlling photosynthesis and evaporation appear to be soil temperature in the spring, and atmospheric relative humidity and air temperature in the summer and fall.

This new understanding of controls on regional evaporation rates is relevant to the issue of whether the boreal ecosystem is a sink or source of carbon, but until the analysis is further along this question will remain unresolved.
We have learned that sequestration of carbon by conifers, the largest component of the boreal ecosystem, is limited in the spring by frozen or cold soils, and in the summer by hot temperatures and dry air. In the fall, the conifers were observed to have the largest carbon uptake of the season, presumably because soils are warm, air temperatures are not so hot, and the air is not so dry. Leaf-level measurements suggest that the end of the growing season may be induced by frost.
Measurements show that at temperatures below about −5 to −10 °C, black spruce needles do not recover, and photosynthesis stops.

To summarize, the photosynthetic machinery of the boreal forest has considerably less capacity than the temperate forests to the south. This is reflected in low photosynthetic and carbon drawdown rates, which are associated with low transpiration rates. The coniferous vegetation in particular follows a very conservative water use strategy.
The vegetation transpiration stream is drastically reduced by stomatal closure when the foliage is exposed to dry air, even if soil moisture is freely available. This feedback mechanism acts to keep the surface evapotranspiration rate at a steady and surprisingly low level (less than 2 mm/day over the season).

The low evapotranspiration rates, coupled with high available energy during the growing season (the albedos are among the lowest observed over vegetated regions), can lead to high sensible heat fluxes and the development of deep planetary boundary layers, particularly during the spring and early summer.
These planetary boundary layers are often characterized by intense mechanical and sensible heat-driven turbulence. As far as we know, all current climate and numerical weather prediction models grossly overestimate evapotranspiration from the region.
Taiga (/ˈtaɪɡə/; Russian: тайга́, IPA: [tɐjˈɡa]; from Turkic), also known as boreal forest or snow forest, is a biome characterized by coniferous forests consisting mostly of pines, spruces and larches. The taiga is found throughout the high northern latitudes, between the tundra and the temperate forest, from about 50°N to 70°N, but with considerable regional variation.

Jack London Lake at Kolyma, Russia

Ecology: Biome - terrestrial subarctic, humid
Geography: Countries - Russia, Mongolia, Japan, Norway, Sweden, Iceland, Finland, United States, Canada, Scotland
Climate type: Dfc, Dwc, Dsc
The taiga is the world's largest biome apart from the oceans. In North America it covers most of inland Canada and Alaska as well as parts of the extreme northern continental United States (northern Minnesota through the Upper Peninsula of Michigan to Upstate New York and northern New England), where it is known as the Northwoods or "North woods". In Eurasia, it covers most of Sweden, Finland, much of Norway, some of the Scottish Highlands, some lowland/coastal areas of Iceland, much of Russia from Karelia in the west to the Pacific Ocean (including much of Siberia), and areas of northern Kazakhstan, northern Mongolia, and northern Japan (on the island of Hokkaidō).
However, the main tree species, the length of the growing season and summer temperatures vary. For example, the taiga of North America mostly consists of spruces; Scandinavian and Finnish taiga consists of a mix of spruce, pines and birch; Russian taiga has spruces, pines and larches depending on the region, while the Eastern Siberian taiga is a vast larch forest. A different use of the term taiga is often encountered in the English language, with "boreal forest" used in the United States and Canada to refer to only the more southerly part of the biome, while "taiga" is used to describe the more barren areas of the northernmost part of the biome approaching the tree line and the tundra biome.
Hoffman (1958) discusses the origin of this differential use in North America and why it is an inappropriate differentiation of the Russian term. Although at high elevations taiga grades into alpine tundra through Krummholz, it is not exclusively an alpine biome; and unlike subalpine forest, much of taiga is lowlands.

White spruce taiga, Denali Highway, Alaska Range, Alaska

Climate and geography

Taiga is the world's largest land biome, making up 29% of the world's forest cover.
The largest areas are located in Russia and Canada. The taiga is the terrestrial biome with the lowest annual average temperatures after the tundra and permanent ice caps. Extreme winter minimums in the northern taiga are typically lower than those of the tundra. The lowest reliably recorded temperatures in the Northern Hemisphere were recorded in the taiga of northeastern Russia. The taiga or boreal forest has a subarctic climate with a very large temperature range between seasons, but the long and cold winter is the dominant feature.
This climate is classified as Dfc, Dwc, Dsc, Dfd and Dwd in the Köppen climate classification scheme, meaning that the short summer (24 h average 10 °C (50 °F) or more) lasts 1–3 months and always less than 4 months. In Siberian taiga the average temperature of the coldest month is between −6 °C (21 °F) and −50 °C (−58 °F). There are also some much smaller areas grading towards the oceanic Cfc climate with milder winters, whilst the extreme south and (in Eurasia) west of the taiga reaches into humid continental climates (Dfb, Dwb) with longer summers.
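The Köppen subtypes mentioned above turn on simple thresholds: how many months average 10 °C or more, and how cold the coldest month is. The sketch below is a simplified illustration of that decision logic under the conventional cut-offs (1–3 warm months for the 'c' subtypes, four or more for 'b', and a coldest month below −38 °C for the extreme 'd' subtypes); the station data are hypothetical, and both the hot-summer 'a' case and the precipitation letter (f/w/s) are ignored here.

```python
def koppen_d_subtype(monthly_means_c: list[float]) -> str:
    """Classify the summer-length letter of a Koppen D (continental) climate
    from twelve monthly mean temperatures, ignoring the precipitation letter
    and the hot-summer 'a' case. Conventional thresholds:
    'c' = 1-3 months >= 10 C, 'b' = 4+ such months,
    'd' = short summer with coldest month below -38 C."""
    warm_months = sum(1 for t in monthly_means_c if t >= 10.0)
    coldest = min(monthly_means_c)
    if warm_months >= 4:
        return "b"      # longer summer: Dfb/Dwb territory
    if coldest < -38.0:
        return "d"      # extreme Siberian winters: Dfd/Dwd
    return "c"          # typical taiga: Dfc/Dwc/Dsc

# Hypothetical subarctic station: a 3-month summer and a cold winter -> 'c'
station = [-25, -22, -14, -4, 4, 11, 14, 11, 5, -4, -16, -23]
print(koppen_d_subtype(station))  # 'c'
```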
The mean annual temperature generally varies from -5 °C to 5 °C (23 °F to 41 °F), but there are taiga areas in eastern Siberia and interior Alaska-Yukon where the mean annual temperature reaches down to -10 °C (14 °F). According to some sources, the boreal forest grades into a temperate mixed forest when mean annual temperature reaches about 3 °C (37 °F). Discontinuous permafrost is found in areas with mean annual temperature below 0 °C, whilst in the Dfd and Dwd climate zones continuous permafrost occurs and restricts growth to very shallow-rooted trees like Siberian larch.
The winters, with average temperatures below freezing, last five to seven months. Temperatures vary from −54 °C to 30 °C (-65 °F to 86 °F) throughout the whole year. The summers, while short, are generally warm and humid. In much of the taiga, -20 °C (-4 °F) would be a typical winter day temperature and 18 °C (64 °F) an average summer day. The taiga in the river valley near Verkhoyansk, Russia, at 67°N, experiences the coldest winter temperatures in the northern hemisphere, but the extreme continentality of the climate gives an average daily high of 22 °C (72 °F) in July.
Boreal forest near Shovel Point in Tettegouche State Park, along the northern shore of Lake Superior in Minnesota.

The growing season, when the vegetation in the taiga comes alive, is usually slightly longer than the climatic definition of summer, as the plants of the boreal biome have a lower temperature threshold to trigger growth. In Canada, Scandinavia and Finland, the growing season is often estimated by using the period of the year when the 24-hour average temperature is +5 °C (41 °F) or more.
For the Taiga Plains in Canada, growing season varies from 80 to 150 days, and in the Taiga Shield from 100 to 140 days. Some sources claim 130 days growing season as typical for the taiga. Other sources mention that 50–100 frost-free days are characteristic. Data for locations in southwest Yukon gives 80–120 frost-free days. The closed canopy boreal forest in Kenozersky National Park near Plesetsk, Arkhangelsk Province, Russia, on average has 108 frost-free days.
The longest growing season is found in the smaller areas with oceanic influences; in coastal areas of Scandinavia and Finland, the growing season of the closed boreal forest can be 145–180 days. The shortest growing season is found at the northern taiga–tundra ecotone, where the northern taiga forest no longer can grow and the tundra dominates the landscape when the growing season is down to 50–70 days, and the 24-hr average of the warmest month of the year usually is 10 °C (50 °F) or less.
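Given the +5 °C convention described above, growing-season length can be estimated directly from a daily mean-temperature series. The following is a minimal sketch, assuming daily means are already available and taking the longest contiguous run of qualifying days; operational estimates handle shoulder-season fluctuations more carefully.

```python
def growing_season_days(daily_means_c: list[float], threshold_c: float = 5.0) -> int:
    """Length of the longest contiguous run of days whose 24-hour mean
    temperature is at or above the threshold (+5 C per the convention
    used in Canada, Scandinavia and Finland)."""
    longest = current = 0
    for t in daily_means_c:
        current = current + 1 if t >= threshold_c else 0
        longest = max(longest, current)
    return longest

# Hypothetical year: 120 warm days bracketed by cold spells.
year = [-10.0] * 120 + [9.0] * 120 + [-5.0] * 125
print(growing_season_days(year))  # 120
```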
High latitudes mean that the sun does not rise far above the horizon, and less solar energy is received than further south. But the high latitude also ensures very long summer days, as the sun stays above the horizon nearly 20 hours each day, with only around 6 hours of daylight occurring in the dark winters, depending on latitude. The areas of the taiga inside the Arctic Circle have midnight sun in mid-summer and polar night in mid-winter.
Lakes and other water bodies are common in the taiga. Helvetinjärvi National Park, Finland, is situated in the closed-canopy taiga (mid-boreal to south-boreal), with a mean annual temperature of 4 °C (39 °F). The taiga experiences relatively low precipitation throughout the year (generally 200–750 mm annually, 1,000 mm in some areas), primarily as rain during the summer months, but also as fog and snow.
This fog, especially predominant in low-lying areas during and after the thawing of frozen Arctic seas, means that sunshine is not abundant in the taiga even during the long summer days. As evaporation is consequently low for most of the year, precipitation exceeds evaporation, and is sufficient to sustain the dense vegetation growth. Snow may remain on the ground for as long as nine months in the northernmost extensions of the taiga ecozone.
In general, taiga grows to the south of the 10 °C July isotherm, but occasionally as far north as the 9 °C (48 °F) July isotherm. Rich in spruce and Scots pine in the western Siberian plain, the taiga is dominated by larch in eastern Siberia, before returning to its original floristic richness on the Pacific shores. Two deciduous trees mingle throughout southern Siberia: birch and Populus tremula.
Late September in the fjords near Narvik, Norway. This oceanic part of the forest can see more than 1,000 mm of precipitation annually and has warmer winters than the vast inland taiga.

The southern limit is more variable, depending on rainfall; taiga may be replaced by forest steppe south of the 15 °C (59 °F) July isotherm where rainfall is very low, but more typically extends south to the 18 °C (64 °F) July isotherm, and locally where rainfall is higher (notably in eastern Siberia and adjacent Outer Manchuria) south to the 20 °C (68 °F) July isotherm.
In these warmer areas the taiga has higher species diversity, with more warmth-loving species such as Korean pine, Jezo spruce, and Manchurian fir, and merges gradually into mixed temperate forest or, more locally (on the Pacific Ocean coasts of North America and Asia), into coniferous temperate rainforests where oak and hornbeam appear and join the conifers, birch and Populus tremula. The area currently classified as taiga in Europe and North America (except Alaska) was recently glaciated.
As the glaciers receded they left depressions in the topography that have since filled with water, creating lakes and bogs (especially muskeg soil) found throughout the taiga. Several of the world's longest rivers go through the taiga, including the Ob, Yenisei, Lena, and Mackenzie. In Sweden the taiga is associated with the Norrland terrain.

Soils

Taiga soil tends to be young and poor in nutrients.
It lacks the deep, organically enriched profile present in temperate deciduous forests. The thinness of the soil is due largely to the cold, which hinders the development of soil and the ease with which plants can use its nutrients. Fallen leaves and moss can remain on the forest floor for a long time in the cool, moist climate, which limits their organic contribution to the soil; acids from evergreen needles further leach the soil, creating spodosol, also known as podzol.
Since the soil is acidic due to the falling pine needles, the forest floor has only lichens and some mosses growing on it. In clearings in the forest and in areas with more boreal deciduous trees, there are more herbs and berries growing. Diversity of soil organisms in the boreal forest is high, comparable to the tropical rainforest.

Flora

Boreal forest near Lake Baikal in Russia

Since North America and Asia used to be connected by the Bering land bridge, a number of animal and plant species (more animals than plants) were able to colonize both continents and are distributed throughout the taiga biome (see Circumboreal Region).
Others differ regionally, typically with each genus having several distinct species, each occupying different regions of the taiga. Taigas also have some small-leaved deciduous trees like birch, alder, willow, and poplar; mostly in areas escaping the most extreme winter cold. However, the Dahurian larch tolerates the coldest winters in the Northern Hemisphere in eastern Siberia. The very southernmost parts of the taiga may have trees such as oak, maple, elm and lime scattered among the conifers, and there is usually a gradual transition into a temperate mixed forest, such as the eastern forest-boreal transition of eastern Canada.
In the interior of the continents with the driest climate, the boreal forests might grade into temperate grassland. There are two major types of taiga. The southern part is the closed canopy forest, consisting of many closely spaced trees with mossy ground cover. In clearings in the forest, shrubs and wildflowers are common, such as the fireweed. The other type is the lichen woodland or sparse taiga, with trees that are farther-spaced and lichen ground cover; the latter is common in the northernmost taiga.
In the northernmost taiga the forest cover is not only more sparse but often stunted in growth form; moreover, ice-pruned asymmetric black spruce (in North America) are often seen, with diminished foliage on the windward side. In Canada, Scandinavia and Finland, the boreal forest is usually divided into three subzones: the high boreal (north boreal) or taiga zone; the middle boreal (closed forest); and the southern boreal, a closed-canopy boreal forest with some scattered temperate deciduous trees among the conifers, such as maple, elm and oak.
This southern boreal forest experiences the longest and warmest growing season of the biome, and in some regions (including Scandinavia, Finland and western Russia) this subzone is commonly used for agricultural purposes. The boreal forest is home to many types of berries; some are confined to the southern and middle closed boreal forest (such as wild strawberry and partridgeberry); others grow in most areas of the taiga (such as cranberry and cloudberry), and some can grow in both the taiga and the low arctic (southern part of) tundra (such as bilberry, bunchberry and lingonberry).
The forests of the taiga are largely coniferous, dominated by larch, spruce, fir and pine. The woodland mix varies according to geography and climate so for example the Eastern Canadian forests ecoregion of the higher elevations of the Laurentian Mountains and the northern Appalachian Mountains in Canada is dominated by balsam fir Abies balsamea, while further north the Eastern Canadian Shield taiga of northern Quebec and Labrador is notably black spruce Picea mariana and tamarack larch Larix laricina.
Evergreen species in the taiga (spruce, fir, and pine) have a number of adaptations specifically for survival in harsh taiga winters, although larch, the most cold-tolerant of all trees, is deciduous. Taiga trees tend to have shallow roots to take advantage of the thin soils, while many of them seasonally alter their biochemistry to make them more resistant to freezing, called "hardening". The narrow conical shape of northern conifers, and their downward-drooping limbs, also help them shed snow.
Because the sun is low on the horizon for most of the year, it is difficult for plants to generate energy from photosynthesis. Pine, spruce and fir do not lose their leaves seasonally and are able to photosynthesize with their older leaves in late winter and spring, when light is good but temperatures are still too low for new growth to commence. The adaptation of evergreen needles limits the water lost due to transpiration, and their dark green color increases their absorption of sunlight.
Although precipitation is not a limiting factor, the ground freezes during the winter months and plant roots are unable to absorb water, so desiccation can be a severe problem in late winter for evergreens.

Moss (Ptilium crista-castrensis) cover on the floor of taiga

Although the taiga is dominated by coniferous forests, some broadleaf trees also occur, notably birch, aspen, willow, and rowan.
Many smaller herbaceous plants, such as ferns and occasionally ramps, grow closer to the ground. Periodic stand-replacing wildfires (with return times of between 20 and 200 years) clear out the tree canopies, allowing sunlight to invigorate new growth on the forest floor. For some species, wildfires are a necessary part of the life cycle in the taiga; some, e.g. jack pine, have cones which only open to release their seed after a fire, dispersing their seeds onto the newly cleared ground; certain species of fungi (such as morels) are also known to do this.
Grasses grow wherever they can find a patch of sun, and mosses and lichens thrive on the damp ground and on the sides of tree trunks. In comparison with other biomes, however, the taiga has low biological diversity.

Jack pine cones and morels after fire in a boreal forest.

Coniferous trees are the dominant plants of the taiga biome. A very few species in four main genera are found: the evergreen spruce, fir and pine, and the deciduous larch.
In North America, one or two species of fir and one or two species of spruce are dominant. Across Scandinavia and western Russia, the Scots pine is a common component of the taiga, while taiga of the Russian Far East and Mongolia is dominated by larch.

Fauna

Brown bear, Kamchatka peninsula. Brown bears are among the largest and most widespread taiga omnivores.

The boreal forest, or taiga, supports a relatively small range of animals due to the harshness of the climate.
Canada's boreal forest includes 85 species of mammals, 130 species of fish, and an estimated 32,000 species of insects. Insects play a critical role as pollinators, decomposers, and as a part of the food web. Many nesting birds rely on them for food in the summer months. The cold winters and short summers make the taiga a challenging biome for reptiles and amphibians, which depend on environmental conditions to regulate their body temperatures, and there are only a few species in the boreal forest, including red-sided garter snake, common European adder, blue-spotted salamander, northern two-lined salamander, Siberian salamander, wood frog, northern leopard frog, boreal chorus frog, American toad, and Canadian toad.
Most hibernate underground in winter. Fish of the taiga must be able to withstand cold water conditions and be able to adapt to life under ice-covered water. Species in the taiga include Alaska blackfish, northern pike, walleye, longnose sucker, white sucker, various species of cisco, lake whitefish, round whitefish, pygmy whitefish, Arctic lamprey, various grayling species, brook trout (including sea-run brook trout in the Hudson Bay area), chum salmon, Siberian taimen, lenok and lake chub.
The taiga is home to a number of large herbivorous mammals, such as moose and reindeer/caribou. Some areas of the more southern closed boreal forest also have populations of other deer species, such as the elk (wapiti) and roe deer. The largest animal in the taiga is the wood bison, found in northern Canada and Alaska and newly introduced into the Russian Far East. Small mammals of the taiga biome include rodents such as beaver, squirrel, North American porcupine and vole, as well as a small number of lagomorph species such as snowshoe hare and mountain hare.
These species have adapted to survive the harsh winters in their native ranges. Some larger mammals, such as bears, eat heartily during the summer in order to gain weight, and then go into hibernation during the winter. Other animals have adapted layers of fur or feathers to insulate them from the cold. Predatory mammals of the taiga must be adapted to travel long distances in search of scattered prey or be able to supplement their diet with vegetation or other forms of food (such as raccoons).
Mammalian predators of the taiga include Canada lynx, Eurasian lynx, stoat, Siberian weasel, least weasel, sable, American marten, North American river otter, European otter, American mink, wolverine, Asian badger, fisher, gray wolf, coyote, red fox, brown bear, American black bear, Asiatic black bear, polar bear (only small areas at the taiga-tundra ecotone) and Siberian tiger. More than 300 species of birds have their nesting grounds in the taiga.
Siberian thrush, white-throated sparrow, and black-throated green warbler migrate to this habitat to take advantage of the long summer days and abundance of insects found around the numerous bogs and lakes. Of the 300 species of birds that summer in the taiga only 30 stay for the winter. These are either carrion-feeding or large raptors that can take live mammal prey, including golden eagle, rough-legged buzzard (also known as the rough-legged hawk), and raven, or else seed-eating birds, including several species of grouse and crossbills.
Fire

Fire has been one of the most important factors shaping the composition and development of boreal forest stands (Rowe 1955); it is the dominant stand-renewing disturbance through much of the Canadian boreal forest (Amiro et al. 2001). The fire history that characterizes an ecosystem is its fire regime, which has 3 elements: (1) fire type and intensity (e.g., crown fires, severe surface fires, and light surface fires), (2) size of typical fires of significance, and (3) frequency or return intervals for specific land units (Heinselman 1981).
The average time within a fire regime to burn an area equivalent to the total area of an ecosystem is its fire rotation (Heinselman 1973) or fire cycle (Van Wagner 1978). However, as Heinselman (1981) noted, each physiographic site tends to have its own return interval, so that some areas are skipped for long periods, while others might burn twice or more during a nominal fire rotation.
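In practice, the fire cycle defined above reduces to total area divided by mean annual area burned. A minimal sketch of that arithmetic with hypothetical numbers follows; the 126-year Canadian figure of Amiro et al. (2001), cited below, emerges from exactly this kind of ratio applied to real burn statistics.

```python
def fire_cycle_years(total_area_ha: float, mean_annual_burn_ha: float) -> float:
    """Fire cycle: years needed to burn an area equal to the whole
    ecosystem at the observed mean annual burn rate."""
    return total_area_ha / mean_annual_burn_ha

# Hypothetical region: 10 million ha burning ~80,000 ha per year
# gives a 125-year cycle, an ecosystem-wide average rather than
# a prediction for any single stand.
print(fire_cycle_years(10_000_000, 80_000))  # 125.0
```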
The dominant fire regime in the boreal forest is high-intensity crown fires or severe surface fires of very large size, often more than 10,000 ha, and sometimes more than 400,000 ha (Heinselman 1981). Such fires kill entire stands. Fire rotations in the drier regions of western Canada and Alaska average 50–100 years, shorter than in the moister climates of eastern Canada, where they may average 200 years or more.
Fire cycles also tend to be long near the tree line in the subarctic spruce-lichen woodlands. The longest cycles, possibly 300 years, probably occur in the western boreal in floodplain white spruce (Heinselman 1981). Amiro et al. (2001) calculated the mean fire cycle for the period 1980 to 1999 in the Canadian boreal forest (including taiga) at 126 years. Increased fire activity has been predicted for western Canada, but parts of eastern Canada may experience less fire in future because of greater precipitation in a warmer climate (Flannigan et al. 1998).

The mature boreal forest pattern in the south shows balsam fir dominant on well-drained sites in eastern Canada, changing centrally and westward to a prominence of white spruce, with black spruce and tamarack forming the forests on peats, and with jack pine usually present on dry sites except in the extreme east, where it is absent (Rowe and Scotter 1973). The effects of fires are inextricably woven into the patterns of vegetation on the landscape, which in the east favour black spruce, paper birch, and jack pine over balsam fir, and in the west give the advantage to aspen, jack pine, black spruce, and birch over white spruce.
Many investigators have reported the ubiquity of charcoal under the forest floor and in the upper soil profile, e.g., La Roi (1967). Charcoal in soils provided Bryson et al. (1965) with clues about the forest history of an area 280 km north of the then current tree line at Ennadai Lake, District Keewatin, Northwest Territories. Two lines of evidence support the thesis that fire has always been an integral factor in the boreal forest: (1) direct, eye-witness accounts and forest-fire statistics, and (2) indirect, circumstantial evidence based on the effects of fire, as well as on persisting indicators (Rowe and Scotter 1973).
The patchwork mosaic of forest stands in the boreal forest, typically with abrupt, irregular boundaries circumscribing homogenous stands, is indirect but compelling testimony to the role of fire in shaping the forest. The fact is that most boreal forest stands are less than 100 years old, and only in the rather few areas that have escaped burning are there stands of white spruce older than 250 years (Rowe and Scotter 1973).
The prevalence of fire-adaptive morphologic and reproductive characteristics of many boreal plant species is further evidence pointing to a long and intimate association with fire. Seven of the ten most common trees in the boreal forest—jack pine, lodgepole pine, aspen, balsam poplar (Populus balsamifera), paper birch, tamarack, black spruce—can be classed as pioneers in their adaptations for rapid invasion of open areas.
White spruce shows some pioneering abilities, too, but is less able than black spruce and the pines to disperse seed at all seasons. Only balsam fir and alpine fir seem to be poorly adapted to reproduce after fire, as their cones disintegrate at maturity, leaving no seed in the crowns. The oldest forests in the northwest boreal region, some older than 300 years, are of white spruce occurring as pure stands on moist floodplains (Rowe 1970).
Here, the frequency of fire is much less than on adjacent uplands dominated by pine, black spruce and aspen. In contrast, in the Cordilleran region, fire is most frequent in the valley bottoms, decreasing upward, as shown by a mosaic of young pioneer pine and broadleaf stands below, and older spruce–fir on the slopes above (Rowe and Scotter 1973). Without fire, the boreal forest would become more and more homogeneous, with the long-lived white spruce gradually replacing pine, aspen, balsam poplar, and birch, and perhaps even black spruce, except on the peatlands (Raup and Denny 1950).
Threats

Human activities

Plesetsk Cosmodrome is situated in the taiga.

Large areas of Siberia's taiga have been harvested for lumber since the collapse of the Soviet Union. Previously, the forest was protected by the restrictions of the Soviet Forest Ministry, but with the collapse of the Union, the restrictions regarding trade with Western nations have vanished. Trees are easy to harvest and sell well, so loggers have begun harvesting Russian taiga evergreen trees for sale to nations previously forbidden by Soviet law.
In Canada, eight percent of the taiga is protected from development; provincial governments allow forest management to occur on Crown land under rigorous constraints. The main forestry practice in the boreal forest of Canada is clearcutting, which involves cutting down most of the trees in a given area, then replanting the forest as a monocrop (one species of tree) the following season. Some of the products from logged boreal forests include toilet paper, copy paper, newsprint, and lumber.
More than 90% of boreal forest products from Canada are exported for consumption and processing in the United States. Some of the larger cities situated in this biome are Murmansk, Arkhangelsk, Yakutsk, Anchorage, Yellowknife, Tromsø, Luleå, and Oulu. Most companies that harvest in Canadian forests are certified by an independent third-party agency such as the Forest Stewardship Council (FSC), the Sustainable Forestry Initiative (SFI), or the Canadian Standards Association (CSA).
While the certification process differs between these groups, all of them include forest stewardship, respect for aboriginal peoples, compliance with local, provincial or national environmental laws, forest worker safety, education and training, and other environmental, business, and social requirements. The prompt renewal of all harvest sites by planting or natural renewal is also required.

Climate change

(Image caption: Seney National Wildlife Refuge.)

During the last quarter of the twentieth century, the zone of latitude occupied by the boreal forest experienced some of the greatest temperature increases on Earth.
Winter temperatures have increased more than summer temperatures. The number of days with extremely cold temperatures (e.g., −20 to −40 °C (−4 to −40 °F)) has decreased irregularly but systematically in nearly all the boreal region, allowing better survival for tree-damaging insects. In summer, the daily low temperature has increased more than the daily high temperature. In Fairbanks, Alaska, the length of the frost-free season has increased from 60–90 days in the early twentieth century to about 120 days a century later.
Summer warming has been shown to increase water stress and reduce tree growth in dry areas of the southern boreal forest in central Alaska, western Canada and portions of far eastern Russia. Precipitation is relatively abundant in Scandinavia, Finland, northwest Russia and eastern Canada, where a longer growing season (i.e. the period when sap flow is not impeded by frozen water) accelerates tree growth.
As a consequence of this warming trend, the warmer parts of the boreal forests are susceptible to replacement by grassland, parkland or temperate forest. In Siberia, the taiga is converting from predominantly needle-shedding larch trees to evergreen conifers in response to a warming climate. This is likely to further accelerate warming, as the evergreen trees will absorb more of the sun's rays.
Given the vast size of the area, such a change has the potential to affect areas well outside of the region. In much of the boreal forest in Alaska, the growth of white spruce trees is stunted by unusually warm summers, while trees on some of the coldest fringes of the forest are experiencing faster growth than previously. Lack of moisture in the warmer summers is also stressing the birch trees of central Alaska.
Insects

Recent years have seen outbreaks of insect pests in forest-destroying plagues: the spruce-bark beetle (Dendroctonus rufipennis) in Yukon and Alaska; the mountain pine beetle in British Columbia; the aspen-leaf miner; the larch sawfly; the spruce budworm (Choristoneura fumiferana); the spruce coneworm.

Pollution

The effect of sulphur dioxide on woody boreal forest species was investigated by Addison et al.
(1984), who exposed plants growing on native soils and tailings to 15.2 μmol/m3 (0.34 ppm) of SO2 and measured the effect on the CO2 assimilation rate (NAR). The Canadian maximum acceptable limit for atmospheric SO2 is 0.34 ppm. Fumigation with SO2 significantly reduced NAR in all species and produced visible symptoms of injury in 2–20 days. The decrease in NAR of deciduous species (trembling aspen [Populus tremuloides], willow [Salix], green alder [Alnus viridis], and white birch [Betula papyrifera]) was significantly more rapid than that of conifers (white spruce, black spruce [Picea mariana], and jack pine [Pinus banksiana]) or an evergreen angiosperm (Labrador tea) growing on a fertilized Brunisol.
These metabolic and visible injury responses seemed to be related to the differences in S uptake owing in part to higher gas exchange rates for deciduous species than for conifers. Conifers growing in oil sands tailings responded to SO2 with a significantly more rapid decrease in NAR compared with those growing in the Brunisol, perhaps because of predisposing toxic material in the tailings. However, sulphur uptake and visible symptom development did not differ between conifers growing on the 2 substrates.
Acidification of precipitation by anthropogenic, acid-forming emissions has been associated with damage to vegetation and reduced forest productivity, but 2-year-old white spruce that were subjected to simulated acid rain (at pH 4.6, 3.6, and 2.6) applied weekly for 7 weeks incurred no statistically significant (P ≤ 0.05) reduction in growth during the experiment compared with the background control at pH 5.6 (Abouguendia and Baschak 1987). However, symptoms of injury were observed in all treatments; the number of plants and the number of needles affected increased with increasing rain acidity and with time. Scherbatskoy and Klein (1983) found no significant effect on chlorophyll concentration in white spruce at pH 4.3 and 2.8, but Abouguendia and Baschak (1987) found a significant reduction in white spruce at pH 2.6, while the foliar sulphur content was significantly greater at pH 2.6 than in any of the other treatments.

Protection

(Image caption: Peat bog in Dalarna, Sweden. Bogs and peatland are widespread in the taiga. They are home to a unique flora, and store vast amounts of carbon.)

(Image caption: In western Eurasia, the Scots pine is common in the boreal forest.)

Many nations are taking direct steps to protect the ecology of the taiga by prohibiting logging, mining, oil and gas production, and other forms of development.
In February 2010 the Canadian government established protection for 13,000 square kilometres of boreal forest by creating a new 10,700-square-kilometre park reserve in the Mealy Mountains area of eastern Canada and a 3,000-square-kilometre waterway provincial park that follows alongside the Eagle River from headwaters to sea. Two Canadian provincial governments, Ontario and Quebec, introduced measures in 2008 that would protect at least half of their northern boreal forest.
Although both provinces admitted it will take years to plan, work with Aboriginal and local communities, and ultimately map out precise boundaries of the areas off-limits to development, the measures are expected to create some of the largest protected-area networks in the world once completed. Both announcements came a year after a letter signed by 1,500 scientists called on political leaders to protect at least half of the boreal forest.
The taiga stores enormous quantities of carbon, more than the world's temperate and tropical forests combined, much of it in wetlands and peatland. In fact, current estimates place boreal forests as storing twice as much carbon per unit area as tropical forests.

Natural disturbance

One of the biggest areas of research, and a topic still full of unsolved questions, is the recurring disturbance of fire and the role it plays in propagating the lichen woodland.
The phenomenon of wildfire by lightning strike is the primary determinant of understory vegetation and because of this, it is considered to be the predominant force behind community and ecosystem properties in the lichen woodland. The significance of fire is clearly evident when one considers that understory vegetation influences tree seedling germination in the short term and decomposition of biomass and nutrient availability in the long term.
The recurrent cycle of large, damaging fire occurs approximately every 70 to 100 years. Understanding the dynamics of this ecosystem is entangled with discovering the successional paths that the vegetation exhibits after a fire. Trees, shrubs, and lichens all recover from fire-induced damage through vegetative reproduction as well as invasion by propagules. Seeds that have fallen and become buried provide little help in re-establishment of a species.
The reappearance of lichens is reasoned to occur because of varying conditions and light/nutrient availability in each different microstate. Several different studies have been done that have led to the formation of the theory that post-fire development can be propagated by any of four pathways: self replacement, species-dominance relay, species replacement, or gap-phase self replacement. Self replacement is simply the re-establishment of the pre-fire dominant species.
Species-dominance relay is a sequential attempt of tree species to establish dominance in the canopy. Species replacement is when fires occur in sufficient frequency to interrupt species-dominance relay. Gap-phase self replacement is the least common and so far has only been documented in western Canada. It is a self replacement of the surviving species into the canopy gaps after a fire kills another species.
The particular pathway taken after a fire disturbance depends on how the landscape is able to support trees as well as fire frequency. Fire frequency has a large role in shaping the original inception of the lower forest line of the lichen woodland taiga. It has been hypothesized by Serge Payette that the spruce-moss forest ecosystem was changed into the lichen woodland biome due to the initiation of two compounded strong disturbances: large fire and the appearance and attack of the spruce budworm.
The spruce budworm is a deadly insect to the spruce populations in the southern regions of the taiga. J.P. Jasinski confirmed this theory five years later, stating: "Their [lichen woodlands] persistence, along with their previous moss forest histories and current occurrence adjacent to closed moss forests, indicate that they are an alternative stable state to the spruce–moss forests".

Taiga ecoregions

Palearctic boreal forests/taiga: East Siberian taiga (Russia); Iceland boreal birch forests and alpine tundra (Iceland); Kamchatka-Kurile meadows and sparse forests (Russia); Kamchatka-Kurile taiga (Russia); Northeast Siberian taiga (Russia); Okhotsk-Manchurian taiga (Russia); Sakhalin Island taiga (Russia); Scandinavian and Russian taiga (Finland, Norway, Russia, Sweden); Trans-Baikal conifer forests (Mongolia, Russia); Urals montane tundra and taiga (Russia); West Siberian taiga (Russia); Romincka Forest (Poland, Russia).

Nearctic boreal forests/taiga: Alaska Peninsula montane taiga (United States); Central Canadian Shield forests (Canada); Cook Inlet taiga (United States); Copper Plateau taiga (United States); Eastern Canadian forests (Canada); Eastern Canadian Shield taiga (Canada); Interior Alaska-Yukon lowland taiga (Canada, United States); Mid-Continental Canadian forests (Canada); Midwestern Canadian Shield forests (Canada); Muskwa-Slave Lake forests (Canada); Newfoundland Highland forests (Canada); Northern Canadian Shield taiga (Canada); Northern Cordillera forests (Canada); Northwest Territories taiga (Canada); South Avalon-Burin oceanic barrens (Canada); Northern Lake Superior Taiga (United States, Canada); Southern Hudson Bay taiga (Canada); Yukon Interior dry forests (Canada).

See also

Birds of North American boreal forests; Boreal Forest Conservation Framework; Boreal forest of Canada; Drunken trees (effect of global warming on the taiga); Intact forest landscape; Scandinavian and Russian taiga; Success of fire suppression in northern forests; Taiga Rescue Network (TRN); Agafia Lykov.

References

^ "taiga." Dictionary.com Unabridged (v 1.1). Random House, Inc. 12 Mar. 2008. ^ "List of Plants & Animals in the Canadian Wilderness". Trails.com. 2010-07-27. Retrieved 2016-12-26. ^ "Taiga biological station: FAQ". Wilds.mb.ca. Retrieved 2011-02-21. ^ "radford:Taiga climate". Radford.edu. Archived from the original on 2011-06-09. Retrieved 2011-02-21. ^ a b Encyclopedia Universalis édition 1976 VOL.
2 ASIE – Géographie physique, page 568 (in French) ^ "Marietta the Taiga and Boreal forest". Marietta.edu. Retrieved 2011-02-21. ^ "Yakutsk climate". Worldclimate.com. 2007-02-04. Retrieved 2011-02-21. ^ "Interior Alaska-Yukon lowland taiga". Terrestrial Ecoregions. World Wildlife Fund. Retrieved 2011-02-21. ^ "The eastern forest - boreal transition". Terrestrial Ecoregions. World Wildlife Fund.
Retrieved 2011-02-21. ^ Canada: Taiga Shield reference ^ "Climate of Canadian ecozones". Geography.ridley.on.ca. Archived from the original on 2011-05-05. Retrieved 2011-02-21. ^ "Berkley: about biomes". Ucmp.berkeley.edu. Retrieved 2011-02-21. ^ "Taiga". Blueplanetbiomes. Retrieved 2011-02-21. ^ "Southwest Yukon:Frost-free days". Yukon.taiga.net. Archived from the original on 2011-07-24. Retrieved 2011-02-21.
^ "Kenozersky National Park". Wild-russia.org. Retrieved 2011-02-21. ^ "University of Helsinki: Carabid diversity in Finnish taiga" (PDF). Retrieved 2011-02-21. ^ "Tundra". Blueplanetbiomes. Retrieved 2011-02-21. ^ "NatureWorks:Tundra". Nhptv.org. Retrieved 2011-02-21. ^ "The Arctic". saskschools.ca. Archived from the original on 2011-04-10. Retrieved 2011-02-21. ^ Finland vegetation zone and freshwater biome ^ "TAMPERE/PIRKKALA, FINLAND Weather History and Climate Data".
Worldclimate.com. 2007-02-04. Retrieved 2011-02-21. ^ A.P. Sayre, Taiga, (New York: Twenty-First Century Books, 1994) 16. ^ Arno & Hammerly 1984, Arno et al. 1995 ^ Sporrong, Ulf (2003). "The Scandinavian landscape and its resources". In Helle, Knut. The Cambridge History of Scandinavia. Cambridge University Press. p. 22. ^ a b Sayre, 19. ^ Sayre, 19-20. ^ "Study reveals for first time true diversity of life in soils across the globe, new species discovered".
Physorg.com. Retrieved 2012-01-14. ^ Sayre, 12-3. ^ C. Michael Hogan, Black Spruce: Picea mariana, GlobalTwitcher.com, ed. Nicklas Stromberg, November, 2008 Archived 2011-10-05 at the Wayback Machine. ^ George H. La Roi. "Boreal forest". The Canadian Encyclopedia. Retrieved 2013-11-27. ^ a b Sayre, 23. ^ "hww:Nature in the boreal forest biome". Hww.ca. Archived from the original on 2011-01-03. Retrieved 2011-02-21.
^ "Wapiti facts and range". Hww.ca. Archived from the original on 2011-01-03. Retrieved 2011-02-21. ^ "western roe deer: facts and range". Borealforest.org. Retrieved 2011-02-21. ^ "Government of Canada to Send Wood Bison to Russian Conservation Project". ^ "Boreal songbird initiative". Borealbirds.org. Retrieved 2011-02-21. ^ Sayre, 28. ^ Rowe, J.S. 1955. Factors influencing white spruce reproduction in Manitoba and Saskatchewan.
Can. Dep. Northern Affairs and National Resources, For. Branch, For. Res. Div., Ottawa ON, Project MS-135, Silv. Tech. Note 3. 27 p. ^ a b Amiro, B.D.; Stocks, B.J.; Alexander, M.E.; Flannigan, M.D.; Wotton, B.M. 2001. Fire, climate change, carbon and fuel management in the Canadian boreal forest. Internat. J. Wildland Fire 10:405–413. ^ a b c d Heinselman, M.L. 1981. Fire intensity and frequency as factors in the distribution and structure of northern ecosystems.
p. 7–57 in Proceedings of the Conference: Fire Regimes in Ecosystem Properties, Dec. 1978, Honolulu, Hawaii. USDA, For. Serv., Washington DC, Gen. Tech. Rep. WO-26. ^ Heinselman, M.L. 1973. Fire in the virgin forests of the Boundary Waters Canoe Area, Minnesota. Quart. Res. 3:329–382. ^ Van Wagner, C.E. 1978. Age-class distribution and the forest cycle. Can. J. For. Res. 8:220–227. ^ Flannigan, M.
D.; Bergeron, Y.; Engelmark, O.; Wotton, B.M. 1998. Future wildfire in circumboreal forests in relation to global warming. J. Veg. Sci. 9:469–476. ^ a b c d Rowe, J.S. and Scotter, G.W. 1973. Fire in the boreal forest. Quaternary Res. 3:444–464. [E3680, Coates et al. 1994] ^ La Roi, G.H. 1967. Ecological studies in the boreal spruce–fir forests of the North American taiga. I. Analysis of the vascular flora.
Ecol. Monogr. 37:229–253. ^ Bryson, R.A.; Irving, W.H.; Larson, J.A. 1965. Radiocarbon and soil evidence of former forest in the southern Canadian tundra. Science 147(3653):46–48. ^ Rowe, J.S. 1970. Spruce and fire in northwest Canada and Alaska. p. 245–254 in Komarek, E.V. (Ed.). Proc. 10th Annual Tall Timbers Fire Ecology Conference, Tallahassee FL. ^ Raup, H.M.; Denny, C.S. 1950. Photointerpretation of the terrain along the southern part of the Alaska highway.
US Geol. Surv. Bull. 963-D:95–135. ^ "Taiga Deforestation". American.edu. Retrieved 2011-02-21. ^ "Murmansk climate". Worldclimate.com. 2007-02-04. Retrieved 2011-02-21. ^ "Anchorage climate". Worldclimate.com. 2007-02-04. Retrieved 2011-02-21. ^ "Coincidence and Contradiction in the Warming Boreal Forest". ARCUS. doi:10.1029/2005GL023331. Retrieved 2012-01-14. ^ http://www.libraryindex.com/pages/3196/Boreal-Forests-Climate-Change.
html ^ "Russian boreal forests undergoing vegetation change, study shows". Sciencedaily.com. 2011-03-25. doi:10.1111/j.1365-2486.2011.02417.x. Retrieved 2012-01-14. ^ "Fairbanks Daily News-Miner - New study states boreal forests shifting as Alaska warms". Newsminer.com. Archived from the original on 2012-01-19. Retrieved 2012-01-14. ^ Morello, Lauren. "Forest Changes in Alaska Reveal Changing Climate".
Scientific American. Retrieved 2012-01-14. ^ "A New Method to Reconstruct Bark Beetle Outbreaks". Colorado.edu. Retrieved 2011-02-21. ^ "Spruce budworm and sustainable management of the boreal forest". Cfs.nrcan.gc.ca. 2007-12-05. Archived from the original on 2008-12-02. Retrieved 2011-02-21. ^ http://www.fs.fed.us/pnw/pubs/journals/pnw_2006_chapin001.pdf ^ Addison, P.A.; Malhotra, S.S.; Khan, A.
A. 1984. Effect of sulfur dioxide on woody boreal forest species grown on native soils and tailings. J. Environ. Qual. 13(3):333–336. ^ a b Abouguendia, Z.M.; Baschak, L.A. 1987. Response of two western Canadian conifers to simulated acidic precipitation. Water, Air and Soil Pollution 33:15–22. ^ Scherbatskoy, T.; Klein, R.M. 1983. Response of spruce Picea glauca and birch Betula alleghaniensis foliage to leaching by acidic mists.
J. Environ. Qual. 12:189–195. ^ Braun, David (February 7, 2010). "Boreal landscapes added to Canada's parks Boreal landscapes added to Canada's parks". NatGeo News Watch: News Editor David Braun's Eye on the World. National Geographic Society. Retrieved 17 February 2010. ^ Gillespie, Kerry (2008-07-15). "Ontario to protect vast tract". Toronto Star. Retrieved 25 June 2012. ^ Marsden, William (2008-11-16).
"Charest promises to protect north". Montreal Gazette. Archived from the original on 5 April 2011. Retrieved 25 June 2012. ^ "1,500 Scientists Worldwide Call for Protection of Canada's Boreal Forest". Retrieved 25 June 2012. ^ "Boreal forest and global change". Philos. Trans. R. Soc. Lond. B Biol. Sci. 363 (1501): 2245–9. July 2008. doi:10.1098/rstb.2007.2196. PMC 2387060 . PMID 18006417. ^ "Report: The Carbon the World Forgot".
Boreal Songbird Initiative. ^ a b Kurkowski, 1911. ^ a b Nilsson, 421. ^ Johnson, 212. ^ a b Johnson, 200. ^ Kurkowski, 1912. ^ Payette, 289. ^ Jasinski, 561.

General references

Arno, S. F. & Hammerly, R. P. 1984. Timberline. Mountain and Arctic Forest Frontiers. The Mountaineers, Seattle. ISBN 0-89886-085-7. Arno, S. F., Worral, J., & Carlson, C. E. (1995). Larix lyallii: Colonist of tree line and talus sites.
Pp. 72–78 in Schmidt, W. C. & McDonald, K. J., eds., Ecology and Management of Larix Forests: A Look Ahead. USDA Forest Service General Technical Report GTR-INT-319. Hoffmann, Robert S. (1958). "The Meaning of the Word "Taiga"" Ecology 39(3) (Jul., 1958), pp. 540-541 Nilsson, M.C. "Understory vegetation as a forest ecosystem driver, evidence from the northern Swedish boreal forest." Frontiers in Ecology and the Environment.
3.8 (2005): 421-428. Kurkowski, Thomas. "Relative Importance of Different Secondary Successional Pathways in an Alaskan Boreal Forest." Canadian Journal of Forest Research. 38. (2008): 1911-1923. Payette, Serge. "Origin of the lichen woodland at its southern range limit in eastern Canada: the catastrophic impact of insect defoliators and fire on the spruce-moss forest." Canadian journal of forest research.
30.2 (2000): 288-305. Johnson, E.A. "Vegetation Organization and Dynamics of Lichen Woodland Communities in the Northwest Territories." Ecology. 62.1 (1981): 200-215. Jasinski, J.P. "The Creation of Alternative Stable States in Southern Boreal Forest: Quebec, Canada." Ecological Monographs. 75.4 (2005): 561-583.

Further reading

Sayre, April Pulley (1994), Taiga, Twenty-First Century Books, ISBN 0-8050-2830-7. Gawthrop, Daniel (1999), Vanishing Halo: Saving the Boreal Forest, Greystone Books/David Suzuki Foundation, ISBN 0-89886-681-2. Day, Trevor; Richard Garratt (2006), Taiga, Facts On File, ISBN 0-8160-5329-4.

External links

The Conservation Value of the North American Boreal Forest from an Ethnobotanical perspective, a report by the Boreal Songbird Initiative; Boreal Canadian Initiative; International Boreal Conservation campaign; Tundra and Taiga; Threats to Boreal Forests, a Greenpeace campaign against lumber giant Weyerhaeuser's logging practices in the Canadian boreal forest; Rainforest Action Network; Arctic and Taiga, Canadian Geographic; Terraformers Canadian Taiga Conservation Foundation; Coniferous Forest, Earth Observatory, NASA; Taiga Rescue Network (TRN), a network of NGOs, indigenous peoples and individuals that works to protect the boreal forests; Index of Boreal Forests/Taiga ecoregions at bioimages.vanderbilt.edu; The Canadian Boreal Forest, The Nature Conservancy and its partners; Slater Museum of Natural History: Taiga; Taiga Biological Station, founded by Dr. William (Bill) Pruitt, Jr., University of Manitoba.
Let’s see and understand closure through an example.
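The example code itself is not shown in this copy, so here is a minimal sketch that is consistent with the explanation below (the names foo, inner and b come from that explanation; the literal value assigned to b is an assumption):

```javascript
function foo() {
  var b = 1; // b is local to foo()
  function inner() {
    return b; // inner() reads b through its scope chain
  }
  return inner;
}

var get = foo();
console.log(get()); // 1 -- b is still reachable even after foo() has returned
```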
Explanation: We can access the variable b, which is defined in the function foo(), through the function inner(), as the latter preserves the scope chain of the enclosing function at the time of execution of the enclosing function; i.e., the inner function knows the value of b through its scope chain.
This is closure in action: the inner function has access to the outer function's variables as well as to all the global variables.
Definition of Closure:
In programming languages, closures (also lexical closures or function closures) are techniques for implementing lexically scoped name binding in languages with first-class functions. Operationally, a closure is a record storing a function[a] together with an environment: a mapping associating each free variable of the function (variables that are used locally, but defined in an enclosing scope) with the value or reference to which the name was bound when the closure was created.[b]
In other words, a closure is created when a child function keeps the environment of the parent scope even after the parent function has already executed.
Note: Closure is the combination of a function and the lexical environment in which that function was created. So every function declared within another function has access to the scope chain of the outer function, and the variables created within the scope of the outer function will not be destroyed.
Now let’s look at another example.
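Again, the code is missing here; the following sketch is reconstructed from the explanation below (the names foo, outer_arg and inner_arg come from that explanation; the argument values are assumptions):

```javascript
function foo(outer_arg) {
  function inner(inner_arg) {
    return outer_arg + inner_arg; // outer_arg survives via the closure
  }
  return inner;
}

var add5 = foo(5);    // foo(5) has finished executing here...
console.log(add5(4)); // 9 -- ...yet inner still reads outer_arg
```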
Explanation: In the above example we used a parameterized function rather than a default one. Note that even when we are done with the execution of foo(5), we can access the outer_arg variable from the inner function. Executing the inner function then produces the sum of outer_arg and inner_arg, as desired.
Now let’s see an example of closure within a loop.
In this example we will store an anonymous function at every index of an array.
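A minimal sketch matching the output shown below (the function name outer comes from the explanation; the loop bound of 4 is inferred from the four printed values):

```javascript
function outer() {
  var arr = [];
  for (var i = 0; i < 4; i++) {
    // every stored function keeps a reference to the same variable i
    arr[i] = function () { return i; };
  }
  return arr;
}

var fns = outer();
for (var j = 0; j < fns.length; j++) {
  console.log(fns[j]()); // prints 4 four times -- i ended the loop at 4
}
```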
4 4 4 4
Explanation: Did you guess the right answer? In the above code we have created four closures, all of which point to the variable i, which is local to the function outer. A closure doesn't remember the value of a variable; it only stores a reference to the variable, and hence returns its current value. In the above code, when the value of i is updated, the change is reflected in all the closures because each closure stores the reference.
Let's see the correct way to write the above code so as to get different values of i at different indexes.
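A sketch consistent with the explanation below (the name create_Closure comes from that explanation; the rest is reconstructed):

```javascript
function create_Closure(val) {
  // each call captures its own copy of val
  return function () { return val; };
}

var arr = [];
for (var i = 0; i < 4; i++) {
  arr[i] = create_Closure(i); // i is passed as an argument on every call
}
for (var j = 0; j < arr.length; j++) {
  console.log(arr[j]()); // 0 1 2 3
}
```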
0 1 2 3
Explanation: In the above code we are updating the argument of the function create_Closure with every call. Hence, we get different values of i at different indexes.
Note: It may be slightly difficult to grasp the concept of closure at once, but try experimenting with closures in different scenarios, such as creating getters/setters, callbacks, and so on.
These so-called starburst galaxies produce stars at a prodigious rate—creating the equivalent of a thousand new suns per year. Now the astronomers have found starbursts that were churning out stars when the universe was just a billion years old. Previously, astronomers didn't know whether galaxies could form stars at such high rates so early in time.
The discovery enables astronomers to study the earliest bursts of star formation and to deepen their understanding of how galaxies formed and evolved. The team describes their findings in a paper being published online on March 13 in the journal Nature and in two others that have been accepted for publication in the Astrophysical Journal.
Shining with the energy of over a hundred trillion suns, these newly discovered galaxies represent what the most massive galaxies in our cosmic neighborhood looked like in their star-making youth. "I find that pretty amazing," says Joaquin Vieira, a postdoctoral scholar at Caltech and leader of the study. "These aren't normal galaxies. They were forming stars at an extraordinary rate when the universe was very young—we were very surprised to find galaxies like this so early in the history of the universe."
The astronomers found dozens of these galaxies with the South Pole Telescope (SPT), a 10-meter dish in Antarctica that surveys the sky in millimeter-wavelength light—which is between radio waves and infrared on the electromagnetic spectrum. The team then took a more detailed look using the new Atacama Large Millimeter Array (ALMA) in Chile's Atacama Desert.
The new observations represent some of ALMA's most significant scientific results yet, Vieira says. "We couldn't have done this without the combination of SPT and ALMA," he adds. "ALMA is so sensitive, it is going to change our view of the universe in many different ways."
The astronomers only used the first 16 of the 66 dishes that will eventually form ALMA, which is already the most powerful telescope ever constructed for observing at millimeter and submillimeter wavelengths.
With ALMA, the astronomers found that more than 30 percent of the starburst galaxies are from a time period just 1.5 billion years after the big bang. Previously, only nine such galaxies were known to exist, and it wasn't clear whether galaxies could produce stars at such high rates so early in cosmic history. Now, with the new discoveries, the number of such galaxies has nearly doubled, providing valuable data that will help other researchers constrain and refine theoretical models of star and galaxy formation in the early universe.
But what's particularly special about the new findings, Vieira says, is that the team determined the cosmic distance to these dusty starburst galaxies by directly analyzing the star-forming dust itself. Previously, astronomers had to rely on a cumbersome combination of indirect optical and radio observations using multiple telescopes to study the galaxies. But because of ALMA's unprecedented sensitivity, Vieira and his colleagues were able to make their distance measurements in one step, he says. The newly measured distances are therefore more reliable and provide the cleanest sample yet of these distant galaxies.
The measurements were also made possible because of the unique properties of these objects, the astronomers say. For one, the observed galaxies were selected because they could be gravitationally lensed—a phenomenon predicted by Einstein in which another galaxy in the foreground bends the light from the background galaxy like a magnifying glass. This lensing effect makes background galaxies appear brighter, cutting the amount of telescope time needed to observe them by 100 times.
Secondly, the astronomers took advantage of a fortuitous feature in these galaxies' spectra—which is the rainbow of light they emit—dubbed the "negative K correction." Normally, galaxies appear dimmer the farther away they are—in the same way a lightbulb appears fainter the farther away it is. But it turns out that the expanding universe shifts the spectra in such a way that light in millimeter wavelengths doesn't appear dimmer at greater distances. As a result, the galaxies appear just as bright in these wavelengths no matter how far away they are—like a magic lightbulb that appears just as bright no matter how distant it is.
"To me, these results are really exciting because they confirm the expectation that when ALMA is fully available, it can really allow astronomers to probe star formation all the way up to the edge of the observable universe," says Fred Lo, who, while not a participant in the study, was recently a Moore Distinguished Scholar at Caltech. Lo is a Distinguished Astronomer and Director Emeritus at the National Radio Astronomy Observatory, the North American partner of ALMA.
Additionally, observing the gravitational lensing effect will help astronomers map the dark matter—the mysterious unseen mass that makes up nearly a quarter of the universe—in the foreground galaxies. "Making high-resolution maps of the dark matter is one of the future directions of this work that I think is particularly cool," Vieira says.
These results represent only about a quarter of the total number of sources discovered by Vieira and his colleagues with the SPT, and they anticipate finding additional distant, dusty, starburst galaxies as they continue analyzing their data set. The ultimate goal for astronomers, Lo says, is to observe galaxies at all wavelengths throughout the history of the universe, piecing together the complete story of how galaxies have formed and evolved. So far, astronomers have made much progress in creating computer models and simulations of early galaxy formation, he says. But only with data—such as these new galaxies—will we ever truly piece together cosmic history. "Simulations are simulations," he says. "What really counts is what you see."
In addition to Vieira, the other Caltech authors on the Nature paper are Jamie Bock, professor of physics; Matt Bradford, visiting associate in physics; Martin Lueker-Boden, postdoctoral scholar in physics; Stephen Padin, senior research associate in astrophysics; Erik Shirokoff, a postdoctoral scholar in astrophysics with the Keck Institute for Space Studies; and Zachary Staniszewski, a visitor in physics. There are a total of 70 authors on the paper, which is titled "High-redshift, dusty, starburst galaxies revealed by gravitational lensing." This research was funded by the National Science Foundation, the Kavli Foundation, the Gordon and Betty Moore Foundation, NASA, the Natural Sciences and Engineering Research Council of Canada, the Canadian Research Chairs program, and the Canadian Institute for Advanced Research.
The work to measure the distances to the galaxies is described in the Astrophysical Journal paper "ALMA redshifts of millimeter-selected galaxies from the SPT survey: The redshift distribution of dusty star-forming galaxies," by Axel Weiss of the Max-Planck-Institut für Radioastronomie, and others. The study of the gravitational lensing is described in the Astrophysical Journal paper "ALMA observations of strongly lensed dusty star-forming galaxies," by Yashar Hezaveh of McGill University, and others.
ALMA, an international astronomy facility, is a partnership of Europe, North America, and East Asia in cooperation with the Republic of Chile. ALMA construction and operations are led on behalf of Europe by the European Southern Observatory (ESO) organization, on behalf of North America by the National Radio Astronomy Observatory (NRAO), and on behalf of East Asia by the National Astronomical Observatory of Japan (NAOJ). The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning, and operation of ALMA.

The South Pole Telescope (SPT) is a 10-meter telescope located at the National Science Foundation (NSF) Amundsen-Scott South Pole Station, which lies within one kilometer of the geographic south pole. The SPT is designed to conduct low-noise, high-resolution surveys of the sky at millimeter and submillimeter wavelengths, with the particular design goal of making ultrasensitive measurements of the cosmic microwave background (CMB). The first major survey with the SPT was completed in October 2011 and covers 2,500 square degrees of the southern sky in three millimeter-wave observing bands. This is the deepest large millimeter-wave data set in existence and has already led to many groundbreaking science results, including the first detection of galaxy clusters through their Sunyaev-Zel'dovich effect signature, the most sensitive measurement yet of the small-scale CMB power spectrum, and the discovery of a population of ultrabright, high-redshift, star-forming galaxies. The SPT is funded primarily by the Division of Polar Programs in NSF's Geoscience Directorate. Partial support also is provided by the Kavli Institute for Cosmological Physics (KICP), an NSF-funded Physics Frontier Center; the Kavli Foundation; and the Gordon and Betty Moore Foundation. The SPT collaboration is led by the University of Chicago and includes research groups at Argonne National Laboratory, the California Institute of Technology, Cardiff University, Case Western Reserve University, Harvard University, Ludwig-Maximilians-Universität, the Smithsonian Astrophysical Observatory, McGill University, the University of Arizona, the University of California at Berkeley, the University of California at Davis, the University of Colorado at Boulder, and the University of Michigan, as well as individual scientists at several other institutions, including the European Southern Observatory and the Max-Planck-Institut für Radioastronomie in Bonn, Germany.
Deborah Williams-Hedges | EurekAlert!
In the years since the end of World War II, the Commonwealth has continued to expand.
New countries have been added and there are currently eight new Commonwealth countries, and four more are under negotiation.
But as with the rest of the world, it is still quite a long way from full democracy.
There are still many parts of the Commonwealth without full democratic institutions, and there is still a long road to get there.
How did the Commonwealth end up with this number of countries?
Commonwealth membership came about in 1945 as a result of the US-led occupation of the former British colonies of Vietnam and Cambodia.
These countries, which were not yet independent, had joined the Commonwealth, but they did not have a formal parliament and had to rely on their own executive and legislative bodies.
The Commonwealth countries did not get their own currency, so it was up to the Commonwealth governments to create it.
This created the Commonwealth System of Units (CSUs), a system of units of currency that allowed them to trade freely with each other and with the world outside their borders.
They were able to set their own prices for goods and services, which allowed them the freedom to control their own economic affairs.
The CSUs were also able to create their own political institutions, which gave them the political power to decide how much money and goods could be sent to the other Commonwealth countries.
Commonwealth countries were also granted full freedom of movement and trade.
The idea that a country could freely move from one country to another was not a concept that existed until the early 1990s.
As a result, Commonwealth countries had to make sure that the people living in their territories were treated equally, and they had to guarantee that the citizens of the other countries were not subjected to discrimination.
The process of creating these countries began in the 1960s.
After the war ended, the British Commonwealth countries realised that they had a lot of territory that they did not want to lose.
As they began looking at ways of making up for the lost territory, they realised that a common currency would be an ideal way to do so.
This was the creation of the dollar.
In 1964, the then Commonwealth secretary of state, Robert Maxwell, wrote to the prime minister, Harold Macmillan, and offered to create a Commonwealth currency for the Commonwealth countries to use.
Maxwell’s offer was rejected.
It was not until 1975 that a new Commonwealth currency was finally adopted.
At that time, the United States had been the Commonwealth’s dominant currency and had an economy that relied heavily on US dollars.
As the Commonwealth became more successful in developing its own currencies, the US started to lose influence over the Commonwealth.
The United States and the Commonwealth were locked in a trade war, and the US Treasury began to take steps to weaken the Commonwealth's currency.
This led to the creation in 1973 of the International Monetary Fund (IMF).
The IMF was a private organisation created to manage the international monetary system.
The IMF has been a key player in managing the Commonwealth and the world’s currencies for more than 60 years.
The US has used the IMF to set up its own currency since the early 1980s.
The World Bank has also been used by the Commonwealth to set its own monetary policy.
The development of the IMF in the 1970s has been particularly controversial in the Commonwealth as it has been criticised for creating a currency that has failed to meet the needs of the vast majority of the people of the countries that it manages.
It has also had to contend with some of the most powerful institutions in the world.
One of the criticisms levelled at the IMF has to do with the role it played in the creation and use of the World Trade Organisation.
This organisation was established in 1972 to deal with disputes between rich and poor countries.
In the 1970’s, when the IMF was set up, most of the developing countries were struggling to maintain their economies and their currencies.
As an alternative, the IMF agreed to help developing countries establish a currency.
Developing countries in the Caribbean were very concerned about the potential impact that the use of this currency would have on their economies.
They believed that it would cause the prices of goods and commodities to rise.
They also feared that the IMF would take money out of their economies, forcing them to pay higher prices.
So in 1974, the World Bank created a special working group, the International Bank for Reconstruction and Development (IBRD), to deal with this issue.
The group was led by a group of former senior World Bank officials who were former IMF officials and therefore understood the problems that developing countries in Africa had.
The working group also included a team of experts from the IMF’s foreign reserves unit, who had experience in managing currency crises.
The members of the working group were members of a very influential group called the IMF Board of Governors.
They included members of all the World Banking and International Monetary organisations.
It became clear very quickly that the working group's role was to manage
NCERT Solutions for Class 9 Maths Chapter 4 Linear Equations In Two Variables are considered to be very useful when you are preparing for the CBSE Class 9 Maths Term I exams. Here, we bring to you detailed answers to the exercises of NCERT Class 9 Maths Chapter 4. Subject matter experts who created these NCERT Solutions have collected these questions for you to revise from Chapter 4 of the NCERT Textbook. We provide you with accurate solutions to all the questions that are covered in the NCERT books. These NCERT Solutions for Class 9 Maths are based on the latest update of the term-wise CBSE syllabus for 2021-22 and its guidelines. You will get enough practice solving these exercises, and it will also help you to score high marks.
The NCERT Solutions for Class 9 Maths helps to give you proper knowledge about the subject and the topic “Linear equations”. Does a linear equation in two variables have a solution? If yes, is it unique? What does the solution look like on the Cartesian plane? You shall also use the concepts you studied in Chapter 3 and the NCERT Solutions will also give you an idea about these concepts. These questions have been devised as per the updated Term I CBSE syllabus.
Summary of NCERT Solutions for Class 9 Maths Chapter 4 Linear Equations In Two Variables
“Linear Equations In Two Variables” is the 4th chapter of the Class 9 Maths NCERT Textbook, and it falls under Unit 2 Algebra. For the first and second term exams, from the unit Algebra, you usually get 7 questions: 1 multiple choice question of 1 mark, 2 short answers with reasoning for a total of 4 marks, 3 short answer questions for a total of 9 marks, and 1 long answer question of 6 marks. Thus, the total weightage for the unit is 20 marks. Topics covered under this chapter are listed below.
| Chapter 4 | Linear Equations In Two Variables |
|---|---|
| 4.3 | Solution of a Linear Equation |
| 4.4 | Graph of a Linear Equation In Two Variables |
| 4.5 | Equations of Lines Parallel to the x-axis and y-axis |
NCERT Solutions for Class 9 Maths Chapter 4 – Linear Equations In Two Variables
In this chapter, the knowledge of linear equations in one variable is recalled and extended to that of two variables. Any equation which can be put in the form ax + by + c = 0, where a, b and c are real numbers, and a and b are not both zero, is called a linear equation in two variables. The Maths NCERT Solutions of Class 9 offers chapter-wise solutions with precise explanations of the exercises provided in the textbook. Students can easily understand the concept of linear equations of Algebra with the help of easy examples provided in these NCERT Solutions.
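For instance (an illustrative example of ours, not taken from the textbook exercises), the equation

$$2x + 3y - 6 = 0 \qquad (a = 2,\; b = 3,\; c = -6)$$

is a linear equation in two variables, and both $(0, 2)$ and $(3, 0)$ satisfy it; this already hints that such an equation has more than one solution.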
Key Benefits of NCERT Solutions for Class 9 Maths Chapter 4 – Linear Equations In Two Variables
In this chapter, you have studied about Linear Equations In Two Variables. And here we will see how the NCERT Solutions for Class 9 Maths Chapter 4 can benefit the students:
- It is created on the basis of the Term I CBSE syllabus for 2021-22
- It features all questions under each exercise section of the textbook
- Students get a clear idea about the concept and topic
- Solving the questions in the solutions will help them to self-evaluate their performance
- Students can prepare for the exams on the basis of their knowledge gap
Students can also access the NCERT Solutions for Class 9 of other subjects in a chapter-wise format as well. Referring to these solutions will boost term 1 and 2 exam preparations for the students.
Practising more problems is very important when it comes to exam preparation. For this reason, students can also solve the questions from other textbooks prescribed by the CBSE Board.
- RD Sharma Solutions for Class 9 Maths Chapter 13 Linear Equations in Two Variables
Frequently Asked Questions on NCERT Solutions for Class 9 Maths Chapter 4
Give me a summary of exercises present in NCERT Solutions for Class 9 Maths Chapter 4.
There are five exercises present in NCERT Solutions for Class 9 Maths Chapter 4, viz.:
4.1 – Introduction
4.2 – Linear Equations
4.3 – Solution of a Linear Equation
4.4 – Graph of a Linear Equation In Two Variables
4.5 – Equations of Lines Parallel to the x-axis and y-axis
Is it necessary to learn all the questions present in NCERT Solutions for Class 9 Maths Chapter 4?
Yes, if you want to score well in your CBSE Term I exams, then you have to practice all the questions and the formulae related to them. The solutions present on BYJU’S website are very accurate and clear. Students can start practicing NCERT Solutions for Class 9 Maths Chapter 4 to score higher marks. These solutions can be helpful not only for the first term exam preparation but also in solving homework and assignments.
Is NCERT Solutions for Class 9 Maths Chapter 4 difficult to understand?
No, not at all. If you practice regularly, NCERT Solutions for Class 9 Maths Chapter 4 is not very difficult to understand. The main aim of these solutions is to cover the fundamental aspects of Maths, which, in turn, helps the students to understand every concept clearly.
PROPOSITION XI. THEOREM.
If two circles cut each other in two points, the line which passes through their centres, will be perpendicular to the chord which joins the points of intersection, and will divide it into two equal parts.
For, let the line AB join the points of intersection. It will be a common chord to the two circles. Now if a perpendicular be erected from the middle of this chord, it will pass through each of the two centres C and D (Prop. VI. Sch.). But no more than one straight line can be drawn through two points; hence the straight line, which passes through the centres, will bisect the chord at right angles.

PROPOSITION XII. THEOREM.
If the distance between the centres of two circles is less than the sum of the radii, the greater radius being at the same time less than the sum of the smaller and the distance between the centres, the two circumferences will cut each other.
For, to make an intersection possible, the triangle CAD must be possible. Hence, not only must we have CD<AC+AD, but also the greater radius AD< AC+CD (Book I. Prop. VII.). And, whenever the triangle CAD can be constructed, it is plain
that the circles described from the centres C and D, will cut each other in A and B.
PROPOSITION XIII. THEOREM.
If the distance between the centres of two circles is equal to the sum of their radii, the two circles will touch each other externally.
Let C and D be the centres at a distance from each other equal to CA+AD.
The circles will evidently have the point A common, and they will have no other; because, if they had two points common, the distance between their centres must be less than the sum of their radii.
PROPOSITION XIV. THEOREM.
If the distance between the centres of two circles is equal to the difference of their radii, the two circles will touch each other internally.
Let C and D be the centres at a distance from each other equal to AD-CA.
It is evident, as before, that they will have the point A common: they can have no other; because, if they had, the greater radius AD must be less than the sum of the radius AC and the distance CD between the centres (Prop. XII.); which is contrary to the supposition.
Cor. Hence, if two circles touch each other, either externally or internally, their centres and the point of contact will be in the same right line.
Scholium. All circles which have their centres on the right line AD, and which pass through the point A, are tangent to each other. For, they have only the point A common; and if through the point A, AE be drawn perpendicular to AD, the straight line AE will be a common tangent to all the circles.
PROPOSITION XV. THEOREM.
In the same circle, or in equal circles, equal angles having their vertices at the centre, intercept equal arcs on the circumference: and conversely, if the arcs intercepted are equal, the angles contained by the radii will also be equal.
Let C and C be the centres of equal circles, and the angle ACB=DCE.
First. Since the angles ACB, DCE, are equal, they may be placed upon each other; and since their sides are equal, the point A will evidently fall on D, and the point B on E. But, in that case, the arc AB must also
fall on the arc DE; for if the arcs did not exactly coincide, there would, in the one or the other, be points unequally distant from the centre; which is impossible: hence the arc AB is equal to DE.
Secondly. If we suppose AB=DE, the angle ACB will be equal to DCE. For, if these angles are not equal, suppose ACB to be the greater, and let ACI be taken equal to DCE. From what has just been shown, we shall have AI=DE: but, by hypothesis, AB is equal to DE; hence AI must be equal to AB, or a part to the whole, which is absurd (Ax. 8.): hence, the angle ACB is equal to DCE.
PROPOSITION XVI. THEOREM.
In the same circle, or in equal circles, if two angles at the centre are to each other in the proportion of two whole numbers, the intercepted arcs will be to each other in the proportion of the same numbers, and we shall have the angle to the angle, as the corresponding arc to the corresponding arc.
Suppose, for example, that the angles ACB, DCE, are to each other as 7 is to 4; or, which is the same thing, suppose that the angle M, which may serve as a common measure, is contained 7 times in the angle ACB, and 4 times in DCE.
The seven partial angles ACm, mCn, nCp, &c., into which ACB is divided, being each equal to any of the four partial angles into which DCE is divided; each of the partial arcs Am, mn, np, &c., will be equal to each of the partial arcs Dx, xy, &c. (Prop. XV.). Therefore the whole arc AB will be to the whole arc DE, as 7 is to 4. But the same reasoning would evidently apply, if in place of 7 and 4 any numbers whatever were employed; hence, if the ratio of the angles ACB, DCE, can be expressed in whole numbers, the arcs AB, DE, will be to each other as the angles ACB, DCE.
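In modern notation, the argument of this proposition can be restated compactly (this summary is an editorial addition, not part of the original text). With $M$ the common angular measure and $m$ the arc it intercepts:

$$\angle ACB = 7M \;\Rightarrow\; \text{arc } AB = 7m, \qquad \angle DCE = 4M \;\Rightarrow\; \text{arc } DE = 4m,$$

$$\frac{\angle ACB}{\angle DCE} \;=\; \frac{\text{arc } AB}{\text{arc } DE} \;=\; \frac{7}{4}.$$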
Scholium. Conversely, if the arcs, AB, DE, are to each other as two whole numbers, the angles ACB, DCE will be to each other as the same whole numbers, and we shall have ACB : DCE :: AB : DE. For the partial arcs, Am, mn, &c. and Dx, xy, &c., being equal, the partial angles ACm, mCn, &c. and DCx, xCy, &c. will also be equal.
PROPOSITION XVII. THEOREM.
Whatever be the ratio of two angles, they will always be to each other as the arcs intercepted between their sides; the arcs being described from the vertices of the angles as centres, with equal radii.
Let ACB be the greater and ACD the less angle.
Let the less angle be placed on the greater. If the proposition is not true, the angle ACB will be to the angle ACD as the arc AB is to an arc
greater or less than AD. Suppose this arc to be greater, and let it be represented by AO; we shall thus have, the angle ACB: angle ACD:: arc AB: arc AO. Next conceive the arc
AB to be divided into equal parts, each of which is less than DO; there will be at least one point of division between D and O; let I be that point; and draw CI. The arcs AB, AI, will be to each other as two whole numbers, and by the preceding theorem, we shall have, the angle ACB : angle ACI :: arc AB : arc AI. Comparing these two proportions with each other, we see that the antecedents are the same: hence, the consequents are proportional (Book II. Prop. IV.); and thus we find the angle ACD : angle ACI :: arc AO : arc AI. But the arc AO is greater than the arc AI; hence, if this proportion is true, the angle ACD must be greater than the angle ACI: on the contrary, however, it is less; hence the angle ACB cannot be to the angle ACD as the arc AB is to an arc greater than AD.
By a process of reasoning entirely similar, it may be shown that the fourth term of the proportion cannot be less than AD; hence it is AD itself; therefore we have
Angle ACB : angle ACD :: arc AB : arc AD.
Cor. Since the angle at the centre of a circle, and the arc intercepted by its sides, have such a connexion, that if the one be augmented or diminished in any ratio, the other will be augmented or diminished in the same ratio, we are authorized to establish the one of those magnitudes as the measure of the other; and we shall henceforth assume the arc AB as the measure of the angle ACB. It is only necessary that, in the comparison of angles with each other, the arcs which serve to measure them, be described with equal radii, as is implied in all the foregoing propositions.
Scholium 1. It appears most natural to measure a quantity by a quantity of the same species; and upon this principle it would be convenient to refer all angles to the right angle; which, being made the unit of measure, an acute angle would be expressed by some number between 0 and 1; an obtuse angle by some number between 1 and 2. This mode of expressing angles would not, however, be the most convenient in practice. It has been found more simple to measure them by arcs of a circle, on account of the facility with which arcs can be made equal to given arcs, and for various other reasons. At all events, if the measurement of angles by arcs of a circle is in any degree indirect, it is still equally easy to obtain the direct and absolute measure by this method; since, on comparing the arc which serves as a measure to any angle, with the fourth part of the circumference, we find the ratio of the given angle to a right angle, which is the absolute |
By: Damarrio C. Holloway
A parametric curve in the plane is a pair of functions x = f(t)
y = g(t)
where the two continuous functions define ordered pairs (x, y). These two functions are called the parametric equations of the curve that they form. The extent of the curve will depend on the range of t; in this exploration, we will take t to be the angle of rotation that some line makes from an initial location. The functions x and y will vary with this parameter t.
Let us explore the different variations of graphs using the base equation of a cycloid:
x = (a + cos(3t)) cos(t)
y = (a + cos(3t)) sin(t). In the following graphs we will first set numbers for a and we will vary t to explore the angles of rotation.
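Before looking at the figures, it can help to tabulate a few points numerically. The following is a small sketch of ours (not part of the original exploration; the function name samplePoints is made up) that samples the curve for a given a and range of t:

```javascript
// Sample points (x, y) on the curve x = (a + cos(3t)) cos(t), y = (a + cos(3t)) sin(t)
function samplePoints(a, tMax, steps) {
  var pts = [];
  for (var k = 0; k <= steps; k++) {
    var t = (tMax * k) / steps;
    var r = a + Math.cos(3 * t); // the radial factor (a + cos(3t))
    pts.push([r * Math.cos(t), r * Math.sin(t)]);
  }
  return pts;
}

// a = 1 with t running from 0 to 2*pi traces the complete three-leaf rose
var pts = samplePoints(1, 2 * Math.PI, 360);
console.log(pts[0]); // [2, 0] -- at t = 0, r = 1 + cos(0) = 2
```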
In this initial graph, we have the base equation of our cycloid with a = 1 and the rotation of our graph varying over t ranging from 0…..1. In this graph we have one curve or, as we will see later, half a flower leaf.
Let’s explore different ranges of t, shall we?
Figure 2 Figure 3
Figure 2 shows the range of t: 0….2, while Figure 3 shows the range t: 0….5. In these images, we see that the set range for t determines the number of curves the figure will make.
A complete look at the rotation:
yields a three leaf rose.
Let us now explore variations of a in our equation.
This figure displays the rotations when a = 0.5 and t ranges from 0….8. We see that it has two sets of complete rotations when a = 0.5.
As the range of t is increased by a multiple of 10, the rotations of the graph increase, giving the graph a bold look.
Even with a multiplication of 5 from the previous graph, the rotation of the curve has a drastic increase.
Figure 5 Figure 6
When a = 2, we have a quite different graph. The rotation of the graph does not go through the origin as did the original graphs.
We have seen what varying a and t will do to the graph, now let’s take a quick look at a change in “leaves.”
In Figure 7, we now have a 4 leaf rose because of the increase from 3 to 4 for the ‘t’ coefficient.
Figure 8 yields a 5 leaf rose, the ‘t’ coefficient having been increased by 1 again. The rose rotates through the origin because a = 1, as in the original equation. The shapes and curves for this particular parametric equation are endless. As you can see, the higher you set your t-values, the more rotations you can create. Also, with more rotations and any increase in your a-values, the closer your graph reaches the origin.
Researchers at Weizmann Institute of Science and Cinvestav recently carried out a study testing the theory of Hawking radiation on laboratory analogues of black holes. In their experiments, they used light pulses in nonlinear fiber optics to establish artificial event horizons.
Back in 1974, renowned physicist Stephen Hawking amazed the physics world with his theory of Hawking radiation, which suggested that rather than being black, black holes should glow slightly due to quantum effects near the black hole's event horizon. According to Hawking's theory, the strong gravitational field around a black hole can affect the production of matching pairs of particles and anti-particles.
Should these particles be created just outside the event horizon, the positive member of the pair could escape, resulting in observable thermal radiation emanating from the black hole. This radiation, later termed Hawking radiation, would hence consist of photons, neutrinos and other subatomic particles. The theory of Hawking radiation was among the first to combine concepts from quantum mechanics with Albert Einstein's theory of General Relativity.
"I learned General Relativity in 1997 by lecturing a course, not by taking a course," Ulf Leonhardt, one of the researchers who carried out the recent study, told Phys.org. "This was a rather stressful experience where I was just a few weeks ahead of the students, but I really got to know General Relativity and fell in love with it. Fittingly, this also happened in Ulm, Einstein's birthplace. Since then, I have been looking for connections between my field of research, quantum optics and General Relativity. My main goal is to demystify General Relativity. If, as I and others have shown, ordinary optical materials like glass act like curved spaces, then the curved space-time of General Relativity becomes something tangible, without losing its charm."
In collaboration with his first Ph.D. student Paul Piwnicki, Leonhardt put together some initial ideas of how to create optical black holes, which were published in 1999 and 2000. In 2004, he finally achieved a method that actually worked, which is the one used in his recent study.
"Imagine, like in Einstein's gedanken experiments, light chasing after another pulse of light," Leonhardt explained. "Suppose that all the light travels inside an optical fiber. In the fiber glass, the pulse changes the speed of the light chasing it a little, such that the light cannot overtake the pulse. It experiences a white-hole horizon; a place it cannot enter. The front of the pulse acts like the exact opposite: a black-hole horizon, a place the light cannot leave. This is the idea in a nutshell."
Leonhardt and his colleagues published and demonstrated this idea in 2008. Subsequently, they tried to use it to demonstrate Hawking radiation.
Hawking radiation has never been directly observed in space, as this is not currently feasible. However, it can be demonstrated in laboratory environments, for instance, using Bose-Einstein condensates, water waves, polaritons or light. In the past, several researchers tried to test Hawking radiation in the lab using these techniques, yet most of their studies were, in fact, problematic and have thus been disputed.
For instance, some past findings obtained with intense light pulses in optical media turned out to be inconsistent with theory. Rather than observing Hawking radiation made by horizons, as the authors themselves found out later, they had, in fact, observed horizon-less radiation created by their light pulses, as they exceeded the phase velocity of light for other frequencies. Other studies attempting to observe Hawking radiation on water waves and in Bose-Einstein condensates also turned out to be problematic.
Discussing the outcomes of these studies with Physics World, Leonhardt wrote, "I greatly admire the heroism of the people doing them, and their technical skills and expertise, but this is a difficult subject." He also wrote: "Horizons are perfect traps; it is easy to get trapped behind them without noticing, and this applies to horizon research, as well. We learn and become experts according to the classic definition: An expert is someone who has made all possible mistakes (and learned from them)."
As proven by previous efforts, observing Hawking radiation in the lab is a highly challenging task. The study carried out by Leonhardt and his colleagues could be the first valid demonstration of Hawking radiation in optics.
"Black holes are surrounded by their event horizons," Leonhardt explained. "The horizon marks the border where light can no longer escape. Hawking predicted that at the horizon light quanta—photons—are created. One photon appears outside the horizon and is able to get away, while its partner appears on the inside and falls into the black hole. According to quantum mechanics, particles are associated with waves. The photon on the outside belongs to a wave that oscillates with positive frequency, the wave of its partner on the inside oscillates with a negative frequency."
In their study, Leonhardt and his colleagues made light out of positive and negative frequencies. Their positive-frequency light was infrared, while the negative-frequency one was ultraviolet. The researchers detected both of them and then compared them with Hawking's theory.
The tiny bit of ultraviolet light that they managed to detect using sensitive equipment is the first clear sign of stimulated Hawking radiation in optics. This radiation is referred to as 'stimulated' because it is stimulated by the probe light that the researchers sent in to chase the pulses.
"Our most important finding, perhaps, is that black holes are not something out of the ordinary, but that they closely resemble what light pulses do to ordinary light in fibers," Leonhardt said. "Demonstrating subtle quantum phenomena like Hawking radiation is not easy. It takes extremely short pulses, extraordinary fibers, sensitive equipment and, last but not least, the hard work of dedicated students. But even Hawking radiation is something one can actually understand."
The study carried out by Leonhardt and his colleagues is an important contribution to the physics field, as it provides the first laboratory demonstration of Hawking radiation in optics. The researchers also found the analogy to event horizons to be remarkably robust, despite pushing the optics to the extreme, which increased their confidence in the validity of their theories.
"We now need to improve our setup to get ready for the next big challenge: the observation of spontaneous Hawking radiation," Leonhardt said. "In this case, the radiation is not stimulated anymore, except by the inevitable fluctuations of the quantum vacuum. Our next goals are steps that improve the apparatus and test various aspects of stimulated Hawking radiation, before going all the way to spontaneous Hawking radiation."
More information: Silke Weinfurtner et al. Measurement of Stimulated Hawking Emission in an Analogue System, Physical Review Letters (2011). DOI: 10.1103/PhysRevLett.106.021302
F. Belgiorno et al. Hawking Radiation from Ultrashort Laser Pulse Filaments, Physical Review Letters (2010). DOI: 10.1103/PhysRevLett.105.203901
Ulf Leonhardt. Questioning the Recent Observation of Quantum Hawking Radiation, Annalen der Physik (2018). DOI: 10.1002/andp.201700114
Jonathan Drori et al. Observation of Stimulated Hawking Radiation in an Optical Analogue, Physical Review Letters (2019). DOI: 10.1103/PhysRevLett.122.010404 |
The mathematical formula for mass is mass = density x volume. To calculate the mass of an object, you must first know its density and its volume.
The formula "mass = density x volume" is a variation on the density formula: density = mass ÷ volume. As long as two of the variables are known, the third can be calculated by rearranging the equation. In order to calculate mass, it is necessary to divide both sides of the density equation by volume.
For example, to find the mass of a liter of milk, it is first necessary to know the density of the milk. If the density is 1.03 g/mL, then according to the equation, the mass of one liter (1,000 mL) of milk is 1,030 grams, or 1.03 kilograms.
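As a quick check of this arithmetic, here is a minimal Python sketch; the density figure is the same illustrative value used above:

```python
def mass_kg(density_g_per_ml: float, volume_ml: float) -> float:
    """mass = density x volume, with the result converted from grams to kilograms."""
    return density_g_per_ml * volume_ml / 1000.0

# One liter (1,000 mL) of milk with a density of 1.03 g/mL
print(mass_kg(1.03, 1000))  # 1.03 (kilograms)
```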
Mass is often confused with weight, although they are two different things. Mass is an unchanging quality of an object: it is a measurement of the amount of matter the object contains. (Relativistic mass, however, can increase when an object approaches light speed.) When a physicist discusses the mass of an object, he or she is talking about the number of particles in the object, not how much they weigh. Weight, on the other hand, is the force of gravity pulling on a mass. The SI unit of mass, as determined by the International System of Units, is the kilogram.
Density is how much mass there is in a given amount of a material. An object of higher density will weigh more than an object of lower density, even if the objects are the same size.
Mass is measured using balances, and the two methods for weighing mass are called direct weighing and weighing by difference. Direct weighing is when an object is placed on a balance and the mass is read. Weighing by difference is done when two measurements are taken and the difference is found between the two weights.
Weight is a measurement of the force placed on an object by gravity, whereas mass is the amount of matter an object contains. Mass is commonly denoted using m or M, and weight is denoted with W.
Mass is equal to density times volume, or m = Dv, since density is equal to mass divided by volume, or D = m/v. Mass is denoted by the symbol "m," and its SI units are kilograms.
The mass of an object is the amount of matter it contains while volume is the amount of space it takes up. Mass should not be confused with weight, which is the measure of the gravitational force on an object.
Introduction to Volatility for Kids and Teens
This video explains the concept of volatility in a simple, concise way for kids and beginners. It could be used by kids & teens to learn about volatility, or used as a money & personal finance resource by parents and teachers as part of a Financial Literacy course or K-12 curriculum.
Suitable for students from grade levels:
- Elementary School
- Middle School
- High School
The topics covered are:
- What is volatility
- Is volatility the same as risk
- What causes volatility
- Why is volatility important – and how to deal with it
- Additional thoughts and considerations
What is volatility?
Volatility describes how quickly and by how much a security or an index can fluctuate in price.
An investment that is very volatile can have large price changes over a short period of time. But an investment that is not volatile will not change in price quickly.
Volatility is expressed as a percentage, and represents price movements across different durations like daily, weekly, monthly, etc.
There are two main types of volatility: historical volatility is based on past performance, while implied volatility makes predictions about a security’s price movement in the future.
The Volatility Index (VIX), created by the Chicago Board Options Exchange (CBOE), is an index that shows the stock market’s expectations of volatility for the next 30 days.
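To make this concrete, here is a minimal Python sketch of one common way historical volatility is computed: as the standard deviation of daily returns, annualized by the square root of the number of trading days. The prices are made up purely for illustration:

```python
import numpy as np

prices = np.array([100.0, 102.0, 99.0, 101.0, 105.0, 103.0])  # hypothetical daily closes
daily_returns = prices[1:] / prices[:-1] - 1   # day-over-day percentage changes
daily_vol = daily_returns.std(ddof=1)          # sample standard deviation of returns
annual_vol = daily_vol * np.sqrt(252)          # ~252 trading days in a year
print(f"Annualized historical volatility: {annual_vol:.1%}")  # roughly 46% here
```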
Is volatility the same as risk?
Volatility and risk are not the same. Volatility is the fluctuation in the price of a security – which could go up or down, whereas risk is the possibility of losing your money.
Often, volatility contributes to risk, but it is only one of the factors that decide how risky a security is. Another key factor is the company’s fundamentals.
What causes volatility?
A number of factors can impact volatility – from political factors like government policies, war, and international trade agreements, to economic factors like inflation and unemployment numbers.
Some factors might affect volatility in specific sectors or industries, like major breakthroughs in medicine impacting pharmaceutical companies, or adverse weather impacting agriculture-dependent companies.
Why is volatility important? And how should I deal with it?
Volatility is a normal part of investing, and it cannot be avoided entirely. So instead of thinking of ways to avoid it altogether, or getting swayed by market fears and doomsday predictions, you can use it to your advantage by using tried and true investing strategies.
One such strategy is Dollar Cost Averaging, where you invest a fixed dollar amount every month irrespective of market movements.
This way, for the same dollar amount, you buy less when the price is high, but more when the price goes down – bringing down your average purchase price and taking advantage of volatility.
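Here is a minimal sketch of how dollar cost averaging plays out, using made-up monthly prices: the same $100 buys more shares when the price dips, so the average purchase price ends up below the simple average of the prices:

```python
monthly_budget = 100.0
prices = [10.0, 8.0, 12.5, 10.0]  # hypothetical share price in each month

shares = sum(monthly_budget / p for p in prices)  # shares bought each month, totaled
invested = monthly_budget * len(prices)           # total dollars invested
print(f"Average purchase price: ${invested / shares:.2f}")            # $9.88
print(f"Simple average of prices: ${sum(prices) / len(prices):.2f}")  # $10.12
```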
Additional thoughts and considerations
When making an investment, it is important to think about how volatile it is. You should not invest in something that is unnecessarily risky for your goals.
If a security is volatile but also has a high possibility for gains, then it might be good to have it in your portfolio when you are investing for something a long time into the future – like retirement.
Even if the price goes down drastically, if you don’t need the money anytime soon, you’ll not be forced to sell at a loss. If you are patient and disciplined, a sound investment is likely to bounce back, letting you make a profit in the long run.
In fact, as long as you are confident in your investment, you can buy more at a discount!
Safer, more conservative investments should be used when investing for something with a shorter investment horizon – like a down payment on a house.
Bottom line: Use volatility to your advantage, and pick your investments wisely, keeping in mind their volatility and your investment time horizon.
Download Transcript: Ideal for Use by Teachers in their Lesson Plan to Teach Kids & Teens
Podcast: What is Volatility
Fun, informative and concise episodes by a 10-year old, breaking down complex financial concepts in a way that kids and beginners can understand. Episodes cover personal finance topics like saving, investing, banking, credit cards, insurance, real estate, mortgage, retirement planning, 401k, stocks, bonds, income tax, and more, and are in the form of a conversation between a cowboy (a finance novice) and his friend, a stock broker. Making finance your friend, only at Easy Peasy Finance.
A little bit about me: I have been fascinated with the world of personal finance since I was 6! I love to read personal finance books, and keep myself updated on the latest by reading various personal finance magazines. My friends often ask me questions about finance because they find it complex and intimidating. That’s what inspired me to start my YouTube channel called Easy Peasy Finance when I was 8, and this podcast 2 years later.
Everything you need to know about volatility: What is volatility, Is volatility the same as risk, What causes volatility, Why is volatility important, How should you deal with volatility, and more. Show notes and transcript at: What is Volatility? A Simple Explanation for Kids, Teens and Beginners.
In chemistry, when a chemical reaction occurs, some molecular bonds are broken and new bonds form, producing different molecules. For example, the O–H bonds within water molecules are broken to produce hydrogen and oxygen.
Bond energy (BE) is frequently used in chemistry because the formation of chemical compounds requires bonds. Bond energy, also called the mean bond enthalpy or average bond enthalpy, measures the strength of the bonds in a molecule. Bond energy works in both directions: energy is always required to break a bond, and the same amount is released when that bond forms. Bonding plays a vital role in a substance's energy because bonded atoms sit at a lower energy than the same atoms on their own.
According to IUPAC, “bond energy is the average value of the gas-phase bond-dissociation energy at a temperature of 298.15 K for all bonds of the same type within the same chemical species.” When hydrogen atoms combine to form a molecule, a lot of energy is given out in the form of heat, which implies that the product formed is more stable than its reactants. The covalent bond in the hydrogen molecule is so strong that it takes approximately 435 kJ of energy to dissociate one mole of hydrogen molecules into hydrogen atoms. Bond dissociation energy refers to the amount of energy required to break the bond between two covalently bonded atoms. A carbon–carbon single covalent bond has a bond dissociation energy of about 347 kJ/mol. The ability of carbon to form strong C–C bonds helps explain the stability of carbon compounds. Compounds with only C–C and C–H bonds, such as methane, contain only single covalent bonds.
Bond dissociation energy is defined as the standard enthalpy change of the homolytic fission R–X → R• + X•. The BDE, denoted by D°(R–X), is usually derived from a thermochemical equation. It is therefore the energy required to disrupt or dissociate a bond in a chemical reaction, and is otherwise known as bond disruption energy or binding energy.
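One common use of these average values is estimating a reaction enthalpy: roughly, the energy needed to break the reactant bonds minus the energy released in forming the product bonds. A worked sketch using typical textbook average bond energies (the exact figures vary slightly between sources):

```latex
% \Delta H \approx \sum \mathrm{BE}(\text{bonds broken}) - \sum \mathrm{BE}(\text{bonds formed})
%
% For H2 + Cl2 -> 2 HCl, with H-H ~ 436, Cl-Cl ~ 243, H-Cl ~ 432 kJ/mol:
\Delta H \approx (436 + 243) - 2(432) = -185\ \mathrm{kJ/mol}
```

The negative sign means the reaction is exothermic: more energy is released forming the two H–Cl bonds than is consumed breaking the H–H and Cl–Cl bonds.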
The energy involved in dissociation
Bond energy is the basis for explaining how strong or weak a bond is. The stronger the ionic bond, the greater the amount of energy required to break it, and, equivalently, the more energy released when the bond forms. Ionic compounds are crystalline and very tough in their structure because of the nature of the bonds they form. When a bond is very strong, it needs a lot of energy to break. However, almost all ionic bonds can be broken or dissolved despite their high melting points. Most ionic solids dissolve readily in water, and their solutions are very good conductors of electricity. The tables below give information about bond energies and bond dissociation energies of common atoms.
[Table: bond enthalpies / bond energy values]
Factors affecting the ionic bond energy
The electronegativity of the two atoms bonding together affects the ionic bond energy: the bond is stronger when the electronegativity difference between the atoms is larger. The strongest polar covalent bond is the carbon–fluorine bond. Ionic bonds are mostly stronger than covalent bonds; judging by melting points, ionic compounds have high melting points while covalent compounds have low melting points.
Wildlife Conservation and the Natural Environment: A Comprehensive Look
Habitat destruction and the ever-increasing threat to wildlife populations have become critical global concerns in recent years. The delicate balance between human development and preserving the natural environment has been a topic of debate among policymakers, scientists, and environmentalists alike. One example that exemplifies this issue is the case study of the Amazon rainforest, which spans across several South American countries and harbors an astounding array of plant and animal species. With deforestation rates rising exponentially due to logging, agriculture expansion, and infrastructure development, efforts towards wildlife conservation are urgently needed to mitigate further ecological damage.
This article aims to provide a comprehensive examination of wildlife conservation strategies in relation to the preservation of natural environments worldwide. By exploring various approaches used by experts and organizations, it seeks to shed light on effective methods for mitigating habitat loss while protecting vulnerable species from extinction. Additionally, it will examine the intricate connections between biodiversity conservation, ecosystem functions, and human well-being. Understanding these interdependencies is crucial for formulating holistic solutions that address both environmental sustainability and socio-economic development. Ultimately, this analysis serves as a call-to-action for individuals and governments alike to prioritize long-term environmental protection through collaborative efforts driven by scientific research and informed decision-making processes.
The Impact of Human Activities on Biodiversity
Human activities have had a profound and detrimental impact on global biodiversity. One striking example is the case study of the Amazon rainforest, often referred to as the “lungs of the Earth.” This vast ecosystem, covering approximately 5.5 million square kilometers, plays a crucial role in regulating climate patterns and harboring an unparalleled diversity of plant and animal species. However, deforestation caused by human actions poses a significant threat to this invaluable natural resource.
One cannot underestimate the gravity of the situation when considering the consequences of deforestation. To highlight its devastating effects, here are four key points that illustrate the magnitude of damage inflicted upon our planet:
- Loss of habitat: Deforestation results in the destruction and fragmentation of wildlife habitats. Countless species rely on specific forest ecosystems for their survival, making them highly vulnerable to displacement or extinction.
- Disruption of ecological balance: Removing large areas of forests disrupts intricate ecological interactions between different organisms. As a result, vital processes such as pollination, seed dispersal, and nutrient cycling become compromised.
- Climate change exacerbation: Trees play a critical role in mitigating climate change by absorbing carbon dioxide from the atmosphere through photosynthesis. When trees are cleared at an alarming rate, this valuable carbon sink diminishes significantly, leading to increased greenhouse gas emissions and further contributing to global warming.
- Loss of cultural heritage: Indigenous communities residing within these forested regions possess rich ancestral knowledge regarding medicinal plants and sustainable living practices. Their displacement due to deforestation not only threatens their way of life but also leads to a loss in traditional wisdom that has been passed down for generations.
To fully comprehend the extent of devastation caused by human-induced deforestation on wildlife populations around the world, it is essential to examine specific instances where its impacts have been evident.
In light of these grim realities brought about by deforestation’s effect on biodiversity, it becomes evident that addressing this issue is of utmost importance. The subsequent section delves into the devastating effects of deforestation on wildlife, building upon the knowledge gained from understanding human impacts on biodiversity.
The Devastating Effects of Deforestation on Wildlife
As we delve deeper into the intricate relationship between human activities and biodiversity, it becomes evident that deforestation poses a significant threat to wildlife populations. This section will explore the devastating effects of deforestation on wildlife, focusing on the loss of habitat, disruption of ecosystems, and increased vulnerability of species.
Loss of Habitat:
One concrete example illustrating the impact of deforestation on wildlife is the case study conducted in the Amazon rainforest. The accelerated rate of deforestation in this region has resulted in widespread Habitat Destruction for countless animal species. For instance, the jaguar population has experienced a sharp decline due to their dependence on forested areas for hunting and shelter. With large swathes of their natural environment cleared for agricultural purposes or logging, these majestic creatures are being forced into smaller fragmented habitats, increasing competition for resources and diminishing their chances of survival.
Disruption of Ecosystems:
Deforestation disrupts delicate ecological balance by altering critical interdependencies among various organisms within an ecosystem. When forests are cleared, not only do numerous tree species vanish but also the associated plants, insects, birds, and mammals that rely on them for food or shelter. This leads to cascading effects throughout the food chain as predator-prey relationships become imbalanced. The extinction or displacement of certain key species can result in overpopulation of others or even cause entire trophic levels to collapse.
Increased Vulnerability:
The consequences of deforestation extend beyond immediate loss; they have long-term repercussions for many vulnerable species. Without adequate protection from trees against extreme weather conditions such as heatwaves or heavy rains, animals face heightened exposure to adverse climate elements. Moreover, without dense vegetation cover acting as a barrier against predators or poachers, animals become more susceptible to predation and illegal activities like hunting or capture for the exotic pet trade.
- Irreversible destruction: Once lost through deforestation, pristine habitats and unique ecosystems cannot be easily restored, leading to permanent loss of biodiversity.
- Tragic decline: Countless species face the threat of extinction as their habitats are relentlessly destroyed.
- Ecological imbalance: The disruption caused by deforestation can lead to a domino effect throughout ecosystems, jeopardizing the survival of multiple species.
- Ethical responsibility: We have a moral obligation to protect and preserve Earth’s natural heritage for future generations.
|Effects of Deforestation on Wildlife|
|Loss of Habitat||Disruption of Ecosystems||Increased Vulnerability||Irreversible Destruction|
Understanding the devastating consequences of deforestation provides critical insights into the urgency with which we must address this issue. In our next section, we will explore another pressing concern in wildlife conservation – the global trade in endangered species: a growing threat that exacerbates biodiversity loss on a global scale.
The Global Trade in Endangered Species: A Growing Threat
Having explored the devastating effects of deforestation on wildlife, it is vital to also address another pressing concern threatening biodiversity – the global trade in endangered species. This illicit industry poses significant challenges to wildlife conservation efforts worldwide. By examining its underlying causes, impacts, and potential solutions, we can gain a deeper understanding of this growing threat.
The global trade in endangered species encompasses various activities such as poaching, smuggling, and selling rare animals or their parts for profit. To illustrate the gravity of this issue, let us consider a hypothetical case study involving rhinoceros horn trafficking. In recent years, demand for rhino horns has escalated due to misguided beliefs regarding their medicinal properties. As a result, countless rhinos have fallen victim to illegal hunting practices aimed at satisfying this market demand. Such cases demonstrate how lucrative opportunities drive individuals towards engaging in harmful actions that directly contribute to the decline of vulnerable species.
To comprehend the complex dynamics surrounding this issue fully, several factors must be taken into account:
- Economic incentives fueling illegal trade
- Insufficient law enforcement and penalties
- Demand-driven by cultural beliefs or status symbols
- Weak international collaboration among governments and organizations
These elements intertwine to create an intricate web that perpetuates the global trade in endangered species and exacerbates its negative repercussions on wildlife populations worldwide.
|Consequences of Global Trade in Endangered Species|
|Loss of biodiversity|
|Irreparable damage to ecosystems|
|Disruption of ecological balance|
|Potential extinction risks|
In concrete terms, these consequences include:
- Innocent lives lost due to ruthless hunting practices.
- Destruction of delicate habitats and fragile ecosystems.
- Irreversible damage to the intricate web of life on Earth.
- Species pushed to the brink of extinction, with long-term consequences.
In light of these distressing realities, it is imperative for governments, organizations, and individuals alike to collaborate actively in combating this trade. Strengthening law enforcement efforts through stricter penalties and increased surveillance can help deter potential offenders. Additionally, raising awareness about the importance of wildlife conservation, supporting sustainable livelihoods for local communities, and promoting responsible tourism practices can contribute to reducing demand for illegal animal products.
Understanding the grave implications associated with the global trade in endangered species lays the groundwork for exploring another significant ecological concern – the consequences of invasive species. By examining how non-native organisms disrupt natural habitats and native biodiversity, we can gain valuable insights into developing effective management strategies that preserve our delicate ecosystems without causing further harm.
The Ecological Consequences of Invasive Species
In the interconnected web of ecosystems, the introduction and spread of invasive species can have far-reaching consequences. These non-native organisms can disrupt delicate ecological balances and pose significant threats to native flora and fauna. To illustrate this point, let us consider a hypothetical scenario where an introduced plant species rapidly spreads throughout a pristine forest ecosystem.
The Ecological Impact:
Imagine a dense forest teeming with diverse plant life, providing food and habitat for numerous animal species. Now, picture the sudden invasion of an aggressive vine that outcompetes native plants for resources such as sunlight, water, and nutrients. This invader not only overpowers other vegetation but also alters soil composition, leading to decreased biodiversity in the affected area. As a result, many indigenous plant species struggle to survive or disappear altogether. Consequently, animals that rely on these plants for sustenance face scarcity of their primary food sources.
This case study highlights some broader ecological consequences associated with invasive species:
- Disruption of Food Webs: Invasive predators can upset natural predator-prey relationships by preying upon or outcompeting native species.
- Alteration of Habitats: Invasive plants often modify habitats by monopolizing resources like space and light availability.
- Decline in Native Biodiversity: Competition from invasive species can lead to reduced populations or even extinction of native flora and fauna.
- Increased Vulnerability to Other Disturbances: Ecosystems already weakened by invasive species become more susceptible to additional stressors such as climate change or pollution.
|Impact of Invasive Species||Example Consequence|
|Altered nutrient cycling||Nutrient deficiencies|
|Habitat degradation||Loss of nesting sites|
|Displacement of native species||Local extinctions|
Conclusion & Transition:
Understanding the ecological repercussions caused by invasive species is crucial for effective conservation efforts. By comprehending the consequences of these invasions, we can develop strategies to prevent their introduction or mitigate their impact. However, invasive species are not the only threats faced by wildlife and natural ecosystems. The subsequent section will delve into another pressing concern: the overexploitation of natural resources and its dangers to wildlife.
The Overexploitation of Natural Resources: A Danger to Wildlife
Having explored the ecological consequences of invasive species, it is essential to delve into another pressing issue threatening wildlife populations – the overexploitation of natural resources. By examining this topic in detail, we can gain a comprehensive understanding of the challenges faced by conservation efforts.
To illustrate the detrimental effects of overexploitation, let us consider a hypothetical case study involving marine fisheries. Imagine a coastal region where fishing practices have become increasingly intensified due to growing demand for seafood. As fishermen employ unsustainable methods such as bottom trawling or longlining without proper regulations, fish stocks rapidly decline. This scenario highlights how unchecked exploitation can disrupt ecosystems and jeopardize the sustainability of vital wildlife populations.
This rampant depletion of natural resources has far-reaching implications that extend beyond individual species loss. To comprehend its broader impact on biodiversity and ecosystem health, consider the following consequences:
- Irreversible damage caused by habitat destruction
- Loss of crucial food sources for predators
- Disruption in ecological balance leading to cascading effects
- Diminished cultural heritage tied to traditional livelihoods
The table below showcases examples highlighting these consequences:
|Consequence||Impact|
|Habitat Destruction||Fragmented habitats threaten survival|
|Decline in Predator Populations||Imbalances disrupt trophic dynamics|
|Cascading Effects||Chain reactions affect entire ecosystems|
|Cultural Heritage||Traditional ways of life are at risk|
Understanding these profound repercussions illustrates the urgent need for effective resource management strategies and sustainable harvesting practices. Governments, communities, and individuals must collaborate to develop and implement measures that mitigate the negative impacts of overexploitation.
In light of these challenges, it becomes increasingly clear that addressing habitat loss is crucial for conserving wildlife populations. The subsequent section will explore the role played by habitat destruction in diminishing biodiversity and offer insights into potential conservation approaches. By understanding this critical aspect, we can work towards creating a more harmonious coexistence between human activities and the natural environment.
The Role of Habitat Loss in Declining Wildlife Populations
In the previous section, we discussed how the overexploitation of natural resources poses a significant threat to wildlife populations. Now, let us explore another crucial factor contributing to the decline in wildlife numbers: habitat loss. To illustrate this, consider the hypothetical case study of the Amazon rainforest.
The Amazon rainforest is home to an incredibly diverse array of species, including jaguars, macaws, and tapirs. However, due to increased deforestation for agricultural purposes and logging operations, vast portions of this vital ecosystem have been destroyed. This loss of habitat has had devastating consequences for countless species that rely on the forest’s resources for survival.
Habitat loss directly impacts wildlife populations in several ways:
- Displacement: As their habitats shrink or disappear entirely, animals are forced to relocate or adapt to new environments. This often leads to increased competition for limited resources and can result in population declines.
- Fragmentation: The destruction of large contiguous areas disrupts ecological connectivity and isolates smaller pockets of habitat. This fragmentation hinders migration patterns and gene flow among populations.
- Decreased food availability: Habitat loss means less access to essential food sources for many species, leading to malnutrition and reduced reproductive success.
- Increased vulnerability: Animals living in fragmented habitats become more susceptible to predation and disease outbreaks as they lack adequate protection from predators or immunological support from healthy ecosystems.
To grasp the magnitude of habitat loss globally, consider the following table highlighting some alarming statistics:
|Region||Area Deforested (sq km)||Species Affected|
|Amazon Rainforest||7 million||Countless|
|Great Barrier Reef||N/A||Coral Reefs|
|African Savanna||3 million||Elephants, Lions, Giraffes|
These numbers emphasize the urgent need for action to protect and restore habitats worldwide. Efforts must be made to preserve intact ecosystems and create corridors that allow wildlife populations to move freely between fragmented areas.
In the subsequent section, we will delve into yet another significant concern: illegal wildlife trafficking—a multibillion-dollar industry that further threatens vulnerable species across the globe. By understanding its impacts, we can explore strategies to combat this illicit trade and safeguard our planet’s biodiversity.
Illegal Wildlife Trafficking: A Multibillion-Dollar Industry
Continuing our exploration of the threats faced by wildlife, we now delve into the disturbing realm of illegal wildlife trafficking. This sinister practice has emerged as a multibillion-dollar industry, fueling immense profits for criminal networks while pushing numerous species closer to extinction. To better understand its impact and implications, let us consider an example scenario:
In Southeast Asia, the Sumatran tiger population is under severe threat due to rampant poaching driven by the demand for their body parts in traditional medicine markets. These magnificent creatures once roamed freely across vast stretches of rainforests but are now reduced to mere fragments of their former habitat. The combination of deforestation and illicit trade poses a grave risk not only to the survival of these apex predators but also to the delicate ecological balance they help maintain.
One crucial aspect contributing to the success of this illicit trade is its intricate web of connections spanning multiple countries and continents. Here are some key factors involved in illegal wildlife trafficking:
- High Demand: Driven by cultural beliefs, fashion trends, exotic pet ownership desires, and alternative medicines.
- Inadequate Law Enforcement: Insufficient resources allocated towards combating smuggling operations.
- Corruption Networks: Collaborations between criminals and public officials weaken law enforcement efforts.
- Global Market Accessibility: Technological advancements facilitate online trading platforms that transcend national boundaries.
The magnitude of this issue becomes even more apparent when examining the financial gains associated with it. The following table highlights estimated annual revenues generated from various illegally traded wildlife products:
|Wildlife Product||Estimated Annual Revenue|
|Rhino Horn||$250 million|
|Tiger Parts||$200 million|
This staggering profitability fuels further criminal activities while exacerbating biodiversity loss and ecological imbalances. To address this crisis effectively, international cooperation, stringent legislation, increased law enforcement efforts, and public awareness campaigns are imperative.
As we have witnessed the devastating consequences of Illegal Wildlife Trafficking, it becomes evident that protecting biodiversity requires a multi-pronged approach. Consequently, our investigation now turns towards understanding another significant threat to natural ecosystems: the spread of invasive species.
The Spread of Invasive Species: Disrupting Ecosystems
In the realm of wildlife conservation, one cannot overlook the profound impact of invasive species on ecosystems worldwide. These non-native organisms, introduced either purposefully or accidentally by human activities, often outcompete native species and disrupt delicate ecological balances. To illustrate this issue, let us consider a hypothetical scenario where an exotic plant known as Purple Loosestrife (Lythrum salicaria) is inadvertently brought to a wetland area.
The invasion of Purple Loosestrife can have far-reaching consequences for both flora and fauna in the affected ecosystem. Its rapid growth and ability to choke waterways diminishes the availability of open space for native plants like cattails and sedges. As a result, these indigenous species struggle to find suitable habitats for nesting birds and shelter for aquatic organisms such as tadpoles and small fish. This disruption reverberates throughout the food chain, impacting various trophic levels within the ecosystem.
To further comprehend the problematic nature of invasive species, we must examine their detrimental effects:
- Increased competition for resources.
- Alteration of nutrient cycles.
- Loss of genetic diversity.
- Disruption of pollination networks.
These impacts are not isolated instances; they occur across diverse ecosystems globally. Understanding these effects allows us to grasp the urgency with which action needs to be taken against invasive species proliferation.
|Effects of Invasive Species|
|Habitat degradation|
|Impaired ecosystem function|
As we delve deeper into wildlife conservation efforts, it becomes increasingly evident that addressing invasive species’ spread is paramount for safeguarding our natural heritage. By implementing effective management strategies such as early detection systems, quarantine protocols, and targeted eradication methods, we can strive towards mitigating potential damage caused by invasive species.
Transitioning seamlessly into the subsequent section about “The Unsustainable Harvesting of Wildlife for Commercial Gain,” we must now turn our attention to yet another significant threat facing wildlife conservation efforts worldwide.
The Unsustainable Harvesting of Wildlife for Commercial Gain
Disruptive invasive species are not the only threat to our natural ecosystems; another pressing issue is the unsustainable harvesting of wildlife for commercial gain. This detrimental practice poses a significant risk to biodiversity and ecological balance, as it disrupts delicate food chains and depletes populations of vulnerable species. To shed light on this matter, let us delve into an example that highlights the consequences of such unsustainable practices.
Example: Consider a fictional scenario in Southeast Asia where the demand for exotic animal products has skyrocketed in recent years. As a result, poachers have intensified their efforts to capture and kill endangered animals like tigers, pangolins, and elephants due to the lucrative trade opportunities. These iconic creatures face imminent danger as they fall victim to illegal hunting driven by profit motives rather than conservation considerations.
To fully comprehend the gravity of unsustainable wildlife exploitation, we must acknowledge its far-reaching implications. Here are some key aspects worth considering:
- Loss of Biodiversity: When certain species become targets for commercial exploitation, their populations decline rapidly or even face extinction. Such loss disrupts intricate ecological relationships within habitats, negatively impacting other plant and animal communities.
- Ecological Imbalance: Removing specific species from an ecosystem can lead to imbalances in predator-prey dynamics and alter natural processes such as pollination and seed dispersal. These disruptions have cascading effects throughout entire ecosystems.
- Economic Consequences: While there may be short-term economic gains associated with exploiting wildlife commercially, the long-term costs outweigh them significantly. The depletion of valuable resources ultimately undermines potential ecotourism revenue streams and sustainable livelihood options linked to intact ecosystems.
- Ethical Concerns: Unregulated harvesting often involves cruel methods that cause immense suffering to individual animals. This raises ethical questions about our responsibility towards other living beings sharing this planet.
To illustrate further how damaging these practices can be, consider the following table outlining some examples of wildlife exploitation and its consequences:
|Practice||Consequences|
|Ivory trade||Elephant poaching for ivory tusks threatens elephant populations and disrupts their social structures.|
|Shark finning||Unsustainable fishing practices targeting sharks endanger various shark species and disturb marine food webs.|
|Bushmeat hunting||Overhunting of wild animals in tropical forests endangers numerous animal species and contributes to zoonotic disease transmission.|
|Traditional medicine||Excessive harvesting of certain plants and animals for medicinal purposes depletes natural resources and undermines ecosystem health.|
In light of these alarming realities, it is crucial that we address the unsustainable harvesting of wildlife urgently. By implementing effective conservation strategies, we can mitigate the destructive impact on ecosystems while ensuring the sustainable use of our planet’s abundant biodiversity.
Understanding the dire need for immediate action, let us now explore strategies aimed at conserving biodiversity effectively in order to protect vulnerable wildlife from further harm.
Conserving Biodiversity: Strategies for Effective Wildlife Protection
Building upon the discussion on the detrimental consequences of unsustainable wildlife harvesting, this section delves into strategies that can effectively conserve biodiversity and protect wildlife populations. By implementing these approaches, we can work towards safeguarding our natural environment for future generations.
To illustrate the effectiveness of conservation strategies, let us consider a hypothetical case study involving an endangered species—the Malabar Giant Squirrel (Ratufa indica). Found in the Western Ghats region of India, this vibrant squirrel faces numerous threats due to habitat loss and illegal hunting. To ensure its survival, it requires comprehensive protective measures aimed at conserving both its habitat and population.
To achieve successful wildlife protection, it is imperative to adopt a multi-faceted approach. The following bullet points highlight key strategies that have proven effective:
Strengthening Legal Frameworks:
- Enact stringent laws against poaching and trafficking.
- Establish protected areas with clear boundaries and regulations.
- Implement penalties that serve as deterrents for engaging in illegal activities.
Promoting Community Engagement:
- Foster collaboration between local communities and conservation organizations.
- Educate residents about sustainable practices and the value of preserving biodiversity.
- Encourage community-led initiatives such as eco-tourism or involvement in monitoring programs.
Enhancing International Cooperation:
- Facilitate cooperation among countries to combat cross-border wildlife crimes.
- Share knowledge and best practices through international agreements like CITES (Convention on International Trade in Endangered Species).
- Support joint efforts in research, monitoring, and rescue operations across borders.
The table below provides a comparative analysis of three popular conservation methods based on their impact on wildlife protection:
|Method||Strength||Limitation||Example|
|Protected Areas||Provides a safe haven for wildlife||Limited land availability||Serengeti National Park, Tanzania|
|Wildlife Corridors||Facilitates species movement and gene flow||Requires extensive planning||Banff National Park, Canada|
|Community-based Conservation||Promotes local stewardship involvement||Dependent on community engagement||Annapurna Conservation Area, Nepal|
By combining these strategies and tailoring their implementation to specific contexts, governments, organizations, and individuals can collectively contribute towards effective wildlife protection. Through sustained efforts aimed at conserving biodiversity, we not only protect individual species but also safeguard the intricate web of life that supports our ecosystems.
Incorporating comprehensive conservation plans while addressing the challenges posed by illegal activities and habitat loss will ensure a sustainable future for our natural environment. By valuing our wildlife and taking proactive measures to preserve their habitats, we can create a harmonious coexistence between humans and nature—an achievement essential for the long-term well-being of our planet. |
The philosopher Socrates is something of an enigma.
Condemned to death in 399 BC and leaving no written works, we rely extensively on the writings of his pupil, philosophical heavyweight Plato (Honderich, 2005).
Perhaps Socrates’ most significant legacy is his contribution to the art of conversation, known as Socratic questioning. Rather than the teacher filling the mind of the student, both are responsible for pushing the dialogue forward and uncovering truths (Raphael & Monk, 2003).
And yet, what could a 2,500-year-old approach to inquiry add to the toolkit of the teacher, psychotherapist, and coach?
Well, it turns out, quite a lot.
In this article, we explore the definition of Socratic questioning and how we apply it in education, Cognitive Behavioral Therapy, and coaching. We then identify techniques, examples of good questions, and exercises that promote better, more productive dialogue.
Before you read on, we thought you might like to download our three Positive Psychology Exercises for free. These science-based exercises explore fundamental aspects of positive psychology, including strengths, values, and self-compassion, and will give you the tools to enhance the wellbeing of your clients, students, or employees.
This Article Contains:
- Socratic Questioning Defined
- What Is Socratic Questioning in CBT and Therapy?
- How to Do Socratic Questioning
- 15 Examples of Socratic Questioning
- Using Socratic Questioning in Coaching
- Applications in the Classroom: 2 Examples
- 3 Helpful Techniques
- 4 Exercises and Worksheets for Your Sessions
- Best Books on the Topic
- A Take-Home Message
Socratic Questioning Defined
Many of us fail to recognize questioning as a skill. And yet, whether in education or therapy, vague, purposeless questions waste time and fail to elicit useful information (Neenan, 2008).
The Socratic method, often described as the cornerstone of Cognitive Behavioral Therapy (CBT), solves this inadequacy by asking a series of focused, open-ended questions that encourage reflection (Clark & Egan, 2015). By surfacing knowledge that was previously outside of our awareness, the technique produces insightful perspectives and helps identify positive actions.
“I know you won’t believe me, but the highest form of human excellence is to question oneself and others.”
Socratic questioning involves a disciplined and thoughtful dialogue between two or more people. It is widely used in teaching and counseling to expose and unravel deeply held values and beliefs that frame and support what we think and say.
By using a series of focused yet open questions, we can unpack our beliefs and those of others.
In education, we can remove, albeit temporarily, the idea of the ‘sage on the stage.’ Instead, the teacher plays dumb, acting as though ignorant of the subject. The student, rather than remaining passive, actively helps push the dialogue forward.
Rather than teaching in the conventional sense, there is no lesson plan and often no pre-defined goal; the dialogue can take its own path, remaining open-ended between teacher and student.
The Socratic method is used in coaching, with, or without, a clear goal in mind, to probe our deepest thoughts. A predetermined goal is useful when there are time pressures but can leave the client feeling that the coach has their own agenda or nothing to learn from the discussion (Neenan, 2008).
In guided discovery, the absence of a clear goal leads to questions such as “can you be made to feel inferior by someone else’s laughter?” asked with genuine curiosity. Here, the coach gently encourages the client to look at the bigger picture and see other options for tackling an issue.
Ultimately, both approaches have the goal of changing minds. One is coach led, and the other is client led; the coach or therapist may need to move on a continuum between the two.
What Is Socratic Questioning in CBT and Therapy?
Socratic questioning is critical to successful Cognitive Behavioral Therapy (Clark & Egan, 2015). Indeed, in CBT, where the focus is on modifying thinking to facilitate emotional and behavioral change, the technique is recognized as helping clients define problems, identify the impact of their beliefs and thoughts, and examine the meaning of events (Beck & Dozois, 2011).
The use of the Socratic method by CBT therapists helps clients become aware of and modify processes that perpetuate their difficulties. The subsequent shift in perspective and the accompanying reevaluation of information and thoughts can be hugely beneficial.
It replaces the didactic, or teaching-based, approach and promotes the value of reflective questioning. Indeed, several controlled trials have demonstrated its effectiveness in dealing with a wide variety of psychological disorders.
While there is no universally accepted definition of the Socratic method in CBT, it can be seen as an umbrella term for using questioning to “clarify meaning, elicit emotion and consequences, as well as to gradually create insight or explore alternative action” (James, Morse, & Howarth, 2010).
It is important to note that the approach, when used in CBT, must remain non-confrontational and instead guide discovery, in an open, interested manner, leading to enlightenment and insight (Clark & Egan, 2015).
You will find that Socratic questions usually have the following attributes (modified from Neenan, 2008):
|Attributes of Socratic questions||Description|
|Concise, directed, and clear||The attention remains on the client and should avoid jargon and reduce confusion.|
|Open, yet with purpose||The client is invited to actively engage, with a clear rationale behind each question.|
|Focused but tentative||The focus is on the issue under discussion, yet does not assume the client has the answer.|
|Neutral||The questioning does not suggest there is a correct or preferred answer.|
Above all else, it is essential to remember that Socratic questioning should be confusion-free.
How to Do Socratic Questioning
A fruitful dialogue using Socratic questioning is a shared one, between teachers and students or therapists and clients.
Each participant must actively participate and take responsibility for moving the discussion forward.
The best environment, according to professor Rob Reich, is one of ‘productive discomfort,’ but in the absence of fear and panic (Reis, 2003).
There should be no opponents and no one playing ‘devil’s advocate’ or testing the other.
Instead, it is best to remain open minded and prepared to both listen and learn.
Some guidance is suggested to perform Socratic questioning effectively.
|Advice for the counselor or teacher|
|Plan significant questions to inform an overall structure and direction without being too prescriptive.|
|Allow time for the student or client to respond to the questions without feeling hurried.|
|Stimulate the discussion with probing questions that follow the responses given.|
|Invite elaboration and facilitate self-discovery through questioning.|
|Keep the dialogue focused, specific, and clearly worded.|
|Regularly summarize what has been said.|
|Pose open questions rather than yes/no questions.|
|Avoid or re-word questions that are vague, ambiguous, or beyond the level of the listener’s understanding.|
For a student or client, it is useful to understand what is expected.
|Advice for the student|
|Participate actively and thoughtfully.|
|Answer clearly and succinctly.|
|Address the whole class (where appropriate).|
To be the ideal companion for Socratic questioning, you need to be genuinely curious, willing to take the time and energy to unpack beliefs, and able to logically and dispassionately review contradictions and inconsistencies.
15 Examples of Socratic Questioning
When used effectively, Socratic questioning is a compelling technique for exploring issues, ideas, emotions, and thoughts. It allows misconceptions to be addressed and analyzed at a deeper level than routine questioning.
You will need to use several types of questions to engage and elicit a detailed understanding.
|Question type||Examples|
|Clarification||What do you mean when you say X? Could you explain that point further? Can you provide an example?|
|Challenging assumptions||Is there a different point of view? What assumptions are we making here? Are you saying that… ?|
|Evidence and reasoning||Can you provide an example that supports what you are saying? Can we validate that evidence? Do we have all the information we need?|
|Alternative viewpoints||Are there alternative viewpoints? How could someone else respond, and why?|
|Implications and consequences||How would this affect someone? What are the long-term implications of this?|
|Challenging the question||What do you think was important about that question? What would have been a better question to ask?|
Students and clients should be encouraged to use the technique on themselves to extend and reinforce the effect of Socratic questioning and promote more profound levels of understanding.
Using Socratic Questioning in Coaching
Coaching is “the art of facilitating the performance, learning, and development of another” (Downey, 2003). To reach a deeper understanding of a client’s goals, core values, and impediments to change, a coach must elicit information that is relevant, insightful, and ultimately valuable.
And yet, not all questions are equally useful in coaching.
Vague or aimless questions are costly in terms of time and will erode the client’s confidence in the coaching process (Neenan, 2008).
Asking open-ended questions helps clients reflect and generate knowledge of which they may have previously been unaware. Such insights result in clients reaching new or more balanced perspectives and identifying actions to overcome difficulties.
Coaches should avoid becoming ‘stuck’ entirely in the Socratic mode. Complete reliance on Socratic questions will lead to robotic and predictable sessions. Indeed, at times, the coach may require closed questions to push a point and offer some direction (Neenan, 2008).
Applications in the Classroom: 2 Examples
Socratic questioning requires the student to identify and defend their position regarding their thoughts and beliefs.
Rather than simply reciting facts, the student is asked to account for themselves, including the motivations and biases upon which their views are based.
Discussion is less about facts or what others think about the facts, and more about what the student concludes about them. The underlying beliefs of each participant in the conversation are under review rather than abstract propositions.
And according to science, it works very well. Research has confirmed that Socratic questioning provides students with positive support in enhancing critical thinking skills (Chew, Lin, & Chen, 2019).
1. Socratic circles
Socratic circles can be particularly useful for gaining an in-depth understanding of a specific text, or for examining the questioning technique itself and the abilities of the group using it:
- Students are asked to read a chosen text or passage.
- Guidance is given to analyze it and take notes.
- Students are arranged in two circles – an inner one and an outer one.
- The inner circle is told to read and discuss the text with one another for the next 10 minutes.
- Meanwhile, the outer circle is told to remain silent and observe the inner circle’s discussion.
- Once completed, the outer circle is given a further 10 minutes to evaluate the inner circle’s dialogue and provide feedback.
- The inner circle listens and takes notes.
- Later the roles of the inner and outer circles are reversed.
Observing the Socratic method can provide a valuable opportunity to learn about the process of questioning.
2. Socratic seminars
Socratic seminars are the true embodiment of Socrates’ belief in the power of good questioning.
- The teacher uses Socratic questions to engage discussion around a targeted learning goal, often a text that invites authentic inquiry.
- Guidelines are provided to the students to agree to fair participation, including example questions and behaviors for thinking, interacting, and listening within the group.
- Learning is promoted by encouraging critical analysis and reasoning to find deep answers to questions.
- The teacher may define some initial open-ended questions but does not adopt the role of a leader.
- Once the seminar is over, the group reviews the techniques and its effectiveness at using them, feeding the lessons learned into future seminars.
Learning to use the Socratic method effectively takes time and should be considered a necessary part of the group's overall journey.
3 Helpful Techniques
1. The five Ws
At times we all need pointers regarding the questions to ask. The misleadingly named five Ws – who, what, when, where, why, and how – are widely used for basic information gathering, from journalism to policing.
The five Ws (and an H):

- Who is involved?
- What happened?
- When did it happen?
- Where did it happen?
- Why did it happen?
- How did it happen?
The five Ws (and an H) provide a useful set of open questions, inviting the listener to answer and elaborate on the facts.
2. Socratic method steps
Simply stated, Socratic questioning follows the steps below.
- Understand the belief.
Ask the person to state clearly their belief/argument.
- Sum up the person’s argument.
Play back what they said to clarify your understanding of their position.
- Ask for evidence.
Ask open questions to elicit further knowledge and uncover assumptions, misconceptions, inconsistencies, and contradictions.
- Upon what assumption is this belief based?
- What evidence is there to support this argument?
- Challenge their assumptions.
If contradictions, inconsistencies, exceptions, or counterexamples are identified, then ask the person to either disregard the belief or restate it more precisely.
- Repeat the process, if required.
The process is repeated until both parties accept the restated belief.
The order may not always proceed as above. However, the steps provide an insight into how the questioning could proceed. Repeat the process to drill down into the core of an issue, thought, or belief.
3. Best friend role-play
Ask the client to talk to you as though they were discussing similar experiences with a friend (or someone else they care about).
People are often better at arguing against their negative thinking when they are talking to someone they care about.
For example, “Your best friend tells you that they are upset by a difficult conversation or situation they find themselves in. What would you tell them? Talk to me as though I am that person.”
4 Exercises and Worksheets for Your Sessions
1. Socratic question types
The Socratic method relies on a variety of question types to provide the most complete and correct information for exploring issues, ideas, emotions, and thoughts.
Use a mixture of the following question types for the most successful engagement.
| Questions regarding an initial question or issue | Answers |
|---|---|
| What is significant about this question? | |
| Is this a straightforward question to answer? | |
| Why do you think that? | |
| Are there any assumptions we can take from this question? | |
| Is there another important question that follows on from this one? | |

| Questions about assumptions | Answers |
|---|---|
| Why would someone assume that X? | |
| What are we assuming here? | |
| Is there a different assumption here? | |
| Are you saying that X? | |

| Questions of viewpoint | Answers |
|---|---|
| Are there alternative views? | |
| What might someone who thought X think? | |
| How would someone else respond, and why? | |

| Questions of clarification | Answers |
|---|---|
| What do you mean when you say X? | |
| Can you rephrase and explain that differently? | |
| What is the main issue here? | |
| Can you expand that point further? | |

| Questions of implication and consequence | Answers |
|---|---|
| How would this affect someone? | |
| What are the long-term implications of this? | |

| Questions of evidence and reasoning | Answers |
|---|---|
| Can you provide an example? | |
| Why do you think this is the case? | |
| Is there any other information needed? | |
| What led you to that belief? | |
| Are there any reasons to doubt the evidence? | |

| Questions regarding origin | Answers |
|---|---|
| Have you heard this somewhere? | |
| Have you always felt this way? | |
| What caused you to feel that way? | |
2. Cognitive restructuring
Ask readers to consider and record answers to several Socratic questions to help challenge their irrational thoughts.
3. Life coaching questions
Refer to the 100 Most Powerful Life Coaching Questions on our blog for in-depth examples of open-ended questions for use as a coach.
4. Art of Socratic questioning checklist
While observing others leading Socratic discussions, use this questioning checklist to capture thoughts and provide feedback.
5 Best Books on the Topic
To learn more about Socratic questioning and good questioning in general, check out these five books available on Amazon:
- The Socratic Method of Psychotherapy – James Overholser (Amazon)
- The Thinker’s Guide to Socratic Questioning – Richard Paul and Linda Elder (Amazon)
- Thinking Through Quality Questioning: Deepening Student Engagement – Jackie A. Walsh and Elizabeth D. Sattes (Amazon)
- Techniques for Coaching and Mentoring – Natalie Lancer, David Clutterbuck, and David Megginson (Amazon)
- The Art of Interactive Teaching: Listening, Responding, Questioning – Selma Wassermann (Amazon)
A Take-Home Message
Socratic questioning provides a potent method for examining ideas logically and determining their validity.
Used successfully, it challenges (possibly incorrect) assumptions and misunderstandings, allowing you to revisit and revise what you think and say.
However, like any tool, it is only as good as the person who uses it.
Socratic questioning requires an absence of ego and a level playing field for all who take part. If you are willing to use logical, open questions without a fixed plan, and are prepared to practice, the technique is an effective way of exploring ideas in depth.
The theory, techniques, and exercises we shared will help you to push the boundaries of understanding, often into uncharted waters, and to unravel and explore the assumptions and misunderstandings behind your thoughts.
We hope you enjoyed reading this article. Don’t forget to download our three Positive Psychology Exercises for free.
- Beck, A. T., & Dozois, D. J. (2011). Cognitive therapy: Current status and future directions. Annual Review of Medicine, 62, 397–409.
- Chew, S. W., Lin, I. H., & Chen, N. S. (2019). Using Socratic questioning strategy to enhance critical thinking skills of elementary school students. Paper presented at the 2019 IEEE 19th International Conference on Advanced Learning Technologies (ICALT), Maceió, Brazil.
- Clark, G. I., & Egan, S. J. (2015). The Socratic method in cognitive behavioural therapy: A narrative review. Cognitive Therapy and Research, 39(6), 863–879.
- Downey, M. (2003). Effective coaching: Lessons from the coach’s coach (2nd ed.). Thomson/Texere.
- Honderich, T. (2005). The Oxford companion to philosophy. Oxford University Press.
- James, I. A., Morse, R., & Howarth, A. (2010). The science and art of asking questions in cognitive therapy. Behavioural and Cognitive Psychotherapy, 38(1), 83–93.
- Lancer, N., Clutterbuck, D., & Megginson, D. (2016). Techniques for coaching and mentoring (2nd ed.). Routledge.
- Neenan, M. (2008). Using Socratic questioning in coaching. Journal of Rational-Emotive & Cognitive-Behavior Therapy, 27(4), 249–264.
- Overholser, J. (2018). The Socratic method of psychotherapy. Columbia University Press.
- Paul, R., & Elder, L. (2016). The thinker’s guide to the art of Socratic questioning. The Foundation for Critical Thinking.
- Raphael, F., & Monk, R. (2003). The great philosophers. Routledge.
- Reis, R. (2003). The Socratic method: What it is and how to use it in the classroom. Tomorrow’s Professor Postings. Retrieved June 10, 2020, from https://tomprof.stanford.edu/posting/810
- Walsh, J. A., & Sattes, E. D. (2011). Thinking through quality questioning: Deepening student engagement (1st ed.). Corwin.
- Wassermann, S. (2017). The art of interactive teaching: Listening, responding, questioning (1st ed.). Routledge.
Electromagnetic radiation (EM radiation or EMR) is a form of radiant energy released by certain electromagnetic processes. Visible light is one type of electromagnetic radiation, and in some contexts light can refer to all EMR. Other familiar forms are invisible electromagnetic radiations such as X-rays and radio waves.
Classically, EMR consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields that propagate at the speed of light. The oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave. Electromagnetic waves can be characterized by either the frequency or wavelength of their oscillations to form the electromagnetic spectrum, which includes, in order of increasing frequency and decreasing wavelength: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays.
Electromagnetic waves are produced whenever charged particles are accelerated, and these waves can subsequently interact with any charged particles. EM waves carry energy, momentum and angular momentum away from their source particle and can impart those quantities to matter with which they interact. EM waves are massless, but they are still affected by gravity. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this jargon, the near field refers to EM fields near the charges and current that directly produced them, as (for example) with simple magnets, electromagnetic induction and static electricity phenomena.
In the quantum theory of electromagnetism, EMR consists of photons, the elementary particles responsible for all electromagnetic interactions. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is quantized and is greater for photons of higher frequency. This relationship is given by Planck's equation E=hν, where E is the energy per photon, ν is the frequency of the photon, and h is Planck's constant. A single gamma ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light.
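To make the photon-energy relation concrete, here is a minimal Python sketch of Planck's equation; the two example frequencies are illustrative assumptions chosen to represent visible light and a gamma ray:

```python
# Sketch: photon energy from Planck's equation E = h * nu.
# The example frequencies are illustrative assumptions.

PLANCK_H = 6.626e-34  # Planck's constant, J*s

def photon_energy(frequency_hz: float) -> float:
    """Energy in joules of a single photon of the given frequency."""
    return PLANCK_H * frequency_hz

visible = photon_energy(5.0e14)  # roughly green-orange visible light
gamma = photon_energy(5.0e19)    # a representative gamma-ray frequency

print(f"visible photon: {visible:.3e} J")
print(f"gamma photon:   {gamma:.3e} J")
print(f"ratio: {gamma / visible:,.0f}x")  # ~100,000x, as noted above
```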
The effects of EMR upon biological systems (and also to many other chemical systems, under standard conditions) depend both upon the radiation's power and its frequency. For EMR of visible frequencies or lower (i.e., radio, microwave, infrared), the damage done to cells and other materials is determined mainly by power and caused primarily by heating effects from the combined energy transfer of many photons. By contrast, for ultraviolet and higher frequencies (i.e., X-rays and gamma rays), chemical materials and living cells can be further damaged beyond that done by simple heating, since individual photons of such high frequency have enough energy to cause direct molecular damage.
James Clerk Maxwell first formally postulated electromagnetic waves. These were subsequently confirmed by Heinrich Hertz. Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave.
According to Maxwell's equations, a spatially varying electric field is always associated with a magnetic field that changes over time. Likewise, a spatially varying magnetic field is associated with specific changes over time in the electric field. In an electromagnetic wave, the changes in the electric field are always accompanied by a wave in the magnetic field in one direction, and vice versa. This relationship between the two occurs without either type of field causing the other; rather, they occur together in the same way that time and space changes occur together and are interlinked in special relativity. In fact, magnetic fields may be viewed as relativistic distortions of electric fields, so the close relationship between space and time changes here is more than an analogy. Together, these fields form a propagating electromagnetic wave, which moves out into space and need never again affect the source. The distant EM field formed in this way by the acceleration of a charge carries energy with it that "radiates" away through space, hence the term.
Near and far fields
Maxwell's equations established that some charges and currents ("sources") produce a local type of electromagnetic field near them that does not have the behavior of EMR. Currents directly produce a magnetic field, but it is of a magnetic dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric dipole type electrical field, but this also declines with distance. These fields make up the near-field near the EMR source. Neither of these behaviors are responsible for EM radiation. Instead, they cause electromagnetic field behavior that only efficiently transfers power to a receiver very close to the source, such as the magnetic induction inside a transformer, or the feedback behavior that happens close to the coil of a metal detector. Typically, near-fields have a powerful effect on their own sources, causing an increased “load” (decreased electrical reactance) in the source or transmitter, whenever energy is withdrawn from the EM field by a receiver. Otherwise, these fields do not “propagate” freely out into space, carrying their energy away without distance-limit, but rather oscillate, returning their energy to the transmitter if it is not received by a receiver.
By contrast, the EM far-field is composed of radiation that is free of the transmitter in the sense that (unlike the case in an electrical transformer) the transmitter requires the same power to send these changes in the fields out, whether the signal is immediately picked up or not. This distant part of the electromagnetic field is "electromagnetic radiation" (also called the far-field). The far-fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, is completely independent of both transmitter and receiver. Because such waves conserve the amount of energy they transmit through any spherical boundary surface drawn around their source, and because such surfaces have an area that is defined by the square of the distance from the source, the power of EM radiation always varies according to an inverse-square law. This is in contrast to dipole parts of the EM field close to the source (the near-field), which varies in power according to an inverse cube power law, and thus does not transport a conserved amount of energy over distances, but instead fades with distance, with its energy (as noted) rapidly returning to the transmitter or absorbed by a nearby receiver (such as a transformer secondary coil).
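As a rough numerical sketch of these two scaling laws (the amplitudes and unit reference distance are arbitrary assumptions, not physical values):

```python
# Sketch: how far-field (radiated) and near-field (dipole) power terms
# scale with distance, following the 1/r^2 and 1/r^3 laws described above.

def far_field_power(r: float) -> float:
    return 1.0 / r**2   # radiated power density falls off as inverse square

def near_field_power(r: float) -> float:
    return 1.0 / r**3   # dipole-term power density falls off as inverse cube

for r in (1.0, 2.0, 10.0, 100.0):
    print(f"r={r:>6}: far {far_field_power(r):.2e}  near {near_field_power(r):.2e}")
# The near-field term dies off faster, so the radiated term dominates far away.
```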
The far-field (EMR) depends on a different mechanism for its production than the near-field, and upon different terms in Maxwell’s equations. Whereas the magnetic part of the near-field is due to currents in the source, the magnetic field in EMR is due only to the local change in the electric field. In a similar way, while the electric field in the near-field is due directly to the charges and charge-separation in the source, the electric field in EMR is due to a change in the local magnetic field. Both processes for producing electric and magnetic EMR fields have a different dependence on distance than do near-field dipole electric and magnetic fields. That is why the EMR type of EM field becomes dominant in power “far” from sources. The term “far from sources” refers to how far from the source (moving at the speed of light) any portion of the outward-moving EM field is located, by the time that source currents are changed by the varying source potential, and the source has therefore begun to generate an outwardly moving EM field of a different phase.
A more compact view of EMR is that the far-field that composes EMR is generally that part of the EM field that has traveled sufficient distance from the source, that it has become completely disconnected from any feedback to the charges and currents that were originally responsible for it. Now independent of the source charges, the EM field, as it moves farther away, is dependent only upon the accelerations of the charges that produced it. It no longer has a strong connection to the direct fields of the charges, or to the velocity of the charges (currents).
In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity, are both associated with the electromagnetic near-field, and do not comprise EM radiation.
The physics of electromagnetic radiation is electrodynamics. Electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For example, in optics two or more coherent lightwaves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual lightwaves.
Since light is an oscillation, it is not affected by travelling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields; these interactions include the Faraday effect and the Kerr effect.
In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount.
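Snell's law can be applied directly. A minimal Python sketch, assuming typical textbook indices for air and crown glass:

```python
# Sketch: refraction angle from Snell's law, n1*sin(theta1) = n2*sin(theta2).
# The indices (air ~1.000, crown glass ~1.52) are typical textbook values.
import math

def refraction_angle(theta1_deg: float, n1: float, n2: float) -> float:
    """Angle of refraction in degrees for a ray crossing from n1 into n2."""
    sin_theta2 = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(sin_theta2))

print(refraction_angle(30.0, 1.000, 1.52))  # bends toward the normal: ~19.2 deg
```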
EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not too difficult to experimentally observe non-uniform deposition of energy when light is absorbed, however this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of matter. Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair.
Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon. When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once.
Electromagnetic radiation is a transverse wave, meaning that its oscillations are perpendicular to the direction of energy transfer and travel. The electric and magnetic parts of the field stand in a fixed ratio of strengths in order to satisfy the two Maxwell equations that specify how one is produced from the other. These E and B fields are also in phase, with both reaching maxima and minima at the same points in space (see illustrations). A common misconception is that the E and B fields in electromagnetic radiation are out of phase because a change in one produces the other, and this would produce a phase difference between them as sinusoidal functions (as indeed happens in electromagnetic induction, and in the near-field close to antennas). However, in the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a more correct description is that a time-change in one type of field is proportional to a space-change in the other. These derivatives require that the E and B fields in EMR are in-phase (see math section below).
An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion.
A wave consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves the size of buildings to very short gamma rays smaller than atomic nuclei. Frequency is inversely proportional to wavelength, according to the equation:

$$v = f\lambda$$

where v is the speed of the wave (c in a vacuum, or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant.
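A short Python sketch of this relation, using vacuum propagation (v = c); the sample frequencies are illustrative choices:

```python
# Sketch: converting frequency to wavelength with v = f * lambda, in vacuum.

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz: float, v: float = C) -> float:
    return v / frequency_hz

for name, f in [("FM radio", 100e6), ("microwave oven", 2.45e9), ("green light", 5.6e14)]:
    print(f"{name:>14}: {wavelength_m(f):.3e} m")
# FM radio ~3 m, microwaves ~12 cm, green light ~540 nm.
```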
Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation and its polarization.
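The Fourier decomposition described above can be demonstrated numerically. A minimal sketch with NumPy, where the two component frequencies and amplitudes of the synthetic waveform are arbitrary assumptions:

```python
# Sketch: decomposing a composite waveform into monochromatic sinusoidal
# components with a discrete Fourier transform.
import numpy as np

fs = 1000.0                          # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)      # one second of samples
wave = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(wave)
freqs = np.fft.rfftfreq(len(wave), 1.0 / fs)
amplitudes = 2 * np.abs(spectrum) / len(wave)

for f, a in zip(freqs, amplitudes):
    if a > 0.1:                      # report only the significant components
        print(f"{f:.0f} Hz, amplitude {a:.2f}")
# Recovers the 50 Hz and 120 Hz components that make up the composite wave.
```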
Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. An example of interference caused by EMR is electromagnetic interference (EMI) or, as it is more commonly known, radio-frequency interference (RFI).
Particle model and quantum theory
An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem, which later became known as the ultraviolet catastrophe, unsuccessfully for many years. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. Later, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by

$$E = hf = \frac{hc}{\lambda}$$

where h is Planck's constant, λ is the wavelength, and c is the speed of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave.
Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength:

$$p = \frac{E}{c} = \frac{hf}{c} = \frac{h}{\lambda}$$
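A one-line numerical sketch of the momentum relation; the 532 nm wavelength (a common green laser line) is an illustrative choice:

```python
# Sketch: momentum of a single photon from p = h / lambda.

PLANCK_H = 6.626e-34  # Planck's constant, J*s

def photon_momentum(wavelength_m: float) -> float:
    return PLANCK_H / wavelength_m

print(f"{photon_momentum(532e-9):.3e} kg*m/s")  # ~1.25e-27 kg*m/s
```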
The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect.
As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence.
The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. The particle nature is more easily discerned using an object with a large mass. A bold proposition by Louis de Broglie in 1924 led the scientific community to realize that electrons also exhibited wave–particle duality.
Wave and particle effects of electromagnetic radiation
Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature.
These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift.
Any electric charge that accelerates, or any changing magnetic field, produces electromagnetic radiation. Electromagnetic information about the charge travels at the speed of light. Accurate treatment thus incorporates a concept known as retarded time, which adds to the expressions for the electrodynamic electric field and magnetic field. These extra terms are responsible for electromagnetic radiation.
When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current. In many such situations it is possible to identify an electrical dipole moment that arises from separation of charges due to the exciting electrical potential, and this dipole moment oscillates in time, as the charges move back and forth. This oscillation at a given frequency gives rise to changing electric and magnetic fields, which then set the electromagnetic radiation in motion.
At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move, but a superposition of such states may result in a transition state that has an electric dipole moment that oscillates in time. This oscillating dipole moment is responsible for the phenomenon of radiative transition between quantum states of a charged particle. Such states occur (for example) in atoms when photons are radiated as the atom shifts from one stationary state to another.
As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hν, where E is the energy of the photon, h = 6.626 × 10⁻³⁴ J·s is Planck's constant, and ν is the frequency of the wave.
One rule is obeyed regardless of circumstances: EM radiation in a vacuum travels at the speed of light, relative to the observer, regardless of the observer's velocity. (This observation led to Einstein's development of the theory of special relativity.)
In a medium (other than vacuum), the velocity factor or refractive index is used, depending on frequency and application. Both of these are ratios of the speed in a medium to the speed in a vacuum.
Special theory of relativity
By the late nineteenth century, various experimental anomalies could not be explained by the simple wave theory. One of these anomalies involved a controversy over the speed of light. The speed of light and other EMR predicted by Maxwell's equations did not appear unless the equations were modified in a way first suggested by FitzGerald and Lorentz (see history of special relativity); otherwise, that speed would depend on the speed of the observer relative to the "medium" (called luminiferous aether) which supposedly "carried" the electromagnetic wave (in a manner analogous to the way air carries sound waves). Experiments failed to find any observer effect. In 1905, Einstein proposed that space and time appeared to be velocity-changeable entities for light propagation and all other processes and laws. These changes accounted for the constancy of the speed of light and all electromagnetic radiation, from the viewpoints of all observers—even those in relative motion.
History of discovery
Electromagnetic radiation of wavelengths other than those of visible light were discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared.
In 1801, German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions.
In 1862–64, James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays, by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves.
Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. Within one month, he discovered X-rays' main properties.
The last portion of the EM spectrum was discovered in association with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa. The origin of the ray differentiates them: gamma rays tend to be a natural phenomenon originating from the unstable nucleus of an atom, while X-rays are electrically generated (and hence man-made), unless they result from bremsstrahlung X-radiation caused by the interaction of fast-moving particles (such as beta particles) colliding with certain materials, usually of higher atomic numbers.
EM radiation (the designation 'radiation' excludes static electric and magnetic and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal monochromatic waves, which in turn can each be classified into these regions of the EMR spectrum.
For certain classes of EM waves, the waveform is most usefully treated as random, and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their power content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the zero-point wave field of the electromagnetic vacuum.
The behavior of EM radiation depends on its frequency. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.
Soundwaves are not electromagnetic radiation. At the lower end of the electromagnetic spectrum, about 20 Hz to about 20 kHz, are frequencies that might be considered in the audio range. However, electromagnetic waves cannot be directly perceived by human ears. Sound waves are instead the oscillating compression of molecules. To be heard, electromagnetic radiation must be converted to pressure waves of the fluid in which the ear is located (whether the fluid is air, water or something else).
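The difference in propagation speed makes the contrast stark. A small Python sketch comparing wavelengths at 20 kHz, assuming the usual ~343 m/s speed of sound in room-temperature air:

```python
# Sketch: wavelength of a 20 kHz electromagnetic wave versus a 20 kHz
# sound wave in air, using v = f * lambda.

C_LIGHT = 299_792_458.0  # m/s, speed of light in vacuum
C_SOUND = 343.0          # m/s, speed of sound in air at ~20 C (assumed)

f = 20_000.0  # Hz, upper edge of the audio range
print(f"EM wave at 20 kHz:    {C_LIGHT / f / 1000:.1f} km")  # ~15 km
print(f"sound wave at 20 kHz: {C_SOUND / f * 100:.2f} cm")   # ~1.7 cm
```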
Interactions as a function of frequency
When EM radiation interacts with matter, its behavior changes qualitatively as its frequency changes.
Radio and microwave
At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.

Infrared

Infrared EMR interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. For this reason, infrared is reflected by metals (as is most EMR into the ultraviolet) but is absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. In the same process, bulk substances radiate in the infrared spontaneously (see thermal radiation section below).
Visible light

As frequency increases into the visible range, photons of EMR have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the "visible range," as the mechanism of vision involves the change in bonding of a single molecule (retinal) which absorbs light in the rhodopsin in the retina of the human eye. Photosynthesis becomes possible in this range as well, for similar reasons, as a single molecule of chlorophyll is excited by a single photon. Animals that detect infrared make use of small packets of water that change temperature, in an essentially thermal process that involves many photons (see infrared sensing in snakes). For this reason, infrared, microwaves and radio waves are thought to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation (however, there does remain controversy about possible non-thermal biological damage from low frequency EM radiation, see below).
Visible light is able to affect a few molecules with single photons, but usually not in a permanent or damaging way, in the absence of power high enough to increase temperature to damaging levels. However, in plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, in order to prevent reactions that would otherwise interfere with photosynthesis at high light levels. Limited evidence indicates that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A.
Ultraviolet

As frequency increases into the ultraviolet, photons now carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), which has energy too low to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA, and is capable of causing cancer, and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects. This property of causing molecular damage that is out of proportion to heating effects is characteristic of all EMR with frequencies at the visible light range and above. These properties of high-frequency EMR are due to quantum effects that permanently damage materials and tissues at the molecular level.
At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volts (eV), corresponding with wavelengths smaller than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum, with energies in the approximate ionization range, is sometimes called "extreme UV." Ionizing UV is strongly filtered by the Earth's atmosphere.
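The wavelength cutoffs quoted above follow from λ = hc/E. A minimal Python sketch converting the two thresholds:

```python
# Sketch: converting a photoionization energy threshold to a wavelength
# using lambda = h*c / E. The 10 eV and 33 eV cutoffs come from the text.

PLANCK_H = 6.626e-34   # J*s
C = 299_792_458.0      # m/s
EV = 1.602e-19         # joules per electron volt

def threshold_wavelength_nm(energy_ev: float) -> float:
    return PLANCK_H * C / (energy_ev * EV) * 1e9

print(f"10 eV -> {threshold_wavelength_nm(10):.0f} nm")  # ~124 nm
print(f"33 eV -> {threshold_wavelength_nm(33):.1f} nm")  # ~37.6 nm
```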
X-rays and gamma rays
Electromagnetic radiation composed of photons that carry minimum-ionization energy, or more, (which includes the entire spectrum with shorter wavelengths), is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles). Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter. This type of damage causes these types of radiation to be especially carefully monitored, due to their hazard, even at comparatively low-energies, to all living organisms.
Atmosphere and magnetosphere
Most UV and X-rays are blocked by absorption first from molecular nitrogen, and then (for wavelengths in the upper UV) from the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only 30% of the Sun's ultraviolet light reaches the ground, and almost all of this is well transmitted.
Visible light is well transmitted in air, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor.
Absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.
Finally, at radio wavelengths longer than 10 meters or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 meters or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radiowaves from space, when their frequency is less than about 10 MHz (wavelength longer than about 30 meters).
Types and sources, classed by spectral band
When radio waves impinge upon a conductor, they couple to the conductor, travel along it and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge. Such effects can cover macroscopic distances in conductors (such as radio antennas), since the wavelength of radiowaves is long. Radio waves thus have the most overtly "wave-like" characteristics of EMR.
Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes referred to as light.
Thermal radiation and electromagnetic radiation as a form of heat
The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire.
Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms, most of the energy ultimately becomes thermal energy, all within a tiny fraction of a second. It is the initial bond-breaking, not the eventual heating, that makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules due to electronic excitation, which is far greater per unit energy than heating effects.
Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, "heat" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can "heat" (in the sense of increasing the thermal energy of) a material, when it is absorbed.
The inverse or time-reversed process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material.
Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to visible light) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important only as it affects penetration into the organism (for example, microwaves penetrate better than infrared). Initially, it was believed that low frequency fields that were too weak to cause significant heating could not possibly have any biological effect.
Despite this opinion among researchers, evidence has accumulated that supports the existence of complex biological effects of weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter do not strictly qualify as EM radiation) and modulated RF and microwave fields. Fundamental mechanisms of the interaction between biological material and electromagnetic fields at non-thermal levels are not fully understood.
The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B - possibly carcinogenic. This group contains possible carcinogens that have weaker evidence, at the same level as coffee and automobile exhaust. For example, epidemiological studies looking for a relationship between cell phone use and brain cancer development, have been largely inconclusive, save to demonstrate that the effect, if it exists, cannot be a large one.
At higher frequencies (visible and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules. All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer.
Thus, at UV frequencies and higher (and probably somewhat also in the visible range), electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet. UV, with X-ray and gamma radiation, are referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage-produced per unit of energy, or power) than the rest of the electromagnetic spectrum.
Derivation from electromagnetic theory
Electromagnetic waves were predicted by the classical laws of electricity and magnetism, known as Maxwell's equations. Inspection of Maxwell's equations without sources (charges or currents) results in nontrivial solutions of changing electric and magnetic fields. Beginning with Maxwell's equations in free space:

$$\nabla \cdot \mathbf{E} = 0 \qquad (1)$$
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad (2)$$
$$\nabla \cdot \mathbf{B} = 0 \qquad (3)$$
$$\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \qquad (4)$$

where ∇ is a vector differential operator (see Del).

For a more useful solution, we utilize the following vector identity, which works for any vector A:

$$\nabla \times \left( \nabla \times \mathbf{A} \right) = \nabla \left( \nabla \cdot \mathbf{A} \right) - \nabla^2 \mathbf{A}$$

Taking the curl of equation (2):

$$\nabla \times \left( \nabla \times \mathbf{E} \right) = \nabla \times \left( -\frac{\partial \mathbf{B}}{\partial t} \right) \qquad (5)$$

Evaluating the left hand side, simplifying by using equation (1):

$$\nabla \times \left( \nabla \times \mathbf{E} \right) = \nabla \left( \nabla \cdot \mathbf{E} \right) - \nabla^2 \mathbf{E} = -\nabla^2 \mathbf{E} \qquad (6)$$

Evaluating the right hand side, using equation (4):

$$\nabla \times \left( -\frac{\partial \mathbf{B}}{\partial t} \right) = -\frac{\partial}{\partial t} \left( \nabla \times \mathbf{B} \right) = -\mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} \qquad (7)$$

Equations (6) and (7) are equal, so this results in a vector-valued differential equation for the electric field, namely

$$\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}$$

Applying a similar pattern results in a similar differential equation for the magnetic field:

$$\nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2}$$

These differential equations are equivalent to the wave equation:

$$\nabla^2 f = \frac{1}{c_0^2} \frac{\partial^2 f}{\partial t^2}$$

where

- c₀ is the speed of the wave in free space and
- f describes a displacement.

Or more simply:

$$\Box f = 0$$

where □ is the d'Alembertian:

$$\Box = \nabla^2 - \frac{1}{c_0^2} \frac{\partial^2}{\partial t^2}$$

In the case of the electric and magnetic fields, the speed is:

$$c_0 = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}$$

This is the speed of light in vacuum. Maxwell's equations unified the vacuum permittivity ε₀, the vacuum permeability μ₀, and the speed of light itself, c₀. This relationship had been discovered by Wilhelm Eduard Weber and Rudolf Kohlrausch prior to the development of Maxwell's electrodynamics; however, Maxwell was the first to produce a field theory consistent with waves traveling at the speed of light.

These are only two equations versus the original four, so more information pertains to these waves hidden within Maxwell's equations. A generic vector wave for the electric field has the form

$$\mathbf{E} = \mathbf{E}_0 f\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right)$$

Here, E₀ is the constant amplitude, f is any second-differentiable function, k̂ is a unit vector in the direction of propagation, and x is a position vector. E₀ f(k̂ · x − c₀t) is a generic solution to the wave equation. In other words

$$\nabla^2 f\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right) = \frac{1}{c_0^2} \frac{\partial^2}{\partial t^2} f\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right)$$

for a generic wave traveling in the k̂ direction.

This form will satisfy the wave equation.

$$\nabla \cdot \mathbf{E} = \hat{\mathbf{k}} \cdot \mathbf{E}_0 f'\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right) = 0$$
$$\mathbf{E} \cdot \hat{\mathbf{k}} = 0$$

The first of Maxwell's equations implies that the electric field is orthogonal to the direction the wave propagates.

$$\nabla \times \mathbf{E} = \hat{\mathbf{k}} \times \mathbf{E}_0 f'\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right) = -\frac{\partial \mathbf{B}}{\partial t}$$
$$\mathbf{B} = \frac{1}{c_0} \hat{\mathbf{k}} \times \mathbf{E}$$

The second of Maxwell's equations yields the magnetic field. The remaining equations will be satisfied by this choice of E and B.

The electric and magnetic field waves in the far-field travel at the speed of light. They have a special restricted orientation and proportional magnitudes, E₀ = c₀B₀, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as E × B. Also, E and B far-fields in free space, which as wave solutions depend primarily on these two Maxwell equations, are in-phase with each other. This is guaranteed since the generic wave solution is first order in both space and time, and the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time-derivative on the other side of the equations, which gives the other field, is first order in time, resulting in the same phase shift for both fields in each mathematical operation.
From the viewpoint of an electromagnetic wave traveling forward, the electric field might be oscillating up and down, while the magnetic field oscillates right and left. This picture can be rotated with the electric field oscillating right and left and the magnetic field oscillating down and up. This is a different solution that is traveling in the same direction. This arbitrariness in the orientation with respect to propagation direction is known as polarization. On a quantum level, it is described as photon polarization. The direction of the polarization is defined as the direction of the electric field.
More general forms of the second-order wave equations given above are available, allowing for both non-vacuum propagation media and sources. Many competing derivations exist, all with varying levels of approximation and intended applications. One very general example is a form of the electric field equation, which was factorized into a pair of explicitly directional wave equations, and then efficiently reduced into a single uni-directional wave equation by means of a simple slow-evolution approximation.
See also
- Antenna (radio)
- Antenna measurement
- Control of electromagnetic radiation
- Electromagnetic field
- Electromagnetic pulse
- Electromagnetic radiation and health
- Electromagnetic spectrum
- Electromagnetic wave equation
- Evanescent wave coupling
- Finite-difference time-domain method
- Impedance of free space
- Maxwell's equations
- Near and far field
- Radiant energy
- Radiation reaction
- Risks and benefits of sun exposure
- Sinusoidal plane-wave solutions of the electromagnetic wave equation
References
- Carmichael, H. J. "Einstein and the Photoelectric Effect". Quantum Optics Theory Group, University of Auckland. Retrieved 22 December 2009.
- Thorn, J. J.; Neel, M. S.; Donato, V. W.; Bergreen, G. S.; Davies, R. E.; Beck, M. (2004). "Observing the quantum behavior of light in an undergraduate laboratory". American Journal of Physics 72 (9): 1210. Bibcode:2004AmJPh..72.1210T. doi:10.1119/1.1737397.
- Paul M. S. Monk (2004). Physical Chemistry. John Wiley and Sons. p. 435. ISBN 978-0-471-49180-4.
- Weinberg, S. (1995). The Quantum Theory of Fields 1. Cambridge University Press. pp. 15–17. ISBN 0-521-55001-7.
- Philosophical Transactions of the Royal Society of London, Vol. 90 (1800), pp. 284-292, http://www.jstor.org/stable/info/107057
- James Jeans (1947) The Growth of Physical Science, link from Internet Archive
- Liebel, F.; Kaur, S.; Ruvolo, E.; Kollias, N.; Southall, M. D. (2012). "Irradiation of Skin with Visible Light Induces Reactive Oxygen Species and Matrix-Degrading Enzymes". Journal of Investigative Dermatology 132 (7): 1901–1907. doi:10.1038/jid.2011.476. PMID 22318388.
- Binhi, Vladimir N; Repiev, A & Edelev, M (translators from Russian) (2002). Magnetobiology: Underlying Physical Problems. San Diego: Academic Press. pp. 1–16. ISBN 978-0-12-100071-4. OCLC 49700531.
- Delgado, J. M.; Leal, J.; Monteagudo, J. L.; Gracia, M. G. (1982). "Embryological changes induced by weak, extremely low frequency electromagnetic fields". Journal of anatomy 134 (Pt 3): 533–551. PMC 1167891. PMID 7107514.
- Harland, J. D.; Liburdy, R. P. (1997). "Environmental magnetic fields inhibit the antiproliferative action of tamoxifen and melatonin in a human breast cancer cell line". Bioelectromagnetics 18 (8): 555–562. doi:10.1002/(SICI)1521-186X(1997)18:8<555::AID-BEM4>3.0.CO;2-1. PMID 9383244.
- Aalto, S.; Haarala, C.; Brück, A.; Sipilä, H.; Hämäläinen, H.; Rinne, J. O. (2006). "Mobile phone affects cerebral blood flow in humans". Journal of Cerebral Blood Flow & Metabolism 26 (7): 885–890. doi:10.1038/sj.jcbfm.9600279. PMID 16495939.
- Cleary, S. F.; Liu, L. M.; Merchant, R. E. (1990). "In vitro lymphocyte proliferation induced by radio-frequency electromagnetic radiation under isothermal conditions". Bioelectromagnetics 11 (1): 47–56. doi:10.1002/bem.2250110107. PMID 2346507.
- Ramchandani, P. (2004). "Prevalence of childhood psychiatric disorders may be underestimated". Evidence-based mental health 7 (2): 59. doi:10.1136/ebmh.7.2.59. PMID 15107355.
- IARC classifies Radiofrequency Electromagnetic Fields as possibly carcinogenic to humans. World Health Organization. 31 May 2011
- "Trouble with cell phone radiation standard". CBS News.
- See PMID 22318388 for evidence of quantum damage from visible light via reactive oxygen species generated in skin. This happens also with UVA. With UVB, the damage to DNA becomes direct, with photochemical formation of pyrimidine dimers.
- Narayanan, DL; Saladi, RN, Fox, JL (September 2010). "Ultraviolet radiation and skin cancer". International Journal of Dermatology 49 (9): 978–86. doi:10.1111/j.1365-4632.2010.04474.x. PMID 20883261.
- Saladi, RN; Persaud, AN (January 2005). "The causes of skin cancer: a comprehensive review". Drugs of today (Barcelona, Spain : 1998) 41 (1): 37–53. doi:10.1358/dot.2005.41.1.875777. PMID 15753968.
- Kinsler, P. (2010). "Optical pulse propagation with minimal approximations". Phys. Rev. A 81: 013819. arXiv:0810.5689. Bibcode:2010PhRvA..81a3819K. doi:10.1103/PhysRevA.81.013819.
- Hecht, Eugene (2001). Optics (4th ed.). Pearson Education. ISBN 0-8053-8566-5.
- Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks Cole. ISBN 0-534-40842-7.
- Tipler, Paul (2004). Physics for Scientists and Engineers: Electricity, Magnetism, Light, and Elementary Modern Physics (5th ed.). W. H. Freeman. ISBN 0-7167-0810-8.
- Reitz, John; Milford, Frederick; Christy, Robert (1992). Foundations of Electromagnetic Theory (4th ed.). Addison Wesley. ISBN 0-201-52624-7.
- Jackson, John David (1999). Classical Electrodynamics (3rd ed.). John Wiley & Sons. ISBN 0-471-30932-X.
- Allen Taflove and Susan C. Hagness (2005). Computational Electrodynamics: The Finite-Difference Time-Domain Method, 3rd ed. Artech House Publishers. ISBN 1-58053-832-0.
External links
- Electromagnetism – a chapter from an online textbook
- Electromagnetic Radiation – an introduction for electrical engineers
- Electromagnetic Waves from Maxwell's Equations on Project PHYSNET.
- Radiation of atoms? e-m wave, Polarisation, ...
- An Introduction to The Wigner Distribution in Geometric Optics
- The windows of the electromagnetic spectrum, on Astronoo
- Introduction to light and electromagnetic radiation course video from the Khan Academy
- Lectures on electromagnetic waves course video and notes from MIT Professor Walter Lewin |
Pythagorean Theorem in 3D
Grade 8 Math Worksheets
The Pythagorean Theorem states that the square of the length of the longest side (the hypotenuse) of a right triangle is equal to the sum of the squares of the lengths of the other two sides.
Table of Contents:
- Pythagorean Theorem in 3D
- Solved Examples
Pythagorean Theorem in 3D - Grade 8 Math Worksheet PDF
This is a free worksheet with practice problems and answers. You can also work on it online.
Pythagorean Theorem in 3D
The Pythagorean Theorem can be extended to three-dimensional space, where it is called the Pythagorean Theorem in 3D. In 3D space, the theorem states that the square of the length of the space diagonal of a rectangular solid (the longest segment that fits inside it) is equal to the sum of the squares of the lengths of its three mutually perpendicular edges.
In mathematical notation, the Pythagorean Theorem in 3D can be written as:
c^2 = a^2 + b^2 + h^2
where c is the length of the space diagonal (the 3D hypotenuse), and a, b, and h are the lengths of the three mutually perpendicular edges (for a rectangular box: its length, width, and height).

In 3D space, the three perpendicular dimensions play the role of the legs, and the longest segment (opposite the right angle between the height and the diagonal of the base) is called the hypotenuse, just as in 2D space. The formula follows from applying the 2D theorem twice: the base diagonal d satisfies d^2 = a^2 + b^2, and the space diagonal satisfies c^2 = d^2 + h^2 = a^2 + b^2 + h^2. The Pythagorean Theorem in 3D can be used to find the length of the space diagonal or the length of one of the edges when the other lengths are known.
The Pythagorean Theorem in 3D has many applications in fields such as physics, engineering, and computer graphics, where it is used to calculate distances, lengths, and angles in 3D space.
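The two solved examples below can be checked with a few lines of code. This is a minimal sketch; the function names diagonal_3d and distance_3d are illustrative, not from any particular library:

```python
import math

def diagonal_3d(a: float, b: float, h: float) -> float:
    """Space diagonal of a box with edge lengths a, b, h: sqrt(a^2 + b^2 + h^2)."""
    return math.sqrt(a**2 + b**2 + h**2)

def distance_3d(p, q) -> float:
    """Distance between 3D points p and q, treating coordinate differences as the legs."""
    return diagonal_3d(q[0] - p[0], q[1] - p[1], q[2] - p[2])

print(distance_3d((1, 2, 3), (4, 5, 6)))  # sqrt(27) ≈ 5.196
print(diagonal_3d(3, 4, 5))               # sqrt(50) ≈ 7.071
```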
Find the distance between two points in 3D space with coordinates (1, 2, 3) and (4, 5, 6).
We can model this situation as a right triangle in 3D space, with the distance between the two points as the hypotenuse, and the differences in x, y, and z coordinates as the legs. Using the Pythagorean Theorem in 3D, we have:
c^2 = a^2 + b^2 + h^2
c^2 = (4 – 1)^2 + (5 – 2)^2 + (6 – 3)^2
c^2 = 3^2 + 3^2 + 3^2
c^2 = 27
c = sqrt(27)
Therefore, the distance between the two points is approximately 5.196 units.
A rectangular box has dimensions of 3 feet by 4 feet by 5 feet. What is the length of the longest diagonal of the box?
We can model this situation as a right triangle in 3D space, with the diagonal of the box as the hypotenuse, and the dimensions of the box as the legs. Using the Pythagorean Theorem in 3D, we have:
c^2 = a^2 + b^2 + h^2
c^2 = 3^2 + 4^2 + 5^2
c^2 = 9 + 16 + 25
c^2 = 50
c = sqrt(50)
Therefore, the length of the longest diagonal of the box is approximately 7.071 feet.
Can the Pythagorean Theorem in 3D be used for non-right triangles?
No, the Pythagorean Theorem in 3D only applies to right triangles in 3D space. For non-right triangles in 3D space, other methods such as the Law of Cosines or Law of Sines are used to find the lengths of the sides.
Can the Pythagorean Theorem in 3D be used for any three sides of a triangle?
No, the Pythagorean Theorem in 3D can only be used for right triangles in 3D space, where one of the angles is a right angle. In a non-right triangle in 3D space, there is no hypotenuse, and the Pythagorean Theorem in 3D does not apply.
What are some practical applications of the Pythagorean Theorem in 3D?
The Pythagorean Theorem in 3D has many practical applications in fields such as architecture, engineering, and physics. For example, it can be used to calculate the distance between two points in 3D space, the length of the diagonal of a rectangular box, or the distance between a point and a plane.
How do you use the Pythagorean Theorem in 3D?
To use the Pythagorean Theorem in 3D, you need to identify the right triangle in 3D space and its sides. Then, you can plug the known lengths into the equation c^2 = a^2 + b^2 + h^2 and solve for the unknown length.
How is the Pythagorean Theorem in 3D related to the Pythagorean Theorem in 2D?
The Pythagorean Theorem in 3D is an extension of the Pythagorean Theorem in 2D, which applies to right triangles in a two-dimensional plane. Both theorems relate the lengths of the sides of a right triangle, where the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides.
Multiple graphic organizers included. Students explore various types of graphs and learn about the characteristics and advantages of each type, including line plot, histogram, line graph, circle graph, bar graph, stem-and-leaf plot, and double line graph. There are two types of graphic organizers included.
4. enable the researcher to draw charts and graphs for the presentation of data ... Anchor line at start and finish. ... data plot that uses part of the data value as the stem and part of the data value as the leaf to form groups or classes. Steps to Making a …
A stem and leaf diagram is drawn by splitting the tens and units column. The tens column becomes the 'stem' and the units become the 'leaf'. Stem and leaf diagrams must be in order for them to be read properly. To put the number 78.9 into a stem and leaf diagram, the 'stem' would be 78 and the 'leaf' would be 9.
A stem-and-leaf display (also known as a stemplot) is a diagram designed to allow you to quickly assess the distribution of a given dataset. Basically, the plot splits two-digit numbers in half: Stems – The first digit; Leaves – The second digit; As an example, look at the chart below. The chart displays the age breakdown of a small population.
A stem-and-leaf display is often called a stemplot, but the latter term often refers to another chart type. Unlike histograms, stem-and-leaf displays retain the original data to at least two significant digits, and put the data in order, thereby easing the move to order-based inference and non-parametric statistics.
A stem-and-leaf plot can be used to show how often data values occur and how they are distributed. Each leaf on the plot represents the right-hand digit in a data value, and each stem represents left-hand digits. 2 4 7 9 3 0 6 Stems Leaves Key: 2|7 means 27 Course 2 1-3 Frequency Tables and Stem-and-Leaf Plots
Asexual Propagation: Leaf Cuttings • Leaf Cuttings Leaf cuttings are used almost exclusively for a few indoor plants. Leaves of most plants will either produce a few roots but no plant, or just decay. • Whole Leaf with Petiole Detach the leaf and up to 1 1/2 inches of petiole. Insert the lower end of the petiole into the medium.
Author: Ed Nelson Department of Sociology M/S SS97 California State University, Fresno Fresno, CA 93740 Email: [email protected] Note to the Instructor: The data set used in this exercise is gss14_subset_for_classes_STATISTICS.sav which is a subset of the 2014 General Social Survey. Some of the variables in the GSS have been recoded to make them easier to …
Back to Back Stem and Leaf Plots/Describing Histograms (9) Compare Data Displays Using Mean, Median and Range to Describe and Interpret Numerical Data Sets in Terms of Location (centre) and Spread (9) Quartiles/IQR/Box and Whisker Plots (10) Scatter Plots (10) Calculating Standard Deviation (10A) Calculating Z-Scores (VCE)
Cut out leaf shapes, and attach them to the stem. Use a hole punch to make holes at the bottom of the stem, and tie brown yarn through the holes to represent roots. Using the strawberry plant or Parts of a Strawberry Plant poster as a reference, have the students attach each plant part from Parts of a Plant Template 2 onto the corresponding petal.
Free Math Games Activities for Kids. Stem and Leaf Plots for each row the number in this stem or the middle column, represents the first digit or digits of sample values. The leaf at the top of the plot indicates which decimal place leaf value represents. Stem and leaf plot chocolate game. Stem and Leaf Plotter. Stem and Leaf Plot Powerpoints.
Grade 6 Topic Anchor Charts. Grade 6 Course Overview Topic 1: Factors and Multiples Topic 2: Positive Rational Numbers Topic 3: Shapes and Solids Topic 4: Decimals ... Stem-and-leaf plot. Module 5: Describing Variability in Quantities Topic 2: Numerical Summaries of Data Mean as a balance point Standard algorithm for mean
Here is a set of data on showing the test scores on the last science quiz. Step 1: In order to create a stem and leaf plot, we need to first organize the data into groups. In this situation, we will group the tests by decades. Step 2: Create the plot with …
Histograms are connected bar charts for quantitative (numeric) data. In these graphs, the bars are connected because there is an ordered underlying continuum of possible values; ordering of bars in a bar graph is arbitrary. We will also examine frequency polygons (a line chart connecting the tops of the bars in a histogram), stem-and-leaf plots ...
In a stem-and-leaf graph, we separate the digits of each data point into the right most digit vs. the rest, so, for example, the 1s place becomes the “leaf,” and the rest of the number, becomes the “stem.” So in the above data set, we can split the numbers into the number of 10s (the stem) and the number of 1s (the leaf).
Jan 26, 2021 Section contents: Introduction to vascular plant structure ← Leaf structure evolution Branching Feature image. Plant anatomy sections. Left: Cross section of a polyarch root of Aeglopsis chevalieri, a plant in the citrus family (Rutaceae) of angiosperms. Center: Cross section of a woody stem of kenaf (Hibisicus cannabinus, an angiosperm). Right: Epidermis of …
Jul 07, 2021 This can be done using multiple ways. One way was discussed above using the add_axes () method of the figure class. Let’s see various ways multiple plots can be added with the help of examples. Method 1: Using the add_axes () method. The add_axes () method figure module of matplotlib library is used to add an axes to the figure.
Jul 24, 2021 The leaf- and fruit-related traits were measured in the F 1 population (NY) and the two parents. LL was the maximum distance between the leaf base and tip. LW was the widest distance across the leaf. FW was the weight of one mature fruit. FL was the maximum distance between the top and bottom of the fruit. FD was the widest distance across a fruit.
Jun 16, 2016 Support: Primary function of the stem is to hold up buds, flowers, leaves, and fruits to the plant. Along with the roots, a stem anchors the plants and helps them to stand upright and perpendicular to the ground. Transportation: It is the part which transports water and minerals from the root and prepared food from leaves to other parts of the ...
Jun 22, 2020 I have the following MWE to generate a stem-and-leaf plot. I have included a pdf of the output. I would like the word Stem to appear above the stem on the plot, and the word Leaf to appear above the leaves. At …
leaf is one in which the blade is all one piece. A compound leaf is one in which the blade is composed of a number of separate leaf-like parts called the leaflets. 5. The shape of the blade can be long and slender, or oval, or heart-shaped or triangular. The top of a leaf may be pointed, rounded or flattened. The margin may have
Make Stem-and-Leaf Plots - I : Worksheet for Fifth Grade Math. Practice problems to help you make stem and leaf plots from a wide range of data set in this worksheet. Category: Data and Graphs Data Display and Interpretation Histograms …
May 14, 2020 When reading a stem and leaf plot, you will want to start with the key. It will guide you on how to read the other values. The key on this plot shows that the stem is the tens place and the leaf is the ones place. Stem and leaf plots are similar to a horizontal bar graph, but the actual numbers are used instead of bars.
Stem Leaf Displays. The stem leaf display is a simple data summary technique which not only rank orders the data points in a sample but presents them visually so that the shape of the data distribution is reflected. Stem leaf displays are formed from data scores by splitting each score into two parts: the first part of each score serving ...
Stem and Leaf Plots. A Stem and Leaf Plot is a special table where each data value is split into a stem (the first digit or digits) and a leaf (usually the last digit). Like in this example: Example: 32 is split into 3 (stem) and 2 (leaf). More Examples: Stem 1 Leaf 5 means 15;
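The splitting rule in this example is easy to automate. A minimal sketch in Python, assuming non-negative integers where the stem is everything but the last digit:

```python
from collections import defaultdict

def stem_and_leaf(data):
    """Print a stem-and-leaf plot: stem = value // 10, leaf = value % 10."""
    groups = defaultdict(list)
    for value in sorted(data):
        stem, leaf = divmod(value, 10)
        groups[stem].append(leaf)
    for stem in sorted(groups):
        print(stem, "|", " ".join(str(leaf) for leaf in groups[stem]))

stem_and_leaf([32, 15, 47, 41, 35, 52, 39])
# 1 | 5
# 3 | 2 5 9
# 4 | 1 7
# 5 | 2
```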
Stem Cuttings • Prepare a pot or flat that has drainage holes with moistened 1/2 perlite and 1/2 coir. • Select a healthy side shoot which bears the characteristics of the parent plant. • With a sharp knife cut a 4 to 6 inch shoot 1/4 to 1/2 inch below a leaf node. • Remove the lowest leaves and dip in rooting hormone.
Table 1.2. . A simple way to order, and also to display, the data is to use a stem and leaf plot. To do this we need to abbreviate the observations to two significant digits. In the case of the urinary concentration data, the digit to the left of the decimal point is …
The data are to be interpreted and questions based on it are to be answered in the make and interpret plot pages. Stem-and-leaf plots also contain back-to-back plots, rounding data, truncating data and more. These pdf worksheets are recommended for students of grade 4 through grade 8. Our free stem and leaf plots can be accessed instantly.
These Data Analysis Anchor Charts are great to hang or project in the classroom and added to interactive journals. Easy to Read with graphics and explanations of dot plots, frequency tables, and stem-and-leaf plots. Please see the PREVIEW for examples of what is included in the product. Included: Dot Plot. Stem-and-Leaf Plot.
This example has two lists of values. Since the values are similar, I can plot them all on one stem-and-leaf plot by drawing leaves on either side of the stem. I will use the tens digits as the stem values, and the ones digits as the leaves. Since 9 (in the Econ 101 list) has no tens digit, the stem value will be 0.
Three problems are there in this worksheet to make you fluent in making stem and leaf plots using a big data set. Category: Data and Graphs Data Display and Interpretation Histograms and Stem-and-Leaf Plots. Get this Worksheet. Worksheet: Fifth Grade. Make Histograms - I. Make a frequency table from a given data set and then use it make a ...
W O R K T O G E T H E R * Anchor plant Absorb water and minerals Store sugar as starch Transport materials Produce some hormones Interact with soil microbes * Photosynthesis (primarily in leaves) Transport of materials Reproduction Hormone synthesis * Covers flowers, seeds, fruit Secretes a waxy substance called cuticle as waterproofing.
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
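A worked hint: repeating the three-digit block multiplies it by 1001, and 1001 = 7 × 11 × 13. A small Python check of this reasoning (illustrative only):

```python
# abcabc = abc * 1000 + abc = abc * 1001, and 1001 = 7 * 11 * 13,
# so every number of this form is divisible by 7, 11 and 13.
assert 7 * 11 * 13 == 1001
for abc in (594, 123, 870):
    abcabc = int(f"{abc:03d}" * 2)   # e.g. 594 -> 594594
    assert abcabc == abc * 1001
    assert abcabc % 7 == abcabc % 11 == abcabc % 13 == 0
```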
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
Find some examples of pairs of numbers such that their sum is a factor of their product. e.g. 4 + 12 = 16 and 4 × 12 = 48, and 16 is a factor of 48.
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
For this challenge, you'll need to play Got It! Can you explain the strategy for winning this game with any target?
Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number?
You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by...
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important.
Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it...
In how many ways can you arrange three dice side by side on a surface so that the sum of the numbers on each of the four faces (top, bottom, front and back) is equal?
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
Four bags contain a large number of 1s, 3s, 5s and 7s. Pick any ten numbers from the bags above so that their total is 37.
Consider all two digit numbers (10, 11, ..., 99). In writing down all these numbers, which digits occur least often, and which occur most often? What about three digit numbers, four digit numbers...
Explore the effect of reflecting in two intersecting mirror lines.
Think of a number, add one, double it, take away 3, add the number you first thought of, add 7, divide by 3 and take away the number you first thought of. You should now be left with 2. How do I...
Can you explain how this card trick works?
Can you find the values at the vertices when you know the values on the edges?
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
It would be nice to have a strategy for disentangling any tangled... Can you tangle yourself up and reach any fraction?
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable.
Decide which of these diagrams are traversable.
Explore the effect of combining enlargements.
What would you get if you continued this sequence of fraction sums?
1/2 + 2/1 =
2/3 + 3/2 =
3/4 + 4/3 =
Can you find sets of sloping lines that enclose a square?
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153?
Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?
Take any two positive numbers. Calculate the arithmetic and geometric means. Repeat the calculations to generate a sequence of arithmetic means and geometric means. Make a note of what happens to the...
Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
Imagine an infinitely large sheet of square dotty paper on which you can draw triangles of any size you wish (providing each vertex is on a dot). What areas is it/is it not possible to draw?
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4.
With one cut a piece of card 16 cm by 9 cm can be made into two pieces which can be rearranged to form a square 12 cm by 12 cm. Explain how this can be done.
Can you find an efficient method to work out how many handshakes there would be if hundreds of people met?
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
Use the animation to help you work out how many lines are needed to draw mystic roses of different sizes.
The Egyptians expressed all fractions as the sum of different unit fractions. Here is a chance to explore how they could have written...
Can all unit fractions be written as the sum of two unit fractions?
Great Granddad is very proud of his telegram from the Queen congratulating him on his hundredth birthday and he has friends who are even older than he is... When was he born?
A little bit of algebra explains this 'magic'. Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. Then ask them to add the four numbers and multiply by 67, and to tell you...
Three circles have a maximum of six intersections with each other. What is the maximum number of intersections that a hundred circles could have?
Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws?
It starts quite simple but great opportunities for number discoveries and patterns!
A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all?
Fungi are more than meets the eye, according to a recently published study in MycoKeys. When we usually think of fungi, we tend to imagine the common forms that mycologists have studied for decades. This might include mushrooms, mold, kombucha, and baker's yeast — something that holds a physical form or can be grown on a culture dish in a lab. However, the new DNA sequencing study has found overwhelming evidence of fungi that neither hold the physical form of a fruiting body nor can be grown in a laboratory setting. Scientists now know they exist, but we currently cannot observe them outside of their DNA code. These fungal groups are often referred to as "dark taxa" and are much more common than we think. This discovery challenges how the mycological community has been classifying and naming fungi and provides a new dimension of fungal diversity research.
Dark matter is a term used in astronomy to describe matter that is not visible by any light-detecting instruments such as a telescope. Though scientists have not yet seen this matter’s physical form, they believe it makes up around 85% of the matter in the universe.
Dark taxa are a very similar concept: they are detected not through direct observation or defined by morphology, but through their DNA sequences (1). In a study of global soil fungi, researchers used ribosomal RNA phylogenetic analysis to place several unidentified sequences within the fungal tree of life and relate them to major fungal lineages (2). Through the sequencing of the soil samples, they found that around 10-20% of fungal groups were new, identified only from the genetic information of dark fungal taxa (DFT).
The world of fungi is vast and diverse, with an estimated 2.2 to 3.8 million different species. Molecular methods use the DNA sequences of fungi to explore individual fungi and whole fungal communities. Scientists utilize a specific genetic marker in a DNA sequence called the internal transcribed spacer (ITS) to identify different kinds of fungi (3). Over a million ITS sequences, generated through an identification process called metabarcoding, are stored for reference in databases.
One database called UNITE organizes and shares information about the DNA sequence data of fungi. UNITE has over 450,000 "species hypotheses" that allow various studies to reference its data based on similarities in DNA sequences, as each hypothesis has a unique ID. Many species of DFT are within the UNITE database, but many contemporary mycologists hold that DFT have little to no real and objective existence in the biological world (4). DNA barcoding allows data to be easily stored online, but this data is limited in how much information it carries and in how easily different sources of information can be linked together. Since DFT have DNA that has not yet been identified at the species level, they create a challenge for integrating biodiversity data with shared classification names (5).
The International Code of Nomenclature (ICN) for algae, fungi, and plants defines the scientific naming and description of fungal species. It does not allow species descriptions based solely on typified DNA sequences, so DFT have been excluded from any formal classification. However, DFT have significant scientific value in discovering information about new species, branching orders, overlooked taxa, and ecological patterns (6).
The authors of the new study argue that scientists should include DFT in the DNA-based typification system because these fungi can later be identified or used to identify and understand new species of fungi. They observe that the traditional approaches used to recover fungal species should be updated because environmental DNA sequencing is most effective at identifying all forms of fungi, including DFT (7). Regardless of DFT’s lack of physical form, the authors found that they hold much significance in the fungal kingdom and should no longer be ignored by mycologists.
In an interview with EurekAlert!, lead author Henrik Nilsson from the University of Gothenburg, Sweden, notes, "species and groups that cannot be named formally, well, they tend to fall between the cracks. They're typically not considered in nature conservation initiatives. They are often left out from efforts to estimate the evolutionary history of fungi, and their ecological roles and associations are largely overlooked when we try to figure out how mass and energy flow in ecosystems. They're essentially treated as if they didn't exist."
Despite major debates within the mycological community, a recent effort to revise the ICN's definition of formal species naming was declined by a majority vote (8). This rejection demonstrates how DFT are still perceived as irrelevant by much of the fungal research world. Despite the lack of support from many mycologists, the authors have requested minimal changes to the nomenclatural rules to allow official naming for the most well-known and documented species of DFT. These changes to the ICN would exclude any DFT that are not yet well defined. Updating the naming system in this way would keep the most significant forms of DFT from becoming outdated and irrelevant. The research behind DFT is ongoing and remains to be further debated by the mycological community.
“The nomenclatural aspects of dark fungi will presumably be discussed at some length at next year’s international mycological congress in Maastricht, the Netherlands. We’re hopeful that the mycological community will reach meaningful agreement on integration of the dark fungi into the rules of nomenclature. After all, mycologists are used to negotiating and solving non-trivial questions on a day-to-day basis, and this one is hardly any different,” says Marisol Sanchez-Garcia of the Swedish Agricultural University.
Even with the debates in the community, mycologists hope to reach a consensus about integrating DFT into the rules of nomenclature. Using DNA sequencing to identify and classify fungal groups as distinct species may be a novel approach; however, this method can potentially recognize and add several thousand more fungi to existing databases for future research. Therefore, scientists can one day better understand the role of certain fungal groups like DFT, which are abundant in both soil and water ecosystems. Still, further research is necessary for scientists to fully understand the characteristics and functions of DFT. |
Sound measurements
|Quantity|Symbols|
|Sound pressure|p, SPL, L_PA|
|Particle velocity|v, SVL|
|Sound intensity|I, SIL|
|Sound power|P, SWL, L_WA|
|Sound energy density|w|
|Sound exposure|E, SEL|
|Speed of sound|c|
The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. At 20 °C (68 °F), the speed of sound in air is about 343 metres per second (1,235 km/h; 1,125 ft/s; 767 mph; 667 kn), or one kilometre in about 2.9 s or one mile in about 4.7 s. It depends strongly on temperature, but also varies by several metres per second depending on humidity and carbon dioxide content.
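The equivalents quoted above are plain unit conversions; a quick sketch in Python:

```python
c = 343.0  # speed of sound in air at 20 °C, in m/s

print(c * 3.6)                 # ≈ 1,235 km/h
print(c / 0.3048)              # ≈ 1,125 ft/s
print(c / 0.44704)             # ≈ 767 mph
print(c / (1852 / 3600))       # ≈ 667 knots
print(1000 / c, 1609.344 / c)  # ≈ 2.9 s per kilometre, ≈ 4.7 s per mile
```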
The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.
In common everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: sound travels most slowly in gases; it travels faster in liquids; and faster still in solids. For example, as noted above, sound travels at 343 m/s in air; it travels at about 1,480 m/s in water (4.3 times as fast as in air); and at about 5,120 m/s in iron (about 15 times as fast as in air). In an exceptionally stiff material such as diamond, sound travels at 12,000 metres per second (27,000 mph), about 35 times as fast as in air, which is around the maximum speed that sound will travel under normal conditions.
Sound waves in solids are composed of compression waves (just as in gases and liquids), and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus and density. The speed of shear waves is determined only by the solid material's shear modulus and density.
In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound in the fluid is called the object's Mach number. Objects moving at speeds greater than Mach 1 are said to be traveling at supersonic speeds.
Sir Isaac Newton computed the speed of sound in air as 979 feet per second (298 m/s), which is too low by about 15%. Newton's analysis was good save for neglecting the (then unknown) effect of rapidly fluctuating temperature in a sound wave (in modern terms, sound wave compression and expansion of air is an adiabatic process, not an isothermal process). This error was later rectified by Laplace.
During the 17th century, there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 (1,380 Parisian feet per second), Pierre Gassendi in 1635 (1,473 Parisian feet per second) and Robert Boyle (1,125 Parisian feet per second).
In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second. Derham used a telescope from the tower of the church of St Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half-second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed that the sound had travelled was calculated.
The transmission of sound can be illustrated by using a model consisting of an array of spherical objects interconnected by springs.
In real material terms, the spheres represent the material's molecules and the springs represent the bonds between them. Sound passes through the system by compressing and expanding the springs, transmitting the acoustic energy to neighboring spheres. This helps transmit the energy in turn to the neighboring sphere's springs (bonds), and so on.
The speed of sound through the model depends on the stiffness/rigidity of the springs, and the mass of the spheres. As long as the spacing of the spheres remains constant, stiffer springs/bonds transmit energy quicker, while larger spheres transmit the energy slower.
In a real material, the stiffness of the springs is known as the "elastic modulus", and the mass corresponds to the material density. Given that all other things being equal (ceteris paribus), sound will travel slower in spongy materials, and faster in stiffer ones. Effects like dispersion and reflection can also be understood using this model.
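The scaling in this model can be made concrete: in a one-dimensional chain of masses m joined by springs of stiffness k and spacing a, long-wavelength waves travel at c = a·sqrt(k/m). A minimal sketch with purely illustrative values:

```python
import math

def chain_wave_speed(k: float, m: float, a: float = 1.0) -> float:
    """Long-wavelength wave speed c = a * sqrt(k / m) in a 1D mass-spring chain."""
    return a * math.sqrt(k / m)

base = chain_wave_speed(k=100.0, m=1.0)
print(chain_wave_speed(k=400.0, m=1.0) / base)  # 2.0 -> stiffer springs, faster wave
print(chain_wave_speed(k=100.0, m=4.0) / base)  # 0.5 -> heavier spheres, slower wave
```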
For instance, sound will travel 1.59 times faster in nickel than in bronze, due to the greater stiffness of nickel at about the same density. Similarly, sound travels about 1.41 times faster in light hydrogen (protium) gas than in heavy hydrogen (deuterium) gas, since deuterium has similar properties but twice the density. At the same time, "compression-type" sound will travel faster in solids than in liquids, and faster in liquids than in gases, because the solids are more difficult to compress than liquids, while liquids in turn are more difficult to compress than gases.
Some textbooks mistakenly state that the speed of sound increases with density. This notion is usually illustrated by presenting data for three materials, such as air, water and steel, which also have vastly different compressibilities; it is the compressibility differences that more than make up for the density differences. An illustrative example of the two effects is that sound travels only 4.3 times faster in water than air, despite enormous differences in compressibility of the two media. The reason is that the larger density of water, which works to slow sound in water relative to air, nearly makes up for the compressibility differences in the two media.
A practical example can be observed in Edinburgh when the "One o' Clock Gun" is fired at the eastern end of Edinburgh Castle. Standing at the base of the western end of the Castle Rock, the sound of the Gun can be heard through the rock, slightly before it arrives by the air route, partly delayed by the slightly longer route. It is particularly effective if a multi-gun salute such as for "The Queen's Birthday" is being fired.
In a gas or liquid, sound consists of compression waves. In solids, waves propagate as two different types. A longitudinal wave is associated with compression and decompression in the direction of travel, and is the same process in gases and liquids, with an analogous compression-type wave in solids. Only compression waves are supported in gases and liquids. An additional type of wave, the transverse wave, also called a shear wave, occurs only in solids because only solids support elastic deformations. It is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear-deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations.
These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency. Therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first and rocking transverse waves seconds later.
The speed of a compression wave in a fluid is determined by the medium's compressibility and density. In solids, the compression waves are analogous to those in fluids, depending on compressibility and density, but with the additional factor of shear modulus which affects compression waves due to off-axis elastic energies which are able to influence effective tension and relaxation in a compression. The speed of shear waves, which can occur only in solids, is determined simply by the solid material's shear modulus and density.
The speed of sound in mathematical notation is conventionally represented by c, from the Latin celeritas meaning "velocity".
For fluids in general, the speed of sound c is given by the Newton-Laplace equation:

$$c = \sqrt{\frac{K_s}{\rho}}$$

where
- $K_s$ is a coefficient of stiffness, the isentropic bulk modulus (or the modulus of bulk elasticity for gases), and
- $\rho$ is the density.
Thus the speed of sound increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material and decreases with an increase in density. For ideal gases, the bulk modulus K is simply the gas pressure multiplied by the dimensionless adiabatic index, which is about 1.4 for air under normal conditions of pressure and temperature.
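As a numerical check of the Newton-Laplace relation for air (the pressure and density values below are illustrative sea-level figures):

```python
import math

gamma = 1.4     # dimensionless adiabatic index of air
p = 101_325.0   # pressure in Pa (1 atm)
rho = 1.204     # density of air at 20 °C in kg/m^3

K = gamma * p             # bulk modulus of an ideal gas
c = math.sqrt(K / rho)    # Newton-Laplace equation
print(round(c))           # ≈ 343 m/s
```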
In a non-dispersive medium, the speed of sound is independent of sound frequency, so the speeds of energy transport and sound propagation are the same for all frequencies. Air, a mixture of oxygen and nitrogen, constitutes a non-dispersive medium. However, air does contain a small amount of CO2, which is a dispersive medium, and this causes dispersion in air at ultrasonic frequencies.
In a dispersive medium, the speed of sound is a function of sound frequency, through the dispersion relation. Each frequency component propagates at its own speed, called the phase velocity, while the energy of the disturbance propagates at the group velocity. The same phenomenon occurs with light waves; see optical dispersion for a description.
The speed of sound is variable and depends on the properties of the substance through which the wave is travelling. In solids, the speed of transverse (or shear) waves depends on the shear deformation under shear stress (called the shear modulus), and the density of the medium. Longitudinal (or compression) waves in solids depend on the same two factors with the addition of a dependence on compressibility.
In fluids, only the medium's compressibility and density are the important factors, since fluids do not transmit shear stresses. In heterogeneous fluids, such as a liquid filled with gas bubbles, the density of the liquid and the compressibility of the gas affect the speed of sound in an additive manner, as demonstrated in the hot chocolate effect.
In gases, adiabatic compressibility is directly related to pressure through the heat capacity ratio (adiabatic index), while pressure and density are inversely related to the temperature and molecular weight, thus making only the completely independent properties of temperature and molecular structure important (heat capacity ratio may be determined by temperature and molecular structure, but simple molecular weight is not sufficient to determine it).
In low molecular weight gases such as helium, sound propagates faster as compared to heavier gases such as xenon. For monatomic gases, the speed of sound is about 75% of the mean speed that the atoms move in that gas.
For a given ideal gas the molecular composition is fixed, and thus the speed of sound depends only on its temperature. At a constant temperature, the gas pressure has no effect on the speed of sound, since the density will increase, and since pressure and density (also proportional to pressure) have equal but opposite effects on the speed of sound, and the two contributions cancel out exactly. In a similar way, compression waves in solids depend both on compressibility and density--just as in liquids--but in gases the density contributes to the compressibility in such a way that some part of each attribute factors out, leaving only a dependence on temperature, molecular weight, and heat capacity ratio which can be independently derived from temperature and molecular composition (see derivations below). Thus, for a single given gas (assuming the molecular weight does not change) and over a small temperature range (for which the heat capacity is relatively constant), the speed of sound becomes dependent on only the temperature of the gas.
In the non-ideal gas behavior regime, for which the van der Waals gas equation would be used, the proportionality is not exact, and there is a slight dependence of sound velocity on the gas pressure.
Humidity has a small but measurable effect on the speed of sound (causing it to increase by about 0.1%-0.6%), because oxygen and nitrogen molecules of the air are replaced by lighter molecules of water. This is a simple mixing effect.
In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent solely upon temperature; see Details below. In such an ideal case, the effects of decreased density and decreased pressure of altitude cancel each other out, save for the residual effect of temperature.
Since temperature (and thus the speed of sound) decreases with increasing altitude up to about 11 km, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The decrease of the speed of sound with height is referred to as a negative sound speed gradient.
However, there are variations in this trend above about 11 km. In particular, in the stratosphere above about 20 km, the speed of sound increases with height, due to an increase in temperature from heating within the ozone layer. This produces a positive sound speed gradient in this region. Still another region of positive gradient occurs at very high altitudes, in the aptly-named thermosphere above 90 km.
The approximate speed of sound in dry (0% humidity) air, in metres per second, at temperatures near 0 °C, can be calculated from

$$c_{\text{air}} = (331.3 + 0.606\,\theta)\ \text{m/s}$$

where $\theta$ is the temperature in degrees Celsius (°C).

This equation is derived from the first two terms of the Taylor expansion of the following more accurate equation:

$$c_{\text{air}} = 331.3\ \text{m/s}\,\sqrt{1 + \frac{\theta}{273.15}}$$

Dividing the first part, and multiplying the second part, on the right hand side, by $\sqrt{273.15}$ gives the exactly equivalent form

$$c_{\text{air}} = 20.05\,\sqrt{\theta + 273.15}\ \text{m/s}$$

which can also be written as

$$c_{\text{air}} = 20.05\,\sqrt{T}\ \text{m/s}$$

where T denotes the thermodynamic temperature (in kelvins).

The value of 331.3 m/s, which represents the speed at 0 °C (273.15 K), is based on theoretical (and some measured) values of the heat capacity ratio, γ, as well as on the fact that at 1 atm real air is very well described by the ideal gas approximation. Commonly found values for the speed of sound at 0 °C may vary from 331.2 to 331.6 m/s due to the assumptions made when it is calculated. If ideal gas γ is assumed to be 7/5 = 1.4 exactly, the 0 °C speed is calculated (see section below) to be 331.3 m/s, the coefficient used above.
This equation is correct to a much wider temperature range, but still depends on the approximation of heat capacity ratio being independent of temperature, and for this reason will fail, particularly at higher temperatures. It gives good predictions in relatively dry, cold, low-pressure conditions, such as the Earth's stratosphere. The equation fails at extremely low pressures and short wavelengths, due to dependence on the assumption that the wavelength of the sound in the gas is much longer than the average mean free path between gas molecule collisions. A derivation of these equations will be given in the following section.
A graph comparing the results of the two equations (not reproduced here) uses a slightly different value for the speed of sound at 0 °C.
For an ideal gas, K (the bulk modulus in the equations above, equivalent to C, the coefficient of stiffness in solids) is given by

$$K = \gamma p$$

thus, from the Newton-Laplace equation above, the speed of sound in an ideal gas is given by

$$c = \sqrt{\gamma \frac{p}{\rho}}$$

Using the ideal gas law to replace p with nRT/V, and replacing ρ with nM/V, the equation for an ideal gas becomes

$$c_{\text{ideal}} = \sqrt{\gamma \frac{p}{\rho}} = \sqrt{\frac{\gamma R T}{M}}$$

where
- $\gamma$ is the adiabatic index,
- $R$ is the molar gas constant,
- $T$ is the absolute temperature, and
- $M$ is the molar mass of the gas.
This equation applies only when the sound wave is a small perturbation on the ambient condition, and certain other conditions are fulfilled, as noted below. Calculated values for c_air have been found to vary slightly from experimentally determined values.
Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of $\sqrt{\gamma}$ but was otherwise correct.
Numerical substitution of the above values gives the ideal gas approximation of sound velocity for gases, which is accurate at relatively low gas pressures and densities (for air, this includes standard Earth sea-level conditions). Also, for diatomic gases the use of $\gamma = 7/5$ requires that the gas exists in a temperature range high enough that rotational heat capacity is fully excited (i.e., molecular rotation is fully used as a heat energy "partition" or reservoir); but at the same time the temperature must be low enough that molecular vibrational modes contribute no heat capacity (i.e., insignificant heat goes into vibration, as all vibrational quantum modes above the minimum-energy mode have energies too high to be populated by a significant number of molecules at this temperature). For air, these conditions are fulfilled at room temperature, and also temperatures considerably below room temperature (see tables below). See the section on gases in specific heat capacity for a more complete discussion of this phenomenon.
For air, we introduce the shorthand

$$R_* = R / M_{\text{air}}$$

In addition, we switch to the Celsius temperature $\theta = T - 273.15\ \text{K}$, which is useful to calculate air speed in the region near 0 °C (about 273 K). Then, for dry air,

$$c_{\text{air}} = \sqrt{\gamma R_* T} = \sqrt{\gamma R_* (\theta + 273.15\ \text{K})}$$

where $\theta$ (theta) is the temperature in degrees Celsius (°C).
Substituting numerical values

$$R = 8.3145\ \text{J/(mol·K)}$$

for the molar gas constant, and

$$M_{\text{air}} = 0.028964\ \text{kg/mol}$$

for the mean molar mass of air; and using the ideal diatomic gas value of $\gamma = 1.4000$, we have

$$c_{\text{air}} = 331.3\ \text{m/s}\,\sqrt{1 + \frac{\theta}{273.15\ ^\circ\text{C}}}$$

Finally, Taylor expansion of the remaining square root in $\theta$ yields

$$c_{\text{air}} \approx 331.3\ \text{m/s}\left(1 + \frac{\theta}{546.30\ ^\circ\text{C}}\right) \approx (331.3 + 0.606\,\theta)\ \text{m/s}$$
The above derivation includes the first two equations given in the "Practical formula for dry air" section above.
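A short numerical comparison of the exact ideal-gas expression and its first-order Taylor approximation, using the constants above:

```python
import math

GAMMA, R, M_AIR = 1.4, 8.3145, 0.028964  # -, J/(mol·K), kg/mol

def c_exact(theta: float) -> float:
    """Ideal-gas speed of sound in dry air at theta degrees Celsius."""
    return math.sqrt(GAMMA * R * (theta + 273.15) / M_AIR)

def c_approx(theta: float) -> float:
    """Linear Taylor approximation around 0 °C."""
    return 331.3 + 0.606 * theta

for theta in (-25, 0, 20, 35):
    print(theta, round(c_exact(theta), 1), round(c_approx(theta), 1))
```

The two agree to within a fraction of a metre per second over everyday temperatures, which is why the linear form is quoted as the practical formula.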
The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. Wind shear of 4 m/(s · km) can produce refraction equal to a typical temperature lapse rate of . Higher values of wind gradient will refract sound downward toward the surface in the downwind direction, eliminating the acoustic shadow on the downwind side. This will increase the audibility of sounds downwind. This downwind refraction effect occurs because there is a wind gradient; the sound is not being carried along by the wind.
For sound propagation, the exponential variation of wind speed with height can be defined as follows:
In the 1862 American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle, because they could not hear the sounds of battle only six miles downwind.
In the standard atmosphere:
In fact, assuming an ideal gas, the speed of sound c depends on temperature only, not on the pressure or density (since these change in lockstep for a given temperature and cancel out). Air is almost an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere; actual conditions may vary.
(Table: speed of sound, density of air, and characteristic specific acoustic impedance at various air temperatures.)
Given normal atmospheric conditions, the temperature, and thus speed of sound, varies with altitude:
- Cruising altitude of commercial jets, and first supersonic flight
- 29,000 m (flight of X-43A): 301 m/s, 1,083 km/h, 673 mph, 585 kn
The medium in which a sound wave is travelling does not always respond adiabatically, and as a result, the speed of sound can vary with frequency.
The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the soundwave is considerably longer than the mean free path of molecules in a gas.
The molecular composition of the gas contributes both as the mass (M) of the molecules and as their heat capacities, so both have an influence on the speed of sound. In general, at the same molecular mass, monatomic gases have a slightly higher speed of sound (over 9% higher) because they have a higher γ (5/3) than diatomics do (7/5). Thus, at the same molecular mass, the speed of sound of a monatomic gas goes up by a factor of
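√(γ_mono/γ_di) = √((5/3)/(7/5)) = √(25/21) ≈ 1.091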
This gives the 9% difference, and would be a typical ratio for speeds of sound at room temperature in helium vs. deuterium, each with a molecular weight of 4. Sound travels faster in helium than deuterium because adiabatic compression heats helium more since the helium molecules can store heat energy from compression only in translation, but not rotation. Thus helium molecules (monatomic molecules) travel faster in a sound wave and transmit sound faster. (Sound travels at about 70% of the mean molecular speed in gases; the figure is 75% in monatomic gases and 68% in diatomic gases).
Note that in this example we have assumed that the temperature is low enough that heat capacities are not influenced by molecular vibration (see heat capacity). However, vibrational modes simply cause values of γ to decrease toward 1, since vibration modes in a polyatomic gas give the gas additional ways to store heat which do not affect temperature, and thus do not affect molecular velocity and sound velocity. Thus, the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs. polyatomic molecules, with the speed remaining greater in monatomics.
By far the most important factor influencing the speed of sound in air is temperature. The speed is proportional to the square root of the absolute temperature, giving an increase of about per degree Celsius. For this reason, the pitch of a musical wind instrument increases as its temperature increases.
The speed of sound is raised by humidity but decreased by carbon dioxide. The difference between 0% and 100% humidity is about at standard pressure and temperature, but the size of the humidity effect increases dramatically with temperature. The carbon dioxide content of air is not fixed, due to both carbon pollution and human breath (e.g., in the air blown through wind instruments).
The dependence on frequency and pressure is normally insignificant in practical applications. In dry air, the speed of sound increases by about as the frequency rises from to . For audible frequencies above it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path.
Mach number, a useful quantity in aerodynamics, is the ratio of air speed to the local speed of sound. At altitude, for reasons explained, Mach number is a function of temperature. Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature. The assumption is that a particular pressure represents a particular altitude and, therefore, a standard temperature. Aircraft flight instruments need to operate this way because the stagnation pressure sensed by a Pitot tube is dependent on altitude as well as speed.
A range of different methods exists for measuring the speed of sound in air.
The earliest reasonably accurate estimate of the speed of sound in air was made by William Derham and acknowledged by Isaac Newton. Derham had a telescope at the top of the tower of the Church of St Laurence in Upminster, England. On a calm day, a synchronized pocket watch would be given to an assistant, who would fire a shotgun at a predetermined time from a conspicuous point some miles away, across the countryside. This could be confirmed by telescope. He then measured the interval between seeing the gunsmoke and the arrival of the sound using a half-second pendulum. The distance from where the gun was fired was found by triangulation, and simple division (distance/time) provided the velocity. Lastly, by making many observations over a range of different distances, the inaccuracy of the half-second pendulum could be averaged out, giving his final estimate of the speed of sound. Modern stopwatches enable this method to be used today over distances as short as 200-400 meters, without needing something as loud as a shotgun.
If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured:
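- the distance between the two microphones (x), and
- the time delay t between the sound arriving at the first microphone and at the second.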
Then v = x/t.
Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume. It has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. This is an example of a compact experimental setup.
A tuning fork can be held near the mouth of a long pipe which dips into a barrel of water. In this system, the pipe can be brought to resonance if the length of the air column in the pipe is equal to (1 + 2n)λ/4, where n is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe, it is best to find two or more points of resonance and then measure half a wavelength between these.
Here, v = fλ.
The effect of impurities can be significant when making high-precision measurements. Chemical desiccants can be used to dry the air, but will, in turn, contaminate the sample. The air can be dried cryogenically, but this has the effect of removing the carbon dioxide as well; therefore many high-precision measurements are performed with air free of carbon dioxide rather than with natural air. A 2002 review found that a 1963 measurement by Smith and Harlow using a cylindrical resonator gave "the most probable value of the standard speed of sound to date." The experiment was done with air from which the carbon dioxide had been removed, but the result was then corrected for this effect so as to be applicable to real air. The experiments were done at but corrected for temperature in order to report them at . The result was for dry air at STP, for frequencies from to .
In a solid, there is a non-zero stiffness both for volumetric deformations and shear deformations. Hence, it is possible to generate sound waves with different velocities dependent on the deformation mode. Sound waves generating volumetric deformations (compression) and shear deformations (shearing) are called pressure waves (longitudinal waves) and shear waves (transverse waves), respectively. In earthquakes, the corresponding seismic waves are called P-waves (primary waves) and S-waves (secondary waves), respectively. The sound velocities of these two types of waves propagating in a homogeneous 3-dimensional solid are respectively given by
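c_solid,p = √((K + (4/3)G)/ρ) and c_solid,s = √(G/ρ), where K is the bulk modulus, G is the shear modulus, and ρ is the density.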
The last quantity is not an independent one, as . Note that the speed of pressure waves depends both on the pressure and shear resistance properties of the material, while the speed of shear waves depends on the shear properties only.
Typically, pressure waves travel faster in materials than do shear waves, and in earthquakes this is the reason that the onset of an earthquake is often preceded by a quick upward-downward shock, before arrival of waves that produce a side-to-side motion. For example, for a typical steel alloy, , and , yielding a compressional speed c_solid,p of . This is in reasonable agreement with c_solid,p measured experimentally at for a (possibly different) type of steel. The shear speed c_solid,s is estimated at using the same numbers.
The speed of sound for pressure waves in stiff materials such as metals is sometimes given for "long rods" of the material in question, in which the speed is easier to measure. In rods whose diameter is shorter than a wavelength, the speed of pure pressure waves may be simplified and is given by:
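c_solid,rod = √(E/ρ)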
where E is Young's modulus. This is similar to the expression for shear waves, save that Young's modulus replaces the shear modulus. This speed of sound for pressure waves in long rods will always be slightly less than the same speed in homogeneous 3-dimensional solids, and the ratio of the speeds in the two different types of objects depends on Poisson's ratio for the material.
In a fluid, the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).
Hence the speed of sound in a fluid is given by
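c_fluid = √(K/ρ)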
where K is the bulk modulus of the fluid.
In fresh water, sound travels at about at (see the External Links section below for online calculators). Applications of underwater sound can be found in sonar, acoustic communication and acoustical oceanography.
In salt water that is free of air bubbles or suspended sediment, sound travels at about ( at , 10°C and 3% salinity by one method). The speed of sound in seawater depends on pressure (hence depth), temperature (a change of ~ ), and salinity (a change of 1‰ ~ ), and empirical equations have been derived to calculate the speed of sound accurately from these variables. Other factors affecting the speed of sound are minor. Since in most ocean regions temperature decreases with depth, the profile of the speed of sound with depth decreases to a minimum at a depth of several hundred meters. Below the minimum, sound speed increases again, as the effect of increasing pressure overcomes the effect of decreasing temperature. For more information see Dushaw et al.
A simple empirical equation for the speed of sound in sea water with reasonable accuracy for the world's oceans is due to Mackenzie:
The constants a1, a2, ..., a9 are
(Note: The sound speed vs. depth graph does not correlate directly to the Mackenzie formula, because the temperature and salinity vary at different depths. When T and S are held constant, the formula itself is always increasing with depth.)
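Since the coefficient list was omitted above, here is a sketch in Python assuming the commonly quoted Mackenzie (1981) coefficients; treat the exact values as an assumption to be verified against the original paper:

```python
def mackenzie_sound_speed(T, S, D):
    """Approximate speed of sound in seawater (m/s), after Mackenzie (1981).

    T: temperature in degrees Celsius
    S: salinity in parts per thousand
    D: depth in metres
    """
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * D**3)

print(mackenzie_sound_speed(T=10.0, S=35.0, D=100.0))  # roughly 1491 m/s
```

Note how the depth terms make the computed speed increase with D when T and S are held constant, matching the note above.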
Other equations for the speed of sound in sea water are accurate over a wide range of conditions, but are far more complicated, e.g., that by V. A. Del Grosso and the Chen-Millero-Li Equation.
In a plasma, in contrast to a gas, the pressure and the density are provided by separate species: the pressure by the electrons and the density by the ions. The two are coupled through a fluctuating electric field.
When sound spreads out evenly in all directions in three dimensions, the intensity drops in proportion to the inverse square of the distance. However, in the ocean, there is a layer called the 'deep sound channel' or SOFAR channel which can confine sound waves at a particular depth.
In the SOFAR channel, the speed of sound is lower than that in the layers above and below. Just as light waves will refract towards a region of higher index, sound waves will refract towards a region where their speed is reduced. The result is that sound gets confined in the layer, much the way light can be confined to a sheet of glass or optical fiber. Thus, the sound is confined in essentially two dimensions. In two dimensions the intensity drops in proportion to only the inverse of the distance. This allows waves to travel much further before being undetectably faint.
It may be seen that refraction effects occur only because there is a wind gradient; they are not the result of sound being convected along by the wind.
As wind speed generally increases with altitude, wind blowing towards the listener from the source will refract sound waves downwards, resulting in increased noise levels. |
Radiometry is at the core of practically every rendering algorithm out there. Each pixel in the frame buffer is just a small surface on which light reflected by objects in the scene falls. The sole goal of a rendering algorithm is to compute the amount of light passing through every pixel in the frame buffer. For this reason, I think it is important to have at least a basic understanding of some of the concepts studied by radiometry. I’ve written this article basically as a reference for myself, but hopefully others might find it useful, although this stuff is well covered in any good computer graphics text.
1. Radiometric quantities
1.1 Flux (Φ)
The formal definition of flux, also known as radiant flux or power, is the total energy passing through a surface or region of space per unit time. Imagine that you define a region in space, or a region on a surface (which is nothing more than a region in space). To calculate the flux in that region, you have to “count” the number of photons passing through that region from time t to time t + δt.
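Φ = δQ(t)/δt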
Obviously, Q must be defined as a function of time in order for the idea of differentiating it to make sense. Since Q is energy, defined in joules, and time is defined in seconds, the unit of flux is J/s, often called the watt. Flux can be seen as a function of position and direction. We can measure the flux at a point x by measuring the flux at a small differential area around x. We can also measure the flux generated by photons coming from a certain direction by “counting” the number of photons going through the surface from that direction.
1.2 Irradiance (E) and Radiant excitance (M)
Irradiance (E) is a function which describes the area density of flux arriving at any given point on a surface, and radiant excitance (M) describes the area density of flux leaving a surface. Irradiance will allow us to calculate how much energy is passing through any point on the surface per unit area, and radiant excitance how much energy is leaving any point on the surface per unit area. Basically, irradiance answers the question “How much light arrives at point p?” and radiant excitance “How much light leaves from point p?” In general, flux will not be constant over the surface, which is the reason why irradiance is generally defined as:
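E(x) = δΦ(x)/δA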
This means that irradiance at a point x equals the flux at x divided by the differential area around x. δA needs to be small because the radiant flux distribution over a given surface is generally not constant. We need to consider the smallest region possible, at least small enough that we can assume the incident flux to be constant over that area (i.e., evaluating Φ(x) for any point x in that area will produce the same result).
1.3 Radiant Intensity
Radiant intensity is flux density per solid angle. It describes the directional distribution of light. Radiant flux will be arriving at a surface from many directions; radiant intensity describes how much flux arrives per unit solid angle. It answers the question “How much light is arriving at the surface from any given direction?” If we know how much light passes through the surface per unit solid angle, we can compute how much light is arriving from any cone of directions.
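In symbols, I(ω) = δΦ(ω)/δω.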
1.4 Radiance (L)
Radiance is the flux density per unit projected area, per unit solid angle. It is a combination of the irradiance and radiant intensity concepts. We can think of radiance as irradiance per unit solid angle (or as radiant intensity per unit area). Radiance answers the question “How much light arrives at (or leaves from) point x in direction ω?” It “counts” the number of photons arriving at a unit area of a surface from a unit solid angle. With that measure, we can compute the flux passing through a small patch of a surface coming from a given cone of directions. Radiance is important to us because it is what an optical system perceives; it is an indicator of how bright a surface will appear. This is the value we have to calculate for every pixel on the screen! It is also important because all the other quantities can be derived from it, as we will see later. Radiance is defined as:
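L(x,ω) = δ²Φ(x,ω) / (δω δA cos(ω))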
where δA cos(ω) is the projected area of δA on a hypothetical surface perpendicular to ω. As you can see, we are differentiating flux with respect to both area and direction. Obviously, flux has to be defined as a function of position and direction in order to be differentiable with respect to area and direction. Another way of writing the previous formula is this:
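L(x,ω) = δ²Φ(x,ω) / (δω δA⊥), with δA⊥ = δA cos(ω) the projected differential area.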
Radiance is a function of position and direction, L(x,ω), making it a 5D function, also known as the plenoptic function (three coordinates for the point x and two angles for the direction). The units of radiance are W/(sr·m²).
2. Using radiometric quantities
Say we want to calculate flux passing through a surface, and all we know is how to calculate irradiance for any point in that surface. We know that irradiance ( E ) is defined as:
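E(x) = δΦ(x)/δA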
So, if we integrate over area we get
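∫ E(x) δA = ∫ (δΦ(x)/δA) δA = Φ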
Which means that Φ = ∫ E(x) δA. In other words, if we integrate irradiance over the area, we get the flux passing through that area.
As I said, radiance is an important quantity, not only because it is what our eyes (and camera sensors) perceive, but also because it is possible to derive any other radiometric quantity from it. Let’s try to derive irradiance from radiance. We know that:
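L(x,ω) = δ²Φ(x,ω) / (δω δA cos(ω))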
If we integrate with respect to solid angle on both sides:
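∫ L(x,ω) cos(ω) δω = δΦ(x)/δA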
By definition, irradiance is δΦ / δA, so, finally
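E(x) = ∫ L(x,ω) cos(ω) δω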
This means that irradiance is the integral (an infinite sum) of the radiance coming from all directions over the surface.
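As a quick sanity check (an illustrative addition, not from the original post), numerically integrating a constant radiance L over the hemisphere should give E = πL:

```python
import math

def irradiance_from_constant_radiance(L, n=400):
    """Numerically evaluate E = integral of L*cos(theta) over the hemisphere."""
    E = 0.0
    d_theta = (math.pi / 2) / n
    for i in range(n):                      # theta in [0, pi/2)
        theta = (i + 0.5) * d_theta
        # solid angle element: d_omega = sin(theta) d_theta d_phi;
        # the phi integral contributes a factor of 2*pi
        E += L * math.cos(theta) * math.sin(theta) * d_theta * 2 * math.pi
    return E

print(irradiance_from_constant_radiance(1.0))  # ~3.14159, i.e. pi
```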
Similarly, if we integrate radiance over area, we get the radiant intensity at the surface, that is, the total flux passing through the surface from any given direction.
Go to www.21stCenturyLessons.org for unit sequencing. Lesson objectives: students will be able to describe the formula for the area of any circle. Students will engage in a hands-on activity that allows them to discover that it takes a little more than 3 "radius squares" to cover a circle. The lesson uses a short video to solidify the relationship while maintaining a hands-on experience. The lesson begins with a warm-up which you can build on as the class progresses. Aligned with CCSS: 6.G.1, 6.G.4, 7.G.4
Area of a Circle Day 1 (of 2)
Find the area of right triangles, other triangles, special quadrilaterals, and polygons by composing into rectangles or decomposing into triangles and other shapes; apply these techniques in the context of solving real-world and mathematical problems.
Represent three-dimensional figures using nets made up of rectangles and triangles, and use the nets to find the surface area of these figures. Apply these techniques in the context of solving real-world and mathematical problems.
Know the formulas for the area and circumference of a circle and use them to solve problems; give an informal derivation of the relationship between the circumference and area of a circle.
The atmosphere of Venus contains a gas that on Earth can be attributed to living organisms, scientists said on Monday, a discovery the head of NASA called “the most significant development yet” in the hunt for extraterrestrial life.
Conditions on the Earth’s planetary neighbor are often described as hellish, with daytime temperatures hot enough to melt lead and an atmosphere composed almost entirely of carbon dioxide.
However, a team of experts detected traces of phosphine, a flammable gas that on Earth often occurs from the breakdown of organic matter.
The team used telescopes in Hawaii and Chile’s Atacama Desert to observe Venus’ upper cloud deck, about 60 km above the surface.
Writing in Nature Astronomy, the team said that the presence of phosphine did not prove the presence of life on Venus, but as the clouds swirling about its broiling surface are highly acidic, and therefore destroy phosphine very quickly, the research did show that something was creating it anew.
The researchers conducted several modeling calculations in a bid to explain the new phosphine production.
They concluded that their research provided evidence “for anomalous and unexplained chemistry” on Venus.
Lead author Jane Greaves, from Cardiff University’s School of Physics and Astronomy, said that the presence of phosphine alone was not proof of life on Venus.
“I don’t think we can say that — even if a planet was abundant in phosphorus, it might lack something else important to life — some other element, or conditions might be too hot, too dry,” she said.
Greaves added that it was the first time phosphine had been found on a rocky planet other than Earth.
The breakthrough was hailed by NASA administrator Jim Bridenstine, who tweeted: “It’s time to prioritize Venus.”
“Life on Venus? The discovery of phosphine, a byproduct of anaerobic biology, is the most significant development yet in building the case for life off Earth,” he wrote.
The bulk of current efforts to look for past extraterrestrial life focus on Mars, which is known to have once contained all the necessary ingredients to support carbon-based organisms.
The US and China recently sent rovers to the Red Planet, while the United Arab Emirates sent an atmospheric probe.
Alan Duffy, an astronomer from Swinburne University and lead scientist of The Royal Institution of Australia, said that while it was tempting to believe that the phosphine was produced by lifeforms, “we have to rule out all possible other non-biological means of producing it.”
He called the finding “one of the most exciting signs of the possible presence of life beyond Earth I have ever seen.”
Thomas Zurbuchen, associate administrator of NASA’s Science Mission Directorate, which has conducted several flybys of Venus, called Monday’s research “intriguing.”
“As with an increasing number of planetary bodies, Venus is proving to be an exciting place of discovery, though it had not been a significant part of the search for life,” he tweeted.
He added that the planet was the focus of two of NASA’s next four candidate missions under its Discovery Program, as well as Europe’s proposed EnVision mission, in which NASA is a partner.
An Australian university student who has never visited China and has only a modest social media following would seem an unlikely target for the Chinese government. However, when a Chinese Ministry of Foreign Affairs spokesman personally denounced Drew Pavlou at a news conference, it was just the next phase in an extraordinary campaign against the 21-year-old that has fueled concerns over China’s targeting of critics overseas. Pavlou first placed himself in the superpower’s sights when in July last year he organized a small sit-in at the University of Queensland, where he studies, to protest against various Chinese government policies. Since then, the Global
‘ASKED TO MOVE OUT’: Indonesian coast guard personnel argued with a Chinese vessel over territorial claims after it entered the country’s exclusive economic zone An Indonesian patrol ship confronted a Chinese coast guard vessel that spent almost three days in waters where Indonesia claims economic rights and that are near the southernmost part of China’s disputed claims to the South China Sea. The Indonesian Maritime Security Agency on Friday night detected Chinese ship 5204 entering Indonesia’s exclusive economic zone (EEZ) in what Indonesia calls the North Natuna Sea. The agency sent a patrol ship that closed within 1km of the Chinese coast guard vessel and they communicated to affirm their position and their nation’s claims to the area, Indonesian Maritime Security Agency head Aan Kurnia said. “We
BEFORE WINTER COMES: Snow cuts off roads into Ladakh for four months or more each year, so the crunch is on to get food, tents and high-altitude equipment to Leh From deploying mules to large transport aircraft, the Indian military has activated its entire logistics network to transport supplies to thousands of troops for a harsh winter along a bitterly disputed Himalayan border with China. In the past few months, one of India’s biggest military logistics exercises in years has brought vast quantities of ammunition, equipment, fuel, winter supplies and food into Ladakh, a region bordering Tibet that India administers as a union territory, officials said. The move was triggered by a border standoff with China in the snow deserts of Ladakh that began in May and escalated in June into hand-to-hand
Dark matter, mysterious invisible stuff that makes up most of the mass of galaxies, including the Milky Way, is confounding scientists again, with new observations of distant galaxies conflicting with the current understanding of its nature. Research published this week revealed an unexpected discrepancy between observations of dark matter concentrations in three massive clusters of galaxies encompassing trillions of stars and theoretical computer simulations of how dark matter should be distributed. “Either there is a missing ingredient in the simulations or we have made a fundamental incorrect assumption about the nature of dark matter,” Yale University astrophysicist Priyamvada Natarajan, a coauthor of |
Earth's Orbit Teacher Resources
Find Earth's Orbit educational lesson plans and worksheets
Impact Craters: A Look at the Past
The Galle crater on Mars is also known as the Happy Face crater because of its appearance. First, scholars use pebbles and flour to simulate craters and study their properties. They then apply this knowledge to help decipher the history...
5th - 8th CCSS: Adaptable
Space Shuttle Ascent: Altitude vs. Time
How long did it take to get to that altitude? Using a Google Earth file, groups explore a space shuttle launch. Using a calculator, groups determine the function that models the altitude/time data from an actual launch. With the model in...
9th - 12th CCSS: Adaptable
Effective and Alternative Secondary Education: Planets in the Solar System
This is a complete packet consisting of three lessons about the origin of the solar system, the sun, and the planets. Work in this packet is meant to be self-directed; learners go at their own pace and instructions direct them on how to...
7th - 12th
Exploring the Solar System: All About Spacecraft/Spaceflight
Rarely do you find resources that reach high school astronomy learners. Here is something at their level! The physics of flyby missions is explained via several examples. Landing, penetrating, and roving spacecraft are examined. Diagrams...
9th - 12th |
What is Acceleration Help
Acceleration is an expression of the rate of change in the velocity of an object. This can occur as a change in speed, a change in direction, or both. Acceleration can be defined in one dimension (along a straight line), in two dimensions (within a flat plane), or in three dimensions (in space), just as can velocity. Acceleration sometimes takes place in the same direction as an object’s velocity vector, but this is not necessarily the case.
Acceleration is a Vector
Acceleration, like velocity, is a vector quantity. Sometimes the magnitude of the acceleration vector is called “acceleration,” and is usually symbolized by the lowercase italic letter a. But technically, the vector expression should be used; it is normally symbolized by the lowercase bold letter a.
In our previous example of a car driving along a highway, suppose the speed is constant at 25 m/s. The velocity changes when the car goes around curves, and also if the car crests a hilltop or bottoms out in a ravine or valley (although these can’t be shown in this two-dimensional drawing). If the car is going along a straight path and its speed is increasing, then the acceleration vector points in the same direction the car is traveling. If the car puts on the brakes, still moving along a straight path, then the acceleration vector points exactly opposite the direction of the car’s motion.
Fig. 15-7. Acceleration vectors x, y, and z for a car at three points (X, Y, and Z) along a road. The magnitude of y is 0 because there is no acceleration at point Y.
Acceleration vectors can be graphically illustrated as arrows. Figure 15-7 illustrates acceleration vectors for a car traveling along a level, but curving, road at a constant speed of 25 m/s. Three points are shown, called X, Y, and Z. The corresponding acceleration vectors are x, y, and z. Because the speed is constant and the road is level, acceleration only takes place where the car encounters a bend in the road. At point Y, the road is essentially straight, so the acceleration is zero (y = 0 ). The zero vector is shown as a point at the origin of a vector graph.
How Acceleration is Determined
Acceleration magnitude is expressed in meters per second per second, also called meters per second squared (m/s²). This seems esoteric at first. What does s² mean? Is it a “square second”? What in the world is that? Forget about trying to imagine it in all its abstract perfection. Instead, think of it in terms of a concrete example. Suppose you have a car that can go from a standstill to a speed of 26.8 m/s in 5 seconds. Suppose that the acceleration rate is constant from the moment you first hit the gas pedal until you have attained a speed of 26.8 m/s on a level straightaway. Then you can calculate the acceleration magnitude:
a = (26.8 m/s)/(5 s) = 5.36 m/s²
The expression s² translates, in this context, to “second, every second.” The speed in the above example increases by 5.36 meters per second, every second.
Fig. 15-8. An accelerometer. This measures the magnitude only, and must be properly oriented to provide an accurate reading.
Acceleration magnitude can be measured in terms of force against mass. This force, in turn, can be determined according to the amount of distortion in a spring. The force meter shown in Fig. 15-4 can be adapted to make an acceleration meter, more technically known as an accelerometer, for measuring acceleration magnitude.
Here’s how a spring type accelerometer works. A functional diagram is shown in Fig. 15-8. Before the accelerometer can be used, it is calibrated in a lab. For the accelerometer to work, the direction of the acceleration vector must be in line with the spring axis, and the acceleration vector must point outward from the fixed anchor toward the mass. This produces a force on the mass. The force is a vector that points directly against the spring, exactly opposite the acceleration vector.
A common weight scale can be used to indirectly measure acceleration. When you stand on the scale, you compress a spring or balance a set of masses on a lever. This measures the downward force that the mass of your body exerts as a result of a phenomenon called the acceleration of gravity. The effect of gravitation on a mass is the same as that of an upward acceleration of approximately 9.8 m/s². Force, mass, and acceleration are interrelated as follows:
F = m a
That is, force is the product of mass and acceleration. This formula is so important that it’s worth remembering, even if you aren’t a scientist. It quantifies and explains a lot of things in the real world, such as why it takes a fully loaded semi truck so much longer to get up to highway speed than the same truck when it’s empty, or why, if you drive around a slippery curve too fast, you risk sliding off the road.
Suppose an object starts from a dead stop and accelerates at an average magnitude of a_avg in a straight line for a period of time t. Suppose after this length of time, the distance from the starting point is d. Then this formula applies:
d = a_avg t²/2
In the above example, suppose the acceleration magnitude is constant; call it a. Let the instantaneous speed be called v_inst at time t. Then the instantaneous speed is related to the acceleration magnitude as follows:
v_inst = at
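A minimal Python sketch (an illustrative addition, reusing the car example above) ties these three formulas together:

```python
v_final = 26.8           # m/s, speed reached from a standstill
t_total = 5.0            # s, time taken

a = v_final / t_total            # acceleration magnitude, m/s^2
d = a * t_total**2 / 2           # distance covered from a dead stop, m
v_inst = a * 2.5                 # instantaneous speed at t = 2.5 s, m/s

print(f"a      = {a:.2f} m/s^2")   # 5.36 m/s^2, as in the text
print(f"d      = {d:.1f} m")       # 67.0 m after 5 s
print(f"v(2.5) = {v_inst:.1f} m/s")
```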
7.RP.A.2 Recognize and represent proportional relationships between quantities. Decide whether two quantities are in a proportional relationship (e.g., by testing for equivalent ratios in a table or graphing on a coordinate plane and observing whether the graph is a straight line through the origin). Identify unit rate (also known as the constant of proportionality) in tables, graphs, equations, diagrams, and verbal descriptions of proportional relationships. Represent proportional relationships by equations (e.g., If total cost t is proportional to the number n of items purchased at a constant price p, the relationship between the total cost and the number of items can be expressed as t = pn). Explain what a point (x, y) on the graph of a proportional relationship means in terms of the situation, with special attention to the points (0, 0) and (1, r) where r is the unit rate.
7.NS.A Apply and extend previous understandings of operations with fractions
7.NS.A.1 Apply and extend previous understandings of addition and subtraction to add and subtract rational numbers; represent addition and subtraction on a horizontal or vertical number line diagram. Describe situations in which opposite quantities combine to make 0 and show that a number and its opposite have a sum of 0 (additive inverses) (e.g., A hydrogen atom has 0 charge because its two constituents are oppositely charged.). Understand p + q as a number where p is the starting point and q represents a distance from p in the positive or negative direction depending on whether q is positive or negative. Interpret sums of rational numbers by describing real-world contexts (e.g., 3 + 2 means beginning at 3, move 2 units to the right and end at the sum of 5. 3 + (-2) means beginning at 3, move 2 units to the left and end at the sum of 1. 70 + (-30) = 40 could mean after earning $70, $30 was spent on a new video game, leaving a balance of $40.). Understand subtraction of rational numbers as adding the additive inverse, p - q = p + (-q). Show that the distance between two rational numbers on the number line is the absolute value of their difference and apply this principle in real-world contexts. (e.g., The distance between -5 and 6 is 11. -5 and 6 are 11 units apart on the number line.) Fluently add and subtract rational numbers by applying properties of operations as strategies.
7.NS.A.2 Apply and extend previous understandings of multiplication and division and of fractions to multiply and divide rational numbers. Understand that multiplication is extended from fractions to all rational numbers by requiring that operations continue to satisfy the properties of operations, particularly the distributive property, and the rules for multiplying signed numbers. Interpret products of rational numbers by describing real-world contexts. Understand that integers can be divided, provided that the divisor is not zero, and every quotient of integers (with non-zero divisor) is a rational number (e.g., If p and q are integers, then -(p/q) = (-p)/q = p/(-q).). Interpret quotients of rational numbers by describing real-world contexts. Fluently multiply and divide rational numbers by applying properties of operations as strategies. Convert a fraction to a decimal using long division. Know that the decimal form of a fraction terminates in 0s or eventually repeats.
7.EE.B Solve real-life and mathematical problems using numerical and algebraic expressions and equations
7.EE.B.3 Solve multi-step, real-life, and mathematical problems posed with positive and negative rational numbers in any form using tools strategically. Apply properties of operations to calculate with numbers in any form (e.g., -(1/4)(n-4)). Convert between forms as appropriate (e.g., If a woman making $25 an hour gets a 10% raise, she will make an additional 1/10 of her salary an hour, or $2.50, for a new salary of $27.50.). Assess the reasonableness of answers using mental computation and estimation strategies (e.g., If you want to place a towel bar 9 3/4 inches long in the center of a door that is 27 1/2 inches wide, you will need to place the bar about 9 inches from each edge; this estimate can be used as a check on the exact computation.).
7.EE.B.4 Use variables to represent quantities in a real-world or mathematical problem. Construct simple equations and inequalities to solve problems by reasoning about the quantities. Solve word problems leading to equations of these forms px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers. Solve equations of these forms fluently. Write an algebraic solution identifying the sequence of the operations used to mirror the arithmetic solution (e.g., The perimeter of a rectangle is 54 cm. Its length is 6 cm. What is its width? Subtract 2*6 from 54 and divide by 2; (2*6) + 2w = 54). Solve word problems leading to inequalities of the form px + q > r or px + q < r, where p, q, and r are specific rational numbers. Graph the solution set of the inequality and interpret it in the context of the problem (e.g., As a salesperson, you are paid $50 per week plus $3 per sale. This week you want your pay to be at least $100. Write an inequality for the number of sales you need to make, and describe the solutions.).
7.G.A.2 Draw (freehand, with ruler and protractor, and with technology) geometric shapes with given conditions. Given three measures of angles or sides of a triangle, notice when the conditions determine a unique triangle, more than one triangle, or no triangle. Differentiate between regular and irregular polygons.
7.G.B.6 Solve real-world and mathematical problems involving area of two-dimensional objects and volume and surface area of three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms.
7.SP.A Use random sampling to draw inferences about a population
7.SP.A.1 Understand that: statistics can be used to gain information about a population by examining a sample of the population. Generalizations about a population from a sample are valid only if the sample is representative of that population. Random sampling tends to produce representative samples and support valid inferences.
7.SP.A.2 Use data from a random sample to draw inferences about a population with a specific characteristic. Generate multiple samples (or simulated samples) of the same size to gauge the variation in estimates or predictions.
7.SP.B Draw informal comparative inferences about two populations
7.SP.B.3 Draw conclusions about the degree of visual overlap of two numerical data distributions with similar variability, expressing the difference between the centers (such as the mean or median) as a multiple of a measure of variability (such as the interquartile range or mean absolute deviation).
7.SP.B.4 Draw informal comparative inferences about two populations using measures of center and measures of variability for numerical data from random samples.
7.SP.C Investigate chance processes and develop, use, and evaluate probability models
7.SP.C.5 Understand that the probability of a chance event is a number between 0 and 1 that expresses the likelihood of the event occurring. A probability near 0 indicates an unlikely event, a probability around 1/2 indicates an event that is neither unlikely nor likely, and a probability near 1 indicates a likely event.
7.SP.C.7 Develop a probability model and use it to find probabilities of events. Compare probabilities from a model to observed frequencies; if the agreement is not good, explain possible sources of the discrepancy. Develop a uniform probability model, assigning equal probability to all outcomes, and use the model to determine probabilities of events (e.g., If a student is selected at random from a class of 6 girls and 4 boys, the probability that Jane will be selected is .10 and the probability that a girl will be selected is .60.). Develop a probability model, which may not be uniform, by observing frequencies in data generated from a chance process (e.g., Find the approximate probability that a spinning penny will land heads up or that a tossed paper cup will land open-end down. Do the outcomes for the spinning penny appear to be equally likely based on the observed frequencies?).
7.SP.C.8 Find probabilities of compound events using organized lists, tables, tree diagrams, and simulation. Understand that, just as with simple events, the probability of a compound event is the fraction of outcomes in the sample space for which the compound event occurs. Represent sample spaces for compound events using methods such as organized lists, tables and tree diagrams. Identify the outcomes in the sample space which compose the event. Generate frequencies for compound events using a simulation. (e.g., What is the frequency of pulling a red card from a deck of cards and rolling a 5 on a die?). |
What is a black hole? Do they really exist? How do they form? How are they related
to stars? What would happen if you fell into one? How do you see a black hole if it
emits no light? What’s the difference between a black hole and a really dark star?
Could a particle accelerator create a black hole? Can a black hole also be a worm
hole or a time machine?
In Astro 101: Black Holes, you will explore the concepts behind black holes. Using the theme of black holes, you will learn the basic ideas of astronomy, relativity, and quantum physics.
After completing this course, you will be able to:
• Describe the essential properties of black holes.
• Explain recent black hole research using plain language and appropriate analogies.
• Compare black holes in popular culture to modern physics to distinguish science fact from science fiction.
• Describe the application of fundamental physical concepts including gravity, special and general relativity, and quantum mechanics to reported scientific observations.
• Recognize different types of stars and distinguish which stars can potentially become black holes.
• Differentiate types of black holes and classify each type as observed or theoretical.
• Characterize formation theories associated with each type of black hole.
• Identify different ways of detecting black holes, and appropriate technologies associated with each detection method.
• Summarize the puzzles facing black hole researchers in modern science.
Introduction to Black Holes
-Hello and welcome to the first module of Astro 101! In this module, you will become familiar with the basic structure of a black hole, learn the terminology used to describe them, and explore the history of black hole physics.
Life and Death of a Star
-Stars are the progenitors of black holes. In this module the student will learn about the lifecycle of stars, how stars produce energy, and how they radiate energy away. We will explore the death of stars and what their deaths produce, on all scales: from the building blocks of life (carbon) to black holes.
The Structure of Spacetime
-What happens if you travel close to the speed of light? What happens to the passage of time as you fall towards a black hole? This module will explore relativity. We look at the many ways black holes affect the universe around them from discussions of reference frames through to the change in the passage of time as you approach a black hole.
Sizing Up Black Holes
-So far, discussion has focused either on the general case for black holes or on the stellar-mass variety (the endpoint of a star's life). In this module students will explore the various sizes of black holes and their measurable properties. Students will learn that there are four major types of astrophysical black holes (primordial/mini black holes, stellar-mass, intermediate-mass, and supermassive black holes), and discover current theories on their formation and what might feed them. Students will also gain an understanding of the ‘no-hair’ theorem and gravitational lensing. We will also explore the formation of supermassive black holes, intermediate-mass black holes, and mini black holes in particle accelerators.
Approaching a Black Hole
-What would you see as you approached a black hole? Using a black hole binary as a vehicle to explore black holes, students will follow material as it is transferred from a companion star to a black hole via Roche lobe overflow or wind-fed accretion. They will then follow that material down through the accretion disc and explore tidal forces to learn about the ways in which black holes can rip apart surrounding material. This material will then pass through the innermost stable orbit of the disc before falling in. Students will also get the opportunity to look at jets - the outflow of material from the innermost regions of this structure.
Module Objective: Introduce properties of black holes from the outside in, through the context of a journey into the event horizon of a black hole. What would we see as we are far away? What will we see and experience as we get closer? What is a disc? What is a jet?
Crossing the Event Horizon
-Module Description: What would happen if you fell into a black hole? In this module students continue on their journey through a black hole binary system, from the innermost stable orbit of the accretion disc to the singularity itself. Students will learn about the structure of a basic black hole, as well as rotating black holes. Students will explore the concept of wormholes and singularities. Module Objectives: Students will learn about the innermost region around a black hole, about its lack of surface and about the presence and definition of an event horizon. Students will also explore the impact that spin can have on this region, and how it is measured. Finally they will look inside the event horizon to discover the basic concepts of singularities and wormholes.
Inside a Black Hole
-What is in a black hole? This module will start to explore the theoretical side of black hole physics. You will receive a basic introduction to relevant topics of Quantum Mechanics and thermodynamics with the aim of understanding current black hole debates among the giants of the field.
Hunting for Black Holes
-If black holes absorb all light, how do we see them? In this module, you will explore how astronomers observe real black holes, from studies of accretion discs and jets to the study of material orbiting a black hole.
Our Eyes in the Skies
-Black holes change over time. This module will focus on how and why black holes change as well as how we look for these changes.
Riding the Gravity Wave
-How do you study a black hole that has no visible companion? In this module the student will be introduced to gravitational radiation. With the 2016 LIGO discovery of gravitational waves, a whole new branch of astronomy has been opened. |
I cut this square into two different shapes. What can you say about the relationship between them?
Use the information on these cards to draw the shape that is being described.
This activity investigates how you might make squares and pentominoes from Polydron.
Can you draw a square in which the perimeter is numerically equal to the area?
If I use 12 green tiles to represent my lawn, how many different ways could I arrange them? How many border tiles would I need each time?
What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters.
A thoughtful shepherd used bales of straw to protect the area around his lambs. Explore how you can arrange the bales.
Are these statements always true, sometimes true or never true?
My local DIY shop calculates the price of its windows according to the area of glass and the length of frame used. Can you work out how they arrived at these prices?
Measure problems for inquiring primary learners.
How can you change the area of a shape but keep its perimeter the same? How can you change the perimeter but keep the area the same?
In this game for two players, you throw two dice and find the product. How many shapes can you draw on the grid which have that area or perimeter?
Measure problems for primary learners to work on with others.
Can you deduce the perimeters of the shapes from the information given?
Measure problems at primary level that may require resilience.
Measure problems at primary level that require careful consideration.
Polygons drawn on square dotty paper have dots on their perimeter (p) and often internal (i) ones as well. Find a relationship between p, i and the area of the polygons.
Make some loops out of regular hexagons. What rules can you discover?
Sally and Ben were drawing shapes in chalk on the school playground. Can you work out what shapes each of them drew using the clues?
Can you find rectangles where the value of the area is the same as the value of the perimeter?
If you move the tiles around, can you make squares with different coloured edges?
How have "Warmsnug" arrived at the prices shown on their windows? Which window has been given an incorrect price?
Can you predict, without drawing, what the perimeter of the next shape in this pattern will be if we continue drawing them in the same way?
Create some shapes by combining two or more rectangles. What can you say about the areas and perimeters of the shapes you can make?
Points A, B and C are the centres of three circles, each one of which touches the other two. Prove that the perimeter of the triangle ABC is equal to the diameter of the largest circle.
I'm thinking of a rectangle with an area of 24. What could its perimeter be?
Draw a square. A second square of the same size slides around the first always maintaining contact and keeping the same orientation. How far does the dot travel? |
Presentation on theme: "PHYS 218 sec. 517-520 Review Chap. 2 Motion along a straight line."— Presentation transcript:
PHYS 218 sec. 517-520 Review Chap. 2 Motion along a straight line
velocity Average velocity You can choose the origin, where x = 0, and the (+)-ve x-direction for convenience. Once you fix them, keep this convention. Instantaneous velocity = velocity Velocity at any specific instant of time or specific point along the path Definition of the derivative
acceleration Velocity: the rate of change of position with time Acceleration: the rate of change of velocity with time Average acceleration (Instantaneous) acceleration acceleration on v-t graph Acceleration is to velocity as velocity is to position. Therefore, in v-t graph, the slope of tangent line of v(t) at a given t is the acceleration at that time.
acceleration on x-t graph Curvature upward Curvature downward c is the second derivative of the curve at t=0. Thus from the x-t graph you can know the acceleration qualitatively even though you cannot know its magnitude. In an x-t graph, the slope of the curve gives the velocity, while the curvature gives the sign of the acceleration. a<0 region a>0 region v=0 since dx/dt = 0
Velocity and position by integration You can also obtain v(t) and x(t) when a(t) is known/given. differentiation ↔ integration First obtain v(t) from a(t), then obtain x(t) from v(t). You can set t₀ = 0
Constant acceleration If a=constant, you can easily calculate the integrals. When a = constant This gives the familiar expressions for constant acceleration.
Some relations can be obtained for 1-dim. motion with a constant acceleration. Here we eliminate t! This gives a relation between v, a and x. Here we eliminate a! This gives a relation between v, t and x. Do not try to memorize these formulas. If you understand the equation of motion, these relations follow in a natural way.
Freely falling objects Typical example of 1-dim. motion Choose the upward as the (+)-ve y-direction This is a convention. You can make another choice. Choose the origin initial velocity acceleration due to gravity magnitude: g = 9.8 m/s² direction: downward ground always true
Freely falling objects (2) Maximum height h Time when it hits the ground Since you know the time t_H, you can know its velocity at t_H.
Tips Can you obtain v(t) and x(t) if a(t) is given? To do this, you should be familiar with differentiation and integration. Find the proper mathematical equation to describe the motion, i.e. formulate the situation. Here are some examples. What equation describes the maximum height? The ball hits the ground, how do you describe this situation in a mathematical formula? Give your answer in terms of the given information such as v 0, H, g, etc. If you have to give numerical answers, be careful with the unit. |
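As a sketch of the "velocity and position by integration" idea (an illustrative addition, assuming SymPy is available), the free-fall results above can be derived symbolically:

```python
import sympy as sp

t, tau = sp.symbols('t tau', nonnegative=True)
g, v0 = sp.symbols('g v0', positive=True)

a = -g                                         # constant acceleration, upward (+)-ve y
v = v0 + sp.integrate(a, (tau, 0, t))          # v(t) = v0 - g*t
y = sp.integrate(v.subs(t, tau), (tau, 0, t))  # y(t) = v0*t - g*t**2/2

t_top = sp.solve(sp.Eq(v, 0), t)[0]            # v = 0 at the top: t = v0/g
h_max = sp.simplify(y.subs(t, t_top))          # maximum height: v0**2/(2*g)
print(t_top, h_max)
```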
Geometry - Measurement
Learners review the procedure for determining appropriate types of measurements for given situations and measurement conversions. They figure perimeter, area, and volume of 2- and 3-dimensional objects.
See similar resources:
Measuring Angles: Lesson (Lesson Planet)
Develop a degree of measurement. With the aid of a protractor, the video shows how to measure angles. The segment of a larger playlist on geometry basics points out that it is possible to start at a value other than zero when measuring...
5 mins 4th - 6th Math CCSS: Adaptable
Measuring Distances: Lesson (Lesson Planet)
Let's rule the coordinate plane. By reviewing the marking on rulers, the video discusses how to determine the measurement between two points. The resource, a portion of a basic geometry playlist, determines the length of a side of a...
4 mins 5th - 8th Math CCSS: Adaptable
Indirect Measurement: Lesson (Lesson Planet)
When the tape measure will not reach, use similarity. An installment of a large playlist on geometry introduces the method of indirect measurement. Using a story of finding the distance to the sun, the video shows how to use similar...
4 mins 7th - 11th Math CCSS: Adaptable
Introduction to Angle Measure (Lesson Planet)
An all-encompassing package provides video clips that demonstrate real-world activities that have to do with angles. After watching the Cyberchase cartoons, learners discuss why a "v" shape is used to measure a turn. A pair of vital...
4th - 6th Math CCSS: Adaptable
How Do You Solve a Problem Using Indirect Measurement with Shadows? (Lesson Planet)
Using indirect measurement, this video shows viewers how to find the height of a flagpole. The lecturer employs drawings as well as proportions to solve this problem, explaining each step. A useful resource to aid struggling learners and...
4 mins 6th - 10th Math
Setting the Stage With Geometry (Lesson Planet)
Get your class thinking about geometry in their own bedrooms! Solving geometry challenges, individuals match words and definitions, find perimeter and area in their homes (rectangular, circular, and triangular), find circumference and...
5th - 7th Math CCSS: Adaptable |
In geometry, a cross-section is the intersection of a figure in 2-dimensional space with a line, or of a body in 3-dimensional space with a plane, etc. More plainly, when cutting an object into slices one gets many parallel cross-sections.
Cavalieri's principle states that solids with corresponding cross-sections of equal areas have equal volumes.
The cross-sectional area (A′) of an object when viewed from a particular angle is the total area of the orthographic projection of the object from that angle. For example, a cylinder of height h and radius r has A′ = πr² when viewed along its central axis, and A′ = 2rh (the area of the projected rectangle) when viewed from an orthogonal direction. A sphere of radius r has A′ = πr² when viewed from any angle. More generically, A′ can be calculated by evaluating the following surface integral:
$A' = \iint_{\text{top}} \hat{v} \cdot \hat{n} \, dA$, where $\hat{v}$ is a unit vector pointing along the viewing direction toward the viewer, $\hat{n}\,dA$ is a surface element with outward-pointing normal, and the integral is taken only over the top-most surface, that part of the surface that is "visible" from the perspective of the viewer. For a convex body, each ray through the object from the viewer's perspective crosses just two surfaces. For such objects, the integral may be taken over the entire surface (A) by taking the absolute value of the integrand (so that the "top" and "bottom" of the object do not subtract away, as would be required by the Divergence Theorem applied to the constant vector field $\hat{v}$) and dividing by two: $A' = \frac{1}{2} \oint |\hat{v} \cdot \hat{n}| \, dA$.
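As a rough numerical check of the convex-body formula (assuming the reconstruction above), this Python sketch estimates the integral for a sphere by Monte Carlo sampling over its surface and compares the result with πr²; the sample count and viewing direction are arbitrary illustrative choices:

```python
import numpy as np

def projected_area_sphere(r, n_samples=200_000, seed=0):
    """Estimate A' = (1/2) * integral of |v_hat . n_hat| dA for a sphere of radius r,
    with the viewing direction v_hat = +z."""
    rng = np.random.default_rng(seed)
    # Normalized Gaussian samples are uniform on the unit sphere;
    # for a sphere they are also the outward unit normals n_hat.
    xyz = rng.normal(size=(n_samples, 3))
    n_hat = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)
    total_area = 4 * np.pi * r**2
    integrand = np.abs(n_hat[:, 2])      # |v_hat . n_hat| with v_hat = (0, 0, 1)
    return 0.5 * total_area * integrand.mean()

r = 2.0
print(projected_area_sphere(r), np.pi * r**2)  # both are approximately 12.566
```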
[Figure: a solid shown with a cross-section in yellow.]
A cross section is what one gets if one cuts an object into slices. |
What is a Function? The Difference between Functions and Relations 03:51 minutes
Transcript: What is a Function? The Difference between Functions and Relations
Herman the German is at the end of his vacation in Japan, but he forgot to buy souvenirs! He decides to get a souvenir from one of the vending machines on one of the many touristy streets. Herman walks up to a pair of vending machines that look similar, but not exactly the same. The vending machines sell the same items and the keypads are the same.
Herman remembers that f(x) is math code for function notation and that the mark on the other vending machine is called a relation mapping diagram. What do functions and relations have to do with vending machines? Herman decides to go to the relation vending machine. He does feel lucky. Herman puts in 100 ¥. He chooses the Lap Pillow and enters E3 in the keypad.
What's happening? Why is the vending machine giving him a Noodle Eating Guard that's also labeled E3? Upon closer inspection, Herman notices that there's something curious about this particular vending machine: there are several items that are labeled E3, and there are also a few items labeled I3.
And look at that! S7 is the only item with that label. Herman decides to get a Rocketcroc Toaster. Again, he puts in money and enters in S7. Perfect! Herman decides to give the other items another try. After all, he does feel lucky! Once again, Herman puts in 100 ¥, chooses an item and enters the number in the keypad. This time, he chooses B3 since there are only two items labeled B3. Herman gets the square watermelon. Nice, but he wanted a Mommagotcha. So he tries again.
This time, he gets the Mommagotcha! But wait he didn't do anything differently, but got two different items. Herman thinks back to math class and remembers his teacher telling him that relations were when each element in the domain is related to one or more items in the range. When he enters in the code for an item, any one of the items with the same label could come out. With relations, an element in the domain of inputs can be related to one or more items in the range of outputs. Enough of this nonsense. Herman is pressed for time and he can't hope for a cool souvenir.
Herman decides to use the vending machine labeled with the function notation. Surely this will act like it should. Herman remembers that the function notation version of y = x is f(x) = x. And, although the name of this function is 'f', some other common letters used in function notation are 'g' or 'h', these would be read 'g' of 'x' and 'h' of 'x', respectively.
But no matter how a function is written, it has three main parts. First there is an input, 'x', that is chosen out of a set of starting points called the domain. Then, the function changes each input into a unique output, f(x), the artist formerly known as 'y'. The outputs create a set called the range.
Herman's sure he can get what he wants. He has his eye on AR2, which is the selfie stick. This'll make the perfect gift for his girlfriend! You've gotta be kidding! The item's not coming out!
Herman's got an idea... Well, that didn't work. What's this? Herman catches a glimpse of a tool machine... Maybe... just maybe... NO... no... this is definitely worse.
What is a Function? The Difference between Functions and Relations Exercise
Would you like to apply what you have learned? You can review and practice it with the exercises for the video What is a Function? The Difference between Functions and Relations.
Describe Herman's problems with the vending machine.
Every function is a relation, but not every relation is a function.
A function is a special kind of relation where for every input $x$ there is at most one output $y$.
He puts 100 yen in the machine and enters B3 for the Mommagotcha. But he gets a square watermelon instead.
That's because there are several items with the same label, like the Mommagotcha and the square watermelon, both labeled B3. There is also a label with only one item: S7, the Rocketcroc Toaster.
With relations, each element in the domain is related to one or more items in the range.
Find three main facts about functions.
Each element of the left set is assigned to one element of the right set.
The following is an example of a function:
you $\rightarrow$ your age.
Keep in mind that not every relation is a function. For example:
you $\rightarrow$ the names of all your friends.
There are three main facts about functions:
- There is a set of all input values $x$, called the domain.
- A function $f(x)$ changes an input value $x$ into a unique output value $y$.
- The set of all output values $y$ is called the range.
Explain the difference between functions and relations.
The following relation is a function:
you $\rightarrow$ number of brothers and sisters you have
The following relation is not a function:
you $\rightarrow$ the names of your brothers and sisters
Every function is a relation, but not the other way round.
Both functions and relations have a set of inputs, called the domain, and a set of outputs, called the range.
For any relation, for each element of the domain you can have one or more elements in the range.
For a function, for each element of the domain you can have at most one element in the range.
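The distinction is easy to check mechanically. In this small Python sketch (an illustration only; the vending-machine labels are borrowed from the transcript above), a relation given as (input, output) pairs is a function exactly when no input appears with two different outputs:

```python
def is_function(pairs):
    """A relation (a set of (input, output) pairs) is a function
    when each input appears with at most one output."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False  # the same input maps to two different outputs
        seen[x] = y
    return True

# The 'relation' vending machine: label B3 yields two different items.
relation = [("B3", "square watermelon"), ("B3", "Mommagotcha"), ("S7", "Rocketcroc Toaster")]
function = [("B3", "square watermelon"), ("S7", "Rocketcroc Toaster")]
print(is_function(relation))  # False
print(is_function(function))  # True
```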
Determine if the assignment is a function or relation.
A person has a unique hair color but different people could have the same hair color.
For a function the assignment must be unique.
Here is an example for the difference between a relation and a function:
Paul $\rightarrow$ date of his birthday
is unique and thus a function.
Turning this assignment around, we get
date $\rightarrow$ the person who was born on this date
which isn't unique at all.
Peter, Paul, and Mary drink just one soda each, but two of them could drink the same kind of soda.
The difference between a relation and a function is the uniqueness of the assignment.
Each person has a unique hair color, his or her own, so this assignment is a function. But the other way round you can surely find more than one person with the same hair color. Thus that direction is a relation.
Social security card
Each person has a unique social security number. So this is a function.
Paul has the email addresses firstname.lastname@example.org, email@example.com, and firstname.lastname@example.org. So three email addresses are assigned to Paul. This is a relation.
Each of the three drinks just one soda, so that's a function. Since two or three of them could order the same soda, the other direction is just a relation.
Decide which mapping diagrams represent a function.
Keep the definition of a function in mind: for every element of the domain $x$ there exists at most one element in the range $y$ which is assigned to $x$.
You can imagine the definition of a function as follows: for each element in the domain there is at most one arrow.
If every $x$ in the domain is assigned to the same $y$, then the function is called a constant function.
Here is an example of a function:
The kinds of diagrams we are looking at are called mapping diagrams. On the left of each picture we have the domain, the set of inputs, and on the right we have the range, the set of outputs.
If all inputs $x$ are assigned to at most one output $y$ then the mapping diagram in question is that of a function.
For any mapping diagram of a function, you see that only one arrow starts at any element of the domain. It doesn't matter how many arrows lead to an element in the range.
Thus, from left to right, we have a relation, a function, a function, a function, and a relation.
Identify which statements are describing a function.
For a function, we must have that each input $x$ is assigned at most one output $y$.
Remember the important facts about the town given above.
For each house in town the address is uniquely assigned. Because of this, we can then view the assignment of addresses to houses as a function! This is how the postman knows where to deliver the mail.
Specifically, we have the function:
house $\rightarrow$ address,
where the address includes the street name, the house number, and the zip code.
If you leave out any of the three parts of the address, we no longer have a function, as the address is no longer unique. We know this from the given facts about the town.
- If we leave out the street name, then we know there exists more than one house with the number $30$ in town with the zip code 12345.
- If we leave out the house number, then we know there exists more than one house on a Beagle Street in the town with the zip code 12345.
- If we leave out the zip code, then we know there exists more than one house in town on Beagle Street with the house number 30. |
In statistics, a stemplot (or stem-and-leaf plot) is a graphical display of quantitative data that is similar to a histogram and is useful in visualizing the shape of a distribution. They are generally associated with the Exploratory Data Analysis (EDA) ideas of John Tukey and the course Statistics in Society (NDST242) of the Open University, although in fact Arthur Bowley did something very similar in the early 1900s. Unlike histograms, stemplots:
- retain the original data (at least the most important digits)
- put the data in order, thereby easing the move to order-based inference and non-parametric statistics.
A basic stemplot contains two columns separated by a vertical line. The left column contains the stems and the right column contains the leaves.
Constructing a stemplot To construct a stemplot, the observations must first be sorted in ascending order. Here is the sorted set of data values that will be used in the example: 54 56 57 59 63 64 66 68 68 72 72 75 76 81 84 88 106 Next, it must be determined what the stems will represent and what the leaves will represent. Typically, the leaf contains the last digit of the number and the stem contains all of the other digits. In the case of very large or very small numbers, the data values may be rounded to a particular place value (such as the hundreds place) that will be used for the leaves. The remaining digits to the left of the rounded place value are used as the stems.
In this example, the leaf represents the ones place and the stem will represent the rest of the number (tens place and higher). The stemplot is drawn with two columns separated by a vertical line. The stems are listed to the left of the vertical line. It is important that each stem is listed only once and that no numbers are skipped, even if it means that some stems have no leaves. The leaves are listed in increasing order in a row to the right of each stem.
 5 | 4 6 7 9
 6 | 3 4 6 8 8
 7 | 2 2 5 6
 8 | 1 4 8
 9 |
10 | 6
key: 5|4 = 54; leaf unit: 1.0; stem unit: 10.0
For negative numbers, a negative sign is placed in front of the stem, which is still the value x/10. Non-integers are rounded. This allows the stem-and-leaf plot to retain its shape, even for more complicated data sets, as in the example below:
-2 | 4
-1 | 2
-0 | 3
 0 | 4 6 6
 1 | 7
 2 | 5
 3 |
 4 |
 5 | 7
Which represents the set of data:
-23.678758, -12.45, -3.4, 4.43, 5.5, 5.678, 16.87, 24.7, 56.8 |
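A stemplot like the ones above can be produced mechanically. The following Python sketch (the function name is illustrative, and it assumes non-negative data, since the "-0" stem convention needs extra care) rounds each value to the leaf unit, splits it into stem and leaf, and prints every stem without skipping:

```python
from collections import defaultdict

def stemplot(data, leaf_unit=1.0):
    """Print a basic stemplot for non-negative data:
    leaf = last digit (in leaf units), stem = the remaining digits."""
    scaled = [round(x / leaf_unit) for x in data]
    stems = defaultdict(list)
    for s in scaled:
        stems[s // 10].append(s % 10)
    for stem in range(min(stems), max(stems) + 1):   # list every stem, skip none
        leaves = " ".join(str(leaf) for leaf in sorted(stems.get(stem, [])))
        print(f"{stem:>3} | {leaves}")

stemplot([54, 56, 57, 59, 63, 64, 66, 68, 68, 72, 72, 75, 76, 81, 84, 88, 106])
```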
Introduction to Character Foils
During this lesson, students will view video clips and read texts that have character foils examples. Students will complete a graphic organizer with evidence that supports their identification of foil characters. Once complete, students will use the information from the graphic organizer to discuss character foils.
Metacognitive Approaches to Student-based Learning
In this lesson, students will learn how to make complex inferences and draw conclusions about a work of literary fiction using a combination of text evidence and background knowledge. Using a graphic organizer and a short story, students will record both text evidence and their prior knowledge, and combine these elements to make an inference about the character.
Generating Different Representations of Relationships
Given problems that include data, the student will generate different representations, such as a table, graph, equation, or verbal description.
Determining Slopes from Equations, Graphs, and Tables
Given algebraic, tabular, and graphical representations of linear functions, the student will determine the slope of the relationship from each of the representations.
Approximating the Value of Irrational Numbers
Given problem situations that include pictorial representations of irrational numbers, the student will find the approximate value of the irrational numbers.
Expressing Numbers in Scientific Notation
Given problem situations, the student will express numbers in scientific notation.
Linguistic Roots and Affixes (English 8 Reading)
You will be able to recognize linguistic roots and affixes to use in determining the meanings of academic English words and in other content areas.
Determining if a Relationship is a Functional Relationship
The student is expected to gather and record data & use data sets to determine functional relationships between quantities.
Graphing Dilations, Reflections, and Translations
Given a coordinate plane, the student will graph dilations, reflections, and translations, and use those graphs to solve problems.
Graphing and Applying Coordinate Dilations
Given a coordinate plane or coordinate representations of a dilation, the student will graph dilations and use those graphs to solve problems.
Developing the Concept of Slope
Given multiple representations of linear functions, the student will develop the concept of slope as a rate of change.
Predicting, Finding, and Justifying Data from a Graph
Given data in the form of a graph, the student will use the graph to interpret solutions to problems.
Denotation and Connotation (English I Reading)
You will be able to distinguish between the denotative (dictionary) meaning of a word and its connotative (emotions or associations that are implied rather than literal) meaning.
Newton's Three Laws of Motion
This resource provides alternate or additional learning opportunities for students learning Newton's three laws of motion. It includes a collection of interactive materials, videos, and other digital media. Physics TEKS, (4)(D)
Drawing Conclusions about Three-Dimensional Figures from Nets
Given a net for a three-dimensional figure, the student will make conjectures and draw conclusions about the three-dimensional figure formed by the given net.
Newton's Law of Inertia
This resource provides instructional resources for Newton's First Law, the law of inertia.
Newton's Law of Action-Reaction
This resource is to support TEKS (8)(6)(C), specifically the Newton's third law or the law of action-reaction.
Given schematic diagrams, illustrations or descriptions, students will identify the relationship of electric and magnetic fields in applications such as generators, motors, and transformers.
Given diagrams, illustrations, scenarios, or relevant data, students will calculate the power of a physical system.
Kinetic and Potential Energy
Given diagrams, illustrations or relevant data, students will identify examples of kinetic and potential energy and their transformations. |
Forms for the equation of a straight line. Suppose that we have the graph of a straight line and that we wish to find its equation.
Lines through a sphere. A line can intersect a sphere at one point, in which case it is called a tangent. It can also fail to intersect the sphere at all, or it can intersect the sphere at two points, the entry and exit points.
For the mathematics for the intersection points of a line or line segment and a sphere, see this. Antipodal points: a line that passes through the center of a sphere has two intersection points; these are called antipodal points.
Planes through a sphere. A plane can intersect a sphere at one point, in which case it is called a tangent plane. Otherwise, if a plane intersects a sphere, the "cut" is a circle. Lines of latitude are examples of planes that intersect the Earth sphere. Great circles: a great circle is the intersection of a plane and a sphere where the plane also passes through the center of the sphere.
Lines of longitude and the equator of the Earth are examples of great circles. Two points on a sphere that are not antipodal define a unique great circle; it traces the shortest path between the two points.
If the points are antipodal, there are an infinite number of great circles that pass through them; for example, infinitely many great circles pass through the antipodal points of the north and south poles of the Earth. Great circles define geodesics for a sphere. A geodesic is the shortest path between two points on any surface.
Lune. A lune is the area between two great circles that share antipodal points. Unlike a plane, where the interior angles of a triangle sum to pi radians (180 degrees), on a sphere the interior angles sum to more than pi.
As the sphere becomes large compared to the triangle, the sum of the internal angles approaches pi. Calculating the centre: two lines can be formed through two pairs of the three points; line a passes through the first two points P1 and P2, and line b passes through the next two points P2 and P3.
The equations of these two lines are y = ma(x - x1) + y1 and y = mb(x - x2) + y2, where the slopes are ma = (y2 - y1)/(x2 - x1) and mb = (y3 - y2)/(x3 - x2). The centre of the circle is the intersection of the two lines perpendicular to, and passing through the midpoints of, the segments P1P2 and P2P3.
Alternatively, one can also rearrange the equations of the perpendiculars and solve for y. Radius: the radius is easy to find; for example, the point P1 lies on the circle and we know the centre, so r = sqrt((x - x1)^2 + (y - y1)^2). The denominator mb - ma is only zero when the lines are parallel, in which case they must be coincident and thus no circle results.
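A direct translation of this construction into Python might look as follows. It is a sketch only: it assumes neither chord P1P2 nor P2P3 is vertical and that line a is not horizontal (the point-reordering trick mentioned next handles those cases), and it uses Bourke's formula for the intersection of the perpendicular bisectors:

```python
def circle_from_3_points(p1, p2, p3):
    """Centre and radius of the circle through three points, via the
    perpendicular bisectors of the chords P1P2 and P2P3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    ma = (y2 - y1) / (x2 - x1)  # slope of line a through P1 and P2
    mb = (y3 - y2) / (x3 - x2)  # slope of line b through P2 and P3
    if ma == mb:
        raise ValueError("The points are collinear; no circle results.")
    # x-coordinate of the centre (intersection of the two perpendicular bisectors)
    cx = (ma * mb * (y1 - y3) + mb * (x1 + x2) - ma * (x2 + x3)) / (2 * (mb - ma))
    # Substitute back into the perpendicular bisector of P1P2 (requires ma != 0)
    cy = -(cx - (x1 + x2) / 2) / ma + (y1 + y2) / 2
    r = ((cx - x1) ** 2 + (cy - y1) ** 2) ** 0.5
    return (cx, cy), r

print(circle_from_3_points((0, 1), (1, 0), (0, -1)))  # unit circle: centre (0, 0), r = 1
```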
If either line is vertical then the corresponding slope is infinite. This can be solved by simply rearranging the order of the points so that vertical lines do not occur. Equation of a Sphere from 4 Points on the Surface (written by Paul Bourke, June). Given 4 points in 3-dimensional space, (x1,y1,z1), (x2,y2,z2), (x3,y3,z3), (x4,y4,z4), the equation of the sphere with those points on the surface is found by solving the following determinant. Just type the two points, and we'll take it from there: Equation of Line from 2 Points Calculator.
Enter 2 points and get slope intercept, point slope and standard forms. In similarity with a line on the coordinate plane, we can find the equation of a line in a three-dimensional space when given two different points on the line, since subtracting the position vectors of the two points will give the direction vector.
YOUR TURN: Find the equation of the line passing through the points (-4, 5) and (2, -3).
To algebraically find the intersection of two straight lines, write the equation for each line with y on the left side. Next, write down the right sides of the equation so that they are equal to each other and solve for x.
Equation Line Two Points. Showing top 8 worksheets in the category - Equation Line Two Points. Some of the worksheets displayed are Write equation of line from 2 points work, Two point formula work, Writing linear equations, Equation of a line slope intercept l1s1, Finding the equation of a line given two points practice, Finding the equation of a line given two points.
The first half of this page will focus on writing the equation in slope intercept form like example 1 below. However, if you are comfortable using the point slope form of a line, then skip to the second part of this page because writing the equation from 2 points is easier with point slope form. |
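For instance, here is a small Python helper (hypothetical, for illustration) that produces the slope-intercept form from two points, applied to the "YOUR TURN" example above:

```python
def line_from_two_points(p1, p2):
    """Slope-intercept form y = mx + b through two points (non-vertical lines)."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        return f"x = {x1}"           # a vertical line has no slope-intercept form
    m = (y2 - y1) / (x2 - x1)        # slope
    b = y1 - m * x1                  # intercept, from the point-slope form
    return f"y = {m:.4g}x + {b:.4g}"

print(line_from_two_points((-4, 5), (2, -3)))  # y = -1.333x + -0.3333
```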
Last Update: May 23rd, 2022
Addition is an essential part of a child's education. There are times, however, when mathematics can certainly be an intimidating ordeal. Fortunately, with a parent's care and guidance, any troubled tot's wariness can surely be addressed.
Guiding and nurturing your child's mathematical acumen is a step towards a firm footing in their academic future. Prioritizing children's understanding of the mathematical process of addition gives them confidence and potent preparation for the early rigors of education.
In many states, the goal is to have all their first graders know, at least, their addition and subtraction for values up to 20. The first step in understanding and mastery of addition as an operation is to make the kids understand the nature of adding itself.
To help parents and their children gain a better understanding of addition and mathematics in general, ways to aid in learning the concept have been inventively devised by think tanks and studies conducted around the world. The methods that they have developed have proven to be effective, accessible, and easy to facilitate.
Here are the many educational ways that can help your kids acquire a deeper understanding of the mathematical operation known as addition.
Utilization of Objects to Showcase How Addition Works
It has been proven by research that young minds respond very well to visual stimuli. Making use of this knowledge is sure to aid in understanding addition concepts. From blocks, images, and even counting sticks, there is a wide and varied array of choices to use for these demonstrations and exhibitions. Just about any object that can be easily handled can be used, like beads, Lego, and even chips. To allow them to acclimate themselves to such activities, start with a small number of items and work your way to more complex problems to further demonstrate the relationship between numbers and the operation.
Here are some activities you can consider for these lessons:
- Assign a certain number of blocks to a group of children. Have the kids count their blocks first, then the blocks of two other children, and then of the whole group. Once they get a good grasp on the current number of blocks, add some more blocks for some of the kids and redo the exercise.
- Give your kids a set number of chips or fries or any tasty snack. Have them count how many they have and have them ask you how many more they need to reach a certain number of snacks. For example, if they have five pieces of chips at the start of the activity, ask them how many more they need to reach 15.
- Use stacking activities to exhibit the effect of addition as an operation. Using blocks, demonstrate how addition works in increasing the value of things. You can use money, legos, and other visually helpful materials that can concretely show an increase in their value or number.
Counting Using Body Parts
The most basic math lessons can be learned through the use of the human body. Further, this is something that the kids can use for future lessons as well. This can be done in a group or individually.
The most fundamental way to use the body for addition is through the use of their fingers. Using their fingers, you can teach kids basic concepts of addition. You can take this further by doing this as a group. Have them count how many heads there are in the group, how many hands, toes, feet, fingers, and even eyes.
To add a dimension to this, you can group them further and move them around for a more challenging experience. Have them count themselves, their body parts and configure the activity to optimize entertainment and learning.
Play Games That Employ Math Concepts
From the money counting of Monopoly to the navigation of Snakes and Ladders, there are a wide selection of games that can help aid you in your goal of nurturing your child’s proficiency in addition.
Games with dice often function as good entryways into addition. The die itself is an instrument that provides a repetitive drill for learning the operation, as you and the kids will have to constantly add two numbers together to know how many units your pieces in a board game are to move. Dominoes and playing cards may also provide an exceptional learning experience for the kids as they are excellent instruments for practice.
Once the group gets a better grasp of how the games work and show an improved response to math problems, increase the challenge by adding new elements like throwing in another die or increasing the number of playing cards.
Count with Coins
If you run out of games, you can still practice some addition with the kids using coins. Money is an important tool in practicing mathematical concepts. You may use money to practice adding ones, fives, tens, and even twenty-fives.
Using money provides a visual reference and stimulus for the kids to learn from and demonstrates a practical benefit and advantage to learning addition.
Familiarizing Children with the Math Language
Once the kids understand what it means to "add," it's time to use their knowledge to solve basic math problems. First, educate them on the meaning of the symbols "+" and "=." After introducing the meaning of the symbols to the kids, they have to learn how to write them themselves next.
Guide them as they practice writing number sentences. This will start out as just practicing the addition and equals symbols, but when they're ready, it's time they write actual number sentences such as "1+1=2," and so forth.
It would also help children a great deal to learn the words that also mean addition such as “all together,” “put together,” “how many in all,” “total,” and “sum” that usually mean that a child will need to add two or more numbers.
There are many other scientifically proven ways to improve your children’s math learning. But some are easier and more accessible than others. Through the easy to execute strategies detailed above, parents are provided with options that are easy to implement themselves.
4-by-3 rectangle inquiry
Mathematical inquiry processes: Interpret; define parameters; explore. Conceptual field of inquiry: Area and perimeter; formulae.
Students have posed some of the following questions in their first response to the prompt:
Are there other rectangles with an area of 12 square units?
What other shapes could have an area of 12 square units?
What is different and the same about the rectangles?
How many rectangles are possible with the same area?
Which rectangle has the longest perimeter? ... the shortest?
Is there a rectangle with an area equal to the length of its perimeter?
A discussion can ensue at this point about whether a 4-by-3 rectangle is the 'same' as one with dimensions of 3-by-4. Accepting that they are not, we might speculate that the number of rectangles with the same area is half the area (assuming the dimensions are whole numbers). So there are six rectangles with an area of 12 square units and four with an area of eight. However, if the area is a prime number, there will only be two. The inquiry might develop into a consideration of the factors of prime and composite numbers.
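These conjectures are easy to test by brute force. The short Python sketch below (the function names are illustrative) lists the ordered whole-number rectangles with a given area, which amounts to listing its divisors, and, anticipating the perimeter prompt below, with a given perimeter; it reproduces the counts of six for an area of 12 square units and six for a perimeter of 14 units:

```python
def rectangles_with_area(area):
    """Ordered whole-number rectangles (w, h) with w * h = area;
    w-by-h and h-by-w count as different rectangles."""
    return [(w, area // w) for w in range(1, area + 1) if area % w == 0]

def rectangles_with_perimeter(perimeter):
    """Ordered whole-number rectangles (w, h) with 2 * (w + h) = perimeter."""
    half = perimeter // 2
    return [(w, half - w) for w in range(1, half)]

print(len(rectangles_with_area(12)), rectangles_with_area(12))             # 6 rectangles
print(len(rectangles_with_perimeter(14)), rectangles_with_perimeter(14))   # 6 rectangles
```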
Perhaps the teacher prefers to hold the length of the perimeter constant for the initial prompt (see below), when similar questions might arise:
What is different and the same about the rectangles?
How many rectangles are possible with the same perimeter?
Which has the greatest area? ... the smallest?
The conjectures that develop from the particular cases shown in the two prompts above might be combined. So, considering the 4-by-3 rectangle, there are six rectangles with an area of 12 square units and six with a perimeter of 14 units. Is this always the case with every rectangle? This pathway to the inquiry has the potential to reinforce the distinction between the concept of area and that of perimeter.
Matthew Bernstein, a teacher of a grade 5/6 class at the Fred Varley Public School (Markham, Ontario), reports on the inquiry his students carried out into the prompt:
Even having done only a little bit of initial work on area and perimeter, I felt my Grade 5s would do well with this inquiry as the Grade 4 curriculum in Ontario asks students to inquire into the formula for the area of a rectangle. There was lots of great thinking when I introduced the prompt.
Students’ curiosity was aroused and they immediately wanted to know if there were other rectangles with the same area. Then they began to investigate other shapes with areas of 12 square units, including triangles. This eventually led one group to use pattern blocks to inquire independently into the formula for the area of a trapezoid. During the inquiry, which lasted over two days, the students had some great learning. This has made for an easy transition to inquiring into the formulas for parallelograms and trapezoids!
The first picture (top left) shows the students' initial responses to the prompt. The other pictures show the planning sheet the students' used and examples of their inquiries.
Building resilience and developing creativity
Michelle Cole, Leader of Learning in the Mathematics at Ormiston Bushfield Academy, Peterborough (UK), gave the prompt to her year 7 class as an introduction to the concepts of perimeter and area. She reports that the students’ responses were “inspiring, amazing, and truly beyond any of my expectations.”
The students posed questions on a wide range of mathematical topics: perimeter and area; symmetry, angles, and other properties of the shapes; coordinates; volume; and enlargement by a scale factor of ½. Other questions suggested novel lines of inquiry:
How many rectangles (or squares) can you see in each shape?
How many triangles from one point can you find?
What shapes can be made out of each shape?
What fraction of the grid do the shapes take up?
Michelle describes how she approached the inquiry:
I have been experimenting with prompts, which in the past I have structured a little more. This was the first time I simply gave them the diagram on A3 paper and said, “What questions could we ask?”
Students recorded their questions and ideas on A3 paper.
Students were a little reluctant to put things on paper but once they realised that they had free range to think about questions that we would then discuss, they came up with many and varied ideas.
When we discussed their ideas, we also talked about which questions we could answer (for example, What is the perimeter? What is the area?) compared to the questions we could not answer (for example, Where is the origin? Is there a reason why they are different colours?).
We then centred the activity back on perimeter and area with the students investigating the perimeter when they put more than one of their desks together.
I use the ‘what is the same? what is different?’ prompt fairly regularly as a starter (you can see some students have used this as questions on their sheets). It is this that has helped build up their resilience and has got them thinking in a wider context than the most obvious.
Engaging younger learners in inquiry
Amelia O’Brien, a teacher at the United World College Thailand in Phuket, tried out the prompt with her grade 3 class. She reports on discovering that the Inquiry Maths approach is just as effective with younger learners as it is with secondary students:
Grade 3 tried out the prompt by first using Project Zero's Visible Thinking Routine 'see, think, wonder' as a collaborative group, sharing and building on each other's ideas. At first, some students could 'see' a face (if another rectangular eye was added!) and others could 'see' a similarity to the Chinese character 'up'. After being asked to think like a mathematician, students made immediate connections to arrays. It is worth noting that we are concurrently inquiring into multiplication and division.
After modelling expectations and discussing possible ways of unpacking and responding to the prompt, including sentence starters and suggestions of other ways we could represent the prompt using a multidimensional approach, we continued our thinking in small groups.
Students identified patterns, experimented using symbols and numbers and were encouraged to ask questions. They then choose a question to explore that interested them.
As the prompt and associated questions inherently differentiate themselves, most students chose an appropriately challenging question based on their own prior knowledge and understanding. We then planned how we would carry out our inquiries, selecting and connecting concepts that might help focus our thinking. Most students decided to experiment by modelling (using blocks, counters or grid paper) or research using maths dictionaries and discussion.
I had not used mathematical prompts with students this young before and was unsure of how it would play out. With some differentiated modelling, the prompt and subsequent inquiries proved to be as engaging and meaningful as using this approach with older students!
Grade 6 pupils at the Luanda International School (Angola) began their inquiry into measurement by considering the rectangles prompt. As they attempted to make sense of the prompt, the pupils' questions connected their existing knowledge of relevant mathematical concepts to the prompt.
The class went on to conduct personal inquiries, during which the generation of even more questions opened up new pathways for exploration. The quality and depth of this generative questioning attests to the sophistication of inquiry processes developed in the class.
Questioning, wondering and speculating
The picture shows the questions and observations from Aine Carroll's year 7 class. They initially focus on the area and perimeter of the rectangles in the prompt before students start to wonder whether it is possible to create other shapes with the same area. One student speculates about the number of rectangles with an area twice the size of those in the prompt. The other two pictures show individual students' contributions to the inquiry.
Notice, think, wonder
Amanda Klahn, a PYP and inquiry-based maths teacher, posted this blog about using the 4-by-3 prompt with her class on her Doing Maths website.
Observe and question
Samia Henaine posted the diagram on twitter with the caption, "Eliminate three unit squares from a 4-by-3 rectangle and observe what happens."
The prompt gives rise to new lines of inquiry with the 4-by-3 rectangle:
What happens to the area of the rectangle?
What eliminations give the longest and shortest perimeters?
How many different perimeters are possible when eliminating three squares?
What happens if you eliminate more or fewer squares?
Is it possible to create the same perimeter by eliminating two, three or four squares?
What happens if you eliminate a square in the middle of the rectangle?
What do you notice if we start with a bigger or smaller rectangle?
What eliminations always give the longest and shortest perimeters?
Samia is a math teacher, and PYP and ICT Coordinator at Houssam Eddine Hariri High School in Saida (Lebanon). For more inquiry prompts, see her website Math Teachers as Bridge Builders.
Genesis of the prompt
Inquiries can begin with the simplest of prompts. Mark Greenaway (an Advanced Skills Teacher in the UK) contacted Inquiry Maths about developing prompts for students with lower prior attainment. He proposed using a 4-by-3 rectangle as a prompt. This could potentially lead to a very open inquiry encompassing a number of different directions. However, a prompt like the 4-by-3 rectangle is so familiar to students that it might fail to meet the first criterion for creating a prompt: "A prompt must promote curiosity and questioning in students of the sort 'that can't be right' or 'I've noticed ...'. Prompts should be engaging, and ripe for speculation or conjecture."
There is no guarantee that, beyond stating the obvious features of the rectangle, students will be able to isolate a key concept on which to build an inquiry. For example, a student might be able to identify the area as 12 square units, but not be able to extrapolate the concept of area as a foundation for inquiry. Such a step requires the use of well-developed inquiry skills, and particularly high levels of confidence and creativity.
If students lack those skills, then the teacher will need to define the inquiry further. Teacher intervention at this point reduces the possibility of students working on their own questions and statements, which is a key motivational aspect of inquiry. It is better for the prompt to 'suggest' the key concept so that students can generalise to other cases for themselves. When the rectangle is placed in the context of a series of rectangles sharing the same characteristic (for example, the area), then an initial inquiry can develop out of students' observations.
After the first phase, the teacher can highlight the constraint in the prompt, and invite students to change the prompt by holding another characteristic (for example, the perimeter) of the 4-by-3 rectangle constant. While suggesting changes to the prompt is empowering for students, it remains a highly developed skill. Not only do students have to learn how to be creative, they also have to learn how to make mathematically-valid suggestions.
The teacher can run a fully open inquiry by starting with the 4-by-3 rectangle only. The risk is that, with pressures to 'cover' a curriculum, the inquiry could go in one of many different directions.
Alternatively, the teacher could guide the inquiry into a particular direction by offering prompts in which the 4-by-3 rectangle is an integral part.
Indeed, a teacher who starts a series of inquiries with one key component thereby emphasises the inter-connected nature of mathematics.
The two additional prompts link the 4-by-3 rectangle to sequences and reflection symmetry.
For more on the differences between different types of inquiry and on the factors involved in deciding whether to choose an open, guided or structured inquiry, see Levels of Inquiry Maths.
Prompt 1: sequences
The prompt invites students to pose questions and make observations on sequences: How many sequences are there that contain the 4-by-3 rectangle? Is there another rectangle before the sequence in the prompt? Can you find an expression (in words or algebra) to describe the area and perimeter for shape n? What are the term-to-term or position-to-term rules for the sequences?
Prompt 2: symmetry
A second alternative prompt invites students to remove squares to create rectangle patterns with lines of symmetry. How many patterns can be created with one line or two lines of symmetry? What if you remove more than two squares? Why can you not make patterns with more than two lines of symmetry? What shape would you need if that was your aim? |