Thirty Years' War
The Thirty Years' War[l] took place largely within the Holy Roman Empire from 1618 to 1648. One of the most destructive wars in European history, it caused an estimated 4.5 to 8 million deaths, while some areas of Germany experienced population declines of over 50%. Related conflicts include the Eighty Years' War, the War of the Mantuan Succession, the Franco-Spanish War, and the Portuguese Restoration War.
Until the 20th century, it was seen as part of the German religious struggle initiated by the 16th century Reformation. The 1555 Peace of Augsburg divided the Empire into Lutheran and Catholic states, but over the next 50 years the expansion of Protestantism beyond these boundaries destabilised Imperial authority. Although religion was a significant factor in starting the war, scholars now generally agree its scope and extent were driven by the contest for European dominance between the Habsburgs in Austria and Spain and the French House of Bourbon.
The war began in 1618 when Ferdinand II was deposed as King of Bohemia and replaced by Frederick V of the Palatinate. Although the Bohemian Revolt was quickly suppressed, fighting expanded into the Palatinate, whose strategic importance drew in the Dutch Republic and Spain, then engaged in the Eighty Years' War. Since ambitious external rulers like Christian IV of Denmark and Gustavus Adolphus of Sweden also held territories within the Empire, what began as an internal dynastic dispute was transformed into a far more destructive European conflict.
The first phase from 1618 until 1635 was primarily a civil war between German members of the Holy Roman Empire, with external powers playing supportive roles. After 1635, the Empire became one theatre in a wider struggle between France, supported by Sweden, and Spain in alliance with Emperor Ferdinand III. This concluded with the 1648 Peace of Westphalia, whose provisions included greater autonomy within the Empire for states like Bavaria and Saxony, as well as acceptance of Dutch independence by Spain. By weakening the Habsburgs relative to France, the conflict altered the European balance of power and set the stage for the wars of Louis XIV.
The 1552 Peace of Passau ended the Schmalkaldic War between Protestants and Catholics in the Holy Roman Empire, while the 1555 Peace of Augsburg tried to prevent future conflict by fixing existing boundaries. Under the principle of cuius regio, eius religio, states were either Lutheran, then the most usual form of Protestantism, or Catholic, based on the religion of their ruler. Other provisions protected substantial religious minorities in cities like Donauwörth and confirmed Lutheran ownership of property taken from the Catholic Church since Passau.
The agreement was undermined by the expansion of Protestantism beyond its 1555 boundaries, into areas previously dominated by Catholicism. An additional source of conflict was the growth of Reformed faiths not recognised by Augsburg, especially Calvinism, a theology viewed with hostility by both Lutherans and Catholics. Finally, religion was increasingly superseded by economic and political objectives; Lutheran Saxony, Denmark-Norway and Sweden competed with each other and Calvinist Brandenburg over the Baltic trade.
Managing these issues was complicated by the fragmented nature of the Empire and its representative institutions, which included 300 Imperial Estates distributed across Germany, the Low Countries, Northern Italy, as well as Alsace and Franche-Comté in modern France. They ranged in size and importance from the seven Prince-electors who voted for the Holy Roman Emperor, down to Prince-bishoprics and Imperial cities like Hamburg.[m] In addition, each belonged to a regional Imperial circle; these circles co-ordinated defence and taxation and often operated as autonomous bodies. Above them sat the Imperial Diet, which prior to 1663 assembled on an irregular basis and was primarily a forum for discussion rather than legislation.[n]
Although Emperors were elected, since 1440 the position had been held by a member of the Habsburg family. As the largest single landowner within the Empire, the Habsburgs controlled territories containing over eight million subjects, including the Archduchy of Austria, Bohemia and Hungary. The Habsburg emperors also ruled Spain until 1556, when it became a separate entity; Spain retained Imperial interests, including the Duchy of Milan, and while the two branches of the family often co-operated, their objectives did not always align. The Spanish Empire was a global maritime superpower whose possessions included the Spanish Netherlands, Milan, the Kingdom of Naples, the Philippines, and most of the Americas. Austria was a land-based power, whose strategic focus was securing a pre-eminent position in Germany and defending its eastern border against the Ottoman Empire.
Before Augsburg, unity of religion compensated for the lack of strong central authority; once that unity was removed, opportunities arose for those who sought to weaken Imperial authority further. These included ambitious Imperial states like Lutheran Saxony and Catholic Bavaria, as well as France, confronted by Habsburg lands on its borders to the north and south and along the Pyrenees. A further complication was that many foreign rulers were also Imperial princes, involving them in the Empire's internal disputes; Christian IV of Denmark joined the war in 1625 as Duke of Holstein.
Background: 1556 to 1618
Disputes occasionally resulted in full-scale conflict like the 1583 to 1588 Cologne War, caused when its ruler converted to Calvinism. More common were events such as the 1606 'Battle of the Flags' in Donauwörth, when riots broke out after the Lutheran majority blocked a Catholic religious procession. Emperor Rudolf approved intervention by the Catholic Maximilian of Bavaria; in return, Maximilian was allowed to annex the town, whose official religion, as provided for at Augsburg, changed from Lutheran to Catholic.
When the Imperial Diet opened in February 1608, both Lutherans and Calvinists united to demand formal re-confirmation of the Augsburg settlement. However, in return the Habsburg heir Archduke Ferdinand required the immediate restoration of all property taken from the Catholic church since 1555, rather than the previous practice of settling ownership through the courts on a case-by-case basis. This threatened all Protestants, paralysed the Diet, and removed the perception of Imperial neutrality.
Loss of faith in central authority meant towns and rulers began strengthening their fortifications and armies; outside travellers often commented on the growing militarisation of Germany in this period. This was taken a stage further in 1608 when Frederick IV, Elector Palatine, formed the Protestant Union, and Maximilian responded by setting up the Catholic League in July 1609. Both structures were primarily designed to support the dynastic ambitions of their leaders, but their creation combined with the 1609 to 1614 War of the Jülich Succession to increase tensions throughout the Empire. Some historians who see the war as primarily a European conflict argue Jülich marks its beginning, with Spain and Austria backing the Catholic candidate, France and the Dutch Republic the Protestant.
External powers became involved in what was an internal German dispute due to the imminent expiry of the 1609 Twelve Years' Truce, which suspended the Eighty Years' War between Spain and the Dutch Republic. Before restarting hostilities, Ambrosio Spinola, commander in the Spanish Netherlands, needed to secure the Spanish Road, an overland route connecting Habsburg possessions in Italy to Flanders. This allowed him to move troops and supplies by road, rather than sea where the Dutch navy was dominant; by 1618, the only part not controlled by Spain ran through the Electoral Palatinate.
Since Emperor Matthias had no surviving children, in July 1617 Philip III of Spain agreed to support Ferdinand's election as king of Bohemia and Hungary. In return, Ferdinand made concessions to Spain in Northern Italy and Alsace, and agreed to support their offensive against the Dutch. Delivering these commitments required his election as Emperor, which was not guaranteed; one alternative was Maximilian of Bavaria, who opposed the increase of Spanish influence in an area he considered his own, and tried to create a coalition with Saxony and the Palatinate to support his candidacy.
A third candidate was the Calvinist Frederick V, Elector Palatine, who succeeded his father in 1610, then in 1613 married Elizabeth Stuart, daughter of James I of England. Four of the electors were Catholic, three Protestant; if this could be changed, it might result in a Protestant Emperor. When Ferdinand was elected king of Bohemia in 1617, he gained control of its electoral vote; however, his conservative Catholicism made him unpopular with the largely Protestant nobility, who were also concerned at the erosion of their rights. In May 1618, these factors combined to bring about the Bohemian Revolt.
Phase I: 1618 to 1635
The Bohemian Revolt
The Jesuit-educated Ferdinand once claimed he would rather see his lands destroyed than tolerate heresy for a single day. Appointed to rule the Duchy of Styria in 1595, within eighteen months he eliminated Protestantism in what was previously a stronghold of the Reformation. Focused on retaking the Netherlands, the Spanish Habsburgs preferred to avoid antagonising Protestants elsewhere, and recognised the dangers associated with Ferdinand's fervent Catholicism, but accepted the lack of alternatives.
Ferdinand reconfirmed Protestant religious freedoms when elected king of Bohemia in May 1617, but his record in Styria led to the suspicion he was only awaiting a chance to overturn them. These concerns were exacerbated when a series of legal disputes over property were all decided in favour of the Catholic Church. In May 1618, Protestant nobles led by Count Thurn met in Prague Castle with Ferdinand's two Catholic representatives, Vilem Slavata and Jaroslav Borzita. In what became known as the Third Defenestration of Prague, the two men and their secretary Philip Fabricius were thrown out of the castle windows, although all three survived.
Thurn established a Protestant-dominated government in Bohemia, while unrest expanded into Silesia and the Habsburg heartlands of Lower and Upper Austria, where much of the nobility was also Protestant. Losing control of these threatened the entire Habsburg state, while Bohemia was one of the most prosperous areas of the Empire and its electoral vote crucial to ensuring Ferdinand succeeded Matthias as Emperor. The combination meant their recapture was vital for the Austrian Habsburgs but chronic financial weakness left them dependent on Maximilian and Spain for the resources needed to achieve this.
Spanish involvement inevitably drew in the Dutch, and potentially France, although the strongly Catholic Louis XIII faced his own Protestant rebels at home and refused to support them elsewhere. The revolt also provided opportunities for external opponents of the Habsburgs, including the Ottoman Empire and Savoy. Funded by Frederick and the Duke of Savoy, a mercenary army under Ernst von Mansfeld was sent to support the Bohemian rebels. Attempts by Maximilian and John George of Saxony to broker a negotiated solution ended when Matthias died in March 1619, since many believed the loss of his authority and influence had fatally damaged the Habsburgs.
By mid-June 1619, the Bohemian army under Thurn was outside Vienna and although Mansfeld's defeat by Imperial forces at Sablat forced him to return to Prague, Ferdinand's position continued to worsen. Gabriel Bethlen, Calvinist Prince of Transylvania, invaded Hungary with Ottoman support, although the Habsburgs persuaded them to avoid direct involvement; this was helped when the Ottomans became involved in the 1620 Polish war, followed by the 1623 to 1639 conflict with Persia.
On 19 August, the Bohemian Estates rescinded Ferdinand's 1617 election as king, and formally offered the crown to Frederick on the 26th; two days later, Ferdinand was elected Emperor, making war inevitable if Frederick accepted. With the exception of Christian of Anhalt, Frederick's advisors urged him to reject it, as did the Dutch, the Duke of Savoy, and his father-in-law James. 17th century Europe was a highly structured and socially conservative society, and their lack of enthusiasm was due to the implications of removing a legally elected ruler, regardless of religion.
As a result, although Frederick accepted the crown and entered Prague in October 1619, his support gradually eroded over the next few months. In July 1620, the Protestant Union proclaimed its neutrality, while John George of Saxony agreed to back Ferdinand in return for Lusatia, and a promise to safeguard the rights of Lutherans in Bohemia. A combined Imperial-Catholic League army funded by Maximilian and led by Count Tilly pacified Upper and Lower Austria before invading Bohemia, where they defeated Christian of Anhalt at the White Mountain in November 1620. Although the battle was far from decisive, the rebels were demoralised by lack of pay, shortages of supplies, and disease, while the countryside had been devastated by Imperial troops. Frederick fled Bohemia and the revolt collapsed.
The Palatinate Campaign
By abandoning Frederick, the German princes hoped to restrict the dispute to Bohemia, but Maximilian's dynastic ambitions made this impossible. In the October 1619 Treaty of Munich, Ferdinand agreed to transfer the Palatinate's electoral vote to Bavaria and allow him to annex the Upper Palatinate. Many Protestants had backed Ferdinand because they objected to the deposition of a legally elected king of Bohemia, and they now opposed Frederick's removal from the Palatinate on the same grounds. This removal turned the conflict into a contest between Imperial authority and "German liberties", while Catholics saw an opportunity to regain lands lost since 1555. The combination destabilised large parts of the Empire.
The strategic importance of the Palatinate and its proximity to the Spanish Road drew in external powers; in August 1620, the Spanish under Spinola and Córdoba occupied the Lower Palatinate. James I of England responded to this attack on his son-in-law by sending naval forces to threaten Spanish possessions in the Americas and the Mediterranean, and announced he would declare war if Spinola had not withdrawn his troops by spring 1621. These actions were primarily designed to placate his opponents in Parliament, who considered his pro-Spanish policy a betrayal of the Protestant cause. However, Spanish chief minister Olivares correctly interpreted them as an invitation to open negotiations, and in return for an Anglo-Spanish alliance offered to restore Frederick to his Rhineland possessions.
Since Frederick demanded full restitution of his lands and titles, which was incompatible with the Treaty of Munich, hopes of reaching a negotiated peace quickly evaporated. When the Eighty Years War restarted in April 1621, the Dutch provided Frederick military support to regain his lands, along with a mercenary army under Mansfeld paid for with English subsidies. Over the next eighteen months, Spanish and Catholic League forces won a series of victories; by November 1622, they controlled most of the Palatinate, apart from Frankenthal, held by a small English garrison under Sir Horace Vere. The remnants of Mansfeld's army took refuge in the Dutch Republic, as did Frederick; he spent most of his time in The Hague, until his death in November 1632.
At a meeting of the Imperial Diet in February 1623, Ferdinand forced through provisions transferring Frederick's titles, lands, and electoral vote to Maximilian. He did so with support from the Catholic League, despite strong opposition from both the Protestant members and the Spanish. The Palatinate was clearly lost; in March, James instructed Vere to surrender Frankenthal, while Tilly's victory over Christian of Brunswick at Stadtlohn in August completed military operations. However, Spanish and Dutch involvement in the campaign was a significant step in internationalising the war, while Frederick's removal meant other Protestant princes began discussing armed resistance to preserve their own rights and territories.
Danish intervention (1625–1629)
With Saxony dominating the Upper Saxon Circle and Brandenburg the Lower, both circles had remained neutral during the campaigns in Bohemia and the Palatinate. However, Frederick's deposition in 1623 meant John George of Saxony and the Calvinist George William, Elector of Brandenburg, became concerned Ferdinand intended to reclaim formerly Catholic bishoprics currently held by Protestants. These fears seemed confirmed when Tilly restored the Roman Catholic Diocese of Halberstadt in early 1625.
As Duke of Holstein, Christian IV was also a member of the Lower Saxon circle, while Denmark's economy relied on the Baltic trade and tolls from traffic through the Øresund. In 1621, Hamburg accepted Danish 'supervision', while his son Frederick became joint-administrator of Lübeck, Bremen, and Verden; possession of these territories ensured Danish control of the Elbe and Weser rivers.
Ferdinand had paid the Bohemian magnate Albrecht von Wallenstein for his support against Frederick with estates confiscated from the Bohemian rebels, and now contracted with him to conquer the north on a similar basis. In May 1625, the Lower Saxony kreis elected Christian their military commander, although not without resistance; Saxony and Brandenburg viewed Denmark and Sweden as competitors, and wanted to avoid either becoming involved in the Empire. Attempts to negotiate a peaceful solution failed as the conflict in Germany became part of the wider struggle between France and its Habsburg rivals in Spain and Austria.
In the June 1624 Treaty of Compiègne, France had agreed to subsidise the Dutch war against Spain for a minimum of three years, while in the December 1625 Treaty of The Hague, the Dutch and English agreed to finance Danish intervention in the Empire.[o] Hoping to create a wider coalition against Ferdinand, the Dutch invited France, Sweden, Savoy, and the Republic of Venice to join, but it was overtaken by events. In early 1626, Cardinal Richelieu, main architect of the alliance, faced a new Huguenot rebellion at home and in the March Treaty of Monzón, France withdrew from Northern Italy, re-opening the Spanish Road.
However, the Dutch and English subsidies enabled Christian to devise an ambitious three-part campaign plan; while he led the main force down the Weser, Mansfeld would attack Wallenstein in Magdeburg, supported by forces led by Christian of Brunswick and Maurice of Hesse-Kassel. The advance quickly fell apart; Mansfeld was defeated at Dessau Bridge in April, and when Maurice refused to support him, Christian of Brunswick fell back on Wolfenbüttel, where he died of disease shortly after. The Danes were comprehensively beaten at Lutter in August, and Mansfeld's army dissolved following his death in November.
Many of Christian's German allies, such as Hesse-Kassel and Saxony, had little interest in exchanging Imperial domination for Danish, while few of the subsidies agreed in the Treaty of The Hague were ever paid. Charles I of England allowed Christian to recruit up to 9,000 Scottish mercenaries, but they took time to arrive, and while able to slow Wallenstein's advance, were insufficient to stop him. By the end of 1627, Wallenstein occupied Mecklenburg, Pomerania, and Jutland, and began making plans to construct a fleet capable of challenging Danish control of the Baltic. He was supported by Spain, for whom it provided an opportunity to open another front against the Dutch.
In May 1628, his deputy von Arnim besieged Stralsund, the only port with large enough shipbuilding facilities, but this brought Sweden into the war. Gustavus Adolphus despatched several thousand Scots and Swedish troops to Stralsund, commanded by Alexander Leslie who was also appointed governor. Von Arnim was forced to lift the siege on 4 August, but three weeks later, Christian suffered another defeat at Wolgast. He began negotiations with Wallenstein, who despite his recent victories was concerned by the prospect of Swedish intervention, and thus anxious to make peace.
With Austrian resources stretched by the outbreak of the War of the Mantuan Succession, Wallenstein persuaded Ferdinand to agree to relatively lenient terms in the June 1629 Treaty of Lübeck. Christian retained his German possessions of Schleswig and Holstein, in return for relinquishing Bremen and Verden, and abandoning support for the German Protestants. While Denmark kept Schleswig and Holstein until 1864, this effectively ended its position as the predominant Nordic state.
Once again, the methods used to obtain victory explain why the war failed to end. Ferdinand paid Wallenstein by letting him confiscate estates and extort ransoms from towns, and by allowing his men to plunder the lands they passed through, regardless of whether they belonged to allies or opponents. Anger at such tactics and his growing power came to a head in early 1628 when Ferdinand deposed the hereditary Duke of Mecklenburg, and appointed Wallenstein in his place. Although opposition to this act united all German princes regardless of religion, Maximilian of Bavaria was compromised by his acquisition of the Palatinate; while Protestants wanted Frederick restored and the position returned to that of 1618, the Catholic League argued only for a return to that of 1627.
Made overconfident by success, in March 1629 Ferdinand passed an Edict of Restitution, which required all lands taken from the Catholic church after 1555 to be returned. While technically legal, politically it was extremely unwise, since doing so would alter nearly every state boundary in North and Central Germany, deny recognition to Calvinism, and restore Catholicism in areas where it had not been a significant presence for nearly a century. Well aware none of the princes involved would agree, Ferdinand used the device of an Imperial edict, once again asserting his right to alter laws without consultation. This new assault on 'German liberties' ensured continuing opposition and undermined his previous success.
Swedish intervention: 1630 to 1634
Richelieu stated his policy was to "arrest the course of Spanish progress" and "protect her neighbours from Spanish oppression". With French resources tied up in Italy, he helped negotiate the September 1629 Truce of Altmark between Sweden and Poland, freeing Gustavus Adolphus to enter the war. Although partly motivated by a genuine desire to support his Protestant co-religionists, like Christian, Gustavus also wanted to maximise his share of the Baltic trade that provided much of Sweden's income. Using Stralsund as a bridgehead, in June 1630 nearly 18,000 Swedish troops landed in the Duchy of Pomerania. At the same time, Gustavus signed an alliance with Bogislaw XIV, Duke of Pomerania, securing his interests in Pomerania against the Catholic Polish–Lithuanian Commonwealth, another Baltic competitor linked to Ferdinand by family and religion. As a result, the Poles turned their attention to Russia, initiating the 1632 to 1634 Smolensk War.
Swedish expectations of widespread German support proved unrealistic and by the end of 1630, their only new ally was the city of Magdeburg, which was besieged by Tilly. Despite the devastation inflicted on their territories by Imperial soldiers, both Saxony and Brandenburg had their own ambitions in Pomerania, which clashed with those of Gustavus; previous experience also showed inviting external powers into the Empire was easier than getting them to leave.
Once again Richelieu used French financial power to reconcile these differences; the 1631 Treaty of Bärwalde provided funds for the Swedes and their Protestant allies, including Saxony and Brandenburg. These payments amounted to 400,000 Reichstaler, or one million livres, per year, plus an additional 120,000 Reichstalers for 1630. Though less than 2% of the total French state budget, it constituted over 25% of the Swedish budget and allowed Gustavus to support an army of 36,000. He won major victories at Breitenfeld in September 1631, then Rain in April 1632, where Tilly was killed.
After Tilly's death, Ferdinand turned once again to Wallenstein; knowing Gustavus was overextended, Wallenstein marched into Franconia and established himself at Fürth, threatening Swedish supply lines. The largest battle of the war took place in late August, when a Swedish assault on the Imperial camp at the Alte Veste outside the town was bloodily repulsed, arguably the greatest blunder committed by Gustavus during his German campaign. Two months later, the Swedes and Imperials met at Lützen, both sides suffering heavy casualties; some Swedish units incurred losses of over 60%, while Wallenstein's deputy Pappenheim and Gustavus himself were killed. Fighting continued until dusk when Wallenstein retreated, abandoning his artillery and wounded. Despite their losses, this allowed the Swedes to claim victory, although the result continues to be disputed.
Following the death of Gustavus, Swedish policy was directed by his extremely capable Chancellor Axel Oxenstierna. In April 1633, the Swedes and their German allies formed the Heilbronn League with funding provided by the French, and in July their combined forces defeated an Imperial army led by the Bavarian general Bronckhorst-Gronsfeld at Oldendorf. Lützen had severely impacted Wallenstein's prestige, while his domestic opponents claimed he failed to support Bronckhorst-Gronsfeld. Combined with rumours he was preparing to switch sides, these factors led Emperor Ferdinand to order his arrest in February 1634; on the 25th, he was assassinated by his own officers in Cheb.
The loss of Wallenstein and his organisation left Emperor Ferdinand reliant on Spain for military support; since their main concern was to re-open the Spanish Road for their campaign against the Dutch, the focus now shifted to the Rhineland and Bavaria. Cardinal-Infante Ferdinand of Austria, newly appointed Governor of the Spanish Netherlands, raised an army of 18,000 in Italy, which met up with an Imperial force of 15,000 at Donauwörth on 2 September 1634. Three days later, they won a decisive victory at Nördlingen which destroyed Swedish power in Southern Germany and led to the defection of their German allies, who now sought to make peace with the Emperor.
Phase II: France joins the war, 1635 to 1648
By triggering direct French intervention, Nördlingen expanded the conflict rather than ending it. Richelieu provided the Swedes with new subsidies, hired mercenaries led by Bernhard of Saxe-Weimar for an offensive in the Rhineland and in May 1635 formally declared war on Spain. A few days later, the German states and Ferdinand agreed the Peace of Prague; in return for withdrawing the Edict of Restitution, the Heilbronn and Catholic Leagues were dissolved and replaced by a single Imperial army, although Saxony and Bavaria retained control of their own forces. This is generally seen as the point when the conflict ceased to be primarily a German civil war.
In March 1635, a French force entered the Valtellina, once again cutting the link between Milan and the Empire. In May, their main army of 35,000 invaded the Spanish Netherlands but was forced to retreat in July after suffering 17,000 casualties from disease and desertion. A Spanish offensive in 1636 reached Corbie in Northern France; although it caused panic in Paris, lack of supplies forced them to retreat, and it was not repeated. In the March 1636 Treaty of Wismar, France formally joined the Thirty Years' War in alliance with Sweden; a Swedish army under Johan Banér entered Brandenburg and re-established their position in North-East Germany following the Battle of Wittstock on 4 October 1636.
Ferdinand II died in February 1637 and was succeeded by his son Ferdinand III, who faced a deteriorating military position. In March 1638, Bernhard destroyed an Imperial army at Rheinfelden, while his capture of Breisach in December secured French control of Alsace and severed the Spanish Road. Although von Hatzfeldt defeated a Swedish-English-Palatine force at Vlotho in October 1638, the main Imperial army under Matthias Gallas abandoned North-East Germany to the Swedes, unable to sustain itself in the devastated area. Banér defeated the Saxons at Chemnitz in April 1639, then entered Bohemia in May. To retrieve the situation, Ferdinand was forced to divert Piccolomini's army from Thionville, ending direct military cooperation between Austria and Spain.
Pressure grew on Spanish minister Olivares to make peace, especially after attempts to hire Polish auxiliaries proved unsuccessful. Cutting the Spanish Road had forced Madrid to resupply its armies in Flanders by sea, and in October 1639 a large Spanish convoy was destroyed at the Battle of the Downs. Dutch attacks on Spanish possessions in Africa and the Americas caused unrest in Portugal, then part of the Spanish Empire, and combined with heavy taxation to spark revolts in both Portugal and Catalonia. After the French captured Arras in August 1640 and overran Artois, Olivares argued it was time to accept Dutch independence and prevent further losses in Flanders. The Spanish Empire remained a formidable power but could no longer subsidise Ferdinand, impacting his ability to continue the war.
After Bernhard died in July 1639, his troops joined Banér's Swedish army on an ineffectual campaign along the Weser, the highlight being a surprise attack in January 1641 on the Imperial Diet in Regensburg. Forced to retreat, Banér reached Halberstadt in May where he died; despite beating off an Imperial force at Wolfenbüttel in June, his largely German troops mutinied due to lack of pay. The situation was saved by the arrival of Lennart Torstenson in November with 7,000 Swedish recruits and enough cash to satisfy the mutineers.
French victory at Kempen in January 1642 was followed by Second Breitenfeld in October 1642, where Torstenson inflicted almost 10,000 casualties on an Imperial army led by Archduke Leopold Wilhelm of Austria. The capture of Leipzig in December gave the Swedes a significant new base in Germany, and despite their failure to take Freiberg, by 1643 the Saxon army had been reduced to a few isolated garrisons. Ferdinand accepted he could no longer achieve his objectives by military means, but by fighting on hoped to deter the Imperial Estates from joining negotiations with France and Sweden, so that he could instead represent the Empire as a whole.
This seemed more achievable with the deaths of Richelieu in December 1642, followed by Louis XIII on 14 May 1643, leaving the five-year-old Louis XIV as king. However, Richelieu's policies were continued by his successor Cardinal Mazarin, while French gains in Alsace allowed him to re-focus on the war against Spain in the Netherlands. On 19 May, Condé won a famous victory over the Spanish at Rocroi, although it was less decisive than often assumed.
Condé's inability to take full advantage of Rocroi was partially due to factors that affected all the combatants. The devastation inflicted by 25 years of warfare meant armies spent more time foraging than fighting, forcing them to become smaller and more mobile with a much greater emphasis on cavalry. It also shortened the campaigning seasons, since the need to gather forage meant they started later, and restricted them to areas that could be easily supplied, usually close to rivers. In addition, the French had to rebuild their army in Germany after it was shattered by an Imperial-Bavarian force led by Franz von Mercy at Tuttlingen in November.
Soon after Rocroi, Ferdinand invited Sweden and France to attend peace talks in the Westphalian towns of Münster and Osnabrück, but these were delayed when Christian of Denmark blockaded Hamburg and increased toll payments in the Baltic. This severely impacted the Dutch and Swedish economies and in December 1643 the Swedes began the Torstenson War by invading Jutland, with the Dutch providing naval support. Ferdinand pulled together an Imperial army under Gallas to attack the Swedes from the rear, which proved a disastrous decision. Leaving Wrangel to finish the war in Denmark, in May 1644 Torstenson marched into the Empire; Gallas was unable to stop him, while the Danes sued for peace after their defeat at Fehmarn in October 1644.
In August 1644, the French and Bavarian armies met in the three-day Battle of Freiburg, in which both sides suffered heavy casualties; it is generally viewed as a narrow Bavarian victory. These losses convinced Maximilian the war could no longer be won and he now put pressure on Ferdinand to end the conflict. Shortly after peace talks restarted in November, Gallas' Imperial army disintegrated and the remnants retreated into Bohemia, where they were scattered by Torstenson at Jankau in March 1645. In May, a Bavarian force under von Mercy destroyed a French detachment at Herbsthausen, before he was defeated and killed at Second Nördlingen in August. With Ferdinand unable to help, John George of Saxony signed a six-month truce with Sweden in September, followed by the March 1646 Treaty of Eulenberg in which he agreed to remain neutral until the end of the war.
Now led by Wrangel, who had replaced Torstenson, the Swedes invaded Bavaria in the summer of 1646 and by the autumn, Maximilian was desperate to end the war he was largely responsible for starting. At this point, the Spanish publicised a secret offer by Mazarin to exchange French-occupied Catalonia for the Spanish Netherlands. Angered by what they viewed as French duplicity, the Dutch agreed a truce with Spain in January 1647 and began to negotiate separate peace terms. Having failed to acquire the Netherlands through diplomacy, Mazarin decided to do so by force and to free up resources, on 14 March 1647 he signed the Truce of Ulm with Bavaria, Cologne and Sweden.
The offensive was to be led by Turenne, French commander in the Rhineland, but the plan fell apart when his mostly German troops mutinied, while Bavarian general Johann von Werth refused to comply with the truce. Although the mutinies were quickly suppressed, Maximilian felt obliged to follow Werth's example and in September ordered Bronckhorst-Gronsfeld to combine the remnants of the Bavarian army with Imperial troops under von Holzappel. Outnumbered by a Franco-Swedish army led by Wrangel and Turenne, they were defeated at Zusmarshausen in May 1648 and von Holzappel was killed. Although the bulk of the Imperial army escaped thanks to an effective rearguard action by Raimondo Montecuccoli, Bavaria was left defenceless once again.
The Swedes sent a second force under von Königsmarck to attack Prague, seizing the castle and Malá Strana district in July. The main objective was to gain as much loot as possible before the war ended; they failed to take the Old Town but captured the Imperial library, along with treasures including the Codex Gigas, now in Stockholm. On 5 November, news arrived that Ferdinand had signed peace treaties with France and Sweden on 24 October, ending the war.
The conflict outside Germany
Northern Italy had been contested by France and the Habsburgs for centuries, since it was vital for control of South-West France, an area with a long history of opposition to the central authorities. While Spain remained the dominant power in Italy, its reliance on long exterior lines of communication was a potential weakness, especially the Spanish Road; this overland route allowed them to move recruits and supplies from the Kingdom of Naples through Lombardy to their army in Flanders. The French sought to disrupt the Road by attacking the Spanish-held Duchy of Milan or blocking the Alpine passes through alliances with the Grisons.
A subsidiary territory of the Duchy of Mantua was Montferrat and its fortress of Casale Monferrato, whose possession allowed the holder to threaten Milan. Its importance meant when the last duke in the direct line died in December 1627, France and Spain backed rival claimants, resulting in the 1628 to 1631 War of the Mantuan Succession. The French-born Duke of Nevers was backed by France and the Republic of Venice, his rival the Duke of Guastalla by Spain, Ferdinand II, Savoy and Tuscany. This minor conflict had a disproportionate impact on the Thirty Years' War, since Pope Urban VIII viewed Habsburg expansion in Italy as a threat to the Papal States. The result was to divide the Catholic church, alienate the Pope from Ferdinand II and make it acceptable for France to employ Protestant allies against him.
In March 1629, the French stormed Savoyard positions in the Pas de Suse, lifted the Spanish siege of Casale and captured Pinerolo. The Treaty of Suza then ceded the two fortresses to France and allowed their troops unrestricted passage through Savoyard territory, giving them control over Piedmont and the Alpine passes into Southern France. However, as soon as the main French army withdrew in late 1629, the Spanish and Savoyards besieged Casale once again, while Ferdinand II provided German mercenaries to support a Spanish offensive which routed the main Venetian field army and forced Nevers to abandon Mantua. By October 1630, the French position seemed so precarious their representatives agreed the Treaty of Ratisbon but since the terms effectively destroyed Richelieu's policy of opposing Habsburg expansion, it was never ratified.
Several factors restored the French position in Northern Italy, notably a devastating outbreak of plague; between 1629 and 1631, over 60,000 died in Milan and 46,000 in Venice, with proportionate losses elsewhere. Richelieu took advantage of the diversion of Imperial resources from Germany to fund a Swedish invasion, whose success forced the Spanish-Savoyard alliance to withdraw from Casale and sign the Treaty of Cherasco in April 1631. Nevers was confirmed as Duke of Mantua and although Richelieu's representative, Cardinal Mazarin, agreed to evacuate Pinerolo, it was later secretly returned under an agreement with Victor Amadeus I, Duke of Savoy. With the exception of the 1639 to 1642 Piedmontese Civil War, this secured the French position in Northern Italy for the next twenty years.
After the outbreak of the Franco-Spanish War in 1635, Richelieu supported a renewed offensive by Victor Amadeus against Milan to tie down Spanish resources. Operations included an unsuccessful attack on Valenza in 1635, plus minor victories at Tornavento and Mombaldone. However, the anti-Habsburg alliance in Northern Italy fell apart when first Charles of Mantua died in September 1637, then Victor Amadeus in October; his death led to a struggle for control of the Savoyard state between his widow Christine of France and his brothers Thomas and Maurice.
In 1639, their quarrel erupted into open warfare, with France backing Christine and Spain the two brothers, and resulted in the Siege of Turin. One of the most famous military events of the 17th century, the siege at one stage featured no fewer than three different armies besieging each other. However, the revolts in Portugal and Catalonia forced the Spanish to cease operations in Italy and the war was settled on terms favourable to Christine and France.
In 1647, a French-backed rebellion succeeded in temporarily overthrowing Spanish rule in Naples. The Spanish quickly crushed the insurrection and restored their rule over all of southern Italy, defeating multiple French expeditionary forces sent to back the rebels. However, it exposed the weakness of Spanish rule in Italy and the alienation of the local elites from Madrid; in 1650, the governor of Milan wrote that as well as widespread dissatisfaction in the south, the only one of the Italian states that could be relied on was the Duchy of Parma.
Throughout the 1630s, tax increases levied to pay for the war led to protests throughout Spanish territories, which in 1640 resulted in revolts in both Portugal and the Principality of Catalonia. Backed by France as part of Richelieu's 'war by diversion', in January 1641 the rebels proclaimed a Catalan Republic. The Madrid government quickly assembled an army of 26,000 men to crush the revolt; it defeated the rebels at Martorell on 23 January 1641. The French now persuaded the Catalan Courts to recognise Louis XIII as Count of Barcelona, and ruler of Catalonia.
On 26 January, a combined French-Catalan force routed a larger Spanish army at Montjuïc and secured Barcelona. However, the rebels soon found the new French administration differed little from the old, turning the war into a three-sided contest between the Franco-Catalan elite, the rural peasantry, and the Spanish. There was little serious fighting after France took control of Perpignan and Roussillon, establishing the modern Franco-Spanish border in the Pyrenees. The revolt ended in 1651 with the Spanish capture of Barcelona.
In 1580, Philip II of Spain also became ruler of the Portuguese Empire, creating the Iberian Union; since the Dutch and Portuguese were long-standing commercial rivals, the 1602 to 1663 Dutch–Portuguese War became an offshoot of the Dutch fight for independence from Spain. The Portuguese dominated the trans-Atlantic economy known as the Triangular trade, in which slaves were transported from West Africa and Portuguese Angola to work on plantations in Portuguese Brazil, which exported sugar and tobacco to Europe. Known by Dutch historians as the 'Great Design', control of this trade would not only be extremely profitable but also deprive the Spanish of funds needed to finance their war in the Netherlands.
The Dutch West India Company was formed in 1621 to achieve this purpose and a Dutch fleet captured the Brazilian port of Salvador, Bahia in 1624. After it was retaken by the Portuguese in 1625, a second fleet established Dutch Brazil in 1630, which was not returned until 1654. The second part was seizing slave trading hubs in Africa, chiefly Angola and São Tomé; supported by the Kingdom of Kongo, whose position was threatened by Portuguese expansion, the Dutch successfully occupied both in 1641.
Spain's inability or unwillingness to provide protection against these attacks increased Portuguese resentment and was a major factor in the outbreak of the Portuguese Restoration War in 1640. Although ultimately expelled from Brazil, Angola and São Tomé, the Dutch retained the Cape of Good Hope, as well as Portuguese trading posts in Malacca, the Malabar Coast, the Moluccas and Ceylon.
Peace of Westphalia (1648)
What became known as the Peace of Westphalia consisted of three separate agreements; the Peace of Münster between Spain and the Dutch Republic, the Treaty of Osnabrück between the Empire and Sweden, plus the Treaty of Münster between the Empire and France. Preliminary discussions began in 1642 but only became serious in 1646; a total of 109 delegations attended at one time or other, with talks split between Münster and Osnabrück. The Swedes rejected a proposal that Christian of Denmark act as mediator, and the parties finally agreed on Papal Legate Fabio Chigi and the Venetian envoy Alvise Contarini.
The Peace of Münster was the first to be signed on 30 January 1648; it was part of the Westphalia settlement because the Dutch Republic was still technically part of the Spanish Netherlands and thus Imperial territory. The treaty confirmed Dutch independence, although the Imperial Diet did not formally accept that it was no longer part of the Empire until 1728. The Dutch were also given a monopoly over trade conducted through the Scheldt estuary, ensuring the commercial ascendancy of Amsterdam; Antwerp, capital of the Spanish Netherlands and previously the most important port in Northern Europe, would not recover until the late 19th century.
Negotiations with France and Sweden were conducted in conjunction with the Imperial Diet, and were multi-sided discussions involving many of the German states. This resulted in the treaties of Münster and Osnabrück, making peace with France and Sweden respectively. Ferdinand resisted signing until the last possible moment, doing so on 24 October only after a crushing French victory over Spain at Lens, and with Swedish troops on the verge of taking Prague. It has been argued they were a "major turning point in German and European...legal history", because they went beyond normal peace settlements and effected major constitutional and religious changes to the Empire itself.
Key elements of the Peace were provisions confirming the autonomy of states within the Empire, including Ferdinand's acceptance of the supremacy of the Imperial Diet, and those seeking to prevent future religious conflict. Article 5 reconfirmed the Augsburg settlement, established 1624 as the basis, or "Normaljahr", for determining the dominant religion of a state and guaranteed freedom of worship for religious minorities. Article 7 recognised Calvinism as a Reformed faith and removed the ius reformandi, the requirement that if a ruler changed his religion, his subjects had to follow suit. These terms did not apply to the hereditary lands of the Habsburg monarchy, such as Lower and Upper Austria.
In terms of territorial concessions, Brandenburg-Prussia received Farther Pomerania, and the bishoprics of Magdeburg, Halberstadt, Kammin, and Minden. Frederick's son Charles Louis regained the Lower Palatinate and became the eighth Imperial elector, although Bavaria kept the Upper Palatinate and its electoral vote. Externally, the treaties formally acknowledged the independence of the Dutch Republic and the Swiss Confederacy, effectively autonomous since 1499. In Lorraine, the Three Bishoprics of Metz, Toul and Verdun, occupied by France since 1552, were formally ceded, as were the cities of the Décapole in Alsace, with the exception of Strasbourg and Mulhouse. Sweden received an indemnity of five million thalers, the Imperial territories of Swedish Pomerania, and the Prince-bishoprics of Bremen and Verden, which also gave them a seat in the Imperial Diet.
The Peace was later denounced by Pope Innocent X, who regarded the bishoprics ceded to France and Brandenburg as property of the Catholic church, and thus his to assign. It also disappointed many exiles by accepting Catholicism as the dominant religion in Bohemia, Upper and Lower Austria, all of which were Protestant strongholds prior to 1618. Fighting did not end immediately, since demobilising over 200,000 soldiers was a complex business, and the last Swedish garrison did not leave Germany until 1654. In addition, Mazarin insisted on excluding the Burgundian Circle from the treaty of Münster, allowing France to continue its campaign against Spain in the Low Countries, a war that continued until the 1659 Treaty of the Pyrenees. The political disintegration of Poland-Lithuania led to the 1655 to 1660 Second Northern War with Sweden, which also involved Denmark, Russia and Brandenburg, while two Swedish attempts to impose its control on the port of Bremen failed in 1654 and 1666.
It has been argued the Peace established the principle known as Westphalian sovereignty, the idea of non-interference in domestic affairs by outside powers, although this has since been challenged. The process, or 'Congress' model, was adopted for negotiations at Aix-la-Chapelle in 1668, Nijmegen in 1678, and Ryswick in 1697; unlike the 19th century 'Congress' system, these were to end wars, rather than prevent them, so references to the 'balance of power' can be misleading.
Human and financial cost of the war
Historians often refer to the 'General Crisis' of the mid-17th century, a period of sustained conflict in states such as China, the British Isles, Tsarist Russia and the Holy Roman Empire. In all these areas, war, famine and disease inflicted severe losses on local populations. While the Thirty Years' War certainly ranks as one of the worst of these events, 19th century nationalists often exaggerated its impact to illustrate the dangers of a divided Germany. Suggestions of up to 12 million deaths from a population of 18 million are no longer accepted, while claims of material losses are either not supported by contemporary evidence or in some cases exceed prewar tax records.
By modern standards, the number of soldiers involved was relatively low but the conflict has been described as one of the greatest medical catastrophes in history. Battles generally featured armies of around 13,000 to 20,000 each, the largest being Alte Veste in 1632 with a combined 70,000 to 85,000. Estimates of the total deployed by both sides within Germany range from an average of 80,000 to 100,000 from 1618 to 1626, peaking at 250,000 in 1632 and falling to under 160,000 by 1648. Casualty rates could be extremely high; of 230 men conscripted from the Swedish village of Bygdeå between 1621 and 1639, 215 are recorded as dead or missing, while another five returned home crippled.
Until the mid-19th century, most soldiers died of disease; historian Peter Wilson, aggregating figures from known battles and sieges, gives a figure for those either killed or wounded in combat as around 450,000. Since experience shows two to three times that number either died or were incapacitated by disease, that would suggest total military casualties ranged from 1.3 to 1.8 million dead or otherwise rendered unfit for service. One estimate by Pitirim Sorokin calculates an upper limit of 2,071,000 military casualties, although his methodology has been widely disputed by others. In general, historians agree the war was an unprecedented mortality disaster and the vast majority of casualties, whether civilian or military, took place after Swedish intervention in 1630.
Based on local records, military action accounted for less than 3% of civilian deaths; the major causes were starvation (12%), bubonic plague (64%), typhus (4%), and dysentery (5%). Although regular outbreaks of disease were common for decades prior to 1618, the conflict greatly accelerated their spread. This was due to the influx of soldiers from foreign countries, the shifting locations of battle fronts and displacement of rural populations into already crowded cities. This was not restricted to Germany; disease carried by French and Imperial soldiers allegedly sparked the 1629–1631 Italian plague, leading to an estimated 280,000 deaths, the "worst mortality crisis to affect Italy during the Early modern period". Poor harvests throughout the 1630s and repeated plundering of the same areas led to widespread famine; contemporaries record people eating grass, or too weak to accept alms, while instances of cannibalism were common.
The modern consensus is the population of the Holy Roman Empire declined from 18 to 20 million in 1600 to 11–13 million in 1650, and did not regain pre-war levels until 1750. Nearly 50% of these losses appear to have been incurred during the first period of Swedish intervention from 1630 to 1635. The high mortality rate compared to the Wars of the Three Kingdoms in Britain may partly be due to the reliance of all sides on foreign mercenaries, often unpaid and required to live off the land. Lack of a sense of 'shared community' resulted in atrocities such as the destruction of Magdeburg, in turn creating large numbers of refugees who were extremely susceptible to sickness and hunger. While flight saved lives in the short-term, in the long run it often proved catastrophic.
In 1940, agrarian historian Günther Franz published a detailed analysis of regional data from across Germany covering the period from 1618 to 1648. His conclusion, broadly confirmed by more recent work, was that "about 40% of the rural population fell victim to the war and epidemics; in the cities,...33%". These figures can be misleading, since Franz calculated the absolute decline in pre- and post-war populations, or 'total demographic loss'. They therefore include factors unrelated to death or disease, such as permanent migration to areas outside the Empire or lower birthrates, a common but less obvious impact of extended warfare. There were also wide regional variations; some areas in Northwest Germany were relatively peaceful after 1630 and experienced almost no population loss, while the populations of Mecklenburg, Pomerania and Württemberg fell by nearly 50%.
Although some towns may have overstated their losses to avoid taxes, individual records confirm serious declines; from 1620 to 1650, the population of Munich fell from 22,000 to 17,000, that of Augsburg from 48,000 to 21,000. The financial impact is less clear; while the war caused short-term economic dislocation, especially in the period 1618 to 1623, overall it accelerated existing changes in trading patterns. It does not appear to have reversed ongoing macro-economic trends, such as the reduction of price differentials between regional markets, and a greater degree of market integration across Europe. The death toll may have improved living standards for the survivors; one study shows wages in Germany increased by 40% in real terms between 1603 and 1652.
Innovations made during the war by Gustavus in particular are considered part of the tactical evolution known as the "Military Revolution", although there is some debate as to whether tactics or technology were at the heart of these changes. The underlying reforms had been popularised by Maurice of Orange in the 1590s and sought to increase infantry firepower by moving from massed columns to line formation. Gustavus refined these changes by reducing the ten ranks used by Maurice to six, while increasing the proportion of musketeers to pikemen; in addition, each unit was equipped with quick-firing light artillery pieces on either flank. Perhaps the best example of their application in real life was the defeat of Tilly's traditionally organised army by the Swedes at Breitenfeld in September 1631.
Line formations were not always successful, as demonstrated by the victory of the supposedly obsolete Spanish tercios over the "new model" Swedish army at Nördlingen in 1634. They were also harder to co-ordinate in offensive operations; Gustavus compensated by requiring his cavalry to be far more aggressive, often employing his Finnish light cavalry or Hakkapeliitta as shock troops. He also used columns on occasion, including the failed assault at Alte Veste in September 1632. Columns continued to be viewed as more effective in offensive operations and were used by Napoleon throughout the latter stages of the Napoleonic Wars.
Such tactics needed professional soldiers, who could retain formation, reload and fire disciplined salvos while under attack, as well as the use of standardised weapons. The first half of the 17th century saw the publication of numerous instruction manuals showing the movements required, thirty-two for pikemen and forty-two for musketeers. The period needed to train an infantryman who could operate in this way was estimated as six months, although in reality many went into battle with far less experience. It also placed greater responsibility on junior officers who provided the vital links between senior commanders and the tactical unit. One of the first military schools designed to produce such men was set up at Siegen in 1616 and others soon followed.
On the other hand, strategic thinking failed to develop at the same pace. Historian Jeremy Black claims most campaigns were "inconclusive" and almost exclusively concerned with control of territory, rather than focused strategic objectives. The lack of connection between military and diplomatic goals helps explain why the war lasted so long and why peace proved so elusive. There were a number of reasons for this. When the Peace of Westphalia was signed in 1648, the Franco-Swedish alliance still had over 84,000 men under arms on Imperial territory, their opponents around 77,000; while relatively small in modern terms, such numbers were unprecedented at the time. With the possible exception of Spain, the 17th century state could not support armies of this size, forcing them to depend on "contributions" levied or extorted from areas they passed through.
Obtaining supplies thus became the limiting factor in campaign planning, an issue that grew more acute later in the war when much of the Empire had already been fought over. Even when adequate provisions could be gathered, the next problem was getting them to the troops; to ensure security of supply, commanders were forced to stay close to rivers, then the primary means of bulk transportation, and could not move too far from their main bases. Many historians argue feeding the troops became an objective in itself, unconnected to diplomatic goals and largely uncontrolled by their central governments. The result was "armies increasingly devoid of intelligible political objectives...degenerating into travelling armed mobs living in a symbiotic relationship with the countryside they passed through". This lack of connection often worked against the political aims of their employers; the devastation inflicted in 1628 and 1629 by Imperial troops on Brandenburg and Saxony, both nominally their allies, was a major factor in their support for Swedish intervention.
Social and cultural impact
It has been suggested the breakdown of social order caused by the war was often more significant and longer lasting than the immediate damage. The collapse of local government created landless peasants, who banded together to protect themselves from the soldiers of both sides, and led to widespread rebellions in Upper Austria, Bavaria and Brandenburg. Soldiers devastated one area before moving on, leaving large tracts of land empty of people and changing the ecosystem. Food shortages were worsened by an explosion in the rodent population; Bavaria was overrun by wolves in the winter of 1638, and its crops were destroyed by packs of wild pigs the following spring.
Contemporaries spoke of a 'frenzy of despair' as people sought to make sense of the relentless and often random bloodshed unleashed by the war. Religious authorities attributed it to divine retribution for sin, while other attempts to identify a supernatural cause led to a series of witch-hunts, beginning in Franconia in 1626 and quickly spreading to other parts of Germany. They began in the Bishopric of Würzburg, an area with a history of such events going back to 1616, now re-ignited by Bishop von Ehrenberg, a devout Catholic eager to assert the church's authority in his territories. By the time he died in 1631, over 900 people from all levels of society had been executed.
The Bamberg witch trials, held in the nearby Bishopric of Bamberg from 1626 to 1631, claimed over one thousand lives; in 1629, 274 died in the Eichstätt witch trials, plus another 50 in the adjacent Duchy of Palatinate-Neuburg. Elsewhere, persecution followed Imperial military success, expanding into Baden and the Palatinate following their reconquest by Tilly, then into the Rhineland. However, the extent to which they were symptomatic of the impact of the conflict on society is debatable, since many took place in areas relatively untouched by the war. Concerned their brutality would discredit the Counter-Reformation, Ferdinand ensured active persecution largely ended by 1630.
Although the war caused immense destruction, it has also been credited with sparking a revival in German literature, including the creation of societies dedicated to "purging of foreign elements" from the German language. One example is Simplicius Simplicissimus, often suggested as one of the earliest examples of the picaresque novel; written by Hans Jakob Christoffel von Grimmelshausen in 1668, it includes a realistic portrayal of a soldier's life based on his own experiences, many of which are verified by other sources. Other less famous examples include the diaries of Peter Hagendorf, a participant in the Sack of Magdeburg, whose description of the everyday brutalities of the war remains compelling.
For German, and to a lesser extent Czech writers, the war continued to be remembered as a defining moment of national trauma, the 18th century poet and playwright Friedrich Schiller being one of many to use it in their work. Variously known as the 'Great German War,' 'Great War' or 'Great Schism', for 19th and early 20th century German nationalists it showed the dangers of a divided Germany and was used to justify the creation of the German Empire in 1871, as well as the Greater Germanic Reich envisaged by the Nazis. Bertolt Brecht used it as the backdrop for his 1939 anti-war play Mother Courage and Her Children, while its enduring cultural resonance is illustrated by the novel Tyll; written by Austro-German author Daniel Kehlmann and also set during the war, it was nominated for the 2020 Booker Prize.
The Peace reconfirmed "German liberties", ending Habsburg attempts to convert the Holy Roman Empire into a more centralised state similar to Spain. Over the next 50 years, Bavaria, Brandenburg-Prussia, Saxony and others increasingly pursued their own policies, while Sweden gained a permanent foothold in the Empire. Despite these setbacks, the Habsburg lands suffered less from the war than many others and became a far more coherent bloc with the absorption of Bohemia, and restoration of Catholicism throughout their territories.
By laying the foundations of the modern nation state, Westphalia changed the relationship between subjects and their rulers. Previously, many had overlapping, sometimes conflicting, political and religious allegiances; they were now understood to be subject first and foremost to the laws and edicts of their respective state authority, not the claims of any other entity, religious or secular. This made it easier to levy national forces of significant size, loyal to their state and its leader; one lesson learned from Wallenstein and the Swedish invasion was the need for their own permanent armies, and Germany as a whole became a far more militarised society.
The benefits of Westphalia for the Swedes proved short-lived. Unlike French gains which were incorporated into France, Swedish territories remained part of the Empire, and they became members of the Lower and Upper Saxon kreis. While this gave them seats in the Imperial Diet, it also brought them into direct conflict with both Brandenburg-Prussia and Saxony, their competitors in Pomerania. The income from their imperial possessions remained in Germany and did not benefit the kingdom of Sweden; although they retained parts of Swedish Pomerania until 1815, much of it was ceded to Prussia in 1679 and 1720.
France arguably gained more from the Thirty Years' War than any other power; by 1648, most of Richelieu's objectives had been achieved. These included separation of the Spanish and Austrian Habsburgs, expansion of the French frontier into the Empire, and an end to Spanish military supremacy in Northern Europe. Although the Franco-Spanish conflict continued until 1659, Westphalia allowed Louis XIV of France to begin replacing Spain as the predominant European power.
While differences over religion remained an issue throughout the 17th century, it was the last major war in Continental Europe in which religion can be said to have been a primary driver; later conflicts were either internal, such as the Camisards revolt in South-Western France, or relatively minor like the 1712 Toggenburg War. It created the outlines of a Europe that persisted until 1815 and beyond; the nation-state of France, the beginnings of a unified Germany and separate Austro-Hungarian bloc, a diminished but still significant Spain, independent smaller states like Denmark, Sweden and Switzerland, along with a Low Countries split between the Dutch Republic and what became Belgium in 1830.
[Legend for the belligerents table: directly against the Emperor; indirectly against the Emperor; directly for the Emperor; indirectly for the Emperor]
- States that fought against the Emperor at some point between 1618 and 1635
- States that were allied at some point between 1618 and 1635
- Since officers were paid per soldier, numbers Reported frequently differed from Actual, i.e. those present and available for duty. Variances between Reported and Actual are estimated as averaging up to 25% for the Dutch, 35% for the French and 50% for the Spanish. Most battles of the period were fought between opposing forces of 13,000 to 20,000 men; the numbers reflect Maximum at any one time and exclude citizen militia, who often formed a large proportion of garrisons
- All armies were multinational; an estimated 60,000 Scottish, English or Irish individuals fought on one side or the other during the period; based on an analysis of a mass grave discovered in 2011, fewer than 50% of "Swedish" forces at Lützen came from Scandinavia.
- Maximum in Germany, excludes 24,000 home defence
- Approved 80,000, actual 60,000
- 1640 figures for the Army of Flanders, when it was at its maximum strength; these are Reported numbers, so as mentioned elsewhere, Actual would have been considerably lower. The Spanish army officially had more than 200,000 soldiers in 1640, but most were second line troops in garrisons elsewhere in Europe, not facing the Dutch.
- Parrott suggests many of these should be included in the figures for Imperial troops above, and estimates of irregular cavalry are massively overstated
- Wilson estimates a total of 450,000 combat deaths on all sides, the vast majority of whom were German; by one calculation, four times as many Germans died fighting for Sweden as Swedes did, and hence casualties are referenced as being "in service", rather than by nationality
- France lost another 200,000 to 300,000 killed or wounded in the related Franco-Spanish War
- Wilson estimates three soldiers died of disease for every one killed in combat.
- German: Dreißigjähriger Krieg, pronounced [ˈdʁaɪ̯sɪçˌjɛːʁɪɡɐ kʁiːk] (listen)
- Its official title remains Freie und Hansestadt Hamburg
- There were nearly 1,800 separate Imperial Estates, of whom only 300 were represented in the Imperial Diet or Circles; the majority of the remaining 1,500 were Imperial Knights or individual members of the lower nobility, who were excluded
- As well as being brother-in-law to Frederick of the Palatinate, James I was also linked to Christian IV of Denmark, having married his elder sister Anne of Denmark (1574–1619)
- Not to be confused with Freiberg in Saxony
- Croxton 2013, pp. 225–226.
- Heitz & Rischer 1995, p. 232.
- Parrott 2001, p. 8.
- Nicklisch et al. 2017.
- Schmidt & Richefort 2006, p. 49.
- Wilson 2009, p. 387.
- Parrott 2001, pp. 164–168.
- Van Nimwegen 2010, p. 62.
- Parrott 2001, p. 61.
- Parker 1972, p. 231.
- Clodfelter 2008, p. 39.
- Parrott 2001, p. 62.
- Wilson 2009, p. 791.
- Parker 1984, p. 173.
- Wilson 2009, p. 790.
- Wilson 2009, p. 787.
- Outram 2002, p. 248.
- Wilson 2009, pp. 4, 787.
- Parker 1984, p. 189.
- Sutherland 1992, pp. 589–590.
- Parker 1984, pp. 17–18.
- Sutherland 1992, pp. 602–603.
- Wedgwood 1938, pp. 22–24.
- Wilson 2009, pp. 17–22.
- Wilson 2009, p. 21.
- Wedgwood 1938, pp. 159–161.
- Hayden 1973, pp. 1–23.
- Wilson 2009, p. 222.
- Wilson 2009, p. 224.
- Parker 1984, p. 11.
- Wedgwood 1938, pp. 47–49.
- Wilson 2008, p. 557.
- Wedgwood 1938, p. 50.
- Wedgwood 1938, pp. 63–65.
- Wilson 2009, pp. 271–274.
- Bassett 2015, p. 14.
- Wedgwood 1938, pp. 74–75.
- Wedgwood 1938, pp. 78–79.
- Bassett 2015, pp. 12, 15.
- Wedgwood 1938, pp. 81–82.
- Wedgwood 1938, p. 94.
- Baramova 2014, pp. 121–122.
- Wedgwood 1938, pp. 98–99.
- Wedgwood 1938, pp. 127–129.
- Stutler 2014, pp. 37–38.
- Wedgwood 1938, p. 117.
- Zaller 1974, pp. 147–148.
- Zaller 1974, pp. 152–154.
- Spielvogel 2017, p. 447.
- Pursell 2003, pp. 182–185.
- Wedgwood 1938, pp. 162–164.
- Wedgwood 1938, pp. 179–181.
- Lockhart 2007, pp. 107–109.
- Murdoch 2000, p. 53.
- Wilson 2009, p. 382.
- Davenport 1917, p. 295.
- Wedgwood 1938, p. 208.
- Wedgwood 1938, p. 212.
- Murdoch & Grosjean 2014, pp. 43–44.
- Wilson 2009, p. 426.
- Murdoch & Grosjean 2014, pp. 48–49.
- Lockhart 2007, p. 170.
- Lockhart 2007, p. 172.
- Wedgwood 1938, pp. 232–233.
- Wedgwood 1938, pp. 242–244.
- Maland 1980, pp. 98–99.
- Wedgwood 1938, pp. 385–386.
- Norrhem 2019, pp. 28–29.
- Porshnev 1995, p. 106.
- Parker 1984, p. 120.
- O'Connell 1968, pp. 253–254.
- O'Connell 1968, p. 256.
- Porshnev 1995, p. 38.
- Wedgwood 1938, pp. 305–306.
- Brzezinski 2001, p. 4.
- Wilson 2018, p. 89.
- Wilson 2009, p. 509.
- Wilson 2018, p. 99.
- Brzezinski 2001, p. 74.
- Wilson 2009, p. 523.
- Wedgwood 1938, pp. 220–222.
- Kamen 2003, pp. 385–386.
- Parker 1984, pp. 132–134.
- Bireley 1976, p. 32.
- Kamen 2003, p. 387.
- Israel 1995, pp. 272–273.
- Murdoch, Zickerman & Marks 2012, pp. 80–85.
- Wilson 2009, pp. 595–598.
- Wilson 2009, p. 615.
- Wilson 2009, pp. 661–662.
- Pazos 2011, pp. 130–131.
- Bely 2014, pp. 94–95.
- Costa 2005, p. 4.
- Van Gelderen 2002, p. 284.
- Parker 1984, p. 150.
- Wedgwood 1938, p. 446.
- Wedgwood 1938, p. 447.
- Clodfelter 2008, p. 41.
- Wilson 2009, pp. 636–639.
- Wilson 2009, pp. 641–642.
- Milton, Axworthy & Simms 2018, pp. 60–65.
- Parker 1984, p. 153.
- Wilson 2009, p. 587.
- Wilson 2009, pp. 643–645.
- Wilson 2009, p. 671.
- Wilson 2009, p. 687.
- Wedgwood 1938, pp. 472–473.
- Croxton 1998, p. 273.
- Wilson 2009, pp. 693–695.
- Bonney 2002, p. 64.
- Wilson 2009, p. 711.
- Wedgwood 1938, pp. 493–494.
- Wedgwood 1938, pp. 495–496.
- Wilson 2009, p. 716.
- Wedgwood 1938, p. 496.
- Wilson 2009, p. 726.
- Wilson 2009, pp. 740–741.
- Wedgwood 1938, p. 501.
- Hanlon 2016, pp. 118–119.
- Wedgwood 1938, pp. 235–236.
- Wedgwood 1938, p. 247.
- Thion 2008, p. 62.
- Ferretti 2014, pp. 12–18.
- Wedgwood 1938, pp. 263–264.
- Kohn 1995, p. 200.
- Ferretti 2014, p. 20.
- Duffy 1995, p. 125.
- Wilson 2009, p. 259.
- Hanlon 2016, p. 124.
- Kamen 2003, p. 406.
- Kamen 2003, p. 407.
- Mitchell 2005, pp. 431–448.
- Thornton 2016, pp. 189–190.
- Van Groesen 2011, pp. 167–168.
- Thornton 2016, pp. 194–195.
- Gnanaprakasar 2003, pp. 153–172.
- Croxton 2013, pp. 3–4.
- Wilson 2009, p. 746.
- Israel 1995, pp. 197–199.
- Wedgwood 1938, pp. 500–501.
- Lesaffer 1997, p. 71.
- "The Peace of Westphalia" (PDF). University of Oregon. Retrieved 30 September 2021.
- Wilson 2009, p. 707.
- Ryan 1948, p. 597.
- Wedgwood 1938, p. 504.
- Wilson 2009, p. 757.
- Croxton 2013, pp. 331–332.
- Parker 2008, p. 1053.
- Wedgwood 1938, p. 510.
- Parker 1984, pp. 188–189.
- Outram 2001, p. 155.
- Clodfelter 2008, p. 40.
- Levy 1983, pp. 88–91.
- Outram 2001, pp. 156–159.
- Outram 2001, pp. 160–161.
- Outram 2002, p. 250.
- Alfani & Percoco 2016, p. 2.
- Wilson 2009, p. 345.
- Parker 2008, p. 1058.
- Parker 1984, p. 122.
- Outram 2002, pp. 245–246.
- Outram 2001, p. 152.
- Wedgwood 1938, p. 512.
- Schulze & Volckart 2019, p. 30.
- Pfister, Riedel & Uebele 2012, p. 18.
- Sharman 2018, pp. 493–495.
- Parker 1984, p. 185.
- Parker 1976, p. 200.
- Chandler 1990, pp. 130–137.
- Parker 1976, p. 202.
- Parker 1984, p. 184.
- Croxton 1998, p. 254.
- Wilson 2009, p. 770.
- Parker 1984, p. 177.
- Croxton 1998, pp. 255–256.
- O'Connell 1990, p. 147.
- Wedgwood 1938, pp. 257–258.
- Wedgwood 1938, p. 516.
- Wilson 2009, p. 784.
- White 2012, p. 220.
- Jensen 2007, p. 93.
- Trevor-Roper 1967, pp. 83–117.
- Briggs 1996, p. 163.
- Briggs 1996, pp. 171–172.
- Talbott 2021, pp. 3–4.
- Helfferich 2009, pp. 283–284.
- Cramer 2007, pp. 18–19.
- Talbott 2021, p. 6.
- McMurdie 2014, p. 65.
- Bonney 2002, pp. 89–90.
- McMurdie 2014, pp. 67–68.
- Lee 2001, pp. 67–68.
- Storrs 2006, pp. 6–7.
- Gutmann 1988, pp. 752–754.
- Alfani, Guido; Percoco, Marco (2016). "Plague and long-term development: the lasting effects of the 1629–30 epidemic on the Italian cities" (PDF). The Economic History Review. 72 (4): 1175–1201. doi:10.1111/ehr.12652. ISSN 1468-0289. S2CID 131730725.
- Baramova, Maria (2014). Asbach, Olaf; Schröder, Peter (eds.). Non-splendid isolation: the Ottoman Empire and the Thirty Years War in The Ashgate Research Companion to the Thirty Years' War. Routledge. ISBN 978-1-4094-0629-7.
- Bassett, Richard (2015). For God and Kaiser; the Imperial Austrian Army. Yale University Press. ISBN 978-0-300-17858-6.
- Bely, Lucien (2014). Asbach, Olaf; Schröder, Peter (eds.). France and the Thirty Years War in The Ashgate Research Companion to the Thirty Years' War. Ashgate. ISBN 978-1-4094-0629-7.
- Bireley, Robert (1976). "The Peace of Prague (1635) and the Counterreformation in Germany". The Journal of Modern History. 48 (1): 31–69. doi:10.1086/241519. S2CID 143376778.
- Bonney, Richard (2002). The Thirty Years' War 1618–1648. Osprey Publishing.
- Briggs, Robin (1996). Witches & Neighbors: The Social And Cultural Context of European Witchcraft. Viking. ISBN 978-0-670-83589-8.
- Brzezinski, Richard (2001). Lützen 1632: Climax of the Thirty Years War: The Clash of Empires. Osprey. ISBN 978-1-85532-552-4.
- Chandler, David (1990). The Art of Warfare in the Age of Marlborough. Spellmount Publishers Ltd. ISBN 978-0946771424.
- Clodfelter, Micheal (2008). Warfare and Armed Conflicts: A Statistical Encyclopedia of Casualty and Other Figures, 1492–2015 (2017 ed.). McFarland. ISBN 978-0-7864-7470-7.
- Costa, Fernando Dores (2005). "Interpreting the Portuguese War of Restoration (1641-1668) in a European Context". Journal of Portuguese History. 3 (1).
- Cramer, Kevin (2007). The Thirty Years' War & German Memory in the Nineteenth Century. University of Nebraska. ISBN 978-0-8032-1562-7.
- Croxton, Derek (2013). The Last Christian Peace: The Congress of Westphalia as A Baroque Event. Palgrave Macmillan. ISBN 978-1-137-33332-2.
- Croxton, Derek (1998). "A Territorial Imperative? The Military Revolution, Strategy and Peacemaking in the Thirty Years War". War in History. 5 (3): 253–279. doi:10.1177/096834459800500301. JSTOR 26007296. S2CID 159915965.
- Davenport, Frances Gardiner (1917). European Treaties Bearing on the History of the United States and Its Dependencies (2014 ed.). Literary Licensing. ISBN 978-1-4981-4446-9.
- Duffy, Christopher (1995). Siege Warfare: The Fortress in the Early Modern World 1494-1660. Routledge. ISBN 978-0415146494.
- Ferretti, Giuliano (2014). "La politique italienne de la France et le duché de Savoie au temps de Richelieu; Franco-Savoyard Italian policy in the time of Richelieu". Dix-septième Siècle (in French). 1 (262): 7. doi:10.3917/dss.141.0007.
- Friehs, Julia Teresa. "Art and the Thirty Years' War". Habsburger.net. Retrieved 8 August 2021.
- Gnanaprakasar, Nalloor Swamy (2003). Critical History of Jaffna – The Tamil Era. Asian Educational Services. ISBN 978-81-206-1686-8.
- Gutmann, Myron P. (1988). "The Origins of the Thirty Years' War". Journal of Interdisciplinary History. 18 (4): 749–770. doi:10.2307/204823. JSTOR 204823.
- Hanlon, Gregory (2016). The Twilight Of A Military Tradition: Italian Aristocrats And European Conflicts, 1560-1800. Routledge. ISBN 978-1-138-15827-6.
- Hayden, J. Michael (1973). "Continuity in the France of Henry IV and Louis XIII: French Foreign Policy, 1598-1615". The Journal of Modern History. 45 (1): 1–23. doi:10.1086/240888. JSTOR 1877591. S2CID 144914347.
- Helfferich, Tryntje (2009). The Thirty Years War: A Documentary History. Hackett Publishing Co, Inc. ISBN 978-0872209398.
- Heitz, Gerhard; Rischer, Henning (1995). Geschichte in Daten. Mecklenburg-Vorpommern; History in data; Mecklenburg-Western Pomerania (in German). Koehler&Amelang. ISBN 3-7338-0195-4.
- Israel, Jonathan (1995). Spain in the Low Countries, (1635-1643) in Spain, Europe and the Atlantic: Essays in Honour of John H. Elliott. Cambridge University Press. ISBN 978-0-521-47045-2.
- Jensen, Gary F. (2007). The Path of the Devil: Early Modern Witch Hunts. Rowman & Littlefield. ISBN 978-0-7425-4697-4.
- Kamen, Henry (2003). Spain's Road to Empire. Allen Lane. ISBN 978-0140285284.
- Kohn, George (1995). Encyclopedia of Plague and Pestilence: From Ancient Times to the Present. Facts on file. ISBN 978-0-8160-2758-3.
- Lee, Stephen (2001). The Thirty Years War (Lancaster Pamphlets). Routledge. ISBN 978-0-415-26862-2.
- Lesaffer, Randall (1997). "The Westphalia Peace Treaties and the Development of the Tradition of Great European Peace Settlements prior to 1648". Grotiana. 18 (1): 71–95. doi:10.1163/187607597X00064.
- Levy, Jack S (1983). War in the Modern Great Power System: 1495 to 1975. University Press of Kentucky.
- Lockhart, Paul D (2007). Denmark, 1513–1660: the rise and decline of a Renaissance monarchy. Oxford University Press. ISBN 978-0-19-927121-4.
- Maland, David (1980). Europe at War, 1600–50. Palgrave Macmillan. ISBN 978-0-333-23446-4.
- McMurdie, Justin (2014). The Thirty Years' War: Examining the Origins and Effects of Corpus Christianum's Defining Conflict (PhD thesis). George Fox University.
- Milton, Patrick; Axworthy, Michael; Simms, Brendan (2018). Towards The Peace Congress of Münster and Osnabrück (1643–1648) and the Westphalian Order (1648–1806) in "A Westphalia for the Middle East". C Hurst & Co Publishers Ltd. ISBN 978-1-78738-023-3.
- Mitchell, Andrew Joseph (2005). Religion, revolt, and creation of regional identity in Catalonia, 1640–1643 (PhD thesis). Ohio State University.
- Murdoch, Steve (2000). Britain, Denmark-Norway and the House of Stuart 1603–1660. Tuckwell. ISBN 978-1-86232-182-3.
- Murdoch, S.; Zickerman, K; Marks, H (2012). "The Battle of Wittstock 1636: Conflicting Reports on a Swedish Victory in Germany". Northern Studies. 43.
- Murdoch, Steve; Grosjean, Alexia (2014). Alexander Leslie and the Scottish generals of the Thirty Years' War, 1618–1648. London: Pickering & Chatto.
- Nicklisch, Nicole; Ramsthaler, Frank; Meller, Harald; Others (2017). "The face of war: Trauma analysis of a mass grave from the Battle of Lützen (1632)". PLOS ONE. 12 (5): e0178252. Bibcode:2017PLoSO..1278252N. doi:10.1371/journal.pone.0178252. PMC 5439951. PMID 28542491.
- Norrhem, Svante (2019). Mercenary Swedes; French subsidies to Sweden 1631–1796. Translated by Merton, Charlotte. Nordic Academic Press. ISBN 978-91-88661-82-1.
- O'Connell, Daniel Patrick (1968). Richelieu. Weidenfeld & Nicolson.
- O'Connell, Robert L (1990). Of Arms and Men: A History of War, Weapons, and Aggression. OUP. ISBN 978-0195053593.
- Outram, Quentin (2001). "The Socio-Economic Relations of Warfare and the Military Mortality Crises of the Thirty Years' War" (PDF). Medical History. 45 (2): 151–184. doi:10.1017/S0025727300067703. PMC 1044352. PMID 11373858.
- Outram, Quentin (2002). "The Demographic impact of early modern warfare". Social Science History. 26 (2): 245–272. doi:10.1215/01455532-26-2-245.
- Parker, Geoffrey (2008). "Crisis and Catastrophe: The global crisis of the seventeenth century reconsidered". American Historical Review. 113 (4): 1053–1079. doi:10.1086/ahr.113.4.1053.
- Parker, Geoffrey (1976). "The "Military Revolution," 1560-1660--a Myth?". The Journal of Modern History. 48 (2): 195–214. doi:10.1086/241429. JSTOR 1879826. S2CID 143661971.
- Parker, Geoffrey (1984). The Thirty Years' War (1997 ed.). Routledge. ISBN 978-0-415-12883-4. (with several contributors)
- Parker, Geoffrey (1972). Army of Flanders and the Spanish Road, 1567-1659: The Logistics of Spanish Victory and Defeat in the Low Countries' Wars (2004 ed.). CUP. ISBN 978-0-521-54392-7.
- Parrott, David (2001). Richelieu's Army: War, Government and Society in France, 1624–1642. Cambridge University Press. ISBN 978-0-521-79209-7.
- Pazos, Conde Miguel (2011). "El tradado de Nápoles. El encierro del príncipe Juan Casimiro y la leva de Polacos de Medina de las Torres (1638–1642): The Treaty of Naples; the imprisonment of John Casimir and the Polish Levy of Medina de las Torres". Studia Histórica, Historia Moderna (in Spanish). 33.
- Pfister, Ulrich; Riedel, Jana; Uebele, Martin (2012). "Real Wages and the Origins of Modern Economic Growth in Germany, 16th to 19th Centuries" (PDF). European Historical Economics Society. 17.
- Porshnev, Boris Fedorovich (1995). Dukes, Paul (ed.). Muscovy and Sweden in the Thirty Years' War, 1630–1635. Cambridge University Press. ISBN 978-0-521-45139-0.
- Pursell, Brennan C. (2003). The Winter King: Frederick V of the Palatinate and the Coming of the Thirty Years' War. Ashgate. ISBN 978-0-7546-3401-0.
- Ryan, E.A. (1948). "Catholics and the Peace of Westphalia" (PDF). Theological Studies. 9 (4): 590–599. doi:10.1177/004056394800900407. S2CID 170555324. Retrieved 7 October 2020.
- Schmidt, Burghart; Richefort, Isabelle (2006). "Les relations entre la France et les villes hanséatiques de Hambourg, Brême et Lübeck : Moyen Age-XIXe siècle; Relations between France and the Hanseatic ports of Hamburg, Bremen and Lubeck from the Middle Ages to the 19th century". Direction des Archives, Ministère des affaires étrangères (in French).
- Schulze, Max-Stefan; Volckart, Oliver (2019). "The Long-term Impact of the Thirty Years War: What Grain Price Data Reveal" (PDF). Economic History.
- Sharman, J.C (2018). "Myths of military revolution: European expansion and Eurocentrism". European Journal of International Relations. 24 (3): 491–513. doi:10.1177/1354066117719992. S2CID 148771791.
- Spielvogel, Jackson (2017). Western Civilisation. Wadsworth Publishing. ISBN 978-1-305-95231-7.
- Storrs, Christopher (2006). The Resilience of the Spanish Monarchy 1665–1700. OUP. ISBN 978-0-19-924637-3.
- Stutler, James Oliver (2014). Lords of War: Maximilian I of Bavaria and the Institutions of Lordship in the Catholic League Army, 1619–1626 (PDF) (PhD thesis). Duke University.
- Sutherland, NM (1992). "The Origins of the Thirty Years War and the Structure of European Politics". The English Historical Review. CVII (CCCCXXIV): 587–625. doi:10.1093/ehr/cvii.ccccxxiv.587.
- Talbott, Siobhan (2021). "'Causing misery and suffering miserably': Representations of the Thirty Years' War in Literature and History". Sage. 30 (1): 3–25. doi:10.1177/03061973211007353. S2CID 234347328.
- Thion, Stephane (2008). French Armies of the Thirty Years' War. Auzielle: Little Round Top Editions.
- Thornton, John (2016). "The Kingdom of Kongo and the Thirty Years' War". Journal of World History. 27 (2): 189–213. doi:10.1353/jwh.2016.0100. JSTOR 43901848. S2CID 163706878.
- Trevor-Roper, Hugh (1967). The Crisis of the Seventeenth Century: Religion, the Reformation and Social Change (2001 ed.). Liberty Fund. ISBN 978-0-86597-278-0.
- Van Gelderen, Martin (2002). Republicanism and Constitutionalism in Early Modern Europe: A Shared European Heritage Volume I. Cambridge University Press. ISBN 978-0-521-80203-1.
- Van Groesen, Michiel (2011). "Lessons Learned: The Second Dutch Conquest of Brazil and the Memory of the First". Colonial Latin American Review. 20 (2): 167–193. doi:10.1080/10609164.2011.585770. S2CID 218574377.
- Van Nimwegen, Olaf (2010). The Dutch Army and the Military Revolutions, 1588–1688. Boydell Press. ISBN 978-1-84383-575-2.
- Wedgwood, C.V. (1938). The Thirty Years War (2005 ed.). New York Review of Books. ISBN 978-1-59017-146-2.
- White, Matthew (2012). The Great Big Book of Horrible Things. W.W. Norton & Co. ISBN 978-0-393-08192-3.
- Wilson, Peter H. (2009). Europe's Tragedy: A History of the Thirty Years War. Allen Lane. ISBN 978-0-7139-9592-3.
- Wilson, Peter H. (2018). Lützen: Great Battles Series. Oxford: Oxford University Press. ISBN 978-0199642540.
- Wilson, Peter (2008). "The Causes of the Thirty Years War 1618–48". The English Historical Review. 123 (502): 554–586. doi:10.1093/ehr/cen160. JSTOR 20108541.
- Zaller, Robert (1974). "'Interest of State': James I and the Palatinate". Albion: A Quarterly Journal Concerned with British Studies. 6 (2): 144–175. doi:10.2307/4048141. JSTOR 4048141.
- Åberg, A. (1973). "The Swedish Army from Lützen to Narva". In Roberts, M. (ed.). Sweden's Age of Greatness, 1632–1718. St. Martin's Press.
- Benecke, Gerhard (1978). Germany in the Thirty Years War. St. Martin's Press.
- Grosjean, Alexia (2003). An Unofficial Alliance: Scotland and Sweden, 1569–1654. Leiden: Brill.
- Dukes, Paul, ed. (1995). Muscovy and Sweden in the Thirty Years' War 1630–1635. Cambridge University Press. ISBN 978-0-521-45139-0.
- Kamen, Henry (1968). "The Economic and Social Consequences of the Thirty Years' War". Past and Present. 39 (39): 44–61. doi:10.1093/past/39.1.44. JSTOR 649855.
- Langer, Herbert (1980). The Thirty Years' War (1990 ed.). Dorset Press. ISBN 978-0-88029-262-7.
- Lynn, John A. (1999). The Wars of Louis XIV: 1667–1714. Harlow, England: Longman.
- Murdoch, Steve (2001). Scotland and the Thirty Years' War, 1618–1648. Brill.
- Polišenský, J. V. (1954). "The Thirty Years' War". Past and Present. 6 (6): 31–43. doi:10.1093/past/6.1.31. JSTOR 649813.
- Polišenský, J. V. (1968). "The Thirty Years' War and the Crises and Revolutions of Seventeenth-Century Europe". Past and Present. 39 (39): 34–43. doi:10.1093/past/39.1.34. JSTOR 649854.
- Polisensky, Joseph (2001). "A Note on Scottish Soldiers in the Bohemian War, 1619–1622". In Murdoch, Steve (ed.). A Note on Scottish Soldiers in the Bohemian War, 1619–1622 in 'Scotland and the Thirty Years' war, 1618–1648. Brill. ISBN 978-90-04-12086-0.
- Prinzing, Friedrich (1916). Epidemics Resulting from Wars. Clarendon Press.
- Rabb, Theodore K. (1962). "The Effects of the Thirty Years' War on the German Economy". Journal of Modern History. 34 (1): 40–51. doi:10.1086/238995. JSTOR 1874817. S2CID 154709047.
- Reilly, Pamela (1959). "Friedrich von Spee's Belief in Witchcraft: Some Deductions from the 'Cautio Criminalis'". The Modern Language Review. 54 (1): 51–55. doi:10.2307/3720833. JSTOR 3720833.
- Ringmar, Erik (1996). Identity, Interest and Action: A Cultural Explanation of the Swedish Intervention in the Thirty Years War (2008 ed.). Cambridge University Press. ISBN 978-0-521-02603-1.
- Roberts, Michael (1958). Gustavus Adolphus: A History of Sweden, 1611–1632. Longmans, Green and C°.
- Theibault, John (1997). "The Demography of the Thirty Years War Re-revisited: Günther Franz and his Critics". German History. 15 (1): 1–21. doi:10.1093/gh/15.1.1.
- Ward, A.W. (1902). The Cambridge Modern History. Vol. 4: The Thirty Years War. |
In the previous set we started with arithmetic operations on vectors. We’ll take this a step further now, by practising functions to summarize, sort and round the elements of a vector.
So far, the functions we have practised (e.g. acos) always return a vector with the same length as the input vector. In other words, the function is applied element by element to the elements of the input vector. Not all functions behave this way though. For example, the function min(x) returns a single value (the minimum of all values in x), regardless of whether x has length 1, 100 or 100,000.
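To make the difference concrete, here is a minimal sketch with a small made-up vector (the numbers, and the choice of sqrt as the element-wise example, are arbitrary):

x <- c(4, 9, 16)
sqrt(x)  # element-wise: returns a vector of the same length: 2 3 4
min(x)   # aggregating: returns a single value: 4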
Before starting the exercises, please note this is the third set in a series of five: in the first two sets, we practised creating vectors and vector arithmetic. In the fourth set (posted next week) we will practise regular sequences and replications.
You can find all sets right now in our ebook Start Here To Learn R – vol. 1: Vectors, arithmetic, and regular sequences. The book also includes all solutions (carefully explained), and the fifth and final set of the series. This final set focuses on the application of the concepts you learned in the first four sets, to real-world data.
One more thing: I would really appreciate your feedback on these exercises: Which ones did you like? Which ones were too easy or too difficult? Please let me know what you think here!
Did you know R actually has lots of built-in datasets that we can use to practise? For example, the rivers data “gives the lengths (in miles) of 141 “major” rivers in North America, as compiled by the US Geological Survey” (you can find this description, and additional information, if you enter help(rivers) in R). Also, for an overview of all built-in datasets, enter data().
Have a look at the rivers data by simply entering rivers at the R prompt. Create a vector v with 8 elements, containing the number of elements (length) of rivers, their sum (sum), mean (mean), median (median), variance (var), standard deviation (sd), minimum (min) and maximum (max).
For many functions, we can tweak their result through additional arguments. For example, the mean function accepts a trim argument, which trims a fraction of observations from both the low and high end of the vector the function is applied to.
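As a minimal sketch of what trim does (the vector below is made up, and is not the one used in the exercise):

x <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 1000)
mean(x)              # 104.5, pulled up by the outlier
mean(x, trim = 0.1)  # drops the single lowest and highest value, leaving the mean of 2..9: 5.5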
- What is the result of mean(c(-100, 0, 1, 2, 3, 6, 50, 73), trim=0.25)? Don’t use R, but try to infer the result from the explanation of the trim argument I just gave. Then check your answer with R.
- Calculate the mean of rivers after trimming the 10 highest and lowest observations. Hint: first calculate the trim fraction, using the length function.
Some functions accept multiple vectors as inputs. For example, the cor function accepts two vectors and returns their correlation coefficient. The women data “gives the average heights and weights for American women aged 30-39”. It contains two vectors, height and weight, which we access after entering attach(women) (we’ll discuss the details of attach in a later chapter).
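Before turning to the women data, here is a minimal sketch of how cor is called, using two made-up vectors (y is simply 2 * x, so the correlation is exactly 1):

x <- c(1, 2, 3, 4, 5)
y <- 2 * x
cor(x, y)  # returns 1: a perfect positive correlation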
- Using the cor function, show that the average height and weight of these women are almost perfectly correlated.
- Calculate their covariance, using the cov function.
- The cor function accepts a third argument method, which allows for three distinct methods (“pearson”, “kendall”, “spearman”) to calculate the correlation. Repeat part (a) of this exercise for each of these methods. Which is the method chosen by default (i.e. without specifying the method explicitly)?
In the previous three exercises, we practised functions that accept one or more vectors of any length as input, but return a single value as output. We’re now returning to functions that return a vector of the same length as their input vector. Specifically, we’ll practise rounding functions. R has several functions for rounding. Let’s start with floor, ceiling and trunc:
- floor(x) rounds to the largest integer not greater than x
- ceiling(x) rounds to the smallest integer not less than x
- trunc(x) returns the integer part of x
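For instance, here is what the three functions return for one positive and one negative value (the results are shown as comments; the number 2.7 is arbitrary):

floor(2.7)     # 2
ceiling(2.7)   # 3
trunc(2.7)     # 2
floor(-2.7)    # -3
ceiling(-2.7)  # -2
trunc(-2.7)    # -2, since trunc always rounds towards zero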
To appreciate the difference between the three, I suggest you first play around a bit in R with them. Just pick any number (with or without a decimal point, positive and negative values), and see the result each of these functions gives you. Then make it somewhat closer to the next integer (either above or below), or flip the sign, and see what happens. Then continue with the following exercise:
Below you will find a series of arguments (x), and results (y), that can be obtained by choosing one or more of the 3 functions above (e.g. y <- floor(x)). Which of the above 3 functions could have been used in each case? First, choose your answer without using R, then check with R.
x <- c(300.99, 1.6, 583, 42.10)
y <- c(300, 1, 583, 42)
x <- c(152.34, 1940.63, 1.0001, -2.4, sqrt(26))
y <- c(152, 1940, 1, -2, 5)
x <- -c(3.2, 444.35, 1/9, 100)
y <- c(-3, -444, 0, -100)
x <- c(35.6, 670, -5.4, 3^3)
y <- c(36, 670, -5, 27)
In addition to floor, trunc and ceiling, R also has the round and signif rounding functions. The latter two accept a second argument digits. In case of round, this is the number of decimal places, and in case of signif, the number of significant digits. As with the previous exercise, first play around a little, and see how these functions behave; a quick example follows. Then continue with the exercise below:
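Here is that quick example, a minimal sketch with arbitrary numbers; the comments show what R returns:

round(123.456, digits = 1)    # 123.5 (one decimal place)
signif(123.456, digits = 1)   # 100 (one significant digit)
round(0.004567, digits = 2)   # 0
signif(0.004567, digits = 2)  # 0.0046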
Below you will find a series of arguments (x), and results (y), that can be obtained by choosing one, or both, of the 2 functions above (e.g. y <- round(x, digits=d)). Which of these functions could have been used in each case, and what should the value of d be? First, choose your answer without using R, then check with R.
x <- c(35.63, 300.20, 0.39, -57.8)
y <- c(36, 300, 0, -58)
x <- c(153, 8642, 10, 39.842)
y <- c(153.0, 8640.0, 10.0, 39.8)
x <- c(3.8, 0.983, -23, 7.1)
y <- c(3.80, 0.98, -23.00, 7.10)
Ok, let’s continue with a really interesting function: cumsum. This function returns a vector of the same length as its input vector. But contrary to the previous functions, the value of an element in the output vector depends not only on its corresponding element in the input vector, but on all previous elements in the input vector. So, its results are cumulative, hence the cum prefix. Take for example: cumsum(c(0, 1, 2, 3, 4, 5)), which returns: 0, 1, 3, 6, 10, 15. Do you notice the pattern?
Functions that are similar in their behavior to cumsum are cumprod, cummax and cummin. From just their naming, you might already have an idea how they work, and I suggest you play around a bit with them in R before continuing with the exercise.
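If you would like a concrete starting point for that, this minimal sketch (with an arbitrary vector) shows all four cumulative functions side by side:

x <- c(2, 5, 3, 8, 1)
cumsum(x)   # 2 7 10 18 19
cumprod(x)  # 2 10 30 240 240
cummax(x)   # 2 5 5 8 8
cummin(x)   # 2 2 2 2 1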
- The nhtemp data contain “the mean annual temperature in degrees Fahrenheit in New Haven, Connecticut, from 1912 to 1971”. (Although nhtemp is not a vector, but a time series object (which we’ll learn the details of later), for the purpose of this exercise this doesn’t really matter.) Use one of the four functions above to calculate the maximum mean annual temperature in New Haven observed since 1912, for each of the years 1912-1971.
- Suppose you put $1,000 in an investment fund that will exhibit the following annual returns in the next 10 years: 9% 18% 10% 7% 2% 17% -8% 5% 9% 33%. Using one of the four functions above, show how much money your investment will be worth at the end of each year for the next 10 years, assuming returns are re-invested every year. Hint: If an investment returns e.g. 4% per year, it will be worth 1.04 times as much after one year, 1.04 * 1.04 times as much after two years, 1.04 * 1.04 * 1.04 times as much after three years, etc.
R has several functions for sorting data: sort takes a vector as input, and returns the same vector with its elements sorted in increasing order. To reverse the order, you can add a second argument: decreasing=TRUE.
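As a minimal sketch of both forms, using a throw-away vector:

x <- c(3, 1, 4, 1, 5)
sort(x)                     # 1 1 3 4 5
sort(x, decreasing = TRUE)  # 5 4 3 1 1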
- Use the women data (exercise 3) and create a vector x with the elements of the height vector sorted in decreasing order.
- Let’s look at the rivers data (exercise 1) from another perspective. Looking at the 141 data points in rivers, at first glance it seems quite a lot have zero as their last digit. Let’s examine this a bit closer. Use the modulo operator you practised in exercise 9 of the previous exercise set to isolate the last digit of the rivers vector, sort the digits in increasing order, and look at the sorted vector on your screen. How many are zero?
- What is the total length of the 4 largest rivers combined? Hint: Sort the rivers vector from longest to shortest, and use one of the cum... functions to show their combined length. Read off the appropriate answer from your screen.
Another sorting function is rank, which returns the ranks of the values of a vector. Have a look at the following output:
x <- c(100465, -300, 67.1, 1, 1, 0)
rank(x)
## 6.0 1.0 5.0 3.5 3.5 2.0
- Can you describe in your own words what rank does?
- In exercise 3(c) you estimated the correlation between height and weight, using Spearman’s rho statistic. Try to replicate this using the cor function, without the method argument (i.e., using its default Pearson method, and using rank to first obtain the ranks of height and weight).
A third sorting function is order. Have a look again at the vector x introduced in the previous exercise, and the output of order applied to this vector:
x <- c(100465, -300, 67.1, 1, 1, 0)
order(x)
## 2 6 4 5 3 1
- Can you describe in your own words what order does? Hint: look at the output of sort(x) if you run into trouble.
- Remember the time series of mean annual temperature in New Haven, Connecticut, in exercise 6? Have a look at the output of order(nhtemp):
## 6 15 29 9 13 3 5 12 7 23 1 24 47 25 51 14 18 32 16 11 56 8 17
## 28 45 52 31 37 4 22 36 39 54 19 34 26 49 30 33 53 55 21 27 58 10 50
## 57 59 43 44 35 2 46 48 40 20 60 41 38 42
Given that the starting year for this series is 1912, in which years did the lowest and highest mean annual temperature occur?
- What is the result of order(sort(x)), if x is a vector of length 100, and all of its elements are numbers? Explain your answer.
In exercise 1 of this set, we practised the max function, followed by the cummax function in exercise 6. In the final exercise of this set, we’re returning to this topic, and will practise yet another function to find a maximum. While the former two functions applied to a single vector, it’s also possible to find a maximum across multiple vectors.
- First let’s see how max deals with multiple vectors. Create two vectors x and y, where x contains the first 5 even numbers greater than zero, and y contains the first 5 uneven numbers greater than zero. Then see what max does, as in max(x, y). Is there a difference with max(c(x, y))?
- Now, try pmax(x, y), where p stands for “parallel”. Without using R, what do you think, intuitively, it will return? Then check, and perhaps refine, your answer with R.
- Now try to find the parallel minimum of x and y. Again, first try to write down the output you expect. Then check with R (I assume you can guess the appropriate name of the function).
- Let’s move from two to three vectors. In addition to x and y, include -x as a third vector. Write down the expected output for the parallel minima and maxima, then check your answer with R.
- Finally, let’s find out how pmax and pmin handle vectors of different lengths. Write down the expected output for the following statements, then check your answer with R.
pmax(c(x, x), y)
pmin(x, c(y, y), 3) |
In computer programming, assembly language (or assembler language), sometimes abbreviated asm, is any low-level programming language in which there is a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported.
Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time. Assembly language may also be called symbolic machine code.
Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages [nb 2] can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, a much more complicated task than assembling.
Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built in and some user defined. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging.
Some assemblers are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. Some assemblers are hybrid, with, e.g., labels, in a specific column and other fields separated by delimiters; this became more common than column oriented syntax in the 1960s.
All of the IBM assemblers for System/360, by default, have a label in column 1, fields separated by delimiters in columns 2-71, a continuation indicator in column 72 and a sequence number in columns 73-80. The delimiter for label, opcode, operands and comments is spaces, while individual operands are separated by commas and parentheses.
- A macro assembler is an assembler that includes a macroinstruction facility so that (parameterized) assembly language text can be represented by a name, and that name can be used to insert the expanded text into other code.
- Open code refers to any assembler input outside of a macro definition.
- A cross assembler (see also cross compiler) is an assembler that is run on a computer or operating system (the host system) of a different type from the system on which the resulting code is to run (the target system). Cross-assembling facilitates the development of programs for systems that do not have the resources to support software development, such as an embedded system or a microcontroller. In such a case, the resulting object code must be transferred to the target system, via read-only memory (ROM, EPROM, etc.), a programmer (when the read-only memory is integrated in the device, as in microcontrollers), or a data link using either an exact bit-by-bit copy of the object code or a text-based representation of that code (such as Intel hex or Motorola S-record).
- A high-level assembler is a program that provides language abstractions more often associated with high-level languages, such as advanced control structures ( IF/THEN/ELSE, DO CASE, etc.) and high-level abstract data types, including structures/records, unions, classes, and sets.
- A microassembler is a program that helps prepare a microprogram, called firmware, to control the low level operation of a computer.
- A meta-assembler is "a program that accepts the syntactic and semantic description of an assembly language, and generates an assembler for that language", or that accepts an assembler source file along with such a description and assembles the source file in accordance with that description. "Meta-Symbol" assemblers for the SDS 9 Series and SDS Sigma series of computers are meta-assemblers. [nb 3] Sperry Univac also provided a Meta-Assembler for the UNIVAC 1100/2200 series.
- An inline assembler (or embedded assembler) is assembler code contained within a high-level language program. This is most often used in systems programs which need direct access to the hardware.
An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code (" opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines.
Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Called jump-sizing, most of them are able to perform jump-instruction replacements (long jumps replaced by short or relative jumps) in any number of passes, on request. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible.[ citation needed]
Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples.
There may be several assemblers with different syntax for a particular instruction set architecture. For instance, an instruction to add memory data to a register in an x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming).
There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file.
- One-pass assemblers go through the source code once. Any symbol used before it is defined will require "errata" at the end of the object code (or, at least, no earlier than the point where the symbol is defined) telling the linker or the loader to "go back" and overwrite a placeholder which had been left where the as yet undefined symbol was used.
- Multi-pass assemblers create a table with all symbols and their values in the first passes, then use the table in later passes to generate code.
In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more " no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target.
The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage) had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster.
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2.
S1   B    FWD
...
FWD  EQU  *
...
BKWD EQU  *
...
S2   B    BKWD
More sophisticated high-level assemblers provide language abstractions such as:
- High-level procedure/function declarations and invocations
- Advanced control structures (IF/THEN/ELSE, SWITCH)
- High-level abstract data types, including structures/records, unions, classes, and sets
- Sophisticated macro processing (although available on ordinary assemblers since the late 1950s for, e.g., the IBM 700 series and IBM 7000 series, and since the 1960s for IBM System/360 (S/360), amongst other machines)
- Object-oriented programming features such as classes, objects, abstraction, polymorphism, and inheritance
See Language design below for more details.
A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as declarative operations, directives, pseudo-instructions, pseudo-operations and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied," which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed.
For example, the instruction below tells an x86/ IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.
10110000 01100001
This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
B0 61
B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.
MOV AL, 61h ; Load AL with 97 decimal (61 hex)
In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a.k.a direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc.
If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g. the 61h in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The[nb 4] hexadecimal form of this instruction is:
88 E0
The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is AH, and the destination is AL.
In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable.
Assembly languages are always designed so that this sort of unambiguousness is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".)
Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow.
MOV AL, 1h ; Load AL with immediate value 1 MOV CL, 2h ; Load CL with immediate value 2 MOV DL, 3h ; Load DL with immediate value 3
The syntax of MOV can also be more complex as the following examples show.
MOV EAX, [EBX] ; Move the 4 bytes in memory at the address contained in EBX into EAX MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX MOV DS, DX ; Move the contents of DX into segment register DS
In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which.
Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments.
Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation.
Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD [nb 5] and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products.
There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of 3 types of instruction statements that are used to define program operations:
- Opcode mnemonics
- Data definitions
- Assembly directives
Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP ("NO OPeration" – do nothing for one step) for BC with a mask of 0.
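A short sketch in System/360 assembler notation (the label name is illustrative; the remarks after the operands describe the equivalent machine instruction the assembler generates):

         B     NEXT               branch always: equivalent to BC 15,NEXT
         NOP   NEXT               branch never, i.e. no operation: equivalent to BC 0,NEXT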
Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode that encodes the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions.
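On the 8086, for example, both spellings typically assemble to the same one-byte encoding, which is why disassemblers conventionally display it as nop:

nop            ; typically assembles to the single byte 90h
xchg ax, ax    ; typically assembles to the same 90h byte; most disassemblers show it as nop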
Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes.
Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn.
There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops.
Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data.
The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values.
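A minimal sketch of how data definitions and directives commonly look, using NASM-style syntax (the section names follow NASM conventions; the symbol names are invented for the example):

section .data               ; directive: what follows goes into the initialized-data section
buffer_len  equ 64          ; directive: define an assembly-time constant
greeting    db  "hi", 0     ; data definition: bytes emitted into the object file

section .bss
buffer      resb buffer_len ; directive: reserve 64 bytes of uninitialized storage

section .text
global _start               ; directive: make the symbol visible to the linker
_start:
    mov ecx, greeting       ; an ordinary machine instruction referencing a label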
Symbolic assemblers let programmers associate arbitrary names ( labels or symbols) with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.
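A short NASM-style sketch of labels, an assembler-computed constant, and a local label (all names are illustrative):

msg       db  "hello", 10   ; the label msg names the address of the string
msg_len   equ $ - msg       ; the assembler computes the constant 6 at assembly time

copy_msg:                   ; ordinary (non-local) label
    mov ecx, msg_len        ; the computed constant is used like any immediate value
.next:                      ; local label, scoped to copy_msg in NASM
    dec ecx
    jnz .next
    ret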
Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made.
Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly [nb 6] a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s. [nb 7]
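As a small illustration of macro definition and expansion, here is a sketch in NASM syntax; the macro name, labels, and values are invented for the example:

%macro store_byte 2         ; define a macro taking two parameters
    mov byte [%1], %2       ; %1 and %2 are replaced by the arguments at expansion time
%endmacro

store_byte flag, 1          ; expands to: mov byte [flag], 1
store_byte mode, 3          ; expands to: mov byte [mode], 3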
Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code (outside macro definitions), e.g., AIF and COPY in HLASM.
In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly.
Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or " unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time.
Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine ( VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/ TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today.
It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements.
This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly": in modern terms it is closer to word processing or text processing than to generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables and make conditional tests on their values. Unlike certain earlier macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop.
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:
foo: macro a
     load a*b

the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.
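For example, using the same illustrative macro syntax as above (and assuming the assembler's expression syntax accepts parentheses, as most do), parenthesizing the formal parameter inside the definition forces the intended grouping:

foo: macro a
     load (a)*b      ; the call foo a-c now expands to load (a-c)*b

so the entire argument a-c is multiplied by b, as the macro writer intended.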
Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). IBM's High Level Assembler Toolkit includes such a macro package.
A curious design was A-natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.
There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.
Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package (this code is a complete program):
include \masm32\include\masm32rt.inc ; use the Masm32 library
.code
demomain:
  REPEAT 20
    switch rv(nrandom, 9)   ; generate a number between 0 and 8
    mov ecx, 7
    case 0
      print "case 0"
    case ecx                ; in contrast to most other programming languages,
      print "case 7"        ; the Masm32 switch allows "variable cases"
    case 1 .. 3
      .if eax==1
        print "case 1"
      .elseif eax==2
        print "case 2"
      .else
        print "cases 1 to 3: other"
      .endif
    case 4, 6, 8
      print "cases 4, 6 or 8"
    default
      mov ebx, 19           ; print 20 stars
      .Repeat
        print "*"
        dec ebx
      .Until Sign?          ; loop until the sign flag is set
    endsw
    print chr$(13, 10)
  ENDM
exit
end demomain
Assembly languages were not available at the time when the stored-program computer was introduced. Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study.
In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP ( Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955.
Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses.
Assembly languages were once widely used for all sorts of programming. However, by the 1980s (1990s on microcomputers), their use had largely been supplanted by higher-level languages, in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.
Historically, numerous programs have been written entirely in assembly language. The Burroughs MCP (1961), the operating system of the B5000, was the first operating system not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN and some PL/I eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s.
Most early microcomputers relied on hand-coded assembly language, including most operating systems and large applications. This was because these systems had severe resource constraints, imposed idiosyncratic memory and display architectures, and provided limited, buggy system services. Perhaps more important was the lack of first-class high-level language compilers suitable for microcomputer use. A psychological factor may have also played a role: the first generation of microcomputer programmers retained a hobbyist, "wires and pliers" attitude.
In a more commercial context, the biggest reasons for using assembly language were minimal bloat (size), minimal overhead, greater speed, and reliability.
Typical examples of large assembly language programs from this time are IBM PC DOS operating systems, the Turbo Pascal compiler and early applications such as the spreadsheet program Lotus 1-2-3. Assembly language was used to get the best performance out of the Sega Saturn, a console that was notoriously challenging to develop and program games for. The 1993 arcade game NBA Jam is another example.
Assembly language was long the primary development language for many popular home computers of the 1980s and 1990s (such as the MSX, Sinclair ZX Spectrum, Commodore 64, Commodore Amiga, and Atari ST). This was in large part because interpreted BASIC dialects on these systems offered insufficient execution speed, as well as insufficient facilities to take full advantage of the available hardware. Some of these systems even provided an integrated development environment (IDE) with highly advanced debugging and macro facilities. Some compilers available for the Radio Shack TRS-80 and its successors had the capability to combine inline assembly source with high-level program statements; upon compilation, a built-in assembler produced inline machine code.
There have always been debates over the usefulness and performance of assembly language relative to high-level languages.
Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization.
As of July 2017 [update], the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers.
There are some situations in which developers might choose to use assembly language:
- Writing code for systems with limited high-level language options such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these computers of the 1970s and 1980s are often written in the context of demoscene or retrogaming subcultures.
- Code that must interact directly with the hardware, for example in device drivers and interrupt handlers.
- In an embedded processor or DSP, high-repetition interrupts, such as an interrupt that occurs 1,000 or 10,000 times a second, require the fewest possible cycles per interrupt.
- Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition.
- A stand-alone executable of compact size is required that must execute without recourse to the run-time components or libraries associated with a high-level language. Examples have included firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, security systems, and sensors.
- Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264 ).
- Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor.
- Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by (some) interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. However, some higher-level languages incorporate run-time components and operating system interfaces that can introduce such delays. Choosing assembly or lower level languages for such systems gives programmers greater visibility and control over processing details.
- Cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks.
- Modifying and extending legacy code written for IBM mainframe computers.
- Situations where complete control over the environment is required, in extremely high-security situations where nothing can be taken for granted.
- Computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system.
- Instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum.
- Situations where no high-level language exists, on a new or specialized processor for which no cross compiler is available.
Reverse-engineering and modifying program files such as:
- existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or cracking copy protection of proprietary software.
- Video games (also termed ROM hacking), which is possible via several methods. The most widely employed method is altering program code at the assembly language level.
Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn: I) the basic concepts; II) to recognize situations where the use of assembly language might be appropriate; and III) to see how efficient executable code can be created from high-level languages.
- Assembly language is typically used in a system's boot code, the low-level code that initializes and tests the system hardware prior to booting the operating system and is often stored in ROM. ( BIOS on IBM-compatible PC systems and CP/M is an example.)
- Assembly language is often used for low-level code, for instance for operating system kernels, which cannot rely on the availability of pre-existing system calls and must indeed implement them for the particular processor architecture on which the system will be running.
- Some compilers translate high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes.
- Some compilers for relatively low-level languages, such as Pascal or C, allow the programmer to embed assembly language directly in the source code (so called inline assembly). Programs using such facilities can then construct abstractions using different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface.
- Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form, which is straightforward to translate into assembly language by a disassembler but more difficult to translate into a higher-level language through a decompiler. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose. This technique is used by hackers to crack commercial software, and by competitors to produce software with similar results.
- Assembly language is used to enhance speed of execution, especially in early personal computers with limited processing power and RAM.
- Assemblers can be used to generate blocks of data, with no high-level language overhead, from formatted and commented source code, to be used by other code.
- Comparison of assemblers
- Instruction set architecture
- Little man computer – an educational computer model with a base-10 assembly language
- Typed assembly language
- Other than meta-assemblers
- However, that does not mean that the assembler programs implementing those languages are universal.
- "Used as a meta-assembler, it enables the user to design his own programming languages and to generate processors for such languages with a minimum of effort."
- This is one of two redundant forms of this instruction that operate identically. The 8086 and several other CPUs from the late 1970s/early 1980s have redundancies in their instruction sets, because it was simpler for engineers to design these CPUs (to fit on silicon chips of limited sizes) with the redundant codes than to eliminate them (see don't-care terms). Each assembler will typically generate only one of two or more redundant instruction encodings, but a disassembler will usually recognize any of them.
- AMD manufactured second-source Intel 8086, 8088, and 80286 CPUs, and perhaps 8080A and/or 8085A CPUs, under license from Intel, but starting with the 80386, Intel refused to share their x86 CPU designs with anyone—AMD sued about this for breach of contract—and AMD designed, made, and sold 32-bit and 64-bit x86-family CPUs without Intel's help or endorsement.
- In 7070 Autocoder, a macro definition is a 7070 macro generator program that the assembler calls; Autocoder provides special macros for macro generators to use.
- "The following minor restriction or limitation is in effect with regard to the use of 1401 Autocoder when coding macro instructions ..."
- "Assembler language". High Level Assembler for z/OS & z/VM & z/VSE Language Reference Version 1 Release 6. IBM. 2014 . SC26-4940-06.
- Saxon, James A.; Plette, William S. (1962). Programming the IBM 1401, a self-instructional programmed manual. Englewood Cliffs, New Jersey, USA: Prentice-Hall. LCCN 62-20615. (NB. Use of the term assembly program.)
- Kornelis, A. F. (2010) . "High Level Assembler – Opcodes overview, Assembler Directives". Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- "Macro instructions". High Level Assembler for z/OS & z/VM & z/VSE Language Reference Version 1 Release 6. IBM. 2014 . SC26-4940-06.
- Wilkes, Maurice Vincent; Wheeler, David John; Gill, Stanley J. (1951). The preparation of programs for an electronic digital computer (Reprint 1982 ed.). Tomash Publishers. ISBN 978-0-93822803-5. OCLC 313593586.
- Fairhead, Harry (2017-11-16). "History of Computer Languages - The Classical Decade, 1950s". I Programmer. Archived from the original on 2020-01-02. Retrieved 2020-03-06.
- "Assembly: Review" (PDF). Computer Science and Engineering. College of Engineering, The Ohio State University. 2016. Archived (PDF) from the original on 2020-03-24. Retrieved 2020-03-24.
- Archer, Benjamin (November 2016). Assembly Language For Students. North Charleston, South Carolina, USA: CreateSpace Independent Publishing. "Assembly language may also be called symbolic machine code."
- "How do assembly languages depend on operating systems?". Stack Exchange. Stack Exchange Inc. 2011-07-28. Archived from the original on 2020-03-24. Retrieved 2020-03-24. (NB. System calls often vary, e.g. for MVS vs. VSE vs. VM/CMS; the binary/executable formats for different operating systems may also vary.)
- Daintith, John, ed. (2019). "meta-assembler". A Dictionary of Computing. Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- Xerox Data Systems (Oct 1975). Xerox Meta-Symbol Sigma 5-9 Computers Language and Operations Reference Manual (PDF). p. vi. Retrieved 2020-06-07.
- Sperry Univac Computer Systems (1977). Sperry Univac Computer Systems Meta-Assembler (MASM) Programmer Reference (PDF). Retrieved 2020-06-07.
- "How to Use Inline Assembly Language in C Code". gnu.org. Retrieved 2020-11-05.
- Salomon, David (February 1993) . Written at California State University, Northridge, California, USA. Chivers, Ian D. (ed.). Assemblers and Loaders (PDF). Ellis Horwood Series In Computers And Their Applications (1 ed.). Chicester, West Sussex, UK: Ellis Horwood Limited / Simon & Schuster International Group. pp. 7, 237–238. ISBN 0-13-052564-2. Archived (PDF) from the original on 2020-03-23. Retrieved 2008-10-01. (xiv+294+4 pages)
- Beck, Leland L. (1996). "2". System Software: An Introduction to Systems Programming. Addison Wesley.
- Hyde, Randall (September 2003) [1996-09-30]. "Foreword ("Why would anyone learn this stuff?") / Chapter 12 – Classes and Objects". The Art of Assembly Language (2 ed.). No Starch Press. ISBN 1-886411-97-2. Archived from the original on 2010-05-06. Retrieved 2020-06-22. Errata: (928 pages)
- Intel Architecture Software Developer's Manual, Volume 2: Instruction Set Reference (PDF). 2. Intel Corporation. 1999. Archived from the original (PDF) on 2009-06-11. Retrieved 2010-11-18.
- Ferrari, Adam; Batson, Alan; Lack, Mike; Jones, Anita (2018-11-19) [Spring 2006]. Evans, David (ed.). "x86 Assembly Guide". Computer Science CS216: Program and Data Representation. University of Virginia. Archived from the original on 2020-03-24. Retrieved 2010-11-18.
- "The SPARC Architecture Manual, Version 8" (PDF). SPARC International. 1992. Archived from the original (PDF) on 2011-12-10. Retrieved 2011-12-10.
- Moxham, James (1996). "ZINT Z80 Interpreter". Z80 Op Codes for ZINT. Archived from the original on 2020-03-24. Retrieved 2013-07-21.
- Hyde, Randall. "Chapter 8. MASM: Directives & Pseudo-Opcodes" (PDF). The Art of Computer Programming. Archived (PDF) from the original on 2020-03-24. Retrieved 2011-03-19.
- Users of 1401 Autocoder. Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- Griswold, Ralph E. (1972). "Chapter 1". The Macro Implementation of SNOBOL4. San Francisco, California, USA: W. H. Freeman and Company. ISBN 0-7167-0447-1.
- "Macros (C/C++), MSDN Library for Visual Studio 2008". Microsoft Corp. 2012-11-16. Archived from the original on 2020-03-24. Retrieved 2010-06-22.
- Kessler, Marvin M. (1970-12-18). "*Concept* Report 14 - Implementation of Macros To Permit Structured Programming in OS/360". MVS Software: Concept 14 Macros. Gaithersburg, Maryland, USA: International Business Machines Corporation. Archived from the original on 2020-03-24. Retrieved 2009-05-25.
- "High Level Assembler Toolkit Feature Increases Programmer Productivity". Announcement Letters. IBM. 1995-12-12. A95-1432.
- "assembly language: Definition and Much More from Answers.com". answers.com. Archived from the original on 2009-06-08. Retrieved 2008-06-19.
- Provinciano, Brian (2005-04-17). "NESHLA: The High Level, Open Source, 6502 Assembler for the Nintendo Entertainment System". Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- Dufresne, Steven (2018-08-21). "Kathleen Booth: Assembling Early Computers While Inventing Assembly". Archived from the original on 2020-03-24. Retrieved 2019-02-10.
Booth, Andrew Donald; Britten, Kathleen Hylda Valerie (September 1947) [August 1947]. General considerations in the design of an all purpose electronic digital computer (PDF) (2 ed.). The Institute for Advanced Study, Princeton, New Jersey, USA: Birkbeck College, London. Archived (PDF) from the original on 2020-03-24. Retrieved 2019-02-10. "The non-original ideas, contained in the following text, have been derived from a number of sources, ... It is felt, however, that acknowledgement should be made to Prof. John von Neumann and to Dr. Herman Goldstein for many fruitful discussions ..."
- Campbell-Kelly, Martin (April 1982). "The Development of Computer Programming in Britain (1945 to 1955)". IEEE Annals of the History of Computing. 4 (2): 121–139. doi: 10.1109/MAHC.1982.10016. S2CID 14861159.
- Campbell-Kelly, Martin (1980). "Programming the EDSAC". IEEE Annals of the History of Computing. 2 (1): 7–36. doi: 10.1109/MAHC.1980.10009.
- "1985 Computer Pioneer Award 'For assembly language programming' David Wheeler".
- Wilkes, Maurice Vincent (1949). "The EDSAC – an Electronic Calculating Machine". Journal of Scientific Instruments. 26 (12): 385–391. Bibcode: 1949JScI...26..385W. doi: 10.1088/0950-7671/26/12/301.
- da Cruz, Frank (2019-05-17). "The IBM 650 Magnetic Drum Calculator". Computing History - A Chronology of Computing. Columbia University. Archived from the original on 2020-02-15. Retrieved 2012-01-17.
- Pettus, Sam (2008-01-10). "SegaBase Volume 6 - Saturn". Archived from the original on 2008-07-13. Retrieved 2008-07-25.
- Kauler, Barry (1997-01-09). Windows Assembly Language and Systems Programming: 16- and 32-Bit Low-Level Programming for the PC and Windows. ISBN 978-1-48227572-8. Retrieved 2020-03-24. "Always the debate rages about the applicability of assembly language in our modern programming world."
- Hsieh, Paul (2020-03-24) [2016, 1996]. Archived from the original on 2020-03-24. Retrieved 2020-03-24. "... design changes tend to affect performance more than ... one should not skip straight to assembly language until ..."
- "TIOBE Index". TIOBE Software. Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- Rusling, David A. (1999) . "Chapter 2 Software Basics". The Linux Kernel. Archived from the original on 2020-03-24. Retrieved 2012-03-11.
- Markoff, John Gregory (2005-11-28). "Writing the Fastest Code, by Hand, for Fun: A Human Computer Keeps Speeding Up Chips". The New York Times. Seattle, Washington, USA. Archived from the original on 2020-03-23. Retrieved 2010-03-04.
- "Bit-field-badness". hardwarebug.org. 2010-01-30. Archived from the original on 2010-02-05. Retrieved 2010-03-04.
- "GCC makes a mess". hardwarebug.org. 2009-05-13. Archived from the original on 2010-03-16. Retrieved 2010-03-04.
- Hyde, Randall. "The Great Debate". Archived from the original on 2008-06-16. Retrieved 2008-07-03.
- "Code sourcery fails again". hardwarebug.org. 2010-01-30. Archived from the original on 2010-04-02. Retrieved 2010-03-04.
- Click, Cliff; Goetz, Brian. "A Crash Course in Modern Hardware". Archived from the original on 2020-03-24. Retrieved 2014-05-01.
- "68K Programming in Fargo II". Archived from the original on 2008-07-02. Retrieved 2008-07-03.
- "BLAS Benchmark-August2008". eigen.tuxfamily.org. 2008-08-01. Archived from the original on 2020-03-24. Retrieved 2010-03-04.
- "x264.git/common/x86/dct-32.asm". git.videolan.org. 2010-09-29. Archived from the original on 2012-03-04. Retrieved 2010-09-29.
- Bosworth, Edward (2016). "Chapter 1 – Why Study Assembly Language". www.edwardbosworth.com. Archived from the original on 2020-03-24. Retrieved 2016-06-01.
- "z/OS Version 2 Release 3 DFSMS Macro Instructions for Data Sets" (PDF). IBM. 2019-02-15. Retrieved 2021-09-14.
- Paul, Matthias R. (2001) , "Specification and reference documentation for NECPINW", NECPINW.CPI - DOS code page switching driver for NEC Pinwriters (2.08 ed.), FILESPEC.TXT, NECPINW.ASM, EUROFONT.INC from NECPI208.ZIP, archived from the original on 2017-09-10, retrieved 2013-04-22
- Paul, Matthias R. (2002-05-13). "[fd-dev] mkeyb". freedos-dev. Archived from the original on 2018-09-10. Retrieved 2018-09-10.
- Bartlett, Jonathan (2004). Programming from the Ground Up - An introduction to programming using linux assembly language. Bartlett Publishing. ISBN 0-9752838-4-7. Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- Britton, Robert (2003). MIPS Assembly Language Programming. Prentice Hall. ISBN 0-13-142044-5.
- Calingaert, Peter (1979) [1978-11-05]. Written at University of North Carolina at Chapel Hill. Horowitz, Ellis (ed.). Assemblers, Compilers, and Program Translation. Computer software engineering series (1st printing, 1st ed.). Potomac, Maryland, USA: Computer Science Press, Inc. ISBN 0-914894-23-4. ISSN 0888-2088. LCCN 78-21905. Retrieved 2020-03-20. (2+xiv+270+6 pages)
- Duntemann, Jeff (2000). Assembly Language Step-by-Step. Wiley. ISBN 0-471-37523-3.
- Kann, Charles W. (2015). "Introduction to MIPS Assembly Language Programming". Archived from the original on 2020-03-24. Retrieved 2020-03-24.
- Kann, Charles W. (2021). " Introduction to Assembly Language Programming: From Soup to Nuts: ARM Edition"
- Norton, Peter; Socha, John (1986). Peter Norton's Assembly Language Book for the IBM PC. New York, USA: Brady Books.
- Singer, Michael (1980). PDP-11. Assembler Language Programming and Machine Organization. New York, USA: John Wiley & Sons.
- Sweetman, Dominic (1999). See MIPS Run. Morgan Kaufmann Publishers. ISBN 1-55860-410-3.
- Waldron, John (1998). Introduction to RISC Assembly Language Programming. Addison Wesley. ISBN 0-201-39828-1.
- Yurichev, Dennis (2020-03-04) . "Understanding Assembly Language (Reverse Engineering for Beginners)" (PDF). Archived (PDF) from the original on 2020-03-24. Retrieved 2020-03-24.
- "ASM Community Book". 2009. Archived from the original on 2013-05-30. Retrieved 2013-05-30. ("An online book full of helpful ASM info, tutorials and code examples" by the ASM Community, archived at the internet archive.)
- Assembly language at Curlie
- Unix Assembly Language Programming
- Linux Assembly
- PPR: Learning Assembly Language
- NASM – The Netwide Assembler (a popular assembly language)
- Assembly Language Programming Examples
- Authoring Windows Applications In Assembly Language
- Assembly Optimization Tips by Mark Larson
- The table for assembly language to machine code
- slide 1 of 12
All About Lines and Rays In Angles
This lesson plan for mid- to upper-elementary levels teaches lines and rays in relation to angles. After learning the vocabulary, students complete a hands-on activity to help them remember the new geometry terms. Lesson plans for teaching lines and rays have never been more fun!
- slide 2 of 12
Students will need glue, markers, popsicle sticks, pom pom balls and construction paper or poster board.
- slide 3 of 12
Students can identify lines and rays and also draw these geometric terms alone and within angles. Students can name each line, ray and angle.
- slide 4 of 12
A ray starts at one point and goes in one direction forever.
A line travels in two directions forever.
An angle has two rays that come together in a vertex. Angles can also be formed when two or more lines intersect each other.
A vertex is the point where two rays meet.
A point is a dot on a geometric line or figure that is identified with a letter or number. A line is named using two points on it.
An endpoint is a point on a line segment or ray that defines its stopping point. A ray has one endpoint and a line does not have any.
See visuals of all of these terms at mathleague.com.
- slide 6 of 12
Using one hand, point to the sky. Ask students to imagine their pointed finger going on forever and ever. This is a ray. A ray starts at one point and goes on forever. Now, put your arms straight out to both sides and point. Have students imagine that your pointed fingers go on for eternity. This is a line. Next, have students make a ray with their arms again. Now, they need to go shoulder to shoulder with another student so that they form an angle. Explain that the point where their shoulders meet is called a vertex. Finally, have students cross their arms in front to make an X. Sometimes when two lines intersect each other, they make four angles.
Draw a visual representation for each vocabulary word on the board. Now, explain that each ray, line and angle has a name. Draw one endpoint at the end of the ray and one point near the arrow. Write A under the first dot and B under the second dot. Explain that this is ray AB. Now, do the same for the line. Next, draw an angle and label it XYZ. Y should label the vertex. Explain that this is angle XYZ.
- slide 7 of 12
Hands on Activity
Give each student six Popsicle sticks, two pom poms and a poster board. Each student needs to make a ray, a line and two angles. The pom poms should represent the vertex. Each ray, line and angle should be named. Students should glue each geometric term to a poster board. This activity can serve as an informal assessment.
- slide 8 of 12
Have students draw a ray, line and angle. They should label each one and circle the vertex.
- slide 9 of 12
Students find rays, lines and angles in everyday objects around the room. They draw a picture of each and draw an angle where the image shows an angle.
- slide 10 of 12
Lesson plans for teaching lines, rays, and angles should not stop at this lesson. Revisit the concepts in daily warm-ups, homework and reviews. Take opportunities to point out angles, lines and rays in real life. Finally, continue having students draw lines, rays and angles. After all, these terms are the building blocks for more difficult concepts.
- slide 11 of 12
Angles and Angle Terms, mathleague.com
- slide 12 of 12
Event Horizon Telescope (EHT)
Black holes are found in the centers of most galaxies, where they can influence star formation and the distribution of atoms in the environment surrounding them. However, direct observation of a black hole is difficult because black holes are so small relative to their masses. The Event Horizon Telescope (EHT) captured the first image ever taken of a black hole: specifically, the ring of light produced by matter just as it falls into the black hole at the center of the nearby galaxy M87. The EHT is a virtual observatory consisting of telescopes spanning the planet, from Greenland to the South Pole. The international collaboration operating the EHT includes observatories affiliated with the Center for Astrophysics | Harvard & Smithsonian: the CfA’s Submillimeter Array (SMA) and the Greenland Telescope.
The Telescopes and the Science
Black holes in the modern sense were first predicted as a consequence of Albert Einstein’s general theory of relativity in 1915. These objects are so dense, they are surrounded by a boundary called an event horizon; anything crossing that boundary can never return to the outside universe. The first candidate black hole was Cygnus X-1, discovered by the Uhuru X-ray satellite. That discovery was followed closely by the identification of the Milky Way’s supermassive black hole Sagittarius A* in 1974.
Today we know black holes are common throughout the cosmos. Nearly every large galaxy contains at least one supermassive black hole weighing millions or billions of times the mass of the Sun. Despite their ubiquity and large mass, these black holes are relatively small in size, meaning even our best telescopes can’t take images of them — at least when working alone. The EHT uses a method known as “very long baseline interferometry” (VLBI) to yoke multiple telescopes together into a single virtual observatory the size of the planet. That combined power gave it the resolution necessary to take an image of the supermassive black hole in the giant elliptical galaxy M87. With the addition of four observatories, including the CfA’s Greenland Telescope, the EHT continues to observe both M87 and Sagittarius A*.
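A rough order-of-magnitude check shows why an Earth-sized array is needed: the diffraction-limited angular resolution of an instrument is roughly the observing wavelength divided by its aperture, which for an interferometer is the longest baseline between telescopes. Taking the EHT's 1.3 mm observing wavelength and a baseline comparable to Earth's diameter (both figures rounded for illustration):

\[
\theta \approx \frac{\lambda}{D} \approx \frac{1.3\times10^{-3}\,\mathrm{m}}{1.3\times10^{7}\,\mathrm{m}} \approx 10^{-10}\ \text{radians} \approx 20\ \text{microarcseconds},
\]

which is comparable to the apparent size of the M87 black hole's shadow (a few tens of microarcseconds), whereas any single dish falls short by several orders of magnitude.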
These images will provide valuable information about the behavior of matter right as it falls into the black hole. This matter emits light at a range of wavelengths around 1.3 millimeters, which fortunately passes through Earth’s atmosphere and the interstellar gas between us and the center of the Milky Way. This wavelength is observable by many large existing telescopes, so the EHT is a collaboration between those observatories. The collaboration is led from the Center for Astrophysics, and includes the CfA’s SMA in Hawaii, the Greenland Telescope, the National Radio Astronomy Observatory’s Atacama Large Millimeter/Submillimeter Array (ALMA) in Chile, the Submillimeter Telescope in Arizona, and the Large Millimeter Telescope in Mexico.
The EHT images the black hole “shadow”: a perfectly dark region due to the black hole blocking all light coming from farther away, surrounded by the bright ring of 1.3 mm emissions. This absence has the shape of the black hole’s event horizon, so it provides an essential test of Einstein’s theory of gravity. It also provides estimates of the black hole’s spin, as well as information about the behavior of matter under strong gravity.
What is the logical form of an argument?
The logical form of an argument is composed from the logical forms of its component statements or sentences. These logical forms are especially helpful for assessing the validity of deductive arguments.
How do you identify the logical form of a statement?
A statement form (or propositional form, or logical form) is an expression made up of statement variables, called component statements (such as p, q, and r), and logical connectives (such as ∼, ∨ and ∧) that becomes a statement when actual statements are substituted for the component statement variables.
What is standard logical form?
The standard form of an argument is a way of presenting the argument which makes clear which statements are premises, how many premises there are, and which statement is the conclusion. In standard form, the conclusion of the argument is listed last.
What is an if then argument?
If–then arguments , also known as conditional arguments or hypothetical syllogisms, are the workhorses of deductive logic. They make up a loosely defined family of deductive arguments that have an if–then statement —that is, a conditional—as a premise. The conditional has the standard form If P then Q.
What is an example of logical form?
Thus, for example, the expression “all A’s are B’s” shows the logical form which is common to the sentences “all men are mortals,” “all cats are carnivores,” “all Greeks are philosophers,” and so on.
What is an example of a logical argument?
Example. The argument “All cats are mammals and a tiger is a cat, so a tiger is a mammal” is a valid deductive argument. Both the premises are true. To see that the premises must logically lead to the conclusion, one approach would be to use a Venn diagram.
What is conditional argument?
A conditional argument composed of categorical statements is readily judged to be either valid or invalid; validity is not a matter of degree, and the truth of the conclusion of a valid argument is guaranteed by the truth of its premises.
What are forms of conditional arguments?
Conditional statements take several forms including the converse, inverse, contrapositive, and necessary which take the following forms:
- Converse: If q, then p.
- Inverse: If not p, then not q.
- Contrapositive: If not q, then not p.
- Necessary: If, and only if, p, then q.
What is a conditional statement in an argument?
A conditional asserts that if its antecedent is true, its consequent is also true; any conditional with a true antecedent and a false consequent must be false. For any other combination of true and false antecedents and consequents, the conditional statement is true.
What is a valid argument and how is it different from a sound argument?
An argument form is valid if and only if whenever the premises are all true, then conclusion is true. An argument is valid if its argument form is valid. For a sound argument, An argument is sound if and only if it is valid and all its premises are true.
What is an example of a conditional statement?
Example. Conditional Statement: “If today is Wednesday, then yesterday was Tuesday.” Hypothesis: “If today is Wednesday” so our conclusion must follow “Then yesterday was Tuesday.” So the converse is found by rearranging the hypothesis and conclusion, as Math Planet accurately states.
What is the logical form of denying the consequent?
Denying the consequent (modus tollens) has the logical form: if p, then q; not q; therefore, not p. It is a valid form of inference. It should not be confused with denying the antecedent (also known as the inverse error or inverse fallacy), a fallacy in formal logic where, in a standard if/then premise, the antecedent (what comes after the “if”) is made not true and it is then concluded that the consequent (what comes after the “then”) is not true.
What is a valid argument?
An argument is valid if the premises and conclusion are related to each other in the right way so that if the premises were true, then the conclusion would have to be true as well.
Is denying the consequent valid or invalid?
Denying the consequent (modus tollens) is a valid form of argument. By contrast, affirming the consequent is deductively invalid, although it is sometimes treated as a form of abductive reasoning.
Why is denying the consequent valid?
Like modus ponens, modus tollens is a valid argument form because the truth of the premises guarantees the truth of the conclusion; however, like affirming the consequent, denying the antecedent is an invalid argument form because the truth of the premises does not guarantee the truth of the conclusion.
Is affirming the consequent a valid argument form?
Affirming the consequent—If p, then q; q; therefore, p—is not a valid argument form; it is a formal fallacy. The valid form If p, then q; p; therefore, q is called modus ponens, and the valid form If p, then q; not q; therefore, not p is called modus tollens.
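For reference, the four conditional argument forms discussed in this section can be summarized in standard propositional notation (each takes the conditional P → Q as a premise):

\[
\begin{array}{llcl}
\text{Modus ponens (valid):} & P \to Q,\; P & \therefore & Q \\
\text{Modus tollens, i.e. denying the consequent (valid):} & P \to Q,\; \lnot Q & \therefore & \lnot P \\
\text{Affirming the consequent (invalid):} & P \to Q,\; Q & \therefore & P \\
\text{Denying the antecedent (invalid):} & P \to Q,\; \lnot P & \therefore & \lnot Q
\end{array}
\]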
What are logical fallacies in an argument?
Logical fallacies are arguments that may sound convincing, but are based on faulty logic and are therefore invalid. They may result from innocent errors in reasoning, or be used deliberately to mislead others. Taking logical fallacies at face value can lead you to make poor decisions based on unsound arguments.
The standard error of the regression is also known as the standard error of estimate(s). It is a measure of the variation of the observations around the computed regression line: it represents the average distance that the observed values fall from the regression line. Smaller values of the standard error of regression indicate that the observations lie closer to the regression line; a standard error of 0 means there is no variation about the computed line and the correlation is perfect. Just as the standard deviation measures the variation of a set of data about its mean, the standard error of estimate measures the variation of the actual values of Y about the predicted (computed) values of Y on the regression line. The standard error of the estimate is valid for linear as well as non-linear regression models, and it is important in the calculation of confidence and prediction intervals.
The standard error of an estimate is calculated using the following notation:
- Syx = Standard error of estimate of y on x
- ye = Estimated value of y for a given value of x
- Sxy = Standard error of estimate of x on y
- xe = Estimated value of x for a given value of y
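A common textbook form of these formulas (a sketch; shown here with divisor n, while some texts use n − 2 to account for the two estimated regression coefficients) is:

\[
S_{yx}=\sqrt{\frac{\sum (y-y_e)^2}{n}}, \qquad S_{xy}=\sqrt{\frac{\sum (x-x_e)^2}{n}}.
\]

Dividing by the corresponding standard deviation gives the relative measure mentioned below; under the divisor-n convention the coefficient of correlation follows from \(r^2 = 1 - S_{yx}^2/\sigma_y^2\).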
The larger the value of Syx or Sxy, the greater the scatter of the observations about the regression line, and the poorer the degree of correlation. The standard error of an estimate is an absolute measure; the ratio S/σ provides a relative measure, and this ratio is also used in finding the value of the coefficient of correlation.
European scientists have gathered tiny fungi that take shelter in Antarctic rocks and sent them to the International Space Station. After 18 months on board in conditions similar to those on Mars, more than 60% of their cells remained intact, with stable DNA. The results provide new information for the search for life on the red planet. Lichens from the Sierra de Gredos (Spain) and the Alps (Austria) also travelled into space for the same experiment.
The McMurdo Dry Valleys, located in the Antarctic Victoria Land, are considered to be the most similar earthly equivalent to Mars. They make up one of the driest and most hostile environments on our planet, where strong winds scour away even snow and ice. Only so-called cryptoendolithic microorganisms, capable of surviving in cracks in rocks, and certain lichens can withstand such harsh climatological conditions.
A few years ago a team of European researchers travelled to these remote valleys to collect samples of two species of cryptoendolithic fungi: Cryomyces antarcticus and Cryomyces minteri. The aim was to send them to the International Space Station (ISS) for them to be subjected to Martian conditions and space to observe their responses.
The tiny fungi were placed in cells (1.4 centimetres in diameter) on a platform for experiments known as EXPOSE-E, developed by the European Space Agency to withstand extreme environments. The platform was sent in the Space Shuttle Atlantis to the ISS and placed outside the Columbus module with the help of an astronaut from the team led by Belgian Frank de Winne.
For 18 months half of the Antarctic fungi were exposed to Mars-like conditions: specifically, an atmosphere with 95% CO2, 1.6% argon, 0.15% oxygen, 2.7% nitrogen and 370 parts per million of H2O, at a pressure of 1,000 pascals. Through optical filters, some samples were subjected to ultraviolet radiation as on Mars (wavelengths above 200 nanometres) and others to lower levels of radiation, with separate control samples alongside.
"The most relevant outcome was that more than 60% of the cells of the endolithic communities studied remained intact after 'exposure to Mars', or rather, the stability of their cellular DNA was still high," highlights Rosa de la Torre Noetzel from Spain's National Institute of Aerospace Technology (INTA), co-researcher on the project.
The scientist explains that this work, published in the journal Astrobiology, forms part of an experiment known as the Lichens and Fungi Experiment (LIFE), "with which we have studied the fate or destiny of various communities of lithic organisms during a long-term voyage into space on the EXPOSE-E platform."
"The results help to assess the survival ability and long-term stability of microorganisms and bioindicators on the surface of Mars, information which becomes fundamental and relevant for future experiments centred around the search for life on the red planet," states De la Torre.
Also lichens from Gredos and the Alps
Researchers from the LIFE experiment, coordinated from Italy by Professor Silvano Onofri from the University of Tuscia, have also studied two species of lichens (Rhizocarpon geographicum and Xanthoria elegans) which can withstand extreme high-mountain environments. These were gathered from the Sierra de Gredos (Avila, Spain) and the Alps (Austria), with half of the specimens also being exposed to Martian conditions.
Another set of samples (both lichens and fungi) was subjected to an extreme space environment, with temperature fluctuations of between −21.5 and +59.6 °C, galactic-cosmic radiation of up to 190 milligrays, and a vacuum of between 10⁻⁷ and 10⁻⁴ pascals. The effect of extraterrestrial ultraviolet radiation on half of these samples was also examined.
After the year-and-a-half-long voyage, once the samples were back on Earth, the two species of lichens 'exposed to Mars' showed double the metabolic activity of those that had been subjected to space conditions, the species Xanthoria elegans even reaching 80% more.
The results showed subdued photosynthetic activity or viability in the lichens exposed to the harsh conditions of space (2.5% of samples), similar to that presented by the fungal cells (4.11%). In this space environment, 35% of fungal cells were also seen to have kept their membranes intact, a further sign of the resistance of Antarctic fungi.
The above post is reprinted from materials provided by FECYT - Spanish Foundation for Science and Technology. Note: Content may be edited for style and length.
|
Protected areas are widely considered essential for biodiversity conservation. However, few global studies have demonstrated that protection benefits a broad range of species. Here, using a new global biodiversity database with unprecedented geographic and taxonomic coverage, we compare four biodiversity measures at sites sampled in multiple land uses inside and outside protected areas. Globally, species richness is 10.6% higher and abundance 14.5% higher in samples taken inside protected areas compared with samples taken outside, but neither rarefaction-based richness nor endemicity differ significantly. Importantly, we show that the positive effects of protection are mostly attributable to differences in land use between protected and unprotected sites. Nonetheless, even within some human-dominated land uses, species richness and abundance are higher in protected sites. Our results reinforce the global importance of protected areas but suggest that protection does not consistently benefit species with small ranges or increase the variety of ecological niches.
Protected areas are considered an essential strategy for habitat and species conservation1. Parties to the Convention on Biological Diversity have committed to increase the terrestrial area currently under protection2 from 15.4% to at least 17% in ‘effectively and equitably managed, ecologically representative and well connected’ protected areas by 2020 (Aichi biodiversity target 11)3. A recent assessment of progress suggests that the coverage target will be met, and that protected area representativeness and management are improving4.
However, there is some doubt over the success of protected areas. Management effectiveness reports suggest only 22% of protected areas have ‘sound management’5, experts estimate that only half of all tropical reserves are effective6 and human pressures are increasing in Latin American, African and Asian protected areas7. Declines in animal and plant abundance have been documented inside protected areas6,8,9 and in many countries the effectiveness of protection is being compromised by external pressures and inadequate government support10,11. Protecting all terrestrial sites of conservation significance could cost US$ 76 billion annually12, plus associated opportunity costs (up to US$ 6,500 ha−1 for productive agricultural land13). Quantifying the effectiveness of protected areas is therefore crucial to justify maintaining and expanding the network.
Evidence from remote sensing suggests that protected areas slow change from ‘natural’ to ‘human-modified’ land cover14 and successfully retain forests1. In the tropics, protection reduces deforestation15,16,17, loss of carbon18 and forest fires19. There is some evidence that changes in land cover and human pressure vary with IUCN Protected Area Management Categories (henceforth ‘IUCN category’)20, although protected areas that have been assigned categories with more restrictive management objectives have not consistently experienced less land-use change, as their location may be more important than their IUCN category16,17,19,21,22,23,24.
Importantly, preventing land-use change does not necessarily conserve species within protected areas (for example, hunting may still occur25) and there are few regional or global assessments of how protection affects species and assemblages. Geldmann et al.1 found that most of the 42 studies in their meta-analysis reported that abundances of species were higher inside protected areas, but acknowledged the limited evidence base. Coetzee et al.22, reviewing 86 studies, found a positive effect of protection on species richness and abundance. These two meta-analyses are informative but their use of effect sizes—rather than primary data—constrained the biodiversity measures that could be considered. Consequently, it is not yet clear whether protection solely maintains greater numbers of individuals (and therefore higher species richness) or additionally maintains a greater variety of niches. If the latter is true, we expect protection to be associated with more species for a given number of individuals (that is, a higher rarefaction-based richness26—hereafter ‘rarefied richness’).
A further limitation of some previous studies is that the apparent success of protected areas may be caused by their location rather than protection per se27. To account for this bias, several analyses of land cover have matched protected locations or pixels to unprotected counterparts having similar values of potential confounding variables such as elevation or slope14,17,19,28,29,30. Alternatively, potential confounding variables have been included as covariates in models of land-use change31,32, including some analyses that have compared matched pairs of protected and unprotected locations as differences between paired locations remained even after matching24. Both previous meta-analyses of protected-area effects on species1,22 were limited in their ability to account for potential confounding factors using either of these approaches as they lacked geographic coordinates for individual sample sites and data on specific land use types. Therefore, it remains undecided whether protection offers benefits to biodiversity beyond those attributable to reduced land-use change, that is, whether protection can benefit biodiversity even within a particular land use.
Here we assess the effect of protection on species and assemblages using collated primary data rather than effect sizes, quantifying the effects of protection both among and within land uses, while controlling for potentially confounding variables (see Methods). Using the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) database33, which collated data on species’ presence or abundance at sampled sites from peer-reviewed spatial comparisons of different types and intensities of anthropogenic pressures, we calculate four biodiversity measures based on sampled abundances and occurrences at each site (henceforth ‘within-sample’ biodiversity measures). The database records geographic coordinates of sampled sites, allowing selection of comparably surveyed sites located inside and outside protected areas by intersecting with the World Database on Protected Areas (WDPA)34. We extract data from 156 studies, including 13,669 species of vertebrates, invertebrates and plants, that had sites both inside (n=1,939 sites) and outside (n=4,592 sites) 359 terrestrial protected areas (Fig. 1) and use mixed-effects models to assess the effects of protection while accounting for among-study differences in sampling methodology. Although this represents a small fraction of the protected area network (0.18%), it is substantially larger than previous meta-analyses1,22 and spans 48 countries, 101 ecoregions and 13 of the 14 terrestrial biomes. The sampled protected areas show a similar distribution to that of all terrestrial protected areas in the WDPA34 in terms of size, year of establishment and IUCN category and are reasonably representative in terms of how their total area is divided among land uses, ecoregions and biomes (Supplementary Fig. 1). We find that within-sample species richness and total abundance are significantly higher at sites inside than outside protected areas. Importantly, significant interactions between protection and land use in our models indicate that protection does more than merely prevent land-use change (suggesting protection benefits biodiversity even within human-dominated land uses). However, in contrast to our expectations, protection has no effect on either rarefied richness (suggesting protection has little effect on the number of species present for a given number of individuals, and therefore does not increase the variety of viable niches available) or endemicity (suggesting protection has little effect on the proportion of individuals within a community that have narrow geographic ranges). Finally, we estimate that on average the global protected area network is 41% (95% confidence interval (CI): 2.0 to 81%) effective at retaining within-sample species richness and 54% (95% CI: 0 to 136%) effective at retaining local abundance.
Local biodiversity inside and outside protected areas
Samples from protected sites (Fig. 1b) contained 10.6% more species (95% CI: 4.1 to 17.6%; χ2=9.99, df=1, P=0.002; Fig. 2a) and 14.5% more individuals (95% CI: 2.0 to 28.7%; χ2=5.09, df=1, P=0.024; Fig. 2b) than samples from unprotected sites. If protection maintains a wider set of viable niches, we would also expect rarefied richness (that is, the number of species expected if each site within a study had yielded the same number of individuals) to be higher inside protected areas. However, there was no significant effect of protection on rarefied richness (95% CI: −31.06 to 13.5%; χ2=1.33, df=1, P>0.2; Fig. 2c and Supplementary Table 1), suggesting that the higher within-sample species richness of protected sites largely reflects higher overall abundance (as the number of species detected increases with the number of individuals sampled26). Assemblages in protected areas might also be expected to have a higher proportion of endemic species and a smaller proportion of widespread species. However, we found the effect of protection on endemicity (a measure we calculated as the inverse of community-weighted mean geographic range size) was marginally nonsignificant (95% CI: −0.6 to 11.1%; χ2=2.99, df=1, P=0.08; Fig. 2d).
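For readers unfamiliar with rarefied richness, the standard analytical rarefaction formula computes the number of species expected in a random subsample of n individuals; the sketch below is purely illustrative (it is not the authors' code, and the site counts are invented):

```python
from math import comb

def rarefied_richness(counts, n):
    """Expected species count in a random subsample of n individuals,
    given per-species abundances `counts` (analytical rarefaction)."""
    N = sum(counts)
    if n > N:
        raise ValueError("subsample size exceeds total individuals")
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

# Two hypothetical sites surveyed with different effort:
site_a = [10, 5, 3, 1, 1]       # 20 individuals, 5 species
site_b = [30, 20, 5, 3, 1, 1]   # 60 individuals, 6 species
print(rarefied_richness(site_a, 20))   # 5.0 (already at the rarefied sample size)
print(rarefied_richness(site_b, 20))   # expected richness at equal sampling effort
```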
Analyses of spatial comparisons cannot reveal the mechanism behind differences between protected and unprotected sites. Protected areas may have been selected based on pre-existing biodiversity gradients, or protected area management may result in the preservation of populations lost from surrounding areas. These mechanisms are not mutually exclusive. Time-series data on species and assemblages inside and outside protected areas would be helpful for determining their relative importance. Additionally, we may underestimate the overall contribution of protection, as many protected areas aim to protect biodiversity features not included in our analyses, such as beta diversity, particular rare species, migratory routes or ecological processes20.
Protected areas with more restrictive management objectives are expected to retain more biodiversity. Although IUCN categories are not necessarily applied consistently across countries20, several studies have used them to compare biodiversity across management objectives16,17,19,21,22,23. As most of these studies focus on land cover (particularly deforestation), there is little information on how effects on species differ among protected areas in different IUCN categories; the limited available evidence indicates a positive effect size for both the most restrictive IUCN categories and for one category that allows some extraction22. We found no significant differences among protected area management category groups (three groups, in order of lowest to highest restriction on human activity: IUCN category III–VI, IUCN category unknown (a mixture of categories, considered an intermediate category on average) and IUCN category I and II; Fig. 2a; all comparisons among protected groups gave P>0.2), but samples from protected sites in each management category group had higher species richness than those from unprotected sites (Fig. 2a; χ2=11.18, df=3, P=0.011). There was large variation but no significant difference in abundance, rarefied richness or endemicity among management category groups and unprotected sites (Fig. 2 and Supplementary Table 1). The heterogeneity within groups could reflect conservation objectives not captured in a protected area’s IUCN category35 or differences in how these categories have been applied20.
Effects of protection within and among land uses
As all analyses above accounted for differences in elevation, slope and agricultural suitability, differences in land use are the most likely explanation for higher species richness and total abundance in samples from protected sites. We used two approaches to test this hypothesis. First, we included site-specific land use as a fixed effect in our models; and second, we restricted our dataset to sites matched by land use across the protected area boundary (Fig. 1d). In both cases, we also explored whether the effect of protection varied with latitudinal zone and taxonomic group.
Land use and the effects of protection
The effect of protection on within-sample species richness, abundance and endemicity varied among land uses (Fig. 3; χ2=15.26, df=7, P=0.033; χ2=19.12, df=7, P=0.008; χ2=25.05, df=7, P=0.001, respectively) but again the effect on rarefied richness did not (Fig. 3c and Supplementary Table 3). The last result reinforces the finding that samples from protected sites do not have more species for a given number of individuals, even across land uses. Protection had little effect on biodiversity measures at sites within primary and secondary vegetation, probably because such sites tend to experience limited human pressure whether outside or inside protected areas. However, in human-dominated land uses—particularly plantation and cropland—samples from protected sites contained significantly more individuals and species than those from unprotected sites (Fig. 3a,b), but only in the tropics (Supplementary Table 2, Supplementary Fig. 2e and f). The greater effect within the tropics is encouraging given they are often considered a high conservation priority (for example, ref. 36). However, given recent acceleration in human activity in the tropics37, tropical landscapes may be experiencing an extinction debt that has already been repaid in temperate protected areas. Perhaps surprisingly, endemicity was lower at protected than unprotected sites in human-dominated land uses, particularly in cropland (Fig. 3d and Supplementary Table 2) and for vertebrates and plants (Supplementary Table 2 and Supplementary Fig. 2d). This effect suggests that protected areas in human-dominated land uses may either benefit species with wide ranges or may have been located specifically to protect them (for example, migratory birds).
If protection ameliorates human pressures even within land uses, differences in land-use intensity could explain the higher biodiversity at protected than unprotected sites within the same land-use type. The PREDICTS database specifies three levels of use intensity within each land-use type33: minimal (for example, very limited levels of disturbance for natural land uses, low-intensity agriculture), light (for example, some extraction of timber, hunting or pesticide application) and intense (for example, clear-felling, high level of hunting, intensive agriculture, highly urbanized). Even when comparing unprotected and protected sites experiencing the same use intensity in human-dominated land uses, samples from protected sites consistently contained more individuals and species than those from unprotected sites (Supplementary Table 2 and Supplementary Fig. 2i,j). These results suggest an influence of factors not captured in our measure of use intensity (for example, the condition of the wider landscape, or habitat restoration).
Effect of protection when sites are matched by land use
Analysing only the sites within each study for which land use could be matched across the protected area boundary (Fig. 1d), we found no significant effect of protection on any biodiversity measure for any management category group, taxonomic group or latitudinal zone (Supplementary Table 3 and Supplementary Fig. 4). However, samples from protected areas that were both young (<20 years) and small (<400 km2) had higher species richness than samples from unprotected sites (Fig. 4; χ2=16.22, df=4, P=0.003); rarefied richness again did not differ significantly (Supplementary Table 3). It is possible that more recently designated protected areas have targeted areas of high species richness more precisely through use of spatial prioritization algorithms (for example, ref. 38), or that older protected areas have declined in species richness. The effect of protected area size/age class on abundance and endemicity also varied among taxonomic groups (χ2=25.38, df=8, P=0.001; χ2=16.64, df=8, P=0.034, respectively, Supplementary Fig. 4), with higher invertebrate abundance, vertebrate abundance and plant endemicity in samples from larger, older protected areas. These results suggest that larger, older protected areas may have been designated to protect, or have retained or increased the local abundance of animals and less geographically widespread plants.
Estimating the global effectiveness of protected areas
We extrapolated from our models of local biodiversity inside and outside protected areas (Fig. 3) to estimate the effectiveness of the current global protected area network at retaining site-level biodiversity. Our measure of effectiveness would be 0% if sites within protected areas are, on average, as diverse as unprotected sites, and 100% if the biodiversity of protected sites is, on average, as high as for ‘pristine’ sites (minimally-impacted primary vegetation; note that the scale is not bounded at either 0% or 100%). On this scale, we estimate that on average the global protected area network is 41% (95% CI: 2 to 81%) effective at retaining within-sample species richness and 54% (95% CI: 0 to 136%) effective at retaining local abundance.
How to improve the global protected area network
An important question for environmental decision makers is whether biodiversity will benefit more from increasing restrictions on human activity in existing protected areas or from expanding the network39,40. Although the trend towards higher species richness and abundance at sites in protected areas with more restrictive management objectives was not statistically significant, the coefficients suggest the effects of management restrictiveness may be large (Fig. 2a,b). The large uncertainty seen in these models reflects both a lack of precise data on the objectives of protected area management and on effectiveness, and a lower number of sites with the most restrictive management. Better information on protected area management intent and effectiveness is needed to confidently quantify the biodiversity outcome of increasing management restrictions. However, to demonstrate the importance of having improved information on management, we used our model coefficients (Fig. 2a,b) to estimate that the effectiveness of the protected area network could be increased to 94% (95% CI: 50 to 139%) for average within-sample species richness and 167% (95% CI: 0 to 392%) for average within-sample abundance if all protected areas enforced the most restrictive management objectives (that is, IUCN categories I or II). To raise average within-sample species richness worldwide by the same amount solely through expanding the current protected area network (i.e., with the existing distribution among IUCN categories) would require protection of 22% (95% CI: 12 to 63%) of terrestrial area; for within-sample abundance, the corresponding figure is 31% (95% CI: 13 to 299%). The wide CIs on these estimates highlight the urgent need for improved data on the management of protected areas. To help decide whether to expand or change the management of the protected area network requires information on the costs of expansion versus changes in management, the representation of species not currently under protection41 and the extent to which these options achieve other globally agreed conservation targets42. Nevertheless, other recent studies also suggest that increasing the performance, rather than the total coverage of protected areas, may achieve the desired outcomes for biodiversity more efficiently39,40,43,44: this is an important issue requiring further study.
In summary, these first detailed global analyses of site-level data on a large, taxonomically broad set of species show that, overall, samples from protected sites contained more individuals and species than samples from unprotected sites. By contrast, protected sites did not consistently have higher rarefied richness or levels of endemicity—both measures of community characteristics that are often considered when setting conservation priorities. The greatest differences in species richness and abundance occurred across land uses: protected areas are most effective where they minimize human-dominated land use, especially where they safeguard primary or mature secondary vegetation. However, the positive effect of protection within human-dominated land uses, particularly in the tropics, shows that land use is not the only cause of higher biodiversity within protected areas. Better data on management is needed to quantify biodiversity benefits of restricting human activity inside protected areas but, if the trend in our data is confirmed, more restrictive protected area management across the current network could be as important as extending the network. Importantly, we cannot discern whether protection has prevented losses in site-level biodiversity seen in surrounding areas, increased numbers of individuals, or retained a pre-existing biodiversity gradient. Nonetheless, these analyses represent a substantial advance in knowledge about several measures of biodiversity inside versus outside protected areas. Our results reinforce recent calls45 for increased support and recognition of the importance of protected areas worldwide10,11, but highlight that the network is not currently effective for all measures of local biodiversity.
For each sampling location or site in the PREDICTS database (November 2014 version), we calculated within-sample species richness, total abundance of individuals, rarefied richness (based on the fewest individuals at any site within each study) and community weighted mean log10 geographic range size—the inverse of which was then plotted to give our endemicity measure. Each species’ range size was derived from its global occurrence in the Global Biodiversity Information Facility database. We recognise biases in the Global Biodiversity Information Facility data, but these are mitigated to some extent by our hierarchical modelling approach and our estimates compare reasonably well with estimates based on other data sources, listed in full in the Supplementary Information. Land use was classified using the study authors’ description for each site; this method has been shown to be repeatable33. Sites were considered to be protected if their geographical coordinates fell inside protected areas from the World Database on Protected Areas34 (see Supplementary Methods). We then derived two datasets: the first included all studies with sites inside and outside protected areas (all-sites data; Fig. 1b); the second retained only those sites from each study for which land use could be matched across the protected area boundary (matched-sites data 2; Fig. 1d). All sources of biodiversity data are listed in the Supplementary Information.
We used generalized linear mixed-effects models to account for differences in response variables due to study-specific methodologies and the spatial structure of sites46. The PREDICTS data present a rare opportunity to compare sites inside and outside protected areas, but do not have the geographic coverage required for a stricter counterfactual approach14,17 in which sites are individually matched. To reduce the risk that any differences observed between sites inside and outside protected areas were caused by biases in the location of protected areas27, we considered elevation47 and derived slope at c. 1 km2 resolution and agricultural suitability48 at 10 km2 resolution as covariates in all models (see Supplementary Information for further details). To ensure independence of all variables in the model, we intentionally included only these three confounding variables that we considered to be fully independent of the presence of a protected area. For example, distances to roads and markets are affected by the presence of protected areas so are not independent confounding factors (see Supplementary Information for details). We sequentially compared models with and without each fixed effect and at each step dropped the term with the highest P-value, until all terms had P<0.05 (ref. 49).
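The authors fitted generalized linear mixed-effects models in R with lme4 (ref. 49; their scripts are cited below). Purely to illustrate the modelling structure described here, with study as a random intercept and protection plus the three confounders as fixed effects, a minimal Python analogue on simulated data might look like the following sketch; all numbers and variable names are invented, and a linear model on log richness stands in for the authors' generalized models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for study in range(20):                 # among-study differences in methodology
    study_effect = rng.normal(0, 0.5)
    for _ in range(30):                 # sites within each study
        protected = rng.integers(0, 2)
        elevation, slope, ag_suit = rng.normal(0, 1, 3)
        log_rich = (2.0 + 0.1 * protected - 0.05 * elevation
                    + study_effect + rng.normal(0, 0.3))
        rows.append(dict(study=study, protected=protected, elevation=elevation,
                         slope=slope, ag_suit=ag_suit, log_richness=log_rich))
df = pd.DataFrame(rows)

# Random intercept per study; protection and confounders as fixed effects.
model = smf.mixedlm("log_richness ~ protected + elevation + slope + ag_suit",
                    data=df, groups=df["study"])
print(model.fit().summary())
```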
Assessing protection effects
We tested for biodiversity differences between sites inside and outside protected areas using the all-sites data, treating protection status (inside vs outside a protected area) as a fixed effect. We then tested whether biodiversity measures differed between management category groups by re-coding IUCN category as a four-level factor: unprotected, IUCN category III–VI, IUCN category unknown, and IUCN category I and II.
Assessing protection effects within and among land uses
We used two approaches to test whether biodiversity differences between protected and unprotected sites varied with land use. First, using the all-sites data, we modelled the response of each biodiversity measure to protection status, land use, and their interaction. We also tested for the three-way interaction between land use, protection and either use intensity, latitudinal zone or taxonomic group. Second, using the matched-sites data, we re-ran models with protection status, and then with management category group as a fixed effect. We also split the matched-sites data by latitudinal zone and taxonomic groups to assess whether these factors influenced the effect of protection. Finally, we tested whether the site-level biodiversity response to protection varied with the size/age class of the protected area [four-level factor with all combinations of young (<20 years), old (20–85 years), small (<400 km2) and large (400–12,000 km2); these thresholds between categories were selected to give a similar number of sites in each group].
Estimating global protected area effectiveness
The global effectiveness of protected areas (e) was estimated from e=1−(1−i)/(1−o), where modelled site-level biodiversity inside (i) and outside (o) protected areas are expressed as a proportion of that under ‘pristine’ conditions. We calculated the ratio of i/o from the model estimates for biodiversity inside relative to outside protected areas in each land use (Fig. 3), where each land-use parameter was weighted by the proportion of global terrestrial area within that land-use type. This value of i/o could then be used to solve an equation expressing the global state of site-level biodiversity: 1−r=ai+(1−a)o, where r is the estimated global average loss of site-level biodiversity relative to pristine46 and a is the fraction of the total land area that is protected50. Solving this equation for i and o allowed us to estimate e. Finally, by using estimates for the effect of protection in IUCN categories I and II (Fig. 2a,b) to give i/o, we estimated e under the more restrictive management scenario. By rearranging the equations we estimated the total protected area (a) needed to obtain the same average local biodiversity outcome (1−r) inferred under this more restrictive management scenario. See Supplementary Information for more details.
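As an arithmetic check of the procedure just described (the input numbers here are made up, not the paper's fitted values), the two equations can be solved directly:

```python
def protected_area_effectiveness(ratio_io, a, r):
    """Solve 1 - r = a*i + (1 - a)*o together with i = ratio_io * o,
    then compute e = 1 - (1 - i) / (1 - o)."""
    o = (1 - r) / (a * ratio_io + (1 - a))   # biodiversity outside, relative to pristine
    i = ratio_io * o                          # biodiversity inside, relative to pristine
    e = 1 - (1 - i) / (1 - o)                 # effectiveness of protection
    return i, o, e

# Illustrative inputs: sites inside 10% more diverse than outside,
# 15.4% of land protected, 14% average loss relative to pristine.
print(protected_area_effectiveness(ratio_io=1.10, a=0.154, r=0.14))
```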
The biodiversity data that support the findings of this study are available in the Natural History Museum data portal (data.nhm.ac.uk) with the identifier dx.doi.org/10.5519/0095544. R scripts are available at http://github.com/claudialouisegray/PREDICTS_WDPA.
How to cite this article: Gray, C. L. et al. Local biodiversity is higher inside than outside terrestrial protected areas worldwide. Nat. Commun. 7:12306 doi: 10.1038/ncomms12306 (2016).
Geldmann, J. et al. Effectiveness of terrestrial protected areas in reducing habitat loss and population declines. Biol. Conserv. 161, 230–238 (2013).
Juffe-Bignoli, D. et al. Protected Planet Report 2014. Available at http://www.unep-wcmc.org/resources-and-data/protected-planet-report-2014 (UNEP-WCMC, 2014).
CBD. Decision X/2, The strategic plan for biodiversity 2011–2020 and the Aichi Biodiversity Targets, Nagoya, Japan, 18 to 29 October 2010. Available at http://www.cbd.int/decision/cop/default.shtml?id=13164 (2010).
Tittensor, D. P. et al. A mid-term analysis of progress toward international biodiversity targets. Science 346, 241–244 (2014).
Leverington, F., Costa, K. L., Pavese, H., Lisle, A. & Hockings, M. A global analysis of protected area management effectiveness. Env. Manag. 46, 685–698 (2010).
Laurance, W. F. et al. Averting biodiversity collapse in tropical forest protected areas. Nature 489, 290–294 (2012).
Geldmann, J., Joppa, L. N. & Burgess, N. D. Mapping change in human pressure globally on land and within protected areas. Conserv. Biol. 28, 1604–1616 (2014).
Craigie, I. D. et al. Large mammal population declines in Africa’s protected areas. Biol. Conserv. 143, 2221–2228 (2010).
WWF. Living Planet Report 2014: species and spaces, people and places. Available at http://wwf.panda.org/about_our_earth/all_publications/living_planet_report (WWF, 2014).
Watson, J. E. M., Dudley, N., Segan, D. B. & Hockings, M. The performance and potential of protected areas. Nature 515, 67–73 (2014).
Scheffer, M. et al. Creating a safe operating space for iconic ecosystems. Science 347, 1317–1319 (2015).
McCarthy, D. P. et al. Financial costs of meeting global biodiversity conservation targets: current spending and unmet needs. Science 338, 946–949 (2012).
Naidoo, R. & Iwamura, T. Global-scale mapping of economic benefits from agricultural lands: Implications for conservation priorities. Biol. Conserv. 140, 40–49 (2007).
Joppa, L. N. & Pfaff, A. Global protected area impacts. Proc. R. Soc. B 278, 1633–1638 (2011).
DeFries, R., Hansen, A., Newton, A. C. & Hansen, M. C. Increasing isolation of protected areas in tropical forests over the past twenty years. Ecol. Appl. 15, 19–26 (2005).
Joppa, L. N., Loarie, S. R. & Pimm, S. L. On the protection of ‘protected areas’. Proc. Natl Acad. Sci. USA 105, 6673–6678 (2008).
Nolte, C., Agrawal, A., Silvius, K. M. & Soares-Filho, B. S. Governance regime and location influence avoided deforestation success of protected areas in the Brazilian Amazon. Proc. Natl Acad. Sci. USA 110, 4956–4961 (2013).
Scharlemann, J. P. W. et al. Securing tropical forest carbon: the contribution of protected areas to REDD. Oryx 44, 352–357 (2010).
Nelson, A. & Chomitz, K. M. Effectiveness of strict vs. multiple use protected areas in reducing tropical forest fires: A global analysis using matching methods. PLoS ONE 6, e22722 (2011).
Dudley, N. (ed.) Guidelines for Applying Protected Area Management Categories. IUCN (2008).
Brun, C. et al. Analysis of deforestation and protected area effectiveness in Indonesia: a comparison of Bayesian spatial models. Glob. Environ. Change 31, 285–295 (2015).
Coetzee, B. W. T., Gaston, K. J. & Chown, S. L. Local scale comparisons of biodiversity as a test for global protected area ecological performance: a meta-analysis. PLoS ONE 9, e105824 (2014).
Ferraro, P. J. et al. More strictly protected areas are not necessarily more protective: evidence from Bolivia, Costa Rica, Indonesia, and Thailand. Environ. Res. Lett. 8, 025011 (2013).
Pfaff, A., Robalino, J., Sandoval, C. & Herrera, D. Protected area types, strategies and impacts in Brazil’s Amazon: public protected area strategies do not yield a consistent ranking of protected area types by impact. Phil. Trans. R. Soc. B 370, 20140273 (2015).
Abernethy, K. A., Coad, L., Taylor, G., Lee, M. E. & Maisels, F. Extent and ecological consequences of hunting in Central African rainforests in the twenty-first century. Phil. Trans. R. Soc. B 368, 20120303 (2013).
Gotelli, N. J. & Colwell, R. K. Quantifying biodiversity: procedures and pitfalls in the measurement and comparison of species richness. Ecol. Lett. 4, 379–391 (2001).
Joppa, L. N. & Pfaff, A. High and far: biases in the location of protected areas. PLoS ONE 4, e8273 (2009).
Beresford, A. E. et al. Protection reduces loss of natural land-cover at sites of conservation importance across Africa. PLoS ONE 8, e65370 (2013).
Carranza, T., Balmford, A., Kapos, V. & Manica, A. Protected area effectiveness in reducing conversion in a rapidly vanishing ecosystem: The Brazilian Cerrado. Conserv. Lett. 7, 216–223 (2014).
Andam, K. S., Ferraro, P. J., Pfaff, A., Sanchez-Azofeifa, G. A. & Robalino, J. A. Measuring the effectiveness of protected area networks in reducing deforestation. Proc. Natl Acad. Sci. USA 105, 16089–16094 (2008).
Chomitz, K. M. & Gray, D. A. Roads, land use, and deforestation: a spatial model applied to Belize. World Bank Econ. Rev. 10, 487–512 (1996).
Deininger, K. & Minten, B. Determinants of deforestation and the economics of protection: an application to Mexico. Am. J. Agric. Econ. 84, 943–960 (2002).
Hudson, L. N. et al. The PREDICTS database: a global database of how local terrestrial biodiversity responds to human impacts. Ecol. Evol. 4, 4701–4735 (2014).
IUCN and UNEP. The World Database on Protected Areas (WDPA), July 2014. Available at www.protectedplanet.net (UNEP-WCMC, 2014).
Boitani, L. et al. Change the IUCN protected area categories to reflect biodiversity outcomes. PLoS Biol. 6, e66 (2008).
Myers, N., Mittermeier, R. A., Mittermeier, C. G., da Fonseca, G. A. & Kent, J. Biodiversity hotspots for conservation priorities. Nature 403, 853–858 (2000).
Ellis, E. C. et al. Used planet: a global history. Proc. Natl Acad. Sci. USA 110, 7978–7985 (2013).
Kremen, C. et al. Aligning conservation priorities across taxa in Madagascar with high-resolution planning tools. Science 320, 222–226 (2008).
Barnes, M. Aichi targets: protect biodiversity, not just area. Nature 526, 195 (2015).
Pressey, R. L., Visconti, P. & Ferraro, P. J. Making parks make a difference: poor alignment of policy, planning and management with protected-area impact, and ways forward. Phil. Trans. R. Soc. B 370, 20140280 (2015).
Venter, O. et al. Targeting global protected area expansion for imperiled biodiversity. PLoS Biol. 12, e1001891 (2014).
Di Marco, M. et al. Synergies and trade-offs in achieving global biodiversity targets. Conserv. Biol. 30, 189–195 (2016).
Fuller, R. A. et al. Replacing underperforming protected areas achieves better conservation outcomes. Nature 466, 365–367 (2010).
Costelloe, B. et al. Global biodiversity indicators reflect the modeled impacts of protected area policy change. Conserv. Lett. 9, 14–20 (2015).
Noss, R. F. et al. Bolder thinking for conservation. Conserv. Biol. 26, 1–4 (2012).
Newbold, T. et al. Global effects of land use on local terrestrial biodiversity. Nature 520, 45–50 (2015).
Danielson, J. J. & Gesch, D. B. Global multi-resolution terrain elevation data 2010 (GMTED2010). US Geological Survey Open File Report 2011–1073 (2011).
Fischer, G., van Velthuizen, H. T., Shah, M. M. & Nachtergaele, F. O. Global Agro-Ecological Assessment for Agriculture in the 21st Century: Methodology and Results. Plate 46: Suitability for rain-fed crops (maximizing technology mix). International Institute for Applied Systems Analysis and Food and Agriculture Organization of the United Nations (2002).
Bates, D. lme4: Mixed-effects modeling with R. Available at http://lme4.r-forge.r-project.org/lMMwR/lrgprt.pdf (2010).
Butchart, S. H. M. et al. Shortfalls and solutions for meeting national and global conservation area targets. Conserv. Lett. 8, 329–337 (2015).
Sandvik, B. World Borders Dataset 0.3. Available at http://thematicmapping.org/downloads/world_borders.php (2016).
We thank the hundreds of data contributors; all PREDICTS project volunteers, masters and PhD students that collated records; and Adriana De Palma, Helen Phillips, Diego Juffe-Bignoli, Neil Burgess, Max Gray, Daniel Ingram, Valerie Kapos, Naomi Kingston, Sarah Luke and the protected areas team at UNEP-WCMC for comments and assistance. We thank the School of Life Sciences at the University of Sussex for support and the Natural History Museum for a GIA travel award. The PREDICTS project is funded by the UK Natural Environment Research Council (NERC, grant number: NE/J011193/2). PREDICTS is endorsed by the Group on Earth Observations Biodiversity Observation Network (GEO BON). This is a contribution from the Imperial College Grand Challenges in Ecosystem and the Environment Initiative, and the Sussex Sustainability Research Programme.
The authors declare no competing financial interests.
About this article
Cite this article
Gray, C., Hill, S., Newbold, T. et al. Local biodiversity is higher inside than outside terrestrial protected areas worldwide. Nat Commun 7, 12306 (2016). https://doi.org/10.1038/ncomms12306
|
Scientists have spotted a strange new world.
It's almost exactly the size of Earth. It's rocky. It's relatively close (41 light-years away). And, for the first time, astronomers used the most powerful space telescope ever built — the James Webb Space Telescope — to find this exoplanet, which is a planet beyond our solar system. It's called LHS 475 b.
"Webb is bringing us closer and closer to a new understanding of Earth-like worlds outside our solar system, and the mission is only just getting started," Mark Clampin, the director of NASA's Astrophysics Division, said in a statement.
The planet, however, differs from Earth in some major ways. LHS 475 b whips around its small star every two days, which is an extremely close orbit. But the star, called a "red dwarf," is half the size of the sun, so it's cooler. In sum, this world is a "few hundred degrees warmer than Earth," NASA noted.
"With this telescope, rocky exoplanets are the new frontier."
Importantly, LHS 475 b may still have an atmosphere. But confirming what exactly it's composed of will require repointing Webb at this planet and capturing more detailed information, which is scheduled to happen later this year. "The observatory’s data are beautiful," Erin May, an astrophysicist at the Johns Hopkins University Applied Physics Laboratory, said in a statement. "The telescope is so sensitive that it can easily detect a range of molecules, but we can’t yet make any definitive conclusions about the planet’s atmosphere." The large Webb telescope, with a mirror over 21 feet across, is designed to capture light from some of the earliest galaxies that ever formed, billions of years ago. But it's also equipped with special instruments, called spectrographs, that can detect what's in an exoplanet's skies. Mashable previously reported how we can peer into far-off worlds:
Astronomers wait for planets to travel in front of their bright stars. This starlight passes through the exoplanet's atmosphere, then through space, and ultimately into instruments called spectrographs aboard the Webb telescope (a strategy called "transit spectroscopy"). They're essentially hi-tech prisms, which separate the light into a rainbow of colors. Here's the big trick: Certain molecules, like water, in the atmosphere absorb specific types, or colors, of light. Each molecule has a specific diet. So if that color doesn't show up in the spectrum of colors observed by a Webb spectrograph, that means it got absorbed by (or "consumed" by) the exoplanet's atmosphere. In other words, that element is present in that planet's skies.
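The idea can be sketched numerically: compare the star's spectrum during and outside a transit and look for wavelengths where extra light goes missing. Everything below is invented for illustration (the dip positions loosely mimic water-vapour bands; none of it is Webb data):

```python
import numpy as np

wavelengths = np.linspace(0.6, 5.0, 500)          # microns
out_of_transit = np.ones_like(wavelengths)        # normalized stellar spectrum

def dip(center, depth, width=0.05):
    """A made-up absorption feature centred on `center` microns."""
    return depth * np.exp(-((wavelengths - center) / width) ** 2)

# During transit the planet blocks a little light everywhere, and its
# atmosphere absorbs a little extra at specific wavelengths.
in_transit = out_of_transit - 2e-4 - dip(1.4, 1e-4) - dip(2.7, 8e-5)

transit_depth = (out_of_transit - in_transit) / out_of_transit
features = wavelengths[transit_depth > 2.5e-4]    # wavelengths with extra absorption
print(features.min(), features.max())
```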
There isn't another operational telescope today that can sleuth out what lies in the atmosphere of an Earth-sized planet. Earth-sized planets are relatively small, which is why larger, Jupiter-like exoplanets are easier to detect and analyze.
It's likely that Webb will detect and analyze other Earth-sized, rocky worlds. "These first observational results from an Earth-size, rocky planet open the door to many future possibilities for studying rocky planet atmospheres with Webb," NASA's Clampin said.
Some of these rocky orbs orbit in a solar system's habitable zone, a temperate region where liquid water can exist on the surface. Webb can help reveal what they're really like.
"With this telescope, rocky exoplanets are the new frontier," Johns Hopkins astronomer Kevin Stevenson also shared in a statement. |
What Is the Circle of Fifths?
The circle of fifths is a diagram that shows the relationship between musical keys. It's called the "circle of fifths" because, moving clockwise, each key is a perfect fifth higher than the previous one.
Starting from C, which is located at the top of the circle, you can move clockwise around the circle to find the next key. The next key is G, which is a fifth higher than C. The next key after G is D, which is a fifth higher than G. If you keep going around the circle in this manner, you'll eventually end up back at C.
Why use the circle of fifths?
The circle of fifths is important because it helps you understand the relationships between keys and the chords that are commonly used in those keys. For example, the chords in the key of C are C, Dm, Em, F, G, Am, and Bdim. If you move clockwise around the circle to the next key, G, the chords in that key are G, Am, Bm, C, D, Em, and F#dim. You'll notice that the chords in the key of G are mostly the same as the chords in the key of C, but they're in a different order and a few of them are different.
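To make these relationships concrete, here is a small sketch (illustrative only, and using sharp spellings throughout rather than the flats a musician would normally choose for some keys) that generates the clockwise circle and the diatonic chords of any major key:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circle_of_fifths(start="C"):
    """The 12 keys reached by repeatedly moving up a perfect fifth (7 semitones)."""
    i = NOTES.index(start)
    return [NOTES[(i + 7 * k) % 12] for k in range(12)]

MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]               # semitone offsets of the major scale
QUALITIES = ["", "m", "m", "", "", "m", "dim"]     # I ii iii IV V vi vii(dim)

def diatonic_chords(key):
    root = NOTES.index(key)
    return [NOTES[(root + s) % 12] + q for s, q in zip(MAJOR_STEPS, QUALITIES)]

print(circle_of_fifths())     # ['C', 'G', 'D', 'A', 'E', 'B', 'F#', 'C#', 'G#', 'D#', 'A#', 'F']
print(diatonic_chords("C"))   # ['C', 'Dm', 'Em', 'F', 'G', 'Am', 'Bdim']
print(diatonic_chords("G"))   # ['G', 'Am', 'Bm', 'C', 'D', 'Em', 'F#dim']
```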
Here are some practical applications of the circle of fifths:
- Transposing: If you need to play a song in a different key, you can use the circle of fifths to figure out which chords to play. For example, if a song is in the key of C and you want to play it in the key of G, you can use the circle of fifths to figure out that you can play the chords G, Am, Bm, C, D, Em, and F#dim.
- Chord Progressions: The circle of fifths can also be used to create chord progressions that sound good together. You can start with a chord in one key, move to a chord in the next key on the circle, and continue around the circle to create a pleasing sequence of chords.
- Understanding Modes: The circle of fifths can help you understand the different modes, which, simply put, are scales that are based on different starting notes within a key. For example, the Dorian mode is based on the second note of the major scale, and the Phrygian mode is based on the third note. The circle of fifths can help you visualize the relationships between those different modes.
- Harmonic Analysis: You can use the circle of fifths to analyze the harmony of a piece of music. You can identify the key of the piece and then use the circle of fifths to identify the chords that are commonly used in that key. This can help you understand how the piece is structured and how the different chords relate to each other.
The circle of fifths and ear training
The circle of fifths can be a very useful tool for ear training, especially for training the ear to recognize chord progressions and key changes. By listening to the way chords progress around the circle of fifths, you can learn to recognize the patterns that are common to many different types of music. For example, you might notice that the chords in the key of C are often followed by the chords in the key of G, and that this progression creates a sense of resolution and forward momentum.
You can also use the circle of fifths to practice identifying different chords by ear. You can listen to a chord progression and try to identify which chords are being used, and then use the circle of fifths to understand the relationships between those chords.
In addition, the circle of fifths can be used to develop a sense of relative pitch. By practicing singing or playing scales and arpeggios around the circle, you can learn to recognize the interval relationships between different notes and chords.
Overall, the circle of fifths can be a valuable tool to improve your ear training skills and develop a deeper understanding of music theory, so make sure to have one in your guitar or violin case, on your desk, or anywhere else where you might need it - at least until you have memorized it!
Training with the circle of fifths in the EarMaster app
You can train using the circle of fifths in most of the ear training and sight-singing exercises of the EarMaster app on iOS, Android, Windows and Mac.
Simply choose Customized Exercise in the home screen, then pick the keys of your choice on the circle of fifths under "Keys and roots" and choose whether the keys will be changed following the circle of fifths (clockwise) or the circle of fourths (counterclockwise).
You can use that option when training with intervals, chords, scales, progressions, and melodies.
Happy ear training! |
DIY Circuit Design: Waveform Clipping
Electronic circuits such as amplifiers and modulators have a particular range over which they can accept input signals. Any signal whose amplitude exceeds this range can cause distortion at the output of the circuit, or may even damage the circuit components themselves. Since most of these devices work from a single positive supply, the input range is also on the positive side. Natural signals such as sine waves and audio signals have both positive and negative half cycles and varying amplitude, so they must be modified in such a way that single-supply electronic circuits can handle them.
Clipping a wave is the common technique applied to input signals to modify them so that they fall within the operating range of a circuit. Clipping is done by eliminating the portions of the waveform which lie outside the input range of the circuit to which the signal is applied. This article discusses the details of clipping and practical positive and negative clipping circuits.
The most common circuit in which clipping of a waveform is used is the rectifier. In a rectifier circuit which produces a positive output voltage from a sinusoidal input, the negative half cycles of the sine wave are simply clipped away. This method of clipping is called negative clipping, and the method in which the entire positive half cycles are clipped away is called positive clipping. Positive clipping is used to obtain negative voltages with rectifiers in dual-voltage power supplies. Using the technique of clipping, a waveform having both positive and negative half cycles can thus be restricted to either the positive or the negative side only.
The following figures represent the clipping of a sine wave using both the positive and negative clipping devices.
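The effect is also easy to visualize numerically before looking at the hardware; the sketch below assumes an ideal clipper (a real diode clipper leaves a residue of roughly one forward voltage drop):

```python
import numpy as np

t = np.linspace(0.0, 2e-3, 1000)               # 2 ms of signal
v_in = 2.0 * np.sin(2 * np.pi * 1e3 * t)       # 1 kHz sine wave, 2 V amplitude

# "Negative clipping": the negative half cycles are removed (rectifier-style).
v_neg_clipped = np.clip(v_in, 0.0, None)

# "Positive clipping": the positive half cycles are removed.
v_pos_clipped = np.clip(v_in, None, 0.0)

print(v_neg_clipped.min(), v_pos_clipped.max())   # both 0.0 for an ideal clipper
```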
Positive or negative clipping of a sine wave can be achieved using a single diode. Here the positive and negative clipping of a sine wave is demonstrated with the help of a Wien bridge oscillator circuit, which generates a pure sine wave with minimal distortion.
The circuit used here to generate the sine wave based on Wien Bridge has both the frequency and amplitude adjustments. The circuit diagram of the variable frequency sine wave oscillator is shown in the following:
The frequency of the above circuit can be varied by adjusting the potentiometer R2, and the amplitude of the waveform can be adjusted with the potentiometer R. The frequency of the sine wave generated by the circuit depends on the components R1, R2, C1 and C2, and the equation for the frequency is given below:
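For a standard Wien bridge network (assuming the component labels here match the schematic referenced in the text), the oscillation frequency is

```latex
f = \frac{1}{2\pi\sqrt{R_1 R_2 C_1 C_2}}
```

which reduces to f = 1/(2πRC) when R1 = R2 = R and C1 = C2 = C.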
For the ease of adjusting the amplitude of the wave to obtain proper sinusoidal sweep, a coarse and fine adjustment has been implemented using potentiometers. A low value (1K) potentiometer is connected in series with the high value (100K) potentiometer so that the coarse adjustment can be done with the high value resistor and the fine adjustment with the low value resistor.
The clipping circuit requires only a diode. In the case of positive clipping, the negative end (cathode) of the diode is grounded and the wave is applied to the positive end (anode). The circuit of the positive clipper is shown below:
The above circuit forms the simplest positive clipper, but it cannot be used directly in most applications since it cannot drive a load: if a load of very small resistance is connected at the output, the circuit may not work properly. Hence a buffer (current amplifier) must be added at the output, as shown in the following diagram:
The 1N4148 diode is used in the circuit since it can operate at very high frequencies and its forward voltage drop is low (around 0.3 V at very small currents). |
Temporal range: Early Jurassic–Recent, 190–0 Ma
Monarch butterfly and Luna moth, two widely recognized lepidopterans
Lepidoptera ( ) is a large order of insects that includes moths and butterflies (both called lepidopterans). It is one of the most widespread and widely recognizable insect orders in the world, encompassing moths and the three superfamilies of butterflies, skipper butterflies, and moth-butterflies. The term was coined by Linnaeus in 1735 and is derived from Ancient Greek λεπίδος (scale) and πτερόν (wing). Comprising an estimated 174,250 species, in 126 families and 46 superfamilies, the Lepidoptera show many variations of the basic body structure that have evolved to gain advantages in lifestyle and distribution. Recent estimates suggest that the order may have more species than earlier thought, and is among the four most speciose orders, along with the Hymenoptera, Diptera, and the Coleoptera.
Lepidopteran species are characterized by more than three derived features, some of the most apparent being the scales covering their bodies and wings, and a proboscis. The scales are modified, flattened "hairs", and give butterflies and moths their extraordinary variety of colors and patterns. Almost all species have some form of membranous wings, except for a few that have reduced wings or are wingless. Like most other insects, butterflies and moths are holometabolous, meaning they undergo complete metamorphosis. Mating and the laying of eggs are carried out by adults, normally near or on host plants for the larvae. The larvae are commonly called caterpillars, and are completely different from their adult moth or butterfly form, having a cylindrical body with a well-developed head, chewing mouth parts (mandibles), and from 0 to 11 (usually 8) pairs of prolegs. As they grow, these larvae change in appearance, going through a series of stages called instars. Once fully matured, the larva develops into a pupa, referred to as a chrysalis in the case of butterflies. A few butterflies and many moth species spin a silk case or cocoon prior to pupating, while others do not, instead pupating underground.
The Lepidoptera have, over millions of years, evolved a wide range of wing patterns and coloration ranging from drab moths akin to the related order Trichoptera, to the brightly colored and complex-patterned butterflies. Accordingly, this is the most recognized and popular of insect orders with many people involved in the observation, study, collection, rearing of and commerce in these insects. A person who collects or studies this order is referred to as a lepidopterist.
Butterflies and moths play an important role in the natural ecosystem as pollinators and as food in the food chain, while the caterpillars of some species are serious agricultural pests.
- 1 Etymology
- 2 Distribution and diversity
- 3 External morphology
  - 3.1 Head
  - 3.2 Thorax
  - 3.3 Abdomen
  - 3.4 Scales
- 4 Internal morphology
- 5 Polymorphism
- 6 Reproduction and development
  - 6.1 Mating
  - 6.2 Life cycle
    - 6.2.1 Eggs
    - 6.2.2 Larvae
    - 6.2.3 Wing development
    - 6.2.4 Pupa
    - 6.2.5 Adult
- 7.1 Flight
  - 7.1.1 Navigation
  - 7.1.2 Migration
- 7.2 Communication
- 8.1 Defense and predation
- 8.2 Pollination
- 8.3 Mutualism
- 8.4 Parasitism
- 8.5 Other biological interactions
- 9 Evolution and systematics
  - 9.1 History of study
  - 9.2 Fossil record
  - 9.3 Phylogeny
  - 9.4 Taxonomy
- 10 Relationship to people
  - 10.1 Culture
  - 10.2 Pests
  - 10.3 Beneficial insects
  - 10.4 Food
  - 10.5 Health
- 11 See also
  - 11.1 Lists
- 12 References
- 13 Further reading
- 14 External links
The word Lepidoptera comes from the scientific (New Latin) term for "scaly wing", formed from the Ancient Greek λεπίς (lepis) meaning scale and πτερόν (pteron) meaning wing. Sometimes the term Rhopalocera is used to group the species that are butterflies, derived from the Ancient Greek ῥόπαλον (rhopalon) and κέρας (keras), meaning "club" and "horn" respectively, referring to the shape of the antennae of butterflies.
The origins of the common names "butterfly" and "moth" are varied and often obscure. The English word butterfly is from Old English buttorfleoge, with many variations in spelling. Other than that, the origin is unknown, although it could be derived from the pale yellow color of many species' wings suggesting the color of butter. The species of Heterocera are commonly called moths. The origins of the English word moth are clearer, deriving from the Old English moððe (cf. Northumbrian dialect mohðe) from Common Germanic (compare Old Norse motti, Dutch mot and German Motte, all meaning "moth"). Perhaps its origins are related to Old English maða meaning "maggot" or from the root of "midge", which until the 16th century was used mostly to indicate the larva, usually in reference to devouring clothes.
The etymological origins of the word "caterpillar", the larval form of butterflies and moths, are from the early 16th century, from Middle English catirpel, catirpeller, probably an alteration of Old North French catepelose: cate, cat (from Latin cattus) + pelose, hairy (from Latin pilōsus).
Distribution and diversity
Lepidoptera are among the most successful groups of insects. They are found on all continents except Antarctica. Lepidoptera inhabit all terrestrial habitats, ranging from desert to rainforest and from lowland grasslands to montane plateaus, but are almost always associated with higher plants, especially angiosperms (flowering plants). Among the most northern-dwelling species of butterflies and moths is the Arctic Apollo (Parnassius arcticus), which is found within the Arctic Circle in northeastern Yakutia at an altitude of 1,500 meters above sea level. In the Himalayas, various Apollo species such as Parnassius epaphus have been recorded up to an altitude of 6,000 meters above sea level.
Some lepidopteran species live in or on the bodies of other animals rather than in the wider environment. Coprophagous pyralid moth species, called sloth moths, such as Bradipodicola hahneli and Cryptoses choloepi, are unusual in that they are found exclusively in the fur of sloths, mammals of Central and South America. Two species of Tinea moths have been recorded feeding on horny tissue and have been bred from the horns of cattle. The larva of Zenodochium coccivorella is an internal parasite of the coccid Kermes species. Many species have been recorded breeding in natural materials or refuse such as owl pellets, bat caves, honeycombs or diseased fruit.
Of the approximately 174,250 lepidopteran species described by 2007, butterflies and skippers are estimated to comprise about 17,950, with moths making up the rest. The vast majority of Lepidoptera are found in the tropics, but substantial diversity exists on most continents. North America has over 700 species of butterflies and over 11,000 species of moths, while about 400 species of butterflies and 14,000 species of moths have been reported from Australia. The diversity of Lepidoptera in each faunal region was estimated by John Heppner in 1991, based partly on actual counts from the literature, partly on the card indices in the Natural History Museum (London) and the National Museum of Natural History (Washington), and partly on estimates:
Heppner's estimates for the five faunal regions (one of which comprises the combined Indomalayan and Australian regions) are 22,465, 11,532, 44,791, 20,491 and 47,286 species.
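As a quick arithmetic check on the figures quoted above, the minimal Python sketch below (illustrative only; the variable names are invented for this example, not taken from Heppner) sums the five regional estimates and works out the approximate share of butterflies and skippers among the species described by 2007.

```python
# Illustrative arithmetic on the species counts quoted above.
regional_estimates = [22465, 11532, 44791, 20491, 47286]  # Heppner's per-region estimates
total_regional = sum(regional_estimates)                   # 146,565 species in total

described_2007 = 174250            # lepidopteran species described by 2007 (approximate)
butterflies_and_skippers = 17950   # of which butterflies and skippers (approximate)

print(f"Sum of regional estimates: {total_regional:,}")
print(f"Butterfly and skipper share: {butterflies_and_skippers / described_2007:.1%}")  # ~10.3%
print(f"Moth share: {1 - butterflies_and_skippers / described_2007:.1%}")               # ~89.7%
```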
Lepidoptera are morphologically distinguished from other orders principally by the presence of scales on the external parts of the body and appendages, especially the wings. Butterflies and moths vary in size from microlepidoptera only a few millimeters long, to conspicuous animals with a wingspan of many inches, such as the Monarch butterfly and Atlas moth.:246 Lepidopterans undergo a four-stage life cycle: egg; larva or caterpillar; pupa or chrysalis; and imago (plural: imagines) / adult and show many variations of the basic body structure, which have evolved to gain advantages in lifestyle and distribution.
The head is where many sensory organs and the mouthparts are found. Like the adult, the larva also has a toughened, or sclerotized, head capsule. The adult head bears two compound eyes and chaetosemata, raised spots or clusters of sensory bristles unique to Lepidoptera, although many taxa have lost one or both of these spots. The antennae vary widely in form among species and even between the sexes. The antennae of butterflies are slender (filiform) with clubbed tips, those of the skippers are hooked, while those of moths have flagellar segments variously enlarged or branched. Some moths have antennae that are enlarged, or tapered and hooked at the ends.:559–560
The maxillary galeae are modified and form an elongated proboscis. The proboscis consists of one to five segments and is usually kept coiled up under the head by small muscles when it is not being used to suck up nectar from flowers or other liquids. Some basal moths still have mandibles, or separate moving jaws, like their ancestors; these moths make up the family Micropterigidae.:560
Philipp Christoph Zeller published The Natural History of the Tineinae, also on Microlepidoptera, in 1855.
Among the first entomologists to study fossil insects and their evolution was Samuel Hubbard Scudder (1837–1911), who worked on butterflies. He published a study of the Florissant deposits of Colorado, including the exceptionally preserved Prodryas persephone. Andreas V. Martynov (1879–1938) recognized the close relationship between Lepidoptera and Trichoptera in his studies on phylogeny.
Major contributions in the 20th century included the creation of the Monotrysia and Ditrysia (based on female genital structure) by Börner in 1925 and 1939. Willi Hennig (1913–1976) developed the cladistic methodology and applied it to insect phylogeny. Niels P. Kristensen, E. S. Nielsen and D. R. Davis studied the relationships among monotrysian families, and Kristensen also worked more generally on insect phylogeny and the higher Lepidoptera. While DNA-based phylogenies often differ from those based on morphology, this has not been the case for the Lepidoptera; DNA phylogenies correspond to a large extent to morphology-based phylogenies.
Many attempts have been made to group the superfamilies of the Lepidoptera into natural groups, most of which fail because one of the two groups is not monophyletic: Microlepidoptera and Macrolepidoptera, Heterocera and Rhopalocera, Jugatae and Frenatae, Monotrysia and Ditrysia.
The fossil record for Lepidoptera is sparse in comparison to that of other winged insects: they tend not to be as common as some other insects in the habitats most conducive to fossilization, such as lakes and ponds, and their juvenile stage has only the head capsule as a hard part that might be preserved. The location and abundance of the most common moth species are indicative that mass migrations of moths occurred over the Palaeogene North Sea, which is why there is a serious lack of moth fossils. Yet there are fossils, some preserved in amber and some in very fine sediments. Leaf mines are also seen in fossil leaves, although their interpretation is tricky.
Putative fossil stem group representatives of Amphiesmenoptera (the clade comprising Trichoptera and Lepidoptera) are known from the Triassic.:567 The earliest known fossil lepidopteran is Archaeolepis mane, from the Early Jurassic of Dorset, UK. The fossil belongs to a small primitive moth-like species; under a scanning electron microscope its wings show scales with parallel grooves and a characteristic wing venation pattern shared with the Trichoptera (caddisflies). Only two more sets of Jurassic lepidopteran fossils have been found, as well as 13 sets from the Cretaceous, all of which belong to primitive moth-like families. Many more fossils are known from the Tertiary, particularly from Eocene Baltic amber. The oldest genuine butterflies of the superfamily Papilionoidea have been found in the Paleocene MoClay or Fur Formation of Denmark. The best preserved fossil lepidopteran is the Eocene Prodryas persephone from the Florissant Fossil Beds.
Lepidoptera and Trichoptera (caddisflies) are more closely related to each other than to any other insect order, sharing many similarities that are lacking in others; for example, the females of both orders are heterogametic, meaning they have two different sex chromosomes, whereas in most other insects the males are heterogametic and the females have two identical sex chromosomes. The adults in both orders display a particular wing venation pattern on their forewings. The larvae of both orders have mouth structures and glands with which they make and manipulate silk. Willi Hennig grouped the two sister orders into the superorder Amphiesmenoptera. This group probably evolved in the Jurassic, having split from the now extinct Necrotaulidae. Lepidoptera descend from a diurnal moth-like common ancestor that fed on either dead or living plants.
Micropterigidae, Agathiphagidae and Heterobathmiidae are the oldest and most basal lineages of Lepidoptera. The adults of these families do not have the curled tongue, or proboscis, found in most members of the order, but instead have chewing mandibles adapted for a special diet. Micropterigidae larvae feed on leaves, fungi, or liverworts (much like the Trichoptera), while the adults chew the pollen or spores of ferns. In the Agathiphagidae, larvae live inside kauri pines and feed on seeds. In Heterobathmiidae the larvae feed on the leaves of Nothofagus, the southern beech. These families also have mandibles in the pupal stage, which help the pupa emerge from the seed or cocoon after metamorphosis.
The Eriocraniidae have a short coiled proboscis in the adult stage, and though they retain their pupal mandibles, with which they escape the cocoon, the mandibles are non-functional thereafter. Most of these non-ditrysian families are primarily leaf miners in the larval stage. In addition to the proboscis, there is a change in the scales among these basal lineages, with later lineages showing more complex perforated scales.
With the evolution of the Ditrysia in the mid-Cretaceous, there was a major reproductive change. The Ditrysia, which comprise 98% of the Lepidoptera, have two separate openings for reproduction in the females (as well as a third opening for excretion), one for mating, and one for laying eggs. The two are linked internally by a seminal duct. (In more basal lineages there is one cloaca, or later, two openings and an external sperm canal.) Of the early lineages of Ditrysia, Gracillarioidea and Gelechioidea are mostly leaf miners, but more recent lineages feed externally. In the Tineoidea, most species feed on plant and animal detritus and fungi, and build shelters in the larval stage.
The Yponomeutoidea is the first group to have significant numbers of species whose larvae feed on herbaceous plants, as opposed to woody plants. They evolved about the time that flowering plants underwent an expansive adaptive radiation in the mid-Cretaceous, and the Gelechioidea that evolved at this time also have great diversity. Whether the processes involved coevolution or sequential evolution, the diversity of the Lepidoptera and the angiosperms increased together.
- British Butterflies and Moths
- Lepidoptera Lepidoptera.pro
- Butterflies of Bulgaria
- Photography of European Butterflies and Moths
- Butterflies and Moths in the Netherlands
- Swedish Moths and Butterflies Lepidoptera.se
- Photos of Larvae and Pupae butterflies and moths. Spain
- Butterflies of Asturias – Spain
- Lepidoptera of French Antilles
- Butterflies of Asian Russia
- Butterflies from Indo China
- Butterflies of Turkey
- Moths of Jamaica
- Historic Moth illustrations
- Caught Between the Pages: Treasures from the Franclemont Collection Online virtual exhibit featuring a selection of historic entomological writings and images from the Comstock Library of Entomology at Cornell University
- Japmoth Japanese moths
- Literaturatenbank Free downloads
- Lamas, Gerardo (1990). "An Annotated List of Lepidopterological Journals" (PDF). Journal of Research on the Lepidoptera 29 (1-2): 92–104.
- Kristensen, N. P. (Ed.) 1999. Lepidoptera, Moths and Butterflies. Volume 1: Evolution, Systematics, and Biogeography. Handbuch der Zoologie. Eine Naturgeschichte der Stämme des Tierreiches / Handbook of Zoology. A Natural History of the phyla of the Animal Kingdom. Band / Volume IV Arthropoda: Insecta Teilband / Part 35: 491 pp. Walter de Gruyter, Berlin, New York.
- Nemos, F. (c. 1895). Europas bekannteste Schmetterlinge. Beschreibung der wichtigsten Arten und Anleitung zur Kenntnis und zum Sammeln der Schmetterlinge und Raupen [Europe's best-known butterflies. Description of the most important species and instructions for recognising and collecting butterflies and caterpillars] (PDF). Berlin: Oestergaard Verlag.
- Nye, I. W. B. & Fletcher, D. S. 1991. Generic Names of Moths of the World. Volume 6: xxix + 368 pp. Trustees of the British Museum (Natural History), London.
- O'Toole, Christopher. 2002. Firefly Encyclopedia of Insects and Spiders. ISBN 1-55297-612-2.
- Powell, Jerry A. (2009). "Lepidoptera". In Resh, Vincent H.; Cardé, Ring T. Encyclopedia of Insects (2 (illustrated) ed.). Academic Press. pp. 557–587.
- Harper, Douglas. "lepidoptera". The Online Etymology Dictionary. Retrieved 8 February 2011.
- Mallet, Jim (12 June 2007). "Taxonomy of Lepidoptera: the scale of the problem". The Lepidoptera Taxome Project. University College, London. Retrieved 8 February 2011.
- Capinera, John L. (2008). "Butterflies and moths". Encyclopedia of Entomology 4 (2nd ed.).
- Kristensen, Niels P.; Scoble, M. J. & Karsholt, Ole (2007). "Lepidoptera phylogeny and systematics: the state of inventorying moth and butterfly diversity". In Z.-Q. Zhang & W. A. Shear. Linnaeus Tercentenary: Progress in Invertebrate Taxonomy (Zootaxa:1668). Magnolia Press. pp. 699–747.
- Partridge, Eric (2009). Origins: an etymological dictionary of modern English. Routledge.
- Harper, Douglas; Dan McCormack (November 2001). "Online Etymological Dictionary". Online Etymological Dictionary.
- Arnett, Ross H. (July 28, 2000). "Part I: 27". American insects: a handbook of the insects of America north of Mexico (2nd ed.).
- Harper, Douglas. "moth". The Online Etymology Dictionary. Retrieved 31 March 2011.
- "Caterpillar". Dictionary.com. Retrieved 5 October 2011.
- Gullan, P. J.; P. S. Cranston (September 13, 2004). "7". The insects: an outline of entomology (3 ed.). Wiley-Blackwell. pp. 198–199.
- Stumpe, Felix. "Parnassius arctica Eisner, 1968". Russian-Insects.com. Retrieved 9 November 2010.
- Mani, M. S. (1968). Ecology and Biogeography of High Altitude Insects. Volume 4 of Series entomologica. Springer. p. 530.
- Sherman, Lee. "An OSU scientist braves an uncharted rainforest in a search for rare and endangered species" in "Expedition to the Edge". Terra, Spring 2008. Oregon State University. Retrieved 14 February 2011.
- Rau, P (1941). "Observations on certain lepidopterous and hymenopterous parasites of Polistes wasps". Annals of the Entomological Society of America 34: 355–366(12). Retrieved 14 February 2011.
- Mallet, Jim (12 June 2007). "Taxonomy of butterflies: the scale of the problem". The Lepidoptera Taxome Project. University College, London. Retrieved 8 February 2011.
- Eaton, Eric R.; Kaufman, Kenn (2007). Kaufman field guide to insects of North America. Houghton Mifflin Harcourt. p. 391.
- Tuskes, Paul M.; Tuttle, James P.& Collins, Michael M. (1996). The wild silk moths of North America: a natural history of the Saturniidae of the United States and Canada. The Cornell series in arthropod biology (illustrated ed.). Cornell University Press. p. 250.
- Green, Ken; Osborne, William S. (1994). Wildlife of the Australian snow-country: a comprehensive guide to alpine fauna (illustrated ed.). Reed. p. 200.
- Gillot, C. (1995). "Butterflies and moths". Entomology (2 ed.). pp. 246–266.
- Scoble (1995). Section The Adult Head - Feeding and Sensation, (pp. 4–22).
- Resh, Vincent H.; Ring T. Carde (July 1, 2009). Encyclopedia of Insects (2 ed.). U. S. A.: Academic Press.
- Christopher, O'Toole. Firefly Encyclopedia of Insects and Spiders (1 ed.).
- Heppner, J. B. (2008). "Butterflies and moths". In Capinera, John L. Encyclopedia of Entomology. Gale virtual reference library 4 (2 ed.). Springer Reference. p. 4345.
- Scoble, MJ. (1992). The Lepidoptera: Form, function, and diversity. Oxford Univ. Press.
- Scoble (1995). Section Scales, (pp. 63–66).
- Vukusic, P. (2006). "Structural color in Lepidoptera". Current Biology 16 (16): R621–3.
- Hall, Jason P. W.; Harvey, Donald J. (2002). "A survey of androconial organs in the Riodinidae (Lepidoptera)".
- Williams, C. M. 1947. Physiology of insect diapause. II. Interaction between the pupal brain and prothoracic glands in the metamorphosis of the giant silkworm "Platysamia cecropia". Biol. Bull. 92:89–180.
- Gullan, P. J.; P. S. Cranston (March 22, 2010). The Insects: An Outline of Entomology (4 ed.). Oxford: Wiley, John & Sons, Incorporated.
- Lighton J. R. B., Lovegrove B. G. (1990). "A temperature-induced switch from diffusive to convective ventilation in the honeybee". Journal of Experimental Biology 154 (1): 509–516.
- Gullan & Cranston (2005). "Polymorphism and polyphenism". The Insects: An Outline of Entomology. pp. 163–164.
- Noor, M. A.; Parnell, R. S.; Grant, B. S. (2008). Humphries, Stuart, ed. "A reversible color polyphenism in American Peppered Moth (Biston betularia cognataria) caterpillars".
- Kunte, Krushnamegh (2000). Butterflies of Peninsular India. Part of Project lifescape. Orient Blackswan. ISBN 81-7371-354-5, ISBN 978-81-7371-354-5.
- Ivy I. G., Morgun D. V., Dovgailo K. E., Rubin N. I., Solodovnikov I. A. Butterflies (Hesperioidea and Papilionoidea, Lepidoptera) of Eastern Europe. CD determinant, database and software package «Lysandra». Minsk, Kiev, Moscow: 2005. In Russian.
- "Psychidae at Bug Guide".
- Sanderford, M. V.; W. E. Conner (July 1990). "Courtship sounds of the polka-dot wasp moth, Syntomeida epilais".
- Wiklund, Christer (July 1984). "Egg-laying patterns in butterflies in relation to their phenology and the visual apparency and abundance of their host plants".
- P. J. Gullan & P. S. Cranston (2010). "Life-history patterns and phases". The Insects: an Outline of Entomology (4th ed.).
- Dugdale, J. S. (1996). "Natural history and identification of litter-feeding Lepidoptera larvae (Insecta) in beech forests, Orongorongo Valley, New Zealand, with especial reference to the diet of mice (Mus musculus)". Journal of the Royal Society of New Zealand 26 (4): 251–274.
- Triplehorn, Charles A.; Johnson, Norman F. (2005). Borror and Delong's Introduction to the Study of Insects. Belmont, California: Thomson Brooks/Cole.
- Elmes, G.W.; Wardlaw J.C.; Schönrogge, K.; Thomas, J.A. & Clarke, R.T. "Food stress causes differential survival of socially parasitic caterpillars of Maculinea rebeli integrated in colonies of host and non-host Myrmica ant species". Entomologia experimentalis et applicata 110 (1): 53–63.
- Arnett, Ross H. Jr. (July 28, 2000). American Insects. A Handbook of the Insects of America North of Mexico (2 ed.). CRC press LLC. pp. 631–632.
- Clifford O. Berg (1950). "Biology of certain aquatic caterpillars (Pyralididae: Nymphula spp.) which feed on Potamogeton".
- Ehrlich, P. R.; Raven, P. H. (1964). "Butterflies and plants: a study in coevolution". Evolution 18 (4): 586–608.
- Nijhout, H. Frederik (August 17, 1991). The Development and Evolution of Butterfly Wing Patterns(Smithsonian Series in Comparative Evolutionary Biology) (1 ed.). Smithsonian Institution Scholarly Press. pp. 2–4.
- Dole, Claire Hagen (May 28, 2003). The Butterfly Gardener's Guide. Brooklyn Botanic Garden.
- James V. Ward & Peter E. Ward (1992). Aquatic Insect Ecology, Biology And Habitat. John Wiley & Sons.
- Benjamin Jantzen & Thomas Eisner (July 28, 2008). "Hindwings are unnecessary for flight but essential for execution of normal evasive flight in Lepidoptera".
- "Skippers Butterflies and Moths: Lepidoptera - Behavior And Reproduction". Net Industries and its Licensors. 2011. Retrieved 20 February 2011.
- Alex Reisner. "Speed of animals". Retrieved 20 February 2011.
- Scoble, Malcolm (July 1, 1995). The Lepidoptera: Form, Function and Diversity. Oxford University Press, 1995. pp. 66–67.
- Sauman, Ivo; Adriana D. Briscoe; Haisun Zhu; Dingding Shi; Oren Froy; Julia Stalleicken; Quan Yuan; Amy Casselman; Steven M. Reppert (May 5, 2005). "Connecting the Navigational Clock to Sun Compass Input in Monarch Butterfly Brain". Neuron 46 (3): 457–467.
- Southwood, T. R. E. (1962). "Migration of terrestrial arthropods in relation to habitat".
- Dennis, Roger L. H.; Tim G. Shreeve; Henry R. Arnold; David B. Roy (September 2005). "Does diet breadth control herbivorous insect distribution size? Life history and resource outlets for specialist butterflies". Journal of Insect Conservation (Springer Netherlands) 9 (3): 187–200.
- Made, J. G. van der; Josef Blab; Rudi Holzberger; H. van den Bijtel (1989). Actie voor Vlinders, zo kunnen we ze redden. (in Dutch). Weert: M & P cop. p. 192.
- Baker, R. Robin (February 1987). "Integrated use of moon and magnetic compasses by the heart-and-dart moth, Agrotis exclamationis". Animal Behaviour 35 (1): 94–101.
- Breen, Amanda (May 7, 2008). "Scientists make compass discovery in migrating moths". University of Greenwich at Medway. p. 1. Retrieved December 9, 2009.
- Chapman, Jason W.; Don R. Reynolds; Henrik Mouritsen; Jane K. Hill; Joe R. Riley; Duncan Sivell; Alan D. Smith; Ian P. Woiwod (8 April 2008). "Wind selection and drift compensation optimize migratory pathways in a high-flying moth". Current Biology 18 (7): 514–518.
- Srygley, Robert B.; Evandro G. Oliveira; Andre J. Riveros (2005). "Experimental evidence for a magnetic sense in Neotropical migrating butterflies (Lepidoptera: Pieridae)". The British Journal of Animal Behaviour 71 (1): 183–191.
- Elliot, Debbie; Dr. May Berenbaum (August 18, 2007). "Why are Moths Attracted to Flame? (audio)". National Public Radio. p. 1. Retrieved December 12, 2009.
- Hsiao, Henry S. (1972). Attraction of moths to light and to infrared radiation. San Francisco Press.
- Williams, C. B. (1927). "A study of butterfly migration in south India and Ceylon, based largely on records by Messrs. G. Evershed, E. E. Green, J. C. F. Fryer and W. Ormiston".
- Urquhart, F. A. & N. R. Urquhart (1977). "Overwintering areas and migratory routes of the Monarch butterfly (Danaus p. plexippus, Lepidoptera: Danaidae) in North America, with special reference to the western population".
- Wassenaar L. I. & K. A. Hobson (1998). "Natal origins of migratory monarch butterflies at wintering colonies in Mexico: new isotopic evidence".
- Smith, N. G.; Janzen, D. H. (editor) (1983). Urania fulgens (Calipato Verde, Green Urania). Costa Rican Natural History. Chicago:
- Chapman, R. F. (1998). The Insects: Structure and Function (4 ed.). New York: Cambridge University Press. p. 715.
- Meyer, John R. (2006). "Acoustic Communication". Department of Entomology, NC State University. Retrieved 25 February 2011.
- "Caterpillar and Butterfly Defense Mechanisms". EnchantedLearning.com. Retrieved December 7, 2009.
- Kricher, John (August 16, 1999). "6". A Neotropical Companion.
- Santos, J. C.; Cannatella, D. C. (2003). "Multiple, recurring origins of aposematism and diet specialization in poison frogs". Proceedings of the National Academy of Sciences 100 (22): 12792–12797. (Abstract).
- "osmeterium". Merriam-Webster, Incorporated. Retrieved December 9, 2009.
- Hadley, Debbie. "Osmeterium". About.com Guide. Retrieved December 9, 2009.
- "Mourning Cloak". Study of Northern Virginia Ecology. Fairfax County Public Schools.
- Latimer, Jonathan P.; Karen Stray Nolting (May 30, 2000). Butterflies. Houghton Mifflin Harcourt Trade & Reference Publis. p. 12.
- Insects and Spiders of the World, 10. Marshall Cavendish Corporation. Marshall Cavendish. January 2003. pp. 292–293.
- Carroll, Sean (2005). Endless forms most beautiful: the new science of evo devo and the making of the animal kingdom. W. W. Norton & Co. pp. 205–210.
- Ritland, D. B.; L. P. Brower (1991). "The viceroy butterfly is not a Batesian mimic".
- Meyer, A. (2006). "Repeating patterns of mimicry".
- Jones, G; D A Waters (2000). "Moth hearing in response to bat echolocation calls manipulated independently in time and frequency". Proceedings of the Royal Society B Biological Sciences 267 (1453): 1627–32.
- Ratcliffe, John M.; James H. Fullard, Benjamin J. Arthur and Ronald R. Hoy (2009). "Tiger moths and the threat of bats: decision-making based on the activity of a single sensory neuron". Biology Letters 5 (3): 368–371.
- Gilbert, L. E. (1972). "Pollen feeding and reproductive biology of Heliconius butterflies".
- Goulson, D., J. Ollerton and C. Sluman. (1997). "Foraging strategies in the small skipper butterfly, Thymelicus flavus: when to switch?". Animal Behavior 53 (5): 1009–1016.
- Helen J. Young and Lauren Gravitz (2002). "The effects of stigma age on receptivity in Silene alba (Caryophyllaceae)". American Journal of Botany 89 (8): 1237–1241.
- Oliveira PE, PE Gibbs, and AA Barbosa (2004). "Moth pollination of woody species in the Cerrados of Central Brazil: a case of so much owed to so few?". Plant Systematics and Evolution 245 (1–2): 41–54.
- Devries, P. J. (1988). "The larval ant-organs of Thisbe irenea (Lepidoptera: Riodinidae) and their effects upon attending ants". Zoological Journal of the Linnean Society 94 (4): 379.
- Devries, Pj (Jun 1990). "Enhancement of Symbioses Between Butterfly Caterpillars and Ants by Vibrational Communication". Science 248 (4959): 1104–1106.
- Benton, Frank (1895). The honey bee: a manual of instruction in apiculture. pp. 113–114.
- Rubinoff, Daniel; Haines, William P. (2005). "Web-spinning caterpillar stalks snails". Science 5734 (309): 575.
- Pierce, N. E. (1995). "Predatory and parasitic Lepidoptera: Carnivores living on plants". Journal of the Lepidopterist's Society 49 (4): 412–453.
- Grabe, Albert (1942). Eigenartige Geschmacksrichtungen bei Kleinschmetterlingsraupen ("Strange tastes among micromoth caterpillars"). 27 (in German). pp. 105–109.
- Scoble, Malcolm J. (September 1995). "2". The Lepidoptera: Form, Function and Diversity (1 ed.). Oxford University: Oxford University Press. pp. 4–5.
- Rust, Jest (2000). "Palaeontology: Fossil record of mass moth migration". Nature 405 (6786): 530–531.
- Kaila, Lauri; Marko Mutanen; Tommi Nyman (27 August 2011). "Phylogeny of the mega-diverse Gelechioidea (Lepidoptera): Adaptations and determinants of success". Molecular Phylogenetics and Evolution 61 (3): 801–809.
- N. P. Kristensen (1999). "The non-Glossatan moths". In N. P. Kristensen. Lepidoptera, Moths and Butterflies Volume 1: Evolution, Systematics, and Biogeography. Handbook of Zoology. A Natural History of the phyla of the Animal Kingdom. Volume IV Arthropoda: Insecta Part 35.
- "Species Agathiphaga queenslandensis Dumbleton, 1952".
- "Lepindex vitiensis". The Global Lepidoptera Names Index. The Natural History Museum, London. December 23, 2003. Retrieved March 6, 2011.
- "Heterobathmiina". The Global Lepidoptera Names Index. The Natural History Museum, London. December 23, 2003. Retrieved March 6, 2011.
- Larsen, Torben B. (1994). "Butterflies of Egypt". Saudi Aramco world (Saudi Aramco world) 45 (5): 24–27. Retrieved December 18, 2009.
- "Table complete with real butterflies embedded in resin". Mfjoe.com. December 18, 2009. Archived from the original on May 6, 2010. Retrieved April 28, 2012.
- Rabuzzi, Matthew (November 1997). "Butterfly Etymology Cultural Entomology Digest 4". Cupertino, California: Bugbios. p. 4. Retrieved December 18, 2009.
- Miller, Mary (1993). The Gods and Symbols of Ancient Mexico and the Maya. Thames & Hudson.
- Cook, Kelly A.; Weinzier, R. (2004). "IPM: Field Crops: Corn Earworm (Heliothis Zea)". IPM. p. 1. Retrieved January 17, 2009.
- Jeff Hahn (June 15, 2003). "Friendly Flies: Good News, Bad News". Yard & Garden Line News.
- R. Weinzierl, T. Henn, P. G. Koehler and C. L. Tucker (June 2005). "Insect Attractants and Traps". Alternatives in Insect Management. Entomology and Nematology Department, University of Florida (Office of Agricultural Entomology, University of Illinois at Urbana-Champaign).
- Goldsmith M. R., T. Shimada & H. Abe (2005). "The genetics and genomics of the silkworm, Bombyx mori".
- Yoshitake, N. (1968). "Phylogenetic aspects on the origin of Japanese race of the silkworm, Bombyx mori". Journal of Sericological Sciences of Japan 37: 83–87.
- Coombs, E. M. (2004). Biological Control of Invasive Plants in the United States. Corvallis: Oregon State University Press. p. 146.
- Butterfly Farms | Rainforest Conservation | Butterfly Ranching
- Martin Robinson, Ray Bartlett, Rob Whyte. Korea (2007). Lonely Planet publications, ISBN 978-1-74104-558-1. (pg 63)
- Acuña, Ana María; Caso, Laura; Aliphat, Mario M.; Vergara, Carlos H. (2011). "Edible insects as part of the traditional food system of the Popoloca town of Los Reyes Metzontla, Mexico".
- Zagrobelny, Mika; Dreon, Angelo Leandro; Gomiero, Tiziano; Marcazzan, Gian Luigi; Glaring, Mikkel Andreas; Møller, Birger Lindberg; Paoletti, Maurizio G. (2009). "Toxic moths: source of a truly safe delicacy".
- Diaz, HJ (2005). "The evolving global epidemiology, syndromic classification, management, and prevention of caterpillar envenoming".
- Redd, J.; Voorhees, R.; Török, T. (2007). "Outbreak of lepidopterism at a Boy Scout camp". Journal of the American Academy of Dermatology 56 (6): 952–955.
- Kowacs, PA; Cardoso, J; Entres, M; Novak, EM; Werneck, LC (December 2006). "Fatal intracerebral hemorrhage secondary to Lonomia obliqua caterpillar envenoming: case report" (Free full text). Arquivos de neuro-psiquiatria 64 (4): 1030–2.
- Patel RJ, Shanbhag RM (1973). "Ophthalmia nodosa – (a case report)". Indian J Ophthalmol 21 (4): 208.
- Corrine R Balit, Helen C Ptolemy, Merilyn J Geary, Richard C Russell and Geoffrey K Isbister (2001). "Outbreak of caterpillar dermatitis caused by airborne hairs of the mistletoe browntail moth (Euproctis edwardsi)" (Free full text). The Medical journal of Australia 175 (11–12): 641–3.
- List of butterflies of Australia
- List of butterflies of Great Britain
- List of butterflies of India
- List of butterflies of Minorca
- List of butterflies of North America
- List of butterflies of Taiwan
- List of butterflies of Tobago
- List of moths
- Comparison of butterflies and moths
- Lepidoptera in the 10th edition of Systema Naturae
- McGuire Center for Lepidoptera and Biodiversity, University of Florida
- Societas Europaea Lepidopterologica
These hairs have also been known to cause kerato-conjunctivitis. The sharp barbs on the ends of caterpillar hairs can become lodged in soft tissues and mucous membranes such as the eyes. Once they enter such tissues, they can be difficult to extract and often exacerbate the problem as they migrate across the membrane. This becomes a particular problem indoors: because of their small size, the hairs easily enter buildings through ventilation systems, are difficult to vent out again, and accumulate, increasing the risk of human contact.
Some larvae of both moths and butterflies bear hairs that are known to cause human health problems. Caterpillar hairs sometimes carry toxins, and species from approximately 12 families of moths or butterflies worldwide can inflict serious human injuries, ranging from urticarial dermatitis and atopic asthma to osteochondritis, consumption coagulopathy, renal failure, and intracerebral hemorrhage. Skin rashes are the most common, but there have been fatalities. Lonomia is a frequent cause of envenomation in humans in Brazil, with 354 cases reported between 1989 and 2005; lethality ranges up to 20%, with death caused most often by intracranial hemorrhage.
Lepidoptera feature prominently in entomophagy as food items on almost every continent. In most cases, adults, larvae or pupae are eaten as staples by indigenous peoples; beondegi, or silkworm pupae, are eaten as a snack in Korean cuisine, while the maguey worm is considered a delicacy in Mexico. In the Carnia region of Italy, children catch and eat the ingluvies of the toxic Zygaena moths in early summer. The ingluvies, despite having a very low cyanogenic content, serve as a convenient, supplementary source of sugar that the children can exploit as a seasonal delicacy at minimal risk.
Breeding butterflies and moths, or butterfly gardening and rearing, has become an ecologically viable way of introducing species into an ecosystem to its benefit. Butterfly ranching in Papua New Guinea permits nationals of that country to 'farm' economically valuable insect species for the collectors' market in an ecologically sustainable manner.
The preference of the larvae of most Lepidopteran species to feed on a single species or limited range of plants is used as a mechanism for biological control of weeds in place of herbicides. The pyralid cactus moth was introduced from Argentina to Australia, where it successfully suppressed millions of acres of Prickly pear cactus.:567 Another species of the Pyralidae, called the alligator weed stem borer (Arcola malloi), was used to control the aquatic plant known as alligator weed (Alternanthera philoxeroides) in conjunction with the alligator weed flea beetle; in this case, the two insects work in synergy and the weed rarely recovers.
Even though most butterflies and moths affect the economy negatively, some species are a valuable economic resource. The most prominent example is that of the Domesticated silkworm moth (Bombyx mori), the larvae of which make their cocoons out of silk, which can be spun into cloth. Silk is and has been an important economic resource throughout history. The species Bombyx mori has been domesticated to the point where it is completely dependent on mankind for survival. A number of wild moths such as Bombyx mandarina, and Antheraea species, besides others, provide commercially important silks.
Species of moths that are detritivores naturally eat detritus containing keratin, such as hairs or feathers. Well-known species are the clothes moths (T. bisselliella, T. pellionella, and T. tapetzella), which feed on materials that people find economically important, such as cotton, linen, silk and wool fabrics as well as furs; they have also been found on shed feathers and hair, bran, semolina and flour (possibly preferring wheat flour), biscuits, casein, and insect specimens in museums.
Ecological ways of removing pest lepidopteran species are becoming more economically viable, as research has demonstrated methods such as introducing parasitic wasps and flies. For example, Sarcophaga aldrichi is a fly that deposits larvae which feed upon the pupae of the forest tent caterpillar moth. Pesticides can affect species other than those they are targeted to eliminate, damaging the natural ecosystem. Another useful pest control method is the use of pheromone traps. A pheromone trap is a type of insect trap that uses pheromones to lure insects; sex pheromones and aggregating pheromones are the most common types used. A pheromone-impregnated lure is encased in a conventional trap such as a Delta trap, water-pan trap, or funnel trap.
In terms of the number of species, butterflies and moths are one of the largest taxa to feed solely on, and be dependent on, living plants, and in many ecosystems they make up the largest biomass to do so. In many species the female may produce anywhere from 200 to 600 eggs, while in some others the number may be as high as 30,000 eggs in one day. This can create severe problems for agriculture, where many caterpillars can affect acres of vegetation. Some reports estimate that over 80,000 caterpillars of several different taxa have fed on a single oak tree. In some cases, phytophagous larvae can lead to the destruction of entire trees in relatively short periods of time.:567
The larvae of many lepidopteran species are major pests in agriculture. Some of the major pest families include the Tortricidae, Noctuidae, and Pyralidae. The larvae of the noctuid genera Spodoptera (armyworms) and Helicoverpa (corn earworm), and of Pieris brassicae, can cause extensive damage to certain crops. Helicoverpa zea larvae (cotton bollworms or tomato fruitworms) are polyphagous, meaning they eat a variety of crops, including tomatoes and cotton.
In the ancient Mesoamerican city of Teotihuacan, the brilliantly colored image of the butterfly was carved into many temples, buildings, jewelry, and emblazoned on incense burners in particular. The butterfly was sometimes depicted with the maw of a jaguar and some species were considered to be the reincarnations of the souls of dead warriors. The close association of butterflies to fire and warfare persisted through to the Aztec civilization and evidence of similar jaguar-butterfly images has been found among the Zapotec, and Maya civilizations.
In many cultures the soul of a dead person is associated with the butterfly, as in Ancient Greece, where the word for butterfly, ψυχή (psyche), also means soul and breath. In Latin, as in Ancient Greek, the word for "butterfly", papilio, was associated with the soul of the dead. The skull-like marking on the thorax of the death's-head hawkmoth has helped these moths, particularly A. atropos, earn a negative reputation, including associations with the supernatural and evil. The moth has been featured prominently in art and film, such as Un Chien Andalou (by Buñuel and Dalí) and The Silence of the Lambs, and in the artwork for the Japanese metal band Sigh's album Hail Horror Hail. According to Kwaidan: Stories and Studies of Strange Things, by Lafcadio Hearn, a butterfly was seen in Japan as the personification of a person's soul, whether living, dying, or already dead. One Japanese superstition says that if a butterfly enters your guestroom and perches behind the bamboo screen, the person whom you most love is coming to see you. However, large numbers of butterflies are viewed as bad omens: when Taira no Masakado was secretly preparing for his famous revolt, there appeared in Kyoto so vast a swarm of butterflies that the people were frightened, thinking the apparition a portent of coming evil.
Artistic depictions of butterflies have been used in many cultures including as early as 3500 years ago, in Egyptian hieroglyphs. Today, butterflies are widely used in various objects of art and jewelry: mounted in frames, embedded in resin, displayed in bottles, laminated in paper, and in some mixed media artworks and furnishings. Butterflies have also inspired the "butterfly fairy" as an art and fictional character.
Relationship to people
- Heterobathmiina was first described by Kristensen and Nielsen in 1979. There are about 10 species of day-flying, metallic moths, confined to southern South America; the adults eat the pollen of Nothofagus, the southern beech, and the larvae mine its leaves.:569
- Aglossata is the second most primitive lineage of Lepidoptera, first described in 1952 by Lionel Jack Dumbleton. Its only family, Agathiphagidae, contains about two species in the genus Agathiphaga.:569 Agathiphaga queenslandensis is found along the north-eastern coast of Queensland, Australia, while Agathiphaga vitiensis occurs from Fiji to Vanuatu and the Solomon Islands.
- Glossata contains the majority of the species; its most obvious distinguishing features are non-functioning mandibles and the elongated maxillary galeae, or proboscis. The basal clades still retain some of the ancestral wing features, such as similarly shaped fore- and hindwings with relatively complete venation. Glossata also contains the division Ditrysia, which holds 98% of all described species in Lepidoptera.:569
- Zeugloptera is a clade whose only superfamily, Micropterigoidea, contains the single family Micropterigidae. Its species are practically living fossils, being among the most primitive lepidopteran groups and still retaining chewing mouthparts (mandibles) as adults, unlike other clades of butterflies and moths. About 120 species are known worldwide, more than half of them in the genus Micropteryx of the Palearctic region. Only two species (genus Epimartyria) are known from North America, with many more found in Asia and the southwest Pacific, particularly New Zealand, which has about 50 species.:569
Taxonomy is the classification of species into selected taxa, the process of naming them being called nomenclature. There are over 120 families in Lepidoptera, grouped into 45 to 48 superfamilies. Historically, Lepidoptera were classified into five suborders, one of which comprised primitive moths that never lost the morphological features of their ancestors; the rest of the moths and butterflies, making up ninety-eight percent of the species, form the Ditrysia. More recently, findings of new taxa, larvae and pupae have helped detail the relationships of primitive taxa, with phylogenetic analysis showing the primitive lineages to be paraphyletic with respect to the rest of the lepidopteran lineages. As a result, lepidopterists have largely abandoned ranks such as suborders and those between order and superfamily.:569
The main lineages in the Macrolepidoptera are the Noctuoidea, Bombycoidea, Lasiocampidae, Mimallonoidea, Geometroidea and Rhopalocera. Bombycoidea plus Lasiocampidae plus Mimallonoidea may form a monophyletic group. The Rhopalocera, comprising the Papilionoidea (butterflies), Hesperioidea (skippers), and Hedyloidea (moth-butterflies), are the most recently evolved. There is quite a good fossil record for this group, which includes the oldest known skipper.
History of study
Evolution and systematics
Larvae of some species of moths feed on dead organic matter in the humus.:567 Well-known species include the clothes moths (Tineola bisselliella, T. pellionella, and T. tapetzella), which feed on detritus containing keratin, including hair, feathers, cobwebs, bird nests (particularly those of domestic pigeons, Columba livia domestica) and fruits or vegetables. These species are important to ecosystems as they remove substances that would otherwise take a long time to decompose.
A few species of Lepidoptera are secondary consumers, or predators. These species typically prey upon the eggs of other insects, aphids, scale insects, or ant larvae.:567 Some caterpillars are cannibals, and others prey on caterpillars of other species (e.g. the Hawaiian Eupithecia). The 15 or so Hawaiian Eupithecia species, which resemble typical inchworms, are the only butterflies or moths known to be ambush predators. Four species are known to eat snails; the Hawaiian caterpillar H. molluscivora, for example, uses silk traps, in a manner similar to that of spiders, to capture certain species of snails (typically Tornatellides).
Other biological interactions
In response to a parasitoid egg or larva in the caterpillar's body, the host's plasmatocytes (blood cells) can form a multilayered capsule that eventually causes the endoparasite to asphyxiate and die. The process, called encapsulation, is one of the caterpillar's only means of defense against parasitoids.:748
Conversely, moths and butterflies may themselves be subject to parasitic wasps and flies, which lay eggs on the caterpillars; the eggs hatch and the larvae feed inside the caterpillar's body, eventually killing it. In a form of parasitism called idiobiosis, the adult paralyzes the host, not killing it but keeping it alive as long as possible so that the parasitic larvae benefit the most. In another form, koinobiosis, the parasitoid lives off its host from within as an endoparasite; such parasites live inside the host caterpillar throughout its life cycle, or may affect it later as an adult. In other orders, koinobionts include flies, a majority of coleopterans, and many hymenopteran parasitoids.:748–749 Some species may be subject to a variety of parasites; the gypsy moth (Lymantria dispar), for example, is attacked by 13 species from six different taxa over the course of its life cycle.:750
There are only 41 known species of such pests; they are also found in bumblebee and wasp nests, albeit to a lesser extent. In northern Europe, the wax moth is regarded as the most serious parasitoid of the bumblebee and is found only in bumblebee nests. In some areas of southern England, as many as eighty percent of nests can be destroyed. Other parasitic larvae are known to prey upon cicadas and leafhoppers.
Mutualism is a form of biological interaction in which each individual involved benefits in some way. An example of a mutualistic relationship is that between yucca moths (Tegeculidae) and their host, yucca flowers (Liliaceae). Female yucca moths enter the host flowers, collect pollen into a ball using specialized maxillary palps, move to the apex of the pistil, where the pollen is deposited on the stigma, and lay eggs into the base of the pistil, where the seeds will develop. The larvae develop in the fruit pod and feed on a portion of the seeds; thus, both insect and plant benefit, forming a highly mutualistic relationship.:814 Another form of mutualism occurs between the larvae of some butterflies and certain species of ants (e.g. Lycaenidae). The larvae communicate with the ants using vibrations transmitted through a substrate, such as the wood of a tree or plant stems, as well as with chemical signals. The ants provide some degree of protection to the larvae and in turn gather honeydew secretions from them.
Among the more important moth pollinator groups are the hawk moths of the family Sphingidae. Their behavior is similar to that of hummingbirds: they use rapid wing beats to hover in front of flowers. Most hawk moths are nocturnal or crepuscular, so moth-pollinated flowers (e.g., Silene latifolia) tend to be white, night-opening, large and showy, with tubular corollas and a strong, sweet scent produced in the evening, night or early morning. Much nectar is produced to fuel the high metabolic rates needed to power their flight. Other moths (e.g., noctuids, geometrids, pyralids) fly slowly and settle on the flower. They do not require as much nectar as the fast-flying hawk moths, and the flowers tend to be small (though they may be aggregated in heads).
Flowers pollinated by butterflies tend to be large and flamboyant, pink or lavender in color, frequently with a landing area, and usually scented, as butterflies are typically day-flying. Since butterflies do not digest pollen (except for Heliconius species), more nectar is offered than pollen. The flowers have simple nectar guides, with the nectaries usually hidden in narrow tubes or spurs reached by the long "tongue" of the butterflies. Butterflies such as Thymelicus flavus have been observed to engage in flower constancy, meaning they are more likely to transfer pollen to other conspecific plants. This can be beneficial for the plants being pollinated, as flower constancy prevents pollen from being lost on flights between different species and keeps pollinators from clogging stigmas with pollen of other flower species.
Most species of Lepidoptera engage in some form of entomophily (more specifically psychophily and phalaenophily for butterflies and moths respectively), or the pollination of flowers. Most adult butterflies and moths feed on the nectar inside flowers, using their proboscis to reach the nectar hidden at the base of the petals. In the process, the adult brushes against the flower's stamen, on which the flower's reproductive pollen is made and stored. The pollen is transferred on appendages on the adult, who flies to the next flower to feed and unwittingly deposits the pollen on the stigma of the next flower, where the pollen germinates and fertilizes the seeds.:813–814
There is evidence that moths are able to hear the ultrasonic range emitted by bats, their main predators, which in effect causes flying moths to make evasive maneuvers. Ultrasonic frequencies trigger a reflex action in the noctuid moth that causes it to drop a few inches in its flight to evade attack. Tiger moths emit defensive clicks within the same range as the bats, which interfere with the bats' echolocation and foil their attempts to locate the moth.
Batesian and Müllerian mimicry complexes are commonly found in Lepidoptera. Genetic polymorphism and natural selection give rise to otherwise edible species (the mimics) gaining a survival advantage by resembling inedible species (the models). Such a mimicry complex is referred to as Batesian and is best known from the example of the limenitidine viceroy butterfly and the inedible danaine monarch. Later research has shown that the viceroy is, in fact, more toxic than the monarch, so the resemblance should be considered a case of Müllerian mimicry. In Müllerian mimicry, inedible species, usually within a taxonomic order, find it advantageous to resemble each other so as to reduce the sampling rate by predators that need to learn about the insects' inedibility. Taxa from the toxic genus Heliconius form one of the best-known Müllerian complexes. The adults of the various species now resemble each other so well that the species cannot be distinguished without close morphological observation and, in some cases, dissection or genetic analysis.
Camouflage is also an important defense strategy, involving the use of coloration or shape to blend into the surrounding environment. Some lepidopteran species blend in with their surroundings, making them difficult for predators to spot. Caterpillars can exhibit shades of green that match their host plant, while others look like inedible objects, such as twigs or leaves; the larvae of some species, such as the common Mormon (Papilio polytes) and the western tiger swallowtail, look like bird droppings. The mourning cloak, for instance, fades into the backdrop of trees when it folds its wings back. Adult Sesiidae species (also known as clearwing moths) have a general appearance sufficiently similar to a wasp or hornet to make it likely that the moths gain a reduction in predation by Batesian mimicry. Eyespots are a type of automimicry used by some butterflies and moths; in butterflies, the spots are composed of concentric rings of scales in different colors. The proposed role of the eyespots is to deflect the attention of predators: their resemblance to eyes provokes the predator's instinct to attack these wing patterns.
Some species of Lepidoptera are poisonous to predators; the monarch butterfly, for example, is rendered unpalatable by toxins sequestered from its larval host plants.
An "evolutionary arms race" can be seen between predator and prey species. Lepidoptera have developed a number of strategies for defense and protection including evolution of morphological characters and changes in ecological life-style and in behavior. These include aposematism, mimicry, camouflage, development of threat patterns and displays and so on. Only a few birds, such as the nightjars, hunt nocturnal lepidoptera. Their main predators are bats. Again, an "evolutionary race" exists, which has led to numerous evolutionary adaptations of moths to escape from their main predators, such as the ability to hear ultrasonic sounds, or even to emit sounds in some cases. Lepidoptera eggs are also predated upon. Some caterpillars, such as the zebra swallowtail butterfly larvae, are cannibalistic and may eat other larvae of the same species. Lepidopteran species rely on a variety of strategies.
Lepidopteran species are soft-bodied, fragile and almost defenseless, while the immature stages move slowly or are immobile, so all stages are exposed to predation. Adult butterflies and moths are preyed upon by birds, lizards, amphibians, dragonflies and spiders. Caterpillars and pupae fall prey not only to birds but also to invertebrate predators and small mammals, as well as fungi and bacteria. Parasitoid and parasitic wasps and flies may lay eggs in the caterpillar, which eventually kill it as they hatch inside its body and eat its tissues. Insect-eating birds are probably the most important predators. Lepidoptera, especially the immature stages, are an ecologically important food for many insectivorous birds, such as the great tit in Europe.
Defense and predation
Moths and butterflies are important in the natural ecosystem. They are integral participants in the food chain: having co-evolved with flowering plants and with predators, lepidopteran species have formed a network of trophic relationships between autotrophs and heterotrophs involving their larvae, pupae and adults. Larvae and pupae are links in the diets of birds and of parasitic entomophagous insects, while the adults are included in the food webs of a much broader range of consumers (including birds, small mammals, reptiles, etc.).:567
Most moths lack bright colors, as many species use coloration as camouflage, but butterflies engage in visual communication. Female cabbage butterflies, for example, use ultraviolet light to communicate, with scales colored in this range on the dorsal wing surface. When they fly, each downstroke of the wing creates a brief flash of ultraviolet light that the males apparently recognize as the flight signature of a potential mate. These flashes from the wings may attract several males, who engage in aerial courtship displays.
Moths are known to engage in acoustic forms of communication, most often as courtship, attracting mates using sound or vibration. Like most other insects, moths pick up these sounds using tympanic membranes in the abdomen. An example is the polka-dot wasp moth (Syntomeida epilais), which produces sounds with a frequency above that normally detectable by humans (around 20 kHz). These sounds also function as tactile communication, or communication through touch, as the moths stridulate, or vibrate a substrate such as leaves and stems.
Pheromones are commonly involved in mating rituals among species, especially moths, but they are also an important aspect of other forms of communication. Usually the pheromones are produced by either the male or the female and detected by members of the opposite sex with their antennae. In many species, a gland between the eighth and ninth segment under the abdomen in the female produces the pheromones. Communication can also occur through stridulation, or producing sounds by rubbing various parts of the body together.
Moths also undertake migrations, an example being the uraniids. Urania fulgens undergoes population explosions and massive migrations that may not be surpassed by any other insect in the Neotropics. In Costa Rica and Panama, the first population movements may begin in July and early August and, depending on the year, may be very massive, continuing unabated for as long as five months.
Lepidopteran migration is typically seasonal, the insects moving to escape dry seasons or other disadvantageous conditions. Most lepidopterans that migrate are butterflies, and the distance travelled varies. Some butterflies that migrate include the mourning cloak, painted lady, American lady, red admiral, and the common buckeye.:29–30 The best-known migrations are those of the eastern population of the monarch butterfly, between Mexico and the northern United States and southern Canada, a distance of about 4,000–4,800 km (2,500–3,000 mi). Other well-known migratory species include the painted lady and several of the danaine butterflies. Spectacular and large-scale migrations associated with the monsoons are seen in peninsular India. Migrations have been studied in more recent times using wing tags and also using stable hydrogen isotopes.
Moths exhibit a tendency to circle artificial lights repeatedly. This suggests they use a technique of celestial navigation called transverse orientation: by maintaining a constant angular relationship to a bright celestial light, such as the Moon, they can fly in a straight line. Celestial objects are so far away that, even after travelling a great distance, the change in angle between the moth and the light source is negligible; further, the Moon will always be in the upper part of the visual field, or on the horizon. When a moth encounters a much closer artificial light and uses it for navigation, the angle changes noticeably after only a short distance, and the light is often below the horizon. The moth instinctively attempts to correct by turning toward the light, which at close range produces a spiral flight path that brings it ever closer to the light source. Other explanations have been suggested, such as the idea, proposed by Henry Hsiao in 1972, that moths are subject to a visual distortion called a Mach band: they fly towards the darkest part of the sky in pursuit of safety and are thus inclined to circle ambient objects in the Mach band region.
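The geometry behind transverse orientation can be made concrete with a small simulation. The sketch below (Python, purely illustrative; the function name fly and the chosen holding angle are assumptions made for this example, not taken from the literature) moves a simulated moth in small steps while keeping a fixed angle between its heading and the bearing of a light source. With a very distant light the resulting track is essentially straight, whereas the same rule applied to a nearby lamp produces an inward spiral.

```python
import math

def fly(light_x, light_y, hold_angle_deg=60.0, steps=2000, step_len=1.0):
    """Step a simulated moth forward, re-aiming each step so that its heading
    keeps a fixed offset from the bearing of the light (transverse orientation)."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    hold = math.radians(hold_angle_deg)
    for _ in range(steps):
        bearing = math.atan2(light_y - y, light_x - x)  # direction toward the light
        heading = bearing + hold                        # keep a constant angular offset
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
        if math.hypot(light_x - x, light_y - y) < step_len:
            break  # the moth has reached the light
    return path

# A very distant light (a stand-in for the Moon) yields a nearly straight track,
# while a nearby lamp draws the same rule into a tightening spiral.
far_path = fly(light_x=1e9, light_y=1e9)
near_path = fly(light_x=200.0, light_y=150.0)
print(len(far_path), "steps toward the distant light; never reached")
print(len(near_path), "steps before reaching the nearby lamp")
```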
Many studies have also shown that moths navigate. One study of the heart and dart moth suggests that many moths may use the Earth's magnetic field to navigate. Another study, of the migratory behavior of the silver Y, showed that even at high altitudes the species can correct its course with changing winds and prefers flying with favourable winds, suggesting a great sense of direction. Aphrissa statira in Panama loses its navigational capacity when exposed to a magnetic field, suggesting that it too uses the Earth's magnetic field.
Navigation is important to lepidopteran species, especially those that migrate. Butterflies, which have more migratory species, have been shown to navigate using time-compensated sun compasses. They can see polarized light and can therefore orient themselves even in cloudy conditions; polarized light in the region close to the ultraviolet spectrum is suggested to be particularly important. Most migratory butterflies are thought to be those that live in semi-arid areas where breeding seasons are short. The life histories of their host plants also influence the butterflies' strategies. Other theories include the use of landscapes: Lepidoptera may use coastlines, mountains and even roads to orient themselves, and over the sea the flight direction has been observed to be much more accurate while the coast is still visible.
Some species of butterfly can reach fast speeds, such as the Southern Dart, which can fly as fast as 48.4 km/h. Sphingids are some of the fastest flying insects; some are capable of flying at over 50 km/h (30 miles per hour) and have a wingspan of 35–150 mm. In some species, there is a gliding component to their flight. Flight occurs either as hovering or as forward or backward motion. In butterfly and moth species such as hawk moths, hovering is important because they need to maintain stability over flowers when feeding on nectar.
Lepidopteran species have to be warm, about 77 to 79 °F (25 to 26 °C), in order to fly. Because they cannot regulate their body temperature themselves, it depends on their environment. Butterflies living in cooler climates may use their wings to warm their bodies: they bask in the sun, spreading out their wings so that they get maximum exposure to the sunlight. In hotter climates butterflies can easily overheat, so they are usually active only during the cooler parts of the day, early morning, late afternoon, or early evening. During the heat of the day they rest in the shade. Some larger thick-bodied moths (e.g., Sphingidae) can generate their own heat to a limited degree by vibrating their wings. The heat generated by the flight muscles warms the thorax, while the temperature of the abdomen is unimportant for flight. To avoid overheating, some moths rely on hairy scales, internal air sacs, and other structures to separate the thorax and abdomen and keep the abdomen cooler.
Flight is an important aspect of the lives of butterflies and moths and is used for evading predators, searching for food, and finding mates in a timely manner, as lepidopteran species do not live long after eclosion. It is the main form of locomotion in most species. In Lepidoptera, the forewings and hindwings are mechanically coupled and flap in synchrony. Flight is anteromotoric, that is, driven primarily by the action of the forewings. Although it has been reported that lepidopteran species can still fly when their hindwings are cut off, doing so reduces their linear flight and turning capabilities.
While most butterflies and moths are terrestrial, a few species are aquatic or semi-aquatic in their larval stages (for example, some pyralids).:22
Most lepidopteran species do not live long after eclosion, needing only a few days to find a mate and then lay their eggs. Others may remain active for a longer period (from one to several weeks), or go through diapause, overwintering as monarch butterflies do or waiting out environmental stress. Some adult Microlepidoptera go through a stage with no reproductive activity lasting through summer and winter, followed by mating and oviposition, or egg laying, in the early spring.:564
The length of time before the pupa ecloses (emerges) varies greatly. The monarch butterfly may stay in its chrysalis for two weeks, while other species may need to stay in diapause for more than 10 months. The adult emerges from the pupa either by using abdominal hooks or by means of projections located on the head. The mandibles found in the most primitive moth families (e.g., Micropterigoidea) are used to escape from the cocoon.:564
While encased, some of the lower abdominal segments are not fused and are able to move using small muscles found between the membranes. Moving may help the pupa, for example, escape the sun, which would otherwise kill it. The pupa of the Mexican jumping bean moth (Cydia deshaisiana) does this. The larvae cut a trapdoor in the bean (a species of Sebastiania) and use the bean as a shelter. When there is a sudden rise in temperature, the pupa inside twitches and jerks, pulling on the threads inside. Wiggling may also help to deter parasitoid wasps from laying eggs on the pupa. Other species of moth are able to make clicks to deter predators.:564, 566
After about 5 to 7 instars,:26–28 or molts, certain hormones, like prothoracicotropic hormone, stimulate the production of ecdysone, which initiates insect molting. The larva then develops into the pupa within the sclerotized, or hardened, cuticle of the last larval instar. Depending on the species, the pupa may be covered in silk and attached to many different types of debris, or may not be covered at all. The pupa stays attached to the leaf by silk spun by the caterpillar before it spins the silk for the full pupa.:566 Features of the imago are externally recognizable in the pupa. All the appendages on the adult head and thorax (antennae, mouthparts, etc.) are found cased inside the cuticle, with the wings wrapped around, adjacent to the antennae.:564
Near pupation, the wings are forced outside the epidermis under pressure from the hemolymph, and although they are initially quite flexible and fragile, by the time the pupa breaks free of the larval cuticle they have adhered tightly to the outer cuticle of the pupa (in obtect pupae). Within hours, the wings form a cuticle so hard and well-joined to the body that pupae can be picked up and handled without damage to the wings.
No form of the wing is externally visible on the larva; however, when larvae are dissected, developing wings can be seen as disks on the second and third thoracic segments, in place of the spiracles that are apparent on abdominal segments. Wing disks develop in association with a trachea that runs along the base of the wing and are surrounded by a thin peripodial membrane, which is linked to the outer epidermis of the larva by a tiny duct. Wing disks are very small until the last larval instar, when they increase dramatically in size, are invaded by branching tracheae from the wing base that precede the formation of the wing veins, and begin to develop patterns associated with several landmarks of the wing.
The larvae of both butterflies and moths exhibit mimicry to deter potential predators. Some caterpillars have the ability to inflate parts of their head to appear snake-like. Many have false eye-spots to enhance this effect. Some caterpillars have special structures called osmeteria (in the family Papilionidae), which are everted to release smelly chemicals used in defense. Host plants often contain toxic substances, and caterpillars are able to sequester these substances and retain them into the adult stage. This helps make them unpalatable to birds and other predators. Such unpalatability is advertised using bright red, orange, black, or white warning colors. The toxic chemicals in plants often evolved specifically to prevent them from being eaten by insects. Insects, in turn, develop countermeasures or make use of these toxins for their own survival. This "arms race" has led to the coevolution of insects and their host plants.
The larvae develop rapidly, with several generations in a year; however, some species may take up to 3 years to develop, and exceptional examples like Gynaephora groenlandica take as long as seven years. The larval stage is where the feeding and growing occur, and the larvae periodically undergo hormone-induced ecdysis, developing further with each instar, until they undergo the final larval-pupal molt. Lepidopteran pupae, known as chrysalises in the case of butterflies, have the appendages fused or glued to the body in most species; the pupal mandibles are functional in some groups but not in others.
Different herbivore species have adapted to feed on every part of the plant and are normally considered pests of their host plant; some species have been found to lay their eggs on fruit, and other species lay their eggs on clothing or fur (e.g., Tineola bisselliella, the common clothes moth). Some species are carnivorous, and others are even parasitic. Some lycaenid species, such as Maculinea rebeli, are social parasites of Myrmica ant nests. A species of Geometridae from Hawaii has carnivorous larvae that catch and eat flies. Some pyralid caterpillars are aquatic.
The larvae, or caterpillars, are the first stage in the life cycle after hatching. Caterpillars are "characteristic polypod larvae with cylindrical bodies, short thoracic legs and abdominal prolegs (pseudopods)". They have a toughened (sclerotised) head capsule with an adfrontal suture formed by medial fusion of the sclerites, mandibles (mouthparts) for chewing, and a soft tubular, segmented body that may have hair-like or other projections, three pairs of true legs, and additional prolegs (up to five pairs). The body consists of thirteen segments, of which three are thoracic and ten are abdominal. Most larvae are herbivores, but a few are carnivores (some eat ants or other caterpillars) and detritivores.
The egg stage lasts a few weeks in most butterflies but eggs laid close to winter, especially in temperate regions, go through a diapause, and hatching may be delayed until spring. Other butterflies may lay their eggs in the spring and have them hatch in the summer. These butterflies are usually northern species (e. g. Nymphalis antiopa).
The egg is covered by a hard-ridged protective outer layer of shell, called the chorion. It is lined with a thin coating of wax, which prevents the egg from drying out before the larva has had time to fully develop. Each egg contains a number of micropyles, or tiny funnel-shaped openings at one end, the purpose of which is to allow sperm to enter and fertilize the egg. Butterfly and moth eggs vary greatly in size between species, but they are all either spherical or ovate.
Lepidoptera usually reproduce sexually and are oviparous (egg-laying), though some species exhibit live birth in a process called ovoviviparity. There are a variety of differences in egg-laying and the number of eggs laid. Some species simply drop their eggs in flight (these species normally have polyphagous larvae, meaning they eat a variety of plants, e.g., hepialids and some nymphalids), while most Lepidoptera lay their eggs near or on the host plant that the larvae feed on. The number of eggs laid may vary from only a few to several thousand. The females of both butterflies and moths select the host plant instinctively, and primarily by chemical cues.:564
Adaptations include producing one, two, or even more generations in a season, known as voltinism (univoltine, bivoltine, and multivoltine, respectively). Most Lepidoptera in temperate climates are univoltine, while in tropical climates most have two seasonal broods. Some others may take advantage of any opportunity they can get and mate continuously throughout the year. These seasonal adaptations are controlled by hormones, and the delays in reproduction are called diapause.:567 Many lepidopteran species, after mating and laying their eggs, die shortly afterwards, having lived for only a few days after eclosion. Others may remain active for several weeks and then overwinter, or enter diapause, becoming sexually active again when the weather becomes more favorable. The sperm of the male that mated most recently with the female is most likely to have fertilized the eggs, but the sperm from a prior mating may still prevail.:564
Males usually get a head start, eclosing (emerging) earlier than females and peaking in numbers before them. Both sexes are sexually mature by the time of eclosion.:564 Butterflies and moths normally do not associate with each other, except for migrating species, remaining relatively asocial. Mating begins with an adult (female or male) attracting a mate, normally using visual stimuli, especially in diurnal species like most butterflies. However, the females of most nocturnal species, including almost all moth species, use pheromones to attract males, sometimes from long distances. Some species engage in a form of acoustic courtship, attracting mates using sound or vibration, such as the polka-dot wasp moth, Syntomeida epilais.
Species of Lepidoptera undergo holometabolism or "complete metamorphosis". Their life cycle normally consists of an egg, larva, pupa, and an imago or adult. The larvae are commonly called caterpillars, and the pupae of moths that are encapsulated in silk are called cocoons while the uncovered pupae of butterflies are called chrysalides.
Reproduction and development
Sexual dimorphism is the occurrence of differences between males and females in a species. In Lepidoptera, sexual dimorphism is widespread and almost completely determined genetically. Sexual dimorphism is present in all families of the Papilionoidea and is more prominent in the Lycaenidae, Pieridae, and certain taxa of the Nymphalidae. Apart from color variation, which may range from slight to completely different color-pattern combinations, secondary sexual characteristics may also be present.:25 Different genotypes maintained by natural selection may also be expressed at the same time. Polymorphic and/or mimetic females occur in some taxa of the Papilionidae, primarily to obtain a level of protection not available to the males of their species. The most distinct case of sexual dimorphism is that of adult females of many Psychidae species, which have only vestigial wings, legs, and mouthparts, compared with the adult males, which are strong fliers with well-developed wings and feathery antennae.
Geographical polymorphism is where geographical isolation causes a divergence of a species into different morphs. A good example is the Indian White Admiral Limenitis procris, which has five forms, each geographically separated from the others by large mountain ranges.:26 An even more dramatic showcase of geographical polymorphism is the Apollo butterfly (Parnassius apollo). Because the Apollos live in small local populations that have no contact with each other, and because of their strong stenotopic nature and weak migration ability, interbreeding between populations of the species practically does not occur; as a result they form over 600 different morphs, in which the size of the spots on the wings varies greatly.
Environmental polymorphism, in which traits are not inherited, is often termed polyphenism. Polyphenism in Lepidoptera is commonly seen in the form of seasonal morphs, especially in the butterfly families Nymphalidae and Pieridae. An Old World pierid butterfly, the Common Grass Yellow (Eurema hecabe), has a darker summer adult morph, triggered by a long day exceeding 13 hours in duration, while the shorter diurnal period of 12 hours or less induces a paler morph in the post-monsoon period. Polyphenism also occurs in caterpillars, an example being the Peppered Moth, Biston betularia.
Polymorphism is the appearance of forms, or "morphs", that differ in color and number of attributes within a single species.:163 In Lepidoptera, polymorphism can be seen not only between individuals in a population, but also between the sexes as sexual dimorphism, between geographically separated populations as geographical polymorphism, and between generations flying at different seasons of the year (seasonal polymorphism or polyphenism). In some species, the polymorphism is limited to one sex, typically the female. This often includes the phenomenon of mimicry, when mimetic morphs fly alongside non-mimetic morphs in a population of a particular species. Polymorphism occurs both at the species level, with heritable variation in the overall morphological design of individuals, and in certain specific morphological or physiological traits within a species.
Gas exchange takes place in the tracheae.:69 Air is taken in through spiracles along the sides of the abdomen and thorax, supplying the tracheae with oxygen as it passes through the lepidopteran's respiratory system. Three different tracheal branches supply and diffuse oxygen throughout the body: the dorsal, ventral, and visceral. The dorsal tracheae supply oxygen to the dorsal musculature and vessels, while the ventral tracheae supply the ventral musculature and nerve cord, and the visceral tracheae supply the guts, fat bodies, and gonads.:71, 72
In the digestive system, the anterior region of the foregut has been modified to form a pharyngeal sucking pump, needed because the food is for the most part liquid. An esophagus follows, leading to the posterior of the pharynx, and in some species forms a crop. The midgut is short and straight, while the hindgut is longer and coiled. Ancestors of lepidopteran species had midgut ceca, although these are lost in current butterflies and moths. Instead, all the digestive enzymes, other than those for initial digestion, are immobilized at the surface of the midgut cells. In larvae, long-necked and stalked goblet cells are found in the anterior and posterior midgut regions, respectively. In insects, the goblet cells excrete positive potassium ions, which are absorbed from leaves ingested by the larvae. Most butterflies and moths display the usual digestive cycle; however, species with a different diet require adaptations to meet these new demands.:279
The genitalia are complex and vary among taxa. In females there are three types of genitalia based on the taxa concerned: monotrysian, exoporian, and ditrysian. In the monotrysian type there is a single opening on the fused segments of sterna 9 and 10, which serves for both insemination and oviposition. In the exoporian type (in Hepialoidea and Mnesarchaeoidea) there are two separate places for insemination and oviposition, both occurring on the same sterna as in the monotrysian type, i.e. 9 and 10. The ditrysian groups have an internal duct that carries sperm, with separate openings for copulation and egg-laying. In most species the genitalia are flanked by two soft lobes, although they may be specialized and sclerotized in some species for ovipositing in areas such as crevices and inside plant tissue.

Hormones, and the glands that produce them, drive the development of butterflies and moths as they go through their life cycle; together they form the endocrine system. The first insect hormone, PTTH (prothoracicotropic hormone), governs the species' life cycle and diapause (see the related section). This hormone is produced by the corpora allata and corpora cardiaca, where it is also stored. Some glands are specialized to perform certain tasks, such as producing silk or producing saliva in the palpi.:65, 75 While the corpora cardiaca produce PTTH, the corpora allata also produce juvenile hormones, and the prothoracic glands produce moulting hormones.
The lumen, or surface of the lamella, has a complex structure. It gives color either through pigments that it contains or through structural coloration, with mechanisms that include photonic crystals and diffraction gratings.
The wings, head, parts of the thorax and abdomen of Lepidoptera are covered with minute scales, a feature from which the order 'Lepidoptera' derives its name. Most scales are lamellar, or blade-like and attached with a pedicel, while other forms may be hair-like or specialized as secondary sexual characteristics.
The abdomen of the caterpillar has 4 pairs of prolegs, normally located on the third to sixth segments of the abdomen, and a separate pair of prolegs by the anus, bearing a pair of tiny hooks called crochets. These aid in gripping and walking, especially in species that lack many prolegs (e.g., larvae of Geometridae). In some basal moths, these prolegs may be on every segment of the body, while prolegs may be lost completely in other groups, which are more adapted to boring and to living in sand (e.g., Prodoxidae and Nepticulidae, respectively).:563
:561 In the females of basal moths, there is only one sex organ, which is used for both copulation and egg-laying (oviposition).
The abdomen, which is less sclerotized than the thorax, consists of 10 segments with membranes in between allowing for articulated movement. The sternum, on the first segment, is small in some families and is completely absent in others. The last 2 or 3 segments form the external parts of the species' sex organs. The genitalia of Lepidoptera are highly varied and are often the only means of differentiating between species. Male genitals include a valva, which is usually large, as it is used to grasp the female during mating. Female genitalia include three distinct sections.
The caterpillar has an elongated soft body that may have hair-like or other projections, 3 pairs of true legs, and 0–11 pairs of abdominal legs (usually 8) with hooklets, called apical crochets. The thorax usually has a pair of legs on each segment. The thorax is also lined with many spiracles on both the mesothorax and metathorax, except in a few aquatic species, which instead have a form of gill.:563
The two pairs of wings are found on the middle and third segments, or mesothorax and metathorax, respectively.
The thorax is made of three fused segments: the prothorax, mesothorax, and metathorax. In the larval form there are 3 pairs of true legs, with 0–11 pairs of abdominal legs (usually 8) and hooklets, called apical crochets.
ENG203: Literary Analysis and Composition II
This list is representative of the materials provided or used in this course. Keep in mind that the actual materials used may vary, depending on the school in which you are enrolled, and whether you are taking the course as Independent Study.
For a complete list of the materials to be used in this course by your enrolled student, please visit MyInfo. All lists are subject to change at any time.
Scope & Sequence : Scope & Sequence documents describe what is covered in a course (the scope) and also the order in which topics are covered (the sequence). These documents list instructional objectives and skills to be mastered. K12 Scope & Sequence documents for each course include:
In this course, students build on existing literature and composition skills and move on to higher levels of sophistication.
LITERATURE: Students hone their skills of literary analysis by reading short stories, poetry, drama, novels, and works of nonfiction, both classic and modern. Authors include W. B. Yeats, Sara Teasdale, Langston Hughes, Robert Frost, Edgar Allan Poe, Nathaniel Hawthorne, Kate Chopin, Amy Tan, Richard Rodriguez, and William Shakespeare. Students have a choice of novels and longer works to study, including works by Jane Austen, Charles Dickens, Elie Wiesel, and many others.
LANGUAGE SKILLS: In this course, students become more proficient writers and readers. In composition lessons, students analyze model essays from readers' and writers' perspectives, focusing on ideas and content, structure and organization, style, word choice, and tone. Students receive feedback during the writing process to help them work toward a polished final draft. In addition to writing formal essays, applications, and business letters, students write and deliver a persuasive speech. Students expand their knowledge of grammar, usage, and mechanics through sentence analysis and structure, syntax, agreement, and conventions. Students strengthen their vocabularies through thematic units focused on word roots, suffixes and prefixes, context clues, and other important vocabulary-building strategies.
Two Semesters
ENG103: Literary Analysis and Composition I, or equivalent
Students read writings from diverse traditions and genres, including poetry, drama, short stories, nonfiction, and novels. Online lessons help students develop skills of close reading. Students analyze formal features of literary works; explore theme, character, and uses of language; and learn to articulate an interpretation based on textual evidence. Many lessons provide background information to help students connect the work to the historical or biographical context. Students also practice the critical reading and analysis skills that are necessary for taking standardized assessments.
Novels (choose any two of the following):
- Sense and Sensibility by Jane Austen
- The Scarlet Pimpernel by Baroness Orczy
- Cry, the Beloved Country by Alan Paton
- Night by Elie Wiesel
- The Way to Rainy Mountain by N. Scott Momaday
- Frankenstein by Mary Shelley
- Macbeth by William Shakespeare
Prose Fiction and Nonfiction
- Works by Edgar Allan Poe, Anton Chekhov, Kate Chopin, O. Henry, Flannery O'Connor, Sherwood Anderson, Tillie Olsen, Jerome Weidman, Richard Rodriguez, Dr. Martin Luther King, Jr., Amy Tan, and others
- Works by William Shakespeare, Lord Byron, Walt Whitman, Stephen Crane, Edna St. Vincent Millay, Ezra Pound, William Carlos Williams, Langston Hughes, Robert Frost, D. H. Lawrence, Wilfred Owen, Sara Teasdale, Rita Dove, Dudley Randall, Judith Ortiz Cofer, and others
Partial List of Skills Taught:
- Analyze the relationship between a literary work and its historical period and cultural influences.
- Recognize and examine the impact of voice, persona, and the choice of narrator on a work of literature.
- Identify character traits and motivations.
- Describe and analyze characters based on speech, actions, or interactions with others.
- Analyze the relationship between character actions/interactions and plot.
- Identify elements of plot and analyze plot development.
- Identify conflict and resolution.
- Recognize literary devices, such as foreshadowing, flashbacks, suspense, irony, metaphor, simile, symbolism, and other figures of speech.
- Identify author's purpose, style, tone, and intended audience.
- Identify and understand universal themes.
- Compare and contrast characters based on their actions, traits, and motives.
- Compare and contrast themes in different works and across different genres.
- Recognize the impact of word choice, style, and figurative language on tone, mood, and theme.
- Analyze imagery, personification, irony, hyperbole, paradox, and figures of speech in poetry and fiction.
- Examine the use of sound devices to create rhythm, appeal to the senses, or establish mood in literature.
- Recognize and examine a writer's use of poetic conventions and structures, such as line, stanza, rhythm, rhyme, meter, and sound devices.
- Interpret oral readings from literary and informational texts.
- Recite poetry using effective delivery skills, such as tone, rate, volume, pitch, gesture, pronunciation, and enunciation.
Students begin the Composition units by reading model essays and analyzing the essays from the perspective of both a reader and a writer. In writing their own essays, students apply the concepts they have learned from studying the models. Using the writing process, students plan, organize, write, revise, and proofread their essays, implementing feedback they receive from teachers and mentors. In addition to writing full-length essays, students also write timed responses to prompts, similar to those found on standardized tests.
Narrative: I Believe
- Students analyze a sample narrative with the theme of "I Believe" and then write their own narrative that explains something they believe and how they arrived at that belief.
- Students analyze a sample persuasive essay, learn about the importance of using logical and emotional appeals and connotative language, and understand the significance of conceding a point and issuing a call to action. They then plan, write, and revise their persuasive essays.
- Students first read and then listen to a speech, based on the model persuasive essay. They study how an oral presentation differs from a written one. Then they use their own persuasive essays as the basis for writing and delivering their persuasive speeches.
- Students analyze a model research paper on a scientific topic and learn how to locate appropriate resources and evaluate the reliability of the sources. They take notes, create a formal outline, and write and revise their own research papers.
- In this optional unit, students read a model cover letter and application for a job. Then they create their own cover letter and application for their "dream" job.
III. GRAMMAR, USAGE, AND MECHANICS
K12's Grammar covers not only grammar but also usage and mechanics. Often referred to as "GUM," this online course helps students understand how language works so that they can apply the concepts in their own writing. In addition to GUM skills, lessons on such topics as clear sentences, sentence combining, parallel structure, placement of modifiers, wordiness, diction, and idioms help students learn skills frequently tested on standardized tests. Each lesson ends with an optional activity that provides additional practice.
Partial List of Topics:
- Prepositional Phrases
- Sentences and Sentence Errors
- Clauses: Adjective, Adverb, Noun
- Clear Sentences: Coordination, Subordination, Combining Sentences
- Subject-Verb Agreement
- Verb Forms and Usage
- Pronouns and Pronoun Usage
- Verbals and Verbal Phrases: Participles, Gerunds, Infinitives
- Refining Sentences: Modifiers, Parallel Structure
K12's Vocabulary program uses the Vocabulary Achievement workbook (from Great Source Publisher) to provide a systematic approach to new vocabulary acquisition, application, and retention. Students study logical grouping of words in clearly structured lessons. To unlock word meaning, students apply a variety of strategies including contextual clues and determining roots and affixes. Students also practice the kinds of items that are frequently used in sentence-completion and critical-reading assessments, including the SAT.
A gravitational-wave observatory (or gravitational-wave detector) is any device designed to measure gravitational waves, tiny distortions of spacetime that were first predicted by Einstein in 1916. Gravitational waves are perturbations in the theoretical curvature of spacetime caused by accelerated masses. The existence of gravitational radiation is a specific prediction of general relativity, but is a feature of all theories of gravity that obey special relativity. Since the 1960s, gravitational-wave detectors have been built and constantly improved. The present-day generation of resonant mass antennas and laser interferometers has reached the necessary sensitivity to detect gravitational waves from sources in the Milky Way. Gravitational-wave observatories are the primary tool of gravitational-wave astronomy.
A number of experiments have provided indirect evidence, notably the observation of binary pulsars, the orbits of which evolve precisely matching the predictions of energy loss through general relativistic gravitational-wave emission. The 1993 Nobel Prize in Physics was awarded for this work.
The direct detection of gravitational waves is complicated by the extraordinarily small effect the waves produce on a detector. The amplitude of a spherical wave falls off as the inverse of the distance from the source. Thus, even waves from extreme systems such as merging binary black holes die out to a very small amplitude by the time they reach the Earth. Astrophysicists predicted that some gravitational waves passing the Earth might produce differential motion on the order of 10^−18 m in a LIGO-size instrument.
Resonant mass antennas
A simple device to detect the expected wave motion is called a resonant mass antenna – a large, solid body of metal isolated from outside vibrations. This type of instrument was the first type of gravitational-wave detector. Strains in space due to an incident gravitational wave excite the body's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. However, up to 2018, no gravitational wave observation that would have been widely accepted by the research community has been made on any type of resonant mass antenna, despite certain claims of observation by researchers operating the antennas.
There are three types of resonant mass antenna that have been built: room-temperature bar antennas, cryogenically cooled bar antennas and cryogenically cooled spherical antennas.
The earliest type was the room-temperature bar-shaped antenna called a Weber bar; these were dominant in the 1960s and 1970s, and many were built around the world. Weber and some others claimed in the late 1960s and early 1970s that these devices detected gravitational waves; however, other experimenters failed to detect gravitational waves using them, and a consensus formed that Weber bars could not detect gravitational waves.
The second generation of resonant mass antennas, developed in the 1980s and 1990s, were the cryogenic bar antennas, which are also sometimes called Weber bars. In the 1990s there were five major cryogenic bar antennas: AURIGA (Padua, Italy), NAUTILUS (Rome, Italy), EXPLORER (CERN, Switzerland), ALLEGRO (Louisiana, US), and NIOBE (Perth, Australia). In 1997, these five antennas, run by four research groups, formed the International Gravitational Event Collaboration (IGEC) for collaboration. Over the years, many claims of detection of gravitational waves were made by scientists using cryogenic bar antennas, but none of these was accepted by the larger community.
In the 1980s there was also a cryogenic bar antenna called ALTAIR, which, along with a room-temperature bar antenna called GEOGRAV, was built in Italy as a prototype for later bar antennas. Operators of the GEOGRAV detector claimed to have observed gravitational waves coming from the supernova SN1987A (along with another room-temperature bar of Weber), but these claims were also dismissed by the wider community.
These modern cryogenic forms of the Weber bar operated with superconducting quantum interference devices to detect vibration (ALLEGRO, for example). Some of them are still in operation, such as AURIGA, an ultracryogenic resonant cylindrical bar gravitational wave detector based at INFN in Italy. The AURIGA and LIGO teams have collaborated in joint observations.
It is the current consensus that current cryogenic Weber bars are not sensitive enough to detect anything but extremely powerful gravitational waves. As of 2018, no detection of gravitational waves by cryogenic Weber bars has occurred.
In the 2000s, the third generation of resonant mass antennas, the spherical cryogenic antennas, emerged. Four spherical antennas were proposed around the year 2000, and two of them were built as downsized versions (the others were cancelled). The proposed antennas were GRAIL (Netherlands, downsized to become MiniGRAIL), TIGA (US, small prototypes made), SFERA (Italy), and Graviton (Brazil, downsized to become Mario Schenberg).
Currently there are two cryogenic spherical gravitational wave antennas in the world, the MiniGRAIL and the Mario Schenberg. These antennas are actually a collaborative effort, having much in common.
MiniGRAIL is based at Leiden University, and consists of an exactingly machined 1,150 kg (2,540 lb) sphere cryogenically cooled to 20 mK (−273.1300 °C; −459.6340 °F). The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers.
A more sensitive detector uses laser interferometry to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). Ground-based interferometers are now operational. Currently, the most sensitive is LIGO – the Laser Interferometer Gravitational Wave Observatory. LIGO has three detectors: one in Livingston, Louisiana; the other two (in the same vacuum tubes) at the Hanford site in Richland, Washington. Each consists of two light storage arms which are 2 to 4 kilometres (1.2 to 2.5 mi) in length. These are at 90 degree angles to each other, with the light passing through 1 m (3 ft 3 in) diameter vacuum tubes running the entire 4 kilometres (2.5 mi). A passing gravitational wave will slightly stretch one arm as it shortens the other. This is precisely the motion to which an interferometer is most sensitive.
Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10^−18 meters. LIGO should be able to detect gravitational waves producing strains on the order of 10^−22 (a 10^−18 m change over a 4 km arm). Upgrades to LIGO and other detectors such as VIRGO, GEO 600, and TAMA 300 should increase the sensitivity still further; the next generation of instruments (Advanced LIGO and Advanced Virgo) will be more than ten times more sensitive. Another highly sensitive interferometer (KAGRA) is currently in the design phase. A key point is that a ten-fold increase in sensitivity (radius of "reach") increases the volume of space accessible to the instrument by a factor of one thousand. This increases the rate at which detectable signals should be seen from one per tens of years of observation to tens per year.
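As a rough consistency check, derived only from the arm length and differential motion quoted in this article (not from any external figure), the corresponding dimensionless strain and the reach-to-volume scaling work out as:

\[
h = \frac{\Delta L}{L} \approx \frac{10^{-18}\,\mathrm{m}}{4\times10^{3}\,\mathrm{m}} \approx 2.5\times10^{-22},
\qquad
\frac{V'}{V} = \left(\frac{R'}{R}\right)^{3} = 10^{3} \ \text{for a ten-fold increase in reach.}
\]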
Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly; one analogy is to rainfall – the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals at low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these "stationary" (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other "non-stationary" noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All these must be taken into account and excluded by analysis before a detection may be considered a true gravitational-wave event.
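The rainfall analogy can be made quantitative with a small numerical sketch. This is illustrative only and not from the source; it shows the Poisson-statistics point that the relative fluctuation of a photon count with mean N falls as 1/sqrt(N), which is why raising the laser power suppresses shot noise (until radiation-pressure noise on the mirrors takes over).

```python
import numpy as np

# Illustrative sketch (assumed numbers): photon counting noise behaves like
# Poisson-distributed rainfall, so the relative fluctuation of a count with
# mean N scales as 1/sqrt(N).
rng = np.random.default_rng(0)

for mean_photons in (1e4, 1e6, 1e8):       # stand-ins for increasing laser power
    counts = rng.poisson(mean_photons, size=10_000)
    relative_noise = counts.std() / counts.mean()
    print(f"mean N = {mean_photons:.0e}: relative fluctuation ~ {relative_noise:.1e}, "
          f"expected ~ {1.0 / np.sqrt(mean_photons):.1e}")
```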
Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being five million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to shot noise, as well as artifacts caused by cosmic rays and solar wind.
Matter-wave interferometry can provide an alternative means to detect gravitational waves; early proposals appeared in the beginning of the 2000s. Atom interferometry has been proposed to extend the detection bandwidth of GW detectors into the infrasound band (10 mHz – 10 Hz), where current ground-based detectors are limited by low-frequency gravity noise. Using arrays of atomic ensembles in free fall as probes, and tracking their motion along geodesics with atom interferometry, allows the suppression of Newtonian noise, enables low-frequency sensitivity, and opens the way toward the realization of low-frequency GW detectors on Earth. A Matter wave – laser based Interferometer Gravitation Antenna (MIGA) is currently under construction in the underground environment of LSBB (Rustrel, France).
In some sense, the easiest signals to detect should be constant sources. Supernovae and neutron star or black hole mergers should have larger amplitudes and be more interesting, but the waves generated will be more complicated. The waves given off by a spinning, bumpy neutron star would be "monochromatic" – like a pure tone in acoustics. It would not change very much in amplitude or frequency.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.
High frequency detectors
There are currently two detectors focusing on detections at the higher end of the gravitational-wave spectrum (10^−7 to 10^5 Hz): one at the University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Two have been fabricated and they are currently expected to be sensitive to periodic spacetime strains of , given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of , with an expectation to reach a sensitivity of . The Chongqing University detector is planned to detect relic high-frequency gravitational waves with predicted typical parameters of ~10^10 Hz (10 GHz) and h ~ 10^−30 to 10^−31.
Pulsar timing arrays
A different approach to detecting gravitational waves is used by pulsar timing arrays, such as the European Pulsar Timing Array, the North American Nanohertz Observatory for Gravitational Waves, and the Parkes Pulsar Timing Array. These projects propose to detect gravitational waves by looking at the effect these waves have on the incoming signals from an array of 20–50 well-known millisecond pulsars. As a gravitational wave passing through the Earth contracts space in one direction and expands space in another, the times of arrival of pulsar signals from those directions are shifted correspondingly. By studying a fixed set of pulsars across the sky, these arrays should be able to detect gravitational waves in the nanohertz range. Such signals are expected to be emitted by pairs of merging supermassive black holes.
Cosmic microwave background polarization
The cosmic microwave background, radiation left over from when the Universe cooled sufficiently for the first atoms to form, can contain the imprint of gravitational waves from the very early Universe. The microwave radiation is polarized. The pattern of polarization can be split into two classes called E-modes and B-modes. This is in analogy to electrostatics where the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes can be created by a variety of processes, but the B-modes can only be produced by gravitational lensing, gravitational waves, or scattering from dust.
On 17 March 2014, astronomers at the Harvard-Smithsonian Center for Astrophysics announced the apparent detection of the imprint of gravitational waves in the cosmic microwave background, which, if confirmed, would provide strong evidence for inflation and the Big Bang. However, on 19 June 2014, lowered confidence in confirming the findings was reported, and on 19 September 2014 confidence was lowered even further. Finally, on 30 January 2015, the European Space Agency announced that the signal could be entirely attributed to dust in the Milky Way.
Operational and planned gravitational-wave detectors
- (1995) TAMA 300
- (1995) GEO 600
- (2002) LIGO
- (2003) Mario Schenberg (Gravitational Wave Detector)
- (2003) MiniGrail
- (2005) Pulsar timing array (for Parkes radio-telescope)
- (2006) CLIO
- (2007) Virgo interferometer
- (2015) Advanced LIGO
- (2016) Advanced Virgo
- (2018) KAGRA (LCGT)
- (2023) IndIGO (LIGO-India)
- (2025) TianQin
- (2027) Deci-hertz Interferometer Gravitational wave Observatory (DECIGO)
- (2034) Laser Interferometer Space Antenna (LISA Pathfinder, a development mission, was launched in December 2015)
- (2030s) Einstein Telescope
- Clark, Stuart (17 March 2014). "What are gravitational waves?". The Guardian. Retrieved 22 May 2014.
- Schutz, Bernard F. (1984). "Gravitational waves on the back of an envelope". American Journal of Physics. 52 (5): 412–419. Bibcode:1984AmJPh..52..412S. doi:10.1119/1.13627. hdl:11858/00-001M-0000-0013-747D-5.
- "Press Release: The Nobel Prize in Physics 1993". Nobel Prize. 13 October 1993. Retrieved 6 May 2014.
- Castelvecchi, Davide; Witze, Alexandra (11 February 2016). "Einstein's gravitational waves found at last". Nature News. doi:10.1038/nature.2016.19361. Retrieved 2016-02-11.
- B. P. Abbott; LIGO Scientific Collaboration; Virgo Collaboration; et al. (2016). "Observation of Gravitational Waves from a Binary Black Hole Merger". Physical Review Letters. 116 (6): 061102. arXiv:1602.03837. Bibcode:2016PhRvL.116f1102A. doi:10.1103/PhysRevLett.116.061102. PMID 26918975.
- "Gravitational waves detected 100 years after Einstein's prediction | NSF - National Science Foundation". www.nsf.gov. Retrieved 2016-02-11.
- Whitcomb, S.E., "Precision Laser Interferometry in the LIGO Project", Proceedings of the International Symposium on Modern Problems in Laser Physics, August 27-September 3, 1995, Novosibirsk, LIGO Publication P950007-01-R
- For a review of early experiments using Weber bars, see Levine, J. (April 2004). "Early Gravity-Wave Detection Experiments, 1960-1975". Physics in Perspective. 6 (1): 42–75. Bibcode:2004PhP.....6...42L. doi:10.1007/s00016-003-0179-6.
- AURIGA Collaboration; LIGO Scientific Collaboration; Baggio; Cerdonio, M; De Rosa, M; Falferi, P; Fattori, S; Fortini, P; et al. (2008). "A Joint Search for Gravitational Wave Bursts with AURIGA and LIGO". Classical and Quantum Gravity. 25 (9): 095004. arXiv:0710.0497. Bibcode:2008CQGra..25i5004B. doi:10.1088/0264-9381/25/9/095004. hdl:11858/00-001M-0000-0013-72D5-D.
- Gravitational Radiation Antenna In Leiden
- de Waard, Arlette; Gottardi, Luciano; Frossati, Giorgio (2000). "Spherical Gravitational Wave Detectors: cooling and quality factor of a small CuAl6% sphere - In: Marcel Grossmann meeting on General Relativity". Rome, Italy.
- The idea of using laser interferometry for gravitational-wave detection was first mentioned by Gerstenstein and Pustovoit 1963 Sov. Phys.–JETP 16 433. Weber mentioned it in an unpublished laboratory notebook. Rainer Weiss first described in detail a practical solution with an analysis of realistic limitations to the technique in R. Weiss (1972). "Electromagnetically Coupled Broadband Gravitational Antenna". Quarterly Progress Report, Research Laboratory of Electronics, MIT 105: 54.
- Chiao, R.Y. (2004). "Towards MIGO, the matter-wave interferometric gravitational-wave observatory, and the intersection of quantum mechanics with general relativity". J. Mod. Opt. 51: 861–99. arXiv:gr-qc/0312096. doi:10.1080/09500340408233603.
- Bender, Peter L. (2011). "Comment on "Atomic gravitational wave interferometric sensor"". Physical Review D. 84 (2): 028101. Bibcode:2011PhRvD..84b8101B. doi:10.1103/PhysRevD.84.028101.
- Johnson, David Marvin Slaughter (2011). "AGIS-LEO". Long Baseline Atom Interferometry. Stanford University. pp. 41–98.
- Chaibi, W. (2016). "Low frequency gravitational wave detection with ground-based atom interferometer arrays". Phys. Rev. D. 93: 021101(R). doi:10.1103/PhysRevD.93.021101.
- Canuel, B. (2018). "Exploring gravity with the MIGA large scale atom interferometer". Scientific Reports. 8: 14064. doi:10.1038/s41598-018-32165-z.
- "Einstein@Home". Retrieved 5 April 2019.
- Janssen, G. H.; Stappers, B. W.; Kramer, M.; Purver, M.; Jessner, A.; Cognard, I.; Bassa, C.; Wang, Z.; Cumming, A.; Kaspi, V. M. (2008). "European Pulsar Timing Array". AIP Conference Proceedings (Submitted manuscript). 983: 633–635. doi:10.1063/1.2900317.
- North American Nanohertz Observatory for Gravitational Waves (NANOGrav) homepage
- Parkes Pulsar Timing Array homepage
- Hobbs, G. B.; Bailes, M.; Bhat, N. D. R.; Burke-Spolaor, S.; Champion, D. J.; Coles, W.; Hotan, A.; Jenet, F.; et al. (2008). "Gravitational wave detection using pulsars: status of the Parkes Pulsar Timing Array project". Publications of the Astronomical Society of Australia. 26 (2): 103–109. arXiv:0812.2721. Bibcode:2009PASA...26..103H. doi:10.1071/AS08023.
- Staff (17 March 2014). "BICEP2 2014 Results Release". National Science Foundation. Retrieved 18 March 2014.
- Clavin, Whitney (17 March 2014). "NASA Technology Views Birth of the Universe". NASA. Retrieved 17 March 2014.
- Overbye, Dennis (17 March 2014). "Detection of Waves in Space Buttresses Landmark Theory of Big Bang". The New York Times. Retrieved 17 March 2014.
- Overbye, Dennis (24 March 2014). "Ripples From the Big Bang". The New York Times. Retrieved 24 March 2014.
- Overbye, Dennis (19 June 2014). "Astronomers Hedge on Big Bang Detection Claim". The New York Times. Retrieved 20 June 2014.
- Amos, Jonathan (19 June 2014). "Cosmic inflation: Confidence lowered for Big Bang signal". BBC News. Retrieved 20 June 2014.
- Ade, P.A.R.; et al. (BICEP2 Collaboration) (19 June 2014). "Detection of B-Mode Polarization at Degree Angular Scales by BICEP2". Physical Review Letters. 112 (24): 241101. arXiv:1403.3985. Bibcode:2014PhRvL.112x1101B. doi:10.1103/PhysRevLett.112.241101. PMID 24996078.
- Planck Collaboration Team (19 September 2014). "Planck intermediate results. XXX. The angular power spectrum of polarized dust emission at intermediate and high Galactic latitudes". Astronomy & Astrophysics. 586: A133. arXiv:1409.5738. Bibcode:2016A&A...586A.133P. doi:10.1051/0004-6361/201425034.
- Overbye, Dennis (22 September 2014). "Study Confirms Criticism of Big Bang Finding". The New York Times. Retrieved 22 September 2014.
- Cowen, Ron (2015-01-30). "Gravitational waves discovery now officially dead". Nature. doi:10.1038/nature.2015.16830.
- Moore, Christopher; Cole, Robert; Berry, Christopher (19 July 2013). "Gravitational Wave Detectors and Sources". Archived from the original on 16 April 2014. Retrieved 17 April 2014.
- Bhattacharya, Papiya (2016-03-25). "India's LIGO Detector Has the Money it Needs, a Site in Sight, and a Completion Date Too". The Wire. Retrieved 2016-06-16.
- Video (04:36) – Detecting a gravitational wave, Dennis Overbye, NYT (11 February 2016).
- Video (71:29) – Press Conference announcing discovery: "LIGO detects gravitational waves", National Science Foundation (11 February 2016). |
Each generation of landing technology addresses the challenges posed by the previous generation.
People have been fascinated with the idea of exploring Mars since the very beginning of the space age. Largely because of the belief that some form of life may have existed there at one time, surface exploration has been the ultimate ambition of this exploration. Unfortunately, engineers and scientists discovered early on that landing a spacecraft on the surface of Mars would be one of the most difficult and treacherous challenges of robotic space exploration.
Upon arrival at Mars, a spacecraft is traveling at velocities of 4 to 7 kilometers per second (km/s). For a lander to deliver its payload to the surface, 100 percent of this kinetic energy must be safely removed. Fortunately, Mars has an atmosphere substantial enough for the combination of a high-drag heat shield and a parachute to remove 99 percent and 0.98 percent respectively of the kinetic energy. Unfortunately, the Martian atmosphere is not substantial enough to bring a lander to a safe touchdown. This means that an additional landing system is necessary to remove the remaining kinetic energy.
On previous successful missions, the landing system consisted of two major elements: a propulsion subsystem to remove an additional 0.02 percent (~50 to 100 meters per second [m/s]) of the original kinetic energy, and a dedicated touchdown system. The first-generation Mars landers used legs to accomplish touchdown. The second generation of touchdown systems used air bags to mitigate the last few meters per second of residual velocity. The National Aeronautics and Space Administration (NASA) is currently developing a third-generation landing system in an effort to reduce cost, mass, and risk while simultaneously improving performance as measured by payload fraction delivered to the surface and the roughness of accessible terrain.
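A back-of-the-envelope check, not taken from the source, ties these percentages to the quoted velocities: because kinetic energy scales as the square of speed, the speed left after removing a given fraction of the energy is the entry speed times the square root of the fraction remaining. The entry speeds below are the 4 and 7 km/s figures given above; everything else is assumed for illustration.

```python
import math

def residual_speed(v_entry, ke_fraction_remaining):
    """Speed remaining after a given fraction of the kinetic energy is removed
    (illustrative helper; kinetic energy is proportional to v squared)."""
    return v_entry * math.sqrt(ke_fraction_remaining)

for v_entry in (4000.0, 7000.0):  # m/s, the arrival speeds quoted in the text
    after_heat_shield = residual_speed(v_entry, 1.0 - 0.99)            # heat shield removes ~99%
    after_parachute   = residual_speed(v_entry, 1.0 - 0.99 - 0.0098)   # parachute removes ~0.98%
    print(f"entry {v_entry:.0f} m/s -> {after_heat_shield:.0f} m/s after heat shield, "
          f"{after_parachute:.0f} m/s after parachute")

# Prints roughly 57 to 99 m/s after the parachute, consistent with the
# ~50 to 100 m/s that the propulsion subsystem must then remove.
```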
Legged Landing Systems

The legs of the 1976 Viking mission lander represent the first-generation landing system technology (Pohlen et al., 1977). Basic landing-leg technology was developed for the lunar Surveyor and Apollo programs in the early 1960s. In conjunction with a variable-thrust liquid propulsion system and a closed-loop guidance and control system, legs represented an elegant solution to the touchdown problem. They are simple, reliable mechanisms that can be added to an integrated structure that houses the scientific and engineering subsystems for a typical surface mission.
The first challenge for a legged system is to enable the lander to touch down safely in regions with rocks. For this the legs must either be long enough to raise the belly of the lander above the rocks, or the belly of the lander must be made strong enough to withstand contact with the rocks. Neither solution is attractive. Either the lander becomes top heavy and incapable of landing on sloped terrain or a significant amount of structural reinforcement must be carried along for the remote chance that the lander will directly strike a rock. The decreased stability because of the high center of mass is exacerbated if a mission carries a large rover to the surface. Because of the rover's configurational requirements, it is typically placed on top of the lander. The Soviet Lunokhod lunar landers are an excellent example of this type of configuration.
A second major challenge of the legged-landing architecture is ensuring safe engine cutoff. To prevent the guidance and control system from inadvertently destabilizing the lander during touchdown, contact sensors have been used to shut down the propulsion system at the moment of first contact. On sloped terrain, this causes the lander to free fall the remaining distance, which can significantly increase the total kinetic energy present at touchdown and, in turn, decrease landing stability and increase mission risk. Implementation and testing of fault protection for engine cutoff logic has been, and continues to be, a difficult problem.
The first in-flight problem associated with engine shut off occurred on the lunar Surveyor lander mission when the propulsion system failed to shut off at touchdown, resulting in a significant amount of postimpact hopping. Fortunately, the terrain was benign, and the problem was not catastrophic. The second in-flight problem occurred on the Mars 98 lander mission when the engines were inadvertently shut off prematurely because of a spurious contact signal generated by the landing gear during its initial deployment. This problem resulted in a catastrophic loss of the vehicle. As a result, the Apollo missions all reverted to a man in the loop to perform engine shut off.
A third major challenge with a legged landing system for missions with rovers is rover egress. Once the lander has come to rest on the surface, the rover must be brought to the surface. For legged landers, a ramped egress system is the most logical configuration. Because rovers are bidirectional, the most viable arrangement has been considered to be two ramps, one at the front and one at the rear of the lander. The Soviet Lunokhod missions landed in relatively benign terrain, and in all cases both ramps were able to provide safe paths for the rover. In the Mars Pathfinder mission, one of the two ramps was not able to provide a safe egress path for the Sojourner rover, but the second ramp did provide safe egress. For vehicles designed to explore a larger fraction of the Martian surface and, therefore, land in more diverse terrain, combinations of slopes and rocks could conceivably obstruct or render useless the two primary egress paths.
Air-Bag Landing Systems

The second-generation landing system was developed for the Mars Pathfinder mission and subsequently improved upon for the Mars Exploration Rover (MER) missions. These second-generation systems use a combination of fixed-thrust solid rocket motors and air bags to perform the touchdown task. The solid rocket motors, which are ignited two to three seconds prior to impacting the surface, slow the lander from an initial velocity of approximately 120 meters per second to a stop 10 meters above the surface. The lander is then cut away from the over-slung rockets and free falls for the remaining distance.
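The energy of that final free fall can be estimated with a one-line calculation. This is an illustrative sketch, not from the source; the Mars surface gravity value is an assumption introduced here.

```python
import math

g_mars = 3.71        # m/s^2, assumed Mars surface gravity (not stated in the text)
drop_height = 10.0   # m, the cut-away altitude described above

# Vertical speed gained in free fall from rest: v = sqrt(2 * g * h)
impact_speed = math.sqrt(2.0 * g_mars * drop_height)
print(f"vertical impact speed after a {drop_height:.0f} m free fall: {impact_speed:.1f} m/s")
# ~8.6 m/s, which (together with any residual horizontal velocity) the air bags
# absorb by bouncing over the terrain.
```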
The air-bag system, which was developed to reduce cost and increase landing robustness, is designed to provide omnidirectional protection of the payload by bouncing over rocks and other surface hazards. Because the system can also right itself from any orientation, the challenge of stability during landing has been completely eliminated. Because the lander comes to rest prior to righting itself, the challenge of rock strikes has been reduced to strikes associated with the righting maneuver, which are significantly more benign. The challenge of thrust termination, in this case cutting the lander away from the rockets, remains but has been decoupled from the problem of landing stability. The problems of rover egress were addressed systematically on the MER missions; a triple ramp-like system provided egress paths in any direction, 360 degrees around the lander.
Although the air-bag landing system has addressed some of the challenges and limitations of legged landers, it has also introduced some challenges of its own. Horizontal velocity control using solid rockets and air-bag testing were significant challenges for both the Mars Pathfinder and MER missions.
The Sky-Crane Landing System
As Mars surface explorations mature, roving is becoming more important in the proposed mission architectures. The MER missions demonstrated the value of a fully functional rover not reliant on the lander to complete its surface mission. In the 2009 Mars Science Laboratory (MSL) and other future missions, the rover's capabilities and longevity will be extended. Future missions are also being designed to access larger areas of the planet and, therefore, will require more robust landing systems that are tolerant to slope and rock combinations that were previously considered too hazardous to land or drive on. The third-generation landing system, the sky-crane landing system (SLS), currently being developed for the MSL mission, will directly address all of the major challenges presented by the first- and second-generation landing systems. It will also eliminate the problem of rover egress.
SLS eliminates the dedicated touchdown system and lands the fully deployed rover directly on the surface of Mars, wheels first. This is possible because the rover is no longer placed on top of the lander. In the SLS, the propulsion module is above the rover, so the rover can be lowered on a bridle, similar to the way a cargo helicopter delivers underslung payloads.
The landing sequence for future missions will be similar to the Viking mission, except for the last several seconds when the sky-crane maneuver is performed. After separating from the parachute, the SLS follows a Viking-lander-like propulsive descent profile in a one-body mode from 1,000 meters above the surface down to approximately 35 meters above the surface. During this time, a throttleable liquid-propulsion system coupled with an active guidance and control system controls the velocity and position of the vehicle. At 35 meters, the sky-crane landing maneuver is initiated, and the rover is separated from the propulsion module. The rover is lowered several meters as the entire system continues to descend. The two-body system then descends the final few meters to set the rover onto the surface and cut it away from the propulsion module. The propulsion module then performs an autonomous fly-away maneuver and lands 500 to 1,000 meters away.
The central feature of the SLS architecture is that the propulsion hardware and terrain sensors are placed high above the rover during touchdown. As a result, their operation is uninterrupted during the entire landing sequence. One important result of this feature is that the velocity control of the whole system is improved, and, therefore, the rover touches down at lower velocity. Thus, there is no last-meter free fall associated with engine cutoff, and, because dust kick-up is minimal, the radar antennas can continue to operate even while the rover is being set down on the surface.
The lower impact velocity has two effects. First, the touchdown velocities can now be reliably brought down to the levels the rover has already been designed for so it can traverse the Martian surface. Second, the low velocity, coupled with the presence of bridles until the rover's full weight has been transferred to the surface, results in much more stability during landing.
Because the rover does not have to be protected from the impact energy at landing and because there is no need to augment stability at landing, there is no longer a need for a dedicated touchdown system. This, in turn, eliminates the need for a dedicated egress system. The SLS takes advantage of the fact that the rover's mobility system is inherently designed to interact with rough, sloping natural terrain. Rovers are designed to have high ground clearance, high static stability, reinforced belly pans, and passive terrain adaptability/conformability. These are all features of an ideal touchdown system.
Touchdown Sensing
Touchdown sensing can be done in several ways. The simplest and most robust way is to use a logic routine that monitors the commanded up-force generated by the guidance and control computer. The landing sequence is specifically designed to provide a constant descent velocity of approximately 0.75 m/s until touchdown has been declared. Prior to surface contact, the commanded up-force is equal to the mass of the rover plus the mass of the descent stage (which are roughly equal) times the gravity of Mars. During the touchdown event, the commanded up-force fluctuates depending on the specific geometry of the terrain.
Once the weight of the rover has been fully transferred to the surface of Mars, the commanded up-force takes on a new steady-state value equal to the mass of the descent stage times the gravity of Mars, approximately one-half of its pretouchdown magnitude. The system declares touchdown after the new lower commanded up-force has lasted for at least 1.5 seconds. This approach provides an unambiguous touchdown signature without the use of dedicated sensors.
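The paragraphs above describe this logic in words; the fragment below is a minimal sketch of how such a persistence check could look. The masses, margin, cycle time, and names are illustrative assumptions, not flight values or flight software.

```python
# Minimal sketch of the touchdown-declaration logic described above.
# Assumption: the guidance loop calls check_touchdown() once per control
# cycle with the current commanded up-force; the 25% margin, masses, and
# cycle period are illustrative, not flight values.

MARS_G = 3.71              # m/s^2
ROVER_MASS = 900.0         # kg, illustrative (rover and descent stage ~equal)
DESCENT_STAGE_MASS = 900.0
PERSISTENCE_S = 1.5        # seconds the lower force must persist
CYCLE_S = 0.02             # control-cycle period, illustrative

# Pre-touchdown and post-touchdown steady-state up-forces
force_before = (ROVER_MASS + DESCENT_STAGE_MASS) * MARS_G
force_after = DESCENT_STAGE_MASS * MARS_G   # roughly half of force_before
threshold = 1.25 * force_after              # illustrative detection margin

low_force_time = 0.0

def check_touchdown(commanded_up_force: float) -> bool:
    """Return True once the commanded up-force has stayed near the
    post-touchdown level for the required persistence time."""
    global low_force_time
    if commanded_up_force < threshold:
        low_force_time += CYCLE_S
    else:
        low_force_time = 0.0        # fluctuation above threshold: reset timer
    return low_force_time >= PERSISTENCE_S
```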
The fly-away phase of the landing sequence is initiated when touchdown has been declared. During the fly-away phase, separation of the rover is accomplished by the pyrotechnic cutting of the bridle and umbilical lines connecting the rover and descent stage. The descent stage then uses its onboard computer to guide the propulsion module up and away from the rover and land it several hundred meters away.
Conclusion
As Mars explorers have learned the hard way, it's not typically the fall that kills you, it's the landing. Landing technology has matured significantly in the 40 years since NASA began exploring extraterrestrial surfaces. Each generation of landing technology has attempted to resolve the challenges posed by the previous generation. The SLS represents the latest stage in that evolution.
Acknowledgment
The research described in this paper was carried out at the Jet Propulsion Laboratory of the California Institute of Technology under a contract with the National Aeronautics and Space Administration.
References
Pohlen, J., B. Maytum, I. Ramsey, and U.J. Blanchard. 1977. The Evolution of the Viking Landing Gear. JPL Technical Memorandum 33-777. Pasadena, Calif.: Jet Propulsion Laboratory.
Solving a problem step by step
As a search algorithm, the backtracking method provides a general way to find all (or some) solutions to a problem, and it is especially suitable for constraint-satisfaction problems such as the N queens puzzle and Sudoku.
Backtracking uses the idea of trial and error: it tries to solve the problem step by step. When the partial answer built so far cannot be extended to an effective, correct solution, it cancels the previous step (or even several previous steps) and then tries the other possible choices, continuing step by step until an answer is found.
The backtracking method is usually implemented with simple recursion. After repeating the above process, one of two situations occurs:
- A possible correct answer is found
- The problem is declared unsolvable after every possible step has been tried
In the worst case, backtracking results in a computation of exponential time complexity.
The backtracking method is actually a kind of DFS (depth-first search algorithm). The difference is that the backtracking method has the ability to prune. The following two examples are used to analyze the backtracking algorithm in detail:
The N-queens problem is a further development of the Eight Queens Puzzle: how can n queens be placed on an n×n chessboard so that no queen can directly capture another? To achieve this, no two queens can be on the same horizontal, vertical, or diagonal line. The figure below shows one of the solutions to the Eight Queens Puzzle:
Here’s an analysis of the problem:
Each position on the board has two states: with a queen or without a queen. Listing all combinations without considering the constraints, we obtain a binary tree of depth N×N.
The diagram above depicts the possibilities of the top two positions on the board.
The easiest way is to exhaustively enumerate all possibilities and then filter out the valid solutions. This binary tree can be traversed by the DFS algorithm, but there are 2^(N×N) possibilities for an N×N chessboard, which is obviously unacceptable.
But we can prune using rules. The rules that can be used are as follows:
- A total of N queens need to be placed
- There can only be one queen per row
- There can only be one queen per column
- There can only be one queen per diagonal
With the above four conditions, we can prune away most of the paths.
Now, go back to the backtracking method to look at this question. The backtracking method uses the idea of trial and error to solve the problem step by step.
We can first place a queen in the first legal position; then, according to the rules, find a legal position for the second queen and place it. If no suitable position can be found, the current path is wrong, and we backtrack to the previous position and continue trying.
One of the characteristics of backtracking is that it uses arrays or other data structures to store traversal information, thereby skipping illegal paths.
This problem uses three arrays to store the queen-placement data: the occupied columns, the diagonals running from upper left to lower right, and the diagonals running from upper right to lower left.
Since there can only be one queen per row, we traverse row by row, trying to place a queen in each position of the current row, and then move on to the next row and continue.
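The author's GitHub code is not shown here, so the following is a minimal Python sketch of the approach just described; sets stand in for the three bookkeeping arrays (columns and the two diagonal directions), and all names are mine.

```python
def solve_n_queens(n):
    """Collect N-queens solutions by placing one queen per row and
    pruning with three occupancy sets: columns and both diagonals."""
    cols, diag1, diag2 = set(), set(), set()   # diag1 keyed by r - c, diag2 by r + c
    solutions, placement = [], []

    def place(row):
        if row == n:                            # every row filled: record a solution
            solutions.append(placement[:])
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                        # pruned: attacks an earlier queen
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            place(row + 1)                      # try the next row
            placement.pop()                     # backtrack: undo the placement
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0)
    return solutions

print(len(solve_n_queens(8)))  # 92 solutions for the Eight Queens Puzzle
```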
- Time complexity: O(N!). The first queen has N possible placements; the second queen must not share a column or diagonal with the first, so it has at most N-1 possibilities, and so on, giving an overall time complexity of O(N!).
- Space complexity: O(N). Arrays are needed to store the placement information.
The Sudoku problem is the familiar puzzle of filling a 9×9 grid, subject to the following rules:
- The numbers 1–9 can only appear once in each row.
- Numbers 1–9 can only appear once in each column.
- Numbers 1–9 can only appear once in each 3×3 box separated by a thick solid line.
The idea is the same as for the N-queens problem: traverse all the empty cells, try placing the digits 1–9 in each one, use the rules to determine whether a placement is legal, and finally find the solution.
Here again, three arrays are defined to hold the traversed data: each row, each column, and each 3×3 cell.
If Sn denotes the nth 3×3 box, then Sn = (row / 3) * 3 + column / 3 (using integer division).
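Under the same caveat (this is my own minimal sketch, not the author's GitHub code), a backtracking Sudoku solver using the row, column, and box bookkeeping just described might look like this:

```python
def solve_sudoku(board):
    """Fill a 9x9 board (0 = empty) in place; return True if solvable.
    rows/cols/boxes record which digits are already used; the box index
    follows the formula above: (r // 3) * 3 + c // 3."""
    rows = [set() for _ in range(9)]
    cols = [set() for _ in range(9)]
    boxes = [set() for _ in range(9)]
    empties = []
    for r in range(9):
        for c in range(9):
            d = board[r][c]
            if d == 0:
                empties.append((r, c))
            else:
                rows[r].add(d); cols[c].add(d); boxes[(r // 3) * 3 + c // 3].add(d)

    def fill(i):
        if i == len(empties):
            return True                        # all blanks filled
        r, c = empties[i]
        b = (r // 3) * 3 + c // 3
        for d in range(1, 10):
            if d in rows[r] or d in cols[c] or d in boxes[b]:
                continue                       # rule violation: prune this digit
            board[r][c] = d
            rows[r].add(d); cols[c].add(d); boxes[b].add(d)
            if fill(i + 1):
                return True
            board[r][c] = 0                    # backtrack: undo the placement
            rows[r].remove(d); cols[c].remove(d); boxes[b].remove(d)
        return False

    return fill(0)
```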
The input for this problem is a fixed 9×9 grid, so the amount of work is bounded by a constant and can be counted directly. Each row has at most nine blanks to fill with digits, and since digits cannot repeat within a row there are at most 9! ways to fill it; with nine rows in total, this gives at most (9!)^9 possibilities to try.
We have defined three arrays, each with 81 elements, for a total of 3×81=243 elements.
I have put the above code on GitHub, and there is a lot of other data structure- and algorithm-related code in there if you need it:
An experiment is an orderly procedure carried out with the goal of verifying, refuting, or establishing the validity of a hypothesis. Controlled experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. Controlled experiments vary greatly in their goal and scale, but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies.
A child may carry out basic experiments to understand the nature of gravity, while teams of scientists may take years of systematic investigation to advance the understanding of a phenomenon. Experiments can vary from personal and informal natural comparisons (e.g. tasting a range of chocolates to find a favorite), to highly controlled (e.g. tests requiring complex apparatus overseen by many scientists that hope to discover information about subatomic particles). Uses of experiments vary considerably between the natural and human sciences.
In the scientific method, an experiment is an empirical method that arbitrates between competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them.
An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works. However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment will reveal, or to confirm prior results. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis; it can only add support. Similarly, an experiment that provides a counterexample can disprove a theory or hypothesis. An experiment must also control the possible confounding factors—any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific control and/or, in randomized experiments, through random assignment.
In engineering and other physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions (e.g., whether a particular engineering process can produce a desired chemical compound). Typically, experiments in these fields will focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon.
In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. When used, however, experiments typically follow the form of the clinical trial, where experimental units (usually individual human beings) are randomly assigned to a treatment or control condition where one or more outcomes are assessed. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment. A single study will typically not involve replications of the experiment, but separate studies may be aggregated through systematic review and meta-analysis.
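As a concrete, entirely invented illustration of the average treatment effect just described, the sketch below randomly assigns simulated units to treatment or control and takes the difference in group means; none of the numbers come from a real trial.

```python
# Toy illustration of the average treatment effect (ATE): units are randomly
# assigned to treatment or control, one outcome is measured per unit, and the
# ATE estimate is the difference in group means. All values are invented.
import random

random.seed(0)
units = list(range(100))
treated = set(random.sample(units, 50))          # random assignment

def outcome(unit):
    # hypothetical measurement: the (made-up) treatment adds about 2.0
    base = random.gauss(10.0, 1.0)
    return base + (2.0 if unit in treated else 0.0)

outcomes = {u: outcome(u) for u in units}
treat_mean = sum(outcomes[u] for u in units if u in treated) / 50
control_mean = sum(outcomes[u] for u in units if u not in treated) / 50
print("estimated ATE:", treat_mean - control_mean)   # close to 2.0
```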
Of course, these differences between experimental practice in each of the branches of science have exceptions. For example, agricultural research frequently uses randomized experiments (e.g., to test the comparative effectiveness of different fertilizers). Similarly, experimental economics often involves experimental tests of theorized human behaviors without relying on random assignment of individuals to treatment and control conditions.
“The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and ... attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.”
One aspect associated with the optical research of Alhazen (c. 965 – c. 1040 CE) relates to systemic and methodological reliance on experimentation (i'tibar)(Arabic: إختبار) and controlled testing in his scientific inquiries. Moreover, his experimental directives rested on combining classical physics (ilm tabi'i) with mathematics (ta'alim; geometry in particular). This mathematical-physical approach to experimental science supported most of his propositions in Kitab al-Manazir (The Optics; De aspectibus or Perspectivae) and grounded his theories of vision, light and colour, as well as his research in catoptrics and in dioptrics (the study of the refraction of light). Bradley Steffens in his book Ibn Al-Haytham: First Scientist has argued that Alhazen's approach to testing and experimentation made an important contribution to the scientific method. According to Matthias Schramm, Alhazen:
was the first to make a systematic use of the method of varying the experimental conditions in a constant and uniform manner, in an experiment showing that the intensity of the light-spot formed by the projection of the moonlight through two small apertures onto a screen diminishes constantly as one of the apertures is gradually blocked up.
G. J. Toomer expressed some skepticism regarding Schramm's view, arguing that caution is needed to avoid reading anachronistically particular passages in Alhazen's very large body of work, and while acknowledging Alhazen's importance in developing experimental techniques, argued that he should not be considered in isolation from other Islamic and ancient thinkers.
Francis Bacon (1561–1626), an English philosopher and scientist active in the 17th century, became an early and influential supporter of experimental science. He disagreed with the method of answering scientific questions by deduction and described it as follows: "Having first determined the question according to his will, man then resorts to experience, and bending her to conformity with his placets, leads her about like a captive in a procession." Bacon wanted a method that relied on repeatable observations, or experiments. Notably, he first ordered the scientific method as we understand it today.
There remains simple experience; which, if taken as it comes, is called accident, if sought for, experiment. The true method of experience first lights the candle [hypothesis], and then by means of the candle shows the way [arranges and delimits the experiment]; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms [theories], and from established axioms again new experiments.
— Francis Bacon. Novum Organum. 1620.
In the centuries that followed, people who applied the scientific method in different areas made important advances and discoveries. For example, Galileo Galilei (1564-1642) measured time accurately and used experiments to draw conclusions about the speed of a falling body. Antoine Lavoisier (1743-1794), a French chemist, used experiment to describe new areas, such as combustion and biochemistry, and to develop the theory of conservation of mass (matter). Louis Pasteur (1822-1895) used the scientific method to disprove the prevailing theory of spontaneous generation and to develop the germ theory of disease. Because of the importance of controlling potentially confounding variables, the use of well-designed laboratory experiments is preferred when possible.
A considerable amount of progress on the design and analysis of experiments occurred in the early 20th century, with contributions from statisticians such as Ronald Fisher (1890-1962), Jerzy Neyman (1894-1981), Oscar Kempthorne (1919-2000), Gertrude Mary Cox (1900-1978), and William Gemmell Cochran (1909-1980), among others. This early work has largely been synthesized under the label of the Rubin causal model, which formalizes earlier statistical approaches to the analysis of experiments.
Types of experiment
Experiments might be categorized according to a number of dimensions, depending upon professional norms and standards in different fields of study. In some disciplines (e.g., Psychology or Political Science), a 'true experiment' is a method of social research in which there are two kinds of variables. The independent variable is manipulated by the experimenter, and the dependent variable is measured. The signifying characteristic of a true experiment is that it randomly allocates the subjects in order to neutralize the potential for experimenter bias, and ensures, over a large number of iterations of the experiment, that all confounding factors are controlled for.
A controlled experiment often compares the results obtained from experimental samples against control samples, which are practically identical to the experimental sample except for the one aspect whose effect is being tested (the independent variable). A good example would be a drug trial. The sample or group receiving the drug would be the experimental group (treatment group); and the one receiving the placebo or regular treatment would be the control one. In many laboratory experiments it is good practice to have several replicate samples for the test being performed and have both a positive control and a negative control. The results from replicate samples can often be averaged, or if one of the replicates is obviously inconsistent with the results from the other samples, it can be discarded as being the result of an experimental error (some step of the test procedure may have been mistakenly omitted for that sample). Most often, tests are done in duplicate or triplicate. A positive control is a procedure that is very similar to the actual experimental test but which is known from previous experience to give a positive result. A negative control is known to give a negative result. The positive control confirms that the basic conditions of the experiment were able to produce a positive result, even if none of the actual experimental samples produce a positive result. The negative control demonstrates the base-line result obtained when a test does not produce a measurable positive result. Most often the value of the negative control is treated as a "background" value to subtract from the test sample results. Sometimes the positive control takes the quadrant of a standard curve.
An example that is often used in teaching laboratories is a controlled protein assay. Students might be given a fluid sample containing an unknown (to the student) amount of protein. It is their job to correctly perform a controlled experiment in which they determine the concentration of protein in fluid sample (usually called the "unknown sample"). The teaching lab would be equipped with a protein standard solution with a known protein concentration. Students could make several positive control samples containing various dilutions of the protein standard. Negative control samples would contain all of the reagents for the protein assay but no protein. In this example, all samples are performed in duplicate. The assay is a colorimetric assay in which a spectrophotometer can measure the amount of protein in samples by detecting a colored complex formed by the interaction of protein molecules and molecules of an added dye. In the illustration, the results for the diluted test samples can be compared to the results of the standard curve (the blue line in the illustration) in order to determine an estimate of the amount of protein in the unknown sample.
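As a hedged illustration of the standard-curve step described above, the sketch below fits a straight line to hypothetical standard readings and inverts it to estimate the unknown sample's concentration; the numbers and the assumption of a linear colour response are invented for illustration.

```python
# Sketch of the standard-curve computation: fit a line (absorbance vs. known
# concentration) to the positive controls, then invert it to estimate the
# unknown sample's concentration. All values below are invented.
# Requires Python 3.10+ for statistics.linear_regression.
import statistics

known_conc = [0.0, 0.25, 0.5, 1.0, 2.0]        # mg/mL protein standards (hypothetical)
absorbance = [0.02, 0.14, 0.27, 0.51, 1.01]    # duplicate-averaged readings (hypothetical)

slope, intercept = statistics.linear_regression(known_conc, absorbance)

unknown_absorbance = 0.40                      # averaged reading of the unknown sample
estimated_conc = (unknown_absorbance - intercept) / slope
print(f"estimated protein concentration: {estimated_conc:.2f} mg/mL")
```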
Controlled experiments can be performed when it is difficult to exactly control all the conditions in an experiment. In this case, the experiment begins by creating two or more sample groups that are probabilistically equivalent, which means that measurements of traits should be similar among the groups and that the groups should respond in the same manner if given the same treatment. This equivalency is determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry, where there is very little variation between individuals and the group size is easily in the millions, these statistical methods are often bypassed and simply splitting a solution into equal parts is assumed to produce identical sample groups.
Once equivalent groups have been formed, the experimenter tries to treat them identically except for the one variable that he or she wishes to isolate. Human experimentation requires special safeguards against outside variables such as the placebo effect. Such experiments are generally double blind, meaning that neither the volunteer nor the researcher knows which individuals are in the control group or the experimental group until after all of the data have been collected. This ensures that any effects on the volunteer are due to the treatment itself and are not a response to the knowledge that he is being treated.
In the design of experiments, two or more "treatments" are applied to estimate the difference between the mean responses for the treatments. For example, an experiment on baking bread could estimate the difference in the responses associated with quantitative variables, such as the ratio of water to flour, and with qualitative variables, such as strains of yeast. Experimentation is the step in the scientific method that helps people decide between two or more competing explanations – or hypotheses. These hypotheses suggest reasons to explain a phenomenon, or predict the results of an action. An example might be the hypothesis that "if I release this ball, it will fall to the floor": this suggestion can then be tested by carrying out the experiment of letting go of the ball, and observing the results. Formally, a hypothesis is compared against its opposite or null hypothesis ("if I release this ball, it will not fall to the floor"). The null hypothesis is that there is no explanation or predictive power of the phenomenon through the reasoning that is being investigated. Once hypotheses are defined, an experiment can be carried out - and the results analysed - in order to confirm, refute, or define the accuracy of the hypotheses.
The term "experiment" usually implies a controlled experiment, but sometimes controlled experiments are prohibitively difficult or impossible. In this case researchers resort to natural experiments or quasi-experiments. Natural experiments rely solely on observations of the variables of the system under study, rather than manipulation of just one or a few variables as occurs in controlled experiments. To the degree possible, they attempt to collect data for the system in such a way that contribution from all variables can be determined, and where the effects of variation in certain variables remain approximately constant so that the effects of other variables can be discerned. The degree to which this is possible depends on the observed correlation between explanatory variables in the observed data. When these variables are not well correlated, natural experiments can approach the power of controlled experiments. Usually, however, there is some correlation between these variables, which reduces the reliability of natural experiments relative to what could be concluded if a controlled experiment were performed. Also, because natural experiments usually take place in uncontrolled environments, variables from undetected sources are neither measured nor held constant, and these may produce illusory correlations in variables under study.
Much research in several important science disciplines, including economics, political science, geology, paleontology, ecology, meteorology, and astronomy, relies on quasi-experiments. For example, in astronomy it is clearly impossible, when testing the hypothesis "suns are collapsed clouds of hydrogen", to start out with a giant cloud of hydrogen, and then perform the experiment of waiting a few billion years for it to form a sun. However, by observing various clouds of hydrogen in various states of collapse, and other implications of the hypothesis (for example, the presence of various spectral emissions from the light of stars), we can collect the data we require to support the hypothesis. An early example of this type of experiment was the first verification in the 17th century that light does not travel from place to place instantaneously, but instead has a measurable speed. Observations of the appearance of the moons of Jupiter were slightly delayed when Jupiter was farther from Earth, as opposed to when Jupiter was closer to Earth; and this phenomenon was used to demonstrate that the difference in the time of appearance of the moons was consistent with a measurable speed.
Field experiments are so named in order to draw a contrast with laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Often used in the social sciences, and especially in economic analyses of education and health interventions, field experiments have the advantage that outcomes are observed in a natural setting rather than in a contrived laboratory environment. For this reason, field experiments are sometimes seen as having higher external validity than laboratory experiments. However, like natural experiments, field experiments suffer from the possibility of contamination: experimental conditions can be controlled with more precision and certainty in the lab. Yet some phenomena (e.g., voter turnout in an election) cannot be easily studied in a laboratory.
Contrast with observational study
An observational study is used when it is impractical, unethical, cost-prohibitive (or otherwise inefficient) to fit a physical or social system into a laboratory setting, to completely control confounding factors, or to apply random assignment. It can also be used when confounding factors are either limited or known well enough to analyze the data in light of them (though this may be rare when social phenomena are under examination). In order for an observational science to be valid, confounding factors must be known and accounted for. In these situations, observational studies have value because they often suggest hypotheses that can be tested with randomized experiments or by collecting fresh data.
Fundamentally, however, observational studies are not experiments. By definition, observational studies lack the manipulation required for Baconian experiments. In addition, observational studies (e.g., in biological or social systems) often involve variables that are difficult to quantify or control. Observational studies are limited because they lack the statistical properties of randomized experiments. In a randomized experiment, the method of randomization specified in the experimental protocol guides the statistical analysis, which is usually specified also by the experimental protocol. Without a statistical model that reflects an objective randomization, the statistical analysis relies on a subjective model. Inferences from subjective models are unreliable in theory and practice. In fact, there are several cases where carefully conducted observational studies consistently give wrong results, that is, where the results of the observational studies are inconsistent and also differ from the results of experiments. For example, epidemiological studies of colon cancer consistently show beneficial correlations with broccoli consumption, while experiments find no benefit.
A particular problem with observational studies involving human subjects is the great difficulty attaining fair comparisons between treatments (or exposures), because such studies are prone to selection bias, and groups receiving different treatments (exposures) may differ greatly according to their covariates (age, height, weight, medications, exercise, nutritional status, ethnicity, family medical history, etc.). In contrast, randomization implies that for each covariate, the mean for each group is expected to be the same. For any randomized trial, some variation from the mean is expected, of course, but the randomization ensures that the experimental groups have mean values that are close, due to the central limit theorem and Markov's inequality. With inadequate randomization or low sample size, the systematic variation in covariates between the treatment groups (or exposure groups) makes it difficult to separate the effect of the treatment (exposure) from the effects of the other covariates, most of which have not been measured. The mathematical models used to analyze such data must consider each differing covariate (if measured), and the results will not be meaningful if a covariate is neither randomized nor included in the model.
To avoid conditions that render an experiment far less useful, physicians conducting medical trials, say for U.S. Food and Drug Administration approval, will quantify and randomize the covariates that can be identified. Researchers attempt to reduce the biases of observational studies with complicated statistical methods such as propensity score matching methods, which require large populations of subjects and extensive information on covariates. Outcomes are also quantified when possible (bone density, the amount of some cell or substance in the blood, physical strength or endurance, etc.) and not based on a subject's or a professional observer's opinion. In this way, the design of an observational study can render the results more objective and therefore, more convincing.
By placing the distribution of the independent variable(s) under the control of the researcher, an experiment - particularly when it involves human subjects - introduces potential ethical considerations, such as balancing benefit and harm, fairly distributing interventions (e.g., treatments for a disease), and informed consent. For example in psychology or health care, it is unethical to provide a substandard treatment to patients. Therefore, ethical review boards are supposed to stop clinical trials and other experiments unless a new treatment is believed to offer benefits as good as current best practice. It is also generally unethical (and often illegal) to conduct randomized experiments on the effects of substandard or harmful treatments, such as the effects of ingesting arsenic on human health. To understand the effects of such exposures, scientists sometimes use observational studies to understand the effects of those factors.
Even when experimental research does not directly involve human subjects, it may still present ethical concerns. For example, the nuclear bomb experiments conducted by the Manhattan Project implied the use of nuclear reactions to harm human beings even though the experiments did not directly involve any human subjects.
Experimental method in Law
The experimental method can be useful in solving juridical problems (R. Zippelius, Die experimentierende Methode im Recht, 1991, ISBN 3-515-05901-6).
See also
- Design of experiments
- Experimental physics
- List of experiments
- Long-term experiment
- Concept development and experimentation
- Cooperstock, Fred I. General Relativistic Dynamics: Extending Einstein's Legacy Throughout the Universe. Page 12. World Scientific. 2009. ISBN 978-981-4271-16-5
- Griffith, W. Thomas. The Physics of Everyday Phenomena: A Conceptual Introduction to Physics. Page 4. New York: McGraw-Hill Higher Education. 2001. ISBN 0-07-232837-1.
- Devine, Betsy. Fantastic realities: 49 mind journeys and a trip to Stockholm. Page 62. Wilczek, Frank. World Scientific. 2006. ISBN 978-981-256-649-2
- Griffith, W. Thomas. The Physics of Everyday Phenomena: A Conceptual Introduction to Physics. Page 3. New York: McGraw-Hill Higher Education. 2001. ISBN 0-07-232837-1.
- Holland, Paul W. 1986. "Statistics and Causal Inference." Journal of the American Statistical Association 81 (396): 945–960. http://www.jstor.org/stable/2289064.
- Druckman, James N., Donald P. Green, James H. Kuklinski, and Arthur Lupia. 2011. Cambridge Handbook of Experimental Political Science. New York: Cambridge University Press.
- Rayan, Sobhi. "Analogical Reasoning Roots in Ibn al-Haaytham's Scientific Method of Research". International Journal of Computational Bioinformatics and In Silico Modeling 3 (1): 325. ISSN 2320-0634. Retrieved 23 May 2014.
- (El-Bizri 2005a)
- (Toomer 1964, pp. 463–4)
- (Toomer 1964, p. 465)
- Bacon, Francis. Novum Organum, i, 63. Quoted in Durant, Will (1924). The Story of Philosophy. Simon and Schuster (published 2012). p. 166. ISBN 9781476702605. Retrieved 2014-07-31. "Having first determined the question according to his will, man then resorts to experience, and bending her to conformity with his placets, leads her about like a captive in a procession."
- Durant, Will. The Story of Philosophy. Page 101 Simon & Schuster Paperbacks. 1926. ISBN 978-0-671-69500-2
- Bell (2005; p.57)
- Dubos (1986; p.155)
- Dunning, Thad. Natural Experiments in the Social Sciences: A Design-Based Approach. Cambridge University Press.
- Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume I: Introduction to Experimental Design (Second ed.). Wiley. ISBN 978-0-471-72756-9.
- David A. Freedman, R. Pisani, and R. A. Purves. Statistics, 4th edition (W.W. Norton & Company, 2007) ISBN 978-0-393-92972-0
- David A. Freedman (2009) Statistical Models: Theory and Practice, Second edition, (Cambridge University Press) ISBN 978-0-521-74385-3
- Bailey, R. A (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. Pre-publication chapters are available on-line.
- Shadish, William R., Thomas D. Cook, and Donald T. Campbell. (2001) Experimental and Quasi-experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin. ISBN 0-395-61556-9 Excerpts
- Teigen, Jeremy. 2014. "Experimental Methods in Military and Veteran Studies." in Routledge Handbook of Research Methods in Military Studies edited by Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. pp. 228 – 238. New York: Routledge.
- Lessons In Electric Circuits - Volume VI - Experiments
- Description of weird experiments (with film clips)
- Science Experiments for Kids
- Science Project ideas
- Experiment in Physics from Stanford Encyclopedia of Philosophy
- Kids Science Experiments
About This Chapter
AEPA Math: Exponents & Exponential Expressions - Chapter Summary
Exponents and exponential expressions can trip up anyone, even if you're a seasoned teacher almost ready to instruct at the secondary level. Don't lose points on the AEPA Math exam due to overconfidence about the more basic subjects of math. This chapter will prepare you for any test questions asking about:
- The order to use in solving exponential expressions
- The five main exponent properties
- How to apply exponent properties to solving problems
- The difference between using zero, positive numbers and negative numbers as exponents
- Simplifying exponential expressions
- How to simplify expressions that have rational exponents
The video lessons in this chapter are brief and engaging so you can quickly and effectively check off exponents from your list of review topics. Short, multiple-choice quizzes following each lesson will help you gauge your preparedness.
1. Exponential Expressions & The Order of Operations
When performing mathematical operations, there is a specific order in which the operations should be performed. This includes when to simplify the exponents. This lesson will describe in what order exponents should be solved as you are performing mathematical operations.
2. What Are the Five Main Exponent Properties?
We'll look at the five important exponent properties and an example of each. You can think of them as the order of operations for exponents. Learn how to handle math problems with exponents here!
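For quick reference, the five rules usually meant by this phrase are listed below; the summary above does not spell them out, so this listing is an assumption about the lesson's content rather than a quotation from it.

$$a^m \cdot a^n = a^{m+n}, \qquad \frac{a^m}{a^n} = a^{m-n}, \qquad (a^m)^n = a^{mn}, \qquad (ab)^n = a^n b^n, \qquad \left(\frac{a}{b}\right)^n = \frac{a^n}{b^n}$$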
3. The Power of Zero: Simplifying Exponential Expressions
Raising a number to the power of zero does not work the same way as raising it to a nonzero exponent. This lesson will explain the rule for using zero as an exponent and will give some examples of how it works.
4. Negative Exponents: Writing Powers of Fractions and Decimals
Most numbers can be written in different ways, either as a fraction, decimal or exponent. This lesson will teach you how to write fractions and decimals using exponents and how to convert between the two.
5. Simplifying and Solving Exponential Expressions
What do we do with an exponent? In this lesson, we'll learn how to simplify and solve expressions containing exponents. We'll solve a variety of types of exponential expressions.
6. Rational Exponents
In this video, learn how to go from a rational exponent to a radical expression and back. No tricks or magic, just good math! We'll review the basics and look at a few examples.
7. Simplifying Expressions with Rational Exponents
Simplifying expressions with rational exponents is so easy. In fact, you already know how to do it! We simply use the exponent properties but with fractions as the exponent!
Other chapters within the AEPA Mathematics (NT304): Practice & Study Guide course
- AEPA Math: Properties of Real Numbers
- AEPA Math: Fractions
- AEPA Math: Decimals & Percents
- AEPA Math: Ratios & Proportions
- AEPA Math: Units of Measure & Conversions
- AEPA Math: Logic
- AEPA Math: Reasoning
- AEPA Math: Vector Operations
- AEPA Math: Matrix Operations & Determinants
- AEPA Math: Algebraic Expressions
- AEPA Math: Linear Equations
- AEPA Math: Inequalities
- AEPA Math: Absolute Value
- AEPA Math: Quadratic Equations
- AEPA Math: Polynomials
- AEPA Math: Rational Expressions
- AEPA Math: Radical Expressions
- AEPA Math: Systems of Equations
- AEPA Math: Complex Numbers
- AEPA Math: Functions
- AEPA Math: Piecewise Functions
- AEPA Math: Exponential & Logarithmic Functions
- AEPA Math: Continuity of Functions
- AEPA Math: Limits
- AEPA Math: Rate of Change
- AEPA Math: Derivative Rules
- AEPA Math: Graphing Derivatives
- AEPA Math: Applications of Derivatives
- AEPA Math: Area Under the Curve & Integrals
- AEPA Math: Integration Techniques
- AEPA Math: Applications of Integration
- AEPA Math: Foundations of Geometry
- AEPA Math: Geometric Figures
- AEPA Math: Properties of Triangles
- AEPA Math: Triangle Theorems & Proofs
- AEPA Math: Parallel Lines & Polygons
- AEPA Math: Quadrilaterals
- AEPA Math: Circles & Arc of a Circle
- AEPA Math: Conic Sections
- AEPA Math: Geometric Solids
- AEPA Math: Analytical Geometry
- AEPA Math: Using Trigonometric Functions
- AEPA Math: Trigonometric Graphs
- AEPA Math: Solving Trigonometric Equations
- AEPA Math: Trigonometric Identities
- AEPA Math: Sequences & Series
- AEPA Math: Graph Theory
- AEPA Math: Set Theory
- AEPA Math: Statistics Overview
- AEPA Math: Summarizing Data
- AEPA Math: Tables, Plots & Graphs
- AEPA Math: Probability
- AEPA Math: Discrete Probability Distributions
- AEPA Math: Continuous Probability Distributions
- AEPA Math: Sampling
- AEPA Math: Regression & Correlation
- AEPA Mathematics Flashcards
How to find the derivative of a graph
Secant Lines and the Slope of a Curve
The following applet can be used to approximate the slope of the curve y=f(x) at x=a. Simply enter the function f(x) and the values a and b. The applet automatically draws the secant line through the points (a,f(a)) and (b,f(b)). As b approaches a, the slope of the secant line approaches the slope of the line tangent to f(x) at x=a.
By selecting "h= " instead of "b= ", the applet automatically draws the secant line through the points (a,f(a)) and (a+h,f(a+h)). As h approaches 0, the slope of the secant line approaches the slope of the line tangent to f(x) at x=a. In other words,
the applet can be used to investigate the following two equivalent definitions of the derivative of f(x) at x=a: f′(a) = lim (b→a) [f(b) − f(a)] / (b − a), and f′(a) = lim (h→0) [f(a+h) − f(a)] / h.
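The same limits can be checked numerically without the applet. The sketch below uses an arbitrary example function and point of my choosing and shows the secant slope approaching the tangent slope as h shrinks.

```python
# Numerical illustration of the definitions above: the secant slope
# (f(a+h) - f(a)) / h approaches the tangent slope as h -> 0.
# The function f and the point a are arbitrary examples.
import math

def f(x):
    return math.sin(x)

a = 1.0
for h in [0.1, 0.01, 0.001, 0.0001]:
    secant_slope = (f(a + h) - f(a)) / h
    print(f"h = {h:7}: secant slope = {secant_slope:.6f}")

print(f"exact derivative cos(1) = {math.cos(a):.6f}")
```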
The values a, b and/or h can be changed by simply typing a new value, such as "1.2345", "pi/2", "sqrt(5)+cos(3)", etc. You may also change these values by using the up/down arrow keys or dragging the corresponding point left or right. To move the center of the graph, simply drag any point to a new location. To label the x-axis in radians (i.e. multiples of pi), click on the graph and press "control-r". To switch back, simply press "control-r" again.
Here is a list of functions that can be used with this applet.
Source: www.personal.psu.edu
Division by two
In mathematics, division by two or halving has also been called mediation or dimidiation. The treatment of this as a different operation from multiplication and division by other numbers goes back to the ancient Egyptians, whose multiplication algorithm used division by two as one of its fundamental steps. Some mathematicians as late as the sixteenth century continued to view halving as a separate operation, and it often continues to be treated separately in modern computer programming. Performing this operation is simple in decimal arithmetic, in the binary numeral system used in computer programming, and in other even-numbered bases.
In binary arithmetic, division by two can be performed by a bit shift operation that shifts the number one place to the right. This is a form of strength reduction optimization. For example, 1101001 in binary (the decimal number 105), shifted one place to the right, is 110100 (the decimal number 52): the lowest order bit, a 1, is removed. Similarly, division by any power of two 2k may be performed by right-shifting k positions. Because bit shifts are often much faster operations than division, replacing a division by a shift in this way can be a helpful step in program optimization. However, for the sake of software portability and readability, it is often best to write programs using the division operation and trust in the compiler to perform this replacement. An example from Common Lisp:
```lisp
(setq number #b1101001) ; #b1101001 — 105
(ash number -1)         ; #b0110100 — 105 >> 1 ⇒ 52
(ash number -4)         ; #b0000110 — 105 >> 4 ≡ 105 / 2⁴ ⇒ 6
```
The above statements, however, are not always true when dividing signed binary numbers. Shifting right by one bit divides by two, always rounding down (toward negative infinity). In some languages, however, division of signed binary numbers rounds toward zero (which, if the result is negative, means it rounds up). Java is one such language: in Java, -3 / 2 evaluates to -1, whereas -3 >> 1 evaluates to -2. So in this case, the compiler cannot optimize division by two by replacing it with a bit shift when the dividend could possibly be negative.
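To make the rounding difference concrete, the sketch below uses Python, where both // and >> round toward negative infinity, and contrasts them with truncation toward zero (the behaviour of Java's integer division); the numbers mirror the example above.

```python
# Rounding behaviour for a negative dividend: arithmetic right shift and
# floor division both round toward negative infinity, while truncating
# division (as in Java's integer /) rounds toward zero, so they disagree.
import math

n = -3
print(n >> 1)             # -2  (arithmetic shift: floor of -1.5)
print(n // 2)             # -2  (floor division)
print(math.trunc(n / 2))  # -1  (truncation toward zero, like Java's -3 / 2)
```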
Binary floating point
In binary floating-point arithmetic, division by two can be performed by decreasing the exponent by one (as long as the result is not a subnormal number). Many programming languages provide functions that can be used to divide a floating-point number by a power of two. For example, the Java programming language provides the method java.lang.Math.scalb for scaling by a power of two, and the C programming language provides the function ldexp for the same purpose.
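Python exposes the same exponent-scaling idea through math.ldexp, which mirrors C's ldexp; a small illustration with arbitrary values:

```python
# math.ldexp(x, i) returns x * 2**i, so a negative i divides by a power of two
# by adjusting the binary exponent rather than performing a general division.
import math

x = 6.75
print(math.ldexp(x, -1))   # 3.375     (x / 2)
print(math.ldexp(x, -4))   # 0.421875  (x / 2**4)
```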
Decimal
In decimal, a number N can be halved digit by digit with the following pencil-and-paper method:
- Write out N, putting a zero to its left.
- Go through the digits of N in overlapping pairs, writing down digits of the result from the following table.
|If first digit is|Even|Even|Even|Even|Even|Odd|Odd|Odd|Odd|Odd|
|And second digit is|0 or 1|2 or 3|4 or 5|6 or 7|8 or 9|0 or 1|2 or 3|4 or 5|6 or 7|8 or 9|
|Write digit|0|1|2|3|4|5|6|7|8|9|
Example: to halve N = 1738, write 01738. We will now work out the result pair by pair.
- 01: even digit followed by 1, write 0.
- 17: odd digit followed by 7, write 8.
- 73: odd digit followed by 3, write 6.
- 38: odd digit followed by 8, write 9.
From the example one can see that the leading 0 is treated as an even digit, and the result is 0869, i.e. 869.
If the last digit of N is odd, one should add 0.5 to the result.
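The whole pencil-and-paper procedure can be expressed compactly in code. The sketch below is my own rendering of the rule (result digit = second digit // 2, plus 5 if the first digit of the pair is odd); it is not taken from the cited sources.

```python
# Halve a decimal integer using the digit-pair rule described above: pad N
# with a leading zero, walk overlapping pairs of digits, emit one result
# digit per pair, and append .5 if the last digit of N is odd.
def halve_decimal(n: int) -> str:
    digits = "0" + str(n)
    result = ""
    for first, second in zip(digits, digits[1:]):
        digit = int(second) // 2 + (5 if int(first) % 2 == 1 else 0)
        result += str(digit)
    result = result.lstrip("0") or "0"   # drop the padding zero(s)
    if int(digits[-1]) % 2 == 1:
        result += ".5"                   # odd last digit contributes one half
    return result

print(halve_decimal(1738))   # "869"
print(halve_decimal(105))    # "52.5"
```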
See also
- One half
- Median, a value that splits a set of data values into two equal subsets
- Bisection, the partition of a geometric object into two equal halves
- Dimidiation, a heraldic method of joining two coats of arms by splitting their designs into halves
- Steele, Robert (1922), The Earliest arithmetics in English, Early English Text Society 118, Oxford University Press, p. 82.
- Chabert, Jean-Luc; Barbin, Évelyne (1999), A history of algorithms: from the pebble to the microchip, Springer-Verlag, p. 16, ISBN 978-3-540-63369-3.
- Jackson, Lambert Lincoln (1906), The educational significance of sixteenth century arithmetic from the point of view of the present time, Contributions to education 8, Columbia University, p. 76.
- Waters, E. G. R. (1929), "A Fifteenth Century French Algorism from Liége", Isis 12 (2): 194–236, JSTOR 224785.
- Wadleigh, Kevin R.; Crawford, Isom L. (2000), Software optimization for high-performance computing, Prentice Hall, p. 92, ISBN 978-0-13-017008-8.
- Hook, Brian (2005), Write portable code: an introduction to developing software for multiple platforms, No Starch Press, p. 133, ISBN 978-1-59327-056-8.
- "Math.scalb". Java Platform Standard Ed. 6. Retrieved 2009-10-11.
- Programming languages — C, International Standard ISO/IEC 9899:1999.
Nearly 47 years ago, the crew of Apollo 8 took an image of planet Earth from the Moon that has been called “the most influential environmental photograph ever taken.” Called Earthrise, the picture represented the first time human eyes saw their homeworld come into view around another planetary body.
Wow, this doesn’t happen very often: Earth and Mars together in one photo. To make the image even more unique, it was taken from lunar orbit by the Lunar Reconnaissance Orbiter. This two-for-one photo was acquired in a single shot on May 24, 2014, by the Narrow Angle Camera (NAC) on LRO as the spacecraft was turned to face the Earth, instead of its usual view of looking down at the Moon.
The LRO imaging team said seeing the planets together in one image makes the two worlds seem not so far apart, and that the Moon still might have a role to play in future exploration.
“The juxtaposition of Earth and Mars seen from the Moon is a poignant reminder that the Moon would make a convenient waypoint for explorers bound for the fourth planet and beyond!” said the LRO team on their website. “In the near-future, the Moon could serve as a test-bed for construction and resource utilization technologies. Longer-range plans may include the Moon as a resource depot or base of operations for interplanetary activities.”
Watch a video created from this image where it appears you are flying from the Earth to Mars:
The LROC team said this imaging sequence required a significant amount of planning, and that prior to the “conjunction” event, they took practice images of Mars to refine the timing and camera settings.
When the spacecraft captured this image, Earth was about 376,687 kilometers (234,062 miles) away from LRO and Mars was 112.5 million kilometers away. So, Mars was about 300 times farther from the Moon than the Earth.
The NAC is actually two cameras, and each NAC image is built from rows of pixels acquired one after another, and then the left and right images are stitched together to make a complete NAC pair. “If the spacecraft was not moving, the rows of pixels would image the same area over and over; it is the spacecraft motion, combined with fine-tuning of the camera exposure time, that enables the final image, such as this Earth-Mars view,” the LRO team explained.
Check out more about this image on the LRO website, which includes a zoomable, interactive version of the photo.
NASA’s planetary senior review panel harshly criticized the scientific return of the Curiosity rover in a report released yesterday (Sept. 3), saying the mission lacks focus and the team is taking actions that show they think the $2.5-billion mission is “too big to fail.”
While the review did recommend the mission receive more funding — along with the other six NASA extended planetary missions being scrutinized — members recommended making several changes to the mission. One of them would be reducing the distance that Curiosity drives in favor of doing more detailed investigations when it stops.
The role of the senior review, which is held every two years, is to help NASA decide what money should be allocated to its extended missions. This is important, because the agency (as with many other departments) has limited funds and tries to seek a balance between spending money on new missions and keeping older ones going strong.
Engineering acumen means that many missions are now operating well past their expiry dates, such as the Cassini orbiter at Saturn and the Opportunity rover on Mars. In examining the seven missions being reviewed, the panel did recommend keeping funding for all, but said that four of the seven are facing significant problems.
In the case of Curiosity, the panel called out principal investigator John Grotzinger for not showing up in person on two occasions, preferring instead to interact by phone. The review also said there is a “lack of science” in its extended mission proposal with regard to “scientific questions to be answered, testable hypotheses, and proposed measurements and assessment of uncertainties and limitations.”
Other concerns were the small number of samples over the prime and extended missions (13, a “poor science return”), and a lack of clarity on how the ChemCam and Mastcam instruments will play into the extended mission. Additionally, the panel expressed concern that NASA would cut short its observations of clays (which could help answer questions of habitability) in favor of heading to Mount Sharp, the mission’s ultimate science destination.
“In summary, the Curiosity … proposal lacked scientific focus and detail,” the panel concluded, adding in its general recommendations for the reviews that principal investigators must be present to avoid confusion while answering questions. The other missions facing concern from the panel included the Lunar Reconnaissance Orbiter, Mars Express and Mars Odyssey.
LRO: Its extended mission (the second) is supposed to look at how the moon’s surface, subsurface and exosphere changes through processes such as meteorites and interaction with space. The panel was concerned with a “lack of detail” in the proposal and in answers to follow-up questions. The panel also recommended turning off certain instruments “at the end of their useful science mission”.
Mars Express: The extended mission is focusing on the ionosphere and atmosphere as well as the planet’s surface and subsurface. Concerns were raised about matters such as why funding is needed to calibrate its high-resolution stereo camera after 11 years — especially given the instrument has been rarely cited in published journal reports lately — and how people involved in the extended mission would meet the goals. The panel also saw a “lack of communication” in the team.
Mars Odyssey: If approved, the spacecraft will move to the day/night line of Mars to look at the planet’s radiation, gamma rays, distribution of water/carbon dioxide/dust in the atmosphere, and the planet’s surface. The panel, however, said there are no “convincing arguments” as to how the new science relates to the Decadal Survey objectives for planetary science. Odyssey, which is in its 11th year, may also be nearing the end of its productive lifespan given fewer publications using its data in recent years, the panel said.
The panel also weighed in on the success of the Cassini and Opportunity missions:
Cassini received the highest rating — “Excellent” — due to its scientific merit, the only mission this time around to do so. The panel was particularly excited about seasonal changes that will be seen on Titan in the coming years, as well as measurements of Saturn’s rings and magnetosphere and its icier moons (such as Enceladus). The spacecraft is noted to be in good condition and the new mission will be a success because of “the unique aspect of the new observations.”
Opportunity, which is more than 10 years into its Mars exploration, is still “in sufficiently good condition” to do science, although the panel raised concerns about software and communication problems. The panel, however, said more time with the rover would allow it to look for evidence of past water on Mars that would not be visible from orbit — even though it’s unclear if phyllosilicates around its current location (Endeavour crater) are from the Noachian period, the earliest period in Mars’ history.
The panel is just one step along the road to figuring out how NASA chooses to spend its money in the coming years. Funding availability depends on how much money Congress allocates to the agency.
Forty-five years ago yesterday, the Sea of Tranquility saw a brief flurry of activity when Neil Armstrong and Buzz Aldrin dared to disturb the ancient lunar dust. Now the site has lain quiet, untouched, for almost half a century. Are any traces of the astronauts still visible?
The answer is yes! Look at the picture above of the site taken in 2012, two years ago. Because erosion is a very gradual process on the moon — it generally takes millions of years for meteors and the sun’s activity to weather features away — the footprints of the Apollo 11 crew have a semi-immortality. That’s also true of the other five crews that made it to the moon’s surface.
In honor of the big anniversary, here are a few of NASA’s Lunar Reconnaissance Orbiter’s pictures of the landing sites of Apollo 11, Apollo 12, Apollo 14, Apollo 15, Apollo 16 and Apollo 17. (Apollo 13 was slated to land on the moon, but that was called off after an explosion in its service module.)
If you’re a fan of moon observation, it’s lucky for you that spacecraft such as the Lunar Reconnaissance Orbiter exist. For about the past five years, the NASA spacecraft has been in orbit around our closest large neighbor, taking images of the surface in high definition.
To celebrate LRO’s fifth anniversary, NASA is asking members of the public to vote on which of those images (above) is their favorite. This isn’t so much a statement about the scientific data it has collected, NASA said, but more appreciating the images as art.
Voting runs from May 23 to June 6, and the winner will be announced with the full collection’s release on June 18 — the actual official fifth anniversary of the launch. You can find more information about the vote at this page.
By the way, LRO not only takes good pictures of the moon, but also of other spacecraft. You can check out its pictures of LADEE and Chang’e-3 in these past Universe Today articles.
Meanwhile, James Garvin — NASA’s chief scientist of the sciences and exploration directorate — eloquently weighs in below on his favorite images of the moon. His description of Aristarchus is interesting: “Here is Mother Nature’s expression of a gigantic landform made by a cosmic collision.” You can check out the other four below.
While people across North America marvelled at the blood-red moon early this morning, some NASA engineers had a different topic on their minds: making sure the Lunar Reconnaissance Orbiter would survive the period of extended shadow during the eclipse.
LRO uses solar panels to get energy for its batteries, so for two passes through the Earth’s shadow it would not be able to get any sunlight at all. Tweets on the official account showed all was well in the first few hours after the eclipse.
“The spacecraft will be going straight from the moon’s shadow to the Earth’s shadow while it orbits during the eclipse,” stated Noah Petro, LRO’s deputy project scientist at NASA’s Goddard Space Flight Center, in a release before the eclipse occurred.
“We’re taking precautions to make sure everything is fine,” Petro added. “We’re turning off the instruments and will monitor the spacecraft every few hours when it’s visible from Earth.”
LRO’s Twitter account asked “Who turned off the heat and lights?” during the eclipse, then reported a happy acquisition of signal after the shadow passed by. “AOS, and sunlight, sweet sunlight! My batteries are charging again before I make another trip to the lunar far side.”
The Earth has a single moon, while Saturn has more than 60, with new moons being discovered all the time. But here’s a question, can a moon have a moon? Can that moon’s moon have its own moon? Can it be moons all the way down?
First, consider that we have a completely subjective idea of what a moon is. The Moon orbits the Earth, and the Earth orbits the Sun, and the Sun orbits the center of the Milky Way, which orbits within the Local Group, which is a part of the Virgo Supercluster. The motions of objects in the cosmos act like a set of Russian nesting dolls, with things orbiting things, which orbit other things. So maybe a better question is: could any of the moons in the Solar System have moons of their own? Well actually, one does.
Right now, NASA’s Lunar Reconnaissance Orbiter is happily orbiting around the Moon, photographing the place in high resolution. But humans sent it to the Moon, and just like all the artificial satellites sent there in the past, it’s doomed. No satellite we’ve sent to the Moon has ever orbited for longer than a few years before crashing down into the lunar surface. In theory, you could probably get a satellite to last a few hundred years around the Moon.
But why? Why can’t we make moons for our Moon, so it has a moon of its own for all time? It all comes down to gravity and tidal forces. Every object in the Universe is surrounded by an invisible sphere of gravitational influence. Anything within this volume, which astronomers call the “Hill Sphere”, will tend to orbit the object.
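If you want to put a number on that invisible sphere, the standard approximation for the Hill radius is r ≈ a·∛(m/3M). Here is a small back-of-the-envelope C++ sketch (the masses and distances are rough textbook values, not figures from this article); it gives roughly 1.5 million km for the Earth and about 60,000 km for the Moon.

// hill_sphere.cpp -- a rough sketch of the Hill sphere radius r ~ a * cbrt(m / (3M)).
// The masses and distances below are approximate textbook values, not mission data.
#include <cmath>
#include <cstdio>

double hillRadius(double a, double m, double M) {
    // a: orbital radius of the smaller body, m: its mass, M: mass of the body it orbits
    return a * std::cbrt(m / (3.0 * M));
}

int main() {
    double sun = 1.989e30, earth = 5.972e24, moon = 7.342e22; // kg
    double auDist = 1.496e11, moonDist = 3.844e8;             // metres
    std::printf("Earth's Hill sphere: ~%.0f km\n", hillRadius(auDist, earth, sun) / 1000.0);
    std::printf("Moon's  Hill sphere: ~%.0f km\n", hillRadius(moonDist, moon, earth) / 1000.0);
    return 0;
}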
So, if you had the Moon out in the middle of space, without any interactions, it could easily have multiple moons orbiting around it. But you get problems when you have these overlapping spheres of influence. The strength of gravity from the Earth tangles with the force of gravity from the Moon.
Although a spacecraft can orbit the Moon for a while, it’s just not stable. The tidal forces will cause the spacecraft’s orbit to decay until it crashes. But further out in the Solar System, there are tiny asteroids with even tinier moons. This is possible because they’re so far away from the Sun. Bring these asteroids closer to the Sun, and someone’s losing a moon.
The object with the largest Hill Sphere in the Solar System is Neptune. Because it’s so far away from the Sun, and it’s so massive, it can truly influence its environment. You could imagine a massive moon distantly orbiting Neptune, and around that moon, there could be a moon of its own. But this doesn’t appear to be the case.
NASA is considering a mission to capture an asteroid and put it into orbit around the Moon. This would be safer than having it orbit the Earth, but still keep it close enough to extract resources. But without any kind of orbital boost, those tidal forces will eventually crash it onto the Moon. So no, in our Solar System, we don’t know of any moons with moons of their own. In fact, we don’t even have a name for them. What would you suggest?
Scientists with the Lunar Reconnaissance Orbiter mission say that Icarus Crater is one of a kind on the Moon because its central peak rises unusually high relative to its rim — most central peaks rise only about halfway to the crater rim. At just the right angle and lighting conditions, the shadow this central peak casts on the rolling and jagged crater rim looks like the Star Wars character Yoda. Interestingly, this crater is located on what some people erroneously call the “Dark Side” of the Moon – what is actually the lunar farside.
Below you can see a closeup of the central peak of Icarus crater rising out of the shadows to greet a new lunar day.
Icarus is located just west of Korolev crater on the lunar farside. The light-colored plains surrounding the craters were deposited during the formation of the Orientale basin, which is located over 1500 km away.
Find out more about these images from LRO and see larger versions at the LROC website.
While NASA’s newest lunar probe was tracking the stars, it also captured the moon! This series of star tracker images shows Earth’s closest large neighbour from a close-up orbit. And as NASA explains, the primary purpose of these star-tracking images from the Lunar Atmosphere and Dust Environment Explorer (LADEE) was not the lunar pictures themselves.
This dissolve animation compares the LRO image (geometrically corrected) of LADEE captured on Jan 14, 2014 with a computer-generated and labeled image of LADEE. LRO and LADEE are both NASA science spacecraft currently in orbit around the Moon. Credit: NASA/Goddard/Arizona State University
A pair of NASA spacecraft orbiting Earth’s nearest celestial neighbor just experienced a brief ‘Close Encounter of the Lunar Kind’.
Proof of the rare orbital tryst has now been revealed by NASA in the form of spectacular imagery (see above and below) just released showing NASA’s recently arrived Lunar Atmosphere and Dust Environment Explorer (LADEE) lunar orbiter being photographed by a powerful camera aboard NASA’s five year old Lunar Reconnaissance Orbiter (LRO) – as the two orbiters met for a fleeting moment just two weeks ago.
See above a dissolve animation that compares the LRO image (geometrically corrected) of LADEE captured on Jan. 14, 2014 with a computer-generated and labeled LADEE image.
All this was only made possible by a lot of very precise orbital calculations and a spacecraft ballet of sorts that had to be nearly perfectly choreographed, timed, and executed spot on to accomplish.
Both sister orbiters were speeding along at over 3600 MPH (1,600 meters per second) while traveling perpendicularly to one another!
The LRO orbiter did a pirouette to precisely point its high resolution narrow angle camera (NAC) while hurtling along in lunar orbit, barely 5.6 miles (9 km) above LADEE.
And it was all over in less than the wink of an eye!
LADEE entered LRO’s Narrow Angle Camera (NAC) field of view for 1.35 milliseconds and a smeared image of LADEE was snapped. LADEE appears in four lines of the LROC image, and is distorted right-to-left.
Both spacecraft are tiny – barely two meters in length.
“Since LROC is a pushbroom imager, it builds up an image one line at a time, thus catching a target as small and fast as LADEE is tricky!” wrote Mark Robinson, LROC principal investigator of Arizona State University.
So the fabulous picture was only possible as a result of close collaboration and extraordinary teamwork between NASA’s LADEE, LRO and LROC camera mission operations teams.
LADEE passed directly beneath the LRO orbit plane a few seconds before LRO crossed the LADEE orbit plane, meaning a straight down LROC image would have just missed LADEE, said NASA.
Therefore, LRO was rolled 34 degrees to the west so the LROC detector (one line) would be precisely oriented to catch LADEE as it passed beneath.
“Despite the blur it is possible to find details of the spacecraft. You can see the engine nozzle, bright solar panel, and perhaps a star tracker camera (especially if you have a correctly oriented schematic diagram of LADEE for comparison),” wrote Robinson in a description.
See the LADEE schematic in the lead image herein.
LADEE was launched Sept. 6, 2013 from NASA Wallops in Virginia on a science mission to investigate the composition and properties of the Moon’s pristine and extremely tenuous atmosphere, or exosphere, and untangle the mysteries of its lofted lunar dust.
Since LADEE is now more than halfway through its roughly 100 day long mission, timing was of the essence before the craft takes a death dive into the moon’s surface.
You can see a full scale model of LADEE at the NASA Wallops visitor center, which offers free admission.
LRO launched June 18, 2009 from Cape Canaveral, Florida to conduct comprehensive investigations of the Moon with seven science instruments and search for potential landing sites for a return by human explorers. It has collected astounding views of the lunar surface, including the manned Apollo landing sites, as well as a treasure trove of lunar data.
In addition to NASA’s pair of lunar orbiters, China recently soft landed two probes on the Moon.
So be sure to read my new story detailing how LRO took some stupendous Christmastime 2013 images of China’s maiden lunar lander and rover, Chang’e-3 and Yutu, from high above – here.
Stay tuned here for Ken’s continuing LADEE, Chang’e-3, Orion, Orbital Sciences, SpaceX, commercial space, Mars rover and more news. |
How To Teach Decimals Conceptually
Decimals are a huge part of your math curriculum. They are connected to so many standards in various domains; place value, fractions, and measurement. Teaching this topic conceptually will give your students the exposure and confidence they need to see beyond standard algorithms and truly understand the reasoning behind their work. It’s easy to teach the algorithms for adding, subtracting, multiplying, and dividing decimals and breeze through this content, but you’re missing out on an opportunity to bring concrete understanding to these four operations.
Here is a breakdown of how I teach the four operations with decimals in my classroom.
Addition and Subtraction of Decimals:
Adding decimals – the easiest of the four, addition of decimals seems to be a nice starting point for students. Before I start on addition I really want my students to have a strong conceptual understanding of decimal place values, how they relate to one another (the hundredths place is one-tenth the value of the tenths place, but 10 times greater than the thousandths place), and be able to visually represent decimals using models.
We use place value disks and a place value chart to represent the decimals we are adding. Students will regroup disks when needed and see how that act changes the value of their numbers.
We also practice representing addition and subtraction of decimals with hundredth grids. Students will use different colored highlighters to represent the values being combined and cross out values when subtracting.
Multiplication of Decimals:
I think multiplication and division of decimals are far more “fun” than addition and subtraction. There are various ways to model and represent decimals using pattern blocks, fraction tiles, hundredth grids, and base ten blocks.
Here are some examples of the types of questions we review in class.
Our Power Problems have rigorous questions for you to implement in your classroom!
We work on these visual representations long before I introduce any algorithms. With a concrete understanding of multiplication with decimals your students will be able to understand the work behind the formula.
Division of Decimals:
You want to continue modeling and requiring students to conceptually understand this operation, just like the other three. Again, we use models to represent dividing a decimal by a decimal, decimal by a whole number, and a whole number by a decimal. Your students should be able to represent each of these scenarios using a hundredth grid. I integrate word problems anywhere I can so students have practice identifying which operation to use.
If you set your students up with conceptual understanding of decimals, this will help with fraction and measurement standards later on in the year. Next year’s teacher will also appreciate your efforts and you’re building number sense too!
Sign up below to receive our newsletter and gain access to your templates so you can start teaching decimals conceptually TODAY!
Picture Books That Teach Fractions, Ratios, Percentages, and Decimals
You will love this list of picture books that teach fractions, ratios, percentages, and decimals! These topics can be tough for children to grasp…until you bring them to life through picture books.
You’ve heard me say it a million times, but picture books can go a long way in helping children understand tough topics. Math is abstract. That means it isn’t something that our children can easily grasp if they have no prior knowledge or some frame of reference to start with. Picture books (and hands-on activities) are great for doing just that.
This post contains affiliate links.
Even if your children don’t seem to be having trouble understanding fractions, ratios, percentages, and decimals, it never hurts to add an extra layer of connection through picture books.
Picture Books to Teach Fractions
It wasn’t until I started teaching fractions using picture books and hands-on manipulatives that I truly began understanding them. Seriously. My experiences in school had been too abstract and I usually ended up in tears anytime fractions came around throughout my learning. Books like these and “playing” with manipulatives like fraction bars and circles turned on the light bulb!
You can easily play with fractions in your kitchen, too! Whip up a pizza or a pie and have the kids help cut it in halves, fourths, eighths, etc. Using measuring cups while cooking is also a great way to practice fractions. Your children will be able to see how they can use fractions in the “real world” if they accidentally put a teaspoon of salt into a recipe that only calls for 1/4 teaspoon of salt. Several of these books include food fractions!
- A Fractions Goal – Parts of a Whole by Brian P. Cleary
- Apple Fractions by Jerry Pallotta
- Clean-Sweep Campers by Lucille Recht Penner
- The Doorbell Rang by Pat Hutchins
- Fraction Action by Loreen Leedy
- Fraction Fun by David Adler
- Fractions, Decimals, and Percents by David A. Adler
- Fractions in Disguise by Edward Einhorn
- Full House: An Invitation to Fractions by Dayle Ann Dodds
- Gator Pie by Louise Mathews
- Give Me Half! by Stuart J. Murphy
- How Many Ways Can You Cut a Pie by Jane Belk Moncure
- Multiplying Menace: The Revenge of Rumpelstiltskin by Pam Calvert
- Rabbit and Hare Divide an Apple by Harriet Ziefert
- Sir Cumference and the Fraction Faire by Cindy Neuschwander
- The Hershey’s Milk Chocolate Fractions by Jerry Pallotta
- Whole-y Cow! Fractions are Fun by Taryn Souders
Picture Books to Teach Ratios and Proportions
Ratios and proportions are basically fractions, although they explain different information. A ratio is a comparison of two quantities and a proportion is a statement that two ratios are equal. So, for instance, if there are 2 computers in your house and 5 people live in your house, the ratio of computers to people is 2/5. A proportion of 2/5 would be something that is equal to 2/5 like 4/10 or 6/15.
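If you would like a quick way to check a proportion without drawing pictures, cross-multiplication works: a/b and c/d are equal exactly when a×d = b×c. Here is a tiny illustrative code sketch using the 2/5, 4/10, and 6/15 examples above (the function name is mine, just for illustration).

// proportion_check.cpp -- two ratios a/b and c/d form a proportion when a*d == b*c.
#include <iostream>

bool isProportion(long a, long b, long c, long d) {
    return a * d == b * c;   // cross-multiplication avoids floating-point division
}

int main() {
    std::cout << std::boolalpha;
    std::cout << "2/5 = 4/10 ? " << isProportion(2, 5, 4, 10) << '\n';  // true
    std::cout << "2/5 = 6/15 ? " << isProportion(2, 5, 6, 15) << '\n';  // true
    std::cout << "2/5 = 3/8  ? " << isProportion(2, 5, 3, 8)  << '\n';  // false
    return 0;
}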
Ratios and proportions sound hard, but they can be understood by most elementary students who have a basic understanding of fractions and multiplication. These books can help bring the concepts to life.
- A Very Improbable Story by Edward Einhorn
- Beanstalk: The Measure of a Giant by Ann McCallum
- Cut Down To Size at High Noon by Scott Sundby
- If You Hopped Like a Frog by David M. Schwartz
- Is a Blue Whale the Biggest Thing There Is? by Robert E. Wells
- Pythagoras and the Ratios by Julie Ellis
- Ratios and Rates Reasoning by Melanie Alvarez
- The Warlord’s Puppeteers by Virginia Pilegard
Picture Books to Teach Percentages
Percentages are basically fractions, too! They are based on parts of 100, which can make them a little easier to understand once the concept of fractions is down pat. There are only a few books to help specifically with percentages, so be sure to read them all!
- Fractions, Decimals, and Percents by David A. Adler
- The Grizzly Gazette by Stuart J. Murphy
- Twizzlers Percentage Book by Jerry Pallotta
Picture Books to Teach Decimals and Place Value
You guessed it, decimals are basically fractions, too, and they are great to teach right alongside place value. I’ve included books specifically for decimals and place value in this fun list.
Consider using place value blocks, decimal disks, and decimal dominoes to help further visualize these topics!
- A Fair Bear Share by Stuart J. Murphy
- A Place for Zero by Angeline Sparagna LoPresti
- Do You Know Dewey?: Exploring the Dewey Decimal System by Brian P. Cleary
- Earth Day – Hooray! by Stuart J. Murphy
- Fractions, Decimals, and Percents by David A. Adler
- How Much is a Million? by David M. Schwartz
- Parting Is Such Sweet Sorrow: Fractions and Decimals by Linda Powley
- Penguin Place Value by Kathleen L. Stone
- Place Value by David A. Adler
- Sir Cumference and All the King’s Tens by Cindy Neuschwander
- The King’s Commissioners by Aileen Friedman
- Zero the Hero by Joan Holub and Tom Lichtenheld
Multiplication of fractions: a simple instruction — Lifehacker
January 15, 2021
A simple cheat sheet for those who have forgotten the school curriculum in mathematics.
Multiplying fractions with each other
It’s simple: multiply the numerator by the numerator, and the denominator by the denominator. Then check if the fraction can be reduced. For example:
The rule works for fractions with different denominators as well as the same ones. If the numbers get large — say 24/35 — try to reduce the fraction wherever possible right away: it will be easier to keep count.
If there is a mixed number in the example, first convert it to an improper fraction, and then multiply it in the way described above. Convert the result back to a mixed number.
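For the programmatically inclined, the same rule can be written in a few lines of code. This is just an illustrative sketch (the Fraction type and function names are mine): multiply the numerators, multiply the denominators, then reduce by the greatest common divisor.

// multiply_fractions.cpp -- numerator*numerator over denominator*denominator, then reduce by the GCD.
#include <iostream>
#include <numeric>   // std::gcd (C++17)

struct Fraction { long num, den; };

Fraction multiply(Fraction a, Fraction b) {
    Fraction r{a.num * b.num, a.den * b.den};
    long g = std::gcd(r.num, r.den);
    if (g != 0) { r.num /= g; r.den /= g; }
    return r;
}

int main() {
    Fraction r = multiply({2, 3}, {3, 4});          // 2/3 * 3/4 = 6/12
    std::cout << r.num << "/" << r.den << '\n';     // prints 1/2 after reducing
    return 0;
}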
Remember the basics 💡
- What are fractions and how to add them
Multiplying decimals with each other
The multiplication takes place in three steps:
- Write the numbers in a column and multiply them as natural numbers, ignoring the decimal points for now.
- Count how many decimal places each factor has and add them up.
- Moving from right to left in the product, count off that many digits and put the decimal point there. This is the answer. For example:
If you multiply by 0.1, 0.01, 0.001 and so on, then move the decimal point to the left as many places as there are after the decimal point in the multiplier: 0.18 × 0.1 = 0.018; 0.5 × 0.001 = 0.0005.
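The same three steps can be mirrored in a short program. The sketch below is illustrative only (the helper name and the way each factor is passed — its digits plus a count of decimal places — are my own choices): multiply the digits as whole numbers, add up the decimal places, then shift the decimal point.

// decimal_multiply.cpp -- multiply decimals the "schoolbook" way:
// multiply the digits as whole numbers, then count off decimal places from the right.
#include <cstdio>
#include <cmath>

// digitsA/digitsB: the factors with their decimal points removed (e.g. 0.25 -> 25)
// placesA/placesB: how many digits each factor had after the decimal point
void multiplyDecimals(long digitsA, int placesA, long digitsB, int placesB) {
    long product = digitsA * digitsB;                  // step 1: multiply as natural numbers
    int places = placesA + placesB;                    // step 2: add up the decimal places
    double value = product / std::pow(10.0, places);   // step 3: shift the point left
    std::printf("%ld x 10^-%d = %g\n", product, places, value);
}

int main() {
    multiplyDecimals(25, 2, 4, 1);   // 0.25 * 0.4  -> 100 x 10^-3 = 0.1
    multiplyDecimals(18, 2, 1, 1);   // 0.18 * 0.1  -> 18  x 10^-3 = 0.018
    return 0;
}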
Refresh your knowledge 👈
- How to convert a fraction to decimal
Multiplying fractions by natural numbers
Only the numerator needs to be multiplied; the denominator is left unchanged. If the result is an improper fraction, take out the whole part to write it as a mixed number. For example:
If you need to multiply a mixed number, convert it to an improper fraction and multiply in the same way. That is:
There is a second way: divide the denominator by the natural number given to you, and do not touch the numerator. This method is more convenient to use when the denominator is divisible by this natural number without a remainder. For example:
Compare this method with the first one — the result is the same.
Multiplying decimals by natural numbers
In this case, use the same method as for multiplying a decimal by a decimal: multiply the numbers in a column, then count off as many digits as there were after the decimal point in the decimal factor, and put the decimal point there. That is:
If you need to multiply a decimal by 10, 100, 1000, and so on, just move the decimal point to the right as many places as there are zeros after the one. For example: 0.045 × 10 = 0.45; 0.045 × 100 = 4.5.
Read also 🧮👌🤓
- Multiply, divide, add like Sheldon Cooper? Math Hacks…
- How to Teach Your Child to Count Easily
- 6 ways to calculate the percentage of the amount with and without a calculator
- How to learn the multiplication table easily and quickly
- How to master mental counting for schoolchildren and adults
How to teach a child to convert fractions to decimals? – Wiki Reviews
How do you convert a fraction to a decimal for kids? To convert a fraction to a decimal, divide the numerator by the denominator. If necessary, you can use a calculator for this. This gives the answer as a decimal.
How do you turn a mixed fraction into a decimal? To convert a mixed number to a decimal:
- Convert a fraction to a decimal: Divide the numerator by the denominator.
- Add this decimal number to the integer part of the mixed number.
How to convert a fraction to a decimal without a calculator? Find a number that can be multiplied by the denominator to get 10, 100, 1000, or any 1 followed by zeros. This can be an easy way to turn a common fraction into a decimal without using a calculator or doing long division.
Second, how do you turn a mixed fraction into a decimal? To convert a mixed number to decimal form, find the decimal value of the fractional part and then add it to the integer part. For example, 1 4/5 can be converted as 1 + 4/5 = 1 + 0.8 = 1.8.
How do I change a common fraction to a decimal?
To convert a common fraction to a decimal, divide the numerator by the denominator, so 3/4 can be changed to the decimal 0.75. However, not all common fractions convert to such exact decimals: 2/3 as a decimal is an endless series of sixes to the right of the decimal point.
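For anyone who wants to see the division rule spelled out step by step, here is a small illustrative sketch (the function name and the 10-digit cap are my own choices) that performs the long division one digit at a time; it prints 0.75 for 3/4 and keeps producing sixes for 2/3.

// fraction_to_decimal.cpp -- convert a fraction to a decimal by dividing numerator by denominator,
// emitting one digit per long-division step (capped here at 10 decimal places).
#include <iostream>
#include <string>

std::string toDecimal(long num, long den, int maxDigits = 10) {
    std::string out = std::to_string(num / den) + ".";
    long rem = num % den;
    for (int i = 0; i < maxDigits && rem != 0; ++i) {
        rem *= 10;                        // bring down a zero
        out += std::to_string(rem / den); // next decimal digit
        rem %= den;
    }
    return out;
}

int main() {
    std::cout << "3/4 = " << toDecimal(3, 4) << '\n';  // 0.75
    std::cout << "4/5 = " << toDecimal(4, 5) << '\n';  // 0.8
    std::cout << "2/3 = " << toDecimal(2, 3) << '\n';  // 0.6666666666 (repeating)
    return 0;
}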
Then how do you turn .33 into a fraction? The simplified fraction 1/3 is actually equivalent to 33 and 1/3 percent, so .33 is only an approximation of it. Some instructors do not allow students to round to 1/3; in that case, 33/100 is the exact equivalent of .33.
What is .46 as a fraction? Because there are 2 digits in .46, the last digit is in the hundredths place, so .46 is the same as 46/100. Dividing the numerator and denominator by 2 simplifies it to 23/50.
What is 39% as a fraction?
Thirty-nine percent expressed as a fraction is 39/100. This can also be written as the decimal 0.39.
What is 94% as a fraction?
94% as a fraction is 94/100. If you want, you can simplify it to 47/50.
What is 11% as a fraction?
Answer: The value of 11% as a fraction in its simplest form is 11/100 .
What is 75 as a fraction?
Answer: 75% is written as 3/4 as a fraction in its simplest form.
How to multiply fractions? There are 3 easy steps to multiply fractions
- Multiply the top numbers (numerators).
- Multiply the bottom numbers (denominators).
- Simplify the fraction if necessary.
What is .58 as a fraction? Since there are 2 digits in .58, the last digit is in the hundredths place, so .58 is the same as 58/100, which simplifies to 29/50.
Is 93/100 already in its simplest form?
As you can see, 93/100 cannot be simplified any further, so the result is the same as at the beginning.
What is 66.6% as a fraction? 66.6% as a fraction is 66.6/100. If you want, you can simplify it to 333/500.
How do you turn 0.9166… into a fraction?
- 0.91666… = 0.91(6), with the 6 repeating, which as a fraction is 11/12. |
Here we look in detail at the Phase I report for Orbiting Rainbows; NASA has since provided funding for a Phase II study. The team would use several lasers to trap and shape billions of reflective dust particles into single or multiple lenses that could grow to reach tens of meters to thousands of kilometers in diameter. According to Swartzlander, the unprecedented resolution and detail might be great enough to spot clouds on exoplanets. The diameter of the lens would be similar to what hypertelescopes could achieve in space; however, the laser-shaped dust clouds could provide a filled-in aperture more cheaply.
Swartzlander developed and patented the techniques known as “optical lift,” in which light from a laser produces radiation pressure that controls the position and orientation of small objects.
Ideally, the dielectric particles should have 50% transmissivity and 50% reflectivity, no absorption, and, to avoid diffraction, they must be smaller than the wavelength of light. This giant but tenuous optical assembly has to be maintained either continuously or intermittently via separated free-flying pulsed lasers, which must have enough power, continuous operation capability, and adequate pointing capability to maintain the cloud stably in orbit.
Researchers propose to construct an optical system in space in which the nonlinear optical properties of a cloud of micron-sized particles are shaped into a specific surface by light pressure, allowing it to form a very large and lightweight aperture of an optical system, hence reducing overall mass and cost. Other potential advantages offered by the cloud properties as an optical system include a possible combination of functions (combined transmit/receive), variable focal length, combined refractive and reflective lens designs, and hyper-spectral imaging. A cloud of highly reflective particles of micron size acting coherently in a specific electromagnetic band, just like an aerosol in suspension in the atmosphere, would reflect the Sun's light much like a rainbow. The only difference from an atmospheric or industrial aerosol is the absence of the supporting fluid medium.
This new concept is based on recent understandings in the physics of optical manipulation of small particles in the laboratory and the engineering of distributed ensembles of spacecraft clouds to shape an orbiting cloud of micron-sized objects. In the same way that optical tweezers have revolutionized micro- and nano-manipulation of objects, our breakthrough concept will enable new large-scale NASA mission applications and develop new technology in the areas of astrophysical imaging systems and remote sensing, because the cloud can operate as an adaptive optical imaging sensor. While establishing the feasibility of constructing one single aperture out of the cloud is the main topic of this work, it is clear that multiple orbiting aerosol lenses could also combine their power to synthesize a much larger aperture in space to enable challenging goals such as exo-planet detection. Furthermore, this effort could establish feasibility of key issues related to material properties, remote manipulation, and autonomy characteristics of the cloud in orbit.
There are several types of endeavors (science missions) that could be enabled by this type of approach: new astrophysical imaging systems, exo-planet searches, large apertures allowing unprecedented high resolution to discern continents and important features of other planets, hyperspectral imaging, adaptive systems, spectroscopy imaging through the limb, and stable optical systems from Lagrange points. Future micro-miniaturization might hold the promise of a further extension of our dust aperture concept to other, more exciting smart dust concepts with other associated capabilities.
The evolution of space telescopes, from Hubble, James Webb, inflatable concepts, formation flying, up to hyper-telescopes, where distributed apertures form the primary, naturally leads to the concept investigated in this study. This concept would increase the aperture several times compared to ATLAS, allowing for a true Terrestrial Planet Imager that would be able to resolve exo-planet details and do meaningful spectroscopy on distant worlds. The aperture does not need to be continuous. Used interferometrically, for example, as in a Golay array, imagery can be synthesized over an enormous scale. We leveraged our experience working with large optical systems to consider refractive, reflective and holographic systems. Finding a way to manipulate such distribution of matter in space would lead to a potentially affordable new way of generating very large and potentially re-shapeable optics in space, and indirectly open the way to future technologies for space construction by means of light. It will also enable new astrophysical imaging systems, exo-planet search, hyperspectral imaging, adaptive systems, spectroscopy imaging through limb, and stable optical systems from Lagrange points.
Radiation pressure positions the dust in a coherent pattern oriented toward an astronomical object. The reflective particles form a lens and channel light to a sensor, or a large array of detectors, on a satellite. Controlling the dust to reflect enough light to the sensor to make it work will be a technological hurdle.
Two RIT graduate students on Swartzlander’s team are working on different aspects of the project. Alexandra Artusio-Glimpse, a doctoral student in imaging science, is designing experiments in low-gravity environments to explore techniques for controlling swarms of particles and to determine the merits of using single or multiple beams of light.
Swartzlander expects the telescope will produce speckled and grainy images. Xiaopeng Peng, a doctoral student in imaging science, is developing software algorithms for extracting information from the blurred image the sensor captures. The laser that will shape the smart dust into a lens also will measure the optical distortion caused by the imaging system. Peng will use this information to develop advanced image processing techniques to reverse the distortion and recover detailed images.
The report assumes 10 million grains per aerosol patch, a grain density of 2500 kg/m³, 3 patches of 1 meter diameter, difficulty level 2, and a cloud thickness of 1 micron.
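The line above lists assumptions without stating the implied mass. As a purely back-of-the-envelope illustration (my own interpretation of those numbers, not a figure from the study), the sketch below brackets the dust mass between two simple models: grains treated as one-micron cubes, and each one-meter patch treated as a solid one-micron-thick film.

// cloud_mass_estimate.cpp -- rough bounds on the dust-cloud mass implied by the
// quoted assumptions (10 million grains/patch, 2500 kg/m^3, three 1 m patches,
// 1 micron thickness). Illustrative back-of-the-envelope only, not the study's result.
#include <cstdio>

int main() {
    const double pi        = 3.14159265358979;
    const double density   = 2500.0;   // kg/m^3
    const double grains    = 1.0e7;    // grains per patch
    const double patches   = 3.0;
    const double radius    = 0.5;      // m (1 m diameter patch)
    const double thickness = 1.0e-6;   // m (1 micron)

    // Lower bound: each grain modelled as a 1-micron cube.
    double grainMass = thickness * thickness * thickness * density;
    double lower = grains * patches * grainMass;

    // Upper bound: each patch modelled as a solid 1-micron-thick disc.
    double discVolume = pi * radius * radius * thickness;
    double upper = patches * discVolume * density;

    std::printf("lower bound: %.2e kg (~%.0f micrograms)\n", lower, lower * 1e9);
    std::printf("upper bound: %.2e kg (~%.1f grams)\n", upper, upper * 1e3);
    return 0;
}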
SOURCES – Rochester Institute of Technology, NASA, Wikipedia
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements. |
Measurement (book reference p. 51-62). Accuracy: the accuracy of a measurement refers to how close the measured value is to the true value. For example, suppose you used a balance to find the mass of a 100.00 g object: if you got a measurement of 78.55 g, your measurement would not be very accurate. Precision: precision refers to how close together a group of measurements are to each other.
Accuracy and precision examples: precise; accurate and precise.
Errors: accuracy with error; precision with error.
Electron configuration practice. Give the electron configuration: Pb, Fe. Identify the element: [Kr] 4d¹⁰ 5s² 5p², [Ne] 3s² 3p¹.
Significant figures sig fig (sf) Rules for reporting meaningful experimental results. Prevents propagation of error.
Sig figs: Any non-zero digit is significant. Zeros between significant digits are significant. Leading zeros (zeros in front of all nonzero digits) are not significant. Zeros at the end of a number and to the right of the decimal point are significant. Trailing zeros to the left of the decimal point are ambiguous: if they were actually measured they are significant — use scientific notation to make this clear.
Sig fig examples:
24.7 g – 3 sf
0.346 g – 3 sf
2005 m – 4 sf
3.509 ml – 4 sf
0.00067 g – 2 sf
56.00 – 4 sf
6.010 – 4 sf
600 – 1 or 3 sf (write 6.00 × 10² to show 3)
720 – 2 or 3 sf (write 7.20 × 10² to show 3)
Calculations and sig figs – addition and subtraction: an answer cannot be more precise than the least precise measurement. 4.34 cm − 2.3 cm = 2.04, so 2.0 cm. Rounding rule: if the digit after the last significant digit is 5 or greater, round up. 10.345 g + 2.3 g = 12.645, so 12.6 g.
Calculations and sig figs – multiplication and division: the LEAST–MOST rule — the most significant figures you may report in your answer is set by the factor with the least number of significant figures. 3.4 cm × 5.43 cm = 18.462, so 18 cm². 18.45 g / 3.45 g = 5.347826087, so 5.35.
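The least–most rule can also be automated: compute the raw result, then round it to the number of significant figures of the least precise factor. Below is a small illustrative sketch (the helper name is arbitrary) that reproduces the 18 cm² and 5.35 answers above.

// sigfig_round.cpp -- round a value to a given number of significant figures.
#include <cmath>
#include <cstdio>

double roundSig(double value, int sigFigs) {
    if (value == 0.0) return 0.0;
    double magnitude = std::floor(std::log10(std::fabs(value)));   // power of 10 of the leading digit
    double factor = std::pow(10.0, sigFigs - 1 - magnitude);       // scale so the kept digits sit left of the point
    return std::round(value * factor) / factor;
}

int main() {
    std::printf("3.4 x 5.43   = %.3f -> %.0f (2 sig figs)\n", 3.4 * 5.43, roundSig(3.4 * 5.43, 2));
    std::printf("18.45 / 3.45 = %.6f -> %.2f (3 sig figs)\n", 18.45 / 3.45, roundSig(18.45 / 3.45, 3));
    return 0;
}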
|
Introduction to Queue in C++
A queue is a data structure that operates in first-in, first-out (FIFO) order: elements are inserted at the back and removed from the front, just like a queue of people in the real world. The queue is a container adapter which holds data of the same type. A container adapter does not provide iterators, so we cannot traverse or manipulate the stored data directly. The queue in C++ provides just two methods for inserting and removing elements, push() and pop().
template <class Obj, class Container = deque<Obj> > class queue;
- Obj: It represents the type of element it is going to contain.
- Container: Type of container object.
As we know, the queue is a container adapter, so it should support the operations described below:
How Does Queue Work in C++?
As we now know, a queue works in FIFO order. Take the example of a ticket counter: whoever enters the queue first stands at the front and is the first to get a ticket. Initially the queue is empty; then A enters, then B, so A will also be the first one to be removed. That is FIFO. In technical language we say:
- If we put an item into the queue, it is an ‘enqueue’.
- If we remove an item from the queue, it is a ‘dequeue’.
Operations of Queue
A queue is a general data structure, so it can be implemented in any language. We can say a queue is an object which allows us the following operations:
- Peek: By this, we can get the value of the first element from the queue without removing it.
- Dequeue: This is the process of removing an element from the queue at the front.
- Enqueue: This is the process of adding an element to the queue at the end.
- IsFull: It allows us to check if the queue is full.
- IsEmpty: It allows us to check if the queue is empty.
The operations that take place in a queue:
- We have two pointers in the queue which track the front and the end of the queue: FRONT and REAR.
- When we first initialize the queue, we set both pointers, REAR and FRONT, to -1.
- When we enqueue an element, we increase the value of the REAR pointer and place the new element at that position.
- When we dequeue an element, we return the value at FRONT and increase the FRONT pointer.
- Before enqueuing an element, we first check whether the queue is already full.
- Before dequeuing an element, we check whether the queue is already empty.
- On enqueuing the very first element, we set the FRONT pointer to 0.
- On dequeuing the very last element, we reset both pointers, FRONT and REAR, to -1, and the process continues.
But there is a limitation to this simple array-based queue: as elements are dequeued, the usable space at the front is lost and the effective size of the queue shrinks, and the only remedy is to reset the queue again (a circular queue avoids this by wrapping the indices around).
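The pointer bookkeeping in the list above translates almost line for line into code. Below is a minimal, illustrative fixed-size array queue (not the std::queue container adapter discussed later in this article) that keeps FRONT and REAR exactly as described, including the reset to -1 when the last element is dequeued.

// array_queue.cpp -- a minimal fixed-size queue using FRONT and REAR indices,
// mirroring the steps listed above (a sketch, not std::queue).
#include <iostream>

class ArrayQueue {
    static const int CAP = 5;
    int data[CAP];
    int front = -1, rear = -1;
public:
    bool isEmpty() const { return front == -1; }
    bool isFull()  const { return rear == CAP - 1; }
    void enqueue(int value) {
        if (isFull()) { std::cout << "queue is full\n"; return; }
        if (isEmpty()) front = 0;      // first element: FRONT moves from -1 to 0
        data[++rear] = value;          // advance REAR and store the new element
    }
    int dequeue() {
        if (isEmpty()) { std::cout << "queue is empty\n"; return -1; }
        int value = data[front];
        if (front == rear) front = rear = -1;  // last element removed: reset both
        else ++front;                          // otherwise just advance FRONT
        return value;
    }
};

int main() {
    ArrayQueue q;
    q.enqueue(10); q.enqueue(20); q.enqueue(30);
    std::cout << q.dequeue() << ' ' << q.dequeue() << '\n';  // prints 10 20 (FIFO order)
    return 0;
}

A circular queue avoids the wasted space mentioned above by advancing REAR with modular arithmetic instead of resetting.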
Example of Queue in C++
Let us see the example of queue in C++ with code implementation and output.
#include <iostream>
#include <queue>
using namespace std;
// Print every element of the queue (works on a copy, so the original queue is untouched).
void queueDemoshow(queue<int> gq1) {
    queue<int> g1 = gq1;
    while (!g1.empty()) { cout << '\t' << g1.front(); g1.pop(); }
    cout << '\n';
}
int main() {
    queue<int> queuedemo;
    queuedemo.push(10); queuedemo.push(20); queuedemo.push(30); // sample values
    cout << "elements in the queue are : ";
    queueDemoshow(queuedemo);
    cout << "\nPrinting the size of the queue (queuedemo.size()) : " << queuedemo.size();
    cout << "\nPrinting the first element from the queue (queuedemo.front()) : " << queuedemo.front();
    cout << "\nPrinting the last element from the queue (queuedemo.back()) : " << queuedemo.back();
    cout << "\nUse of pop() method (queuedemo.pop()) : ";
    queuedemo.pop();
    queueDemoshow(queuedemo);
    return 0;
}
Queue Member Types in C++
The queue member types in C++ are as follows,
- value_type: the type of the elements stored in the queue.
- container_type: the type of the underlying container.
- size_type: the unsigned integer type used to represent the number of elements in the queue.
- reference: the reference type for elements of the underlying container.
- const_reference: the reference type for constant elements of the underlying container.
Queue Functions in C++
The queue provides several member functions for working with the container. Some of these functions are mentioned below:
- swap: Exchanges the contents of two queues of the same type.
- size: Returns the number of elements currently in the queue.
- empty: Checks whether the queue is empty; it returns true if there are no elements and false otherwise.
- emplace: Constructs a new element in place at the back of the queue, avoiding the copy that push would make.
- pop: Removes an element from the FRONT of the queue, because a queue follows FIFO.
- push: Adds a new element at the end (the REAR) of the queue, because a queue follows FIFO.
- back: Accesses the last element in the queue, i.e. the REAR element. This is important because all insertion happens at the end.
- front: Accesses the first element. This is important because all removal happens at the FRONT.
- Relational operators: Non-member operators (==, !=, <, and so on) that compare two queues.
- uses_allocator<queue>: A trait that reports whether queue uses an allocator.
The C++ queue works on the FIFO principle. The underlying data structure is the same in any language; only the C++ syntax differs. FRONT and REAR are the key concepts to keep in mind.
This is a guide to Queue in C++. Here we discuss the introduction and how a queue works in C++, along with an example, its member types, and functions. You may also look at our other related articles to learn more. |
At the beginning of the 14th century Europe suffered a serious agricultural crisis, the result of a succession of poor harvests caused by adverse weather and the cultivation of poor-quality land. The agricultural crisis led to the spread of hunger throughout Europe. On top of this came the devastation of the Black Death, an epidemic that swept across Europe and caused enormous loss of life.
These events brought an abrupt halt to the long period of economic prosperity that had been experienced in Europe between the 12th and early 14th centuries. The period of recession that began in the 14th century lasted until the 15th century.
Historians agree that the demographic collapse that occurred as a result of the late medieval crisis was more significant in the city than in the countryside. Epidemics and famines were more deadly in the cities, as large numbers of people lived in concentration and there was a greater lack of hygiene. However, in the medium and long term, the effects of population loss became more visible in the countryside. Again, migration of people from the countryside to the city was a major factor.
Why were people moving to the cities? Many craft workshops in the cities fell empty. As there was a demand for labour, wages tended to rise. In addition, we must consider the fact that many farmers fled from the countryside to avoid the abuses of the feudal lords.
At the same time, there were also migratory movements within the rural world. Some farmers simply changed their land. Some rural areas ended up practically empty.
As a result of the crisis, population centres were rearranged. Cities quickly recovered the population level they had in the Early Middle Ages. The countryside lost population. All this had repercussions for the functioning of the feudal system.
The first and most spectacular effect of this crisis was the decline in cultivated area as fields were abandoned. We call this phenomenon rural depopulation. It was a selective abandonment.
Marginal lands, those that had been cultivated later, were abandoned because they were the worst. At the same time, the best lands got the most attention.
How did this reduction in the rural population manifest itself?
Following the crisis of the 14th century, the peasantry had good reason for wanting to emigrate towards the more prosperous cities. The abandonment of the countryside brought about a change in economic orientation. From that moment on, livestock farming, needing fewer people than agriculture, flourished.
In Germany, the phenomenon of rural depopulation is known as Wüstung (abandoned villages). Around 1300, there were about 170,000 villages, while by 1500 the number had fallen to 130,000. In Alsace, between 1300 and 1800, 224 villages disappeared.
In France, the phenomenon of depopulation is known as "désert". In England, rural depopulation is also known as "lost villages". The depopulated nuclei in England had more to do with a reorganization of agricultural work. Many farms moved towards cattle raising. The area with the most depopulated nuclei was the central area (from London upwards, mountain areas).
The drop in agricultural production was quite spectacular. We should start by looking at the relationship between man and the land. How does it look before and after the crisis?
Before 1300, i.e. during the growth period, the value of land was high. We know this thanks to:
This was the general situation until 1300. Up to the beginning of the 14th century, small-scale farming was predominant.
What happened after 1300? The positive trend of the previous period was reversed. These factors can be observed:
Revenue for the nobility declined and this is exactly the point of the so-called late-medieval crisis. Abandoned plots of land were taken over. This was attractive to both farmers and lords. The latter provided facilities, since if they wanted farmers to occupy land they had to provide incentives (lower incomes and more affordable entry prices).
However, when it came to accessing land, different people did not have the same opportunities: a selection process occurred. Not all farmers were equal. Some could have large farms, while others could not. There was a strong hierarchy of peasant work. A minority of this peasantry could consolidate, and it was powerful. This group, economically stronger, got to be known as the "greasy peasants" (the most favoured).
Often, these richer peasants could not work all their land. This is why many of them rented out salaried labour or made sub-establishments and/or sub-leases and this caused that with the passing of time the "greasy peasants" stopped working the land, dedicating themselves only to its control and management.
This fact created huge differences and contrasts within the peasant work.
The changes brought about by the crisis required new criteria for land management and exploitation. The system of mansus indominicatus - the reserve part of the property that the landowner gave to his serfs or settlers to exploit in exchange for a commitment to pay for it with various jobs and obligations - receded. The feudal lords had great difficulty in receiving the corvées (the obligation of the farmer to work for free on the lord's land), as the peasantry refused to make them and rebelled.
The increase in wages made it extremely difficult for the feudal lords to obtain free labour. For the lords, it was a real ruin. As they could not get free labour, this system was a totally unviable form of exploitation. It was therefore necessary to look for alternatives, such as a change of direction in production and new contractual formulas. Direct management of the land began to be carried out, monitoring the exploitation while orienting production towards supplying the urban market (diversified demand). Land management by tenants or owners. The communal spaces were often taken over by the lords.
The large managers, fat farmers, oriented production and turned the countryside into an area dependent on urban demand. The members of the urban bourgeoisie channelled the suburban area towards the needs of supply of the cities. These needs were increasing due to the growing population.
The agricultural areas adjacent to the cities could hardly supply them with wheat, so they had to go to farther areas. There was a need for grain and wine (obtained from rural areas), plants used for craft production (flax, hemp, dyeing), highly specialized agricultural products (rice, sugar, vegetables …) and, most importantly, the great potential of livestock, especially cattle (sheep's wool was very profitable because textiles were the great industry of the Middle Ages).
The needs of the market implied a major involvement with the adjacent areas of the city. This new production required the implementation of other ways to exploit the land. We cannot say these were new forms of production, although they were greatly expanded, for example:
Before 1300, surplus marketable grain was scarce. There were difficulties in supplying the cities. The product was scarce and its price was high. Although the surpluses were small, a good price was obtained. Once the initial effects of the Black Death were overcome, the situation was reversed: there was more surplus and more abundance due to the demographic fall. Prices fell and the income of the peasantry decreased.
Farmers, from 1300 on, had more at a lower price. In terms of subsistence, the situation was much better than before. Farmers had more resources. The big problem was that the crisis affected the feudal lords, who suffered a big drop in income, so the following changes took place:
At some points, the demands of the nobility increased. Jurisdictional demands grew — burdens that were mainly directed at the greasy (rich) peasants. It was these rich peasants who led the revolts of the 14th century.
Following the great mortality rate of the 14th century in the cities, a reduction in demand was experienced and therefore production declined. The basic manufacturing activity was textiles. Other activities were related to the resources of each particular area. For its part, the textile industry was the most universal in the Middle Ages.
Nevertheless, it is difficult to establish the real dimensions of the textile industry at that time. Data is only available for England. There, the agricultural product is estimated at around three million pounds. The textile industry, considering the product marketed, accounted for some 100,000 pounds. It represented only 2% of the workforce. However, it did not cover all the demand. To meet this demand, rural areas also produced. The rural population was largely self-sufficient. The textile activity was expensive, and it was destined to a clientele with a high purchasing power.
During the late medieval period, feudal-type production was in crisis. The major production centres (northern Italy and Flanders) suffered the most. This phenomenon was more visible in these more active areas, but this does not mean that it was general. In the countryside there was an increase in productivity, and it was possible to supply the cities better. The cities paid for the supplies that arrived from the countryside with the sale of artisan products. But urban prices were more rigid: urban productivity did not increase and therefore urban prices did not fall, while agricultural prices did. Thus, the rural community was still unable to buy urban products, which led to a reduction in demand. This phenomenon is called the price scissors. On the other hand, the imbalance between prices and wages must also be considered.
The demand of the most powerful groups also declined, as their incomes decreased. In Florence, in 1300, some 100,000 pieces of textiles were produced. In 1350, about 70,000 pieces. By 1373, it dropped to 30,000 pieces. And in 1382 the quantity decreased to 19,000 pieces. In Flanders, it seems that the proportion would be similar. We must relate this phenomenon of reduced demand to the urban revolts of the time.
The distance between a rich minority and a poor majority of the population is increasing. In the face of this collapse, alternatives had to be found to restore activity. The solution was to adapt to the demand. In this sense, three lines of action stand out:
The European space had only been surpassed during the Crusades. Towards the end of the 13th century, there were sporadic appearances in Atlantic Africa. These journeys had little continuity. They were sporadic as technical means remained underdeveloped and because there was a high risk, so they did not receive financial support.
At the beginning of the 14th century, the Castilians arrived to the Canaries. Religious communities were also established there.
Towards the end of the 14th century, a major change took place. The last decade of the century marked the beginning of the Castilian and Portuguese expeditions. They had the character of preaching as well as of obtaining booty. They were a continuation of the process of the Reconquest. The Castilians also occupied areas of the Moroccan coast, such as Melilla, Bujía… The participation of the Castilians and the Portuguese in this new Atlantic area inevitably led to a confrontation. This is how the areas of influence had to be defined. A first agreement was the Treaty of Alcáçovas-Toledo (1479).
The Canaries became a key point on the route to America and were a place of experimentation for the subsequent American conquest. The conquest of the Canaries was of a feudal nature. The native population was exterminated and the islands were colonized with settlers from the Iberian Peninsula. However, there was a lack of labour, so African slave labour was used. On the islands, people worked on the sugar cane plantations and harvested roccella canariensis and dragon's blood (both used to make dyes).
The Portuguese developed an intense commercial activity. The bourgeoisie was consolidated and genuinely powerful, with many contacts with the rest of Europe. The Portuguese went to Africa to capture slaves and gold. Portuguese merchants wanted to find a new route for the spice trade across the Atlantic, circumnavigating Africa.
The development of new mercantile, technical and financial instruments was another way of overcoming the crisis of the late Middle Ages.
In the Middle Ages, goods moved by land, river and sea. However, these methods were slow and expensive (transport could represent up to 25% of the total cost). The route was chosen based on distance, but the preferred method was by sea, and the most widely used form was coastal shipping. A toll, the lezda, had to be paid on each stretch of coast.
It was necessary, however, to improve navigation, and new sailing methods were needed. New boats were developed, such as the caravel, which mounted new types of sail, including the square sail. This new vessel could carry more tonnage (up to 300 tonnes). The compass, portolan charts, trigonometric tables, the astrolabe, etc. were also used systematically.
Gradually, the transport of goods was improved with the aim of achieving safer journeys. However, it was still dangerous because of the corsairs and pirates. Insurance and convoys were developed to protect against these setbacks.
Commercial relations were therefore becoming more and more complex. Trade was regulated and commercial codes were drawn up. The Book of the Consulate of the Sea of Barcelona is very well known. It was a code in constant evolution so that it could be revised according to the needs.
The development of business management and administration was another aspect of the crisis. Procedures were universalized in large trading centres. From the merchant-traveller, the focus shifted to a capitalist who controlled a network of commercial agents. Activities and investments were diversified. Companies traded any item. This is how investment was secured. If one product failed, another compensated. Speculation on currency exchange and loans followed. Trading companies became more and more stable and various capitals were pooled to cope more strongly with investments.
Financial and credit instruments flourished at this time. They became necessary due to the high volume of operations and the distance of travel. Carrying money around was dangerous, and currency exchange had to be borne in mind. Solutions were sought and systems were developed to facilitate the exchange.
Arabic numerals were adopted. Account books were kept recording income and outgoings. Capital transfers and loans were practised, with cheques and bills of exchange serving as their basis.
For all these reasons some historians speak of the development of a proto-capitalism. |
Gross Domestic Product (GDP) is commonly regarded as one of the most important economic indicators for measuring the health and performance of a country’s economy. It provides a comprehensive snapshot of the economic activity within a specific time frame, usually a year or a quarter. Although GDP is primarily used to assess the well-being of a nation, understanding its importance and impact extends beyond individual countries and has a significant effect on the global economy.
Firstly, GDP serves as a barometer for economic growth. By measuring the total value of goods and services produced within a country’s borders, it indicates the overall direction of economic development. A higher GDP suggests that the country is experiencing economic growth, while a lower GDP implies a sluggish or shrinking economy. These trends can impact the global economy as they influence investor confidence and the allocation of resources. For instance, high GDP growth rates attract foreign investment, leading to increased job opportunities and economic expansion, which can benefit not only the host country but also have spill-over effects on other nations.
Secondly, GDP allows for international comparisons and benchmarking. When comparing GDP figures across countries, economists can observe disparities in economic performance and identify areas for improvement. Countries with lower GDP levels may seek to implement policies or adopt strategies used by more prosperous nations to stimulate their own economic growth. Furthermore, GDP comparisons enable policymakers to identify potential trading partners and investment opportunities. Countries with higher GDP often have greater purchasing power and can serve as valuable markets for foreign goods and services, leading to increased global trade and economic interdependence.
Moreover, GDP plays a crucial role in economic forecasting and monetary policy decisions. Central banks and governments rely on GDP data to make informed decisions regarding interest rates, fiscal policies, and currency exchange rates. GDP figures help policymakers gauge the current state of the economy and identify vulnerabilities or imbalances, which can inform the design of economic policies aimed at stabilizing or stimulating growth. These decisions can have global consequences, as changes in monetary policy or shifts in fiscal priorities can influence exchange rates, capital flows, and investor sentiment, leading to either instability or reassurance in the global financial markets.
However, despite its widespread use as an economic indicator, GDP has inherent limitations that must be considered to avoid misinterpretations or misguided policy decisions. Firstly, GDP does not account for the distribution of wealth within a country. A high GDP does not guarantee equitable wealth distribution or improved living standards for all citizens. It is possible for economic growth to benefit only a small portion of the population, leading to increased income inequality and social unrest. Therefore, policymakers should consider other indicators, such as the Gini coefficient or poverty rates, to assess the overall well-being of a society.
Furthermore, GDP does not encompass several crucial factors that contribute to human welfare and quality of life, such as environmental degradation, social cohesion, education, and healthcare. While economic growth is essential for improving living standards, it should not come at the expense of sustainability or social progress. To address this limitation, some economists have proposed alternative indicators like the Genuine Progress Indicator (GPI) or the Human Development Index (HDI), which attempt to capture a more holistic view of well-being beyond purely economic considerations.
In conclusion, understanding the importance and impact of Gross Domestic Product (GDP) on the global economy is crucial for policymakers, investors, and individuals. GDP serves as a measure of economic growth, allowing for international benchmarking and identifying areas for improvement. It also influences economic forecasting, monetary policy decisions, and global trade patterns. However, its limitations must be acknowledged, and policymakers should consider supplementary indicators to ensure a comprehensive assessment of societal well-being and progress.
Gaelic Ireland (Irish: Éire Ghaelach) was a Gaelic political and social order that existed in Ireland from some time in the prehistoric era until the early 17th century. Before the Norman invasion of 1169, Gaelic Ireland comprised the whole island. Thereafter, it comprised that part of the country not under English or other foreign dominion at a given time. For most of its history, Ireland was a 'patchwork' of territories ruled by a hierarchy of kings or chiefs, who were elected by a system known as tanistry. Warfare between these territories was common. Occasionally, a powerful ruler was acknowledged as High King of Ireland. Society was separated into kin groups and, like the rest of Europe, was structured hierarchically according to class. Throughout this period, the economy was mainly pastoral and money was generally not used. A Gaelic Irish style of dress, music, dance, sport, architecture, and art can be identified, with Irish art later merging with Anglo-Saxon styles from Great Britain to develop Insular art.
Gaelic-Irish culture was initially pagan and was mainly based on an oral tradition, although inscription in the ogham alphabet began in the protohistoric period, perhaps as early as the 1st century BCE. The conversion to Christianity accompanied the introduction of literature, and much of Ireland's rich pre-Christian mythology and sophisticated law code were preserved, albeit Christianized. Ireland was an important centre of learning and preserved knowledge during the Early Middle Ages. During this time, Irish monks helped to (re-)spread Christianity along with elements of Gaelic art and culture to Anglo-Saxon Britain and on to non-Christian areas of mainland Europe in the Hiberno-Scottish mission.
In the 9th century, the Vikings began raiding and founding settlements along Ireland's coasts and waterways. These became Ireland's first large towns. Over time, these settlers were assimilated into Gaelic society and became the Norse-Gaels. After the Norman invasion of 1169–71, large swathes of Ireland came under the control of Norman lords. The King of England claimed sovereignty over this territory – the Lordship of Ireland – and over the island as a whole. However, the Gaelic system continued in areas outside Anglo-Norman control. The territory under English control gradually shrank to an area known as the Pale and, outside this, many Hiberno-Norman lords adopted Gaelic culture. There was regular conflict between the Gaels and the Norman settlers.
In 1542, Henry VIII of England declared the Lordship a Kingdom and himself King of Ireland. The English then began to conquer (or re-conquer) the island. By 1607, Ireland was fully under English control, bringing the old Gaelic political and social order to an end.
- 1 Culture and society
- 1.1 Religion and mythology
- 1.2 Social and political structure
- 1.3 Law
- 1.4 Settlements and architecture
- 1.5 Economy
- 1.6 Transport
- 1.7 Dress
- 1.8 Warfare
- 1.9 Arts
- 1.10 Sport
- 1.11 Assemblies
- 2 List of finte, túatha and kings
- 3 History
- 4 See also
- 5 References
- 6 Further reading
Culture and society
Gaelic culture and society were centred on the fine (clan; compare Gaelic-Irish clann, "children (of the family)"), and the landscape and history of Ireland were shaped by inter-fine relationships, marriages, friendships, wars, vendettas, trading, and so on. Gaelic Ireland had a rich oral culture and an appreciation of intellectual pursuits. Filí and draoithe (druids) were held in high regard during pagan times and orally passed down the history and traditions of their people. Later, many of their spiritual and intellectual tasks passed to Christian monks after Christianity prevailed from the 5th century onwards; the filí, however, continued to hold a high position. Poetry, music, storytelling, literature and other art forms were highly prized and cultivated in both pagan and Christian Gaelic Ireland. Hospitality, bonds of kinship and the fulfilment of social and ritual responsibilities were held sacred.
Like Britain, Gaelic Ireland consisted not of one single unified kingdom but of several. The principal kingdoms were Ulaid, Mide, Laigin, Muma (consisting of Iarmuman, Tuadhmhumhain and Deas-Mhumhain, hence the three crowns of Munster), Connacht, Bréifne, In Tuaiscert, and Airgíalla. Each of these overkingdoms was built upon lordships known as túatha (singular: túath).
Law tracts from the early 700s describe a hierarchy of kings: kings of a single túath were subject to kings of several túatha, who in turn were subject to the regional overkings. Even before the 8th century, these overkingships had begun to dissolve the túatha as the basic sociopolitical unit.
Religion and mythology
Before Christianization, the Gaelic Irish were polytheistic or pagan. They had many gods and goddesses, which generally have parallels in the pantheons of European nations. They were also animists, believing that all aspects of the natural world contained spirits, and that these spirits could be communicated with. Burial practices—which included burying food, weapons, and ornaments with the dead—suggest a belief in life after death. Some have equated this afterlife with the realms known as Magh Meall and Tír na nÓg in Irish mythology. There were four main religious festivals each year, marking the traditional four divisions of the year – Imbolc, Bealtaine, Lughnasadh and Samhain.
The mythology of Ireland was originally passed down orally, but much of it was eventually written down by Irish monks, who Christianized and modified it to an extent. This large body of work is often split into three overlapping cycles: the Mythological Cycle, the Ulster Cycle, and the Fenian Cycle. The first cycle is a pseudo-history that describes how Ireland, its people and its society came to be. The second cycle tells of the lives and deaths of Ulaidh heroes such as Cúchulainn. The third cycle tells of the exploits of Fionn mac Cumhaill and the Fianna. There are also a number of tales that do not fit into these cycles – this includes the immrama and echtrai, which are tales of voyages to the 'otherworld'. Two groups of supernatural beings who appear throughout Irish mythology—the Tuatha Dé Danann and Fomorians—are believed to represent the Gaelic pantheon.
Social and political structure
In Gaelic Ireland each person belonged to an agnatic kin-group known as a fine (plural: finte). This was a large group of related people supposedly descended from one progenitor through male forebears. It was headed by a male chieftain, known in Old Irish as a cennfine or toísech (plural: toísig). Although these groups were primarily based on blood kinship, they also included those who were fostered into the group and those who were accepted into it for other reasons.
Nicholls suggests that they would be better thought of as akin to the modern-day corporation. Within each fine, the family descended from a common great-grandparent was termed a derbfine (modern form dearbhfhine), lit. "close clan". The cland (modern form clann) referred to the children of the nuclear family.
Succession to the chieftainship or kingship was through tanistry. When a man became chieftain or king, a relative was elected to be his deputy or 'tanist' (Irish: tánaiste, plural tanaistí). When the chieftain or king died, his tanist would automatically succeed him. The tanist had to share the same great-grandfather as his predecessor (i.e. was of the same derbfhine) and he was elected by freemen who also shared the same great-grandfather. Tanistry meant that the kingship usually went to whichever relative was deemed to be the most fitting. Sometimes there would be more than one tanist at a time and they would succeed each other in order of seniority. Some Anglo-Norman lordships later adopted tanistry from the Irish.
Gaelic Ireland was divided into a hierarchy of territories ruled by a hierarchy of kings or chiefs. The smallest territory was the túath (plural: túatha), which was typically the territory of a single kin-group. It was ruled by a rí túaithe (king of a túath) or toísech túaithe (leader of a túath). Several túatha formed a mór túath (overkingdom), which was ruled by a rí mór túath or ruirí (overking). Several mór túatha formed a cóiced (province), which was ruled by a rí cóicid or rí ruirech (provincial king). In the early Middle Ages the túatha was the main political unit, but over time they were subsumed into bigger conglomerate territories and became much less important politically.
Gaelic society was structured hierarchically, with those further up the hierarchy generally having more privileges, wealth and power than those further down.
- The top social layer was the sóernemed, which included kings, tanists, chieftains, highly skilled poets (fili), clerics, and their immediate families. The roles of a fili included reciting traditional lore, eulogizing the king and satirizing injustices within the kingdom. Before the Christianization of Ireland, this group also included the druids (druí) and vates (fáith). The druids combined the roles of priest, judge, scholar, poet, physician, and religious teacher, while the vates were oracles.
- Below that were the dóernemed, which included professionals such as jurists (brithem), physicians, skilled craftsmen, skilled musicians, scholars, and so on. A master in a particular profession was known as an ollam (modern spelling: ollamh). The various professions—including law, poetry, medicine, history and genealogy—were associated with particular families and the positions became hereditary. Since the poets, jurists and doctors depended on the patronage of the ruling families, the end of the Gaelic order brought their demise.
- Below that were freemen who owned land and cattle (for example the bóaire).
- Below that were freemen who did not own land or cattle, or who owned very little.
- Below that were the unfree, which included serfs and slaves. Slaves were typically criminals (debt slaves) or prisoners of war. Slavery and serfdom were inherited, though slavery in Ireland had died out by 1200.
- The warrior bands (fianna) generally lived apart from society. A fian was typically composed of young men who had not yet come into their inheritance of land. A member of a fian was called a fénnid and the leader of a fian was a rígfénnid. Geoffrey Keating, in his 17th-century History of Ireland, says that during the winter the fianna were quartered and fed by the nobility, during which time they kept order on its behalf, but during the summer, from Bealtaine to Samhain, they were obliged to live by hunting, both for food and for hides to sell.
Although distinct, these ranks were not utterly exclusive castes like those of India. It was possible to rise or sink from one rank to another. Rising could be achieved in a number of ways, such as by gaining wealth, acquiring skill in some department, qualifying for a learned profession, showing conspicuous valour, or performing some service to the community. An example of the latter is a person choosing to become a briugu (hospitaller). A briugu had to keep his house open to any guests, which included feeding them no matter how large the group. For the briugu to fulfil these duties he was allowed more land and privileges, but he could lose these if he ever refused guests.
A freeman could further himself by becoming the client of one or more lords. The lord made his client a grant of property (i.e. livestock or land) and, in return, the client owed his lord yearly payments of food and fixed amounts of work. The clientship agreement could last until the lord's death. If the client died, his heirs would carry on the agreement. This system of clientship enabled social mobility as a client could increase his wealth until he could afford clients of his own, thus becoming a lord. Clientship was also practised between nobles, which established hierarchies of homage and political support.
Gaelic law was originally passed down orally, but was written down in Old Irish during the period 600–900 AD. This collection of oral and written laws is known as the Fénechas or, in English, as the Brehon Law(s). The brehons (Old Irish: brithem, plural brithemain) were the jurists in Gaelic Ireland. Becoming a brehon took many years of training and the office was, or became, largely hereditary. Most legal cases were contested privately between opposing parties, with the brehons acting as arbitrators.
Offences against people and property were primarily settled by the offender paying compensation to the victims. Although any such offence required compensation, the law made a distinction between intentional and unintentional harm, and between murder and manslaughter. If an offender did not pay outright, his property was seized until he did so. Should the offender be unable to pay, his family would be responsible for doing so. Should the family be unable or unwilling to pay, responsibility would broaden to the wider kin-group. Hence, it has been argued that "the people were their own police". Acts of violence were generally settled by payment of compensation known as an éraic fine; the Gaelic equivalent of the Welsh galanas and the Germanic weregild. If a free person was murdered, the éraic was equal to 21 cows, regardless of the victim's rank in society. Each member of the murder victim's agnatic kin-group received a payment based on their closeness to the victim, their status, and so forth. There were separate payments for the kin-group of the victim's mother, and for the victim's foster-kin.
Execution seems to have been rare and was carried out only as a last resort. If a murderer was unable or unwilling to pay éraic and was handed over to his victim's family, they might kill him if they wished, should nobody intervene by paying the éraic. Habitual or particularly serious offenders might be expelled from the kin-group and its territory. Such a person became an outlaw (with no protection from the law), and anyone who sheltered him became liable for his crimes. If he still haunted the territory and continued his crimes there, he could be proclaimed in a public assembly, after which anyone might lawfully kill him.
Each person had an honour-price, which varied depending on their rank in society. This honour-price was to be paid to them if their honour was violated by certain offences. Those of higher rank had a higher honour-price. However, an offence against the property of a poor man (who could ill afford it) was punished more harshly than a similar offence against a wealthy man. The clergy were punished more harshly than the laity: when a layman had paid his fine he would go through a probationary period and then regain his standing, but a clergyman could never regain his standing.
Most of the laws are pre-Christian in origin. These secular laws existed in parallel, and sometimes in conflict, with Church law. Although brehons usually dealt with legal cases, kings would have been able to deliver judgments also, but it is unclear how much they would have had to rely on brehons. Kings had their own brehons to deal with cases involving the king's own rights and to give him legal advice. Unlike other kingdoms in Europe, Gaelic kings—by their own authority—could not enact new laws as they wished and could not be "above the law". They could, however, enact temporary emergency laws. It was mainly through these emergency powers that the Church attempted to change Gaelic law.
The law texts take great care to define social status, the rights and duties that went with that status, and the relationships between people. For example, a chieftain had to take responsibility for members of his fine, acting as a surety for some of their deeds and making sure debts were paid. He was also responsible for unmarried women after the death of their fathers.
Marriage, women and children
Ancient Irish culture was patriarchal. The Brehon law excepted women from the ordinary course of the law so that, in general, every woman had to have a male guardian. However, women had some legal capacity. By the 8th century, the preferred form of marriage was one between social equals, under which a woman was technically legally dependent on her husband and had half his honor price, but could exercise considerable authority in regard to the transfer of property. Such women were called "women of joint dominion". Thus historian Patrick Weston Joyce could write that, relative to other European countries of the time, free women in Gaelic Ireland "held a good position" and their social and property rights were "in most respects, quite on a level with men".
Gaelic Irish society was also patrilineal, with land being primarily owned by men and inherited by the sons. Only when a man had no sons would his land pass to his daughters, and then only for their lifetimes. Upon their deaths, the land was redistributed among their father's male relations. Under Brehon law, rather than inheriting land, daughters had assigned to them a certain number of their father's cattle as their marriage-portion. It seems that, throughout the Middle Ages, the Gaelic Irish kept many of their marriage laws and traditions sundered from those of the Church. Under Gaelic law, married women could hold property independent of their husbands, a link was maintained between married women and their own families, couples could easily divorce or separate, and men could have concubines (which could be lawfully bought). These laws differed from most of contemporary Europe and from Church law.
The lawful age of marriage was fifteen for girls and eighteen for boys, the respective ages at which fosterage ended. Upon marriage, the families of the bride and bridegroom were expected to contribute to the match. It was customary for the bridegroom and his family to pay a coibche (modern spelling: coibhche), and the bride was allowed a share of it. If the marriage ended owing to a fault of the husband, the coibche was kept by the wife and her family, but if the fault lay with the wife, the coibche was to be returned. It was also customary for the bride to receive a spréid (modern spelling: spréidh) from her family (or foster family) upon marriage; this was to be returned if the marriage ended through divorce or the death of the husband. Later, the spréid seems to have been converted into a dowry. Women could seek divorce or separation as easily as men could and, when it was obtained on her petition, a wife kept all the property she had brought to her husband during the marriage. Trial marriages seem to have been popular among the rich and powerful, and thus it has been argued that cohabitation before marriage must have been acceptable. It also seems that the wife of a chieftain was entitled to some share of the chief's authority over his territory; this led to some Gaelic Irish wives wielding a great deal of political power.
Before the Norman invasion, it was common for priests and monks to have wives. This remained mostly unchanged after the Norman invasion, despite protests from bishops and archbishops. The authorities classed such women as priests' concubines and there is evidence that a formal contract of concubinage existed between priests and their women. However, unlike other concubines, they seem to have been treated just as wives were.
In Gaelic Ireland a kind of fosterage was common, whereby (for a certain length of time) children would be left in the care of other fine members, namely their mother's family, preferably her brother. This may have been used to strengthen family ties or political bonds. Foster parents were obliged to teach their foster children or to have them taught. Foster parents who had properly done their duties were entitled to be supported by their foster children in old age (if they were in need and had no children of their own). As with divorce, Gaelic law again differed from most of Europe and from Church law in giving legal standing to both "legitimate" and "illegitimate" children.
Settlements and architecture
For most of the Gaelic period, dwellings and farm buildings were circular with conical thatched roofs (see roundhouse). Square and rectangle-shaped buildings gradually became more common, and by the 14th or 15th century they had replaced round buildings completely. In some areas, buildings were made mostly of stone. In others, they were built of timber, wattle and daub, or a mix of materials. Most ancient and early medieval stone buildings were of dry stone construction. Some buildings would have had glass windows. It was common for women to have their own 'apartment' called a grianan (anglicized "greenan") in the sunniest part of the homestead.
The dwellings of freemen and their families were often surrounded by a circular rampart called a ringfort. There are two main kinds of ringfort: the ráth, an earthen ringfort averaging 30 m in diameter with a dry outside ditch, and the cathair or caiseal, a stone ringfort. The ringfort would typically have enclosed the family home, small farm buildings or workshops, and animal pens. Most date to the period 500–1000 CE, and there is evidence of large-scale ringfort desertion at the end of the first millennium. The remains of between 30,000 and 40,000 survived into the 19th century to be mapped by Ordnance Survey Ireland. Another kind of native dwelling was the crannóg, a roundhouse built on an artificial island in a lake.
There were very few nucleated settlements in Gaelic areas. However, after the 5th century some monasteries became the heart of small "monastic towns". By the 10th century the Norse-Gaelic ports of Dublin, Wexford, Cork and Limerick had grown into substantial settlements. It was at this time, perhaps as a response to Viking raids, that many of the Irish round towers were built.
In the fifty years before the Norman invasion (1169), the term "castle" (Old Irish: caistél/caislén) appears in Gaelic writings, although there are no surviving examples of pre-Norman castles. After the invasion, the Normans built motte-and-bailey castles in the areas they occupied, some of which were converted from ringforts. By 1300 "some mottes, especially in frontier areas, had almost certainly been built by the Gaelic Irish in imitation". The Normans gradually replaced wooden motte-and-baileys with stone castles and tower houses. Tower houses are free-standing multi-storey stone towers usually surrounded by a wall (see bawn) and ancillary buildings. Gaelic families had begun to build their own tower houses by the 15th century. As many as 7000 may have been built, but they were rare in areas with little Norman settlement or contact. They are concentrated in counties Limerick and Clare but are lacking in Ulster, except the area around Strangford Lough.
In Gaelic law, a 'sanctuary' called a maighin digona surrounded each person's dwelling. The maighin digona's size varied according to the owner's rank. In the case of a bóaire it stretched as far as he, while sitting at his house, could cast a cnairsech (variously described as a spear or sledgehammer). The owner of a maighin digona could offer its protection to someone fleeing from pursuers, who would then have to bring that person to justice by lawful means.
Gaelic Ireland was involved in trade with Britain and mainland Europe from ancient times, and this trade increased over the centuries. Tacitus, for example, wrote in the 1st century that most of Ireland's harbours were known to the Romans through commerce. There are many passages in early Irish literature that mention luxury items imported from foreign lands, and the fair of Carman in Leinster included a market of foreign traders. In the Middle Ages the main exports were textiles such as wool and linen while the main imports were luxury items.
Money was seldom used in Gaelic society; instead, goods and services were usually exchanged for other goods and services. The economy was mainly a pastoral one, based on livestock (cows, sheep, pigs, goats, etc.) and their products. Cattle were "the main element in the Irish pastoral economy" and the main form of wealth, providing milk, butter, cheese, meat, fat, hides, and so forth. They were a "highly mobile form of wealth and economic resource which could be quickly and easily moved to a safer locality in time of war or trouble". The nobility owned great herds of cattle tended by herdsmen and guards. Sheep, goats and pigs were also a valuable resource but had a lesser role in Irish pastoralism.
Horticulture was practised; the main crops were oats, wheat and barley, although flax was also grown for making linen.
Transhumance was also practised, whereby people moved with their livestock to higher pastures in summer and back to lower pastures in the cooler months. The summer pasture was called the buaile (anglicized as booley) and it is noteworthy that the Irish word for boy (buachaill) originally meant a herdsman. Many moorland areas were "shared as a common summer pasturage by the people of a whole parish or barony".
Gaelic Ireland was well furnished with roads and bridges. Bridges were typically wooden and the roads were sometimes laid with wood and stone. There were five main roads leading from Tara and many named roads are mentioned in literature.
Horses were one of the main means of long-distance transport. Although horseshoes and reins were used, the Gaelic Irish did not use saddles, stirrups or spurs. Every man was trained to spring from the ground onto the back of his horse (an ech-léim or "steed-leap"), and riders urged on and guided their horses with a rod that had a hooked goad at the end.
Two-wheeled and four-wheeled chariots (singular carbad) were used in Ireland from ancient times, both in private life and in war. They were big enough for two people, made of wickerwork and wood, and often had decorated hoods. The wheels were spoked, shod all round with iron, and were from three to four and a half feet high. Chariots were generally drawn by horses or oxen, with horse-drawn chariots being more common among chiefs and military men. War chariots furnished with scythes and spikes, like those of the ancient Gauls and Britons, are mentioned in literature.
Boats used in Gaelic Ireland include canoes, currachs and sailboats. Ferryboats were used to cross wide rivers and are often mentioned in the Brehon Laws as subject to strict regulations. Sometimes they were owned by individuals and sometimes they were the common property of those living round the ferry. Large boats were used for trade with mainland Europe.
Throughout the Middle Ages, the common clothing amongst the Gaelic Irish consisted of a brat (a woollen cloak) worn over a léine (a loose-fitting, long-sleeved tunic made of wool or linen). For men the léine went down to the thighs or knees, and for women it was longer. Men sometimes wore tight-fitting truis on the legs, but otherwise went bare-legged. The brat was usually fastened with a crios (belt) and dealg (brooch), with men usually wearing the dealg at their shoulders and women at their chests. The ionar (a short, tight-fitting jacket) became popular later on. In Topographia Hibernica, written during the 1180s, Gerald de Barri wrote that the Irish commonly wore hoods at that time (perhaps forming part of the brat), while Edmund Spenser wrote in the 1580s that the brat was (in general) their main item of clothing. However, it is uncertain whether medieval Irish clothing fashions were influenced by other cultures the Irish came into contact with, such as the Angles, Norse or Romans. The discovery of the bog body at Gallagh indicates that during the Iron Age the wearing of animal skins was common. According to Gerald de Barri, most of the Irish he saw wore clothes made of black wool, apparently because most of the sheep in Ireland were black at that time. The number of colours worn came to betoken the rank or wealth of the wearer; the wealthy often wore cloth of many colours, while the poor wore cloth of only one colour.
Women invariably grew their hair long and, as in other European cultures, this custom was also common among the men. It is said that the Gaelic Irish took great pride in their long hair; for example, a person could be forced to pay the heavy fine of two cows for shaving a man's head against his will. For women, very long hair was seen as a mark of beauty. Sometimes, both men and women would braid their hair and fasten hollow golden balls to the braids. Another style that was popular among some medieval Gaelic men was the glib (short all over except for a long, thick lock of hair towards the front of the head). A band or ribbon around the forehead was the typical way of holding one's hair in place. For the wealthy, this band was often a thin and flexible band of burnished gold, silver or findruine. When the Anglo-Normans and the English colonized Ireland, hair length came to signify one's allegiance. Irishmen who cut their hair short were deemed to be forsaking their Irish heritage. Likewise, English colonists who grew their hair long at the back were deemed to be giving in to the Irish way of life.
Gaelic men typically let their facial hair grow into a beard and mustache, and it was often seen as dishonourable for a Gaelic man to have no facial hair. Beard styles varied – the long forked beard and the rectangular Mesopotamian-style beard were fashionable at times.
Warfare was common in Gaelic Ireland, as territories fought for supremacy against each other and (later) against the Anglo-Normans. Champion warfare is a common theme in Irish mythology. In the Middle Ages all able-bodied men, apart from the learned and the clergy, were eligible for military service on behalf of the king or chief. Throughout the Middle Ages and for some time after, outsiders often wrote that the style of Irish warfare differed greatly from what they deemed to be the norm in Western Europe. The Gaelic Irish preferred hit-and-run raids (the crech), which involved catching the enemy unaware. If this worked they would then seize any valuables (mainly livestock) and potentially valuable hostages, burn the crops, and escape. The cattle raid was often called a Táin Bó in Gaelic literature. Although hit-and-run raiding was the preferred tactic in medieval times, there were also pitched battles. From at least the 11th century, kings maintained small permanent fighting forces known as "troops of the household", who were often given houses and land on the king's mensal land. These were well-equipped professional soldiers made up of infantry and cavalry. By the reign of Brian Boru, Irish kings were taking large armies on campaign over long distances and using naval forces in tandem with land forces.
A typical medieval Irish army included light infantry, heavy infantry and cavalry. The bulk of the army was made up of light infantry called ceithern (anglicized 'kern'). The ceithern wandered Ireland offering their services for hire and usually wielded swords, skenes (a kind of long knife), short spears, bows and shields. The cavalry was usually made up of a king or chieftain and his close relatives. They usually rode without saddles but wore armour and iron helmets and wielded swords, skenes and long spears or lances. One kind of Irish cavalry was the hobelar. After the Norman invasion there emerged a kind of heavy infantry called gallóglaigh (anglicized 'gallo[w]glass'). They were originally Scottish mercenaries who appeared in the 13th century, but by the 15th century most large túatha had their own hereditary force of Irish gallóglaigh. Some Anglo-Norman lordships also began using gallóglaigh in imitation of the Irish. They usually wore mail and iron helmets and wielded sparth axes, claymores, and sometimes spears or lances. The gallóglaigh furnished the retreating plunderers with a "moving line of defence from which the horsemen could make short, sharp charges, and behind which they could retreat when pursued". As their armour made them less nimble, they were sometimes planted at strategic spots along the line of retreat. The kern, horsemen and gallóglaigh had lightly-armed servants to carry their weapons into battle.
Warriors were sometimes rallied into battle by blowing horns and warpipes. According to Gerald de Barri (in the 12th century), they did not wear armour, as they deemed it burdensome to wear and "brave and honourable" to fight without it. Instead, most ordinary soldiers fought semi-naked and carried only their weapons and a small round shield—Spenser wrote that these shields were covered with leather and painted in bright colours. Kings and chiefs sometimes went into battle wearing helmets adorned with eagle feathers. For ordinary soldiers, their thick hair often served as a helmet, but they sometimes wore simple helmets made from animal hides.
Artwork from Ireland's Gaelic period is found on pottery, jewellery, weapons, drinkware, tableware, stone carvings and illuminated manuscripts. Like other kinds of Celtic art, Irish art from about 300 BCE is part of a wider style that developed in west-central Europe. By about 600 CE, after the Christianization of Ireland had begun, a style melding Irish, Mediterranean and Germanic Anglo-Saxon elements emerged, and was spread to Britain and mainland Europe by the Hiberno-Scottish mission. This is known as Insular art or Hiberno-Saxon art, which continued in some form in Ireland until the 12th century, although the Viking invasions ended its "Golden Age". Most surviving works of Insular art were either made by monks or made for monasteries, with the exception of brooches, which were likely made and used by both clergy and laity. Examples of Insular art from Ireland include the Book of Kells, Muiredach's High Cross, the Tara Brooch, the Ardagh Hoard, the Derrynaflan Chalice, and the late Cross of Cong, which also uses Viking styles.
Music and dance
Although Gerald de Barri had a negative view of the Irish, in Topographia Hibernica (1188) he conceded that they were more skilled at playing music than any other nation he had seen. He claimed that the two main instruments were the "harp" and "tabor" (see also bodhrán), that their music was fast and lively, and that their songs always began and ended with B-flat. In A History of Irish Music (1905), W. H. Grattan Flood wrote that there were at least ten instruments in general use by the Gaelic Irish. These were the cruit (a small harp) and clairseach (a bigger harp with typically 30 strings), the timpan (a small string instrument played with a bow or plectrum), the feadan (a fife), the buinne (an oboe or flute), the guthbuinne (a bassoon-type horn), the bennbuabhal and corn (hornpipes), the cuislenna (bagpipes – see Great Irish Warpipes), the stoc and sturgan (clarions or trumpets), and the cnamha (castanets). He also mentions the fiddle as being used in the 8th century.
As mentioned before, Gaelic Ireland was split into many clann territories and kingdoms called túath (plural: túatha). Although there was no central 'government' or 'parliament', a number of local, regional and national gatherings were held. These combined features of assemblies and fairs.
In Ireland the highest of these was the feis at Teamhair na Rí (Tara), which was held every third Samhain. This was a gathering of the leading men of the whole island – kings, lords, chieftains, druids, judges etc. Below this was the óenach (modern spelling: aonach). These were regional or provincial gatherings open to everyone. Examples include that held at Tailtin each Lughnasadh, and that held at Uisneach each Bealtaine. The main purpose of these gatherings was to promulgate and reaffirm the laws – they were read aloud in public that they might not be forgotten, and any changes in them carefully explained to those present.
Each túath or clann had two assemblies of its own. These were the cuirmtig, which was open to all clann members, and the dal (a term later adopted for the Irish parliament – see Dáil Éireann), which was open only to clann chiefs. Each clann had a further assembly called a tocomra, in which the clann chief (toísech, modern taoiseach) and his deputy/successor (tanaiste) were elected.
List of finte, túatha and kings
400 to 800
800 to 1169
Ireland became Christianized between the 5th and 7th centuries. Pope Adrian IV, the only English pope, had already issued a Papal Bull in 1155 giving Henry II of England authority to invade Ireland as a means of curbing Irish refusal to recognize Roman law. Importantly, for later English monarchs, the Bull, Laudabiliter, maintained papal suzerainty over the island:
"There is indeed no doubt, as thy Highness doth also acknowledge, that Ireland and all other islands which Christ the Sun of Righteousness has illumined, and which have received the doctrines of the Christian faith, belong to the jurisdiction of St. Peter and of the holy Roman Church."
In 1166, after losing the protection of High King Muirchertach Mac Lochlainn, King Diarmait Mac Murchada of Leinster was forcibly exiled by a confederation of Irish forces under King Ruaidri mac Tairrdelbach Ua Conchobair. Fleeing first to Bristol and then to Normandy, Diarmait obtained permission from Henry II of England to use his subjects to regain his kingdom. By the following year, he had obtained these services and in 1169 the main body of Norman, Welsh and Flemish forces landed in Ireland and quickly retook Leinster and the cities of Waterford and Dublin on behalf of Diarmait. The leader of the Norman force, Richard de Clare, 2nd Earl of Pembroke, more commonly known as Strongbow, married Diarmait's daughter, Aoife, and was named tánaiste to the Kingdom of Leinster. This caused consternation to Henry II, who feared the establishment of a rival Norman state in Ireland. Accordingly, he resolved to visit Leinster to establish his authority.
Henry landed in 1171, proclaiming Waterford and Dublin as Royal Cities. Adrian's successor, Pope Alexander III, ratified the grant of Ireland to Henry in 1172. The 1175 Treaty of Windsor between Henry and Ruaidhrí maintained Ruaidhrí as High King of Ireland but codified Henry's control of Leinster, Meath and Waterford. However, with Diarmuid and Strongbow dead, Henry back in England, and Ruaidhrí unable to curb his vassals, the high kingship rapidly lost control of the country. Henry, in 1185, awarded his younger son, John, the title Dominus Hiberniae, "Lord of Ireland". This kept the newly created title and the Kingdom of England personally and legally separate. However, when John unexpectedly succeeded his brother as King of England in 1199, the Lordship of Ireland fell back into personal union with the Kingdom of England.
By 1261, the weakening of the Anglo-Norman Lordship had become manifest following a string of military defeats. In the chaotic situation, local Irish lords won back large amounts of land. The invasion by Edward Bruce in 1315–18 at a time of famine weakened the Norman economy. The Black Death arrived in Ireland in 1348. Because most of the English and Norman inhabitants of Ireland lived in towns and villages, the plague hit them far harder than it did the native Irish, who lived in more dispersed rural settlements. After it had passed, Gaelic Irish language and customs came to dominate the country again. The English-controlled area shrank back to the Pale, a fortified area around Dublin. Outside the Pale, the Hiberno-Norman lords intermarried with Gaelic noble families, adopted the Irish language and customs and sided with the Gaelic Irish in political and military conflicts against the Lordship. They became known as the Old English, and in the words of a contemporary English commentator, were "more Irish than the Irish themselves."
The authorities in the Pale worried about the Gaelicisation of Norman Ireland, and passed the Statutes of Kilkenny in 1366 banning those of English descent from speaking the Irish language, wearing Irish clothes or inter-marrying with the Irish. The government in Dublin had little real authority. By the end of the 15th century, central English authority in Ireland had all but disappeared. England's attentions were diverted by the Hundred Years' War (1337–1453) and then by the Wars of the Roses (1450–85). Around the country, local Gaelic and Gaelicised lords expanded their powers at the expense of the English government in Dublin.
Gaelic kingdoms during the period
Following the failed attempt by the Scottish King Edward Bruce (see Irish Bruce Wars 1315–1318) to drive the Normans out of Ireland, there emerged a number of important Gaelic kingdoms and Gaelic-controlled lordships.
- Connacht. The Ó Conchobhair dynasty, despite their setback during the Bruce wars, had regrouped and ensured that the title King of Connacht was not yet an empty one. Their stronghold was in their homeland of Sil Muirdeag, from where they dominated much of northern and northeastern Connacht. However, after the death of Ruaidri mac Tairdelbach Ua Conchobair in 1384, the dynasty split into two factions, Ó Conchobhair Don and Ó Conchobhair Ruadh. By the late 15th century, internecine warfare between the two branches had weakened them to the point where they themselves became vassals of more powerful lords such as Ó Domhnaill of Tír Chonaill and the Clan Burke of Clanricarde. The Mac Diarmata Kings of Moylurg retained their status and kingdom during this era, up to the death of Tadhg Mac Diarmata in 1585 (last de facto King of Moylurg). Their cousins, the Mac Donnacha of Tír Ailella, found their fortunes bound to the Ó Conchobhair Ruadh. The kingdom of Uí Maine had lost much of its southern and western lands to the Clanricardes, but managed to flourish until repeated raids by Ó Domhnaill in the early 16th century weakened it. Other territories such as Ó Flaithbeheraigh of Iar Connacht, Ó Seachnasaigh of Aidhne, O'Dowd of Tireagh, O'Hara, Ó Gadhra and Ó Maddan, either survived in isolation or were vassals for greater men.
- Ulster: The Ulaid proper were in a sorry state all during this era, being squeezed between the emergent Ó Neill of Tír Eógain in the west, the MacDonnells, Clann Aodha Buidhe, and the Anglo-Normans from the east. Only Mag Aonghusa managed to retain a portion of their former kingdom with expansion into Iveagh. The two great success stories of this era were Ó Domhnaill of Tír Chonaill and Ó Neill of Tír Eógain. Ó Domhnaill was able to dominate much of northern Connacht to the detriment of its native lords, both Old English and Gaelic, though it took time to suborn the likes of Ó Conchobhair Sligigh and Ó Raghallaigh of Iar Breifne. Expansion southwards brought the hegemony of Tír Eógain, and by extension Ó Neill influence, well into the border lordships of Louth and Meath. Mag Uidir of Fear Manach would slightly later be able to build his lordship up to that of third most powerful in the province, at the expense of the Ó Raghallaigh of Iar Breifne and the MacMahons of Airgíalla.
- Leinster: Likewise, despite the adverse (and unforeseen) effects of Diarmait Mac Murchada's efforts to regain his kingdom, most of his twenty successors up to 1632 regained much of the ground that had been lost to the Normans and exacted yearly tribute from the towns. His most dynamic successor was the celebrated Art mac Art MacMurrough-Kavanagh. The Ó Broin and Ó Tuathail largely contented themselves with raids on Dublin (which, remarkably, continued into the 18th century). The Ó Mordha of Laois and the Ó Conchobhair Falaighe of Offaly – the latter's capital was Daingean – were two self-contained territories that had earned the right to be called kingdoms owing to their near-invincibility against successive generations of Anglo-Irish. The great losers were the Ó Melaghlins of Meath: their kingdom collapsed despite attempts by Cormac mac Art O Melaghlain to restore it, and the royal family was reduced to vassal status, confined to the east shores of the River Shannon. The kingdom was substantially incorporated into the Lordship of Meath, which was granted to Hugh de Lacy in 1172.
- Desmond: See Kingdom of Desmond, Barony of Carbery, Battle of Callann
- Thomond: Despite huge setbacks, the descendants of Brian Bóruma had, by surviving the Second Battle of Athenry and winning the decisive battles of Corcomroe and Dysert O'Dea, been able to suborn their vassals and eradicate the Normans from their home kingdom of Thomond. Their spheres of interest often met with conflict with Anglo-Normans such as the Earls of Desmond and Earls of Ormond, yet they ruled right up to the end of Gaelic Ireland, and beyond, by expedient of becoming the O'Brien Earls of Thomond.
From 1536, Henry VIII of England decided to conquer Ireland and bring it under English control. The FitzGerald dynasty of Kildare, who had become the effective rulers of the Lordship of Ireland (the Pale) in the 15th century, had become unreliable allies, and Henry resolved to bring Ireland under English government control so that the island would not become a base for future rebellions or foreign invasions of England. To involve the Gaelic nobility and allow them to retain their lands under English law, the policy of surrender and regrant was applied.
In 1541, Henry upgraded Ireland from a lordship to a full kingdom, partly in response to changing relationships with the papacy, which still had suzerainty over Ireland, following Henry's break with the church. Henry was proclaimed King of Ireland at a meeting of the Irish Parliament that year. This was the first meeting of the Irish Parliament to be attended by the Gaelic Irish princes as well as the Hiberno-Norman aristocracy.
With the technical institutions of government in place, the next step was to extend the control of the Kingdom of Ireland over all of its claimed territory. This took nearly a century, with various English administrations in the process either negotiating or fighting with the independent Irish and Old English lords. The conquest was completed during the reigns of Elizabeth and James I, after several bloody conflicts.
The flight into exile in 1607 of Hugh O'Neill, 2nd Earl of Tyrone, and Rory O'Donnell, 1st Earl of Tyrconnell, following their defeat at the Battle of Kinsale in 1601 and the suppression of their rebellion in Ulster in 1603, is seen as the watershed of Gaelic Ireland. It marked the destruction of Ireland's ancient Gaelic nobility following the Tudor conquest and cleared the way for the Plantation of Ulster. After this point, the English authorities in Dublin established real control over Ireland for the first time, bringing a centralised government to the entire island, and successfully disarmed the Gaelic lordships.
England and Scotland merged politically in 1707, after the crowns of both countries had been united in 1603, but the crown of Ireland did not merge with the Union until 1800. Part of the attraction of the Union for many Irish Catholics was the promise of Catholic Emancipation, which would allow Roman Catholic MPs, who had not been permitted in the Irish Parliament. This was, however, blocked by King George III, who argued that emancipating Roman Catholics would breach his Coronation Oath, and it was not realised until 1829.
The Gaelic roots that defined early Irish history persisted despite this Anglicisation of Irish culture and politics, as Christianity became the prominent expression of Irish identity in Ireland. In the period leading up to the Great Famine of the 1840s, many priests believed that parishioner spirituality was paramount, resulting in a localised blending of Gaelic and Catholic traditions.
- Whilst Ireland had a single, strong, unifying culture, "patchwork" is a very common way to describe the political arrangement of Gaelic Ireland. For example: Ó Cróinín, Dáibhí, ed. (1995). Early Medieval Ireland, 400–1200. Longman History of Ireland 1. London: Longman. p. 110. ISBN 0-582-01566-9.
By the time of our earliest documentary evidence (law texts, genealogies, and annals), the vision of Ireland as a unitary state, ruled by a 'high-king', had apparently disappeared, to be replaced by a patchwork of local tribal kingdoms, each confident in its own distinctiveness.
- Simms, Katharine (1978). "Guesting and Feasting in Gaelic Ireland". Journal of the Royal Society of Antiquaries of Ireland. 108: 67–100. JSTOR 25508737.
- Jaski, Bart (2005). "Kings and kingship". In Duffy, Seán. Medieval Ireland: An Encyclopedia. Routledge. pp. 417–422. ISBN 978-1-135-94824-5.
- Green, Miranda (1992). Animals in Celtic Life and Myth. London: Routledge. p. 196. ISBN 0-415-05030-8.
- Cunliffe, Barry W. (1997). The Ancient Celts. Oxford University Press. pp. 208–210. ISBN 978-0-19-815010-7.
- Dunning, Dr Ray. The Encyclopedia of World Mythology. p. 91.
- Koch, John T. (2006). Celtic Culture: a Historical Encyclopedia. ABC-CLIO. p. 332. ISBN 978-1-85109-440-0.
- Nicholls, Kenneth W. (2003). Gaelic and Gaelicised Ireland in the Middle Ages (2nd ed.). Dublin: Lilliput Press.
- Nicholls, Kenneth W. (2008). "Chapter XIV: Gaelic society and economy". In Cosgrove, Art. A New History of Ireland, Volume II: Medieval Ireland 1169–1534. Oxford University Press. pp. 397–438. doi:10.1093/acprof:oso/9780199539703.003.0015. ISBN 978-0-19-953970-3.
- Ginnell, Laurence (1894). "Chapter IV: Legislative Assemblies". The Brehon Laws: A Legal Handbook. Library Ireland.
- Simms, Katharine (2000). "The King's Administration". From Kings to Warlords: The Changing Political Structure of Gaelic Ireland in the Later Middle Ages. Boydell & Brewer. p. 79. ISBN 978-0-85115-784-9.
- Duffy, Seán, ed. (2005). Medieval Ireland: An Encyclopedia. Routledge. p. 11. ISBN 978-1-135-94824-5.
- Ginnell, Laurence (1894). "Chapter V: Classification of Society". The Brehon Laws: A Legal Handbook. Library Ireland.
- Hutton, Ronald (2007). The Druids. Hambledon Continuum. p. 2. ISBN 978-1-85285-533-8.
- Jefferies, Henry A. "Culture and Religion in Tudor Ireland, 1494–1558 (replacement source)". University College Cork. Retrieved 23 June 2008.
- Duffy, Seán, ed. (2005). Medieval Ireland: An Encyclopedia. Routledge. p. 713. ISBN 978-1-135-94824-5.
- Ó Cróinín, Dáibhí (1995). Early Medieval Ireland, 400-1200. Routledge. p. 88. ISBN 978-1-317-90176-1.
- Quin, E. G. (1983). Dictionary of the Irish Language: Compact Edition. Royal Irish Academy. pp. 299, 507. ISBN 978-0-901714-29-9.
- Keating, Geoffrey (2002). "The History of Ireland". University College, Cork. Section 45.
- Kelly, Fergus. A Guide to Early Irish Law. pp. 36–7.
- Duffy, Seán, ed. (2005). Medieval Ireland: An Encyclopedia. Routledge. pp. 72–74. ISBN 978-1-135-94824-5.
- Ginnell, Laurence (1894). "Chapter I: Ancient Law". The Brehon Laws: A Legal Handbook. Library Ireland.
- Ginnell, Laurence (1894). "Chapter VII: Criminal Law". The Brehon Laws: A Legal Handbook. Library Ireland.
- Kelly, Fergus. A Guide to Early Irish Law. pp. 23–5, 52.
- Kelly, Fergus. A Guide to Early Irish Law. pp. 21–22.
- Kelly, Fergus. A Guide to Early Irish Law. pp. 13–14.
- Encyclopædia Britannica (10th ed.), 1902, p. 639
- Encyclopædia Britannica (6th ed.), 1823, p. 588
- Joyce, Patrick Weston (1906). "Chapter XV: The Family, part 2". A Smaller Social History of Ancient Ireland. Library Ireland.
- Kenny, Gillian (2006). "Anglo-Irish and Gaelic marriage laws and traditions in late medieval Ireland" (PDF). Journal of Medieval History. Elsevier. 32: 27–42. doi:10.1016/j.jmedhist.2005.12.004.
- Connolly, Sean J (2007). "Chapter 2: Late Medieval Ireland: The Irish". Contested island: Ireland 1460–1630. Oxford University Press. pp. 20–24.
- Ginnell, Laurence (1894). "Chapter VIII: Leges Minores". The Brehon Laws: A Legal Handbook. Library Ireland.
- Joyce, Patrick Weston (1906). "Chapter XVI: The House, Construction, Shape, and Size". A Smaller Social History of Ancient Ireland. Library Ireland.
- Joyce, Patrick Weston (1906). "Chapter XVI: The House, Interior Arrangements and Sleeping Accommodation". A Smaller Social History of Ancient Ireland. Library Ireland.
- Barry, Terry (1995). Rural settlement in Ireland in the middle ages: an overview (PDF). Ruralia 1.
- O'Keeffe, Tadhg (1995). Rural settlement and cultural identity in Gaelic Ireland (PDF). Ruralia 1.
- Glasscock, Robin Edgar (2008). "Chapter 8: Land and people, c.1300". In Cosgrove, Art. A New History of Ireland, Volume II: Medieval Ireland 1169–1534. Oxford University Press. pp. 205–239. doi:10.1093/acprof:oso/9780199539703.003.0009.
- Tacitus, Agricola 24
- Joyce, Patrick Weston (1906). "Chapter XXIV: Locomotion and Commerce, Foreign Commerce". A Smaller Social History of Ancient Ireland. Library Ireland.
- Evans, E Estyn (2000). "Bally and Booley". Irish Folk Ways. Courier Dover Publications. pp. 27–38.
- Joyce, Patrick Weston. A Smaller Social History of Ancient Ireland (1906). Chapter 24 part 1. Library Ireland.
- Joyce, Patrick Weston (1906). "Chapter XXIV: Locomotion and Commerce, Horse-Riding". A Smaller Social History of Ancient Ireland. Library Ireland.
- Joyce, Patrick Weston (1906). "Chapter XXIV: Locomotion and Commerce, Chariots and Cars". A Smaller Social History of Ancient Ireland. Library Ireland.
- Joyce, Patrick Weston (1906). "Chapter XXIV: Locomotion and Commerce, Communication by Water". A Smaller Social History of Ancient Ireland. Library Ireland.
- Logan, James (1831). The Scottish Gael. Smith, Elder and Co.
- Connolly, Sean J (2007). "Prologue". Contested island: Ireland 1460–1630. Oxford University Press. p. 7.
- The Topography of Ireland by Giraldus Cambrensis (English translation)
- Joyce, Patrick Weston (1906). "Chapter XVIII: Dress and Personal Adornment, The Person and the Toilet". A Smaller Social History of Ancient Ireland. Library Ireland.
- Bartlett, Robert (1994), "Symbolic Meanings of Hair in the Middle Ages", Transactions of the Royal Historical Society, Sixth series, 4: 43–60, doi:10.2307/3679214, ISSN 0080-4401, JSTOR 3679214
- Ó Cléirigh, Cormac (1997). Irish frontier warfare: a fifteenth-century case study (PDF).
- Duffy, Seán, ed. (2005). Medieval Ireland: An Encyclopedia. Routledge. pp. 54–55. ISBN 978-1-135-94824-5.
- Flanagan, Marie Therese (1996). "Warfare in Twelfth-Century Ireland". A Military History of Ireland. Cambridge University Press. pp. 52–75.
- Flood, William H. Grattan (1905). A History of Irish Music: Chapter III: Ancient Irish musical instruments.
- McCourt, Malachy (2004). Malachy McCourt's History of Ireland. Running Press.
In the Treaty of Windsor, Rory accepted Henry II as the overlord and promised to pay annual tribute gathered from all of Ireland to him. For his part, Rory would remain King of Connaught and High King of all unconquered lands in Ireland.
- Larkin, Emmet (June 1972). "The Devotional Revolution in Ireland, 1850-75". The American Historical Review. 77 (3): 625–652. doi:10.2307/1870344. JSTOR 1870344.
- Kelly, Fergus (1988). A Guide to Early Irish Law. Early Irish Law Series 3. Dublin: DIAS. ISBN 0901282952.
- Duffy, Patrick J.; David Edwards; Elizabeth FitzPatrick, eds. (2001). Gaelic Ireland, c. 1250—c.1650: land, landlordship and settlement. Dublin: Four Courts Press.
- Fitzpatrick, Elizabeth (2004). Royal inauguration in Gaelic Ireland c. 1100–1600: a cultural landscape study. Studies in Celtic History 22. Woodbridge: Boydell.
- Mooney, Canice (1969). The Church in Gaelic Ireland, thirteenth to fifteenth centuries. A History of Irish Catholicism 2/5. Dublin: Gill and Macmillan.
- Nicholls, Kenneth W. (2003) . Gaelic and Gaelicised Ireland in the Middle Ages (2nd ed.). Dublin: Lilliput Press.
- Simms, Katherine (1987). From kings to warlords: the changing political structure of Gaelic Ireland in the later Middle Ages. Studies in Celtic History 7. Woodbridge: Boydell. |
Deoxyribonucleic acid (/diˌɒksiˌraɪbɵ.njuːˌkleɪ.ɨk ˈæsɪd/), or DNA, is a nucleic acid that contains the genetic instructions used in the development and functioning of all known living organisms (with the exception of RNA viruses). The main role of DNA molecules is the long-term storage of information. DNA is often compared to a set of blueprints, like a recipe or a code, since it contains the instructions needed to construct other components of cells, such as proteins and RNA molecules. The DNA segments that carry this genetic information are called genes, but other DNA sequences have structural purposes, or are involved in regulating the use of this genetic information.
DNA consists of two long polymers of simple units called nucleotides, with backbones made of sugars and phosphate groups joined by ester bonds. These two strands run in opposite directions to each other and are therefore anti-parallel. Attached to each sugar is one of four types of molecules called bases. It is the sequence of these four bases along the backbone that encodes information. This information is read using the genetic code, which specifies the sequence of the amino acids within proteins. The code is read by copying stretches of DNA into the related nucleic acid RNA, in a process called transcription.
Within cells, DNA is organized into long structures called chromosomes. These chromosomes are duplicated before cells divide, in a process called DNA replication. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
DNA is a long polymer made from repeating units called nucleotides. As first discovered by James D. Watson and Francis Crick, the structure of DNA of all species comprises two helical chains each coiled round the same axis, and each with a pitch of 34 Ångströms (3.4 nanometres) and a radius of 10 Ångströms (1.0 nanometres). According to another study, when measured in a particular solution, the DNA chain measured 22 to 26 Ångströms wide (2.2 to 2.6 nanometres), and one nucleotide unit measured 3.3 Å (0.33 nm) long. Although each individual repeating unit is very small, DNA polymers can be very large molecules containing millions of nucleotides. For instance, the largest human chromosome, chromosome number 1, is approximately 220 million base pairs long.
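These figures allow a back-of-the-envelope estimate of how long a chromosome would be if stretched out. A minimal illustration, using only the approximate per-nucleotide length and the chromosome 1 size quoted above:

```python
# Rough contour length of human chromosome 1, using the figures quoted above.
rise_per_bp_nm = 0.33          # approximate length of one nucleotide unit (nm)
chromosome_1_bp = 220_000_000  # approximate size of chromosome 1 (base pairs)

length_nm = chromosome_1_bp * rise_per_bp_nm
length_cm = length_nm * 1e-7   # 1 cm = 1e7 nm

print(f"Chromosome 1 stretched out: ~{length_cm:.1f} cm")  # roughly 7 cm
```

Despite being only about 2.2 to 2.6 nm wide, a single chromosome therefore carries several centimetres of double helix, which is why the packaging described later is necessary.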
In living organisms, DNA does not usually exist as a single molecule, but instead as a pair of molecules that are held tightly together. These two long strands entwine like vines, in the shape of a double helix. The nucleotide repeats contain both the segment of the backbone of the molecule, which holds the chain together, and a base, which interacts with the other DNA strand in the helix. A base linked to a sugar is called a nucleoside and a base linked to a sugar and one or more phosphate groups is called a nucleotide. If multiple nucleotides are linked together, as in DNA, this polymer is called a polynucleotide.
The backbone of the DNA strand is made from alternating phosphate and sugar residues. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These asymmetric bonds mean a strand of DNA has a direction. In a double helix the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are called the 5′ (five prime) and 3′ (three prime) ends, with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the alternative pentose sugar ribose in RNA.
The DNA double helix is stabilized by hydrogen bonds between the bases attached to the two strands. The four bases found in DNA are adenine (abbreviated A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar/phosphate to form the complete nucleotide, as shown for adenosine monophosphate.
These bases are classified into two types: adenine and guanine are fused five- and six-membered heterocyclic compounds called purines, while cytosine and thymine are six-membered rings called pyrimidines. A fifth pyrimidine base, called uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. Uracil is not usually found in DNA, occurring only as a breakdown product of cytosine. In addition to RNA and DNA, a large number of artificial nucleic acid analogues have also been created to study the properties of nucleic acids, or for use in biotechnology.
Twin helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.
Each type of base on one strand forms a bond with just one type of base on the other strand. This is called complementary base pairing. Here, purines form hydrogen bonds to pyrimidines, with A bonding only to T, and C bonding only to G. This arrangement of two nucleotides binding together across the double helix is called a base pair. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can therefore be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. Indeed, this reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms.
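The pairing rule described above (A with T, and C with G) is simple enough to express directly in code. A minimal sketch in Python; the function name and example sequence are purely illustrative:

```python
# Complementary base pairing: A pairs with T, and C pairs with G.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the antiparallel partner strand, read in its own 5' to 3' direction."""
    return "".join(PAIRING[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))  # ACGCAT
```

Reversing the string reflects the antiparallel arrangement of the two strands noted earlier.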
The two types of base pairs form different numbers of hydrogen bonds, AT forming two hydrogen bonds, and GC forming three hydrogen bonds. DNA with high GC-content is more stable than DNA with low GC-content, but contrary to popular belief, this is not due to the extra hydrogen bond of a GC base pair but rather the contribution of stacking interactions (hydrogen bonding merely provides specificity of the pairing, not stability).
As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determine the strength of the association between the two strands of DNA. Long DNA helices with a high GC content have stronger-interacting strands, while short helices with high AT content have weaker-interacting strands. In biology, parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart.
In the laboratory, the strength of this interaction can be measured by finding the temperature required to break the hydrogen bonds, their melting temperature (also called Tm value). When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules (ssDNA) have no single common shape, but some conformations are more stable than others.
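For short synthetic oligonucleotides, the dependence of melting temperature on base composition is often approximated with a simple counting rule (the so-called Wallace rule, reasonable only for sequences shorter than about 14 bases). This is a rough laboratory rule of thumb rather than anything stated above, and accurate work uses nearest-neighbour thermodynamic models:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def approx_tm(seq: str) -> int:
    """Rule-of-thumb melting temperature (deg C) for short oligos: 2*(A+T) + 4*(G+C)."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

probe = "ATGCGCAT"
print(gc_content(probe))  # 0.5
print(approx_tm(probe))   # 24  (4 AT bases and 4 GC bases)
```

Consistent with the text above, swapping AT pairs for GC pairs raises the estimated melting temperature.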
Sense and antisense
A DNA sequence is called "sense" if its sequence is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.
A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
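The "once every 10.4 base pairs" figure for relaxed DNA allows a quick estimate of how many times the two strands of a small circular DNA wind around each other, and how supercoiling changes that number. A toy illustration; the plasmid size and the degree of underwinding are invented for the example:

```python
BP_PER_TURN_RELAXED = 10.4    # relaxed B-DNA, as quoted above

plasmid_bp = 5_200            # hypothetical small circular plasmid
relaxed_turns = plasmid_bp / BP_PER_TURN_RELAXED
print(f"Double-helical turns when relaxed: {relaxed_turns:.0f}")   # 500

# Negative supercoiling, as introduced by topoisomerases, underwinds the DNA;
# the 5% figure here is purely illustrative.
underwound_turns = relaxed_turns * (1 - 0.05)
print(f"Turns after 5% underwinding: {underwound_turns:.0f}")      # 475
```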
Alternate DNA structures
DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, as well as the presence of polyamines in solution.
The first published reports of A-DNA X-ray diffraction patterns, and also of B-DNA, used analyses based on Patterson transforms that provided only a limited amount of structural information for oriented fibers of DNA. An alternative analysis was then proposed by Wilkins et al. in 1953 for the in vivo B-DNA X-ray diffraction and scattering patterns of highly hydrated DNA fibers, in terms of squares of Bessel functions. In the same journal, James D. Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double helix.
Although the "B-DNA form" is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in living cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.
Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partially dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, as well as in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.
Alternate DNA chemistry
For a number of years exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA.
A December 2010 NASA press conference revealed that the bacterium GFAJ-1, which has evolved in an arsenic-rich environment, is the first terrestrial lifeform found which may have this ability. The bacterium was found in Mono Lake, east of Yosemite National Park. GFAJ-1 is a rod-shaped extremophile bacterium in the family Halomonadaceae that, when starved of phosphorus, may be capable of incorporating the usually poisonous element arsenic in its DNA. This discovery lends weight to the long-standing idea that extraterrestrial life could have a different chemical makeup from life on Earth. The research was carried out by a team led by Felisa Wolfe-Simon, a geomicrobiologist and geobiochemist, a Postdoctoral Fellow of the NASA Astrobiology Institute with Arizona State University.
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.
These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases form a flat plate and these flat four-base units then stack on top of each other, to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.
In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.
In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced that contains adjoining regions able to hybridize with the frayed regions of the pre-existing double strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible. Branched DNA can be used in nanotechnology to construct geometric shapes; see the section on uses in technology below.
The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. For example, cytosine methylation produces 5-methylcytosine, which is important for X-chromosome inactivation. The average level of methylation varies between organisms: the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations. Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain, and the glycosylation of uracil to produce the "J-base" in kinetoplastids.
DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions and deletions from the DNA sequence, as well as chromosomal translocations.
Many mutagens fit into the space between two adjacent base pairs; this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, daunomycin, and doxorubicin. In order for an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators are often carcinogens; benzo[a]pyrene diol epoxide, acridines, aflatoxin and ethidium bromide are well-known examples. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.
DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. Alternatively, a cell may simply copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; this article focuses on the interactions between DNA and other molecules that mediate the function of the genome.
Genes and genomes
Genomic DNA is tightly and orderly packed, in a process called DNA condensation, to fit the small volume available in the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, as well as regulatory sequences such as promoters and enhancers, which control the transcription of the open reading frame.
In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much non-coding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species represent a long-standing puzzle known as the "C-value enigma". However, DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.
Some non-coding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes, but are important for the function and stability of chromosomes. Pseudogenes, copies of genes that have been disabled by mutation, are an abundant form of non-coding DNA in humans. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.
Transcription and translation
A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).
In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases read in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAA, TGA and TAG codons.
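The codon-to-amino-acid mapping can be written out as a simple lookup table. Below is a sketch using the standard genetic code in the DNA alphabet (stop codons marked '*'); the example reading frame is made up, and real translation of course happens on mRNA via the ribosome as described above:

```python
# Standard genetic code, DNA alphabet. Codons are enumerated with the first base
# varying slowest, in the order T, C, A, G; '*' marks a stop codon.
BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    b1 + b2 + b3: aa
    for aa, (b1, b2, b3) in zip(
        AMINO_ACIDS, ((x, y, z) for x in BASES for y in BASES for z in BASES)
    )
}

def translate(reading_frame: str) -> str:
    """Translate a reading frame codon by codon, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(reading_frame) - 2, 3):
        amino_acid = CODON_TABLE[reading_frame[i:i + 3]]
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

print(len(CODON_TABLE))            # 64 possible codons
print(translate("ATGTTTCAGTAA"))   # MFQ (Met-Phe-Gln, then the TAA stop codon)
```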
Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing, and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.
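Semiconservative copying of this kind can be sketched with the same complementarity rule: each parental strand templates a new partner, so each daughter duplex contains one old and one new strand. This is a deliberately simplified model (it ignores polymerases, primers, directionality and the lagging-strand machinery):

```python
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    return "".join(PAIRING[base] for base in strand)

def replicate(parent_strand: str) -> list[tuple[str, str]]:
    """Return the two daughter duplexes made from one parental double helix."""
    partner_strand = complement(parent_strand)
    # Each daughter keeps one parental strand and gains one newly synthesized strand.
    return [
        (parent_strand, complement(parent_strand)),
        (partner_strand, complement(partner_strand)),
    ]

for old_strand, new_strand in replicate("ATGCCGTA"):
    print(old_strand, "+", new_strand)   # both daughters carry identical information
```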
Interactions with proteins
All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA; of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.
Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are therefore largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.
A distinct group are the single-stranded DNA-binding proteins, which specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.
In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter; this will change the accessibility of the DNA template to the polymerase.
As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible.
Nucleases and ligases
Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GAT|ATC-3′ and makes a cut at the vertical line. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
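A cut at a fixed recognition site, like the EcoRV example above, can be mimicked with a simple string operation. A minimal sketch; the recognition site and the cut position come from the text, while the input sequence is invented, and only one strand of the blunt-ended cut is modelled:

```python
RECOGNITION_SITE = "GATATC"  # EcoRV site, written 5'-GAT|ATC-3' above
CUT_OFFSET = 3               # the cut falls between GAT and ATC

def digest(sequence: str) -> list[str]:
    """Cut one strand of a DNA sequence at every occurrence of the recognition site."""
    fragments, start = [], 0
    site = sequence.find(RECOGNITION_SITE, start)
    while site != -1:
        fragments.append(sequence[start:site + CUT_OFFSET])
        start = site + CUT_OFFSET
        site = sequence.find(RECOGNITION_SITE, start)
    fragments.append(sequence[start:])
    return fragments

print(digest("AAGATATCGGTTGATATCAA"))  # ['AAGAT', 'ATCGGTTGAT', 'ATCAA']
```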
Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.
Topoisomerases and helicases
Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.
Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly ATP, to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.
Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequences of their products are copies of existing polynucleotide chains, which are called templates. These enzymes function by adding nucleotides onto the 3′ hydroxyl group of the previous nucleotide in a DNA strand. As a consequence, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.
In DNA replication, a DNA-dependent DNA polymerase makes a copy of a DNA sequence. Accuracy is vital in this process, so many of these polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base is removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.
RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure.
Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.
A DNA helix usually does not interact with other segments of DNA, and in human cells the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is during chromosomal crossover when they recombine. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.
Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.
The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break either caused by an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA.
DNA contains the genetic information that allows all modern living things to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world, in which nucleic acid would have been used for both catalysis and genetics, may have influenced the evolution of the current genetic code based on four nucleotide bases. Such a number could have arisen because the number of different bases in such an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes.
However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible. This is because DNA will survive in the environment for less than one million years and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old, but these claims are controversial.
Uses in technology
Genetic engineering, also called genetic modification, is the direct human manipulation of an organism's genetic material in a way that does not occur under natural conditions. It involves the use of recombinant DNA techniques, but does not include traditional animal and plant breeding or mutagenesis. Any organism that is generated using these techniques is considered to be a genetically modified organism. The first genetically engineered organisms were bacteria in 1973, followed by mice in 1974. Insulin-producing bacteria were commercialized in 1982, and genetically modified food has been sold since 1994.
Producing genetically modified organisms or tissues is a multi-step process. It first involves isolating and copying the genetic material of interest; if necessary, changes to the genetic sequence may be introduced. A construct may then be built containing all the genetic elements needed for correct expression in a vector. This vector construct is introduced into the host organism in a process called transformation, transfection or transduction. Successfully transformed organisms or tissues are then selectively grown, usually under conditions in which survival requires the presence of a gene carried by the inserted material.
Genetic engineering techniques have been applied to various industries, with some success. Medicines such as insulin and human growth hormone are now produced in bacteria; experimental mice such as the oncomouse and the knockout mouse are used for research purposes; and insect-resistant and/or herbicide-tolerant crops have been commercialized. Plants that contain drugs and vaccines, animals with beneficial proteins in their milk, and stress-tolerant crops are currently being developed.
Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to match it to the DNA of an individual, such as a perpetrator. This process is formally termed DNA profiling, but may also be called "genetic fingerprinting". In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a match. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and was first used in forensic science to convict Colin Pitchfork in 1988 in the Enderby murders case.
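Profiling rests on counting how many times a short motif is repeated at a variable locus and comparing the counts between samples. A toy illustration; the GATA motif and the sequences are invented and far simpler than real forensic loci, which are typed at many loci at once:

```python
import re

def str_repeat_count(sequence: str, motif: str) -> int:
    """Number of repeat units in the longest uninterrupted run of the motif."""
    runs = re.findall(f"(?:{motif})+", sequence)
    return max((len(run) // len(motif) for run in runs), default=0)

evidence_locus  = "CCTAGATAGATAGATAGATACC"   # 4 uninterrupted GATA repeats
suspect_locus   = "CCTAGATAGATAGATAGATACC"
unrelated_locus = "CCTAGATAGATACC"           # only 2 repeats

print(str_repeat_count(evidence_locus, "GATA"))     # 4
print(str_repeat_count(unrelated_locus, "GATA"))    # 2
print(str_repeat_count(evidence_locus, "GATA") ==
      str_repeat_count(suspect_locus, "GATA"))      # True: repeat lengths match
```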
People convicted of certain types of crimes may be required to provide a sample of DNA for a database. This has helped investigators solve old cases where only a DNA sample was obtained from the scene. DNA profiling can also be used to identify victims of mass casualty incidents. Conversely, many convicted people have been released from prison on the basis of DNA techniques that were not available when the crime was originally committed.
Bioinformatics involves the manipulation, searching, and data mining of biological data, and this includes DNA sequence data. The development of techniques to store and search DNA sequences has led to widely applied advances in computer science, especially string searching algorithms, machine learning and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. A DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally. Entire genomes may also be compared, which can shed light on the evolutionary history of a particular organism and permit the examination of complex evolutionary events.
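String matching of the kind mentioned above amounts to finding every occurrence of a short motif within a much longer sequence. A naive sketch (production tools use faster algorithms and indexes); the motif reuses the TATAAT promoter element mentioned earlier, and the genome fragment is invented:

```python
def find_motif(genome: str, motif: str) -> list[int]:
    """Naive string search: every start position at which motif occurs in genome."""
    hits = []
    for i in range(len(genome) - len(motif) + 1):
        if genome[i:i + len(motif)] == motif:
            hits.append(i)
    return hits

genome_fragment = "GGCTATAATGCGTATAATCCA"
print(find_motif(genome_fragment, "TATAAT"))  # [3, 12]
```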
DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based as well as using the "DNA origami" method) as well as three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins.
History and anthropology
Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology; for example, DNA evidence is being used to try to identify the Ten Lost Tribes of Israel.
DNA has also been used to look at modern family relationships, such as establishing family relationships between the descendants of Sally Hemings and Thomas Jefferson. This usage is closely related to the use of DNA in criminal investigations detailed above. Indeed, some criminal investigations have been solved when DNA from crime scenes has matched relatives of the guilty individual.
History of DNA research
DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1919, Phoebus Levene identified the base, sugar and phosphate nucleotide unit. Levene suggested that DNA consisted of a string of nucleotide units linked together through the phosphate groups. However, Levene thought the chain was short and the bases repeated in a fixed order. In 1937 William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.
In 1928, Frederick Griffith discovered that traits of the "smooth" form of the Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carries genetic information—the Avery–MacLeod–McCarty experiment—when Oswald Avery, along with coworkers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle in 1943. DNA's role in heredity was confirmed in 1952, when Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the T2 phage.
In 1953, James D. Watson and Francis Crick suggested what is now accepted as the first correct double-helix model of DNA structure in the journal Nature. Their double-helix molecular model of DNA was based on a single X-ray diffraction image (labeled "Photo 51") taken by Rosalind Franklin and Raymond Gosling in May 1952, as well as the information that the DNA bases are paired, obtained through private communications from Erwin Chargaff in the previous years. Chargaff's rules played a very important role in establishing double-helix configurations for B-DNA as well as A-DNA.
Experimental evidence supporting the Watson and Crick model was published in a series of five articles in the same issue of Nature. Of these, Franklin and Gosling's paper was the first publication of their own X-ray diffraction data and original analysis method, which partially supported the Watson and Crick model; the issue also contained an article on DNA structure by Maurice Wilkins and two of his colleagues, whose analysis and in vivo B-DNA X-ray patterns likewise supported the double-helical configuration proposed by Watson and Crick in the preceding two pages of the journal. In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine; Nobel rules of the time allowed only living recipients. A vigorous debate continues over who should receive credit for the discovery.
In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment. Further work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.
- DNA microarray
- DNA sequencing
- Genetic disorder
- Junk DNA
- Molecular models of DNA
- Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid
- Nucleic acid analogues
- Nucleic acid methods
- Nucleic acid modeling
- Nucleic acid notation
- Paracrystal model and theory
- X-ray crystallography
- X-ray scattering
- Polymerase chain reaction
- Proteopedia DNA
- Southern blot
- Triple-stranded DNA
- ^ Russell, Peter (2001). iGenetics. New York: Benjamin Cummings. ISBN 0-805-34553-1.
- ^ Saenger, Wolfram (1984). Principles of Nucleic Acid Structure. New York: Springer-Verlag. ISBN 0387907629.
- ^ a b Alberts, Bruce; Alexander Johnson, Julian Lewis, Martin Raff, Keith Roberts and Peter Walter (2002). Molecular Biology of the Cell; Fourth Edition. New York and London: Garland Science. ISBN 0-8153-3218-1. OCLC 48122761. http://www.ncbi.nlm.nih.gov/books/bv.fcgi?call=bv.View..ShowTOC&rid=mboc4.TOC&depth=2.
- ^ Butler, John M. (2001). Forensic DNA Typing. Elsevier. ISBN 978-0-12-147951-0. OCLC 45406517. pp. 14–15.
- ^ a b c d Watson J.D. and Crick F.H.C. (1953). "A Structure for Deoxyribose Nucleic Acid" (PDF). Nature 171 (4356): 737–738. doi:10.1038/171737a0. PMID 13054692. http://www.nature.com/nature/dna50/watsoncrick.pdf.
- ^ Mandelkern M, Elias J, Eden D, Crothers D (1981). "The dimensions of DNA in solution". J Mol Biol 152 (1): 153–61. doi:10.1016/0022-2836(81)90099-1. PMID 7338906.
- ^ Gregory S; Barlow, KF; McLay, KE; Kaul, R; Swarbreck, D; Dunham, A; Scott, CE; Howe, KL et al. (2006). "The DNA sequence and biological annotation of human chromosome 1". Nature 441 (7091): 315–21. doi:10.1038/nature04727. PMID 16710414.
- ^ a b c Berg J., Tymoczko J. and Stryer L. (2002) Biochemistry. W. H. Freeman and Company ISBN 0-7167-4955-6
- ^ Abbreviations and Symbols for Nucleic Acids, Polynucleotides and their Constituents IUPAC-IUB Commission on Biochemical Nomenclature (CBN), Accessed 03 January 2006
- ^ a b Ghosh A, Bansal M (2003). "A glossary of DNA structures from A to Z". Acta Crystallogr D Biol Crystallogr 59 (Pt 4): 620–6. doi:10.1107/S0907444903003251. PMID 12657780.
- ^ Created from PDB 1D65
- ^ Verma S, Eckstein F (1998). "Modified oligonucleotides: synthesis and strategy for users". Annu. Rev. Biochem. 67: 99–134. doi:10.1146/annurev.biochem.67.1.99. PMID 9759484.
- ^ Wing R, Drew H, Takano T, Broka C, Tanaka S, Itakura K, Dickerson R (1980). "Crystal structure analysis of a complete turn of B-DNA". Nature 287 (5784): 755–8. doi:10.1038/287755a0. PMID 7432492.
- ^ Pabo C, Sauer R (1984). "Protein-DNA recognition". Annu Rev Biochem 53: 293–321. doi:10.1146/annurev.bi.53.070184.001453. PMID 6236744.
- ^ Clausen-Schaumann H, Rief M, Tolksdorf C, Gaub H (2000). "Mechanical stability of single DNA molecules". Biophys J 78 (4): 1997–2007. doi:10.1016/S0006-3495(00)76747-6. PMID 10733978.
- ^ Yakovchuk P, Protozanova E, Frank-Kamenetskii MD (2006). "Base-stacking and base-pairing contributions into thermal stability of the DNA double helix". Nucleic Acids Res. 34 (2): 564–74. doi:10.1093/nar/gkj454. PMID 16449200. PMC 1360284. http://nar.oxfordjournals.org/cgi/pmidlookup?view=long&pmid=16449200.
- ^ Chalikian T, Völker J, Plum G, Breslauer K (1999). "A more unified picture for the thermodynamics of nucleic acid duplex melting: a characterization by calorimetric and volumetric techniques". Proc Natl Acad Sci USA 96 (14): 7853–8. doi:10.1073/pnas.96.14.7853. PMID 10393911.
- ^ deHaseth P, Helmann J (1995). "Open complex formation by Escherichia coli RNA polymerase: the mechanism of polymerase-induced strand separation of double helical DNA". Mol Microbiol 16 (5): 817–24. doi:10.1111/j.1365-2958.1995.tb02309.x. PMID 7476180.
- ^ Isaksson J, Acharya S, Barman J, Cheruku P, Chattopadhyaya J (2004). "Single-stranded adenine-rich DNA and RNA retain structural characteristics of their respective double-stranded conformations and show directional differences in stacking pattern". Biochemistry 43 (51): 15996–6010. doi:10.1021/bi048221v. PMID 15609994.
- ^ Designation of the two strands of DNA JCBN/NC-IUB Newsletter 1989, Accessed 07 May 2008
- ^ Hüttenhofer A, Schattner P, Polacek N (2005). "Non-coding RNAs: hope or hype?". Trends Genet 21 (5): 289–97. doi:10.1016/j.tig.2005.03.007. PMID 15851066.
- ^ Munroe S (2004). "Diversity of antisense regulation in eukaryotes: multiple mechanisms, emerging patterns". J Cell Biochem 93 (4): 664–71. doi:10.1002/jcb.20252. PMID 15389973.
- ^ Makalowska I, Lin C, Makalowski W (2005). "Overlapping genes in vertebrate genomes". Comput Biol Chem 29 (1): 1–12. doi:10.1016/j.compbiolchem.2004.12.006. PMID 15680581.
- ^ Johnson Z, Chisholm S (2004). "Properties of overlapping genes are conserved across microbial genomes". Genome Res 14 (11): 2268–72. doi:10.1101/gr.2433104. PMID 15520290.
- ^ Lamb R, Horvath C (1991). "Diversity of coding strategies in influenza viruses". Trends Genet 7 (8): 261–6. PMID 1771674.
- ^ Benham C, Mielke S (2005). "DNA mechanics". Annu Rev Biomed Eng 7: 21–53. doi:10.1146/annurev.bioeng.6.062403.132016. PMID 16004565.
- ^ a b Champoux J (2001). "DNA topoisomerases: structure, function, and mechanism". Annu Rev Biochem 70: 369–413. doi:10.1146/annurev.biochem.70.1.369. PMID 11395412.
- ^ a b Wang J (2002). "Cellular roles of DNA topoisomerases: a molecular perspective". Nat Rev Mol Cell Biol 3 (6): 430–40. doi:10.1038/nrm831. PMID 12042765.
- ^ Basu H, Feuerstein B, Zarling D, Shafer R, Marton L (1988). "Recognition of Z-RNA and Z-DNA determinants by polyamines in solution: experimental and theoretical studies". J Biomol Struct Dyn 6 (2): 299–309. PMID 2482766.
- ^ Franklin RE, Gosling RG (6 March 1953). "The Structure of Sodium Thymonucleate Fibres I. The Influence of Water Content". Acta Cryst 6 (8-9): 673–7. doi:10.1107/S0365110X53001939. http://hekto.med.unc.edu:8080/CARTER/carter_WWW/Bioch_134/PDF_files/Franklin_Gossling.pdf.
Franklin RE, Gosling RG (September 1953). "The structure of sodium thymonucleate fibres. II. The cylindrically symmetrical Patterson function". Acta Cryst 6 (8-9): 678–85. doi:10.1107/S0365110X53001940.
- ^ a b Franklin, Rosalind and Gosling, Raymond (1953). "Molecular Configuration in Sodium Thymonucleate" (PDF). Nature 171 (4356): 740–1. doi:10.1038/171740a0. PMID 13054694. http://www.nature.com/nature/dna50/franklingosling.pdf.
- ^ a b Wilkins M.H.F., Stokes A.R. & Wilson H.R. (1953). "Molecular Structure of Deoxypentose Nucleic Acids" (PDF). Nature 171 (4356): 738–740. doi:10.1038/171738a0. PMID 13054693. http://www.nature.com/nature/dna50/wilkins.pdf.
- ^ Leslie AG, Arnott S, Chandrasekaran R, Ratliff RL (1980). "Polymorphism of DNA double helices". J. Mol. Biol. 143 (1): 49–72. doi:10.1016/0022-2836(80)90124-2. PMID 7441761.
- ^ Baianu, I.C. (1980). "Structural Order and Partial Disorder in Biological systems". Bull. Math. Biol. 42 (4): 137–141. http://cogprints.org/3822/
- ^ Hosemann R., Bagchi R.N., Direct analysis of diffraction by matter, North-Holland Publs., Amsterdam – New York, 1962.
- ^ Baianu, I.C. (1978). "X-ray scattering by partially disordered membrane systems.". Acta Cryst., A34 (5): 751–753. doi:10.1107/S0567739478001540.
- ^ Wahl M, Sundaralingam M (1997). "Crystal structures of A-DNA duplexes". Biopolymers 44 (1): 45–63. doi:10.1002/(SICI)1097-0282(1997)44:1<45::AID-BIP4>3.0.CO;2-#. PMID 9097733.
- ^ Lu XJ, Shakked Z, Olson WK (2000). "A-form conformational motifs in ligand-bound DNA structures". J. Mol. Biol. 300 (4): 819–40. doi:10.1006/jmbi.2000.3690. PMID 10891271.
GFAJ-1 is a strain of rod-shaped bacteria in the family Halomonadaceae. It is an extremophile. It was isolated from the hypersaline and alkaline Mono Lake in eastern California by geobiologist Felisa Wolfe-Simon, a NASA research fellow in residence at the US Geological Survey. In a 2010 Science journal publication, the authors claimed that the microbe, when starved of phosphorus, is capable of substituting arsenic for a small percentage of its phosphorus and sustaining its growth. Immediately after publication, other microbiologists and biochemists expressed doubts, and the claim was robustly criticized in the scientific community. Subsequent independent studies published in 2012 found no detectable arsenate in the DNA of GFAJ-1, refuted the claim, and demonstrated that GFAJ-1 is simply an arsenate-resistant, phosphate-dependent organism.
The GFAJ-1 bacterium was discovered by geomicrobiologist Felisa Wolfe-Simon, a NASA astrobiology fellow in residence at the US Geological Survey in Menlo Park, California. GFAJ stands for "Give Felisa a Job". The organism was isolated and cultured beginning in 2009 from samples she and her colleagues collected from sediments at the bottom of Mono Lake, California, U.S.A. Mono Lake is hypersaline (salinity of about 90 grams per liter) and highly alkaline (pH 9.8). It also has one of the highest natural concentrations of arsenic in the world (200 μM). The discovery was widely publicized on 2 December 2010.
Taxonomy and phylogeny
Phylogeny of GFAJ-1 and closely related bacteria based on ribosomal DNA sequences (figure caption).
Molecular analysis based on 16S rRNA sequences shows GFAJ-1 to be closely related to other moderate halophile ("salt-loving") bacteria of the family Halomonadaceae. Although the authors produced a cladogram in which the strain is nested among members of Halomonas, including H. alkaliphila and H. venusta, they did not explicitly assign the strain to that genus. Many bacteria are known to be able to tolerate high levels of arsenic, and to have a proclivity to take it up into their cells. However, GFAJ-1 has now been proposed to go a step further; when starved of phosphorus, it may instead incorporate arsenic into its metabolites and macromolecules and continue growing.
Species or strain
In the Science journal article, GFAJ-1 is referred to as a strain of Halomonadaceae and not as a new species. The International Code of Nomenclature of Bacteria, the set of regulations which governs the taxonomy of bacteria, and certain articles in the International Journal of Systematic and Evolutionary Microbiology contain the guidelines and minimal standards for describing a new species, e.g. the minimal standards for describing a member of the Halomonadaceae. Organisms are described as new species if they meet certain physiological and genetic conditions (such as generally less than 97% 16S rRNA sequence identity to other known species) and show metabolic differences that allow them to be distinguished. In addition to these indicators, other analyses are required, such as fatty acid composition, the respiratory quinone used and tolerance ranges, as well as deposition of the strain in at least two microbiological repositories. Newly proposed names are given in italics followed by sp. nov. (and gen. nov. if it is a novel genus according to the descriptions of that clade). In the instance of the GFAJ-1 strain these criteria are not met, and the strain is not claimed to be a new species. When a strain is not assigned to a species (e.g. due to insufficient data or author choice) it is often labeled with the genus name followed by "sp." (i.e., undetermined species of that genus) and the strain name. In the case of GFAJ-1 the authors chose to refer to the strain by its strain designation only. Strains closely related to GFAJ-1 include Halomonas sp. GTW and Halomonas sp. G27, neither of which was described as a valid species. If the authors had formally assigned strain GFAJ-1 to the genus Halomonas, the name would be given as Halomonas sp. GFAJ-1.
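The roughly 97% 16S rRNA identity criterion mentioned above boils down to a pairwise comparison of aligned sequences. The sketch below is a minimal illustration of that calculation; the toy sequences, the function name and the simple fraction-of-matching-positions metric are assumptions made for illustration, not the procedure used in the cited taxonomic standards.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of identical positions between two pre-aligned sequences
    of equal length (gap characters are counted as mismatches)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

# Toy aligned 16S rRNA fragments (hypothetical, not real GFAJ-1 or Halomonas data).
candidate = "ACGTACGTGGCCA-TACGT"
reference = "ACGTACGAGGCCAATACGT"

identity = percent_identity(candidate, reference)
# Below ~97% identity is commonly taken as evidence for a distinct species;
# above it, additional phenotypic and chemotaxonomic data are required.
print(f"identity = {identity:.1f}%  -> candidate distinct species? {identity < 97.0}")
```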
A phosphorus-free growth medium (which actually contained 3.1 ± 0.3 μM of residual phosphate, from impurities in reagents) was used to culture the bacteria in a regime of increasing exposure to arsenate; the initial level of 0.1 mM was eventually ramped up to 40 mM. Alternative media used for comparative experiments contained either high levels of phosphate (1.5 mM) with no arsenate, or had neither added phosphate nor added arsenate. It was observed that GFAJ-1 could grow through many doublings in cell numbers when cultured in either phosphate or arsenate media, but could not grow when placed in a medium of a similar composition to which neither phosphate nor arsenate was added. The phosphorus content of the arsenic-fed, phosphorus-starved bacteria (as measured by ICP-MS) was only 0.019 (± 0.001) % by dry weight, one thirtieth of that when grown in phosphate-rich medium. This phosphorus content was also only about one tenth of the cells' average arsenic content (0.19 ± 0.25% by dry weight). The arsenic content of cells as measured by ICP-MS varied widely and could be lower than the phosphorus content in some experiments, and up to fourteen times higher in others. Other data from the same study obtained with nano-SIMS suggest a ~75-fold excess of phosphate (P) over arsenic (As) when expressed as P:C and As:C ratios, even in cells grown with arsenate and no added phosphate. When cultured in the arsenate solution, GFAJ-1 only grew 60% as fast as it did in phosphate solution. The phosphate-starved bacteria had an intracellular volume 1.5 times normal; the greater volume appeared to be associated with large "vacuole-like regions".
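The dry-weight figures above can be tied together with a little arithmetic. The short sketch below simply re-derives the quoted ratios from the numbers in the preceding paragraph and is only an illustrative check.

```python
# Dry-weight elemental contents reported for GFAJ-1 (percent by dry weight),
# taken from the figures quoted in the paragraph above.
p_arsenate_grown = 0.019                    # phosphorus in arsenate-fed, P-starved cells
p_phosphate_grown = p_arsenate_grown * 30   # stated to be ~1/30 of the phosphate-rich value
as_arsenate_grown = 0.19                    # average arsenic content of the same cells

print(f"P in phosphate-rich medium ≈ {p_phosphate_grown:.2f} % dry weight")
print(f"P / As in arsenate-grown cells ≈ {p_arsenate_grown / as_arsenate_grown:.2f}")
# ~0.1, i.e. the residual phosphorus was about one tenth of the arsenic content,
# even though the nano-SIMS ratios still implied a large excess of P over As.
```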
When the researchers added isotope-labeled arsenate to the solution to track its distribution, they found that arsenic was present in the cellular fractions containing the bacteria's proteins, lipids and metabolites such as ATP, as well as its DNA and RNA. Nucleic acids from stationary phase cells starved of phosphorus were concentrated via five extractions (one with phenol, three with phenol-chloroform and one with chloroform extraction solvent), followed by ethanol precipitation. Although direct evidence of the incorporation of arsenic into biomolecules is still lacking, radioactivity measurements suggested that approximately one-tenth (11.0 ± 0.1%) of the arsenic absorbed by these bacteria ended up in the fraction that contained the nucleic acids (DNA and RNA) and all other co-precipitated compounds not extracted by the previous treatments. A comparable control experiment with isotope-labeled phosphate was not performed. With the distribution of the strain in mid-2011, other labs began to independently test the validity of the discovery. Prof. Rosie Redfield of the University of British Columbia, after encountering problems with the published growth conditions, investigated the growth requirements of GFAJ-1 and found that the strain grows better on solid agar medium than in liquid culture; she hypothesized that the potassium levels in the basal ML60 medium may be too low to support growth in liquid. After identifying and addressing further issues (ionic strength, pH and the use of glass tubes instead of polypropylene), Redfield found that arsenate marginally stimulated growth but did not affect the final densities of the cultures, contrary to what had been claimed. Subsequent studies using mass spectrometry by the same group found no evidence of arsenate being incorporated into the DNA of GFAJ-1.
Arsenate ester stability
Arsenate esters, such as those that would be present in DNA, are generally expected to be orders of magnitude less stable to hydrolysis than corresponding phosphate esters. dAMAs, the structural arsenic analog of the DNA building block dAMP, has a half-life of 40 minutes in water at neutral pH. Estimates of the half-life in water of arsenodiester bonds, which would link the nucleotides together, are as short as 0.06 seconds—compared to 30 million years for the phosphodiester bonds in DNA. The authors speculate that the bacteria may stabilize arsenate esters to a degree by using poly-β-hydroxybutyrate (which has been found to be elevated in "vacuole-like regions" of related species of the genus Halomonas) or other means to lower the effective concentration of water. Polyhydroxybutyrates are used by many bacteria for energy and carbon storage under conditions when growth is limited by elements other than carbon, and typically appear as large waxy granules closely resembling the "vacuole-like regions" seen in GFAJ-1 cells. The authors present no mechanism by which insoluble polyhydroxybutyrate may lower the effective concentration of water in the cytoplasm sufficiently to stabilize arsenate esters. Although all halophiles must reduce the water activity of their cytoplasm by some means to avoid desiccation, the cytoplasm always remains an aqueous environment.
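To get a feel for the quoted half-lives, one can model hydrolysis as simple first-order decay and ask how much of an arsenate-linked backbone would survive a brief aqueous work-up such as the extractions described earlier. The one-hour work-up time and the first-order decay model are simplifying assumptions made here for illustration; the half-life figures come from the estimates quoted above.

```python
import math

def surviving_fraction(half_life_s: float, elapsed_s: float) -> float:
    """Fraction of bonds remaining after elapsed_s, assuming first-order decay."""
    return 0.5 ** (elapsed_s / half_life_s)

workup = 3600.0  # assume a one-hour aqueous extraction (illustrative)

arsenodiester_t_half = 0.06                    # seconds (estimate quoted above)
phosphodiester_t_half = 30e6 * 365.25 * 86400  # ~30 million years, in seconds

print(f"arsenodiester bonds surviving 1 h:  {surviving_fraction(arsenodiester_t_half, workup):.1e}")
print(f"phosphodiester bonds surviving 1 h: {surviving_fraction(phosphodiester_t_half, workup):.10f}")
# An arsenate-linked backbone would be hydrolysed essentially completely, while a
# phosphate backbone is effectively untouched — the thrust of the stability
# objections raised by critics of the paper (see below).
```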
NASA's announcement of a news conference "that will impact the search for evidence of extraterrestrial life" was criticized as sensationalistic and misleading; an editorial in New Scientist commented "although the discovery of alien life, if it ever happens, would be one of the biggest stories imaginable, this was light-years from that."
In addition, many experts who have evaluated the paper have concluded that the reported studies do not provide enough evidence to support the claims made by the authors. In an online article on Slate, science writer Carl Zimmer discussed the skepticism of several scientists: "I reached out to a dozen experts ... Almost unanimously, they think the NASA scientists have failed to make their case". Chemist Steven A. Benner has expressed doubts that arsenate has replaced phosphate in the DNA of this organism. He suggested that the trace contaminants in the growth medium used by Wolfe-Simon in her laboratory cultures are sufficient to supply the phosphorus needed for the cells' DNA. He believes that it is more likely that arsenic is being sequestered elsewhere in the cells. University of British Columbia microbiologist Rosemary Redfield said that the paper "doesn't present any convincing evidence that arsenic has been incorporated into DNA or any other biological molecule," and suggests that the experiments lacked the washing steps and controls necessary to properly validate their conclusions. Harvard microbiologist Alex Bradley said that arsenic-containing DNA would be so unstable in water it could not have survived the analysis procedure.
On 8 December 2010, Science published a response by Wolfe-Simon, in which she stated that criticism of the research was expected. In response, a "Frequently Asked Questions" page to improve understanding of the work was posted on 16 December 2010. The team planned to deposit the GFAJ-1 strain in the ATCC and DSMZ culture collections to allow widespread distribution. In late May 2011, the strain was also made available upon request directly from the laboratory of the authors. Science has made the article freely available. The article was published in print six months after acceptance in the 3 June 2011 issue of Science. The publication was accompanied by eight technical comments addressing various concerns regarding the article's experimental procedure and conclusions, as well as a response by the authors to these concerns. The editor in chief indicated that some issues remained and that their resolution was likely to be a long process. A review by Rosen et al., in the March 2011 issue of the journal BioEssays, discusses the technical issues with the Science paper, provides alternative explanations, and highlights the known biochemistry of other arsenic-resistant and arsenic-utilizing microbes.
On 27 May 2011, Wolfe-Simon and her team responded to the criticism in a follow-up Science journal publication. Then in January 2012, a group of researchers led by Rosie Redfield at the University of British Columbia analyzed the DNA of GFAJ-1 using liquid chromatography–mass spectrometry and could not detect any arsenic, which Redfield called a "clear refutation" of the original paper's findings. Following the publication of the articles challenging the conclusions of the original Science article first describing GFAJ-1, the website Retraction Watch argued that the original article should be retracted because of misrepresentation of critical data.
- Arsenic biochemistry
- Arsenic poisoning
- Arsenic toxicity
- Hypothetical types of biochemistry
- Nucleic acid analogues
- Organoarsenic chemistry
- Prebiotic arsenic
- Wolfe-Simon, Felisa; Blum, Jodi Switzer; Kulp, Thomas R.; Gordon, Gwyneth W.; Hoeft, Shelley E.; Pett-Ridge, Jennifer; Stolz, John F.; Webb, Samuel M. et al. (2 December 2010). "A bacterium that can grow by using arsenic instead of phosphorus". Science 332 (6034): 1163–1166. doi:10.1126/science.1197258. PMID 21127214. Retrieved 9 June 2011.
- Katsnelson, Alla (2 December 2010). "Arsenic-eating microbe may redefine chemistry of life". Nature News. doi:10.1038/news.2010.645. Retrieved 2 December 2010.
- "Arsenic-loving bacteria may help in hunt for alien life". BBC News. 2 December 2010. Retrieved 2 December 2010.
- Katsnelson, A. (2010). "Arsenic-eating microbe may redefine chemistry of life". Nature. doi:10.1038/news.2010.645.
- "Studies refute arsenic bug claim". BBC News. 9 July 2012. Retrieved 2012-07-10.
- Tobias J. Erb; Patrick Kiefer; Bodo Hattendorf; Detlef Gunter; Julia Vorholt (July 8, 2012). "GFAJ-1 Is an Arsenate-Resistant, Phosphate-Dependent Organism". Science. doi:10.1126/science.1218455. Retrieved 2012-07-10.
- RRResearch By Rosie Redfield. January 16, 2012
- Marshall Louis Reaves; Sunita Sinha; Joshua Rabinowitz; Leonid Kruglyak; Rosemary Redfield (July 8, 2012). "Absence of Detectable Arsenate in DNA from Arsenate-Grown GFAJ-1 Cells". Science. doi:10.1126/science.1219861. Retrieved 2012-07-10.
- Bortman, Henry (5 October 2009). "Searching for Alien Life, on Earth". Astrobiology Magazine (NASA). Retrieved 2 December 2010.
- Davies, Paul (4 December 2010). "The 'Give Me a Job' Microbe". Wall Street Journal. Retrieved 5 December 2010.
- Bortman, Henry (2 December 2010). "Thriving on arsenic". Astrobiology Magazine (NASA). Retrieved 11 December 2010.
- Oremland, Ronald S.; Stolz, John F. (9 May 2003). "The ecology of arsenic" (PDF). Science 300 (5621): 939–944. doi:10.1126/science.1081903. PMID 12738852.
- Wolfe-Simon, Felisa; Blum, J. S. et al. (2 December 2010). "A bacterium that can grow by using arsenic instead of phosphorus: Supporting online material" (PDF). Science 332 (6034): 1163–1166. doi:10.1126/science.1197258. PMID 21127214.
- Stolz, John F.; Basu, Partha; Santini, Joanne M.; Oremland, Ronald S. (2006). "Arsenic and selenium in microbial metabolism". Annual Review of Microbiology 60: 107–130. doi:10.1146/annurev.micro.60.080805.142053. PMID 16704340.
- "Halomonas sp. GFAJ-1". U.S. National Library of Medicine. Retrieved 11 December 2011.
- Arahal, D. R.; Vreeland, R. H.; Litchfield, C. D.; Mormile, M. R.; Tindall, B. J.; Oren, A.; Bejar, V.; Quesada, E.; Ventosa, A. (2007). "Recommended minimal standards for describing new taxa of the family Halomonadaceae". International Journal of Systematic and Evolutionary Microbiology 57 (Pt 10): 2436–2446. doi:10.1099/ijs.0.65430-0. PMID 17911321.
- Stackebrandt, Erko; Ebers, Jonas (2006). "Taxonomic parameters revisited: tarnished gold standards" (PDF). Microbiology Today 33 (4): 152–155.
- Sneath, P.H.A (1992). Lapage S.P.; Sneath, P.H.A.; Lessel, E.F.; Skerman, V.B.D.; Seeliger, H.P.R.; Clark, W.A., ed. International Code of Nomenclature of Bacteria. Washington, D.C.: American Society for Microbiology. ISBN 1-55581-039-X. PMID 21089234.
- Euzéby J.P. (2010). "Introduction". List of Prokaryotic names with Standing in Nomenclature. Retrieved 11 December 2010.
- Guo, Jianbo; Zhou, Jiti; Wang, Dong; Tian, Cunping; Wang, Ping; Uddin, M. Salah (2008). "A novel moderately halophilic bacterium for decolorizing azo dye under high salt condition". Biodegradation 19 (1): 15–19. doi:10.1007/s10532-007-9110-1. PMID 17347922.
- Kiesel, B.; Müller, R.H.; Kleinsteuber, R. (2007). "Adaptative potential of alkaliphilic bacteria towards chloroaromatic substrates assessed by a gfp-tagged 2,4-D degradation plasmid". Engineering in Life Sciences 7 (4): 361–372. doi:10.1002/elsc.200720200.
- Rosie Redfield. "RRResearch: Two mistakes discovered".
- Rosie Redfield. "RRResearch: Growth of GFAJ-1 in arsenate".
- Westheimer, F.H. (6 June 1987). "Why nature chose phosphates". Science 235 (4793): 1173–1178. doi:10.1126/science.2434996.
- Lagunas, Rosario; Pestana, David; Diez-Masa, Jose C. (1984). "Arsenic mononucleotides. Separation by high-performance liquid chromatography and identification with myokinase and adenylate deaminase". Biochemistry 23 (5): 955–960. doi:10.1021/bi00300a024. PMID 6324859.
- Fekry, M. I.; Tipton, P. A.; Gates, K. S. (2011). "Kinetic Consequences of Replacing the Internucleotide Phosphorus Atoms in DNA with Arsenic". ACS Chemical Biology 6 (2): 110126094628028. doi:10.1021/cb2000023.
- Quillaguamána, Jorge; Delgado, Osvaldo; Mattiasson, Bo; Hatti-Kaul, Rajni (January 2006). "Poly(β-hydroxybutyrate) production by a moderate halophile, Halomonas boliviensis LC1". Enzyme and Microbial Technology (Elsevier Inc.) 38 (1–2): 148–154. doi:10.1016/j.enzmictec.2005.05.013. PMID 15960675.
- Oren, Aharon (June 1999). "Bioenergetic aspects of halophilism". Microbiology and Molecular Biology Reviews (American Society for Microbiology) 63 (2): 334–48. ISSN 1092-2172. PMC 98969. PMID 10357854.
- Opinion (8 December 2010). Curb your enthusiasm for aliens, NASA (2790). New Scientist. p. 5. Retrieved 9 December 2010.
- "MEDIA ADVISORY : M10-167, NASA Sets News Conference on Astrobiology Discovery; Science Journal Has Embargoed Details". 29 November 2010.
- Carmen Drahl (2010). "Arsenic Bacteria Breed Backlash". Chemical & Engineering News 88 (50): 7. doi:10.1021/cen112210140356.
- Zimmer, Carl (7 December 2010). "Scientists see fatal flaws in the NASA study of arsenic-based life". Slate. Retrieved 7 December 2010.
- Zimmer, Carl (27 May 2011). "The Discovery of Arsenic-Based Twitter". Slate. Retrieved 29 May 2011.
- Redfield, Rosemary (4 December 2010). "Arsenic-associated bacteria (NASA's claims)". RR Research blog. Retrieved 4 December 2010.
- Redfield, Rosemary (8 December 2010). "My Letter to Science". RR Research blog. Retrieved 9 December 2010.
- Bradley, Alex (5 December 2010). "Arsenate-based DNA: a big idea with big holes". We, Beasties blog. Retrieved 9 December 2010.
- Wolfe-Simon, Felisa (16 December 2010). "Response to Questions Concerning the Science Article," (PDF). Retrieved 17 December 2010.
- "NASA Science Seminar: Arsenic and the Meaning of Life". 21 December 2010. Retrieved 30 January 2010.
- Wolfe-Simon, Felisa; Blum, Jodi Switzer; Kulp, Thomas R.; Gordon, Gwyneth W.; Hoeft, Shelley E.; Pett-Ridge, Jennifer; Stolz, John F.; Webb, Samuel M. et al. (27 May 2011). "Response to Comments on "A Bacterium That Can Grow Using Arsenic Instead of Phosphorus"" (PDF). Science 332 (6034): 1149–1149. doi:10.1126/science.1202098. Retrieved 30 May 2011.
- Pennisi, Elizabeth (8 December 2010). "Author of controversial arsenic paper speaks". ScienceInsider. Science. Retrieved 11 December 2010.
- Cotner, J. B.; Hall, E. K. (27 May 2011). "Comment on "A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus"". Science 332 (6034): 1149–1149. doi:10.1126/science.1201943.
- Redfield, R. J. (27 May 2011). "Comment on "A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus"". Science 332 (6034): 1149–1149. doi:10.1126/science.1201482.
- Schoepp-Cothenet, B.; Nitschke, W.; Barge, L. M.; Ponce, A.; Russell, M. J.; Tsapin, A. I. (2011). "Comment on "A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus"". Science 332 (6034): 1149–1149. doi:10.1126/science.1201438.
- Csabai, I.; Szathmary, E. (2011). "Comment on "A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus"". Science 332 (6034): 1149–1149. doi:10.1126/science.1201399.
- Borhani, D. W. (2011). "Comment on "A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus"". Science 332 (6034): 1149–1149. doi:10.1126/science.1201255.
- Benner, S. A. (2011). "Comment on "A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus"". Science 332 (6034): 1149–1149. doi:10.1126/science.1201304.
- Foster, P. L. (2011). "Comment on "A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus"". Science 332 (6034): i–1149. doi:10.1126/science.1201551.
- Oehler, S. (2011). "Comment on "A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus"". Science 332 (6034): 1149–1149. doi:10.1126/science.1201381.
- Hamilton, Jon (30 May 2011). "Study Of Arsenic-Eating Microbe Finds Doubters". NPR. Retrieved 30 May 2011.
- Wolfe-Simon, F.; Blum, J. S.; Kulp, T. R.; Gordon, G. W.; Hoeft, S. E.; Pett-Ridge, J.; Stolz, J. F.; Webb, S. M. et al. (2011). "Response to Comments on "A Bacterium That Can Grow Using Arsenic Instead of Phosphorus"". Science 332 (6034): 1149–1149. doi:10.1126/science.1202098.
- Alberts, B. (2011). "Editor's Note". Science 332 (6034): 1149. doi:10.1126/science.1208877.
- Rosen, Barry P.; Ajees, A. Abdul; McDermott, Timothy R. (2011). "Life and death with arsenic". BioEssays 33 (5): 350–357. doi:10.1002/bies.201100012. PMID 21387349.
- Hayden, Erika Check (20 January 2012). "Study challenges existence of arsenic-based life". Nature News. doi:10.1038/nature.2012.9861. Retrieved 20 January 2012.
- David Sanders. "Despite refutation, Science arsenic life paper deserves retraction, scientist argues". Retraction Watch. Retrieved 2012-07-09.
- David Sanders. "Why it's high time to retract #arseniclife". periodicplayground. Retrieved 2013-02-16.
The family was originally described in 1988 to contain the genera Halomonas and Deleya.
In 1989, Chromobacterium marismortui was reclassified as Chromohalobacter marismortui forming a third genus in the family Halomonadaceae.
Subsequently, in 1990, a newly discovered species was described as Volcaniella eurihalina, forming a new genus in the Halomonadaceae, but it was later (in 1995) reclassified as a member of the genus Halomonas.
The species Carnimonas nigrificans (the sole member of its genus) was not initially placed in the family because it lacked two of the 15 descriptive 16S rRNA signature sequences, but it has since been proposed that it be reclassified into the family.
In 1996, the family was reorganised by unifying the genera Deleya, Halomonas and Halovibrio and the species Paracoccus halodenitrificans into Halomonas, and by placing Zymobacter in the family. It was later discovered, however, that the strains Halovibrio variabilis DSM 3051 and DSM 3050 differed; the latter was made the type strain of Halovibrio, a genus name which remains in use and now comprises two species (the other being Halovibrio denitrificans).
In 2002, Halomonas marina was transferred to its own genus, Cobetia, and in 2009 Halomonas marisflavi, Halomonas indalinina and Halomonas avicenniae were transferred to a new genus called Kushneria (five species).
Several singleton genera have been created more recently: Halotalea alkalilenta was described in 2007, Aidingimonas halophila in 2009, Halospina denitrificans in 2006, Modicisalibacter tunisiensis in 2009 and Salinicola socius in 2009. Two species were subsequently transferred to the latter genus: Halomonas salaria as Salinicola salarius and Chromohalobacter salarius as Salinicola halophilus.
- Halomonas, the type genus
- Aidingimonas halophila
- Chromohalobacter marismortui
- Chromohalobacter beijerinckii
- Chromohalobacter canadensis and Chromohalobacter israelensis, formerly of the genus Halomonas
- Chromohalobacter japonicus
- Chromohalobacter nigrandesensis
- Chromohalobacter salarius
- Chromohalobacter salexigens
- Chromohalobacter sarecensis, psychrotolerant
- Halotalea alkalilenta
- Kushneria aurantia, type species
- Kushneria marisflavi, Kushneria indalinina and Kushneria avicenniae were previously classified under Halomonas
- Zymobacter, not to be confused with Zymomonas mobilis, an alphaproteobacterium studied for its biofuel production; this is an easy error that even the International Code of Nomenclature of Bacteria has made
Note: Species of Deleya and Halovibrio are now Halomonas
The name derives from Halomonas, which is the type genus of the family, plus the suffix -aceae, the ending used to denote a family.
Geomicrobiologist Felisa Wolfe-Simon, with a NASA-funded team, is researching a particular strain of the family Halomonadaceae, named GFAJ-1, isolated and cultured from sediments collected along the shore of Mono Lake, near Yosemite National Park in eastern California. This GFAJ-1 strain of Halomonadaceae can grow in the presence of high concentrations of arsenic.
- FRANZMANN (P.D.), WEHMEYER (U.) and STACKEBRANDT (E.): Halomonadaceae fam. nov., a new family of the class Proteobacteria to accommodate the genera Halomonas and Deleya. Syst. Appl. Microbiol., 1988, 11, 16-19.
- VENTOSA (A.), GUTIERREZ (M.C.), GARCIA (M.T.) and RUIZ-BERRAQUERO (F.): Classification of "Chromobacterium marismortui" in a new genus, Chromohalobacter gen. nov., as Chromohalobacter marismortui comb. nov., nom. rev. Int. J. Syst. Bacteriol., 1989, 39, 382-386.
- QUESADA (E.), VALDERRAMA (M.J.), BEJAR (V.), VENTOSA (A.), GUTIERREZ (M.C.), RUIZ-BERRAQUERO (F.) and RAMOS-CORMENZANA (A.): Volcaniella eurihalina gen. nov., sp. nov., a moderately halophilic nonmotile gram-negative rod. Int. J. Syst. Bacteriol., 1990, 40, 261-267
- Mellado, E., Moore, E. R. B., Nieto, J. J. & Ventosa, A. (1995). Phylogenetic inferences and taxonomic consequences of 16S. ribosomal DNA sequence comparison of Chromohalobacter marismortui, Volcaniella eurihalina, and Deleya salina and reclassification of V. eurihalina as Halomonas eurihalina comb. nov. Int J Syst Bacteriol 45, 712–716.
- Garriga, M.; Ehrmann, M. A.; Arnau, J.; Hugas, M.; Vogel, R. F. (1998). "Carnimonas nigrificans gen. nov., sp. nov., a bacterial causative agent for black spot formation on cured meat products". International Journal of Systematic Bacteriology 48: 677. doi:10.1099/00207713-48-3-677. PMID 9734022.
- D. R. Arahal, W. Ludwig, K. H. Schleifer and A. Ventosa Phylogeny of the family Halomonadaceae based on 23S and 16S rDNA sequence analyses. International Journal of Systematic and Evolutionary Microbiology, Vol 52, 241-249
- DOBSON (S.J.) and FRANZMANN (P.D.): Unification of the genera Deleya (Baumann et al. 1983), Halomonas (Vreeland et al. 1980), and Halovibrio (Fendrich 1988) and the species Paracoccus halodenitrificans (Robinson and Gibbons 1952) into a single genus, Halomonas, and placement of the genus Zymobacter in the family Halomonadaceae. Int. J. Syst. Bacteriol., 1996, 46, 550-558
- Sorokin, D. Y.; Tindall, B. J. (2006). "The status of the genus name Halovibrio Fendrich 1989 and the identity of the strains Pseudomonas halophila DSM 3050 and Halomonas variabilis DSM 3051. Request for an Opinion". International Journal of Systematic and Evolutionary Microbiology 56 (2): 487–489. doi:10.1099/ijs.0.63965-0.
- Sorokin, D. Y.; Tourova, T. P.; Galinski, E. A.; Belloch, C.; Tindall, B. J. (2006). "Extremely halophilic denitrifying bacteria from hypersaline inland lakes, Halovibrio denitrificans sp. nov. And Halospina denitrificans gen. nov., sp. nov., and evidence that the genus name Halovibrio Fendrich 1989 with the type species Halovibrio variabilis should be associated with DSM 3050". International Journal of Systematic and Evolutionary Microbiology 56 (2): 379. doi:10.1099/ijs.0.63964-0.
- ARAHAL (D.R.), CASTILLO (A.M.), LUDWIG (W.), SCHLEIFER (K.H.) and VENTOSA (A.): Proposal of Cobetia marina gen. nov., comb. nov., within the family Halomonadaceae, to include the species Halomonas marina. Syst. Appl. Microbiol., 2002, 25, 207-211.
- Ntougias, S.; Zervakis, G. I.; Fasseas, C. (2007). "Halotalea alkalilenta gen. Nov., sp. Nov., a novel osmotolerant and alkalitolerant bacterium from alkaline olive mill wastes, and emended description of the family Halomonadaceae Franzmann et al. 1989, emend. Dobson and Franzmann 1996". International Journal of Systematic and Evolutionary Microbiology 57 (9): 1975–1983. doi:10.1099/ijs.0.65078-0.
- Wang, Y.; Tang, S. -K.; Lou, K.; Lee, J. -C.; Jeon, C. O.; Xu, L. -H.; Kim, C. -J.; Li, W. -J. (2009). "Aidingimonas halophila gen. Nov., sp. Nov., a moderately halophilic bacterium isolated from a salt lake". International Journal of Systematic and Evolutionary Microbiology 59 (12): 3088–3094. doi:10.1099/ijs.0.010264-0.
- Ben Ali Gam, Z.; Abdelkafi, S.; Casalot, L.; Tholozan, J. L.; Oueslati, R.; Labat, M. (2007). "Modicisalibacter tunisiensis gen. nov., sp. nov., an aerobic, moderately halophilic bacterium isolated from an oilfield-water injection sample, and emended description of the family Halomonadaceae Franzmann et al. 1989 emend Dobson and Franzmann 1996 emend. Ntougias et al. 2007". International Journal of Systematic and Evolutionary Microbiology 57 (10): 2307. doi:10.1099/ijs.0.65088-0.
- ANAN'INA (L.N.), PLOTNIKOVA (E.G.), GAVRISH (E.Y.), DEMAKOV (V.A.) and EVTUSHENKO (L.I.): Salinicola socius gen. nov., sp. nov., a moderately halophilic bacterium from a naphthalene-utilizing microbial association. Microbiology, 2007, 76, 324-330 (translation of Mikrobiologiya, 2007, 76, 369-376).
- De La Haba, R. R.; Sanchez-Porro, C.; Marquez, M. C.; Ventosa, A. (2009). "Taxonomic study of the genus Salinicola: Transfer of Halomonas salaria and Chromohalobacter salarius to the genus Salinicola as Salinicola salarius comb. Nov. And Salinicola halophilus nom. Nov., respectively". International Journal of Systematic and Evolutionary Microbiology 60 (4): 963–971. doi:10.1099/ijs.0.014480-0.
- Thao, M. L.; Baumann, P. (2004). "Evolutionary Relationships of Primary Prokaryotic Endosymbionts of Whiteflies and Their Hosts". Applied and Environmental Microbiology 70 (6): 3401–3406. doi:10.1128/AEM.70.6.3401-3406.2004. PMC 427722. PMID 15184137.
- VREELAND (R.H.), LITCHFIELD (C.D.), MARTIN (E.L.) and ELLIOT (E.): Halomonas elongata, a new genus and species of extremely salt-tolerant bacteria. Int. J. Syst. Bacteriol., 1980, 30, 485-495
- ROBINSON (J.) and GIBBONS (N.E.): The effect of salts on the growth of Micrococcus halodentrificans n. sp. Canadian Journal of Botany, 1952, 30, 147-154
- Elazari-Volcani 1940
- Hof 1935
- WANG (Y.), TANG (S.K.), LOU (K.), LEE (J.C.), JEON (C.O.), XU (L.H.), KIM (C.J.) and LI (W.J.): Aidingimonas halophila gen. nov., sp. nov., a moderately halophilic bacterium isolated from a salt lake. Int. J. Syst. Evol. Microbiol., 2009, 59, 3088-3094
- COBET (A.B.), WIRSEN (C.) and JONES (G.E.): The effect of nickel on a marine bacterium, Arthrobacter marinus sp. nov. Journal of General Microbiology, 1970, 62, 159-169.
- KIM (M.S.), ROH (S.W.) and BAE (J.W.): Cobetia crustatorum sp. nov., a novel slightly halophilic bacterium isolated from traditional fermented seafood in Korea. Int. J. Syst. Evol. Microbiol., 2010, 60, 620-626.
- NTOUGIAS (S.), ZERVAKIS (G.I.) and FASSEAS (C.): Halotalea alkalilenta gen. nov., sp. nov., a novel osmotolerant and alkalitolerant bacterium from alkaline olive mill wastes, and emended description of the family Halomonadaceae Franzmann et al. 1989
- SÁNCHEZ-PORRO (C.), DE LA HABA (R.R.), SOTO-RAMÍREZ (N.), MÁRQUEZ (M.C.), MONTALVO-RODRÍGUEZ (R.) and VENTOSA (A.): Description of Kushneria aurantia gen. nov., sp. nov., a novel member of the family Halomonadaceae, and a proposal for reclassification of Halomonas marisflavi as Kushneria marisflavi comb. nov., of Halomonas indalinina as Kushneria indalinina comb. nov. and of Halomonas avicenniae as Kushneria avicenniae comb. nov. Int. J. Syst. Evol. Microbiol., 2009, 59, 397-405.
- Arahal, D. R.; Vreeland, R. H.; Litchfield, C. D.; Mormile, M. R.; Tindall, B. J.; Oren, A.; Bejar, V.; Quesada, E.; Ventosa, A. (2007). "Recommended minimal standards for describing new taxa of the family Halomonadaceae". International Journal of Systematic and Evolutionary Microbiology 57 (Pt 10): 2436–2446. doi:10.1099/ijs.0.65430-0. PMID 17911321.
- Arahal, DR; Vreeland, RH; Litchfield, CD, et al. Errata. Recommended minimal standards for describing new taxa of the family Halomonadaceae (vol 57, pg 2436, 2007) Int. J. Syst. Evol. Microbiol., 58, 2673-2673, 2008
- "Classification, taxonomy and systematics of prokaryotes (bacteria)".
- Bortman, Henry (2010-12-02). "Arsenic-Eating Bacteria Opens New Possibilities for Alien Life". Space.Com web site (Space.com). Retrieved 2010-12-02.
- Bortman, Henry (5 October 2009). "Searching for Alien Life, on Earth". Astrobiology Magazine (NASA). Retrieved 2010-12-02.
- Katsnelson, Alla (2 December 2010). "Arsenic-eating microbe may redefine chemistry of life". Nature News. doi:10.1038/news.2010.645. Retrieved 2010-12-02.
By R.W. Julian
When the first settlers from Britain came to what is now the United States, there was little in the way of coined money to use in the marketplace. Barter was the order of the day, and crops, such as tobacco in Virginia, were often used to pay taxes.
By 1652, this lack of a circulating coinage had become serious enough that Massachusetts began minting its own silver coins, suitably debased from the legal English standard. This reduced weight kept the silver at home and provided a measure of relief for hard-pressed merchants. In 1684, however, the coinage was halted when the London government refused to tolerate it any longer.
During the early 1700s, a continuing shortage of gold and silver forced Colonial governments to issue paper money in ever-increasing amounts, which soon depreciated when measured against specie. Parliament repeatedly passed laws forbidding such issues but made little effort to provide Colonial marketplaces with sufficient coinage.
One of the underlying causes of the American Revolution was the second-class status of the American economy. Political leaders, such as Benjamin Franklin, wanted the colonists to manage their own affairs, including the monetary system, without interference from the British Crown.
Currency to Fund the War
It took money to fund the Revolutionary War, and the colonists had little of that. So like the alchemists of old, they created a paper currency backed by the Spanish milled dollar. But it would have been a rare citizen indeed who could have persuaded the government to honor that pledge.
The Continental Congress was not as naïve as it appears, however. It had little or no taxing power and the issues of paper depreciated, creating a reverse tax. In effect, those who were forced to use the Continental currency thus paid for the war as the value of the bills continued to fall.
When the war was over, the public had seen enough of paper currency and a nearly bankrupt national government was barely able to remain in existence. In the 1780s more than one attempt was made to create a monetary system, but all failed because the government could not even find enough money to build a mint.
In 1787, influential citizens met in Philadelphia to hammer out a document later to be known as the Constitution. Written into this document were restrictions on the States issuing paper money, though, despite popular belief to the contrary, the Federal Government had the power to do so.
Hamilton’s Recommendation Shapes Monetary System
The new government began operations in April 1789 under President George Washington, and it was not long before Treasury Secretary Alexander Hamilton was able to bring order out of chaos when it came to finding the necessary funds to administer the country.
By late 1789, enough had been accomplished by Hamilton that the government turned its thoughts to a monetary system. In March 1790, Congress asked Hamilton to determine what was necessary to create our own coinage. His detailed report was submitted to the legislators in January 1791.
Hamilton recommended a bimetallic monetary system with a fixed relationship between gold and silver. He reported that the current international ratio was 15 to 1 (i.e. one ounce of gold was worth 15 ounces of silver) and suggested that we adopt this as our standard.
Some historians, in hindsight, have stated that Hamilton should have chosen a single standard, such as silver, because the gold and silver ratio on international markets tended to change over time. Hamilton, however, was only doing what was generally accepted in the 18th century, and the later failures of Congress to adjust the ratio in a timely fashion were not his fault.
Inspired by Spanish Milled Dollar
Hamilton chose the Spanish milled dollar (8 reales) as the basis for the new American dollar, giving it a content of 371.25 grains (24.06 grams) of pure silver. The weights of the gold coins were then calculated by Hamilton based on the suggested ratio of 15 to 1. He suggested that the fineness be 11/12ths (.917), meaning 11 parts of silver and one of copper.
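Because the article gives the dollar's silver content in both grains and grams and derives the gold coinage from the 15 to 1 ratio, a short worked check may help. The grain-to-gram constant (1 grain = 0.06479891 g) is a standard definition not stated in the article, and reading the dollar's gold equivalent as simply one fifteenth of its silver content is an illustrative interpretation of the ratio, not a quotation from Hamilton's report.

```python
GRAIN_TO_GRAM = 0.06479891  # standard definition of the grain in grams

pure_silver_grains = 371.25  # pure silver in the dollar, per Hamilton's proposal
print(f"{pure_silver_grains} grains = {pure_silver_grains * GRAIN_TO_GRAM:.2f} g")
# -> 24.06 g, matching the figure in the text

ratio = 15.0  # Hamilton's proposed silver-to-gold ratio
pure_gold_per_dollar = pure_silver_grains / ratio
print(f"gold equivalent of one dollar: {pure_gold_per_dollar:.2f} grains")
print(f"gold in a $10 eagle: {10 * pure_gold_per_dollar:.1f} grains")
# -> 24.75 and 247.5 grains respectively, under the 15 to 1 assumption
```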
At first, Congress did little except for a March 1791 resolution authorizing the President to carry out Hamilton’s ideas. By October, Washington had decided that the resolution was insufficient and informed Congressional leaders that something more concrete was required. The Senate got the point, and a committee was appointed to write draft legislation.
The Senate committee ignored Hamilton's fineness and weight suggestions. The silver dollar, with the fractional pieces in proportion, now weighed 416 grains rather than the 405 grains recommended by Hamilton. The fineness was an awkward .8924. The amount of pure silver in the dollar, however, remained at 371.25 grains.
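The "awkward" fineness follows directly from keeping 371.25 grains of pure silver while raising the gross weight to 416 grains, as this one-line illustrative check shows.

```python
pure, gross = 371.25, 416.0  # grains of pure silver and gross coin weight
print(f"fineness = {pure / gross:.4f}")  # -> 0.8924, the figure quoted above
```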
President Washington signed the coinage bill in early April 1792, and the rest of that year was spent preparing for coinage. In July 1792, a trial mintage of half dismes was undertaken under the authority of the President, but the issue was too small to be of importance.
In the 19th century, there was speculation that Martha Washington’s portrait had appeared on the 1792 half disme, but this theory later fell out of favor. Recent research, however, has shown that artist Joseph Ceracchi, who created the designs for the half disme, was a strong admirer of the First Lady and may well have used her portrait for this coin.
15 to 1 Ratio
Because of high bond requirements for key Mint officers, regular coinage of the precious metals did not begin until October 1794, when 1,758 silver dollars were struck. Gold did not follow until July 1795. For a few years, the bimetallic system of gold and silver at the 15 to 1 ratio worked reasonably well. Little of the new coinage went into circulation, however, except for silver dollars and half dollars. Other coins, primarily gold, were exported or kept in bank vaults as backing for paper money.
It should be noted that the early silver coinage, prior to 1835, depended on private citizens bringing their silver bullion or foreign coins to the mints. (There was no government bullion fund for coinage until that year.)
Hamilton’s 15 to 1 ratio was a virtual dead letter by 1800. The international banking community, reacting to the ongoing military threats by Napoleon Bonaparte, grew nervous, and gold became more popular, and thus more valuable, as Europeans sought to find something that could be easily hidden yet maintain its intrinsic value.
With Congress loath to act, Mint Director Elias Boudinot halted the coinage of silver dollars and gold eagles in order to keep our coins from being so heavily exported. The dollars in particular went to China, never to return. The outflow of silver did lessen, but the loss of gold became worse, and by 1815 United States gold coins did not circulate in this country.
The loss of gold to the marketplace meant that silver was now the standard of value in the United States. We were now on a de facto silver standard and would remain so until 1834.
Diversifying the Monetary System
The Mint concentrated after 1805 on half dollars and half eagles, the latter being increasingly exported. The half dollars, on the other hand, were used in everyday transactions as well as backing for private issues of paper money by many banks. Half dollars even reached the frontier as they were used in Indian treaty payments.
Does this mean that the average person had no coins except the copper cent or half dollar for the marketplace? The answer is no because large quantities of small Spanish silver coins, such as the real (equal to 12.5 cents) and 2 reales (equal to 25 cents), were in widespread use. It is for this reason that store prices were often seen as 12.5 or 37.5 cents. Small U.S. silver coins were seldom seen until the late 1820s.
As a result of pressure from President Andrew Jackson, Congress in 1834 passed a law that changed the amount of gold in the coinage. Instead of a ratio of 15 to 1, the new arrangement was 16.002 to 1, more in line with the international marketplace. The result was that gold now came to the United States, rather than leaving our shores.
For the first time in U.S. history, there was now gold in everyday domestic use. By 1836, however, it was realized that the 1834 law had slightly overvalued gold, and the ratio was revised accordingly in January 1837 to 15.998 to 1. One of the reasons for this change was the concern that the new gold coinage might create a situation where silver would again be exported.
The 1837 law also changed the silver coins slightly by altering the fineness to .900. The pure silver content remained the same; however, the dollar weighed 412.5 grains instead of 416. The smaller silver coins were in proportion.
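The fineness figures quoted above follow directly from the pure-silver and gross weights. As a quick arithmetic check, here is an illustrative sketch (not anything taken from the Mint's own records):

```python
# Fineness = pure silver weight / gross coin weight
PURE_SILVER_GRAINS = 371.25                  # unchanged from 1792 through 1837

fineness_1792 = PURE_SILVER_GRAINS / 416.0   # 1792 law: 416-grain dollar
fineness_1837 = PURE_SILVER_GRAINS / 412.5   # 1837 law: 412.5-grain dollar

print(f"1792 fineness: {fineness_1792:.4f}")  # 0.8924, the "awkward" figure
print(f"1837 fineness: {fineness_1837:.3f}")  # 0.900
```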
Impact of Bimetallism
From 1837 through 1848, the nation enjoyed true bimetallism. In 1836 the Mint reintroduced the silver dollar to active use, although it was intended to be used by banks as reserves for their paper money. The idea was to reduce the number of half dollars struck by the Mint and thereby free up the coining presses for other denominations.
Bimetallism suffered a fatal blow with the 1848 discovery of gold in California. So much gold was mined that, coupled with equally large quantities from Australian mines, silver coins were soon undervalued. Bullion dealers began to buy up silver coins with gold, and by 1850 few silver coins were to be found in daily use.
In response to the coin shortage, Congress authorized in early 1851 a silver 3-cent piece, the trime, as a stopgap measure. It was sufficiently debased that bullion dealers had no interest in it; while it did not solve the coin shortage, it was a strong step in the right direction.
The trime weighed only .8 of a gram, the lightest coin ever struck by the United States. The fineness was only .750, well below the .900 of the regular silver coins.
Congress took decisive action in February 1853 by lowering the weights of silver coins, except the dollar, by just under 7 percent. The legal ratio of 15.998 to 1 remained in force, as depositors could still bring silver to the mints for dollar coins; otherwise minor silver coins were struck on government account only. The nation was now on a de facto gold standard because silver dollars no longer circulated. Arrows were placed at the dates of minor silver coins from 1853 to 1855 to denote the decrease in weight.
From 1837 to 1853, the half dollar, with the smaller silver coins in proportion, weighed 206.25 grains (13.36 grams), but this became 192 grains (12.44 grams) in 1853, a reduction of about 6.9 percent. The fineness did not change, and the trime was now struck to the same level of purity.
Conflict Forces Currency Change
The 1853 reform put the monetary system back on an even keel. There was plenty of gold and silver for public use until 1861, when the Civil War erupted. The conflict forced yet another change in the currency. The year 1862 saw the complete disappearance of silver and gold coinage from the marketplace and replacement with paper money. The government had again turned to the printing press, just as in 1776.
In reality, some silver coins did continue to circulate, but only on the West Coast and in limited numbers.
The extraordinary cost of the war made it impossible for several years to bring gold or silver back into the marketplace. In early 1873, however, the mint laws were overhauled and the polite fiction about bimetallism still being in effect was quietly dropped. We were now officially on the single gold standard but without any gold coins in daily use.
By the summer of 1873, minor silver coins (with arrows again at the date, reflecting a slight increase in weight) found their way to the marketplace. The nation had once more weathered a severe monetary crisis and silver coins were in plentiful supply. The half dollar, for example, now weighed 12.5 grams instead of 12.44. Ostensibly, the change was to accommodate the metric system, but the real reason was to use more silver.
One of the denominations abolished by Congress in 1873 was the silver dollar, which was replaced by the Trade dollar, intended to export excess silver to the Orient. However, the ever-falling price of silver meant that those persons with a stake in the mining industry began to call for a resumption of regular dollar coinage. The idea was to allow the public to bring silver to the mints, but supporters failed to realize that this would have brought chaos to the monetary system.
The Trade dollar weighed 420 grains (27.22 grams), 7.5 grains heavier than the old silver dollar. The increased weight was necessary to compete with the Mexican dollar in the Oriental market.
In February 1878, Congress responded by reinstating the silver dollar. However, mindful of the evils that would have resulted from unlimited coinage, the government was instead required to buy silver on the open market for the dollar coinage. The weight of the dollar remained at 412.5 grains and .900 fine silver, as adopted in the 1837 law.
Despite the massive quantities of silver already turned into dollars that nobody really wanted, Congress passed the Sherman Silver Purchase Act in 1890, mandating the purchase of even more silver for dollar coinage. The result was disastrous.
Because the Sherman Act was perceived by European bankers as financial folly, foreigners holding American gold bonds began to demand payment in gold; soon the U.S. Treasury was facing bankruptcy. In 1893, newly inaugurated President Grover Cleveland, in an act of statesmanship, forced Congress to repeal the Sherman Act, thus saving the monetary system from collapse.
The 1896 presidential race between William Jennings Bryan and William McKinley ended in victory for the latter and also marked the effective end of the silver movement. Bryan, who knew next to nothing about monetary systems and market forces, made his famous “Cross of Gold” speech in which he blamed gold for most of the evils in the world. He demanded free coinage of silver at the old ratio of 16 to 1 at a time when the current market ratio was 30 to 1. The 1896 campaign was especially interesting for the tokens and medals that attacked or defended Bryan and his 16 to 1 proposal.
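A rough calculation shows why the 16 to 1 demand was untenable. It relies on one figure not stated in this article, namely that the gold dollar then contained 23.22 grains of pure gold, so treat the sketch below as an illustrative back-of-the-envelope check rather than a historical source:

```python
# Back-of-the-envelope check on the 16 to 1 proposal (illustrative only).
SILVER_IN_DOLLAR_GRAINS = 371.25   # pure silver in the standard silver dollar
GOLD_IN_DOLLAR_GRAINS = 23.22      # pure gold in the gold dollar (assumed figure)
MARKET_RATIO = 30                  # silver-to-gold market ratio around 1896

legal_ratio = SILVER_IN_DOLLAR_GRAINS / GOLD_IN_DOLLAR_GRAINS
melt_value = (SILVER_IN_DOLLAR_GRAINS / MARKET_RATIO) / GOLD_IN_DOLLAR_GRAINS

print(f"ratio implied by the coins: {legal_ratio:.2f} to 1")                 # ~16 to 1
print(f"gold value of a silver dollar's metal: about {melt_value:.2f} dollars")  # ~0.53
```

In other words, free coinage at 16 to 1 would have obliged the mints to accept roughly 53 cents' worth of silver in exchange for a full dollar.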
Formation of the Gold Standard
In 1900, as an afterthought, Congress passed a resolution stating that we were indeed on the gold standard while promising to use silver as much as possible in our coinage. Most legislators did not realize that we had been on the gold standard since 1853.
During World War I, large numbers of silver dollars were melted to aid the British war effort. Beginning in 1921, Peace dollars were struck to replace the melted Morgan dollars.
The depression that began in 1930 reached epic proportions in early 1933, forcing incoming President Franklin D. Roosevelt to take drastic steps. One of these measures, which proved a failure in the long run, was to take the United States off the gold standard and inflate the currency by raising the price of gold. Gold coinage was called in, and the right of the public to own gold was to remain in abeyance until 1974.
Although gold was no longer coined, silver was, and in large quantities. The price of silver remained under $1 per ounce well into the late 1950s, but a growing demand for precious metals eventually forced an increase in price. By the early 1960s, United States mints were striking large numbers of silver coins whose face value was barely above their melt value. A 1965 law put an end to silver coinage in the United States except for a debased half dollar containing 40% silver and weighing 11.5 grams.
The last of these debased half dollars was struck in 1970, and after that silver was permanently gone from the everyday coinage seen by the American public.
Silver coins had served the nation well until the 1960s but now are little more than a distant memory to the older generation of collectors.
Researchers at the National Institute of Standards and Technology (NIST) have created a tunable superconducting circuit on a chip that can place a single microwave photon (particle of light) in two frequencies, or colors, at the same time.
This curious "superposition," a hallmark of the quantum world, is a chip-scale, microwave version of a common optics experiment in which a device called a beam-splitter sends a photon into either of two possible paths across a table of lasers, lenses and mirrors. The new NIST circuit can be used to create and manipulate different quantum states, and is thus a prototype of the scientific community's long-sought "optics table on a chip."
Described in Nature Physics,* the NIST experiments also created the first microwave-based bit for linear optical quantum computing. This type of quantum computer is typically envisioned as storing information in either the path of a light beam or the polarization (orientation) of single photons. In contrast, a microwave version would store information in a photon's frequency. Quantum computers, if they can be built, could solve certain problems that are intractable today.
The new NIST circuit combines components used in superconducting quantum computing experiments—a single photon source, a cavity that naturally resonates or vibrates at particular frequencies, and a coupling device called a SQUID (superconducting quantum interference device). Scientists tuned the SQUID properties to couple together two resonant frequencies of the cavity and then manipulated a photon to make it oscillate between different superpositions of the two frequencies. For instance, the photon could switch back and forth from equal 50/50 proportions of both frequencies to an uneven 75/25 split. This experimental setup traps photons in a "box" (the cavity) instead of sending them flying across an optical table.
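To make the frequency-superposition idea concrete, here is a minimal numerical sketch (an illustration, not the NIST team's analysis code): a single photon shared between two cavity modes is described by two complex amplitudes, and a 75/25 split simply means the squared magnitudes of those amplitudes are 0.75 and 0.25.

```python
import numpy as np

# Hypothetical illustration: one photon in a superposition of two cavity modes.
# The state is (a, b) with |a|^2 + |b|^2 = 1; the squared magnitudes are the
# probabilities of finding the photon at frequency 1 or frequency 2.

def superposition(p_mode1: float) -> np.ndarray:
    """Return amplitudes for a photon found in mode 1 with probability p_mode1."""
    return np.array([np.sqrt(p_mode1), np.sqrt(1.0 - p_mode1)])

for p in (0.5, 0.75):                       # the 50/50 and 75/25 splits from the text
    state = superposition(p)
    probs = np.abs(state) ** 2
    print(f"amplitudes {state.round(3)} -> probabilities {probs.round(2)}")
```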
"This is a new way to manipulate microwave quantum states trapped in a box," says NIST physicist José Aumentado, a co-author of the new paper. "The reason this is exciting is it's already technically feasible to produce interesting quantum states in chip-scale devices such as superconducting resonators, and now we can manipulate these states just as in traditional optics setups."
NIST researchers can control how the new circuit couples different quantum states of the resonator over time. As a result, they can create sequences of interactions to make simple optical circuits and reproduce traditional optics experiments. For example, they can make a measurement tool called an interferometer based on the frequency/color of a single photon, or produce special quantum states of light such as "squeezed" light.
|Module 11 - Extreme Values and Optimization|
|Lesson 11.3: The Can Problem|
In Lesson 11.2 the length of a removed arc that maximized the volume of a cone was found. In this lesson the radius of a can that minimizes surface area will be explored.
Stating the Problem
A right cylindrical can is being designed to hold 255 cubic units. Find the radius of the can with the least surface area and find the minimum surface area.
Drawing the Diagram
The diagram below illustrates a can of radius r and height h.
Finding Surface Area as a Function of r and h
The quantity to be minimized is surface area. The surface area includes the lateral surface, formed by a rectangle with height h and width 2πr (the circumference of the circular base), and two circles that form the top and bottom. Therefore, the surface area is the sum of the areas of the two circles and the rectangle: S = 2πr² + 2πrh.
Defining r and h as Functions of x
The surface area is a function of two variables, r and h. If r and h are both defined in terms of a single variable x, then the surface area will be defined as a function of x, which will allow it to be graphed on the TI-89.
Writing h and r as Functions of x
The desired volume is 255 cubic units and the formula for volume is V = πr²h. Therefore, 255 = πr²h.
Solving the equation for h gives h = 255/(πr²).
Letting r = x, r and h can be defined as functions of x.
Now we have the surface area completely defined as a function of x: s(x) = 2πx² + 510/x.
Finding the Derivative of Surface Area with respect to x
Store the derivative of the surface area, s, in a new function named ds.
Alternatively, we could compute the derivative of s(x) directly from the formula found in the previous figure.
Finding the Zero(s) of the Derivative
Because the radius of the can must be greater than zero, the domain of the surface area function is all x > 0. Notice that if r is very large, then h is very small.
Find the exact value and the approximate value of the zero of the derivative.
Checking for Other Possible Extreme Values
The first derivative is defined everywhere in the function's domain, so x ≈ 3.43653 is the only critical point. There are no endpoints to consider.
Using the Second Derivative
The second derivative can often be used to determine whether a critical point where the first derivative is zero is a local maximum or a local minimum by revealing the function's concavity at that point: if the second derivative is positive at the critical point, the graph is concave upward and the point is a local minimum; if it is negative, the graph is concave downward and the point is a local maximum. No conclusion can be drawn if the second derivative is zero or undefined at the critical point.
Test the critical point in the second derivative of the surface area function to determine whether it represents a local minimum. Use the exact value from the history screen.
Because the second derivative of the surface area function is positive at x ≈ 3.43653, the graph of the function is concave upward and has a local minimum value there.
Because there are no endpoints and no other critical points, the local minimum value is the absolute minimum value.
Finding the Minimum Surface Area
Find the minimum surface area of the can by evaluating s(x) with the exact x-value that minimizes surface area. Then get the approximate value of the minimum surface area.
The minimum surface area of a cylindrical can that holds 255 cubic units is approximately 222.61 square units. The radius which will result in the minimum surface area is approximately 3.43653 units.
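The same minimization can be checked off the calculator. The sketch below (Python with SymPy, assuming that library is available; it is not part of the TI-89 lesson) reproduces the critical point, the second-derivative test, and the minimum surface area for a volume of 255:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
s = 2*sp.pi*x**2 + 510/x             # s(x) = 2*pi*x^2 + 2*pi*x*h with h = 255/(pi*x^2)

ds = sp.diff(s, x)                    # first derivative
crit = sp.solve(sp.Eq(ds, 0), x)[0]   # the positive real root, (255/(2*pi))**(1/3)

print(crit, float(crit))                          # exact form, then ~3.43653
print(sp.diff(s, x, 2).subs(x, crit) > 0)         # True: concave up, local minimum
print(float(s.subs(x, crit)))                     # ~222.61 square units
```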
Graphing the Surface Area Function and Its First Derivative
As shown in Lesson 11.2, the graphs of the function and its first derivative illustrate the relationship between the function's minimum value and the corresponding zero of its derivative.
Generalizing and Extending the Can Problem
The problem can be generalized by replacing the fixed volume of 255 with a letter and proceeding as in the last example. The problem can be further extended by exploring the ratio of height to radius for a right cylindrical can with fixed volume that has minimum surface area.
Replacing 255 with v
The first occurrence of the fixed value of the volume appears in the function that represents the height of the can. Replace the volume of the can with the letter "v" and redefine the function.
Finding The Zero(s) of the Derivative
Determine when the derivative is zero. For x > 0, the derivative is zero when x = (v/(2π))^(1/3).
This value of x is the radius which will result in the minimum surface area.
The corresponding value of the minimum surface area is s = 3(2π)^(1/3)·v^(2/3).
Finding the Ratio
Evaluate the ratio of height to radius, h/r, at the zero of the derivative.
Paste the last part of the command from the History Area.
The ratio of height to radius is 2, regardless of the volume v. That is, for right cylindrical cans with a fixed volume, when the height of the can is twice the radius the surface area of the can will be minimized.
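The generalized result can be verified symbolically as well. This is an illustrative sketch (Python with SymPy, not part of the lesson's TI-89 work), keeping the volume as a symbol v:

```python
import sympy as sp

x, v = sp.symbols('x v', positive=True)
h = v / (sp.pi * x**2)               # height of a can with fixed volume v
s = 2*sp.pi*x**2 + 2*sp.pi*x*h       # surface area as a function of the radius x

crit = sp.solve(sp.Eq(sp.diff(s, x), 0), x)[0]   # the positive real root

print(sp.simplify(crit))                     # equivalent to (v/(2*pi))**(1/3)
print(sp.simplify(s.subs(x, crit)))          # equivalent to 3*(2*pi)**(1/3)*v**(2/3)
print(sp.simplify(h.subs(x, crit) / crit))   # 2: the height is twice the radius
```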
Short division is similar to long division, but it involves less written work and more mental arithmetic. The general method for both short and long division is the same, but in short division, you write down less of your work, doing the simple subtraction and multiplication mentally. To understand short division, you must have mastered the basic skills of subtraction and multiplication. Short division is ideal when the divisor, the number that you're dividing into another number, is less than 10.
Doing Short Division
1. Write the problem. To write the problem correctly, place the divisor, the number that you're dividing into another number, outside the long division bar. Place the dividend, the number that you'll be dividing by the divisor, inside the long division bar. The quotient, or your result, will go on top of the division bar. Remember that for short division to work, your divisor has to be less than 10.
- For example: In 847/5, 5 is the divisor, so write it outside the division bar. 847 is the dividend, so place it inside the division bar.
- The quotient is blank because you haven't started dividing yet.
2. Divide the first number of the dividend by the divisor. When you divide, you are stating how many times one number can fit into another number. For example, 2 can fit into 6 three times (2 + 2 + 2 = 6). Continuing with our example, 5 goes into 8 just one time, but it doesn't divide into 8 evenly; we have 3 left over, and this leftover number is called the remainder. Write the number 1, the first digit of the quotient, on top of the division bar.
- If you were using long division, you would write out 8 minus 5 equals 3 and then bring down the 4 from the dividend. Short division simplifies this written process.
3. Write the remainder next to the first number of the dividend. Write a small 3 to the top right of the number 8. This will remind you that there was a remainder of 3 when you divided 8 by 5. The next number you will divide into is the combination of the remainder and the second number.
- In our example, the next number is 34.
4. Divide the number formed by the first remainder and the second number in the dividend by the divisor. The remainder is 3 and the second number of the dividend is 4, so the new number you'll be working with is 34.
- Now, divide 34 by 5. 5 goes into 34 six times (5 x 6 = 30) with a remainder of 4.
- Write your quotient, 6, on the division bar to the right of the 1.
- Again, keep in mind you are doing most of the math mentally.
5. Write the second remainder above the second number in the dividend and divide. Just as you did the first time, simply write a small 4 above and to the right of the number 4. The next number you will divide into is 47.
- Now, divide 47 by 5. 5 goes into 47 nine times (5 x 9 = 45) with a remainder of 2.
- Write your quotient, 9, on the division bar to the right of the 6.
6. Write the final remainder on the division bar. Write "r 2" to the right of the quotient on the division bar. The final answer of 847/5 is 169 with a remainder of 2. (The whole procedure is summarized in the short code sketch below.)
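The digit-by-digit, carry-the-remainder procedure above can be written in a few lines of code. The following Python sketch is an illustration only, not part of the original article; it also handles the special cases covered in the next section:

```python
def short_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Divide digit by digit, carrying the remainder, as in short division."""
    if not (0 < divisor < 10):
        raise ValueError("short division expects a one-digit divisor")
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):                  # work left to right through the dividend
        current = remainder * 10 + int(digit)    # prepend the carried remainder
        quotient_digits.append(str(current // divisor))
        remainder = current % divisor            # the small number written beside the digit
    quotient = int("".join(quotient_digits))     # leading zeros vanish automatically
    return quotient, remainder

print(short_division(847, 5))    # (169, 2)
print(short_division(3208, 8))   # (401, 0)
```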
Dividing in Special Cases
1. Recognize that the divisor may not go into the first number of the dividend. In some cases, the divisor will be larger than the first number of the dividend and you will not be able to divide. In this case, you will divide into the first two numbers of the dividend.
- For example, 567/7. In this case, 7 doesn’t go into 5, but it does go into 56 eight times. When solving this problem, write the first number of the quotient over the 6 instead of the 5 and continue solving. The final answer is 81.
2. Add a zero in the quotient if the divisor does not go into the next digit of the dividend. This is similar to the first special case, except this time, you will put a zero in the middle of the quotient. If you encounter a problem like this, simply write a zero in the quotient, and try dividing into the next digits of the dividend until the divisor fits.
- For example, 3208/8, 8 goes into 32 four times, but does not go into 0. You would add a 0 and then divide into the next number. 8 goes into 8 one time, therefore, the solution would be 401.
3. Practice with some more examples. The best way to understand short division is practicing with many different types of problems. Below are a few more examples for you to try out.
- Divide 748 by 2. How many times can 2 go into 7? Three with a remainder of 1. Write 1 next to the 4. How many times can 2 go into 14? Seven times, evenly. Two goes into 8 four times, evenly; therefore, the final answer is 374.
- Divide 368 by 8. Eight doesn’t fit into 3, but it does divide into 36. Eight fits into 36 four times with a remainder of 4 (8 x 4 = 32, 36 - 32 = 4). Write the 4 next to the 8, making 48. Eight can go into 48 six times, evenly; therefore, the final answer is 46.
- Divide 1228 by 4. Four doesn’t fit into 1, but it does fit into 12 three times, evenly. Four does not fit into 2, so you must add a zero in the quotient and divide four into 28. Four fits into 28 seven times; therefore, the final answer is 307.
How do I do short division when the division does not come out evenly? For a problem such as 362 divided by 56, put a decimal point in the quotient above where the decimal point of the dividend would fall, append zeros to the dividend (362.0, 362.00, and so on), and keep dividing; the quotient will then contain a decimal.
General relativity, or the general theory of relativity, is the geometric theory of gravitation published by Albert Einstein in 1916 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations.
Some predictions of general relativity differ significantly from those of classical physics, especially concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Examples of such differences include gravitational time dilation, gravitational lensing, the gravitational redshift of light, and the gravitational time delay. The predictions of general relativity have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. However, unanswered questions remain, the most fundamental being how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity.
Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars. There is ample evidence that the intense radiation emitted by certain kinds of astronomical objects is due to black holes; for example, microquasars and active galactic nuclei result from the presence of stellar black holes and black holes of a much more massive type, respectively. The bending of light by gravity can lead to the phenomenon of gravitational lensing, in which multiple images of the same distant astronomical object are visible in the sky. General relativity also predicts the existence of gravitational waves, which have since been observed indirectly; a direct measurement is the aim of projects such as LIGO and NASA/ESA Laser Interferometer Space Antenna and various pulsar timing arrays. In addition, general relativity is the basis of current cosmological models of a consistently expanding universe.
- 1 History
- 2 From classical mechanics to general relativity
- 3 Definition and basic applications
- 4 Consequences of Einstein's theory
- 5 Astrophysical applications
- 6 Advanced concepts
- 7 Relationship with quantum theory
- 8 Current status
- 9 See also
- 10 Notes
- 11 References
- 12 Further reading
- 13 External links
Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present, and form the core of Einstein's general theory of relativity.
The Einstein field equations are nonlinear and very difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the so-called Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, which eventually resulted in the Reissner–Nordström solution, now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to reproduce that "observation". By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein himself had shown in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"). Similarly, a 1919 expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of May 29, 1919, making Einstein instantly famous. Yet the theory entered the mainstream of theoretical physics and astrophysics only with the developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology, too, became amenable to direct observational tests.
From classical mechanics to general relativity
General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity.
Geometry of Newtonian gravity
At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime.
Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in a small enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is at rest in a gravitational field, or in free space aboard an accelerating rocket generating a force equal to gravity.
Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The result is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass.
As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group which also includes translations and rotations.) The differences between the two become significant when we are dealing with speeds approaching the speed of light, and with high-energy phenomena.
With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see the image on the left). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent. In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the space–time's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure.
Special relativity is defined in the absence of gravity, so for practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry.
A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity.
The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish).
Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress (that is, pressure and shear). Using the equivalence principle, this tensor is readily generalized to curved space-time. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero— the simplest set of equations are what are called Einstein's (field) equations:
Einstein's field equations

R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \kappa \, T_{\mu\nu},

where R_{\mu\nu} is the Ricci curvature tensor, g_{\mu\nu} is the metric tensor, and R = g^{\mu\nu} R_{\mu\nu} is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as R_{\mu\nu} = R^{\alpha}{}_{\mu\alpha\nu}.
On the right-hand side, T_{\mu\nu} is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits (or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics), the proportionality constant can be fixed as \kappa = 8\pi G/c^4, with G the gravitational constant and c the speed of light. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations, R_{\mu\nu} = 0.
There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Brans–Dicke theory, teleparallelism, and Einstein–Cartan theory.
Definition and basic applications
The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building.
Definition and basic properties
General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional, pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve.
While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation.
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.
The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present.
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub-NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).
Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories.
Consequences of Einstein's theory
General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of the ninety years of research that followed Einstein's initial publication.
Gravitational time dilation and frequency shift
Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation.
Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.
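The size of these effects for GPS can be estimated with the usual first-order formulas: the gravitational shift goes as ΔΦ/c² and the special-relativistic slowing as v²/2c². The short Python sketch below is a back-of-the-envelope illustration using rounded, nominal orbit figures that are assumptions rather than values taken from this article:

```python
import math

G_M_EARTH = 3.986004418e14     # gravitational parameter GM of Earth, m^3/s^2
C = 2.99792458e8               # speed of light, m/s
R_EARTH = 6.371e6              # mean Earth radius, m
R_GPS = 2.6571e7               # nominal GPS orbital radius (~20,200 km altitude), m
DAY = 86400.0                  # seconds per day

# Gravitational blueshift of the satellite clock relative to a ground clock
grav = G_M_EARTH * (1/R_EARTH - 1/R_GPS) / C**2

# Special-relativistic slowing due to orbital speed v = sqrt(GM/r)
v = math.sqrt(G_M_EARTH / R_GPS)
kinematic = v**2 / (2 * C**2)

print(f"gravitational gain: +{grav * DAY * 1e6:.1f} microseconds/day")        # ~ +45.7
print(f"velocity loss     : -{kinematic * DAY * 1e6:.1f} microseconds/day")   # ~ -7.2
print(f"net offset        : ~{(grav - kinematic) * DAY * 1e6:.0f} microseconds/day")  # ~ +38
```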
Light deflection and gravitational time delay
General relativity predicts that the path of light is bent in a gravitational field; light passing a massive body is deflected towards that body. This effect has been confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun.
This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity.
Closely related to light deflection is the gravitational time delay (or Shapiro delay), the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space.
One of several analogies between weak-field gravity and electromagnetism is that, analogous to electromagnetic waves, there are gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right). Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, for weak fields, a linear approximation can be made. Such linearized gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by 10^−21 or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed.
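To get a feel for what a strain of 10^−21 means, consider a detector with kilometre-scale arms; the sketch below is purely illustrative, and the 4 km arm length and proton radius are assumed round figures rather than numbers given in this article:

```python
strain = 1e-21              # typical expected fractional change in distance
arm_length_m = 4000.0       # nominal 4 km interferometer arm (assumed figure)
proton_radius_m = 8.4e-16   # approximate proton charge radius (assumed figure)

delta_l = strain * arm_length_m     # absolute change in arm length

print(f"arm length change: {delta_l:.1e} m")                     # ~4e-18 m
print(f"that is ~{delta_l / proton_radius_m:.3f} proton radii")  # ~0.005
```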
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or so-called Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.
Orbital effects and the relativity of direction
General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction.
Precession of apsides
In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess—the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of the anomalous perihelion shift of the planet Mercury, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations.
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude.
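For a rough numerical illustration of the effect's size, the standard first-order formula Δφ = 6πGM / [c²a(1−e²)] per orbit can be evaluated with rounded orbital elements for Mercury. None of the numbers below come from this article; they are assumed nominal values for the sake of the sketch:

```python
import math

G_M_SUN = 1.32712440018e20    # gravitational parameter GM of the Sun, m^3/s^2
C = 2.99792458e8              # speed of light, m/s
A = 5.791e10                  # Mercury's semi-major axis, m
E = 0.2056                    # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969          # Mercury's orbital period

# Perihelion advance per orbit (radians), to first post-Newtonian order
dphi = 6 * math.pi * G_M_SUN / (C**2 * A * (1 - E**2))

orbits_per_century = 36525 / PERIOD_DAYS
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600

print(f"{arcsec:.0f} arcseconds per century")   # ~43, matching the anomalous shift
```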
According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are very compact, significant amounts of energy are emitted in the form of gravitational radiation.
The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737-3039, in which both stars are pulsars.
Geodetic precession and frame-dragging
Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.
Near a rotating mass, there are so-called gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. Also the Mars Global Surveyor probe around Mars has been used.
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs. The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.
Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies.
Gravitational wave astronomy
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). However, gravitational waves reaching us from the depths of the cosmos have not been detected directly. Such detection is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10^−9 to 10^−6 hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) due for launch in 2015.
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string.
Black holes and other compact objects
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.
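The mass-to-radius criterion can be made concrete with the Schwarzschild radius, r_s = 2GM/c², the size to which a given mass would have to be compressed for a non-rotating black hole to form. The short computation below is illustrative only, and the masses used are round assumed figures rather than values from this article:

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius r_s = 2GM/c^2 below which an object of this mass is a black hole."""
    return 2 * G * mass_kg / C**2

print(f"Sun            : {schwarzschild_radius(M_SUN) / 1e3:.1f} km")        # ~3 km
print(f"10 solar masses: {schwarzschild_radius(10 * M_SUN) / 1e3:.0f} km")    # ~30 km
print(f"4e6 solar masses (galactic-centre scale): "
      f"{schwarzschild_radius(4e6 * M_SUN) / 1e3:.2e} km")                    # ~1.2e7 km
```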
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed. General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.
Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry.
The current models of cosmology are based on Einstein's field equations including the cosmological constant \Lambda, R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda \, g_{\mu\nu} = \kappa \, T_{\mu\nu}, where g_{\mu\nu} is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be so-called dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.
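The "unusual equation of state" can be stated concisely. Writing the pressure of a cosmic component as p = wρc² (the dimensionless parameter w is introduced here only for illustration), the acceleration equation

\[ \frac{\ddot a}{a}=-\frac{4\pi G}{3}\Bigl(\rho+\frac{3p}{c^{2}}\Bigr) \]

shows that accelerated expansion requires the dominant component to satisfy w < −1/3; a cosmological constant corresponds to w = −1 exactly.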
A so-called inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below).
Kurt Gödel showed that solutions of Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other similarly impractical solutions of general relativity containing CTCs have been found, such as the Tipler cylinder and traversable wormholes.
Causal structure and global geometry
In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out from A arrives at X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams.
Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of so-called energy conditions) are used to derive general results.
Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier.
Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. In the long run, they are rather simple objects characterized by eleven parameters: energy, linear momentum (three components), angular momentum (three components), location at a specified time (three components) and electric charge. This is stated by the black hole uniqueness theorems: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple.
Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for black hole area to decrease—as long as other processes ensure that, overall, entropy increases. As thermodynamical objects with non-zero temperature, black holes should emit thermal radiation. Semi-classical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below).
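The correspondence can be made quantitative with two standard relations, the Bekenstein–Hawking entropy and, for a Schwarzschild black hole of mass M and horizon area A (symbols used here for illustration), the Hawking temperature:

\[ S_{\mathrm{BH}}=\frac{k_{B}c^{3}A}{4G\hbar},\qquad T_{H}=\frac{\hbar c^{3}}{8\pi G M k_{B}}\approx 6\times10^{-8}\,\mathrm{K}\times\frac{M_{\odot}}{M}. \]

Stellar-mass and larger black holes are thus far colder than the cosmic background radiation, and their evaporation is negligible on astrophysical timescales.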
There are other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semi-classical radiation known as Unruh radiation.
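The temperature associated with such an accelerated observer's horizon is given by the Unruh formula, with a denoting the proper acceleration (a symbol used here only for illustration):

\[ T_{U}=\frac{\hbar a}{2\pi c k_{B}}\approx 4\times10^{-20}\,\mathrm{K}\quad\text{for } a\approx 10\,\mathrm{m/s^{2}}, \]

which indicates why the effect has so far eluded direct observation.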
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the so-called BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.
Each solution of Einstein's equation encompasses the whole history of a universe — it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories.
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in so-called "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity.
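Schematically, and in units with c = 1, such a split writes the metric in terms of a lapse function N, a shift vector β^i and a spatial metric γ_ij, the standard ADM variables (quoted here for illustration):

\[ ds^{2}=-N^{2}\,dt^{2}+\gamma_{ij}\,(dx^{i}+\beta^{i}dt)(dx^{j}+\beta^{j}dt). \]

Einstein's equations then separate into constraint equations that the data on each spatial slice must satisfy and evolution equations that carry those data from one slice to the next.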
Global and quasi-local quantities
The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy.
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the so-called Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.
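As one example, and in units with c = 1, the ADM mass of an asymptotically flat spacetime can be read off from the fall-off of the spatial metric γ_ij in asymptotically Cartesian coordinates; a standard form of the surface integral at spatial infinity (quoted here for illustration) is

\[ M_{\mathrm{ADM}}=\frac{1}{16\pi G}\,\lim_{r\to\infty}\oint_{S_{r}}\bigl(\partial_{j}\gamma_{ij}-\partial_{i}\gamma_{jj}\bigr)\,n^{i}\,dA, \]

which depends only on the asymptotic behaviour of the field, in keeping with the picture of an "infinitely distant observer".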
Relationship with quantum theory
If general relativity is considered one of the two pillars of modern physics, quantum theory, the basis of understanding matter from elementary particles to solid state physics, is the other. However, it is still an open question as to how the concepts of quantum theory can be reconciled with those of general relativity.
Quantum field theory in curved spacetime
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation, leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.
The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. At low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the result is a set of models devoid of all predictive power ("non-renormalizability").
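The root of the difficulty can be seen by dimensional analysis: Newton's constant defines the Planck energy E_P (a symbol used here for illustration), and the effective dimensionless strength of graviton interactions at energy E grows as

\[ \frac{G E^{2}}{\hbar c^{5}}=\Bigl(\frac{E}{E_{P}}\Bigr)^{2},\qquad E_{P}=\sqrt{\frac{\hbar c^{5}}{G}}\approx 1.2\times10^{19}\,\mathrm{GeV}, \]

so quantum corrections are minute at accessible energies but blow up near the Planck scale, where an infinite number of undetermined counterterms would be required.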
One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.
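Schematically, instead of a Schrödinger equation with a time derivative, one obtains a constraint on the wave functional Ψ of the spatial geometry γ_ij (written here only in symbolic form),

\[ \hat{H}\,\Psi[\gamma_{ij}]=0, \]

in which no external time parameter appears; making mathematical and conceptual sense of this "problem of time" is one facet of the difficulties mentioned above.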
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being dynamical triangulations, causal sets, twistor models or the path-integral based models of quantum cosmology.
All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics. Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, and increasingly powerful computer simulations (such as those describing merging black holes) are run. The race for the first direct detection of gravitational waves continues, in the hope of creating opportunities to test the theory's validity for much stronger gravitational fields than has been possible to date. Almost a hundred years after its publication, general relativity remains a highly active area of research.
- Center of mass (relativistic)
- Contributors to general relativity
- Derivations of the Lorentz transformations
- Ehrenfest paradox
- Einstein–Hilbert action
- Introduction to mathematics of general relativity
- Relativity priority dispute
- Ricci calculus
- Tests of general relativity
- Timeline of gravitational physics and relativity
- Two-body problem in general relativity
- "Nobel Prize Biography". Nobel Prize Biography. Nobel Prize. Retrieved 25 February 2011.
- Pais 1982, ch. 9 to 15, Janssen 2005; an up-to-date collection of current research, including reprints of many of the original articles, is Renn 2007; an accessible overview can be found in Renn 2005, pp. 110ff. An early key article is Einstein 1907, cf. Pais 1982, ch. 9. The publication featuring the field equations is Einstein 1915, cf. Pais 1982, ch. 11–15
- Schwarzschild 1916a, Schwarzschild 1916b and Reissner 1916 (later complemented in Nordström 1918)
- Einstein 1917, cf. Pais 1982, ch. 15e
- Hubble's original article is Hubble 1929; an accessible overview is given in Singh 2004, ch. 2–4
- As reported in Gamow 1970. Einstein's condemnation would prove to be premature, cf. the section Cosmology, below
- Pais 1982, pp. 253–254
- Kennefick 2005, Kennefick 2007
- Pais 1982, ch. 16
- Thorne, Kip (2003). "Warping spacetime". The future of theoretical physics and cosmology: celebrating Stephen Hawking's 60th birthday. Cambridge University Press. p. 74. ISBN 0-521-82081-2., Extract of page 74
- Israel 1987, ch. 7.8–7.10, Thorne 1994, ch. 3–9
- Sections Orbital effects and the relativity of direction, Gravitational time dilation and frequency shift and Light deflection and gravitational time delay, and references therein
- Section Cosmology and references therein; the historical development is in Overbye 1999
- The following exposition re-traces that of Ehlers 1973, sec. 1
- Arnold 1989, ch. 1
- Ehlers 1973, pp. 5f
- Will 1993, sec. 2.4, Will 2006, sec. 2
- Wheeler 1990, ch. 2
- Ehlers 1973, sec. 1.2, Havas 1964, Künzle 1972. The simple thought experiment in question was first described in Heckmann & Schücking 1959
- Ehlers 1973, pp. 10f
- Good introductions are, in order of increasing presupposed knowledge of mathematics, Giulini 2005, Mermin 2005, and Rindler 1991; for accounts of precision experiments, cf. part IV of Ehlers & Lämmerzahl 2006
- An in-depth comparison between the two symmetry groups can be found in Giulini 2006a
- Rindler 1991, sec. 22, Synge 1972, ch. 1 and 2
- Ehlers 1973, sec. 2.3
- Ehlers 1973, sec. 1.4, Schutz 1985, sec. 5.1
- Ehlers 1973, pp. 17ff; a derivation can be found in Mermin 2005, ch. 12. For the experimental evidence, cf. the section Gravitational time dilation and frequency shift, below
- Rindler 2001, sec. 1.13; for an elementary account, see Wheeler 1990, ch. 2; there are, however, some differences between the modern version and Einstein's original concept used in the historical derivation of general relativity, cf. Norton 1985
- Ehlers 1973, sec. 1.4; for the experimental evidence, see once more section Gravitational time dilation and frequency shift. Choosing a different connection with non-zero torsion leads to a modified theory known as Einstein–Cartan theory
- Ehlers 1973, p. 16, Kenyon 1990, sec. 7.2, Weinberg 1972, sec. 2.8
- Ehlers 1973, pp. 19–22; for similar derivations, see sections 1 and 2 of ch. 7 in Weinberg 1972. The Einstein tensor is the only divergence-free tensor that is a function of the metric coefficients, their first and second derivatives at most, and allows the spacetime of special relativity as a solution in the absence of sources of gravity, cf. Lovelock 1972. The tensors on both side are of second rank, that is, they can each be thought of as 4×4 matrices, each of which contains ten independent terms; hence, the above represents ten coupled equations. The fact that, as a consequence of geometric relations known as Bianchi identities, the Einstein tensor satisfies a further four identities reduces these to six independent equations, e.g. Schutz 1985, sec. 8.3
- Kenyon 1990, sec. 7.4
- Brans & Dicke 1961, Weinberg 1972, sec. 3 in ch. 7, Goenner 2004, sec. 7.2, and Trautman 2006, respectively
- Wald 1984, ch. 4, Weinberg 1972, ch. 7 or, in fact, any other textbook on general relativity
- At least approximately, cf. Poisson 2004
- Wheeler 1990, p. xi
- Wald 1984, sec. 4.4
- Wald 1984, sec. 4.1
- For the (conceptual and historical) difficulties in defining a general principle of relativity and separating it from the notion of general covariance, see Giulini 2006b
- section 5 in ch. 12 of Weinberg 1972
- Introductory chapters of Stephani et al. 2003
- A review showing Einstein's equation in the broader context of other PDEs with physical significance is Geroch 1996
- For background information and a list of solutions, cf. Stephani et al. 2003; a more recent review can be found in MacCallum 2006
- Chandrasekhar 1983, ch. 3,5,6
- Narlikar 1993, ch. 4, sec. 3.3
- Brief descriptions of these and further interesting solutions can be found in Hawking & Ellis 1973, ch. 5
- Lehner 2002
- For instance Wald 1984, sec. 4.4
- Will 1993, sec. 4.1 and 4.2
- Will 2006, sec. 3.2, Will 1993, ch. 4
- Rindler 2001, pp. 24–26 vs. pp. 236–237 and Ohanian & Ruffini 1994, pp. 164–172. Einstein derived these effects using the equivalence principle as early as 1907, cf. Einstein 1907 and the description in Pais 1982, pp. 196–198
- Rindler 2001, pp. 24–26; Misner, Thorne & Wheeler 1973, § 38.5
- Pound–Rebka experiment, see Pound & Rebka 1959, Pound & Rebka 1960; Pound & Snider 1964; a list of further experiments is given in Ohanian & Ruffini 1994, table 4.1 on p. 186
- Greenstein, Oke & Shipman 1971; the most recent and most accurate Sirius B measurements are published in Barstow, Bond et al. 2005.
- Starting with the Hafele–Keating experiment, Hafele & Keating 1972a and Hafele & Keating 1972b, and culminating in the Gravity Probe A experiment; an overview of experiments can be found in Ohanian & Ruffini 1994, table 4.1 on p. 186
- GPS is continually tested by comparing atomic clocks on the ground and aboard orbiting satellites; for an account of relativistic effects, see Ashby 2002 and Ashby 2003
- Stairs 2003 and Kramer 2004
- General overviews can be found in section 2.1. of Will 2006; Will 2003, pp. 32–36; Ohanian & Ruffini 1994, sec. 4.2
- Ohanian & Ruffini 1994, pp. 164–172
- Cf. Kennefick 2005 for the classic early measurements by the Eddington expeditions; for an overview of more recent measurements, see Ohanian & Ruffini 1994, ch. 4.3. For the most precise direct modern observations using quasars, cf. Shapiro et al. 2004
- This is not an independent axiom; it can be derived from Einstein's equations and the Maxwell Lagrangian using a WKB approximation, cf. Ehlers 1973, sec. 5
- Blanchet 2006, sec. 1.3
- Rindler 2001, sec. 1.16; for the historical examples, Israel 1987, pp. 202–204; in fact, Einstein published one such derivation as Einstein 1907. Such calculations tacitly assume that the geometry of space is Euclidean, cf. Ehlers & Rindler 1997
- From the standpoint of Einstein's theory, these derivations take into account the effect of gravity on time, but not its consequences for the warping of space, cf. Rindler 2001, sec. 11.11
- For the Sun's gravitational field using radar signals reflected from planets such as Venus and Mercury, cf. Shapiro 1964, Weinberg 1972, ch. 8, sec. 7; for signals actively sent back by space probes (transponder measurements), cf. Bertotti, Iess & Tortora 2003; for an overview, see Ohanian & Ruffini 1994, table 4.4 on p. 200; for more recent measurements using signals received from a pulsar that is part of a binary system, the gravitational field causing the time delay being that of the other pulsar, cf. Stairs 2003, sec. 4.4
- Will 1993, sec. 7.1 and 7.2
- These have been indirectly observed through the loss of energy in binary pulsar systems such as the Hulse–Taylor binary, the subject of the 1993 Nobel Prize in physics. A number of projects are underway to attempt to observe directly the effects of gravitational waves. For an overview, see Misner, Thorne & Wheeler 1973, part VIII. Unlike electromagnetic waves, the dominant contribution for gravitational waves is not the dipole, but the quadrupole; see Schutz 2001
- Most advanced textbooks on general relativity contain a description of these properties, e.g. Schutz 1985, ch. 9
- For example Jaranowski & Królak 2005
- Rindler 2001, ch. 13
- Gowdy 1971, Gowdy 1974
- See Lehner 2002 for a brief introduction to the methods of numerical relativity, and Seidel 1998 for the connection with gravitational wave astronomy
- Schutz 2003, pp. 48–49, Pais 1982, pp. 253–254
- Rindler 2001, sec. 11.9
- Will 1993, pp. 177–181
- In consequence, in the parameterized post-Newtonian formalism (PPN), measurements of this effect determine a linear combination of the terms β and γ, cf. Will 2006, sec. 3.5 and Will 1993, sec. 7.3
- The most precise measurements are VLBI measurements of planetary positions; see Will 1993, ch. 5, Will 2006, sec. 3.5, Anderson et al. 1992; for an overview, Ohanian & Ruffini 1994, pp. 406–407
- Kramer et al. 2006
- A figure that includes error bars is fig. 7 in Will 2006, sec. 5.1
- Stairs 2003, Schutz 2003, pp. 317–321, Bartusiak 2000, pp. 70–86
- Weisberg & Taylor 2003; for the pulsar discovery, see Hulse & Taylor 1975; for the initial evidence for gravitational radiation, see Taylor 1994
- Kramer 2004
- Penrose 2004, §14.5, Misner, Thorne & Wheeler 1973, §11.4
- Weinberg 1972, sec. 9.6, Ohanian & Ruffini 1994, sec. 7.8
- Bertotti, Ciufolini & Bender 1987, Nordtvedt 2003
- Kahn 2007
- A mission description can be found in Everitt et al. 2001; a first post-flight evaluation is given in Everitt, Parkinson & Kahn 2007; further updates will be available on the mission website Kahn 1996–2012.
- Townsend 1997, sec. 4.2.1, Ohanian & Ruffini 1994, pp. 469–471
- Ohanian & Ruffini 1994, sec. 4.7, Weinberg 1972, sec. 9.7; for a more recent review, see Schäfer 2004
- Ciufolini & Pavlis 2004, Ciufolini, Pavlis & Peron 2006, Iorio 2009
- Iorio L. (August 2006), "COMMENTS, REPLIES AND NOTES: A note on the evidence of the gravitomagnetic field of Mars", Classical Quantum Gravity 23 (17): 5451–5454, arXiv:gr-qc/0606092, Bibcode:2006CQGra..23.5451I, doi:10.1088/0264-9381/23/17/N01
- Iorio L. (June 2010), "On the Lense–Thirring test with the Mars Global Surveyor in the gravitational field of Mars", Central European Journal of Physics 8 (3): 509–513, arXiv:gr-qc/0701146, Bibcode:2010CEJPh...8..509I, doi:10.2478/s11534-009-0117-6
- For overviews of gravitational lensing and its applications, see Ehlers, Falco & Schneider 1992 and Wambsganss 1998
- For a simple derivation, see Schutz 2003, ch. 23; cf. Narayan & Bartelmann 1997, sec. 3
- Walsh, Carswell & Weymann 1979
- Images of all the known lenses can be found on the pages of the CASTLES project, Kochanek et al. 2007
- Roulet & Mollerach 1997
- Narayan & Bartelmann 1997, sec. 3.7
- Barish 2005, Bartusiak 2000, Blair & McNamara 1997
- Hough & Rowan 2000
- Hobbs, George; Archibald, A.; Arzoumanian, Z.; Backer, D.; Bailes, M.; Bhat, N. D. R.; Burgay, M.; Burke-Spolaor, S. et al. (2010), "The international pulsar timing array project: using pulsars as a gravitational wave detector", Classical and Quantum Gravity 27 (8): 084013, arXiv:0911.5206, Bibcode:2010CQGra..27h4013H, doi:10.1088/0264-9381/27/8/084013
- Danzmann & Rüdiger 2003
- "LISA pathfinder overview". ESA. Retrieved 2012-04-23.
- Thorne 1995
- Cutler & Thorne 2002
- Miller 2002, lectures 19 and 21
- Celotti, Miller & Sciama 1999, sec. 3
- Springel et al. 2005 and the accompanying summary Gnedin 2005
- Blandford 1987, sec. 8.2.4
- For the basic mechanism, see Carroll & Ostlie 1996, sec. 17.2; for more about the different types of astronomical objects associated with this, cf. Robson 1996
- For a review, see Begelman, Blandford & Rees 1984. To a distant observer, some of these jets even appear to move faster than light; this, however, can be explained as an optical illusion that does not violate the tenets of relativity, see Rees 1966
- For stellar end states, cf. Oppenheimer & Snyder 1939 or, for more recent numerical work, Font 2003, sec. 4.1; for supernovae, there are still major problems to be solved, cf. Buras et al. 2003; for simulating accretion and the formation of jets, cf. Font 2003, sec. 4.2. Also, relativistic lensing effects are thought to play a role for the signals received from X-ray pulsars, cf. Kraus 1998
- The evidence includes limits on compactness from the observation of accretion-driven phenomena ("Eddington luminosity"), see Celotti, Miller & Sciama 1999, observations of stellar dynamics in the center of our own Milky Way galaxy, cf. Schödel et al. 2003, and indications that at least some of the compact objects in question appear to have no solid surface, which can be deduced from the examination of X-ray bursts for which the central compact object is either a neutron star or a black hole; cf. Remillard et al. 2006 for an overview, Narayan 2006, sec. 5. Observations of the "shadow" of the Milky Way galaxy's central black hole horizon are eagerly sought for, cf. Falcke, Melia & Agol 2000
- Dalal et al. 2006
- Barack & Cutler 2004
- Originally Einstein 1917; cf. Pais 1982, pp. 285–288
- Carroll 2001, ch. 2
- Bergström & Goobar 2003, ch. 9–11; use of these models is justified by the fact that, at large scales of around a hundred million light-years and more, our own universe indeed appears to be isotropic and homogeneous, cf. Peebles et al. 1991
- E.g. with WMAP data, see Spergel et al. 2003
- These tests involve the separate observations detailed further on, see, e.g., fig. 2 in Bridle et al. 2003
- Peebles 1966; for a recent account of predictions, see Coc, Vangioni‐Flam et al. 2004; an accessible account can be found in Weiss 2006; compare with the observations in Olive & Skillman 2004, Bania, Rood & Balser 2002, O'Meara et al. 2001, and Charbonnel & Primas 2005
- Lahav & Suto 2004, Bertschinger 1998, Springel et al. 2005
- Alpher & Herman 1948, for a pedagogical introduction, see Bergström & Goobar 2003, ch. 11; for the initial detection, see Penzias & Wilson 1965 and, for precision measurements by satellite observatories, Mather et al. 1994 (COBE) and Bennett et al. 2003 (WMAP). Future measurements could also reveal evidence about gravitational waves in the early universe; this additional information is contained in the background radiation's polarization, cf. Kamionkowski, Kosowsky & Stebbins 1997 and Seljak & Zaldarriaga 1997
- Evidence for this comes from the determination of cosmological parameters and additional observations involving the dynamics of galaxies and galaxy clusters cf. Peebles 1993, ch. 18, evidence from gravitational lensing, cf. Peacock 1999, sec. 4.6, and simulations of large-scale structure formation, see Springel et al. 2005
- Peacock 1999, ch. 12, Peskin 2007; in particular, observations indicate that all but a negligible portion of that matter is not in the form of the usual elementary particles ("non-baryonic matter"), cf. Peacock 1999, ch. 12
- Namely, some physicists have questioned whether or not the evidence for dark matter is, in fact, evidence for deviations from the Einsteinian (and the Newtonian) description of gravity cf. the overview in Mannheim 2006, sec. 9
- Carroll 2001; an accessible overview is given in Caldwell 2004. Here, too, scientists have argued that the evidence indicates not a new form of energy, but the need for modifications in our cosmological models, cf. Mannheim 2006, sec. 10; aforementioned modifications need not be modifications of general relativity, they could, for example, be modifications in the way we treat the inhomogeneities in the universe, cf. Buchert 2007
- A good introduction is Linde 1990; for a more recent review, see Linde 2005
- More precisely, these are the flatness problem, the horizon problem, and the monopole problem; a pedagogical introduction can be found in Narlikar 1993, sec. 6.4, see also Börner 1993, sec. 9.1
- Spergel et al. 2007, sec. 5,6
- More concretely, the potential function that is crucial to determining the dynamics of the inflaton is simply postulated, but not derived from an underlying physical theory
- Brandenberger 2007, sec. 2
- Frauendiener 2004, Wald 1984, sec. 11.1, Hawking & Ellis 1973, sec. 6.8, 6.9
- Wald 1984, sec. 9.2–9.4 and Hawking & Ellis 1973, ch. 6
- Thorne 1972; for more recent numerical studies, see Berger 2002, sec. 2.1
- Israel 1987. A more exact mathematical description distinguishes several kinds of horizon, notably event horizons and apparent horizons cf. Hawking & Ellis 1973, pp. 312–320 or Wald 1984, sec. 12.2; there are also more intuitive definitions for isolated systems that do not require knowledge of spacetime properties at infinity, cf. Ashtekar & Krishnan 2004
- For first steps, cf. Israel 1971; see Hawking & Ellis 1973, sec. 9.3 or Heusler 1996, ch. 9 and 10 for a derivation, and Heusler 1998 as well as Beig & Chruściel 2006 as overviews of more recent results
- The laws of black hole mechanics were first described in Bardeen, Carter & Hawking 1973; a more pedagogical presentation can be found in Carter 1979; for a more recent review, see Wald 2001, ch. 2. A thorough, book-length introduction including an introduction to the necessary mathematics Poisson 2004. For the Penrose process, see Penrose 1969
- Bekenstein 1973, Bekenstein 1974
- The fact that black holes radiate, quantum mechanically, was first derived in Hawking 1975; a more thorough derivation can be found in Wald 1975. A review is given in Wald 2001, ch. 3
- Narlikar 1993, sec. 4.4.4, 4.4.5
- Horizons: cf. Rindler 2001, sec. 12.4. Unruh effect: Unruh 1976, cf. Wald 2001, ch. 3
- Hawking & Ellis 1973, sec. 8.1, Wald 1984, sec. 9.1
- Townsend 1997, ch. 2; a more extensive treatment of this solution can be found in Chandrasekhar 1983, ch. 3
- Townsend 1997, ch. 4; for a more extensive treatment, cf. Chandrasekhar 1983, ch. 6
- Ellis & Van Elst 1999; a closer look at the singularity itself is taken in Börner 1993, sec. 1.2
- Here one should recall the well-known fact that the important "quasi-optical" singularities of the so-called eikonal approximations of many wave equations, namely the "caustics", are resolved into finite peaks once one goes beyond that approximation.
- Namely when there are trapped null surfaces, cf. Penrose 1965
- Hawking 1966
- The conjecture was made in Belinskii, Khalatnikov & Lifschitz 1971; for a more recent review, see Berger 2002. An accessible exposition is given by Garfinkle 2007
- The restriction to future singularities naturally excludes initial singularities such as the big bang singularity, which could in principle be visible to observers at later cosmic time. The cosmic censorship conjecture was first presented in Penrose 1969; a textbook-level account is given in Wald 1984, pp. 302–305. For numerical results, see the review Berger 2002, sec. 2.1
- Hawking & Ellis 1973, sec. 7.1
- Arnowitt, Deser & Misner 1962; for a pedagogical introduction, see Misner, Thorne & Wheeler 1973, §21.4–§21.7
- Fourès-Bruhat 1952 and Bruhat 1962; for a pedagogical introduction, see Wald 1984, ch. 10; an online review can be found in Reula 1998
- Gourgoulhon 2007; for a review of the basics of numerical relativity, including the problems arising from the peculiarities of Einstein's equations, see Lehner 2001
- Misner, Thorne & Wheeler 1973, §20.4
- Arnowitt, Deser & Misner 1962
- Komar 1959; for a pedagogical introduction, see Wald 1984, sec. 11.2; although defined in a totally different way, it can be shown to be equivalent to the ADM mass for stationary spacetimes, cf. Ashtekar & Magnon-Ashtekar 1979
- For a pedagogical introduction, see Wald 1984, sec. 11.2
- Wald 1984, p. 295 and refs therein; this is important for questions of stability—if there were negative mass states, then flat, empty Minkowski space, which has mass zero, could evolve into these states
- Townsend 1997, ch. 5
- Such quasi-local mass–energy definitions are the Hawking energy, Geroch energy, or Penrose's quasi-local energy–momentum based on twistor methods; cf. the review article Szabados 2004
- An overview of quantum theory can be found in standard textbooks such as Messiah 1999; a more elementary account is given in Hey & Walters 2003
- Ramond 1990, Weinberg 1995, Peskin & Schroeder 1995; a more accessible overview is Auyang 1995
- Wald 1994, Birrell & Davies 1984
- For Hawking radiation Hawking 1975, Wald 1975; an accessible introduction to black hole evaporation can be found in Traschen 2000
- Wald 2001, ch. 3
- Put simply, matter is the source of spacetime curvature, and once matter has quantum properties, we can expect spacetime to have them as well. Cf. Carlip 2001, sec. 2
- Schutz 2003, p. 407
- A timeline and overview can be found in Rovelli 2000
- Donoghue 1995
- In particular, a technique known as renormalization, an integral part of deriving predictions which take into account higher-energy contributions, cf. Weinberg 1996, ch. 17, 18, fails in this case; cf. Goroff & Sagnotti 1985
- An accessible introduction at the undergraduate level can be found in Zwiebach 2004; more complete overviews can be found in Polchinski 1998a and Polchinski 1998b
- At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges, e.g. Ibanez 2000. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity, e.g. Green, Schwarz & Witten 1987, sec. 2.3, 5.3
- Green, Schwarz & Witten 1987, sec. 4.2
- Weinberg 2000, ch. 31
- Townsend 1996, Duff 1996
- Kuchař 1973, sec. 3
- These variables represent geometric gravity using mathematical analogues of electric and magnetic fields; cf. Ashtekar 1986, Ashtekar 1987
- For a review, see Thiemann 2006; more extensive accounts can be found in Rovelli 1998, Ashtekar & Lewandowski 2004 as well as in the lecture notes Thiemann 2003
- Isham 1994, Sorkin 1997
- Loll 1998
- Sorkin 2005
- Penrose 2004, ch. 33 and refs therein
- Hawking 1987
- Ashtekar 2007, Schwarz 2007
- Maddox 1998, pp. 52–59, 98–122; Penrose 2004, sec. 34.1, ch. 30
- section Quantum gravity, above
- section Cosmology, above
- Friedrich 2005
- A review of the various problems and the techniques being developed to overcome them, see Lehner 2002
- See Bartusiak 2000 for an account up to that year; up-to-date news can be found on the websites of major detector collaborations such as GEO 600 and LIGO
- For the most recent papers on gravitational wave polarizations of inspiralling compact binaries, see Blanchet et al. 2008, and Arun et al. 2007; for a review of work on compact binaries, see Blanchet 2006 and Futamase & Itoh 2006; for a general review of experimental tests of general relativity, see Will 2006
- See, e.g., the electronic review journal Living Reviews in Relativity
- Alpher, R. A.; Herman, R. C. (1948), "Evolution of the universe", Nature 162 (4124): 774–775, Bibcode:1948Natur.162..774A, doi:10.1038/162774b0
- Anderson, J. D.; Campbell, J. K.; Jurgens, R. F.; Lau, E. L. (1992), "Recent developments in solar-system tests of general relativity", in Sato, H.; Nakamura, T., Proceedings of the Sixth Marcel Großmann Meeting on General Relativity, World Scientific, pp. 353–355, ISBN 981-02-0950-9
- Arnold, V. I. (1989), Mathematical Methods of Classical Mechanics, Springer, ISBN 3-540-96890-3
- Arnowitt, Richard; Deser, Stanley; Misner, Charles W. (1962), "The dynamics of general relativity", in Witten, Louis, Gravitation: An Introduction to Current Research, Wiley, pp. 227–265
- Arun, K.G.; Blanchet, L.; Iyer, B. R.; Qusailah, M. S. S. (2007), "Inspiralling compact binaries in quasi-elliptical orbits: The complete 3PN energy flux", Physical Review D 77 (6), arXiv:0711.0302, Bibcode:2008PhRvD..77f4035A, doi:10.1103/PhysRevD.77.064035
- Ashby, Neil (2002), "Relativity and the Global Positioning System" (PDF), Physics Today 55 (5): 41–47, Bibcode:2002PhT....55e..41A, doi:10.1063/1.1485583
- Ashby, Neil (2003), "Relativity in the Global Positioning System", Living Reviews in Relativity 6, retrieved 2007-07-06
- Ashtekar, Abhay (1986), "New variables for classical and quantum gravity", Phys. Rev. Lett. 57 (18): 2244–2247, Bibcode:1986PhRvL..57.2244A, doi:10.1103/PhysRevLett.57.2244, PMID 10033673
- Ashtekar, Abhay (1987), "New Hamiltonian formulation of general relativity", Phys. Rev. D36 (6): 1587–1602, Bibcode:1987PhRvD..36.1587A, doi:10.1103/PhysRevD.36.1587
- Ashtekar, Abhay (2007), "Loop Quantum Gravity: Four Recent Advances and a Dozen Frequently Asked Questions", The Eleventh Marcel Grossmann Meeting - on Recent Developments in Theoretical and Experimental General Relativity, Gravitation and Relativistic Field Theories - Proceedings of the MG11 Meeting on General Relativity, p. 126, arXiv:0705.2222, Bibcode:2008mgm..conf..126A, doi:10.1142/9789812834300_0008, ISBN 9789812834263
- Ashtekar, Abhay; Krishnan, Badri (2004), "Isolated and Dynamical Horizons and Their Applications", Living Rev. Relativity 7, arXiv:gr-qc/0407042, Bibcode:2004LRR.....7...10A, doi:10.12942/lrr-2004-10, retrieved 2007-08-28
- Ashtekar, Abhay; Lewandowski, Jerzy (2004), "Background Independent Quantum Gravity: A Status Report", Class. Quant. Grav. 21 (15): R53–R152, arXiv:gr-qc/0404018, Bibcode:2004CQGra..21R..53A, doi:10.1088/0264-9381/21/15/R01
- Ashtekar, Abhay; Magnon-Ashtekar, Anne (1979), "On conserved quantities in general relativity", Journal of Mathematical Physics 20 (5): 793–800, Bibcode:1979JMP....20..793A, doi:10.1063/1.524151
- Auyang, Sunny Y. (1995), How is Quantum Field Theory Possible?, Oxford University Press, ISBN 0-19-509345-3
- Bania, T. M.; Rood, R. T.; Balser, D. S. (2002), "The cosmological density of baryons from observations of 3He+ in the Milky Way", Nature 415 (6867): 54–57, Bibcode:2002Natur.415...54B, doi:10.1038/415054a, PMID 11780112
- Barack, Leor; Cutler, Curt (2004), "LISA Capture Sources: Approximate Waveforms, Signal-to-Noise Ratios, and Parameter Estimation Accuracy", Phys. Rev. D69 (8): 082005, arXiv:gr-qc/0310125, Bibcode:2004PhRvD..69h2005B, doi:10.1103/PhysRevD.69.082005
- Bardeen, J. M.; Carter, B.; Hawking, S. W. (1973), "The Four Laws of Black Hole Mechanics", Comm. Math. Phys. 31 (2): 161–170, Bibcode:1973CMaPh..31..161B, doi:10.1007/BF01645742
- Barish, Barry (2005), "Towards detection of gravitational waves", in Florides, P.; Nolan, B.; Ottewil, A., General Relativity and Gravitation. Proceedings of the 17th International Conference, World Scientific, pp. 24–34, ISBN 981-256-424-1
- Barstow, M; Bond, Howard E.; Holberg, J. B.; Burleigh, M. R.; Hubeny, I.; Koester, D. (2005), "Hubble Space Telescope Spectroscopy of the Balmer lines in Sirius B", Mon. Not. Roy. Astron. Soc. 362 (4): 1134–1142, arXiv:astro-ph/0506600, Bibcode:2005MNRAS.362.1134B, doi:10.1111/j.1365-2966.2005.09359.x
- Bartusiak, Marcia (2000), Einstein's Unfinished Symphony: Listening to the Sounds of Space-Time, Berkley, ISBN 978-0-425-18620-6
- Begelman, Mitchell C.; Blandford, Roger D.; Rees, Martin J. (1984), "Theory of extragalactic radio sources", Rev. Mod. Phys. 56 (2): 255–351, Bibcode:1984RvMP...56..255B, doi:10.1103/RevModPhys.56.255
- Beig, Robert; Chruściel, Piotr T. (2006), "Stationary black holes", in Françoise, J.-P.; Naber, G.; Tsou, T.S., Encyclopedia of Mathematical Physics, Volume 2, Elsevier, p. 2041, arXiv:gr-qc/0502041, Bibcode:2005gr.qc.....2041B, ISBN 0-12-512660-3
- Bekenstein, Jacob D. (1973), "Black Holes and Entropy", Phys. Rev. D7 (8): 2333–2346, Bibcode:1973PhRvD...7.2333B, doi:10.1103/PhysRevD.7.2333
- Bekenstein, Jacob D. (1974), "Generalized Second Law of Thermodynamics in Black-Hole Physics", Phys. Rev. D9 (12): 3292–3300, Bibcode:1974PhRvD...9.3292B, doi:10.1103/PhysRevD.9.3292
- Belinskii, V. A.; Khalatnikov, I. M.; Lifschitz, E. M. (1971), "Oscillatory approach to the singular point in relativistic cosmology", Advances in Physics 19 (80): 525–573, Bibcode:1970AdPhy..19..525B, doi:10.1080/00018737000101171; original paper in Russian: Belinsky, V. A.; Lifshits, I. M.; Khalatnikov, E. M. (1970), "Колебательный Режим Приближения К Особой Точке В Релятивистской Космологии", Uspekhi Fizicheskikh Nauk (Успехи Физических Наук), 102(3) (11): 463–500, Bibcode:1970UsFiN.102..463B
- Bennett, C. L.; Halpern, M.; Hinshaw, G.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.; Page, L. et al. (2003), "First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic Results", Astrophys. J. Suppl. 148 (1): 1–27, arXiv:astro-ph/0302207, Bibcode:2003ApJS..148....1B, doi:10.1086/377253
- Berger, Beverly K. (2002), "Numerical Approaches to Spacetime Singularities", Living Rev. Relativity 5, arXiv:gr-qc/0201056, Bibcode:2002LRR.....5....1B, doi:10.12942/lrr-2002-1, retrieved 2007-08-04
- Bergström, Lars; Goobar, Ariel (2003), Cosmology and Particle Astrophysics (2nd ed.), Wiley & Sons, ISBN 3-540-43128-4
- Bertotti, Bruno; Ciufolini, Ignazio; Bender, Peter L. (1987), "New test of general relativity: Measurement of de Sitter geodetic precession rate for lunar perigee", Physical Review Letters 58 (11): 1062–1065, Bibcode:1987PhRvL..58.1062B, doi:10.1103/PhysRevLett.58.1062, PMID 10034329
- Bertotti, Bruno; Iess, L.; Tortora, P. (2003), "A test of general relativity using radio links with the Cassini spacecraft", Nature 425 (6956): 374–376, Bibcode:2003Natur.425..374B, doi:10.1038/nature01997, PMID 14508481
- Bertschinger, Edmund (1998), "Simulations of structure formation in the universe", Annu. Rev. Astron. Astrophys. 36 (1): 599–654, Bibcode:1998ARA&A..36..599B, doi:10.1146/annurev.astro.36.1.599
- Birrell, N. D.; Davies, P. C. (1984), Quantum Fields in Curved Space, Cambridge University Press, ISBN 0-521-27858-9
- Blair, David; McNamara, Geoff (1997), Ripples on a Cosmic Sea. The Search for Gravitational Waves, Perseus, ISBN 0-7382-0137-5
- Blanchet, L.; Faye, G.; Iyer, B. R.; Sinha, S. (2008), "The third post-Newtonian gravitational wave polarisations and associated spherical harmonic modes for inspiralling compact binaries in quasi-circular orbits", Classical and Quantum Gravity 25 (16): 165003, arXiv:0802.1249, Bibcode:2008CQGra..25p5003B, doi:10.1088/0264-9381/25/16/165003
- Blanchet, Luc (2006), "Gravitational Radiation from Post-Newtonian Sources and Inspiralling Compact Binaries", Living Rev. Relativity 9, Bibcode:2006LRR.....9....4B, doi:10.12942/lrr-2006-4, retrieved 2007-08-07
- Blandford, R. D. (1987), "Astrophysical Black Holes", in Hawking, Stephen W.; Israel, Werner, 300 Years of Gravitation, Cambridge University Press, pp. 277–329, ISBN 0-521-37976-8
- Börner, Gerhard (1993), The Early Universe. Facts and Fiction, Springer, ISBN 0-387-56729-1
- Brandenberger, Robert H. (2007), "Conceptual Problems of Inflationary Cosmology and a New Approach to Cosmological Structure Formation", Inflationary Cosmology, Lecture Notes in Physics 738, p. 393, arXiv:hep-th/0701111, Bibcode:2008LNP...738..393B, doi:10.1007/978-3-540-74353-8_11, ISBN 978-3-540-74352-1
- Brans, C. H.; Dicke, R. H. (1961), "Mach's Principle and a Relativistic Theory of Gravitation", Physical Review 124 (3): 925–935, Bibcode:1961PhRv..124..925B, doi:10.1103/PhysRev.124.925
- Bridle, Sarah L.; Lahav, Ofer; Ostriker, Jeremiah P.; Steinhardt, Paul J. (2003), "Precision Cosmology? Not Just Yet", Science 299 (5612): 1532–1533, arXiv:astro-ph/0303180, Bibcode:2003Sci...299.1532B, doi:10.1126/science.1082158, PMID 12624255
- Bruhat, Yvonne (1962), "The Cauchy Problem", in Witten, Louis, Gravitation: An Introduction to Current Research, Wiley, p. 130, ISBN 978-1-114-29166-9
- Buchert, Thomas (2007), "Dark Energy from Structure—A Status Report", General Relativity and Gravitation 40 (2–3): 467–527, arXiv:0707.2153, Bibcode:2008GReGr..40..467B, doi:10.1007/s10714-007-0554-8
- Buras, R.; Rampp, M.; Janka, H.-Th.; Kifonidis, K. (2003), "Improved Models of Stellar Core Collapse and Still no Explosions: What is Missing?", Phys. Rev. Lett. 90 (24): 241101, arXiv:astro-ph/0303171, Bibcode:2003PhRvL..90x1101B, doi:10.1103/PhysRevLett.90.241101, PMID 12857181
- Caldwell, Robert R. (2004), "Dark Energy", Physics World 17 (5): 37–42
- Carlip, Steven (2001), "Quantum Gravity: a Progress Report", Rept. Prog. Phys. 64 (8): 885–942, arXiv:gr-qc/0108040, Bibcode:2001RPPh...64..885C, doi:10.1088/0034-4885/64/8/301
- Carroll, Bradley W.; Ostlie, Dale A. (1996), An Introduction to Modern Astrophysics, Addison-Wesley, ISBN 0-201-54730-9
- Carroll, Sean M. (2001), "The Cosmological Constant", Living Rev. Relativity 4, arXiv:astro-ph/0004075, Bibcode:2001LRR.....4....1C, doi:10.12942/lrr-2001-1, retrieved 2007-07-21
- Carter, Brandon (1979), "The general theory of the mechanical, electromagnetic and thermodynamic properties of black holes", in Hawking, S. W.; Israel, W., General Relativity, an Einstein Centenary Survey, Cambridge University Press, pp. 294–369 and 860–863, ISBN 0-521-29928-4
- Celotti, Annalisa; Miller, John C.; Sciama, Dennis W. (1999), "Astrophysical evidence for the existence of black holes", Class. Quant. Grav. 16 (12A): A3–A21, arXiv:astro-ph/9912186, doi:10.1088/0264-9381/16/12A/301
- Chandrasekhar, Subrahmanyan (1983), The Mathematical Theory of Black Holes, Oxford University Press, ISBN 0-19-850370-9
- Charbonnel, C.; Primas, F. (2005), "The Lithium Content of the Galactic Halo Stars", Astronomy & Astrophysics 442 (3): 961–992, arXiv:astro-ph/0505247, Bibcode:2005A&A...442..961C, doi:10.1051/0004-6361:20042491
- Ciufolini, Ignazio; Pavlis, Erricos C. (2004), "A confirmation of the general relativistic prediction of the Lense-Thirring effect", Nature 431 (7011): 958–960, Bibcode:2004Natur.431..958C, doi:10.1038/nature03007, PMID 15496915
- Ciufolini, Ignazio; Pavlis, Erricos C.; Peron, R. (2006), "Determination of frame-dragging using Earth gravity models from CHAMP and GRACE", New Astron. 11 (8): 527–550, Bibcode:2006NewA...11..527C, doi:10.1016/j.newast.2006.02.001
- Coc, A.; Vangioni‐Flam, Elisabeth; Descouvemont, Pierre; Adahchour, Abderrahim; Angulo, Carmen (2004), "Updated Big Bang Nucleosynthesis confronted to WMAP observations and to the Abundance of Light Elements", Astrophysical Journal 600 (2): 544–552, arXiv:astro-ph/0309480, Bibcode:2004ApJ...600..544C, doi:10.1086/380121
- Cutler, Curt; Thorne, Kip S. (2002), "An overview of gravitational wave sources", in Bishop, Nigel; Maharaj, Sunil D., Proceedings of 16th International Conference on General Relativity and Gravitation (GR16), World Scientific, p. 4090, arXiv:gr-qc/0204090, Bibcode:2002gr.qc.....4090C, ISBN 981-238-171-6
- Dalal, Neal; Holz, Daniel E.; Hughes, Scott A.; Jain, Bhuvnesh (2006), "Short GRB and binary black hole standard sirens as a probe of dark energy", Phys.Rev. D74 (6): 063006, arXiv:astro-ph/0601275, Bibcode:2006PhRvD..74f3006D, doi:10.1103/PhysRevD.74.063006
- Danzmann, Karsten; Rüdiger, Albrecht (2003), "LISA Technology—Concepts, Status, Prospects" (PDF), Class. Quant. Grav. 20 (10): S1–S9, Bibcode:2003CQGra..20S...1D, doi:10.1088/0264-9381/20/10/301
- Dirac, Paul (1996), General Theory of Relativity, Princeton University Press, ISBN 0-691-01146-X
- Donoghue, John F. (1995), "Introduction to the Effective Field Theory Description of Gravity", in Cornet, Fernando, Effective Theories: Proceedings of the Advanced School, Almunecar, Spain, 26 June–1 July 1995, Singapore: World Scientific, p. 12024, arXiv:gr-qc/9512024, Bibcode:1995gr.qc....12024D, ISBN 981-02-2908-9
- Duff, Michael (1996), "M-Theory (the Theory Formerly Known as Strings)", Int. J. Mod. Phys. A11 (32): 5623–5641, arXiv:hep-th/9608117, Bibcode:1996IJMPA..11.5623D, doi:10.1142/S0217751X96002583
- Ehlers, Jürgen (1973), "Survey of general relativity theory", in Israel, Werner, Relativity, Astrophysics and Cosmology, D. Reidel, pp. 1–125, ISBN 90-277-0369-8
- Ehlers, Jürgen; Falco, Emilio E.; Schneider, Peter (1992), Gravitational lenses, Springer, ISBN 3-540-66506-4
- Ehlers, Jürgen; Lämmerzahl, Claus, eds. (2006), Special Relativity—Will it Survive the Next 101 Years?, Springer, ISBN 3-540-34522-1
- Ehlers, Jürgen; Rindler, Wolfgang (1997), "Local and Global Light Bending in Einstein's and other Gravitational Theories", General Relativity and Gravitation 29 (4): 519–529, Bibcode:1997GReGr..29..519E, doi:10.1023/A:1018843001842
What is it?
From CDC.gov: Coronaviruses are a large family of viruses that are common in people and in many animal species, including camels, cattle, cats, and bats. They typically affect the respiratory tracts of birds and mammals, including humans, and are associated with the common cold, bronchitis, pneumonia, severe acute respiratory syndrome (SARS), and COVID-19.
These viruses are more often responsible for common colds than for serious disease. However, coronaviruses have also caused some more severe outbreaks.
How does COVID-19 Spread?
It spreads between people who are in close contact with one another (within about 6 feet) via respiratory droplets produced when an infected person coughs or sneezes. These droplets can land in the mouths or noses of people who are nearby or possibly be inhaled into the lungs, and there are theories that the virus may also be airborne. Studies have shown that this novel (new) coronavirus may remain viable for hours to days on surfaces made from a variety of materials; the New England Journal of Medicine has published a chart of these survival times.
Individuals are usually considered to be most contagious when they are most symptomatic, or the sickest; however, a person can spread the coronavirus before showing symptoms.
What are the symptoms and emergency warning signs?
The most common symptoms are fever, cough, chest pain, and shortness of breath. Other associated symptoms include loss of taste and smell, gastrointestinal symptoms such as diarrhea, and myalgias (muscle aches). These symptoms may appear 2-14 days after exposure, and they can present in both children and adults – meaning anyone can contract the disease, although children usually have a milder presentation. If you develop the noted symptoms, contact your doctor; however, if you develop any of the following, seek medical care immediately:
- Difficulty breathing or shortness of breath
- Persistent pain or pressure in the chest
- New confusion or inability to arouse
- Bluish lips or face
Who is at higher risk?
The following individuals are at higher risk:
- Older adults
- People who have serious chronic medical conditions like:
- Heart disease
- Lung disease
However, I must stress that ANYONE can contract the virus, including infants. Recent studies have shown that 20% of hospitalized adults fall between the ages of 20 and 44.
PROTECTION: PHYSICALLY, FINANCIALLY, AND MENTALLY FROM COVID-19
- Socially distance yourself from others. From www.merriam-webster.com: social distancing means “the practice of maintaining a greater than usual physical distance from other people or of avoiding direct contact with people or objects in public places during the outbreak of a contagious disease in order to minimize exposure and reduce the transmission of infection.”
- If your area is under orders to stay-at-home, make sure you adhere to those rules. Do not leave your home unless it is for groceries, medicine or medical care.
- Avoid close contact with people who are sick.
- Keep your distance from family members who are especially vulnerable, like immunocompromised individuals and older people.
- Keep kids CLEAN – they are vectors and can transmit the virus.
- Stay home when you are sick, except to get medical care.
- Cover your coughs and sneezes with a tissue then throw it away immediately and sanitize your hands. You can also cough in your elbow.
- Wash your hands frequently and for at least 20 seconds. Here’s a great video from John Hopkins: https://www.youtube.com/watch?v=IisgnbMfKvI
- Avoid coughing into your hands. Clean your hands if you cough or sneeze into them.
- Clean frequently touched surfaces and objects daily (e.g. telephones, counter-tops, toilet seats, sinks/vanities, remote controls, shopping carts/basket handles, train/bus seats, gas station levers, doorknobs/handles, light switches, and cabinet handles) using a regular household detergent and water or disinfectant wipes such as Clorox or Lysol if it’s appropriate for the surface.
- Do not reuse disinfectant wipes on multiple surfaces. Do not dry surfaces after wiping them down – allow them to air dry.
- Wash clothes, towels and bedding regularly… curtains too.
- Avoid public transportation when possible. CANCEL/POSTPONE all nonessential travel, especially air travel.
- Make sure that you have at least a 30-day supply of essential medication prescriptions, testing supplies, and first aid items. Remember that these items can usually be delivered to you!
- Wear masks when you are out and about to prevent spread even with casual interaction.
To combat the spread of the disease, many cities, counties, and states are beginning to implement stay-at-home orders once again. As a result, many individuals have found themselves out of work or furloughed. As of this week, December 7th, 12 million Americans are out of work – a staggering number! If you find yourself out of a job, or unsure of the security of the one you currently hold, these are the steps to undertake:
- Be proactive and call your creditors to discuss your current situation and make possible arrangements. This includes student loan debt holders, who may be able to apply for a forbearance, which temporarily pauses payments. For example, many insurance companies are offering discounts on car insurance premiums. Also, you may qualify to have your mortgage payments deferred for up to 90 days. Make sure to check all the details with your mortgage broker.
- Keep priority obligations on track. Priority obligations include mortgage, rent, groceries, prescriptions, and utilities. Again, be proactive and call your mortgage holder or landlord and make arrangements if you’re laid off or think you will be.
- Develop an emergency spending budget.
- Stop non-essential spending now which should be easy to do because most places like concert halls, movie theaters, and restaurants are closed.
- BEEF UP EMERGENCY FUNDS IF YOU CAN!
- Identify community resources and government assistance programs if available. Community agencies may help with food banks, temporary assistance with utilities, etc.
- Reach out to a nonprofit financial counselor to find ways to eliminate debt and reduce financial obligations.
- Obtain disability insurance if possible.
- Reevaluate investment accounts, including 401k and 403b plans. Meet with an advisor and review your goals and risk aversion.
- This is a great time to make a will or trust and to update beneficiaries on your life insurance policy and other benefits. Meet with an attorney to start the process if you have not already.
We are in unprecedented times: widespread disease, including death, and isolation due to lockdowns and social distancing – all of these can be a great source of anxiety. My first piece of advice is not to panic or worry, because both serve no purpose. Preparation is the key, and if you follow the steps noted above and those asked of you by your local officials, you will mostly be in good shape. Here are some additional steps:
- Meditate – Wonderful to decrease stress and still your mind.
- Have gratitude – If you are stuck in the house with your family, show gratitude that you have a family and a great home. If you live alone be thankful that you have a place to call home and PEACE.
- Exercise – Increases endorphins which makes you feel better.
- Plan for the future – Write that book you’ve always wanted to write or create a business plan for a possible business. This will keep you distracted AND give you something to look forward to.
- Take breaks from social media – I love to be up to date on the current news but too much can increase one’s anxiety levels.
- Stay in contact with your loved ones – Just because we are socially distancing ourselves it doesn’t mean we have to socially isolate. Call that person you haven’t spoken to in years, schedule Zoom virtual parties or even host a book club online. Also, keep in touch with elderly parents or friends. This pandemic can be especially stressful for them so make sure to speak to them on a regular basis.
- Online courses – Take one! Many learning organizations are offering classes. Enroll in a class you’ve longed to take.
- Tap into those hobbies – If you love to read, this is the time to catch up, connect with your children by playing board games, journal or learn new recipes for meals.
With the right attitude, preparation, and adherence to local and health guidelines we will conquer this pandemic and hopefully we will be the better for it.
Love, Dr. Randi |
Big History Project
- ACTIVITY: DQ Notebook 4.3
- WATCH: Introduction to Geology
- READ: Alfred Wegener and Harry Hess
- ACTIVITY: Claim Testing – Geology and the Earth’s Formation
- READ: A Girl Talk Geological Revolution - Marie Tharp: Graphic Biography
- READ: Eratosthenes of Cyrene
- WATCH: Introduction to the Geologic Time Chart
- READ: Principles of Geology
- ACTIVITY: What Do You Know? What Do You Ask?
- READ: The Universe Through a Pinhole — Hasan Ibn al-Haytham
- READ: Gallery — Geology
- Quiz: Our Solar System and Earth
READ: Eratosthenes of Cyrene
Measuring the Circumference of the Earth
By Cynthia Stokes Brown
More than 2,000 years ago Eratosthenes compared the position of the Sun’s rays in two locations to calculate the spherical size of the Earth with reasonable accuracy.
Eratosthenes was born in the Greek colony Cyrene, now the city of Shahhat, Libya. As a young man, he traveled to Athens to pursue his studies. He returned to Cyrene and made such a name for himself in scholarly endeavors that the Greek ruler of Egypt brought him to Alexandria to tutor his son. When the chief librarian of the famous Library of Alexandria died in 236 BCE, Eratosthenes was appointed to the prominent position around the age of 40.
A man of many talents, Eratosthenes was a librarian, geographer, mathematician, astronomer, historian, and poet. His friends at the library nicknamed him Pentathlos, or athlete who competes in five different events. The name seemed to fit a scholar who excelled in many fields of study. Most of Eratosthenes’s writings have been lost, but other scholars reported his work and findings — which were extensive.
Studying the earth
Eratosthenes may have been the first to use the word geography. He invented a system of longitude and latitude and made a map of the known world. He also designed a system for finding prime numbers — whole numbers that can only be divided by themselves or by the number 1. This method, still in use today, is called the “Sieve of Eratosthenes.”
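For readers who want to see the idea concretely, here is a minimal modern sketch of the sieve in Python (the function name and the limit of 30 are illustrative choices, not part of Eratosthenes' original description):

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to `limit` by repeatedly crossing out multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]               # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Every multiple of n from n*n upward is composite.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(30))                 # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```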
Eratosthenes was also the first to calculate the tilt of the Earth’s axis, which he figured with remarkable accuracy; the finding was reported by Ptolemy (85-165 CE). Eratosthenes also calculated the distance from the Earth to the Moon and to the Sun, but with less accuracy. He made a catalog of 675 stars. He made a calendar with leap years and laid the foundation of chronology in the Western world by organizing the dates of literary and political events from the siege of Troy (about 1194–1184 BCE) to his own time.
Yet his most lasting achievement was his remarkably accurate calculation of the Earth’s circumference (the distance around a circle or sphere). He computed this by using simple geometry and trigonometry and by recognizing Earth as a sphere in space. Most Greek scholars by the time of Aristotle (384–322 BCE) agreed that Earth was a sphere, but none knew how big it was.
How did Greek scholars know the Earth was a sphere? They observed that ships disappeared over the horizon while their masts were still visible. They saw the curved shadow of the Earth on the Moon during lunar eclipses. And they noticed the changing positions of the stars in the sky.
Measuring the earth
Eratosthenes heard about a famous well in the Egyptian city of Swenet (Syene in Greek, and now known as Aswan), on the Nile River. At noon one day each year — the summer solstice (between June 20 and June 22) — the Sun’s rays shone straight down into the deep pit. They illuminated only the water at the bottom, not the sides of the well as on other days, proving that the Sun was directly overhead. (Syene was located very close to what we call the Tropic of Cancer, 23.5 degrees north, the northernmost latitude at which the Sun is ever directly overhead at noon.)
Eratosthenes erected a pole in Alexandria, and on the summer solstice he observed that it cast a shadow, proving that the Sun was not directly overhead but slightly south. Recognizing the curvature of the Earth and knowing the distance between the two cities enabled Eratosthenes to calculate the planet’s circumference.
Eratosthenes could measure the angle of the Sun's rays off the vertical by dividing the length of the shadow (the leg opposite the angle) by the height of the pole (the leg adjacent to the angle); taking the inverse tangent of this ratio gave him an angle of 7.12 degrees. He knew that the circumference of the Earth constituted a full circle of 360 degrees, so 7.2 degrees (rounding 7.12 up so that it divides 360 into exactly 50 parts) is about one-fiftieth of the circumference. He also knew the approximate distance between Alexandria and Syene, so he could set up this equation:
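The exact form he used is not recorded in this article, but the proportion implied by the figures quoted here (7.2 degrees and 5,000 stadia) is:

\[
\frac{7.2^{\circ}}{360^{\circ}} \;=\; \frac{5{,}000\ \text{stadia}}{\text{Earth's circumference}}
\qquad\Longrightarrow\qquad
\text{circumference} \;\approx\; 50 \times 5{,}000 \;=\; 250{,}000\ \text{stadia}.
\]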
Eratosthenes estimated the distance from Alexandria to Syene as 5,000 stadia, or about 500 miles (800 kilometers). He made this estimation from the time it took walkers, who were trained to measure distances by taking regular strides, to trek between the cities. By solving the equation, he calculated a circumference of 250,000 stadia, or 25,000 miles (40,000 kilometers).
Several sources of error crept into Eratosthenes’s calculations and our interpretation of them. For one thing, he was using as his unit of measure the Greek unit “stadion,” or the length of an athletic stadium. But not all stadiums were built the same length. In Greece a stadion equaled roughly 185 meters (607 feet), while in Egypt the stadion was about 157.5 meters (517 feet). We don’t know which unit Eratosthenes used. If he used the Greek measure, his calculation would have been off by about 16 percent. If he used the Egyptian one, his error would have been less than 2 percent off the actual Earth’s circumference of 24,860 miles (40,008 kilometers).
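As a rough check of those percentages, assuming the stadion lengths given above and the modern figure of 40,008 kilometers:

\[
250{,}000 \times 185\ \text{m} \approx 46{,}250\ \text{km} \ (\text{about } 16\%\ \text{too large}),
\qquad
250{,}000 \times 157.5\ \text{m} \approx 39{,}375\ \text{km} \ (\text{about } 1.6\%\ \text{too small}).
\]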
A century after Eratosthenes, the Greek astronomer Posidonius of Rhodes (c. 135–51 BCE) calculated the Earth's circumference. Posidonius used the star Canopus as his frame of reference: when the star is visible at the horizon in Rhodes, it is 7.5 degrees above the horizon in Alexandria. His first calculation came out almost exactly correct, but he later revised the distance between Rhodes and Alexandria, which resulted in a figure of about 18,000 miles (about 29,000 kilometers), some 28 percent smaller than the actual circumference. Ptolemy reported the calculations of Posidonius instead of those of Eratosthenes, and it was Ptolemy's writings that found their way to Christopher Columbus. If Ptolemy had used Eratosthenes's larger, more accurate figure for Earth's circumference, Columbus might never have sailed west.
Eratosthenes lived to be about 82 years old, when he starved himself to death because he feared the onset of blindness.
By Cynthia Stokes Brown
For Further Discussion
Think about the following and share your ideas in the Questions Area below. If you were living in Greece at the time of Eratosthenes, how do you think you would have reacted to his proof? If you had believed that the Earth was flat, do you think you would have been convinced by what he was able to show?
- How come the Greeks had already figured out that the Earth was a sphere, while during the Renaissance most people still believed the Earth was a disk?
Did the knowledge get lost with the fall of the Greek empire? (14 votes)
- The concept of the spherical earth was gradually accepted during the Middle Ages (before the Renaissance) - at least among the more educated members of the population. In fact, Saint Bede was born in the seventh century and his treatise On the Reckoning of Time makes claims about seasonal changes in daylight resting clearly upon the premise of a spherical planet.
I think sometimes it is hard to imagine a world in which most of the population was illiterate but for most of the history of mankind, it was a reality: only in recent centuries was more than a small minority of the population able to read. Since the written word is the way we transmit accumulated knowledge across generations, it took time for knowledge to "trickle down" to the largest segments of the population.(16 votes)
- So....does that mean that.....if the well had not been located exactly where it was that he would not have been able to calculate the circumference?
Was this the first time the equation had been used to find the circumference/distance -- or were the mathematicians of his day measuring the circumference of things for other purposes such as architecture, and one day old Erat thought.....hmmmm......I wonder if I could figure out the circumference of the earth? --or was this a question actively being worked on? Was he scoffed at for his discovery or was it generally accepted, and then.....what was it actually good for in his day other than just knowing? Did it actively help cartographers, travelers or the shipping trade, or did it make any sort of impact on knowledge during and around the time it was discovered, or who would have cared enough to do something with this knowledge?
Does anyone know how old the well was and who built it, how deep it was, how long it took to dig, and whether it was intentionally placed to line up so perfectly or was it a mini goldilocks moment? (5 votes)
- While the well being at a spot where the shadow was negligible was convenient, he could have used two sticks a known north/south distance apart (east/west wouldn't have been useful) and used the change in angle in place of the 7.12 degrees mentioned in the article to find the circumference. The rest you'd probably have to consult a historian sorry.(3 votes)
- Why did Ptolemy not consider Eratosthenes' measurement, but rather Posidonius' measurement? (3 votes)
- Did he have a wife or kids?(3 votes)
- According to space.com (which is the source Google uses), the Earth's circumference is 24,901 miles. According to National Geographic it is 24,902 miles, and according to your article it was significantly less than both of those estimates. Is your information updated? (1 vote)
- Those are equatorial measurements. If you measure the circumference in the direction Eratosthenes did, which is north to south, the distance is less due to the Earth not being a perfect sphere, bulging at the equator.
You can see both circumferences listed here: https://en.wikipedia.org/wiki/Earth (4 votes)
The circumference of the Earth was already calculated in ancient Egypt and it turned out to be rather accurate considering the conditions of the time.
- I agree; the Greeks were just stubborn and didn't want to believe that, or the information wasn't passed on to the Greeks. (1 vote)
- How do you make a catalog of stars, and how would you know if a star is the same one as before? (1 vote)
- While stars move, their patterns will remain the same. And that's how we have constellations!(1 vote)
- His figures would have convinced me that the earth was round mainly due to the fact that the curvature of the earth could be proven at the time, so why would others still believe anything but proven facts?(1 vote)
- I probably would just believe that the earth's surface is just uneven, and the earth is actually flat, and also that the pole was not oriented correctly.(1 vote)
- How did they exchange the information about the stick shadows at the same time?
I mean, they did not have a phone to say "hey, now there is no shadow on my obelisk"
and the other guy is like "no waaay, we still got a shadow coming from our obelisk ... how is this possible!" (1 vote)
- He didn't have to exchange information. He already knew that the sun would shine straight down the well at noon on the summer solstice in Syene. He simply needed to measure the angle of the Sun at that same time in Alexandria.(1 vote) |
A black hole is an object in the Universe whose gravity is so extreme that escape from it is impossible. Most black holes have been discovered through the effects of their gravitational fields, and scientists have so far been unable to determine a structure for black holes because no particles escape from them.
Since black holes were discovered, scientists have been investigating their structure and the ways they are created. Knowing that the density of a black hole is extremely high, comparable to that of protons (about 10¹⁸ kg/m³), we can conclude that a black hole contains a compact collection of protons and neutrons capable of producing such a high density. Taking into account that a black hole has a surrounding region, we can imagine this region as an ocean of electrons, protons and neutrons. One can also picture the black hole as an extremely large atom whose nucleus is made of protons and neutrons and which is surrounded by that ocean of electrons, protons and neutrons.
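As a check on that proton figure, taking the standard proton mass of 1.67 × 10⁻²⁷ kg and a proton radius of roughly 0.84 fm (an approximate reference value) gives

\[
\rho_{\text{proton}} \;\approx\; \frac{m_p}{\tfrac{4}{3}\pi r^3} \;=\; \frac{1.67\times10^{-27}\ \text{kg}}{\tfrac{4}{3}\pi\,(0.84\times10^{-15}\ \text{m})^3} \;\approx\; 7\times10^{17}\ \text{kg/m}^3,
\]

which is indeed of the order of 10¹⁸ kg/m³.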
Another mystery about black holes is their different types. Considering that our universe is about 14 billion years old and that there are galaxies roughly 13 billion years old, it can be concluded that the black holes at their cores were formed at the beginning of the Big Bang. In fact, initial black holes are a part of the initial Big Bang, which had the ability to create black holes with a density of about 10²⁰ kg/m³. To explain further, we note the following:
A. Objects whose density is between zero and 10⁶ kg/m³. These include all types of elements and atoms.
B. Matter whose density is between 10¹⁴ and 10²⁰ kg/m³, such as black holes, white dwarfs and magnetars.
C. The Big Bang globe, whose density is about 10⁴² kg/m³.
Accordingly, there must be matter whose density lies between 10²⁰ and 10⁴⁰ kg/m³, which is so far missing and unknown to us. Therefore, there are components separated from the Big Bang that have a density of 10⁴⁰ kg/m³. During the explosion of the Big Bang, all the different types of matter, including black holes, were created. In fact, the explosion of the Big Bang could create three types of black holes:
1. Regular black hole: black holes with an average density of 10²⁰ kg/m³.
2. Super black hole: black holes whose average density is about 10²⁶ kg/m³.
3. Meta black hole: black holes whose average density is about 10³² kg/m³.
The Milky Way galaxy has a regular black hole at its center, the Andromeda galaxy has a super black hole at its center, and the central galaxy of the Pleiades has a Meta black hole at its center.
Therefore, black holes can be formed through two possible processes: the explosion of the Big Bang, or the death of stars with sufficient mass. When a star explodes, a shock is created and the released energy throws out the electrons of the star's atoms. As a result, a collection of protons and neutrons accumulates at its center and creates a dense mass known as a black hole. In fact, the central structure of black holes consists of a collection of protons and neutrons, while their outer structure can be a sea of electrons.
In simple terms, black holes can be thought of as cosmic devourers that absorb everything in their vicinity, including stars, particles, etc. Since "for every action in nature there is an equal and opposite reaction", the process of absorption on one side implies a corresponding output on the other side. Heavier particles such as protons and neutrons remain in the center of the black hole, while other particles like electrons and photons are ejected from the other side. This phenomenon of particles escaping from a black hole is known as a white hole. In fact, the balance of absorption and expulsion reactions creates stability and order within the black hole, ensuring its longevity and structural integrity. Given this natural structure of a black hole, a certain percentage of what it absorbs is ejected: approximately 40% of the output consists of photons, 30% of electrons, and a small amount of protons and neutrons. |
4.1 Writing and Balancing Chemical Equations - UNG
Write and balance chemical equations in molecular, total ionic, and net ionic formats. .... Use this interactive tutorial (http://openstaxcollege.org/l/16BalanceEq) for ..... the ion's formula, guideline 4 may then be used to calculate the oxidation.
automatic 'balancing' of chemical equations - Universidad de Alcalá
Abstract - A computer program conceived as an aid to chemistry students is presented for the calculation of the stoichiometric coefficients of chemical equations ...
Chemistry, Seventh Edition - Woodbridge Township School District
3.8 Balancing Chemical Equations 98 ... 4.4 Types of Chemical Reactions 140. 4.5 Precipitation ..... ChemWork interactive assignments, end-of-chapter online ...... To calculate the moles of O3 produced, we must use the appropriate mole ratio:.
An Introduction to Chemical Reactions - An Introduction to Chemistry
4.1 Chemical Reactions and Chemical. Equations. 4.2 Solubility of Ionic. Compounds and .... Tip-off You are asked to balance a chemical equation. General Steps ...... This reaction takes place in the catalytic converter of your car. f . C6H14(l ) + ...
Balancing Chemical Equations
Suppose we wish to work out the chemical equation for this combustion. ... To make the calculation easier and as these are relative proportions, we can let X4 = 1 ... http://www.mathcentre.ac.uk/ [Accessed: 19/5/15] - an online drop-in centre for ...
Balancing Chemical Equations Using Matrices.pdf
Quantifying Chemical Reactions
Balance equations for basic chemical reactions. 3. Calculate chemical quantities from measurable quantities using mole ratios. 4. .... This interactive illustrates how different discoveries build upon, disprove, or reinforce previous theories.
Unit Title: Chemical Reactions - Colorado Department of Education
Mar 31, 2014 ... Balanced chemical equations illustrate the relationships between ... Calculate the amount(s) of reactant(s) and product(s) based on information given, using the law of ..... search online for additional examples of reaction types.
On Balancing Acidic and Basic Reduction/Oxidation Reactions with
Jun 8, 2015 ... Available online at http://pubs.sciepub.com/wjce/3/3/4 ... In this report we present a calculator-based, linear algebraic method that balances even the most rigorous ... The algebraic method for balancing chemical equations.
Chemistry - Textbooks Online
Calculation of empirical formula from quantitative analysis and percentage ... limitations - Stoichiometric equations - Balancing chemical equation in its ...
OpenStax CNX provides students with free online and low-cost print editions of .... 4.1 Writing and Balancing Chemical Equations . ...... provide important tools that help us calculate, interpret, describe, and generally make sense of the chemical ...
ENGR 1108-Dxx General Chemistry Bridge Course for Engineers
Access to homework site is purchased online directly from the ... 4. balance chemical equations and use stoichiometric relationships to calculate product and.
Stoichiometry of Formulas and Equations
3.3 Writing and Balancing Chemical. Equations. 3.4 Calculating Amounts of ..... it allows you to calculate the mass or number of entities of a substance in a sam-.
Introduction to Chemistry - Open Education Group
Chapter 1: Introduction to Chemistry & the Nature of Science. ..... solutions. 4. I can calculate concentration in terms of molarity and molality. 5. ... I can balance chemical reactions and recognize that the number of atoms in a chemical reaction ...
some basic concepts of chemistry - NCERT (ncert.nic.in)
calculate the mass per cent of different elements ... Chemical principles are important in diverse areas, such as: weather patterns ...... According to the law of conservation of mass, a balanced chemical equation has the same number of atoms ...
Applied Chemistry Chemistry 101 Laboratory Manual
EXPERIMENT 3: Determination of the Empirical Formula of a Compound. .... the balance. 3. Never place any chemical directly on the balance pan: always use a ... Calculate the Percent Error in your density determination knowing that the.
Chemistry 1A: General Chemistry - Las PositasChemistry 1A
Las Positas College, Chemistry 1A Lab Manual Fall 2012. Page 3 ... Experiment 13 Determination of Heat of Reaction. 97 .... Do not take reagent bottles to your desk or into the balance room. ..... To calculate formula of unknown hydrate:.
Preparatory Chemistry - University of Manitoba
Aug 13, 2015 ... Distance and Online Education Student Resources. Acknowledgements ... Interpret, write, and balance chemical equations. CHEM 0900 - ...
AP Chemistry 2014 Free-Response Questions - The College Board
AP Central is the official online home for the AP Program: apcentral.collegeboard .org. .... (i) write a balanced, net-ionic equation for the reaction, and ... (d) Calculate the number of moles of precipitate that is produced in the experiment.
The Free High School Science Texts: A Textbook for - Savannah
Jun 12, 2005 ... 10.2 Balancing Chemical Equations . ... 15.3 Balancing redox reactions . .... one can use it to calculate the probability of finding the electron at ...
Determining a set of independent chemical reactions - ScienceDirect
Received 9 November 1981, Revised 9 August 1982, Available online 6 August 2001 ... For the first approach, specification of a set of chemical reactions taking place ... that the number of reactions must be sufficient to allow one to calculate the ... and one element balance equation will result for each element; an additional ... |
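Several of these resources describe the algebraic (matrix) method for balancing equations. A minimal sketch of that idea, assuming the sympy library is available and using propane combustion (C3H8 + O2 -> CO2 + H2O) as the worked example:

```python
# A minimal sketch of the matrix (algebraic) balancing method: the unknown
# coefficients form the integer null space of the element-composition matrix.
from math import lcm           # Python 3.9+
from sympy import Matrix

# Columns: C3H8, O2, CO2, H2O (products entered with negative signs).
# Rows: atom balance for C, H and O.
A = Matrix([
    [3, 0, -1,  0],   # carbon
    [8, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
])

null = A.nullspace()[0]                   # one-dimensional null space for this reaction
scale = lcm(*(term.q for term in null))   # clear the denominators of the rational entries
coeffs = [term * scale for term in null]
print(coeffs)                             # [1, 5, 3, 4] -> C3H8 + 5 O2 -> 3 CO2 + 4 H2O
```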
Cartesian coordinates can be used to pinpoint where you are on a map or graph.
Using Cartesian Coordinates you mark a point on a graph by how far along and how far
up it is:
The point (12,5) is 12 units along, and 5 units up.
X and Y Axis
The left-right (horizontal) direction is commonly called X.
The up-down (vertical) direction is commonly called Y.
Put them together on a graph ...
... and you are ready to go
Where they cross over is the "0" point,
you measure everything from there.
The X Axis runs horizontally through zero
The Y Axis runs vertically through zero
Axis: The reference line from which distances are measured.
The plural of Axis is Axes, and is pronounced ax-eez
Point (6,4) is
6 units across (in the x direction), and
4 units up (in the y direction)
So (6,4) means:
Go along 6 and then go up 4 then "plot the dot".
And you can remember which axis is which by: x is A CROSS, so x is ACROSS the page.
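To see the idea in code, here is a minimal sketch (assuming the matplotlib library) that plots the two points used above and draws the axes through zero:

```python
# A minimal sketch: plotting the points (12,5) and (6,4) on x-y axes.
import matplotlib.pyplot as plt

points = [(12, 5), (6, 4)]

fig, ax = plt.subplots()
for x, y in points:
    ax.plot(x, y, "o")                 # "plot the dot"
    ax.annotate(f"({x},{y})", (x, y))  # label it with its coordinates
ax.axhline(0, color="gray")            # the x axis runs horizontally through zero
ax.axvline(0, color="gray")            # the y axis runs vertically through zero
ax.set_xlabel("x (across)")
ax.set_ylabel("y (up)")
plt.show()
```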
Like 2 Number Lines Put Together
It is like we put two Number Lines together, one going left-right, and the other going up-down.
As x increases, the point moves further right.
When x decreases, the point moves further to the left.
As y increases, the point moves further up.
When y decreases, the point moves further down.
The coordinates are always written in a certain order:
the horizontal distance first,
then the vertical distance.
This is called an "ordered pair" (a pair of numbers in a special order)
And usually the numbers are separated by a comma, and parentheses are put
around the whole thing like this:
Example: (3,2) means 3 units to the right, and 2 units up
Example: (0,5) means 0 units to the right, and 5 units up.
In other words, only 5 units up.
The point (0,0) is given the special name "The Origin", and is sometimes
given the letter "O".
Abscissa and Ordinate
You may hear the words "Abscissa" and "Ordinate" ... they are just
the x and y values:
Abscissa: the horizontal ("x") value in a pair of coordinates: how
far along the point is
Ordinate: the vertical ("y") value in a pair of coordinates: how far up or
down the point is
What About Negative Values of X and Y?
Just like with the Number Line, you can also have negative values: start at zero and head in the opposite direction.
Negative x goes to the left
Negative y goes down
So, for a negative number:
go backwards for x
go down for y
For example (-6,4) means:
go back along the x axis 6 then go up 4.
And (-6,-4) means:
go back along the x axis 6 then
go down 4.
When we include negative
values, the x and y axes divide
the space up into 4 pieces:
Quadrants I, II, III and IV
(They are numbered in
a counterclockwise direction)
In Quadrant I both x and y are positive, but ...
in Quadrant II x is negative (y is still positive),
in Quadrant III both x and y are negative, and
in Quadrant IV x is positive again, while y is negative.
Quadrant I: x Positive, y Positive (example: the point (3,2))
Quadrant II: x Negative, y Positive
Quadrant III: x Negative, y Negative
Quadrant IV: x Positive, y Negative
Example: The point "A" (3,2) is 3 units along, and 2 units up.
Both x and y are positive, so that point is in Quadrant I.
Example: The point "C" (-2,-1) is 2 units along in the negative direction, and 1 unit down (i.e. negative 1 in the y direction).
Both x and y are negative, so that point is in Quadrant III.
Note: The word Quadrant comes from quad, meaning four. For example, four babies born at one birth are called quadruplets, a four-legged animal is a quadruped, and a quadrilateral is a four-sided shape.
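A small sketch that turns the sign rules above into a function (an illustrative helper, not part of the original text):

```python
# Classify a point into Quadrant I-IV from the signs of its coordinates.
# Points that lie on an axis belong to no quadrant.
def quadrant(x: float, y: float) -> str:
    if x == 0 or y == 0:
        return "on an axis"
    if x > 0 and y > 0:
        return "I"
    if x < 0 and y > 0:
        return "II"
    if x < 0 and y < 0:
        return "III"
    return "IV"  # x > 0 and y < 0

print(quadrant(3, 2))    # I
print(quadrant(-2, -1))  # III
```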
Dimensions: 1, 2, 3 and more ...
Think about this: the Number Line can only go left or right, so any position needs just one number.
Cartesian coordinates can go left-right and up-down, so any position needs two numbers.
How do we locate a spot in the real world (such as the tip of your nose)? We need to know left-right, up-down, and also forward-backward: that is three numbers, or 3 dimensions!
Cartesian coordinates can be used for locating points in 3 dimensions in just the same way. For example, the point (2, 4, 5) is shown in three-dimensional Cartesian coordinates.
In fact, this idea can be continued into four dimensions and more - I just can't work out how to illustrate that for you.
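A minimal 3-D sketch of that last idea (again assuming matplotlib), locating the point (2, 4, 5) with three coordinates instead of two:

```python
# Plot a single point in three-dimensional Cartesian coordinates.
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # three axes: x, y and z
ax.scatter(2, 4, 5)
ax.text(2, 4, 5, "(2, 4, 5)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()
```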
The rectangular coordinate system is also known as the Cartesian coordinate system after Rene Descartes, who popularized its use in analytic geometry. The rectangular coordinate system is based on a grid, and every point on the plane can be identified by unique x and y coordinates, just as any point on the Earth can be identified by giving its latitude and longitude.
Locations on the grid are measured relative to a fixed point, called the origin, and are measured according to the distance along a pair of axes. The x and y axes are just like the number line, with positive distances to the right and negative to the left in the case of the x axis, and positive distances measured upwards and negative down for the y axis. Any displacement away from the origin can be constructed by moving a specified distance in the x direction and then another distance in the y direction. Think of it as if you were giving directions to someone by saying something like “go three blocks East and then 2 blocks North.”
We specify the location of a point by first giving its x coordinate (the left or right displacement from the origin), and then the y coordinate (the up or down displacement from the origin). Thus, every point on the plane can be identified by a pair of numbers (x, y), called its coordinates.
Sometimes we just want to know what general part of the graph we are talking about. The axes naturally divide the plane up into quarters. We call these quadrants, and number them from one to four. Notice that the numbering begins in the upper right quadrant and continues around in the counterclockwise direction. Notice also that each quadrant can be identified by the unique combination of positive and negative signs for the coordinates of a point in that quadrant.
Sustainable agriculture is farming in sustainable ways that meet society's present food and textile needs without compromising the ability of current or future generations to meet their needs. It can be based on an understanding of ecosystem services. There are many methods to increase the sustainability of agriculture. When developing agriculture within sustainable food systems, it is important to develop flexible business processes and farming practices.
Agriculture has an enormous environmental footprint, playing a significant role in causing climate change, water scarcity, land degradation, deforestation and other processes; it is simultaneously causing environmental changes and being impacted by these changes. Developing sustainable food systems contributes to the sustainability of the human population. For example, one of the best ways to mitigate climate change is to create sustainable food systems based on sustainable agriculture. Sustainable agriculture provides a potential solution to enable agricultural systems to feed a growing population under changing environmental conditions.
In 1907 Franklin H. King in his book Farmers of Forty Centuries discussed the advantages of sustainable agriculture, and warned that such practices would be vital to farming in the future. The phrase 'sustainable agriculture' was reportedly coined by the Australian agronomist Gordon McClymont. The term became popular in the late 1980s.
There was an international symposium on sustainability in horticulture by the International Society of Horticultural Science at the International Horticultural Congress in Toronto in 2002. At the following conference at Seoul in 2006, the principles were discussed further.
In the US National Agricultural Research, Extension, and Teaching Policy Act of 1977, the term "sustainable agriculture" is defined as an integrated system of plant and animal production practices having a site-specific application that will, over the long term:
- satisfy human food and fiber needs
- enhance environmental quality and the natural resource base upon which the agriculture economy depends
- make the most efficient use of nonrenewable resources and on-farm resources and integrate, where appropriate, natural biological cycles and controls
- sustain the economic viability of farm operations
- enhance the quality of life for farmers and society as a whole.
The British scholar Jules Pretty has stated several key principles associated with sustainability in agriculture:
- The incorporation of biological and ecological processes such as nutrient cycling, soil regeneration, and nitrogen fixation into agricultural and food production practices.
- Using decreased amounts of non-renewable and unsustainable inputs, particularly environmentally harmful ones.
- Using the expertise of farmers to both productively work the land as well as to promote the self-reliance and self-sufficiency of farmers.
- Solving agricultural and natural resource problems through the cooperation and collaboration of people with different skills. The problems tackled include pest management and irrigation.
It “considers long-term as well as short-term economics because sustainability is readily defined as forever, that is, agricultural environments that are designed to promote endless regeneration”. It balances the need for resource conservation with the needs of farmers pursuing their livelihood.
It is considered to be reconciliation ecology, accommodating biodiversity within human landscapes.
There is a debate on the definition of sustainability regarding agriculture. The definition could be characterized by two different approaches: an ecocentric approach and a technocentric approach. The ecocentric approach emphasizes no- or low-growth levels of human development, and focuses on organic and biodynamic farming techniques with the goal of changing consumption patterns, and resource allocation and usage. The technocentric approach argues that sustainability can be attained through a variety of strategies, from the view that state-led modification of the industrial system like conservation-oriented farming systems should be implemented, to the argument that biotechnology is the best way to meet the increasing demand for food.
One can look at the topic of sustainable agriculture through two different lenses: multifunctional agriculture and ecosystem services. Both approaches are similar, but look at the function of agriculture differently. Those that employ the multifunctional agriculture philosophy focus on farm-centered approaches, and define function as being the outputs of agricultural activity. The central argument of multifunctionality is that agriculture is a multifunctional enterprise with other functions aside from the production of food and fiber. These functions include renewable resource management, landscape conservation and biodiversity. The ecosystem service-centered approach posits that individuals and society as a whole receive benefits from ecosystems, which are called "ecosystem services". In sustainable agriculture, the services that ecosystems provide include pollination, soil formation, and nutrient cycling, all of which are necessary functions for the production of food.
It is also claimed sustainable agriculture is best considered as an ecosystem approach to agriculture, called agroecology.
Most agricultural professionals agree that there is a "moral obligation to pursue [the] goal [of] sustainability." The major debate comes from what system will provide a path to that goal because if an unsustainable method is used on a large scale it will have a massive negative effect on the environment and human population.
Factors affecting sustainability
Practices that can cause long-term damage to soil include excessive tilling of the soil (leading to erosion) and irrigation without adequate drainage (leading to salinization).
Conservation farming in Zambia
The most important factors for a farming site are climate, soil, nutrients and water resources. Of the four, water and soil conservation are the most amenable to human intervention.
When farmers grow and harvest crops, they remove some nutrients from the soil. Without replenishment, the land suffers from nutrient depletion and becomes either unusable or suffers from reduced yields. Sustainable agriculture depends on replenishing the soil while minimizing the use or need of non-renewable resources, such as natural gas or mineral ores.
A farm that can "produce perpetually", yet has negative effects on environmental quality elsewhere is not sustainable agriculture. An example of a case in which a global view may be warranted is the application of fertilizer or manure, which can improve the productivity of a farm but can pollute nearby rivers and coastal waters (eutrophication). The other extreme can also be undesirable, as the problem of low crop yields due to exhaustion of nutrients in the soil has been related to rainforest destruction. In Asia, the specific amount of land needed for sustainable farming is about 12.5 acres, which includes land for animal fodder, cereal production as a cash crop, and other food crops. In some cases, a small unit of aquaculture is included (AARI-1996).
Possible sources of nitrates that would, in principle, be available indefinitely, include:
- recycling crop waste and livestock or treated human manure
- growing legume crops and forages such as peanuts or alfalfa that form symbioses with nitrogen-fixing bacteria called rhizobia
- industrial production of nitrogen by the Haber process uses hydrogen, which is currently derived from natural gas (but this hydrogen could instead be made by electrolysis of water using renewable electricity)
- genetically engineering (non-legume) crops to form nitrogen-fixing symbioses or fix nitrogen without microbial symbionts.
The last option was proposed in the 1970s, but is only gradually becoming feasible. Sustainable options for replacing other nutrient inputs such as phosphorus and potassium are more limited.
Other options include long-term crop rotations, returning to natural cycles that annually flood cultivated lands (returning lost nutrients) such as the flooding of the Nile, the long-term use of biochar, and use of crop and livestock landraces that are adapted to less than ideal conditions such as pests, drought, or lack of nutrients. Crops that require high levels of soil nutrients can be cultivated in a more sustainable manner with appropriate fertilizer management practices.
Phosphate is a primary component in fertilizer. It is the second most important nutrient for plants after nitrogen, and is often a limiting factor. It is important for sustainable agriculture as it can improve soil fertility and crop yields. Phosphorus is involved in all major metabolic processes including photosynthesis, energy transfer, signal transduction, macromolecular biosynthesis, and respiration. It is needed for root ramification and strength and seed formation, and can increase disease resistance.
Phosphorus is found in the soil in both inorganic and organic forms and makes up approximately 0.05% of soil biomass. Phosphorus fertilizers are the main input of inorganic phosphorus in agricultural soils and approximately 70%–80% of phosphorus in cultivated soils is inorganic. Long-term use of phosphate-containing chemical fertilizers causes eutrophication and depletes soil microbial life, so people have looked to other sources.
Phosphorus fertilizers are manufactured from rock phosphate. However, rock phosphate is a non-renewable resource and it is being depleted by mining for agricultural use: peak phosphorus will occur within the next few hundred years, or perhaps earlier.
Land degradation is becoming a severe global problem. According to the Intergovernmental Panel on Climate Change: "About a quarter of the Earth's ice-free land area is subject to human-induced degradation (medium confidence). Soil erosion from agricultural fields is estimated to be currently 10 to 20 times (no tillage) to more than 100 times (conventional tillage) higher than the soil formation rate (medium confidence)." Over a billion tonnes of southern Africa's soil are being lost to erosion annually, which if continued will result in halving of crop yields within thirty to fifty years. Improper soil management is threatening the ability to grow sufficient food. Intensive agriculture reduces the carbon level in soil, impairing soil structure, crop growth and ecosystem functioning, and accelerating climate change.
Soil management techniques include no-till farming, keyline design and windbreaks to reduce wind erosion, reincorporation of organic matter into the soil, reducing soil salinization, and preventing water run-off.
As the global population increases and demand for food increases, there is pressure on land as a resource. In land-use planning and management, considering the impacts of land-use changes on factors such as soil erosion can support long-term agricultural sustainability, as shown by a study of Wadi Ziqlab, a dry area in the Middle East where farmers graze livestock and grow olives, vegetables, and grains.
Looking back over the 20th century shows that for people in poverty, following environmentally sound land practices has not always been a viable option due to many complex and challenging life circumstances. Currently, increased land degradation in developing countries may be connected with rural poverty among smallholder farmers who are forced into unsustainable agricultural practices out of necessity.
Converting large parts of the land surface to agriculture has severe environmental and health consequences. For example, it leads to a rise in zoonotic diseases like coronavirus disease 2019, by degrading natural buffers between humans and animals, reducing biodiversity and creating large groups of genetically similar animals.
Land is a finite resource on Earth. Although expansion of agricultural land can decrease biodiversity and contribute to deforestation, the picture is complex; for instance, a study examining the introduction of sheep by Norse settlers (Vikings) to the Faroe Islands of the North Atlantic concluded that, over time, the fine partitioning of land plots contributed more to soil erosion and degradation than grazing itself.
The Food and Agriculture Organization of the United Nations estimates that in coming decades, cropland will continue to be lost to industrial and urban development, along with reclamation of wetlands, and conversion of forest to cultivation, resulting in the loss of biodiversity and increased soil erosion.
In modern agriculture, energy is used in on-farm mechanisation, food processing, storage, and transportation processes. It has therefore been found that energy prices are closely linked to food prices. Oil is also used as an input in agricultural chemicals. The International Energy Agency projects higher prices of non-renewable energy resources as a result of fossil fuel resources being depleted. Higher energy prices may therefore decrease global food security unless action is taken to 'decouple' fossil fuel energy from food production, with a move towards 'energy-smart' agricultural systems including renewable energy.
The use of solar-powered irrigation in Pakistan is said to provide a closed system for agricultural water irrigation.
The environmental cost of transportation could be avoided if people use local products.
In some areas sufficient rainfall is available for crop growth, but many other areas require irrigation. For irrigation systems to be sustainable, they require proper management (to avoid salinization) and must not use more water from their source than is naturally replenishable. Otherwise, the water source effectively becomes a non-renewable resource. Improvements in water well drilling technology and submersible pumps, combined with the development of drip irrigation and low-pressure pivots, have made it possible to regularly achieve high crop yields in areas where reliance on rainfall alone had previously made successful agriculture unpredictable. However, this progress has come at a price. In many areas, such as the Ogallala Aquifer, the water is being used faster than it can be replenished.
According to the UC Davis Agricultural Sustainability Institute, several steps must be taken to develop drought-resistant farming systems even in "normal" years with average rainfall. These measures include both policy and management actions:
- improving water conservation and storage measures
- providing incentives for selection of drought-tolerant crop species
- using reduced-volume irrigation systems
- managing crops to reduce water loss
- not planting crops at all.
Indicators for sustainable water resource development include the average annual flow of rivers from rainfall, flows from outside a country, the percentage of water coming from outside a country, and gross water withdrawal.
Costs, such as environmental problems, not covered in traditional accounting systems (which take into account only the direct costs of production incurred by the farmer) are known as externalities.
Netting studied sustainability and intensive agriculture in smallholder systems through history.
There are several studies incorporating externalities such as ecosystem services, biodiversity, land degradation, and sustainable land management in economic analysis. These include The Economics of Ecosystems and Biodiversity study and the Economics of Land Degradation Initiative which seek to establish an economic cost-benefit analysis on the practice of sustainable land management and sustainable agriculture.
Triple bottom line frameworks include social and environmental dimensions alongside a financial bottom line. A sustainable future can be feasible if growth in material consumption and population is slowed down and if there is a drastic increase in the efficiency of material and energy use. To make that transition, long- and short-term goals will need to be balanced, enhancing equity and quality of life.
Countries' evaluation of trends in the use of selected management practices and approaches
Other practices include growing a diverse number of perennial crops in a single field, each of which grows in a separate season so as not to compete with the others for natural resources. This system would result in increased resistance to diseases and decreased effects of erosion and loss of nutrients in soil. Nitrogen fixation from legumes, for example, used in conjunction with plants that rely on nitrate from soil for growth, helps to allow the land to be reused annually. Legumes will grow for a season and replenish the soil with ammonium and nitrate, and the next season other plants can be seeded and grown in the field in preparation for harvest.
Sustainable methods of weed management may help reduce the development of herbicide-resistant weeds. Crop rotation may also replenish nitrogen if legumes are used in the rotations and may also use resources more efficiently.
There are also many ways to practice sustainable animal husbandry. Some of the tools to grazing management include fencing off the grazing area into smaller areas called paddocks, lowering stock density, and moving the stock between paddocks frequently.
Increased production is a goal of intensification. Sustainable intensification encompasses specific agricultural methods that increase production and at the same time help improve environmental outcomes. The desired outcomes of the farm are achieved without the need for more land cultivation or destruction of natural habitat; the system performance is upgraded with no net environmental cost. Sustainable intensification has become a priority for the United Nations. Sustainable intensification differs from prior intensification methods by specifically placing importance on broader environmental outcomes. By 2018, it was estimated that in 100 nations a combined total of 163 million farms used sustainable intensification, covering 453 million ha of agricultural land, equivalent to about 29% of farms worldwide. In light of concerns about food security, human population growth and dwindling land suitable for agriculture, sustainable intensive farming practices are needed to maintain high crop yields, while maintaining soil health and ecosystem services. The capacity for ecosystem services to be strong enough to allow a reduction in use of non-renewable inputs whilst maintaining or boosting yields has been the subject of much debate. Recent work in irrigated rice production systems of East Asia has suggested that, in relation to pest management at least, promoting the ecosystem service of biological control using nectar plants can reduce the need for insecticides by 70% whilst delivering a 5% yield advantage compared with standard practice.
Vertical farming is a concept with the potential advantages of year-round production, isolation from pests and diseases, controllable resource recycling and reduced transportation costs.
Water efficiency can be improved by reducing the need for irrigation and using alternative methods. Such methods include researching drought-resistant crops, monitoring plant transpiration, and reducing soil evaporation.
Drought-resistant crops have been researched extensively as a means to overcome the issue of water shortage. They are modified genetically so they can adapt to an environment with little water. This is beneficial as it reduces the need for irrigation and helps conserve water. Although they have been extensively researched, significant results have not been achieved, as most of the successful species will have no overall impact on water conservation. However, some grains like rice, for example, have been successfully genetically modified to be drought resistant.
Soil and nutrients
Soil amendments include using compost from recycling centers. Using compost from yard and kitchen waste uses available resources in the area.
Abstinence from soil tillage before planting and leaving the plant residue after harvesting reduces soil water evaporation; it also serves to prevent soil erosion.
Crop residues left covering the surface of the soil may result in reduced evaporation of water, a lower surface soil temperature, and reduction of wind effects.
A way to make rock phosphate more effective is to add microbial inoculants such as phosphate-solubilizing microorganisms, known as PSMs, to the soil. These solubilize phosphorus already in the soil and use processes like organic acid production and ion exchange reactions to make that phosphorus available for plants. Experimentally, these PSMs have been shown to increase crop growth in terms of shoot height, dry biomass and grain yield.
Phosphorus uptake is even more efficient with the presence of mycorrhizae in the soil. Mycorrhiza is a type of mutualistic symbiotic association between plants and fungi, which are well-equipped to absorb nutrients, including phosphorus, in soil. These fungi can increase nutrient uptake in soil where phosphorus has been fixed by aluminum, calcium, and iron. Mycorrhizae can also release organic acids that solubilize otherwise unavailable phosphorus.
Pests and weeds
Soil steaming can be used as an alternative to chemicals for soil sterilization. Different methods are available to induce steam into the soil to kill pests and increase soil health.
Solarizing is based on the same principle, used to increase the temperature of the soil to kill pathogens and pests.
Certain plants can be cropped for use as biofumigants, "natural" fumigants, releasing pest suppressing compounds when crushed, ploughed into the soil, and covered in plastic for four weeks. Plants in the Brassicaceae family release large amounts of toxic compounds such as methyl isothiocyanates.
Sustainability may also involve crop rotation. Crop rotation and cover crops prevent soil erosion, by protecting topsoil from wind and water. Effective crop rotation can reduce pest pressure on crops and replenish soil nutrients. This reduces the need for fertilizers and pesticides. Increasing the diversity of crops by introducing new genetic resources can increase yields. Perennial crops reduce the need for tillage and thus help mitigate soil erosion, and may sometimes tolerate drought better, increase water quality and help increase soil organic matter. There are research programs attempting to develop perennial substitutes for existing annual crops, such as replacing wheat with the wild grass Thinopyrum intermedium, or possible experimental hybrids of it and wheat.
Sustainability, external inputs needed, and labour requirements of selected plant disease management practices of traditional farmers.
Often thought of as inherently destructive, slash-and-burn or slash-and-char shifting cultivation have been practised in the Amazon for thousands of years.
Some traditional systems combine polyculture with sustainability. In South-East Asia, rice-fish systems on rice paddies have raised freshwater fish as well as rice, producing an additional product and reducing eutrophication of neighbouring rivers. A variant in Indonesia combines rice, fish, ducks and water fern; the ducks eat the weeds that would otherwise limit rice growth, saving labour and herbicides, while the duck and fish manure substitute for fertilizer.
Raised field agriculture has been recently revived in certain areas of the world, such as the Altiplano region in Bolivia and Peru. This has resurged in the form of traditional Waru Waru raised fields, which create nutrient-rich soil in regions where such soil is scarce. This method is extremely productive and has recently been utilized by indigenous groups in the area and the nearby Amazon Basin to make use of lands that have been historically hard to cultivate.
In Ohio, some farmers that could not buy land good for agriculture restored soil considered unsuitable for any agricultural activity with traditional methods.
Growing coffee under shade trees is a form of polyculture in imitation of natural ecosystems. Trees provide resources for the coffee plants such as shade, nutrients, and soil structure; the farmers harvest coffee and timber.
The use of available city space (e.g., rooftop gardens, community gardens, garden sharing, and other forms of urban agriculture) may be able to contribute to sustainability.
There is limited evidence that polyculture may contribute to sustainable agriculture. A meta-analysis of a number of polycrop studies found that, in certain two-crop systems combining a single cash crop with a cover crop, predator insect biodiversity was higher than in conventional systems at comparable yields.
One approach to sustainability is to develop polyculture systems using perennial crop varieties. Such varieties are being developed for rice, wheat, sorghum, barley, and sunflowers. If these can be combined in polyculture with a leguminous cover crop such as alfalfa, fixation of nitrogen will be added to the system, reducing the need for fertilizer and pesticides.
Organic agriculture can be defined as:
an integrated farming system that strives for sustainability, the enhancement of soil fertility and biological diversity whilst, with rare exceptions, prohibiting synthetic pesticides, antibiotics, synthetic fertilizers, genetically modified organisms, and growth hormones.
Some claim organic agriculture may produce the most sustainable products available for consumers in the US, where no other alternatives exist, although the focus of the organics industry is not sustainability.
In 2018, sales of organic products in the USA reached $52.5 billion. According to a large survey, two thirds of Americans consume organic products at least occasionally.
Regenerative agriculture is a conservation and rehabilitation approach to food and farming systems. It focuses on topsoil regeneration, increasing biodiversity, improving the water cycle, enhancing ecosystem services, supporting biosequestration, increasing resilience to climate change, and strengthening the health and vitality of farm soil. Practices include recycling as much farm waste as possible and adding composted material from sources outside the farm.
Permaculture is an approach to land management and philosophy that adopts arrangements observed in flourishing natural ecosystems. It includes a set of design principles derived using whole systems thinking. It uses these principles in fields such as regenerative agriculture, rewilding, and community resilience. Permaculture was originally a portmanteau of "permanent agriculture", but was later adjusted to "permanent culture", to incorporate necessary social aspects as inspired by Masanobu Fukuoka's natural farming. The term was coined by Bill Mollison and David Holmgren in 1978, who formulated the concept in opposition to Western industrialized methods and in congruence with Indigenous or traditional knowledge.
Permaculture has many branches including ecological design, ecological engineering, regenerative design, environmental design, and construction. It also includes integrated water resources management that develops sustainable architecture, and regenerative and self-maintained habitat and agricultural systems modeled from natural ecosystems.
Permaculture has been implemented and gained widespread visibility throughout the world as an agricultural and architectural design system and as a guiding life principle or philosophy. Much of its success has been attributed to the role of Indigenous knowledge and traditions, which the practice itself is rooted in.
In turn, the rise of permaculture has revalidated Indigenous knowledge in circles where it was previously devalued.
Rural economic development
In 2007, the United Nations reported on "Organic Agriculture and Food Security in Africa", stating that using sustainable agriculture could be a tool in reaching global food security without expanding land usage and while reducing environmental impacts. Evidence provided by developing nations since the early 2000s indicates that when people in their communities are not factored into the agricultural process, serious harm is done. The social scientist Charles Kellogg has stated that, "In a final effort, exploited people pass their suffering to the land." Sustainable agriculture means the ability to permanently and continuously "feed its constituent populations."
There are many opportunities that can increase farmers' profits, improve communities, and continue sustainable practices. For example, in Uganda genetically modified organisms were originally illegal; however, under the stress of the banana crisis in Uganda, where banana bacterial wilt had the potential to wipe out 90% of the yield, the country decided to explore GMOs as a possible solution. The government issued the National Biotechnology and Biosafety Bill, which will allow scientists who are part of the National Banana Research Program to start experimenting with genetically modified organisms. This effort has the potential to help local communities, because a significant portion live off the food they grow themselves, and it will be profitable because the yield of their main produce will remain stable.
Not all regions are suitable for agriculture. The technological advancement of the past few decades has allowed agriculture to develop in some of these regions. For example, Nepal has built greenhouses to deal with its high altitude and mountainous regions. Greenhouses allow for greater crop production and also use less water since they are closed systems.
Desalination techniques can turn salt water into fresh water which allows greater access to water for areas with a limited supply. This allows the irrigation of crops without decreasing natural fresh water sources. While desalination can be a tool to provide water to areas that need it to sustain agriculture, it requires money and resources. Regions of China have been considering large scale desalination in order to increase access to water, but the current cost of the desalination process makes it impractical.
Women working in sustainable agriculture come from numerous backgrounds, ranging from academia to farm labour. In the past 30 years (1978-2007) in the United States the number of women farm operators has tripled. Today, women operate 14 percent of farms, compared to five percent in 1978. Much of the growth is due to women farming outside the "male dominated field of conventional agriculture".
Growing your own food
The practice of growing food in the backyards of houses, schools, etc., by families or by communities became widespread in the US at the time of World War One, the Great Depression and World War Two, so that at one point 40% of the vegetables in the USA were produced this way. The practice became popular again during the COVID-19 pandemic. This method permits growing food in a relatively sustainable way and at the same time makes it easier for poor people to obtain food.
Delaware Valley University's "Roth Center for Sustainable Agriculture", located in Montgomery County, Pennsylvania.
Sustainable agriculture is a topic in international policy concerning its potential to reduce environmental risks. In 2011, the Commission on Sustainable Agriculture and Climate Change, as part of its recommendations for policymakers on achieving food security in the face of climate change, urged that sustainable agriculture must be integrated into national and international policy. The Commission stressed that increasing weather variability and climate shocks will negatively affect agricultural yields, necessitating early action to drive change in agricultural production systems towards increasing resilience. It also called for dramatically increased investments in sustainable agriculture in the next decade, including in national research and development budgets, land rehabilitation, economic incentives, and infrastructure improvement.
In May 2020 the European Union published a program named "From Farm to Fork" for making its agriculture more sustainable. On the official page of the program, Frans Timmermans, the Executive Vice-President of the European Commission, is quoted as saying:
"The coronavirus crisis has shown how vulnerable we all are, and how important it is to restore the balance between human activity and nature. At the heart of the Green Deal the Biodiversity and Farm to Fork strategies point to a new and better balance of nature, food systems, and biodiversity; to protect our people's health and well-being, and at the same time to increase the EU's competitiveness and resilience. These strategies are a crucial part of the great transition we are embarking upon."
The program includes the following targets:
In 2016, the Chinese government adopted a plan to reduce China's meat consumption by 50% in order to achieve a more sustainable and healthy food system.
In the United States, the federal Natural Resources Conservation Service provides technical and financial assistance for those interested in pursuing natural resource conservation along with production agriculture. Programs like SARE and the China-UK Sustainable Agriculture Innovation Network help promote research on sustainable agriculture practices and provide a framework for agriculture and climate change, respectively.
In 2020 Mexico banned the domestic growing of GMO corn and announced a future ban on imports by 2024. According to the announcement, the use of glyphosate will also be banned by the same year.
Among 63 farmers interviewed in Tasmania, most accepted the notion that climate change was happening, but just a small segment believed that it was human-related. Few farmers thought that the issue of climate change was significant enough to justify reducing what was causing it. Some of the farmers were worried about how a suggested carbon dioxide reduction plan would affect the agricultural sector and were suspicious of numerous government-related activities, seeing them as methods by which the government could punish producers. The author James Howard Kunstler claims almost all modern technology is bad and that there cannot be sustainability unless agriculture is done in ancient traditional ways. Efforts toward more sustainable agriculture are supported in the sustainability community; however, these are often viewed only as incremental steps and not as an end. Some foresee a true sustainable steady state economy that may be very different from today's: greatly reduced energy usage, minimal ecological footprint, fewer consumer packaged goods, local purchasing with short food supply chains, little processed food, more home and community gardens, etc.
According to Michael Carolan, a major barrier to the adoption of sustainable agriculture is its apparent lack of benefits. Many benefits are not visible or immediately evident, and effecting changes such as lower rates of soil and nutrient loss, improved soil structure and higher levels of beneficial microorganisms takes time. In conventional agriculture the benefits are easily visible, with no weeds, pests, etc., while the costs to the soil and the ecosystems around it are hidden and "externalised".
This article incorporates text from a free content work, licensed under the CC BY-SA IGO 3.0 license (license statement/permission on Wikimedia Commons). Text taken from The State of the World's Biodiversity for Food and Agriculture − In Brief, FAO.
- ^ "What is sustainable agriculture | Agricultural Sustainability Institute". asi.ucdavis.edu. 11 December 2018. Retrieved 2019-01-20.
- ^ "Introduction to Sustainable Agriculture". Ontario Ministry of Agriculture, Food and Rural Affairs. 2016. Retrieved 10 October 2019.
- ^ Brown, L. R. (2012). World on the Edge. Earth Policy Institute. Norton. ISBN 978-1-136-54075-2.
- ^ a b Rockström, Johan; Williams, John; Daily, Gretchen; Noble, Andrew; Matthews, Nathanial; Gordon, Line; Wetterstrand, Hanna; DeClerck, Fabrice; Shah, Mihir (2016-05-13). "Sustainable intensification of agriculture for human prosperity and global sustainability". Ambio. 46 (1): 4–17. doi:10.1007/s13280-016-0793-6. PMC 5226894. PMID 27405653.
- ^ King, Franklin H. (2004). Farmers of forty centuries. Retrieved 20 February 2016.
- ^ Rural Science Graduates Association (2002). "In Memorium - Former Staff and Students of Rural Science at UNE". University of New England. Archived from the original on 6 June 2013. Retrieved 21 October 2012.
- ^ Kirschenmann, Frederick. A Brief History of Sustainable Agriculture, editor's note by Carolyn Raffensperger and Nancy Myers. The Networker, vol. 9, no. 2, March 2004.
- ^ Bertschinger, L. et al. (eds) (2004). Conclusions from the 1st Symposium on Sustainability in Horticulture and a Declaration for the 21st Century. In: Proc. XXVI IHC – Sustainability of Horticultural Systems. Acta Hort. 638, ISHS, pp. 509-512. Retrieved on: 2009-03-16.
- ^ Lal, R. (2008). Sustainable Horticulture and Resource Management. In: Proc. XXVII IHC-S11 Sustainability through Integrated and Organic Horticulture. Eds.-in-Chief: R.K. Prange and S.D. Bishop. Acta Hort.767, ISHS, pp. 19-44.
- ^ a b c d e f "National Agricultural Research, Extension, and Teaching Policy Act of 1977" (PDF). US Department of Agriculture. 13 November 2002. This article incorporates text from this source, which is in the public domain.
- ^ a b c d e f Pretty, Jules N. (March 2008). "Agricultural sustainability: concepts, principles and evidence". Philosophical Transactions of the Royal Society of London B: Biological Sciences. 363 (1491): 447–465. doi:10.1098/rstb.2007.2163. ISSN 0962-8436. PMC 2610163. PMID 17652074.
- ^ Stenholm, Charles; Waggoner, Daniel (February 1990). "Low-input, sustainable agriculture: Myth or method?". Journal of Soil and Water Conservation. 45 (1): 14. Retrieved 3 March 2016.
- ^ Tomich, Tom (2016). Sustainable Agriculture Research and Education Program (PDF). Davis, California: University of California.
- ^ Chrispeels, M. J.; Sadava, D. E. (1994). Farming Systems: Development, Productivity, and Sustainability. Plants, Genes, and Agriculture. Jones and Bartlett. pp. 25–57. ISBN 978-0867208719.
- ^ a b Robinson, Guy M. (2009-09-01). "Towards Sustainable Agriculture: Current Debates". Geography Compass. 3 (5): 1757–1773. doi:10.1111/j.1749-8198.2009.00268.x. ISSN 1749-8198.
- ^ a b c Huang, Jiao; Tichit, Muriel; Poulot, Monique; Darly, Ségolène; Li, Shuangcheng; Petit, Caroline; Aubry, Christine (2014-10-16). "Comparative review of multifunctionality and ecosystem services in sustainable agriculture". Journal of Environmental Management. 149: 138–147. doi:10.1016/j.jenvman.2014.10.020. PMID 25463579.
- ^ Renting, H.; Rossing, W.A.H.; Groot, J.C.J; Van der Ploeg, J.D.; Laurent, C.; Perraud, D.; Stobbelaar, D.J.; Van Ittersum, M.K. (2009-05-01). "Exploring multifunctional agriculture. A review of conceptual approaches and prospects for an integrative transitional framework". Journal of Environmental Management. 90: S112–S123. doi:10.1016/j.jenvman.2008.11.014. ISSN 0301-4797. PMID 19121889.
- ^ Tilman, David; Cassman, Kenneth G.; Matson, Pamela A.; Naylor, Rosamond; Polasky, Stephen (2002-08-08). "Agricultural sustainability and intensive production practices". Nature. 418 (6898): 671–677. Bibcode:2002Natur.418..671T. doi:10.1038/nature01014. PMID 12167873. S2CID 3016610.
- ^ Sandhu, Harpinder S.; Wratten, Stephen D.; Cullen, Ross (2010-02-01). "Organic agriculture and ecosystem services". Environmental Science & Policy. 13 (1): 1–7. doi:10.1016/j.envsci.2009.11.002. ISSN 1462-9011.
- ^ Altieri, Miguel A. (1995) Agroecology: The science of sustainable agriculture. Westview Press, Boulder, CO.
- ^ a b c d Stanislaus, Dundon (2009). "Sustainable Agriculture". Gale Virtual Reference Library.
- ^ "Scientists discover genetics of nitrogen fixation in plants - potential implications for future agriculture". News.mongabay.com. 2008-03-08. Retrieved 2013-09-10.
- ^ Proceedings of the National Academy of Sciences of the United States of America, March 25, 2008 vol. 105 no. 12 4928–4932
- ^ a b c d e Atekan, A.; Nuraini, Y.; Handayanto, E.; Syekhfani, S. (2014-07-07). "The potential of phosphate solubilizing bacteria isolated from sugarcane wastes for solubilizing phosphate". Journal of Degraded and Mining Lands Management. 1 (4): 175–182. doi:10.15243/jdmlm.2014.014.175.
- ^ a b Khan, Mohammad Saghir; Zaidi, Almas; Wani, Parvaze A. (2007-03-01). "Role of phosphate-solubilizing microorganisms in sustainable agriculture — A review" (PDF). Agronomy for Sustainable Development. 27 (1): 29–43. doi:10.1051/agro:2006011. ISSN 1774-0746. S2CID 22096957.
- ^ a b Cordell, Dana; White, Stuart (2013-01-31). "Sustainable Phosphorus Measures: Strategies and Technologies for Achieving Phosphorus Security". Agronomy. 3 (1): 86–116. doi:10.3390/agronomy3010086.
- ^ a b c Sharma, Seema B.; Sayyed, Riyaz Z.; Trivedi, Mrugesh H.; Gobi, Thivakaran A. (2013-10-31). "Phosphate solubilizing microbes: sustainable approach for managing phosphorus deficiency in agricultural soils". SpringerPlus. 2: 587. doi:10.1186/2193-1801-2-587. PMC 4320215. PMID 25674415.
- ^ a b Bhattacharya, Amitav (2019). "Chapter 5 - Changing Environmental Condition and Phosphorus-Use Efficiency in Plants". Changing Climate and Resource Use Efficiency in Plants. Academic Press. pp. 241–305. doi:10.1016/B978-0-12-816209-5.00005-2. ISBN 978-0-12-816209-5.
- ^ Green, B.W. (2015). "2 - Fertilizers in aquaculture". Feed and Feeding Practices in Aquaculture. Woodhead Publishing. pp. 27–52. doi:10.1016/B978-0-08-100506-4.00002-7. ISBN 978-0-08-100506-4.
- ^ IFDC.org - IFDC Report Indicates Adequate Phosphorus Resources, Sep-2010
- ^ Jasinski, SM (January 2017). Mineral Commodity Summaries (PDF). U.S. Geological Survey.
- ^ Van Kauwenbergh, Steven J. (2010). World Phosphate Rock Reserves and Resources. Muscle Shoals, AL, USA: International Fertilizer Development Center (IFDC). p. 60. ISBN 978-0-88090-167-3. Retrieved 7 April 2016.
- ^ Edixhoven, J.D.; Gupta, J.; Savenije, H.H.G. (2013). "Recent revisions of phosphate rock reserves and resources: reassuring or misleading? An in-depth literature review of global estimates of phosphate rock reserves and resources". Earth System Dynamics. 5 (2): 491–507. Bibcode:2014ESD.....5..491E. doi:10.5194/esd-5-491-2014.
- ^ Cordell, Dana (2009). "The story of phosphorus: Global food security and food for thought". Global Environmental Change. 19 (2): 292–305. doi:10.1016/j.gloenvcha.2008.10.009.
- ^ Cordell, Dana & Stuart White 2011. Review: Peak Phosphorus: Clarifying the Key Issues of a Vigorous Debate about Long-Term Phosphorus Security. Sustainability 2011, 3(10), 2027-2049; doi:10.3390/su3102027, http://www.mdpi.com/2071-1050/3/10/2027/htm
- ^ Summary for Policymakers. In: Climate Change and Land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems (PDF). Intergovernmental Panel on Climate Change. 2019. p. 5. Retrieved 30 January 2020.
- ^ "CEP Factsheet". Musokotwane Environment Resource Centre for Southern Africa. Archived from the original on 2013-02-13.
- ^ a b Powlson, D.S.; Gregory, P.J.; Whalley, W.R.; Quinton, J.N.; Hopkins, D.W.; Whitmore, A.P.; Hirsch, P.R.; Goulding, K.W.T. (2011-01-01). "Soil management in relation to sustainable agriculture and ecosystem services". Food Policy. 36: S72–S87. doi:10.1016/j.foodpol.2010.11.025.
- ^ Principles of sustainable soil management in agroecosystems. Lal, R., Stewart, B. A. (Bobby Alton), 1932-. CRC Press. 2013. ISBN 978-1466513471. OCLC 768171461.CS1 maint: others (link)
Its position is a matter of convention. By international agreement at the International Meridian Conference in 1884, it was placed in the meridian plane of the Greenwich observatory in London (more precisely, of the transit instrument at the Royal Greenwich Observatory), and it is therefore often referred to as the Greenwich meridian. Until then, various prime meridians were in use.
By convention, longitude is counted positive from the prime meridian toward the east (i.e. in the sense of the earth's rotation), from 0° to +180°, and negative toward the west, from 0° to −180°. More common, however, are eastern longitude (0–180° east, algebraically positive) and western longitude (0–180° west, algebraically negative), abbreviated O or E for "east" and W for "west". The symbol E (English "East", French "Est") is also used in German-speaking areas to avoid confusion between the letter O and the digit 0. In the western hemisphere (especially in the USA), contrary to the international norm, a westward count from 0° to 360° is also used.
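As a concrete illustration of these conventions, a short sketch in Python (the function names and sample longitudes are made up purely for demonstration, not taken from any standard):

```python
# Converting a signed longitude (east positive, west negative) to the
# hemisphere notation and to the westward 0°-360° count described above.
def to_east_west(lon_signed: float) -> str:
    """Format a signed longitude in degrees as, e.g., '77.04° W'."""
    hemisphere = "E" if lon_signed >= 0 else "W"
    return f"{abs(lon_signed):.2f}° {hemisphere}"

def to_westward_0_360(lon_signed: float) -> float:
    """Westward count from 0° to 360°, as sometimes used in the USA."""
    return (-lon_signed) % 360

if __name__ == "__main__":
    for lon in (2.35, -77.04, 180.0):   # illustrative values only
        print(to_east_west(lon), "->", to_westward_0_360(lon))
```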
The length of the prime meridian is half the circumference of the earth's international reference ellipsoid measured over the poles, i.e. about 20,003.9 km. Together with its opposite, the so-called antimeridian at 180° (written without E or W), which crosses Wrangel Island, the prime meridian forms a complete great circle around the earth. The International Date Line follows this 180° meridian in part (near the poles and north of the equator); it deviates from the 180th degree of longitude in the Bering Strait, near the Aleutian Islands (Alaska), in Kiribati, near the Fiji Islands, at Tuvalu, and near Tonga and the Kermadec and Chatham Islands belonging to New Zealand.
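The 20,003.9 km figure can be checked with a quick calculation. The sketch below uses the WGS84 semi-axes and Ramanujan's approximation for the perimeter of an ellipse; this particular way of checking is an assumption of the sketch, not something prescribed by the text.

```python
# Half the polar circumference of the WGS84 ellipsoid, i.e. the length of one
# meridian from pole to pole, using Ramanujan's ellipse-perimeter approximation.
import math

a = 6_378_137.0        # WGS84 equatorial semi-axis, metres
b = 6_356_752.314245   # WGS84 polar semi-axis, metres

perimeter = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))
meridian_length_km = perimeter / 2 / 1000

print(f"{meridian_length_km:.1f} km")   # ≈ 20003.9 km, matching the figure above
```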
Before an international prime meridian was established in 1884, almost every European country had its own, usually defined by the longitude of its capital or of the capital's observatory. With increasing international travel, especially by rail, it became necessary to standardize these systems: large-scale timetabled traffic required a uniform time for smooth and safe operation, in place of the local solar time that had previously been accurate enough but differed from city to city. It also became increasingly important to have an accurate international time (universal time) available; it is defined as the mean local time of the prime meridian.
The International Meridian Conference, Washington 1884
At the International Meridian Conference in Washington, DC, attended by representatives of 25 nations, the meridian running through Greenwich was adopted on October 13, 1884 as the basis of the international coordinate system.
As a possible international prime meridian, five main options were discussed at the Washington Conference:
- the Paris meridian of the Paris Observatory: 2° 20′ 14.025″ east of Greenwich
- the Ferro meridian, known since antiquity, through the Canary Island of Ferro (today's name: El Hierro), at about 17° 40′ west
- a possible prime meridian through the Azores at about 28° 0′ west
- a possible prime meridian in the Pacific Ocean at today's 180° (the arc opposite the Greenwich meridian, corresponding roughly to today's date line)
- the Greenwich meridian, the one most commonly used on the modern nautical charts of that time (this widespread use was the decisive factor in its choice).
In the course of the conference it soon became apparent that the Paris prime meridian would not find a majority. The old Ferro meridian was regarded as a covert French proposal (a "French submarine"), because it had been fixed a few decades earlier at exactly 20° west of Paris. The Azores and the Pacific option near the Bering Strait were ruled out mainly because they had no observatory and no telegraph connection to the rest of the world at that time.
So the Greenwich meridian finally prevailed as the international prime meridian by a large majority, with France abstaining.
Corrected position of the prime meridian
Visitors to the Greenwich Observatory are often astonished that their GPS receivers do not show exactly zero longitude at the meridian line marked there: the modern reference meridian runs about 102 meters east of the historic meridian through Greenwich Park. The cause of this difference is a measurement error at the time of the 1884 agreement, introduced by local gravity anomalies that distorted the astronomical determination of the reference points. Those reference points, which are still in use, and the center of the earth lie in a meridian plane that cuts the earth's surface a little further to the east.
Moreover, the currently valid prime meridian of the WGS84 and ETRS89/GRS80 reference systems is no longer firmly tied to the earth's surface but is part of a modelled geodetic datum, since meridians cannot be fixed to the surface in the face of tidal forces, polar motion, and continental drift.
Course of the prime meridian
The Greenwich Prime Meridian crosses eight present-day states on land and has the following length in the individual countries:
- United Kingdom (319 km)
- France (735 km)
- Spain (336 km)
- Algeria (1,555 km)
- Mali (760 km)
- Burkina Faso (430 km)
- Togo (39 km)
- Ghana (569 km)
- as well as, in the Antarctic, Neuschwabenland (New Swabia) and, still further south, Queen Maud Land, which belong to no state. The length of the prime meridian on the mainland of Antarctica is 2,331 km.
The prime meridian also crosses the following waters:
- Arctic Ocean (3,217 km, from 90° north to 61° north)
- North Sea (977 km)
- Mediterranean Sea (424 km)
- Volta Reservoir in Ghana (78 km)
- Atlantic Ocean including Southern Ocean (8,278 km)
Prime meridian and time measurement
The mean solar time at the prime meridian became the basis of universal time (GMT, Greenwich Mean Time), which was replaced by Coordinated Universal Time (UTC) only in 1972. UTC is no longer based on local time; as its name suggests, it coordinates continuous atomic time with astronomically measured Universal Time, which follows the irregularities of the earth's rotation and hence the actual position of the sun, the adjustment being made via leap seconds. At the prime meridian, the difference between mean local time and UTC is therefore not precisely zero today.
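One contribution to that residual difference is the roughly 102-metre eastward offset of the modern reference meridian described above. A back-of-the-envelope sketch, assuming a spherical earth and the approximate latitude of Greenwich, converts the offset into seconds of mean solar time:

```python
# Time offset corresponding to the ~102 m eastward shift of the reference
# meridian at Greenwich's latitude (spherical-earth approximation).
import math

R = 6_378_137.0                 # earth radius used here, metres
lat = math.radians(51.4779)     # approximate latitude of the Greenwich observatory
offset_m = 102.0                # eastward displacement of the modern meridian

metres_per_degree_lon = math.pi / 180 * R * math.cos(lat)
offset_deg = offset_m / metres_per_degree_lon

# The earth turns 360° in 86,400 s of mean solar time, i.e. 240 s per degree.
print(f"{offset_deg * 240:.2f} s of mean solar time")   # ≈ 0.35 s
```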
Historical concepts of the reference meridian for survey networks
- The first division of the world into longitudes and latitudes, by Hipparchus of Nicaea (190–120 BC), was referred to Rhodes, his astronomical observing site.
- Around AD 150, Claudius Ptolemy moved it to the western edge of the known world: the Isla del Meridiano (El Hierro, or Ferro, the westernmost of the Canary Islands, the ancient Hesperides), thus creating the Ferro meridian, which remained in use well into the 20th century.
- Arab astronomers first placed the prime meridian through the western tip of Africa, and then, in 1075, at 10° west of Baghdad.
- Thereafter it was relocated repeatedly, for example after the discovery of the Azores in 1427 and of America in 1492.
- In April 1634, a congress of scholars from the seafaring nations confirmed the island of Ferro as the prime meridian.
- In 1708 the geographer and polymath Johann Gottfried Gregorii, alias Melissantes, proposed unifying the prime meridian internationally by means of a multilateral political agreement.
- From 1718 the Paris meridian was used in France, and from 1738 the Greenwich meridian in England.
- In the 19th century the cartographer Philippe Vandermaelen used a prime meridian through Brussels, the seat of his publishing house, for example in the first world atlas on a uniform scale, the Atlas universel de geographie physique, politique, statistique et mineralogique.
- The Rome meridian, running through the Torre des Meridiano on Monte Mario, was used for Italian military maps from 1870 to 1974; it passes through Rome as well as the Vatican.
- Germany adopted the Greenwich meridian in 1885, France only around 1900; Austria-Hungary used it until 1918 in parallel with the old Ferro meridian.
- Since the 1980s, the prime meridian, like 0° latitude (the equator) and the geographic north and south poles, has no longer been fixed to the earth's surface but is aligned with a reference ellipsoid fitted to the earth's shape, thus avoiding the effects of continental drift and tidal forces.
Prime meridians of other celestial bodies
- The heliographic prime meridian of the sun, also known as the Carrington prime meridian, was fixed as the solar central meridian at 12 noon universal time on January 1, 1854. It is used to determine the heliographic coordinates of sunspots.
- Moon: the selenographic prime meridian intersects the lunar equator at the center of the moon's earth-facing side, the point that, averaged over an 18½-year period, points toward the center of the earth. It lies near the crater Bruce in Sinus Medii.
- The prime meridian of the planet Venus passes through the central peak of the crater Ariadne.
- The prime meridian of the planet Mars is defined by the small crater Airy-0, named after the British astronomer George Biddell Airy, just south of the Martian equator (see also areography).
- The prime meridian of the dwarf planet Ceres is defined by the small impact crater Kait, named after the Hattic goddess of fertility.
- On the gas giants Jupiter and Saturn there is no fixed surface feature, comparable to those on earth, to which a prime meridian could be tied; different reference systems are therefore used (atmosphere, magnetosphere).
- Like the earth's moon, Saturn's moon Titan has a bound (tidally locked) rotation, always turning the same side toward the planet; the prime meridian passes through the center of this side.
- The prime meridian of the dwarf planet Pluto faces its largest moon Charon, and conversely Charon's prime meridian faces Pluto; they are the only known bodies in the solar system in a mutual tidal lock.
- Royal Observatory, Greenwich (now part of the National Maritime Museum)
- Proceedings of the International Meridian Conference, Washington, 1884 (in English)
- Prime Meridian in Greenwich
- Why, according to a GPS receiver, the prime meridian lies elsewhere
- A. Schödlbauer: Geodetic Astronomy. de Gruyter, 2000, p. 3: "The geographical longitude L is the directional angle that the meridian plane of P forms with the meridian plane [...] of Greenwich. By agreement, the counting of this angle begins at the reference meridian and is positive toward the east." (Whether the west is counted negatively or as more than 180° makes no computational difference.)
- Motion (minutes of the meeting, p. 98, below) and vote (minutes of the meeting, p. 99) on October 13, 1884
- Spektrum.de: Why the prime meridian has shifted
- Stephen Malys, John H. Seago, Nikolaos K. Pavlis, P. Kenneth Seidelmann, George H. Kaplan: Why the Greenwich meridian moved. In: Journal of Geodesy 89 (2015), no. 8. doi:10.1007/s00190-015-0844-y
- Melissantes: Geographia novissima. 1st chapter. Frankfurt am Main and Leipzig 1708, pp. 38–39.
- Gerald Sammet: The world of maps: Historical and modern cartography in dialogue (= Atlantica: Earth experience). 1st edition. Bertelsmann Lexikon Institut, Gütersloh 2008, ISBN 978-3-577-07251-9, p. 259 (limited preview in Google Book Search, accessed August 2, 2018)
- Observation and Practice. In: Günter D. Roth (ed.): Handbook for Star Friends. Vol. 2, p. 61 (limited preview in Google Book Search, accessed August 5, 2018)
- See Gazetteer of Planetary Nomenclature, Planetary Names: Crater, craters: Kait on Ceres.
Lecture Notes 2 - Bird Flight I
Origin of Flight
Exactly how birds acquired the ability to fly has baffled scientists for years. Archaeopteryx provided a starting point for speculation. Built like a dinosaur, but with wings, scientists guessed at how a hypothetical ancestor might have taken flight. Some scientists support the arboreal hypothesis (e.g., Feduccia 1996) and suggest that the ancestors of Archaeopteryx lived in trees and glided into flapping flight (Figure to the right). But others argue that the claws of Archaeopteryx weren't suited to climbing. So, others support the cursorial hypothesis (e.g., Burgers and Chiappe 1999) and suggest that these ancestors used their long, powerful legs to run fast with their arms outstretched, and were at some point lifted up by air currents and carried into flapping flight (Figure to the bottom right).
Studying living animals can throw light on their evolutionary past. Ken Dial (2003) of the Flight Lab at the University of Montana noticed the ability of gamebird chicks to escape danger by scrambling up vertical surfaces. The chicks first run very fast, flapping their immature, partially feathered wings, frantically creating enough momentum to run up a vertical surface to safety. Could this survival instinct be the origin of flight?
And finally, James Carey, a UC Davis demographer and ecologist, has proposed that the evolution of bird flight is linked to parental care (Carey and Adams 2001).
Whatever the origins, dinosaurs and, later, birds eventually took to the air.
Images & text used with permission.
Dinosaurs' flapping led to flight? The wing-assisted incline running hypothesis -- The feathered forelimbs of small, two-legged dinosaurs may have helped them run up hills or other inclines to escape predators. This half running, half flapping may have evolved into an ability to fly. Dial (2003) reported findings suggesting that the ability to fly evolved gradually. Feathers may have first protected animals from cold & wet weather, then been used out of necessity when something with big teeth was chasing them. Even before their wings develop enough to fly, some living birds use them to improve traction and gain speed. Dial studied birds, like partridges, capable of only limited flight. Energetically, "It's a lot cheaper to run than fly," Dial said. So these baby birds, with big feet & powerful legs, use them in combination with their wings, first to stay balanced and grounded, then to take on steeper and steeper inclines. Using this "wing assisted incline running," Chukar Partridges can negotiate 50 degree inclines right after hatching, 60 degree slopes at 4 days old, and at 20 days, can perform a vertical ascent. "The wings help them stick to the ground," said Dial. The wings only come into play on steep angles because at about a 50 - 60 degree incline the birds start slipping. Then they begin a head to tail movement, like a reptile, that pushes them to the ground to enhance traction. "They use their wings like spoilers on a race car, to give their feet better traction," he said. Use of this wing-assisted running doesn't stop when the birds are old enough to fly. Adult birds often choose the running and flapping option instead of flying because it is more energy efficient. - Written by Marsha Walton, CNN
Chukar Partridge flapping & climbing
Jesus-Christ Hypothesis. Because all fossils of Archaeopteryx come from marine sediments, suggesting a coral-reef setting, Videler (2005) suggests that, like the Jesus Christ lizards [Basiliscus spp.; (a)], Archaeopteryx and its ancestors were 'Jesus-Christ dinosaurs' running over water to escape from predators and travel between islands in the coral lagoons of central Europe 150 million years ago. At first, both thrust and weight support were provided by the feet slapping against the water. Later, the wings gradually took over some of the weight support, with every step toward increased lift providing a fitness advantage.
Biplane wing planform and flight performance of a feathered dinosaur (Chatterjee and Templin 2007) -- Microraptor gui, a four-winged dromaeosaur from the Early Cretaceous of China, provides strong evidence for an arboreal-gliding origin of avian flight. It possessed asymmetric flight feathers not only on the manus but also on the pes. A previously published reconstruction shows that the hindwing of Microraptor supported by a laterally extended leg would have formed a second pair of wings in tetrapteryx fashion. However, this wing design conflicts with known theropod limb joints that entail a parasagittal posture of the hindlimb. Here, we offer an alternative planform of the hindwing of Microraptor that is concordant with its feather orientation for producing lift and normal theropod hindlimb posture. In this reconstruction, the wings of Microraptor could have resembled a staggered biplane configuration during flight, where the forewing formed the dorsal wing and the metatarsal wing formed the ventral one. The contour feathers on the tibia were positioned posteriorly, oriented in a vertical plane for streamlining that would reduce the drag considerably. Leg feathers are present in many fossil dromaeosaurs, early birds, and living raptors, and they play an important role in flight during catching and carrying prey. A computer simulation of the flight performance of Microraptor suggests that its biplane wings were adapted for undulatory "phugoid" gliding (see below) between trees, where the horizontal feathered tail offered additional lift and stability and controlled pitch. Like the Wright 1903 Flyer, Microraptor, a gliding relative of early birds, took to the air with two sets of wings.
Phugoid gliding is a type of flight where a plane (or Microraptor gui) pitches up and climbs, and then pitches down and descends, accompanied by speeding up and slowing down as it goes "uphill" and "downhill" (Source: www.centennialofflight.gov).
The Four-winged Dinosaur (NOVA)
Theropod size and avian flight -- An 80-million-year-old dinosaur fossil unearthed in the Gobi Desert of Mongolia demonstrates that miniaturization, long thought to be a hallmark of bird origins and a necessary precursor of flight, occurred progressively in primitive dinosaurs. "This study alters our understanding of the evolution of birds by suggesting that flight is a 'spin-off' adaptation of a much earlier trend toward miniaturization in certain dinosaur lineages," said H. R. Lane (NSF). "Paleontologists thought that miniaturization occurred in the earliest birds, which then facilitated the origin of flight," said Alan Turner (American Museum of Natural History). "Now the evidence shows that this decrease in body size occurred well before the origin of birds and that the dinosaur ancestors of birds were, in a sense, pre-adapted for flight." Because most dinosaurs were too massive to fly, miniaturization is considered crucial to the origin of flight. To date, fossil evidence of miniaturization and other characteristics leading to flight has been sparse. While other dinosaurs of the Cretaceous Period were increasing in size, this newly discovered dinosaur (Mahakala omnogovae) represented a step towards the miniaturization necessary for flight. Other groups that evolved flight, such as pterosaurs and bats, all evolved from small ancestors. With the discovery of Mahakala, Turner et al. (2007) showed that this miniaturization occurred much earlier. Mahakala was nearly full-grown when it died, measuring less than two feet in length and weighing about 24 ounces. In the broader context of the dinosaur family tree, Mahakala shows that dinosaurs' size decreased progressively as they evolved toward birds. "Many of the animals that were thought to look like giant lizards only a few years ago are now known to have been feathered, to have brooded their nests, to have been active, and to have had many other defining bird characteristics, like wishbones and three forward-facing toes," said Mark Norell (American Museum of Natural History). "We can now add that the precursors of birds were also small, primitive members of a lineage that later grew much larger--long after their divergence from the evolutionary stem leading to birds."
Phylogeny and body size change within paravian theropods. A temporally calibrated cladogram depicting the phylogenetic position of Mahakala and paravian body size through time and across phylogeny is shown. Silhouettes are to scale, illustrating the relative magnitude of body size differences. Left-facing silhouettes near open circles show reconstructed ancestral body sizes. Ancestral paravian body size is estimated to be 600 to 700 g and 64 to 70 cm long. The ancestral deinonychosaur, troodontid, and dromaeosaurid body size is estimated at 700 g. Large numbers (1, 2, 3, and 4) indicate the four major body increase trends in Deinonychosauria. Ma, Maastrichtian; Ca, Campanian; Sa, Santonian; Co, Coniacian; Tu, Turonian; Ce, Cenomanian; Ab, Albian; Ap, Aptian; Bar, Barremian; Hau, Hauterivian; Va, Valanginian; Ber, Berriasian; Ti, Tithonian; Ki, Kimmeridgian. Ma, million years ago (From: Turner et al. 2007).
The Berlin Archaeopteryx. In the earliest cast of the main slab (A), long hindlimb feathers are visible (B) (Longrich 2006).
Berlin Archaeopteryx. A, Plumage of the right hindlimb. B, Schematic drawing. Abbreviations: cov, covert feathers; prt, pretibial feathers; pst, shafts of post-tibial feathers; pub, pubis; ti, tibia (Longrich 2006).
Berlin Archaeopteryx. A, Reconstruction. B, Life restoration. The hindlimbs have been abducted to 90° so as to show the area of the leg plumage. The area of the hindlimbs was measured distal to the body contour and proximal to the ankle (Longrich 2006).
Case closed?? Support for the arboreal hypothesis -- Feathers cover the legs of the Berlin specimen of Archaeopteryx lithographica, extending from the cranial surface of the tibia and the caudal margins of both tibia and femur. These feathers exhibit features of flight feathers rather than contour feathers, including vane asymmetry, curved shafts, and a self-stabilizing overlap pattern. Many of these features facilitate lift generation in the wings and tail of birds, suggesting that the hindlimbs acted as airfoils. Longrich (2006) presented a new reconstruction of Archaeopteryx where the hindlimbs formed approximately 12% of total airfoil area. Depending upon their orientation, the hindlimbs could have reduced stall speed by up to 6% and turning radius by up to 12%. The presence of the “four-winged” planform in both Archaeopteryx and basal Dromaeosauridae indicates that their common ancestor used both forelimbs and hindlimbs to generate lift. The presence of flight feathers on the hindlimbs is inconsistent with the cursorial hypothesis, the Jesus-Christ hypothesis, and the wing-assisted incline running hypothesis; in these scenarios, such a specialization would serve no purpose, and would impede locomotion. The evidence presented by Longrich (2006), therefore, supports an arboreal origin of avian flight, and suggests that arboreal parachuting and gliding preceded the evolution of avian flight.
Evolution of flight: a summary -- Although the timing remains unclear, the first step toward the evolution of flight involved a reduction in size, with the ancestors of birds decreasing in size during the Triassic, well before the evolution of birds and flight. Endothermy must have evolved sometime between the early Late Triassic, when dinosaurs first appeared in the fossil record, and the evolution of modern birds, whose ancestors first appeared in the early Late Jurassic. More specifically, coelurosaurs, a diverse group of dinosaurs that likely included the ancestors of birds, exhibited substantial and sustained morphological transformation, and this rapid evolution of skeletal diversity may indicate rapidly changing selection pressures as a result of radiation into new ecological niches. The evolution of endothermy may have been more likely in lineages, such as the smaller coelurosaurs, exposed to new selection pressures rather than in more conservative, larger-bodied, lineages (Schluter 2001). For example, the body temperatures of small dinosaurs (< 100 kg) that lived at mid-latitudes (45-55°) or higher would have been well below 30°C during winter if they were crocodile-like ectotherms (Seebacher 2003). Selection pressures for morphological and physiological thermoregulatory adaptations would likely have been strongest in such dinosaurs. Of course, without insulation, the thermoregulatory advantages gained from elevated resting metabolic rates would be limited. Most skin impressions from dinosaurs indicate the presence of naked skin (Sumida and Brochu 2000), except for integumentary structures in coelurosaurs that may have afforded thermal insulation (Chen et al. 1998). Although other dinosaurs may have possessed integumentary structures with insulatory qualities, current evidence suggests that these evolved only in coelurosaurs. The earliest known feathers stem from the Late Jurassic, so if those feathers possessed insulating qualities, true endothermy may have evolved sometime after that (Seebacher 2003).
By the time Archaeopteryx arrived on the scene, therefore, birds obviously had the basic features needed for flight – relatively small with feathers and, if not truly endothermic, then, at minimum, an elevated metabolism. The question then is how the ancestors of Archaeopteryx, with the necessary characteristics, first took to the air. Several hypotheses have been proposed. Primary among them are the arboreal hypothesis (e.g., Feduccia 1996), with the ancestors of Archaeopteryx living in trees (or at least climbing into trees on a regular basis) and initially gliding before developing flapping flight, and the cursorial hypothesis (e.g., Burgers and Chiappe 1999), with these ancestors using long, powerful legs to run fast with their arms (wings) outstretched and, eventually, developing sufficient lift to take flight.
Two additional hypotheses include the WAIR (wing-assisted incline running) hypothesis and the ‘Jesus-Christ’ hypothesis. Dial (2003) noticed the ability of young Chukars to escape danger by scrambling up inclined surfaces. The chicks first run very fast, flapping their rather small, partially feathered wings to create enough momentum to run up an inclined surface to safety. The ancestors of birds may have used proto-wings in a similar fashion, with wings eventually evolving to the point of permitting not only running up inclined surfaces but, for an animal running across the ground, flight. Because all fossils of Archaeopteryx come from marine sediments, suggesting a coral-reef setting, Videler (2005) suggested that, like the Jesus Christ lizards (Basiliscus spp.), Archaeopteryx and its ancestors were 'Jesus-Christ dinosaurs' running over water to escape from predators and travel between islands in the coral lagoons of central Europe 150 million years ago. At first, both thrust and weight support were provided by the feet slapping against the water. Later, the wings gradually took over some of the weight support, with every step toward increased lift providing a fitness advantage.
There is currently no clear consensus in support of any of these hypotheses for the origin of bird flight. However, a four-winged dromaeosaur (Microraptor gui) from the early Cretaceous of China provides evidence for an arboreal-gliding origin of avian flight. It had asymmetric flight feathers not only on the forelimb, but on the hindlimb as well. Chatterjee and Templin (2007) proposed that the wings of Microraptor could have resembled a staggered biplane configuration during flight, where the forewing formed the dorsal wing and the hindwing formed the ventral one. The contour feathers on the tibia of the hindlimb were positioned posteriorly, oriented in a vertical plane for streamlining that would reduce the drag considerably. Leg feathers are present in many fossil dromaeosaurs, early birds, and living raptors, and they play an important role in flight during catching and carrying prey. A computer simulation of the flight performance of Microraptor suggested that its biplane wings were adapted for undulatory "phugoid" gliding between trees, where the horizontal feathered tail offered additional lift and stability and controlled pitch. Thus, Microraptor, a gliding relative of early birds, apparently took to the air with two sets of wings.
In further support of the arboreal hypothesis, feathers also cover the legs of the Berlin specimen of Archaeopteryx lithographica, extending from the cranial surface of the tibia and the caudal margins of both tibia and femur. These feathers exhibit features of flight feathers rather than contour feathers, including vane asymmetry, curved shafts, and a self-stabilizing overlap pattern. Many of these features facilitate lift generation in the wings and tail of birds, suggesting that the hindlimbs acted as airfoils. Longrich (2006) presented a new reconstruction of Archaeopteryx where the hindlimbs formed approximately 12% of total airfoil area. Depending upon their orientation, the hindlimbs could have reduced stall speed by up to 6% and turning radius by up to 12%. The presence of “four-wings” in both Archaeopteryx and basal Dromaeosauridae suggests that their common ancestor used both forelimbs and hindlimbs to generate lift. In addition, the presence of flight feathers on the hindlimbs is inconsistent with the cursorial hypothesis and the Jesus-Christ hypothesis because flight feathers on the hindlimbs would seemingly limit running speed. The evidence presented by Longrich (2006), therefore, supports an arboreal origin of avian flight, and suggests that arboreal parachuting and gliding likely preceded the evolution of avian flight just as it apparently did in the evolution of flight in bats (Speakman 2001) and pterosaurs (Naish and Martill 2003).
Archaeopteryx (Source: Nick Longrich)
Although the presence of flight feathers on the hindlimbs would seem to support the arboreal hypothesis for the origin of flight, such feathers do not necessarily indicate that Archaeopteryx and its immediate ancestors were strictly tree-dwellers. Many present-day birds spend time both in trees and on the ground and Archaeopteryx likely did the same. With hindlimb feathers, as well as flight muscles less developed than those of present day birds, Archaeopteryx may have found it difficult, if not impossible, to take off directly from the ground. So, to take flight, Archaeopteryx and its ancestors likely sought elevated perches like trees for ‘launching.’ In doing so, they may very well have used wing-assisted incline running just like some present-day birds. For example, several petrels are known to climb trees to launch themselves into the air (del Hoyo et al. 1992), and, for some seabirds, the presence of ‘take-off trees’ is important in selection of breeding habitat (Sullivan and Wilson 2001).
Neurological evidence supports the idea that Archaeopteryx was a rather accomplished flyer. Reconstruction of the braincase and inner ear of Archaeopteryx revealed strong similarities to present-day birds, with areas of the brain involved in hearing and vision enlarged and an enlarged forebrain that would enhance the rapid integration of sensory information required in a flying animal (Alonso et al. 2004).
The Life of Birds by David Attenborough - The Mastery of Flight
Flight requires lift, which occurs because wings move air downwards. Lift is created only when air strikes a wing at an angle (i.e., the angle of attack). When the leading edge of a wing is higher than the trailing edge, the bottom of the wing 'pushes' the air forward and creates an area of high pressure below and ahead of the wing. At the same time, air is deflected downward so, because of Newton's Third Law of Motion (for every action there is an equal and opposite reaction), the wing is deflected upward. Both the upper and lower surfaces of the wing deflect the air. The upper surface deflects air down because the airflow “sticks” to the wing surface and follows the tilted wing (the “Coanda effect”).
Because of inertia, air moving over the top of the wing tends to keep moving in a straight line while, simultaneously, atmospheric pressure tends to force air against the top of the wing. The inertia, however, keeps the air moving over the wing from 'pushing' against the top of the wing with as much force as it would if the wing wasn't moving. This creates an area of lower pressure above the wing. Because air tends to move from areas of high pressure to areas of low pressure, air tends to move from the high pressure area below and ahead of the wing to the lower pressure area above and behind the wing. This air moves, therefore, toward the trailing edge of the wing, or the same direction as the airflow created by the wing's motion. As a result, air flows faster over the top of the wing. Because air under the wing is dragged slightly in the direction of travel, it moves slower than does the air moving over the top of the wing. Thus, air is flowing slower beneath the bottom of the wing. The faster-moving air going over the top of the wing exerts less pressure than the slower-moving air under the wing and, as a result, the wing is pushed upwards by the difference in pressure between the top and the bottom (the Bernoulli effect). So, both the development of low pressure above the wing (Bernoulli's Principle) and the wing's reaction to the deflected air underneath it (Newton's third Law) contribute to the total lift force generated.
Note that air, both above AND below the wing, is deflected downward.
Source: An excellent article about lift ("Lift doesn't suck") by Roger Long.
Why does the slower moving air generate more pressure against the wing than the faster moving air? In calm air, the molecules are moving randomly in all directions. However, when air begins to move, most (but not all) molecules are moving in the same direction. The faster the air moves, the greater the number of air molecules moving in the same direction. So, air moving a bit slower will have more molecules moving in other directions. In the case of a wing, because air under the wing is moving a bit slower than air over the wing, more air molecules will be striking the bottom of the wing than will be striking the top of the wing.
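The qualitative account above can be tied to numbers with the standard aerodynamic lift equation, L = ½ ρ v² S C_L, where ρ is air density, v airspeed, S wing area, and C_L the lift coefficient (which rises with angle of attack up to the stall). The equation is standard aerodynamics rather than something derived in these notes, and the pigeon-sized values in the sketch below are assumed purely for illustration.

```python
# Standard lift equation, with illustrative (assumed) values for a small bird.
def lift_newtons(air_density, airspeed, wing_area, lift_coefficient):
    """Aerodynamic lift in newtons: L = 0.5 * rho * v^2 * S * C_L."""
    return 0.5 * air_density * airspeed**2 * wing_area * lift_coefficient

rho = 1.225   # kg/m^3, sea-level air
v = 10.0      # m/s, assumed cruising speed
S = 0.06      # m^2, assumed total wing area
c_l = 1.0     # assumed lift coefficient; it grows with angle of attack until stall

L = lift_newtons(rho, v, S, c_l)
print(f"lift ≈ {L:.1f} N, enough to support about {L / 9.81:.2f} kg")
```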
How does a wing work?
Clear example of how wings deflect air downward. Notice the trough formed in the clouds.
How do airfoil shape, camber, and angle of attack influence lift?
Lift is generated in two ways. (a) Air is deflected downward when there is a positive angle of attack (Newtonian lift). (b) When air moves faster over the top of the wing than the bottom, the pressure difference (lower on the top of the wing) generates Bernoulli lift (Figure from Bajec and Heppner 2009).
When the curvature over the top becomes greater by increasing the angle of attack (below), the amount of lift generated increases because the force with which the wing is pushed upward increases. Eventually, however, if the angle of attack becomes too great, the flow separates off the wing and less lift is generated. The result is stalling. Birds also tend to stall at low speeds because slower moving air may not move smoothly over the wing.
If the angle of attack is too great, air flow over the top of the wing may become more turbulent & the result is less lift.
Flying from an eagle's point of view
Angle of attack decreases with increasing speed. Angle of attack during two wingbeats of a Ringed Turtle-Dove (Streptopelia risoria) flying at 1 meter/sec (A), 5 meters/sec (B), 9 meters/sec (C), and 17 meters/sec (D). Angle of attack at low speeds peaked at 52 degrees (proximal wing) and 43 degrees (distal wing), much greater than those commonly used by aircraft (0-15 degrees). At faster speeds, mean angle of attack decreased to 9-14 degrees (proximal wing) and -5-14 degrees (distal wing), within the range employed by aircraft. Shaded areas indicate downstroke; solid line = distal wing & dashed line = proximal wing (Hedrick et al. 2002).
At low speeds (such as during take-off & landing), birds can maintain smooth air flow over the wing (and, therefore, maintain lift) by using the alula (also called the bastard wing). The alula is formed by feathers (usually 3 or 4) attached to the first digit.
When these feathers are elevated (above right & below right), they keep air moving smoothly over the wing & help a bird maintain lift.
At increasing angles of attack, an eddy starts to propagate from the trailing edge towards the leading edge of the wing. As a result, air flowing over the top of the wing separates from the upper surface and lift is lost. However, when coverts are lifted upward by the eddy, they prevent the spread of the eddy and work as 'eddy-flaps.'
The 'covert eddy-flaps', by preventing the spread of the eddy toward the leading edge of the wing, help maintain lift (i.e., prevent stalling) at high angles of attack, e.g., when taking off or landing.
Eoalulavis hoyasi. Top, fluorescence-induced ultraviolet photo of the specimen before preparation. Bottom, reconstruction. A - alula, PR - primary remiges, and SR - secondary remiges (Sanz and Ortega 2002).
The fossilized remains of a tiny bird provide evidence that birds flew as nimbly 115 million years ago as their descendants do today. The fossilized bird, Eoalulavis hoyasi, was found in a limestone quarry in Spain (Sanz et al. 1996). About the size of a goldfinch, the bird had an alula, or bastard wing, that would have helped it stay aloft at slow speeds. Eoalulavis is the most primitive bird known with an alula.
Archaeopteryx probably flapped and glided, but did not have an alula. Eoalulavis provides evidence that by 30 million years after Archaeopteryx, at least one group of early birds had developed the alula.
Eoalulavis hoyasi, which means "dawn bird with a bastard wing from Las Hoyas," was discovered at a site where a freshwater lake existed millions of years ago. The bird may have hunted by wading in shallow water the way plovers and other shorebirds do today.
To find out if the acrocoracohumeral ligament, which stabilizes the shoulder in modern birds, played the same role in primitive animals, Baier and colleagues looked to the alligator. Alligators are close relatives of birds and both are archosaurs, the “ruling reptiles” that appeared on the planet some 250 million years ago and evolved into the dinosaurs that dominated during the Mesozoic Era. So to understand the sweep of evolution, the alligator was a great starting place. In the lab, three alligators were put on motorized treadmills and X-ray videos were made. The videos were used to make a 3D computer animation that showed the precise positioning of the shoulder as the animal walked. They found that alligators use muscles – not ligaments – to do the hard work of supporting the shoulder. Then Baier studied the skeleton of Archaeopteryx lithographica, and even traveled to Beijing to examine the fossilized remains of Confuciusornis, Sinornithoides youngi and Sinornithosaurus millenii, close relatives of modern birds recently discovered in China.
If the acrocoracohumeral ligament was critical to the origin of flight, Baier expected to find evidence of it in Archaeopteryx. Surprisingly, however, the new ligament-based force balance system appears to have evolved more gradually in Mesozoic fliers. “What this means is that there were refinements over time in the flight apparatus of birds,” Baier said. “Our work also suggests that when early birds flew, they balanced their shoulders differently than birds do today. And so they could have flown differently. Some scientists think they glided down from trees or flapped off the ground. Our approach of looking at this force balance system can help us test these theories.”
Of course, a bird moving through the air is opposed by friction & this is called drag. The types of drag acting on birds are parasitic drag, pressure (or induced) drag, & friction (or profile) drag. Parasitic drag is caused by friction between a bird’s body and the air (and is termed parasitic because the body does not generate any lift). Induced drag occurs when the air flow separates from the surface of a wing, while friction drag is due to the friction between the air and the bird moving through it. Friction drag is minimized by a wing's thin leading edge (wings 'slice' through the air). Induced drag occurs at both low and higher speeds as, at the wing tips, air moves from the area of high pressure (under the wing) to the area of low pressure (top of the wing). As wings move through the air, this curling action causes spirals (vortices) of air (see photo of continuous vortices to the right) which can disrupt the smooth flow of air over a wing (and reduce lift).
A 'smoke angel' created after flares were released and caused by wingtip vortices (Photo source: US Air Force).
Bird tails & flight -- Most birds have rather short triangular tails when spread. In flight, the tail is influenced by the time-varying wake of flapping wings and the flow over the body. It is reasonable to assume that body, wings and tail morphology have evolved in concert. Modelling the interaction between the wings and tail suggests that the induced drag of the wing–tail combination is lower than that for the wings alone. A tail thus enables the bird to have wings that are optimized for cruising speed (with the tail furled to minimize drag) and, at low speeds, the spread tail reduces induced drag during manoeuvring and turning flight. Observations show that tails are maximally spread at low speeds and then become furled increasingly with increasing speed (Hedenström 2002).
Figure to the left. Flow visualization around mounted wingless starling bodies using the smoke-wire technique in a wind tunnel at 9 m/s. (a) The bird with intact tail and covert feathers; (b) tail feathers protruding beyond ventral coverts are trimmed to the same length as coverts; (c) tail feathers, ventral and dorsal covert feathers removed. The height of the wake increases from (a) to (c). The dorsal boundary layer also becomes increasingly turbulent in (b) and (c) compared with the intact tail-body configuration in (a). From: Hedenström (2002).
(A) Depictions of the vortex-ring and continuous-vortex gaits. (B) Cross-sectional view of the wing profile. Lift produced during flapping provides weight support (upward force) and thrust (horizontal force). In the vortex-ring gait, lift is produced only during the downstroke, providing positive upward force and forward thrust. In the continuous-vortex gait, lift is produced during both the upstroke and the downstroke. The downstroke produces a positive upward force and forward thrust; the upstroke produces a positive upward force and rearward thrust. Partial flexion of the wing during the upstroke reduces the magnitude of the rearward thrust to less than that of the forward thrust produced during the downstroke, providing net positive thrust per wingbeat (From Hedrick et al. 2002).
Birds are known to employ two different gaits in flapping flight, a vortex-ring gait in slow flight and a continuous-vortex gait in fast flight. In the vortex ring gait, the upstroke is aerodynamically passive (there is no bound circulation during this phase, and hence no trailing vortex), and the wings flex and move close to the body to minimize drag. In the continuous vortex gait (where each wingtip sheds a separate vortex trail during both the upstroke and downstroke), the wings are aerodynamically active throughout (i.e., lift is generated both during the downstroke and the upstroke), while the wings remain near-planar throughout and deform only by flexure at the wrist. Hedrick et al. (2002) studied the use of these gaits over a wide range of speeds in Cockatiels and Ringed Turtle-doves trained to fly in a wind tunnel. Despite differences in wing shape and wing loading, both species shifted from a vortex-ring to a continuous-vortex gait at a speed of 7 meters/sec. They found that the shift from a vortex-ring to a continuous-vortex gait depended on sufficient forward velocity to provide airflow over the wing during the upstroke similar to that during the downstroke. This shift in flight gait appeared to reflect the need to minimize drag and produce forward thrust in order to fly at high speed.
Flow visualization images by helium-bubble multi-flash photography (top) and sketch of vortex wake (below) as reconstructed by stereophotogrammetry, for the vortex ring gait of a slow-flying Rock Pigeon (Columba livia; left), and for the continuous vortex gait of a European Kestrel (Falco tinnunculus; right) in cruising flight (Rayner and Gordon 1997).
The amount of drag varies with a bird's mass (increased mass = increased friction drag), shape, & speed, and with a wing's surface area & shape.
Increased streamlining (e.g., no trailing legs and extended head) reduces drag (Pennycuick et al. 1996).
Form drag = parasitic drag + friction drag
(figure from http://en.wikipedia.org/wiki/Parasitic_drag#mediaviewer/File:Drag_Curve_2.jpg)
As described below, some wing shapes help to reduce induced drag. Wing shapes vary substantially among birds:
Skeletal elements of the wing of five species of birds scaled so the carpometacarpi are of equal length (Dial 1992).
Theoretical wings that illustrate extremes of pointedness (shift in wingtip toward the leading edge) and convexity (decrease in acuteness of the wingtip). (a) rounded (low aspect ratio) and (b) pointed (high aspect ratio) wings; (c) concave and (d) convex wings (From: Lockwood et al. 1998).
Distribution of species in terms of wing pointedness and convexity. Each point represents one species. a. tern, b. duck, c. pigeon, d. gull, e. magpie, f. buzzard (soaring hawk), and g. sparrowhawk (accipiter) (From: Lockwood et al. 1998).
Aspect ratio affects the relative magnitude of induced and profile drag; if mass, wing area, and other wing shape parameters remain constant, a long, thin high-aspect ratio wing reduces the cost of flight and extends range. However, high aspect ratio is not necessarily associated with high speed (favored by smaller wings). Elliptical wings (low aspect ratio) can maximize thrust from flapping, whereas a more pointed (high-speed) wing with a sharp wingtip minimizes wing weight and wing inertia. Short wings must be flapped at high frequency to provide sufficient thrust. So, relatively short, pointed wings allow rapid wing-beats with reduced inertia and that translates into greater speed (e.g., shorebirds, auks, and ducks). More rounded (convex) wings produce more lift toward the wingtip (where the wing moves faster) and are particularly effective for birds that fly at slow speeds (e.g., taking off from the ground) or need high levels of acceleration. Many small passerines often fly slowly or in 'cluttered' habitats, or need rapid acceleration to escape predators. The same is true for birds like accipiters and corvids (crows and jays; Lockwood et al. 1998).
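The induced-drag side of this trade-off can be made concrete with the standard fixed-wing relation C_Di = C_L² / (π e AR), where AR is the aspect ratio and e a span-efficiency factor. The formula and the sample values in the sketch below are standard aerodynamics chosen for illustration, not figures taken from Lockwood et al. (1998).

```python
# How induced drag falls with aspect ratio, using the standard relation
# C_Di = C_L^2 / (pi * e * AR). The efficiency factor e and C_L are assumed.
import math

def induced_drag_coefficient(c_lift, aspect_ratio, e=0.9):
    return c_lift**2 / (math.pi * e * aspect_ratio)

c_lift = 1.0
for ar in (5, 8, 12, 18):   # from short, rounded wings to long, albatross-like wings
    print(f"aspect ratio {ar:>2}: C_Di ≈ {induced_drag_coefficient(c_lift, ar):.3f}")
```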
Laysan Albatross wing (Source: http://www.ups.edu/biology/museum/wingphotos.html)
Masked Booby in flight
|Whistle through the wing - Birds molt for a variety of reasons. Molting regulates body temperature, keeps feathers neat and waterproof and allows seasonal changes in appearance for mating or migration. However, generating new feathers uses extra energy; staying warm with less plumage uses extra energy, and flying with smaller, work-in-progress wings requires extra energy. So, not all birds molt in the same way. Ducks, swans and geese, for example, shed all their flight feathers at once and are flightless until replacements have grown. Most other birds, however, lose and renew their feathers according to a continuous, pre-programmed sequence. This sequential molting gives rise to a range of temporary feather gaps that seem to reduce take-off speed, take-off angle and level flight speed and to impede predator evasion by raising a bird's minimum turning radius. Anders Hedenstrom and Shigeru Sunada of Cambridge University estimated how the aerodynamics of flight are affected by molting (Hedenstrom & Sunada 1999). They estimated drag and lift by analyzing the fluid dynamics of symmetrical gaps in flat, rectangular model wings of various width-to-length (aspect) ratios, at a fixed angle with respect to air flow – a system that reasonably approximates a bird in gliding, but not flapping, flight. Although the effects were small, Hedenstrom and Sunada concluded that both feather gap size and position affect flight performance. Large gaps, and gaps in the middle of the wing impede aerodynamic efficiency more than small, wing-tip gaps. They also found that the detrimental effect of molt gaps increases with increasing aspect ratio. In other words a bird with short, broad wings, like a vulture, won't miss a few feathers as much as one with long, narrow wings, like an albatross. "This is of great ecological significance," they muse, "as it could help explain why large birds show relatively slow rates of molting that are associated with rather small gaps." -- Sara Abdulla, Nature Science Update||
Wilson's Storm Petrel
Photo by Brian Patteson
Chimney Swift wing (Source: http://www.ups.edu/biology/museum/wingphotos.html)
Peregrine Falcon with a camera mounted on its back!
|How pigeons give falcons the slip -- A Peregrine Falcon dive-bombing at several hundred miles an hour to knock a pigeon out of the sky would seem to be a study in concentration. At those speeds, attention must be paid. But even a falcon in hot pursuit can become distracted. And what distracts it is a patch of white on the rump of an otherwise blue-gray pigeon. "The brain can be primed by a conspicuous thing," said Alberto Palleroni. The falcon, he said, focuses on the conspicuous thing -- the white patch -- and doesn't notice the pigeon starting to turn away and escape. "In effect, it's a kind of a card trick or a ruse" on the part of the pigeon, Palleroni said.
Palleroni et al. (2005) observed more than 1,800 falcon attacks on wild pigeons over seven years. They recorded the plumage types among the pigeons and noticed that while birds with white rump patches made up 20% of the pigeon population, very few were captured by the falcons. When a Peregrine Falcon attacks a pigeon, it plunges at speeds greater than 200 miles an hour, levels off and comes upon the pigeon from behind, punching it with what amounts to a closed fist. At those speeds even a grazing blow kills the pigeon; the falcon then circles back and picks it up. The only way the slower-flying pigeon can escape is by dipping a wing, rolling and veering off. If the falcon is distracted by the white patch, it won't notice the dipping of the wing (which, being blue-gray, blends with the landscape) until it's too late.
Plumage color in pigeons is an independently heritable trait, Palleroni said, meaning it is not tied to selection involving sexual or other traits. So it is highly likely that the white rump feathers are an anti-predator adaptation to high-speed attacks. Not bad for a bird that many people disdain. "The feral pigeon is an amazing balance of adaptations and success," Palleroni said. - Henry Fountain, New York Times
Golden Eagle in flight
Philippine Eagle (currently critically endangered)
Another important factor that influences a bird's flying ability is wing loading - the weight (or mass) of a bird divided by wing area (for example, grams of body mass per square centimeter of wing area). Birds with low wing loading need less power to sustain flight. Birds considered to be the 'best' flyers, such as swallows & swifts, have lower wing loading values than other birds.
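As a rough, illustrative sketch (the species values below are hypothetical round numbers chosen only to show the arithmetic, not measurements from this text), wing loading and aspect ratio can be computed from a bird's mass, wing area, and wingspan:

```python
# Minimal sketch: computing wing loading and aspect ratio.
# The numbers below are hypothetical, for illustration only.

G = 9.81  # gravitational acceleration, m/s^2

def wing_loading(mass_kg, wing_area_m2):
    """Wing loading in N/m^2 (weight divided by wing area); mass/area is an equivalent convention."""
    return mass_kg * G / wing_area_m2

def aspect_ratio(wingspan_m, wing_area_m2):
    """Aspect ratio = wingspan^2 / wing area (dimensionless)."""
    return wingspan_m ** 2 / wing_area_m2

# Hypothetical swallow-like bird: 0.02 kg, 0.014 m^2 wing area, 0.33 m wingspan
print(wing_loading(0.02, 0.014))   # ~14 N/m^2 (low wing loading)
print(aspect_ratio(0.33, 0.014))   # ~7.8
# Hypothetical swan-like bird: 10 kg, 0.65 m^2 wing area, 2.3 m wingspan
print(wing_loading(10, 0.65))      # ~151 N/m^2 (high wing loading)
print(aspect_ratio(2.3, 0.65))     # ~8.1
```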
|The Flight Strategy of Magnificent Frigatebirds -- Frigatebirds cannot land on the sea because their feathers are not waterproof. If they did land, they would find it even harder to take off again because their legs are too short. Despite this, frigatebirds are perfectly suited for an aerial life over the sea because they have the lowest wing-loading (large wing area & low body mass) of any bird. Weimerskirch et al. (2003) investigated the movements of Magnificent Frigatebirds (Fregata magnificens) while foraging at sea off the coast of French Guiana. Because they are very light in comparison to their wing surface, frigatebirds can glide at altitudes of up to 2,500 meters. Then they glide downward, taking advantage of the next current. This flight strategy, which limits the bird's physical efforts, is the same as that used by migratory birds during long flights over land. Migratory birds, however, avoid flying over the sea due to a lack of thermals, while frigatebirds fly over the sea. As it turns out, ascending air currents are found over the sea only in tropical regions where the waters are warm enough to create such currents on a continuous basis. Frigatebirds can therefore fly night and day using this technique. To investigate the movements of Magnificent Frigatebirds, Weimerskirch et al. (2003) fitted the birds with satellite transmitters and altimeters, which allowed them to observe that the birds only occasionally come close to the sea surface to catch prey. They catch flying fish or squid driven above the surface by underwater predators like schools of tuna or dolphins. To identify such feeding opportunities, which are very rare, requires long hours of flight at high altitudes. Frigatebirds rarely feed their young, which consequently grow very slowly. The species is, however, well-adapted: it has a low reproductive rate and parent birds care for their young for over one year, the longest period of parental care of any bird.|
Check this short video of soaring frigatebirds.
Aspect ratio vs. wing loading index in some birds, airplanes, a hang-glider, a butterfly, and a maple seed.
The numbers after various flying objects refer to aspect ratio. Fm, Fregata magnificens (Magnificent Frigatebird); Ga, Gallirallus australis (Common name: Weka;
an endemic New Zealand bird in the rail family) (From: Norberg 2002).
Different combinations of wing loading and aspect ratio permit particular flight modes and foraging strategies. Species with long wings and high aspect ratios also have low wing loadings, particularly those with low body mass, and their flight is inexpensive, e.g., many seabirds, swifts, and swallows. Birds with high wing loading and short wings, but still with high aspect ratios, are adapted to fast and rather inexpensive flight (short wings reduce profile power that is large in fast flight), e.g., loons, mergansers, geese, swans, ducks, and auks. Birds flying close to or among vegetation, e.g., flycatchers, tend to have low aspect ratios that contribute to high induced drag, but their low mass and wing loading reduce flight costs. The very low aspect ratios of many smaller birds that occupy densely-vegetated habitats, e.g., gallinaceous birds, mean that the energetic cost of flight is expensive, so these species spend much of their time walking. Birds with higher wing loading, e.g., penguins, are flightless (Norberg 2002).
Flight styles -- Based on differences in aspect ratios and wing loading (Rayner 1988; see figure below), flight styles can also be categorized as either specialized or non-specialized. The non-specialists have average aspect ratios and average wing loading and are excellent flyers (capable of long flights and with good maneuverability) that typically use flapping flight. The non-specialists can be further subdivided, based on aspect ratio and speed, as slow non-specialists and fast non-specialists. In the slow category would be most passerines (Passeriformes), pelicans (Pelecaniformes), herons, egrets, ibises, and storks (Ciconiiformes), pigeons and doves (Columbiformes), cuckoos (Cuculiformes), most owls (Strigiformes), trogons (Trogoniformes), most birds in the order Gruiformes (e.g., gallinules, rails, and bustards), mousebirds (Coliiformes), woodpeckers (Piciformes), and parrots (Psittaciformes). Fast non-specialists include many falcons (Falconidae), gulls (Larinae), and storm-petrels (Hydrobatidae).
Birds with morphological attributes (aspect ratio and wing loading) that differ (beyond one or two standard deviations) from those of ‘typical’ birds exhibit specialized flight styles (Rayner 1988). Among these specialized styles are:
Approximate centroids of major groups of birds relative to aspect ratios and wing loading (From: Rayner 1988).
Modified version of Rayner's (1988) figure (From: http://speculativeevolution.wikia.com/wiki/File:BirdWingGraph.png)
Bird Flight Speeds (m/s) Plotted in Relation to Body Mass (kg) and Wing Loading (N/m2) for 138 Species of Six Main Monophyletic Groups
Bird flight speeds -- Alerstam et al. (2007) examined the cruising speeds of 138 different species of migrating birds in flapping flight using tracking radar. Mean airspeeds among the 138 species ranged between 8 and 23 m/s (or about 18 to 51 mph). Birds of prey, songbirds, swifts, gulls, terns, and herons had flight speeds in the lower part of this range, while pigeons, some of the waders, divers, swans, geese, and ducks were fast flyers in the range 15–20 m/s (33 - 45 mph). Cormorants, cranes, and skuas were among the species flying at intermediary speeds, about 15 m/s. The diving ducks reached the fastest mean speeds, with several species exceeding 20 m/s (and up to 23 m/s). An important factor in explaining variation in flight speed was phylogenetic group; species of the same group tended to fly at similar characteristic speeds.
Depending on their ecological life style and foraging, birds are adapted to different aspects of flight performance, e.g., speed, agility, lift generation, escape, take-off, and energetic cost of flight. These adaptations are likely to have implications for the flight apparatus (anatomy, physiology, and muscle operation) and the flight behavior that may constrain the cruising flight speed. Species flying at comparatively slow cruising speeds frequently use thermal soaring (raptors and storks), are adapted for hunting and load carrying (raptors), or for take-off and landing in dense vegetation (herons). Associated with these flight habits they have a lower ratio of elevator (supracoracoideus) to depressor (pectoralis) flight muscle (particularly low among birds of prey) compared with shorebirds and waterfowl. Alerstam et al. (2007) suggested that functional differences in flight apparatus and musculature among birds of different life and flight styles (differences often associated with evolutionary origin) have a significant influence on a bird's performance and speed in sustained cruising flight.
Altitude vs. time showing rapid descents during migratory flights as recorded by radar. (A) Barn Swallow, (B) Yellow Wagtail, (C) Reed Warbler, (D) Yellow Wagtail, (E) Meadow Pipit, and (F) Yellow Wagtail.
|Diving speeds -- Hedenstrom and Liechti (2001) used radar to track the flights of migrating birds as they descended from their cruising altitudes after crossing the Mediterranean Sea. Dive angles were as great as 83.5 degrees and the maximum speed recorded was 53.7 meters/sec (or about 120 miles/hour). Larger birds can attain even greater speeds, with estimates of the top speed of Peregrine Falcons as high as 89 - 157 meters/sec (or about 200-350 miles/hour). Although such estimates may be correct, their accuracy is unknown because the speed of a diving falcon is difficult to measure. The required instrumentation is complex, and the dive is a brief, rare event that takes place at unpredictable places and times (Tucker 1998).|
The high wing loading of birds like grebes, loons (check Looney Lift-Off), and swans (see Tundra Swan below) means that it's more difficult for them to generate sufficient lift to take off. That's why these birds often run along the surface of a lake for some distance before taking flight: they must build up enough speed to generate the lift needed to get their relatively heavy bodies into the air!
Want to see a Laysan Albatross taking flight?? Check this video!
Canada Geese taking off (slow motion)
Swans taking off
|Take-off! -- Initiating flight is challenging, and considerable effort has focused on understanding the energetics and aerodynamics of take-off for both machines and animals. Available evidence suggests that birds maximize their initial flight velocity using leg thrust rather than wing flapping (e.g., see the drawings of a European Starling taking off from the ground below). The smallest birds, hummingbirds, are unique in their ability to perform sustained hovering but have small hindlimbs that could hinder generation of high leg thrust. During take-off by hummingbirds, Tobalske et al. (2004) measured hindlimb forces on a perch mounted with strain gauges and filmed wingbeat kinematics with high-speed video. Whereas other birds obtain 80–90% of their initial flight velocity using leg thrust, the leg contribution in hummingbirds was 59% during normal take-off. Unlike other species, hummingbirds beat their wings several times as they thrust using their hindlimbs. In a phylogenetic context, these results show that reduced body and hindlimb size in hummingbirds limits their peak acceleration during leg thrust and, ultimately, their take-off velocity. Previously, the influence of motivational state on take-off flight performance has not been investigated for any bird. Tobalske et al. (2004) studied the full range of motivational states by testing performance as the birds took off: (1) to initiate flight autonomously, (2) to escape a startling stimulus or (3) to aggressively chase a conspecific away from a feeder. Motivation affected performance. Escape and aggressive take-off featured decreased hindlimb contribution (46% and 47%, respectively) and increased flight velocity. When escaping, hummingbirds shortened their body movement prior to onset of leg thrust and began beating their wings earlier and at higher frequency. Thus, hummingbirds are capable of modulating their leg and wingbeat kinetics to increase take-off velocity.|
European Starling taking off from the ground. Time notations (milliseconds) are relative to the defined start of take-off (vertical
force > 105% of body weight). Key events: wings begin unfolding (73 ms) and start of downstroke (108 ms). From: Earls (2000).
|Landing - Birds must usually be much more precise when landing than an airplane pilot, often landing on a branch rather than a runway. During landing, birds increase the angle of attack of their wings until they stall. This decreases both speed and lift. Birds also spread and lower their tails, with the tail increasing drag & acting like a brake. Finally, legs and feet are extended for landing. Check this slow-motion video of a pigeon landing on a branch and this one of a Barn Owl landing.|
Tree Swallow landing
Photo by Anupam Pal & used with his permission
Click on the photo to see a short video of a
Rock Pigeon landing in slow motion.
Rock Pigeon landing in slow motion - note the position of the alulas and the spread tail feathers
Bald Eagle landing
Eagle Owl landing
|Leading-edge vortex lifts swifts --
How do birds fly up to a branch and land smoothly and precisely? It turns out that they may use a completely different kind of lift -- one which not only works at slow speeds, but even helps birds brake to a stop. Using a model of a swift wing in water containing particles lit with a laser, Videler et al. (2004) discovered how Common Swifts (Apus apus) create lift with a "leading-edge vortex" (LEV). Think of an LEV as a horizontal tornado that forms above a swept-back wing as it cuts through the air. The vortex is a low-pressure zone. Like the low-pressure zone formed above conventional wings, it generates lift. Until this study, it had been seen in insects, but not birds.
Birds have two-part wings. The proximal "arm wing" is rounded on front, humped on top, and sharp on the back -- just like most airplane wings. Further away, the "hand wing" is flatter on top and extremely sharp on the front. The hand wing resembles the wing of a fighter plane, and it is also often swept back -- angled -- toward the rear. Wings on some high-performance jets can change angle to alter the leading-edge vortex. Wings that are nearly straight out create more lift. Swept-back wings create more drag (air friction). Acrobatic birds may also take advantage of the LEV; changing wing angle gives them the ratio of lift and drag they need for flying and snatching insects in mid-air.
The LEV not only creates lift, especially at slow speeds, but also confers another benefit that helps the swift perform insectivorous aerobatics. While conventional lift is chiefly an upward force, the LEV can also produce drag, which allows sudden steering. "The LEV can be used for controlling flight," says Videler. "It's very suited for that because there is no time delay, the forces are produced instantaneously. That's very useful if you want to maneuver very quickly." -- Courtesy of the University of Wisconsin Board of Regents
Swifts hunt in the air, catching flying insects on the wing. To snag its prey, a swift has to be able to fly fast and make very tight turns, just like a jet fighter (From: Müller and Lentink 2004).
When gliding, a Common Swift shows a torpedo-shaped body. Its arm-wing (close to its body) has a rounded leading edge. The bird's long, slender hand-wing has a much sharper profile. The inset shows the feathers at the hand-wing's leading edge.
Alerstam, T., M. Rosén, J. Bäckman, P. G. P. Ericson, and O. Hellgren. 2007. Flight speeds among bird species: allometric and phylogenetic effects. PLoS Biology 5: e197.
Alonso, P. D., A. C. Milner, R. A. Ketcham, M. J. Cookson and T. B. Rowe. 2004. The avian nature of the brain and inner ear of Archaeopteryx. Nature 430: 666 - 669.
Baier, D. B., S. M. Gatesy, and F. A. Jenkins. 2007. A critical ligamentous mechanism in the evolution of avian flight. Nature 445: 307-310.
Bajec, I. L., and F. H. Heppner. 2009. Organized flight in birds. Animal Behaviour 78: 777-789.
Burgers, P. and L. M. Chiappe. 1999. The wings of Archaeopteryx as a primary thrust generator. Nature 399: 60-62.
Carey, J.R. and J. Adams. 2001. The preadaptive role of parental care in the evolution of avian flight. Archaeopteryx 19: 97 - 108.
Chatterjee, S. and R. J. Templin. 2007. Biplane wing planform and flight performance of the feathered dinosaur Microraptor gui. Proceedings of the National Academy of Science, online early - Jan. 2007.
Chen P., Z. Dong, and S. Zhen. 1998. An exceptionally well-preserved theropod dinosaur from the Yixian Formation of China. Nature. 391:147–152.
del Hoyo, J., A. Elliott, and J. Sargatal (eds.). 1992. Handbook of birds of the world, volume 1. Lynx Edicions, Barcelona, Spain.
Dial, K. P. 1992. Avian forelimb muscles and nonsteady flight: can birds fly without using the muscles in their wings? Auk 109: 874-885.
Dial, K. P. 2003. Wing-assisted incline running and the evolution of flight. Science 299:402-404.
Earls, K. D. 2000. Kinematics and mechanics of ground take-off in the starling (Sturnus vulgaris) and the quail (Coturnix coturnix). Journal of Experimental Biology 203:725-739.
Feduccia, A. 1996. The origin and evolution of birds. Yale Univ. Press, New Haven.
Hedenström, A. 2002. Aerodynamics, evolution and ecology of avian flight. Trends in Ecology and Evolution 17: 415-422.
Hedenström, A. and F. Liechti. 2001. Field estimates of body drag coefficient on the basis of dives in passerine birds. Journal of Experimental Biology 204: 1167-1175.
Hedenstrom, A. and S. Sunada. 1999. On the aerodynamics of moult gaps in birds. Journal of Experimental Biology 202:67-76.
Hedrick, T. L., B. W. Tobalske, and A. A. Biewener. 2002. Estimates of circulation and gait change based on a three-dimensional kinematic analysis of flight in cockatiels (Nymphicus hollandicus) and Ringed Turtle-doves (Streptopelia risoria). Journal of Experimental Biology 205:1389-1409.
Lockwood, R., J. P. Swaddle, and J. M. V. Rayner. 1998. Avian wingtip shape reconsidered: wingtip shape indices and morphological adaptations to migration. Journal of Avian Biology 29: 273-292.
Longrich, N. 2006. Structure and function of hindlimb feathers in Archaeopteryx lithographica. Paleobiology 32: 417-431.
Müller, U. K. and David Lentink. 2004. Turning on a dime. Science 306: 1899 - 1900.
Naish, D. and D. M. Martill. 2003. Pterosaurs – a successful invasion of prehistoric skies. Biologist 50: 213-216.
Norberg, U. M. L. 2002. Structure, form, and function of flight in engineering and the living world. Journal of Morphology 252:52-81.
Palleroni, A., C. T. Miller, M. Hauser, and P. Marler. 2005. Predation: Prey plumage adaptation against falcon attack. Nature 434:973-974.
Pennycuick C. J., M. Klaassen, A. Kvist, and A. Lindström. 1996. Wingbeat frequency and the body drag anomaly: wind-tunnel observations on a thrush nightingale (Luscinia luscinia) and a teal (Anas crecca). Journal of Experimental Biology 199: 2757–2765.
Rayner, J. M. V. 1988. Form and function in avian flight. Current Ornithology 5: 1-66.
Ros, I. 2013. Low speed avian maneuvering flight. Ph.D. dissertation, Harvard University, Cambridge, MA.
Sanz, J. L., L. M. Chiappe, P. Perez-Moreno, A. D. Buscalioni, J. J. Moratalla, F. Ortega, & F. J. Payata-Ariza. 1996. An Early Cretaceous bird from Spain and its implications for the evolution of avian flight. Nature 382: 442-445.
Sanz, J. L. and F. Ortega. 2002. The birds from Las Hoyas. Science Progress 85:113-130.
Schluter D. 2001. Ecology and the origin of species. Trends in Ecology and Evolution. 16:372–380.
Seebacher, F. 2003. Dinosaur body temperatures: the occurrence of endothermy and ectothermy. Paleobiology 29: 105-122.
Speakman, J. R. 2001. The evolution of flight and echolocation in bats: another leap in the dark. Mammal Review 31: 111-130.
Sullivan, W. and K.-J. Wilson. 2001. Differences in habitat selection between Chatham Petrels (Pterodroma axillaris) and Broad-billed Prions (Pachyptila vittata): implications for management of burrow competition. New Zealand Journal of Ecology 25: 65-69.
Sumida S. S., and C. A. Brochu. 2000. Phylogenetic context for the origin of feathers. American Zoologist. 40:486–503.
Videler, J.J., E.J. Stamhuis, and G.D.E. Povel. 2004. Leading-edge vortex lifts swifts. Science 306:1960-1962.
Weimerskirch, H., O. Chastel, C. Barbraud, and O. Tostain. 2003. Frigatebirds ride high on thermals. Nature 421:333-334.
Lift from flow turning (NASA)
Spread wing-tips of the birds as a model for drag reduction
Theory of Flapping Flight
Financial literacy for kids is important for a number of reasons. Not only can it help your child avoid debt and excess spending, it will also help them develop healthy financial habits. Children should be taught financial literacy from an early age and be taught to make good financial choices. Here are some tips to help your child build financial literacy:
Financial literacy starts with open communication. It doesn’t have to be a formal conversation; teachable moments can happen every day. For example, a news item about bankruptcy could be a great opportunity to discuss personal debt, or a gas station transaction can lead to a discussion about the cost of gas. The key is to explain how money decisions will help them achieve their goals.
Financial literacy for kids also begins with budgeting. Budgeting will teach your children how to make wise spending and investment decisions, and it will also help them set financial goals. As a parent, it is your responsibility to teach your child to save money. Don’t let stereotypes hold them back from pursuing their dreams.
Once Upon a Dime is a great example of an interactive lesson plan. The lesson plan teaches kids basic economic concepts while providing role-playing opportunities. Another excellent resource is Practical Money Skills, which links parents, educators, banks, and governments to the resources they need. Another example is the Consumer Jungle, a website with financial literacy games and activities. |
General equilibrium is a concept in macroeconomics that seeks to explain the behaviour of an entire economy with respect to its various interacting markets. The basic theory generally includes the goods market and the money market. Both these markets must be in equilibrium to form a general equilibrium. The goods market is represented by the equilibrium between investment and savings, which determines an equilibrium interest rate. This equilibrium interest rate is negatively related to GDP. In an open economy, the interest rate is determined by the world and the goods market then determines the exchange rate instead. The money market is in equilibrium when money supply equals money demand; this gives an equilibrium interest rate that is positively correlated with GDP. General equilibrium occurs at the intersection of the money market (LM curve) and the goods market (IS curve), where both are satisfied by the same equilibrium interest rate (or exchange rate, depending on the type of economy). From this relationship, we can derive aggregate demand.
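As a hedged illustration only (the linear functional forms and every coefficient below are invented for the example, not taken from this text), the general-equilibrium interest rate and GDP can be found where a linear IS curve and a linear LM curve intersect:

```python
# Minimal sketch of a linear IS-LM intersection (hypothetical coefficients).
# IS curve:  Y = a - b * r   (goods market: output falls as the interest rate rises)
# LM curve:  Y = c + d * r   (money market: money demand equals a fixed money supply)
# General equilibrium is the (r, Y) pair that satisfies both equations.

def is_lm_equilibrium(a, b, c, d):
    r = (a - c) / (b + d)   # equate the two expressions for Y and solve for r
    y = a - b * r
    return r, y

r_star, y_star = is_lm_equilibrium(a=1000, b=50, c=600, d=30)
print(f"equilibrium interest rate = {r_star:.2f}, equilibrium GDP = {y_star:.2f}")
# -> equilibrium interest rate = 5.00, equilibrium GDP = 750.00
```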
Aggregate demand is the relationship between the price level and the real GDP of an economy at a given time. This general equilibrium can be thought of as a long run macroeconomic equilibrium when the economy is at full employment. A shock to this economy would cause a change in either the IS or LM curve and then result in a change in the aggregate demand function. The economy initially moves to a short run macroeconomic equilibrium defined by the property of an inability for the price level to adjust. Afterwards, long run macroeconomic equilibrium is achieved when price levels adjust which is reflected by a shift of the LM curve causing a return to full employment.
Macroeconomic equilibrium has two conditions. The first is that desired aggregate expenditure is equal to actual GDP. This simply means that households are willing to purchase what is being produced (aggregate demand). The second condition is that firms must want to produce the current level of GDP (aggregate supply). The conditions are fulfilled when both curves intersect.
If price levels are too high, then there is an excess supply of output and there is an increase in unsold stocks for producers. This means that producers need to cut back on production to avoid excess in inventories. When the price level is below equilibrium, there is an excess demand in the short run, which signals to producers to expand output. Shortages of resources will also lead to a general rise in costs and prices.
Changes in macroeconomic equilibrium show how the economy reacts to shocks to real GDP and the price level. Aggregate demand and supply shocks are described by how they impact real GDP. Positive shocks increase equilibrium GDP and negative shocks reduce equilibrium GDP.
CODE Function in Excel
The CODE function in Excel is used to find the numeric code of a character in a string. It returns the code of the first character only, so =CODE("Anand") and =CODE("An") give the same result, 65, because the code for the character "A" is 65.
- text: The text parameter is the only parameter of the CODE function, and it is mandatory. It can be a single character, a string, or any function that returns text as its result.
How to Use CODE Function in Excel? (with Examples)
In this section, we will understand the use of CODE Function and will look at a few examples with the help of actual data.
As you can clearly observe in the output section, the CODE function returns the ASCII value of the corresponding characters written in the first column. The ASCII value of "A" is 65, and that of "a" is 97. You can easily verify the ASCII values of every character on your keyboard from the Internet.
In the above example, we have applied the CODE function on cells containing strings, so as you can see in the output column, the CODE Function is returning the ASCII value of the first character of the sentence.
In example three, we have used another two functions, LOWER and UPPER, to use their return values as the parameter of the CODE function. The LOWER function returns the lower case of the character passed as a parameter; similarly, UPPER returns the upper case of a character passed as a parameter.
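For readers who prefer to see this behaviour in a scripting language, here is a Python analogy (an illustration only, not Excel itself): ord() plays the role of CODE, chr() the role of CHAR, and str.lower()/str.upper() stand in for LOWER and UPPER.

```python
# Python analogy for the Excel CODE/CHAR functions (illustrative only).
def code(text):
    """Like Excel's CODE: numeric code of the first character of the text."""
    if not text:
        raise ValueError("CODE requires a non-empty text argument (#VALUE! in Excel)")
    return ord(text[0])

print(code("Anand"))          # 65  (code of 'A')
print(code("An"))             # 65  (still only the first character matters)
print(code("Anand".lower()))  # 97  (like =CODE(LOWER(A1)))
print(code("anand".upper()))  # 65  (like =CODE(UPPER(A1)))
print(chr(65))                # 'A' (like Excel's CHAR, the inverse of CODE)
```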
Things to Remember
- The main purpose of the CODE function is to return the ASCII code of the first character of the text in any cell.
- The CODE function is not that popular among the Excel community, but as an Excel expert, you should be aware of this function, as you might find it handy in VBA coding.
- It was first introduced in Excel 2000 and is available in all subsequent versions of excel.
- The parameter “text” in CODE Function is mandatory, and if it is left blank, the function will return a #VALUE error, which can easily be resolved by providing a proper character or string as a parameter to the function.
- The return type of CODE Function is a numeric value.
- It is actually the inverse of the CHAR function in excel. The CHAR function returns the corresponding character from a numeric ASCII value.
- You might observe a different output than the one shown in our examples on a Mac OS because Mac OS uses the Macintosh character set while windows use the ANSI character set.
This has been a guide to CODE in Excel. Here we discuss the CODE formula in Excel and how to use the CODE function, along with Excel examples and downloadable Excel templates. |
In mathematics, the remainder is the amount "left over" after performing some computation. In arithmetic, the remainder is the integer "left over" after dividing one integer by another to produce an integer quotient (integer division). In algebra, the remainder is the polynomial "left over" after dividing one polynomial by another. The modulo operation is the operation that produces such a remainder when given a dividend and divisor.
Formally it is also true that a remainder is what is left after subtracting one number from another, although this is more precisely called the difference. This usage can be found in some elementary textbooks; colloquially it is replaced by the expression "the rest" as in "Give me two dollars back and keep the rest." However, the term "remainder" is still used in this sense when a function is approximated by a series expansion and the error expression ("the rest") is referred to as the remainder term.
If a and d are integers, with d non-zero, it can be proven that there exist unique integers q and r, such that a = qd + r and 0 ≤ r < |d|. The number q is called the quotient, while r is called the remainder.
The remainder, as defined above, is called the least positive remainder or simply the remainder. The integer a is either a multiple of d or lies in the interval between consecutive multiples of d, namely, q⋅d and (q + 1)d (for positive q).
At times it is convenient to carry out the division so that a is as close as possible to an integral multiple of d, that is, we can write
- a = k⋅d + s, with |s| ≤ |d/2| for some integer k.
In this case, s is called the least absolute remainder. As with the quotient and remainder, k and s are uniquely determined except in the case where d = 2n and s = ± n. For this exception we have,
- a = k⋅d + n = (k + 1)d − n.
A unique remainder can be obtained in this case by some convention such as always taking the positive value of s.
In the division of 43 by 5 we have:
- 43 = 8 × 5 + 3,
so 3 is the least positive remainder. We also have,
- 43 = 9 × 5 − 2,
and −2 is the least absolute remainder.
These definitions are also valid if d is negative, for example, in the division of 43 by −5,
- 43 = (−8) × (−5) + 3,
and 3 is the least positive remainder, while,
- 43 = (−9) × (−5) + (−2)
and −2 is the least absolute remainder.
In the division of 42 by 5 we have:
- 42 = 8 × 5 + 2,
and since 2 < 5/2, 2 is both the least positive remainder and the least absolute remainder.
In these examples, the (negative) least absolute remainder is obtained from the least positive remainder by subtracting 5, which is d. This holds in general. When dividing by d, either both remainders are positive and therefore equal, or they have opposite signs. If the positive remainder is r1, and the negative one is r2, then
- r1 = r2 + d.
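The examples above can be reproduced with a short sketch (not part of the original article); Python's % operator is used here only as a convenient way to compute the least positive remainder.

```python
# Illustrative sketch: least positive vs. least absolute remainder.
def least_positive_remainder(a, d):
    """r such that a = q*d + r and 0 <= r < |d|."""
    return a % abs(d)

def least_absolute_remainder(a, d):
    """s such that a = k*d + s and |s| <= |d|/2 (positive value taken at ties)."""
    r = a % abs(d)
    return r if r <= abs(d) / 2 else r - abs(d)

print(least_positive_remainder(43, 5), least_absolute_remainder(43, 5))    # 3 -2
print(least_positive_remainder(43, -5), least_absolute_remainder(43, -5))  # 3 -2
print(least_positive_remainder(42, 5), least_absolute_remainder(42, 5))    # 2 2
```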
For floating-point numbers
When a and d are floating-point numbers, with d non-zero, a can be divided by d without remainder, with the quotient being another floating-point number. If the quotient is constrained to being an integer, however, the concept of remainder is still necessary. It can be proved that there exists a unique integer quotient q and a unique floating-point remainder r such that a = qd + r with 0 ≤ r < |d|.
Extending the definition of remainder for floating-point numbers as described above is not of theoretical importance in mathematics; however, many programming languages implement this definition, see modulo operation.
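For instance (a minimal sketch, not from the article), Python exposes both views for floating-point operands: divmod() returns an integer-valued quotient with a remainder satisfying 0 ≤ r < |d| for positive d, while math.fmod() gives the remainder the sign of the dividend.

```python
import math

# Floating-point remainders (illustrative only).
print(divmod(7.5, 2.0))      # (3.0, 1.5)   since 7.5 = 3*2.0 + 1.5 and 0 <= 1.5 < 2.0
print(divmod(-7.5, 2.0))     # (-4.0, 0.5)  remainder kept in [0, 2.0)
print(math.fmod(7.5, 2.0))   # 1.5
print(math.fmod(-7.5, 2.0))  # -1.5         remainder takes the sign of the dividend
```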
In programming languages
While there are no difficulties inherent in the definitions, there are implementation issues that arise when negative numbers are involved in calculating remainders. Different programming languages have adopted different conventions (see the sketch after this list):
- Pascal chooses to make the result of the mod operation positive, but does not allow d to be negative or zero (so, a = (a div d) × d + a mod d is not always valid).
- C99 chooses the remainder with the same sign as the dividend a. (Before C99, the C language allowed other choices.)
- Perl, Python (only modern versions), and Common Lisp choose the remainder with the same sign as the divisor d.
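A short sketch of those conventions (assuming Python 3; math.fmod is used here as a stand-in for C99-style behaviour):

```python
import math

# Python's % follows the sign of the divisor d (like Perl and Common Lisp);
# math.fmod follows the sign of the dividend a (like C99's %).
for a, d in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    print(a, d, a % d, int(math.fmod(a, d)))
# 7 3 1 1
# -7 3 2 -1
# 7 -3 -2 1
# -7 -3 -1 -1
```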
Euclidean division of polynomials is very similar to Euclidean division of integers and leads to polynomial remainders. Its existence is based on the following theorem: Given two univariate polynomials a(x) and b(x) (with b(x) not the zero polynomial) defined over a field (in particular, the reals or complex numbers), there exist two polynomials q(x) (the quotient) and r(x) (the remainder) which satisfy: a(x) = b(x)·q(x) + r(x), with r(x) = 0 or deg(r(x)) < deg(b(x)),
where "deg(...)" denotes the degree of the polynomial (the degree of the constant polynomial whose value is always 0 is defined to be negative, so that this degree condition will always be valid when this is the remainder.) Moreover, q(x) and r(x) are uniquely determined by these relations.
This differs from the Euclidean division of integers in that, for the integers, the degree condition is replaced by the bounds on the remainder r (non-negative and less than the divisor, which ensures that r is unique.) The similarity of Euclidean division for integers and also for polynomials leads one to ask for the most general algebraic setting in which Euclidean division is valid. The rings for which such a theorem exists are called Euclidean domains, but in this generality uniqueness of the quotient and remainder is not guaranteed.
- Smith 1958, p. 97
- Ore 1988, p. 30. But if the remainder is 0, it is not positive, even though it is called a "positive remainder".
- Ore 1988, p. 32
- Pascal ISO 7185:1990
- "C99 specification (ISO/IEC 9899:TC2)" (PDF). 6.5.5 Multiplicative operators. 2005-05-06. Retrieved 16 August 2018.
- Larson & Hostetler 2007, p. 154
- Rotman 2006, p. 267
- Larson & Hostetler 2007, p. 157
- Larson, Ron; Hostetler, Robert (2007), Precalculus:A Concise Course, Houghton Mifflin, ISBN 978-0-618-62719-6
- Ore, Oystein (1988) , Number Theory and Its History, Dover, ISBN 978-0-486-65620-5
- Rotman, Joseph J. (2006), A First Course in Abstract Algebra with Applications (3rd ed.), Prentice-Hall, ISBN 978-0-13-186267-8
- Smith, David Eugene (1958) , History of Mathematics, Volume 2, New York: Dover, ISBN 0486204308
- Davenport, Harold (1999). The higher arithmetic: an introduction to the theory of numbers. Cambridge, UK: Cambridge University Press. p. 25. ISBN 0-521-63446-6.
- Katz, Victor, ed. (2007). The mathematics of Egypt, Mesopotamia, China, India, and Islam : a sourcebook. Princeton: Princeton University Press. ISBN 9780691114859.
- Schwartzman, Steven (1994). "remainder (noun)". The words of mathematics : an etymological dictionary of mathematical terms used in english. Washington: Mathematical Association of America. ISBN 9780883855119.
- Zuckerman, Martin M. Arithmetic: A Straightforward Approach. Lanham, Md: Rowman & Littlefield Publishers, Inc. ISBN 0-912675-07-1. |
A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution. Z-tests test the mean of a distribution. For each significance level in the confidence interval, the Z-test has a single critical value (for example, 1.96 for 5% two tailed) which makes it more convenient than the Student's t-test whose critical values are defined by the sample size (through the corresponding degrees of freedom).
Because of the central limit theorem, many test statistics are approximately normally distributed for large samples. Therefore, many statistical tests can be conveniently performed as approximate Z-tests if the sample size is large or the population variance is known. If the population variance is unknown (and therefore has to be estimated from the sample itself) and the sample size is not large (n < 30), the Student's t-test may be more appropriate.
A Z-test can be performed as follows when T is a statistic that is approximately normally distributed under the null hypothesis:
First, estimate the expected value μ of T under the null hypothesis, and obtain an estimate s of the standard deviation of T.
Second, determine the properties of T : one tailed or two tailed.
For Null hypothesis H0: μ≥μ0 vs alternative hypothesis H1: μ<μ0 , it is lower/left-tailed (one tailed).
For Null hypothesis H0: μ≤μ0 vs alternative hypothesis H1: μ>μ0 , it is upper/right-tailed (one tailed).
For Null hypothesis H0: μ=μ0 vs alternative hypothesis H1: μ≠μ0 , it is two-tailed.
Third, calculate the standard score Z = (T − μ) / s, where μ is the expected value of T under the null hypothesis and s is its estimated standard deviation, from which one-tailed and two-tailed p-values can be calculated as Φ(Z) (for lower/left-tailed tests), Φ(−Z) (for upper/right-tailed tests) and 2Φ(−|Z|) (for two-tailed tests), where Φ is the standard normal cumulative distribution function.
For the Z-test to be applicable, certain conditions must be met.
If estimates of nuisance parameters are plugged in as discussed above, it is important to use estimates appropriate for the way the data were sampled. In the special case of Z-tests for the one or two sample location problem, the usual sample standard deviation is only appropriate if the data were collected as an independent sample.
In some situations, it is possible to devise a test that properly accounts for the variation in plug-in estimates of nuisance parameters. In the case of one and two sample location problems, a t-test does this.
Suppose that in a particular geographic region, the mean and standard deviation of scores on a reading test are 100 points, and 12 points, respectively. Our interest is in the scores of 55 students in a particular school who received a mean score of 96. We can ask whether this mean score is significantly lower than the regional mean—that is, are the students in this school comparable to a simple random sample of 55 students from the region as a whole, or are their scores surprisingly low?
First calculate the standard error of the mean: SE = σ/√n = 12/√55 ≈ 1.62, where σ is the population standard deviation.
Next calculate the z-score, which is the distance from the sample mean to the population mean in units of the standard error: z = (96 − 100)/1.62 ≈ −2.47.
In this example, we treat the population mean and variance as known, which would be appropriate if all students in the region were tested. When population parameters are unknown, a Student's t-test should be conducted instead.
The classroom mean score is 96, which is −2.47 standard error units from the population mean of 100. Looking up the z-score in a table of the standard normal distribution cumulative probability, we find that the probability of observing a standard normal value below −2.47 is approximately 0.5 − 0.4932 = 0.0068. This is the one-sided p-value for the null hypothesis that the 55 students are comparable to a simple random sample from the population of all test-takers. The two-sided p-value is approximately 0.014 (twice the one-sided p-value).
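These numbers can be checked with a few lines of Python (a sketch using only the standard library and the figures given above; Φ is computed from the error function):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 100.0, 12.0   # regional mean and standard deviation
n, xbar = 55, 96.0        # sample size and sample mean

se = sigma / math.sqrt(n)        # standard error of the mean, ~1.62
z = (xbar - mu) / se             # ~ -2.47
p_one_sided = phi(z)             # ~ 0.0068
p_two_sided = 2 * phi(-abs(z))   # ~ 0.014

print(f"SE = {se:.3f}, z = {z:.2f}, one-sided p = {p_one_sided:.4f}, two-sided p = {p_two_sided:.4f}")
```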
Another way of stating things is that with probability 1 − 0.014 = 0.986, a simple random sample of 55 students would have a mean test score within 4 units of the population mean. We could also say that with 98.6% confidence we reject the null hypothesis that the 55 test takers are comparable to a simple random sample from the population of test-takers.
The Z-test tells us that the 55 students of interest have an unusually low mean test score compared to most simple random samples of similar size from the population of test-takers. A deficiency of this analysis is that it does not consider whether the effect size of 4 points is meaningful. If instead of a classroom, we considered a subregion containing 900 students whose mean score was 99, nearly the same z-score and p-value would be observed. This shows that if the sample size is large enough, very small differences from the null value can be highly statistically significant. See statistical hypothesis testing for further discussion of this issue.
Location tests are the most familiar Z-tests. Another class of Z-tests arises in maximum likelihood estimation of the parameters in a parametric statistical model. Maximum likelihood estimates are approximately normal under certain conditions, and their asymptotic variance can be calculated in terms of the Fisher information. The maximum likelihood estimate divided by its standard error can be used as a test statistic for the null hypothesis that the population value of the parameter equals zero. More generally, if θ̂ is the maximum likelihood estimate of a parameter θ, and θ0 is the value of θ under the null hypothesis, then
Z = (θ̂ − θ0) / SE(θ̂) can be used as a Z-test statistic.
When using a Z-test for maximum likelihood estimates, it is important to be aware that the normal approximation may be poor if the sample size is not sufficiently large. Although there is no simple, universal rule stating how large the sample size must be to use a Z-test, simulation can give a good idea as to whether a Z-test is appropriate in a given situation.
Z-tests are employed whenever it can be argued that a test statistic follows a normal distribution under the null hypothesis of interest. Many non-parametric test statistics, such as U statistics, are approximately normal for large enough sample sizes, and hence are often performed as Z-tests.
In probability theory, a normal distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)).
In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range.
In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models based on the ratio of their likelihoods, specifically one found by maximization over the entire parameter space and another found after imposing some constraint. If the constraint is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero.
In probability and statistics, Student's t-distribution is any member of a family of continuous probability distributions that arise when estimating the mean of a normally distributed population in situations where the sample size is small and the population's standard deviation is unknown. It was developed by English statistician William Sealy Gosset under the pseudonym "Student".
In probability theory and statistics, the chi-squared distribution with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. The chi-squared distribution is a special case of the gamma distribution and is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in construction of confidence intervals. This distribution is sometimes called the central chi-squared distribution, a special case of the more general noncentral chi-squared distribution.
The statistical power of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis when a specific alternative hypothesis is true. It is commonly denoted by 1 − β, and represents the chances of a "true positive" detection conditional on the actual existence of an effect to detect. Statistical power ranges from 0 to 1, and as the power of a test increases, the probability of making a type II error by wrongly failing to reject the null hypothesis decreases.
In statistics, an effect size is a number measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of a parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value. Examples of effect sizes include the correlation between two variables, the regression coefficient in a regression, the mean difference, or the risk of a particular event happening. Effect sizes complement statistical hypothesis testing, and play an important role in power analyses, sample size planning, and in meta-analyses. The cluster of data-analysis methods concerning effect sizes is referred to as estimation statistics.
In statistical inference, specifically predictive inference, a prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.
The t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis.
In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one.
Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power. In complicated studies there may be several different sample sizes: for example, in a stratified survey there would be different sizes for each stratum. In a census, data is sought for an entire population, hence the intended sample size is equal to the population. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group.
The following is a glossary of terms used in the mathematical sciences statistics and probability.
The noncentral t-distribution generalizes Student's t-distribution using a noncentrality parameter. Whereas the central probability distribution describes how a test statistic t is distributed when the difference tested is null, the noncentral distribution describes how t is distributed when the null is false. This leads to its use in statistics, especially calculating statistical power. The noncentral t-distribution is also known as the singly noncentral t-distribution, and in addition to its primary use in statistical inference, is also used in robust modeling for data.
In statistics, a pivotal quantity or pivot is a function of observations and unobservable parameters such that the function's probability distribution does not depend on the unknown parameters. A pivot quantity need not be a statistic—the function and its value can depend on the parameters of the model, but its distribution must not. If it is a statistic, then it is known as an ancillary statistic.
Bootstrapping is any test or metric that uses random sampling with replacement, and falls under the broader class of resampling methods. Bootstrapping assigns measures of accuracy to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods.
In statistics, the 68–95–99.7 rule, also known as the empirical rule, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively.
Tukey's range test, also known as Tukey's test, Tukey method, Tukey's honest significance test, or Tukey's HSDtest, is a single-step multiple comparison procedure and statistical test. It can be used to find means that are significantly different from each other.
In statistics, the t-statistic is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. It is used in hypothesis testing via Student's t-test. The t-statistic is used in a t-test to determine whether to support or reject the null hypothesis. It is very similar to the Z-score but with the difference that t-statistic is used when the sample size is small or the population standard deviation is unknown. For example, the t-statistic is used in estimating the population mean from a sampling distribution of sample means if the population standard deviation is unknown. It is also used along with p-value when running hypothesis tests where the p-value tells us what the odds are of the results to have happened.
In the comparison of various statistical procedures, efficiency is a measure of quality of an estimator, of an experimental design, or of a hypothesis testing procedure. Essentially, a more efficient estimator, experiment, or test needs fewer observations than a less efficient one to achieve a given performance. This article primarily deals with efficiency of estimators.
In statistics and probability theory, the nonparametric skew is a statistic occasionally used with random variables that take real values. It is a measure of the skewness of a random variable's distribution—that is, the distribution's tendency to "lean" to one side or the other of the mean. Its calculation does not require any knowledge of the form of the underlying distribution—hence the name nonparametric. It has some desirable properties: it is zero for any symmetric distribution; it is unaffected by a scale shift; and it reveals either left- or right-skewness equally well. In some statistical samples it has been shown to be less powerful than the usual measures of skewness in detecting departures of the population from normality. |
Calculating variance allows you to measure how far a set of numbers is spread out. Variance is one of the descriptors of probability distribution, and it describes how far numbers lie from the mean. Variance is often used in conjunction with standard deviation, which is the square root of the variance. If you want to know how to calculate the variance of a set of data points, just follow these steps.
Help Calculating Variance
1. Write down the formula for calculating variance. The formula for measuring an unbiased estimate of the population variance from a fixed sample of n observations is the following: s² = Σ[(xi − x̅)²] / (n − 1). The formula for calculating the variance in an entire population is the same as this one except the denominator is n, not n − 1, but it should not be used any time you are working with a finite sample of observations. Here's what the parts of the formula for calculating variance mean:
- s² = Variance
- Σ = Summation, which means the sum of every term in the equation after the summation sign.
- xi = Sample observation. This represents every term in the set.
- x̅ = The mean. This represents the average of all the numbers in the set.
- n = The sample size. You can think of this as the number of terms in the set.
2. Calculate the sum of the terms. First, create a chart that has a column for observations (terms), the mean (x̅), the mean subtracted from the terms (xi − x̅) and then the square of these terms [(xi − x̅)²]. After you've made the chart and placed all of the terms in the first column, simply add up all of the numbers in the set. Let's say you're working with the following numbers: 17, 15, 23, 7, 9, 13. Just add them up: 17 + 15 + 23 + 7 + 9 + 13 = 84.
3. Calculate the mean of the terms. To find the mean of any set of terms, simply add up the terms and divide the result by the number of terms. In this case, you already know that the sum of the terms is 84. Since there are 6 terms, just divide 84 by 6 to find the mean. 84/6 = 14. Write "14" all the way down the column for the mean.
4. Subtract the mean from each term. To fill the third column, simply take each term from the sample observations and subtract the sample mean, 14, from it. You can check your work by adding up all of the results and confirming that they add up to zero. Here's how to subtract the average from each sample observation:
- 17 - 14 = 3
- 15 - 14 = 1
- 23 - 14 = 9
- 7 - 14 = -7
- 9 - 14 = -5
- 13 - 14 = -1
5. Square each result. Now that you've subtracted the average from each sample observation, simply square each result and write the answer in the fourth column. Remember that all of your results will be positive. Here's how to do it:
- 3² = 9
- 1² = 1
- 9² = 81
- (-7)² = 49
- (-5)² = 25
- (-1)² = 1
6. Calculate the sum of the squared terms. Now simply add up all of the new terms. 9 + 1 + 81 + 49 + 25 + 1 = 166
7. Substitute the values into the original equation. Just plug the values into the original equation, remembering that "n" represents the number of data points.
- s² = 166/(6 − 1)
8. Solve. Simply divide 166 by 5. The result is 33.2. If you'd like to find the standard deviation, simply take the square root of 33.2: √33.2 ≈ 5.76. Now you can interpret this value in a larger context. Usually, the variances of two sets of data are compared, and the lower number indicates less variation within that data set.
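The same calculation can be verified with Python's standard library (a small sketch using the six observations from this example; statistics.variance uses the n − 1 denominator shown above):

```python
import statistics

data = [17, 15, 23, 7, 9, 13]

mean = statistics.mean(data)                 # 14
sample_variance = statistics.variance(data)  # sum of squared deviations / (n - 1) = 166/5 = 33.2
sample_stdev = statistics.stdev(data)        # sqrt(33.2) ~ 5.76

print(mean, sample_variance, sample_stdev)
```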
- Since it is difficult to interpret the variance, this value is usually only calculated as a start in calculating the standard deviation.
Read the assigned reading from the chapter. Then choose ONE of the questions below to answer. Answer the question you chose in a response that is a minimum of 1-2 paragraphs.
Be sure to explain your answers and give reasons for your views. You should cite the textbook and use brief quotations and summaries from the textbook in your response. Do NOT use any other sources besides the textbook.
5.1 OVERVIEW: THE FREE WILL PROBLEM
Few things in life are more valuable to us than freedom. We want it, we demand it, we say we cannot live without it. We yearn for and expect social or political freedom, the freedom to go where we want, say what we please, and do as we may within broad legal and social limits. But we also want—and usually assume we have—a more profound kind of freedom, what philosophers call free will. This type of freedom is the power of self-determination: If we possess it, then at least some of our choices are not decided for us or forced upon us but are up to us. If we don't possess it, our social and political freedoms would seem to be considerably less valuable. If our actions are not our own because, say, someone has brainwashed or drugged us to control how we vote, then being free to vote would seem to be an empty liberty. So the central question in free will debates is whether we in fact have this more fundamental form of freedom. The question arises because, as in many other issues in philosophy, two of our basic beliefs about ourselves and the world seem in conflict. On one hand, we tend to think we have free will in the sense just described. On the other, we also usually assume that every event has a cause. Or, as philosophers would say, we accept determinism, the doctrine that every event is determined or necessitated by preceding events and the laws of nature. Determinism says that all events—including our choices and actions—are produced inexorably by previous events, which are caused by still earlier events, which are caused by still others, the chain of causes leading back into the indefinite past. Since every cause always results in the same effect, the future can unfold in only one way. Everything that happens must happen in an unalterable, preset fashion. But if determinism is true, how can any choices we make or any actions we perform be up to us? How can we do anything "of our own free will"? If determinism is true, your reading this book right now was caused by prior events such as certain states in your brain, body, and environment, and these events were in turn caused by still others, and the causal sequence must stretch back countless years to a time before you existed. You had no say in the movement or direction of this causal train, no control over how it went. Your reading this book right now could not have turned out any other way. You could not have done otherwise. How, then, could your actions be free?

Figure 5.1: Are all of our actions produced by a chain of events that stretches back into the indefinite past?
Determinism is the doctrine that every event is determined by preceding events and the laws of nature.
"You must believe in free will; there is no choice." —Isaac Bashevis Singer

From this conflict comes the problem of free will—the challenge of reconciling determinism with our intuitions or ideas about personal freedom. The problem seems all the sharper because both horns of this apparent dilemma are endorsed by common sense. In our lives we recognize the work of deterministic forces: Every cause does seem to regularly and lawfully produce an effect, and every effect seems to have a cause. Baseballs obey gravity, bread nourishes, fire burns, electronics work, human bodies are shaped by genetics, and human personalities are molded by experience. All this is reinforced by science, which tirelessly traces the universe's myriad links between cause and effect.
Our everyday experience also suggests that sometimes it is indeed up to us how we choose and act, and that we could have chosen and acted otherwise than we did. But who cares whether all our actions are determined by forces beyond our control? Well, we do. Most of us are unsettled by the thought that our choices and actions may not be our own, that everything we do is inevitable, preset, or necessary. This fear of a predetermined existence is reflected in movies, books, and popular culture. In the films Gattaca, A Clockwork Orange, and The Truman Show, deterministic forces in various guises are part of what makes these movies so disturbing. The novel Brave New World by Aldous Huxley shows us a futuristic society of contented citizens who are happy with their lot in life—but only because social engineers manipulate and dampen the people's desires with a mind-numbing drug called soma. B. F. Skinner's novel Walden II depicts another community of happy folk who want only what they can readily acquire or achieve. They are perfectly satisfied with their lives because they have been programmed through lifelong behavioral conditioning (the kind that Skinner himself advocated) to desire only what is attainable. Skinner portrays his vision as a utopia, but many think it is a dystopia in which social freedom is a reality but free will is nonexistent. People also care about the issue of free will because upon it hang momentous questions about moral responsibility, legal punishment, praise and blame, and social and political control. If our actions are not free in any important sense, it is difficult to see how we could be held morally responsible for what we do. If our actions are fully determined, how could we be legitimately subjected to punishment, praise, or blame for our actions? Punishing us for something we did would be like penalizing us for having red hair or brown eyes. As you might expect, many who reject the notion of free will think that punishing people for crimes makes no sense. Instead of punishing criminals, they say, we should try to modify their behavior. Instead of imprisoning or executing them, we should train them through behavioral conditioning and other techniques to be law-abiding. The issues of determinism and free will often come up in court when someone is being tried for a serious crime such as rape or murder. The defense attorney argues that the defendant is not responsible for his actions, for his character was warped by abusive parents, an impoverished or brutal environment, or bad genes. His life was programmed—determined—to turn out a certain way, and he had no say in any of it. The prosecutor insists that despite the influence of these factors, the defendant deserves most of the blame for his crime because ultimately he acted freely. The jury then must decide where determinism ends and free will begins.

The problem of free will is the challenge of reconciling determinism with our intuitions or ideas about personal freedom.
1. Are you bothered by the thought of a rigidly determined existence? Does the idea that all your actions are determined disturb you—or reassure you?
"Men are deceived if they think themselves free, an opinion which consists only in this, that they are conscious of their actions and ignorant of the causes by which they are determined." —Baruch Spinoza

Philosophers both ancient and modern have proposed three solutions to the free will problem.
The first is known as hard determinism, the view that no one has free will. Hard determinists accept these three propositions: (1) Determinism is true, (2) determinism and free will are incompatible, and (3) we never act freely. Proposition 2 is a statement of the doctrine of incompatibilism: Determinism and free will are incompatible doctrines; they both cannot be true. That is, if every event is determined, there can be no free will; if free will exists, determinism cannot be actual. Hard determinists argue that given the truth of determinism and the truth of incompatibilism, the assertion of free will must be false. To support Proposition 1, determinists typically appeal to the deliverances of science. They point out that scientific research in many fields, from astrophysics to zoology, is forever uncovering causal connections, seeming to confirm a deterministic picture of the world. Scientists now know that human behavior is shaped to a remarkable degree by heredity, the brain's biochemistry, behavioral conditioning, and evolution. All these facts reinforce the notion that human choices and actions are brought about deterministically. Strangely enough, science—specifically quantum physics—has also provided evidence that determinism is false. Or, to put it another way, some scientific evidence supports indeterminism, the view that not every event is determined by preceding events and the laws of nature. The standard view among quantum physicists is that many events on the quantum level (the domain of subatomic particles) are uncaused. Among philosophers, however, debate still continues over what this quantum indeterminacy means for the problem of free will.

Figure 5.2: A teenager on death row, 1986. To many, if determinism is true, criminals should not be punished, just trained. Does this way of dealing with criminals make sense to you?
Incompatibilism is the view that if determinism is true, no one can act freely.
Indeterminism is the view that not every event is determined by preceding events and the laws of nature.
Hard determinism is the view that free will does not exist, that no one acts freely.
Compatibilism is the view that although determinism is true, our actions can still be free.

The second proposed solution to the free will problem is compatibilism, or soft determinism. Compatibilists believe that (1) determinism is true, (2) determinism and free will are compatible, and (3) we sometimes act freely. So compatibilism claims that although determinism is true, our actions can still be free because determinism and free will are not in conflict (incompatibilism is false). It is possible for every event to be caused by preceding events plus the laws of nature—and for us to still act freely. But how is such a thing possible? Traditional compatibilism holds that your action is free if (1) it is caused by your own choices or desires and (2) it is not impeded or constrained by anything. You act in complete freedom when you give money to a charity—if you really do want to give your money and if nothing prevents you from doing so (for example, no physical obstacles stand in your way, no one is coercing you, and no inner compulsion restrains you). You act freely when you are able to do what you desire to do; you do not act freely when you are not able to do what you desire to do. This would be true, according to traditional compatibilism, even if your desires were themselves determined by forces beyond your control.
Your will itself may be determined by preceding events and the laws of nature, but if you are able to do what you will, you act freely. In this way, says the compatibilist, free will is compatible with determinism. But some critics reject the compatibilist's notion of freedom. They maintain that merely being able to act according to your desires without constraints is not real freedom if your desires are determined for you in the first place. The third answer to the problem of free will is libertarianism (not to be confused with the political doctrine of the same name). It asserts that some actions are free, for they are ultimately caused, or controlled, by the person, or agent. So libertarians believe that (1) determinism is false (indeterminism is true), (2) determinism and free will are incompatible, and (3) we sometimes act freely. They hold that indeterminism is necessary for free will, that free actions can occur only in a world where not all events are determined by prior events and natural laws. Note how libertarianism differs from the other two positions on free will. Both libertarians and hard determinists accept incompatibilism, but they take opposing views on determinism and free action. And, contrary to compatibilists, libertarians reject determinism and embrace incompatibilism.

Figure 5.3: Physicists think that some events on the quantum level are uncaused. Does this mean that some events on the macro level (the level inhabited by rocks, stars, and people) are also uncaused? If so, would this indeterminism give us free will?
"Life is like a game of cards. The hand you are dealt is determinism; the way you play it is free will." —Jawaharlal Nehru
2. How would a personal belief in determinism affect your view of crime and punishment? Do you think that people are generally responsible for their crimes, or are they not responsible due to deterministic forces beyond their control?
"One of the annoying things about believing in free will and individual responsibility is the difficulty of finding somebody to blame your problems on. And when you do find somebody, it's remarkable how often his picture turns up on your driver's license." —P. J. O'Rourke
3. At this point in your reading, which doctrine are you more sympathetic to—hard determinism, compatibilism, or libertarianism?
Libertarianism (not political) is the view that some actions are free, for they are caused or controlled by the person or agent.

Like the other free will theories, libertarianism has its detractors. For example, some have objected that it is incoherent, mysterious, or both. They ask, How can an agent cause event A when there is no previous event B in the agent that causes event A, and no prior event C that causes event B, and so on? Because libertarians accept indeterminism, they are committed to denying such a causal sequence. But explaining how free will is possible while rejecting deterministic causal chains has been a challenge for libertarians, and some of their solutions have provoked considerable skepticism.
5.2 DETERMINISM AND INDETERMINISM
WRITING TO UNDERSTAND: CRITIQUING PHILOSOPHICAL VIEWS (SECTION 5.1, continued)
3. Do you believe that the compatibilist's concept of free will is plausible? If you were free to act on any of your desires but your desires were controlled by God, would you have free will?
4. Suppose hard determinism were true. Would that mean we are not responsible for our actions? If hard determinism did make responsibility impossible, would that fact show that the theory is false?
5. Which theory of free will seems to agree best with your own experience of making choices and taking action?

The hard determinist believes that determinism is a fact about the universe and that incompatibilism is true (that no one can act freely if determinism is true). From these two claims it is a short step to the conclusion that no one acts freely (that libertarianism is false). This line of reasoning (or something close to it) has been around since ancient times, but since the rise of modern science in the seventeenth century it has seemed to some to be much more credible because determinism itself has seemed more credible. Baron d'Holbach (1723–1789), a prominent philosopher of the French Enlightenment, has given us one of the clearest and boldest statements of the hard determinist position:

Baron d'Holbach, "Of the System of Man's Free Agency"
It has been already sufficiently proved that the soul is nothing more than the body considered relatively to some of its functions more concealed than others: it has been shown that this soul, even when it shall be supposed immaterial, is continually modified conjointly with the body, is submitted to all its motion, and that without this it would remain inert and dead; that, consequently, it is subjected to the influence of those material and physical causes which give impulse to the body; of which the mode of existence, whether habitual or transitory, depends upon the material elements by which it is surrounded, that form its texture, constitute its temperament, enter into it by means of the aliments, and penetrate it by their subtility. The faculties which are called intellectual, and those qualities which are styled moral, have been explained in a manner purely physical and natural. In the last place it has been demonstrated that all the ideas, all the systems, all the affections, all the opinions, whether true or false, which man forms to himself, are to be attributed to his physical and material senses. Thus man is a being purely physical; in whatever manner he is considered, he is connected to universal nature, and submitted to the necessary and immutable laws that she imposes on all the beings she contains, according to their peculiar essences or to the respective properties with which, without consulting them, she endows each particular species. Man's life is a line that nature commands him to describe upon the surface of the earth, without his ever being able to swerve from it, even for an instant. He is born without his own consent; his organization does in nowise depend upon himself; his ideas come to him involuntarily; his habits are in the power of those who cause him to contract them; he is unceasingly modified by causes, whether visible or concealed, over which he has no control, which necessarily regulate his mode of existence, give the hue to his way of thinking, and determine his manner of acting. He is good or bad, happy or miserable, wise or foolish, reasonable or irrational, without his will being for any thing in these various states. Nevertheless, in despite of the shackles by which he is bound, it is pretended he is a free agent, or that independent of the causes by which he is moved, he determines his own will, and regulates his own condition.1

Figure 5.4: Baron d'Holbach (1723–1789).
"A man is the origin of his action." —Aristotle

To d'Holbach and other Enlightenment thinkers, the theories and discoveries of science were robust proof that every event was determined by preceding events and natural laws. They saw the universe as a grand, intricate, physical machine, with every part—including human beings—predetermined to operate in foreordained fashion. In such a universe, they insisted, free actions are impossible. Free will is an illusion. We think we are free only because we are ignorant of the forces that bind us. Since d'Holbach's day, many others have taken the findings of science to be undeniable evidence for universal determinism. After all, science has had—and continues to have—remarkable success in explaining and predicting all sorts of natural phenomena, including the choices and actions of human beings. In light of this success, many people believe that the truth of determinism is simply obvious. Nowadays, most who accept determinism are compatibilists, but a few of them see no reason to think free will is compatible with determinism, so they take the hard determinist view.

Yet in an ironic turn of scientific history, reasons to doubt determinism have come from science itself. Quantum physics provides a surprising counterexample to the notion that every event has a cause. The most widely accepted view among quantum physicists is that at the subatomic level, some events (such as the decay of radioactive particles) are random and therefore uncaused. If so, it is not the case that every event is determined by preceding events and the laws of nature, and the central premise in the argument for hard determinism is unfounded. Some hard determinists maintain that these uncaused events are mostly confined to the subatomic realm and do not significantly affect the larger world of human actions. This suggests, they say, that for all practical purposes, determinism is true. But others reject this view, contending that quantum indeterminism isn't as restricted to the quantum level as some assume, and that therefore causal indeterminism could arise anywhere. Most indeterminists do not deny that many, perhaps most, of our actions are caused by prior events; they concede that much of human behavior may be causally determined. But they reject the notion that previous events cause all our actions; they think that claim is a sweeping generalization that science has yet to demonstrate.

4. Do you believe both that every event has a cause and that free actions are possible? If so, are these beliefs compatible?

Long before the advent of quantum physics, there were thinkers who posited indeterminism in the world and argued that it opened the way for humans to have free will. The "atomist" philosophers of ancient Greece theorized that the world was composed of bits of matter called atoms moving in rigidly determined fashion—except that these objects sometimes "swerved" randomly to allow for undetermined, free actions in humans. Centuries later the distinguished American philosopher William James (1842–1910) argued that indeterminism is a feature of the universe that permits "alternative futures" and the possibility of freedom.
It allows some things to happen by chance. Most importantly, James says, it allows free actions, for free actions are chance happenings. He explains his view like this:

William James, "The Dilemma of Determinism"
What does determinism profess? . . . It professes that those parts of the universe already laid down absolutely appoint and decree what the other parts shall be. The future has no ambiguous possibilities hidden in its womb: the part we call the present is compatible with only one totality. Any other future complement than the one fixed from eternity is impossible. The whole is in each and every part, and welds it with the rest into an absolute unity, an iron block, in which there can be no equivocation or shadow of turning. . . . Indeterminism, on the contrary, says that the parts have a certain amount of loose play on one another, so that the laying down of one of them does not necessarily determine what the others shall be. It admits that possibilities may be in excess of actualities, and that things not yet revealed to our knowledge may really in themselves be ambiguous. Of two alternative futures which we conceive, both may now be really possible; and the one become impossible only at the very moment when the other excludes it by becoming real itself. Indeterminism thus denies the world to be one unbending unit of fact. It says there is a certain ultimate pluralism in it; and, so saying, it corroborates our ordinary unsophisticated view of things. To that view, actualities seem to float in a wider sea of possibilities from out of which they are chosen; and, somewhere, indeterminism says, such possibilities exist, and form a part of truth. . . . Do not all the motives that assail us, all the futures that offer themselves to our choice, spring equally from the soil of the past; and would not either one of them, whether realized through chance or through necessity, the moment it was realized, seem to us to fit that past, and in the completest and most continuous manner to interdigitate with the phenomena already there?2

James holds that a free choice is not determined by previous events; it is uncaused. There is more than one way that the choice can go, and how it goes is a matter of chance. But even though the choice comes about by chance, it will seem to follow from previous events just as a determined choice would. Many have rejected this kind of argument, including those who believe that indeterminism is a prerequisite for free will. The difficulty, they say, is that indeterminism alone does not make for free and responsible actions. Libertarians, for example, agree that indeterminism is necessary for free will, that free actions can occur only in a world where not all actions are determined by prior events and natural laws. But they also point out that if what an agent does happens by chance (that is, randomly), then she is not free to act or not act. What she does just happens, and she has nothing to do with it. Her actions are not under her control and therefore are not really her actions. In fact, they would not be actions at all. An action is an event intended to happen by the agent, but if her intentions have nothing to do with it (because it is random), it is not really an action and is definitely not free. So for libertarians, indeterminism by itself is not enough for free will, which is why they take pains to explain the role of the agent in free actions. The conclusion libertarians draw from all this is that both determinism and indeterminism can be enemies of free will. Determinism coupled with incompatibilism rules out free actions, while indeterminism by itself reduces them to mere chance.

Philosophers At Work: William James
William James (1842–1910) is one of America's most influential philosophers, leaving a lasting impression on debates in epistemology, philosophy of religion, ethics, and free will. He was born in New York City and grew up in an intellectually stimulating family. His father was a philosopher of religion, and his brother Henry was the famous novelist. He studied abroad, earned a Harvard degree in medicine, and spent most of his career lecturing and writing in psychology and philosophy. His reputation as the greatest psychologist of America and Europe was assured by the publication of his voluminous work The Principles of Psychology (1890). After that came numerous philosophical essays and books, including The Will to Believe and Other Essays in Popular Philosophy (1897); The Varieties of Religious Experience (1902); Pragmatism: A New Name for Some Old Ways of Thinking (1907); and The Meaning of Truth (1909). James is one of the founders of the philosophy of pragmatism, a doctrine about meaning and truth. James is famous for articulating a pragmatic theory of truth, which says that the truth of a statement is a matter of its utility. For James, utility may mean either success in predicting events or promotion of beneficial feelings and actions. Through pragmatism, James came to the conclusion that religion is a legitimate and important aspect of life because we can plausibly accept religious claims on grounds of their utility, regardless of their lack of evidence. Ironically, James, the famous psychologist, was given to psychosomatic illness and clinical depression. Once while wrestling with the problem of free will, he fell into a devastatingly dark mood and did not recover until he had found a solution. He concluded that despite determinism, we can have free will because chance events make room for free actions.
Figure 5.5: William James (1842–1910), philosopher, psychologist, pragmatist, and believer in free will.
"A man can do what he wants, but not want what he wants." —Arthur Schopenhauer
5.3 COMPATIBILISM
The great appeal of traditional compatibilism is that it provides a plausible way to reconcile free will and determinism. It says that determinism is true and so is the commonsense belief that we have free will. Science is squared with our presumption of freedom, and incompatibilism is unfounded. This reconciliation project has been—and still is—attractive to many serious thinkers, including the ancient Greek Stoics, some English-speaking philosophers of previous centuries, and numerous contemporary proponents. Among the greatest of these are Thomas Hobbes (1588–1679), John Locke (1632–1704), David Hume (1711–1776), and John Stuart Mill (1806–1873). Locke sums up traditional compatibilism like this:

John Locke, An Essay Concerning Human Understanding
But though the preference of the Mind be always determined . . . yet the Person who has the power, in which alone consists the liberty to act, or not to act, according to such preference, is nevertheless free; such determination abridges not that Power.3

"You say: I am not free. But I have raised and lowered my arm. Everyone understands that this illogical answer is an irrefutable proof of freedom." —Leo Tolstoy

Compatibilists do not deny that all our wants or desires are caused by preceding events. In fact, they hold that determinism is necessary for free will; an undetermined choice, they say, would be random and uncontrolled by the agent. They insist that even though our desires are determined, we can still act freely as long as (1) we have the power to do what we want, and (2) nothing is preventing us from doing it (for example, no one is restraining or coercing us). Both compatibilists and most of their critics agree that free actions (and moral responsibility) require alternative possibilities, or a "could do otherwise" sort of freedom. If we are free—if our actions are truly up to us—we must be able to act in one of several different ways, to have more than one option to choose from. We must have the wherewithal to do otherwise than what we actually do. But if we have only one choice open to us, if all other possibilities are closed, then our actions are not up to us. Incompatibilists say that this is precisely what would happen if determinism were true. But compatibilists assert that we can still do otherwise even if determinism reigns in the world. But how? Compatibilists can make this claim by assigning a conditional, or hypothetical, meaning to the notion of "could do otherwise." To them, "could do otherwise" means that you would have been able to do something different if you had wanted to. You are free in the sense that if you had desired to do something different than what you actually did, nothing would have prevented you from doing it. If you had wanted a piece of cake instead of the slice of pie that you actually got, and nothing would have prevented you from getting cake, then your action was free. Whatever you finally choose is, of course, determined by previous events. But you would have been able to choose differently if history had been different. Here is Walter Stace (1886–1967), a twentieth-century compatibilist, arguing the compatibilist's case by trying to ascertain what we ordinarily mean by "free acts":

W. T. Stace, Religion and the Modern Mind
The only reasonable view is that all human actions, both those which are freely done and those which are not, are either wholly determined by causes, or at least as much determined as other events in nature. It may be true, as the physicists tell us, that nature is not as deterministic as was once thought. But whatever degree of determinism prevails in the world, human actions appear to be as much determined as anything else. And if this is so, it cannot be the case that what distinguishes actions freely chosen from those which are not free is that the latter are determined by causes while the former are not. Therefore, being uncaused or being undetermined by causes, must be an incorrect definition of free will. What, then, is the difference between acts which are freely done and those which are not? What is the characteristic which is present [in all free actions] and absent from [all unfree actions]? Is it not obvious that, although both sets

6. Is the compatibilist's definition of "could do otherwise" plausible? Or is it, as James called it, a "wretched subterfuge"?
7. Does it matter to you whether you have free will? Would your behavior change if you believed (or didn't believe) that all your actions were determined by forces beyond your control?
8. Are free acts, as Stace says, "those whose immediate causes are psychological states in the agent"? Would such acts still be free if the "psychological states" were secretly controlled by someone else through hypnosis?

Philosophy Now: Does Belief in Free Will Matter?
Your belief or nonbelief in free will doesn't affect your behavior; your acceptance or rejection of the doctrine doesn't matter to how you live your life. Is this true? Is it true that your belief in free will is inconsequential? Some philosophers, as well as many nonphilosophers, think so. But some scientific research suggests otherwise. In studies conducted by Kathleen D. Vohs and Jonathan W. Schooler, college students who were encouraged to doubt free will were more likely to cheat than students who were not given that encouragement. This is how the researchers sum up the results: In Experiment 1, particip
An antibody (AB), also known as an immunoglobulin (Ig), is a large Y-shape protein produced by plasma cells that is used by the immune system to identify and neutralize foreign objects such as bacteria and viruses. The antibody recognizes a unique part of the foreign target, called an antigen. Each tip of the "Y" of an antibody contains a paratope (a structure analogous to a lock) that is specific for one particular epitope (similarly analogous to a key) on an antigen, allowing these two structures to bind together with precision. Using this binding mechanism, an antibody can tag a microbe or an infected cell for attack by other parts of the immune system, or can neutralize its target directly (for example, by blocking a part of a microbe that is essential for its invasion and survival). The production of antibodies is the main function of the humoral immune system.
Antibodies are secreted by a type of white blood cell called a plasma cell. Antibodies can occur in two physical forms, a soluble form that is secreted from the cell, and a membrane-bound form that is attached to the surface of a B cell and is referred to as the B cell receptor (BCR). The BCR is found only on the surface of B cells and facilitates the activation of these cells and their subsequent differentiation into either antibody factories called plasma cells or memory B cells that will survive in the body and remember that same antigen so the B cells can respond faster upon future exposure. In most cases, interaction of the B cell with a T helper cell is necessary to produce full activation of the B cell and, therefore, antibody generation following antigen binding. Soluble antibodies are released into the blood and tissue fluids, as well as many secretions to continue to survey for invading microorganisms.
Antibodies are glycoproteins belonging to the immunoglobulin superfamily; the terms antibody and immunoglobulin are often used interchangeably. Antibodies are typically made of basic structural units—each with two large heavy chains and two small light chains. There are several different types of antibody heavy chains, and several different kinds of antibodies, which are grouped into different isotypes based on which heavy chain they possess. Five different antibody isotypes are known in mammals, which perform different roles, and help direct the appropriate immune response for each different type of foreign object they encounter. For example, IgE is responsible for the allergic response, consisting of mast cell degranulation and histamine release. If an antigen such as a house dust mite particle binds to IgE, it can trigger an allergic asthmatic reaction.
Though the general structure of all antibodies is very similar, a small region at the tip of the protein is extremely variable, allowing millions of antibodies with slightly different tip structures, or antigen-binding sites, to exist. This region is known as the hypervariable region. Each of these variants can bind to a different antigen. This enormous diversity of antibodies allows the immune system to recognize an equally wide variety of antigens. The large and diverse population of antibodies is generated by random combinations of a set of gene segments that encode different antigen-binding sites (or paratopes), followed by random mutations in this area of the antibody gene, which create further diversity. Antibody genes also re-organize in a process called class switching that changes the base of the heavy chain to another, creating a different isotype of the antibody that retains the antigen-specific variable region. This allows a single antibody to be used by several different parts of the immune system.
- 1 Forms
- 2 Antibody-Antigen Interactions
- 3 Isotypes
- 4 Structure
- 5 Function
- 6 Immunoglobulin diversity
- 7 Medical applications
- 8 Research applications
- 9 Regulatory validation of monoclonal antibody products for human use
- 10 Structure prediction
- 11 History
- 12 See also
- 13 References
- 14 External links
The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors. These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences.
The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants.
Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific - for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities.
Antibodies can come in different varieties known as isotypes or classes. In placental mammals there are five antibody isotypes known as IgA, IgD, IgE, IgG, and IgM. They are each named with an "Ig" prefix that stands for immunoglobulin, another name for antibody, and differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table. The different suffixes of the antibody isotypes denote the different types of heavy chains the antibody contains, with each heavy chain class named alphabetically: α, γ, δ, ε, and μ. This gives rise to IgA, IgG, IgD, IgE, and IgM, respectively.
|Isotype||Subclasses||Description|
|IgA||2||Found in mucosal areas, such as the gut, respiratory tract and urogenital tract, and prevents colonization by pathogens. Also found in saliva, tears, and breast milk.|
|IgD||1||Functions mainly as an antigen receptor on B cells that have not been exposed to antigens. It has been shown to activate basophils and mast cells to produce antimicrobial factors.|
|IgE||1||Binds to allergens and triggers histamine release from mast cells and basophils, and is involved in allergy. Also protects against parasitic worms.|
|IgG||4||In its four forms, provides the majority of antibody-based immunity against invading pathogens. The only antibody capable of crossing the placenta to give passive immunity to the fetus.|
|IgM||1||Expressed on the surface of B cells (monomer) and in a secreted form (pentamer) with very high avidity. Eliminates pathogens in the early stages of B cell-mediated (humoral) immunity before there is sufficient IgG.|
The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell surface bound form. The B lymphocyte, in its mature ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both these immunoglobulin isotypes renders the B cell 'mature' and ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system.
|IgY||Found in birds and reptiles; present in serum and egg yolk.|
|IgW||Found in sharks and skates; related to mammalian IgD.|
Antibodies are heavy (~150 kDa) globular plasma proteins. They have sugar chains added to some of their amino acid residues. In other words, antibodies are glycoproteins. The basic functional unit of each antibody is an immunoglobulin (Ig) monomer (containing only one Ig unit); secreted antibodies can also be dimeric with two Ig units as with IgA, tetrameric with four Ig units like teleost fish IgM, or pentameric with five Ig units, like mammalian IgM.
The variable parts of an antibody are its V regions, and the constant part is its C region.
The Ig monomer is a "Y"-shaped molecule that consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds. Each chain is composed of structural domains called immunoglobulin domains. These domains contain about 70–110 amino acids and are classified into different categories (for example, variable or IgV, and constant or IgC) according to their size and function. They have a characteristic immunoglobulin fold in which two beta sheets create a "sandwich" shape, held together by interactions between conserved cysteines and other charged amino acids.
There are five types of mammalian Ig heavy chain denoted by the Greek letters: α, δ, ε, γ, and μ. The type of heavy chain present defines the class of antibody; these chains are found in IgA, IgD, IgE, IgG, and IgM antibodies, respectively. Distinct heavy chains differ in size and composition; α and γ contain approximately 450 amino acids, whereas μ and ε have approximately 550 amino acids.
In birds, the major serum antibody, also found in yolk, is called IgY. It is quite different from mammalian IgG. However, in some older literature and even on some commercial life sciences product websites it is still called "IgG", which is incorrect and can be confusing.
Each heavy chain has two regions, the constant region and the variable region. The constant region is identical in all antibodies of the same isotype, but differs in antibodies of different isotypes. Heavy chains γ, α and δ have a constant region composed of three tandem (in a line) Ig domains, and a hinge region for added flexibility; heavy chains μ and ε have a constant region composed of four immunoglobulin domains. The variable region of the heavy chain differs in antibodies produced by different B cells, but is the same for all antibodies produced by a single B cell or B cell clone. The variable region of each heavy chain is approximately 110 amino acids long and is composed of a single Ig domain.
In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). A light chain has two successive domains: one constant domain and one variable domain. The approximate length of a light chain is 211 to 217 amino acids. Each antibody contains two light chains that are always identical; only one type of light chain, κ or λ, is present per antibody in mammals. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei).
CDRs, Fv, Fab and Fc Regions
Some parts of an antibody have the same functions. The arms of the Y, for example, contain the sites that can bind to antigens (in general, identical) and, therefore, recognize specific foreign objects. This region of the antibody is called the Fab (fragment, antigen-binding) region. It is composed of one constant and one variable domain from each heavy and light chain of the antibody. The paratope is shaped at the amino terminal end of the antibody monomer by the variable domains from the heavy and light chains. The variable domain is also referred to as the FV region and is the most important region for binding to antigens. To be specific, variable loops of β-strands, three each on the light (VL) and heavy (VH) chains are responsible for binding to the antigen. These loops are referred to as the complementarity determining regions (CDRs). The structures of these CDRs have been clustered and classified by Chothia et al. and more recently by North et al. and Nikoloudis et al. In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes.
The base of the Y plays a role in modulating immune cell activity. This region is called the Fc (Fragment, crystallizable) region, and is composed of two heavy chains that contribute two or three constant domains depending on the class of the antibody. Thus, the Fc region ensures that each antibody generates an appropriate immune response for a given antigen, by binding to a specific class of Fc receptors, and other immune molecules, such as complement proteins. By doing this, it mediates different physiological effects including recognition of opsonized particles, lysis of cells, and degranulation of mast cells, basophils, and eosinophils.
In summary, whilst the Fab region of the antibody determines its antigen specificity, the Fc region of the antibody determines the antibody's class effect. Since only the constant domains of the heavy chains make up the Fc region of an antibody, the classes of heavy chain in antibodies determine their class effects. Possible classes of heavy chains in antibodies include alpha, gamma, delta, epsilon, and mu, and they define the antibody's isotypes IgA, G, D, E, and M, respectively. This implies that different isotypes of antibodies have different class effects because their different Fc regions bind and activate different types of receptors. Possible class effects of antibodies include: opsonisation, agglutination, haemolysis, complement activation, mast cell degranulation, and neutralisation (though this class effect may be mediated by the Fab region rather than the Fc region). It also implies that Fab-mediated effects are directed at microbes or toxins, whilst Fc-mediated effects are directed at effector cells or effector molecules (see below).
Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures.
At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: They prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway.
Activation of complement
Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis).
Activation of effector cells
To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region.
Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell: phagocytes will phagocytose, mast cells and neutrophils will degranulate, and natural killer cells will release cytokines and cytotoxic molecules, ultimately resulting in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) - this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens.
Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and generated in response to production of this sugar by bacteria contained in the human gut. Rejection of xenotransplantated organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue.
Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes.
The region (locus) of a chromosome that encodes an antibody is large and contains several distinct genes for each domain of the antibody—the locus containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability. This combination is called V(D)J recombination, discussed below.
Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. Interestingly, the rearrangement of several subgenes (i.e. V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences biology of B-cells.
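A rough sense of the scale of this combinatorial diversity can be conveyed with simple arithmetic. In the sketch below, only the figure of roughly 65 heavy-chain V genes comes from the text above; the remaining segment counts are assumed placeholder values chosen purely for illustration.

```python
# Back-of-the-envelope count of V(D)J combinations.
# Only the ~65 heavy-chain V genes is taken from the text; the other segment
# counts below are assumed, illustrative values rather than measured ones.
V_HEAVY, D_HEAVY, J_HEAVY = 65, 27, 6   # heavy chain: V x D x J segments (assumed D and J counts)
V_LIGHT, J_LIGHT = 40, 5                # light chain: V x J segments (assumed counts)

heavy_chain_combinations = V_HEAVY * D_HEAVY * J_HEAVY
light_chain_combinations = V_LIGHT * J_LIGHT
paired_receptors = heavy_chain_combinations * light_chain_combinations

print(f"heavy-chain combinations: {heavy_chain_combinations:,}")   # 10,530
print(f"light-chain combinations: {light_chain_combinations:,}")   # 200
print(f"heavy/light pairings:     {paired_receptors:,}")           # 2,106,000
```

Even a couple of million pairings falls far short of the roughly 10 billion distinct antibodies mentioned earlier, which is the point of the exercise: most of the remaining diversity comes from the random mutation and hypermutation steps described in the surrounding text, layered on top of the bare segment combinatorics.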
After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion); thus each B cell can produce antibodies containing only one kind of variable chain.
Somatic hypermutation and affinity maturation
Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains.
This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells.
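The select-and-mutate loop described above can be illustrated with a deliberately simple toy model. Nothing below is biologically calibrated—the mutation size, population size, and survival fraction are arbitrary assumptions—but it shows why repeated rounds of hypermutation plus selection steadily raise the average affinity of the surviving clones.

```python
# Toy model of affinity maturation: nudge each clone's affinity a little,
# keep the better half, and let survivors proliferate. All numbers are arbitrary.
import random

random.seed(0)
clones = [1.0] * 100   # starting affinities in arbitrary units

for generation in range(1, 11):
    # Somatic hypermutation: a small random change for each dividing clone.
    mutated = [affinity + random.gauss(0, 0.1) for affinity in clones]
    # Selection: higher-affinity clones receive survival signals, the rest die.
    survivors = sorted(mutated, reverse=True)[:len(mutated) // 2]
    # Proliferation restores the population from the surviving clones.
    clones = survivors * 2
    mean_affinity = sum(clones) / len(clones)
    print(f"generation {generation}: mean affinity {mean_affinity:.2f}")
```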
Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment.
Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype.
A group of antibodies can be called monovalent (or specific) if they have affinity for the same epitope, or for the same antigen (but potentially different epitopes on the molecule), or for the same strain of microorganism (but potentially different antigens on or in it). In contrast, a group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of polyvalent IgG. In contrast, monoclonal antibodies are monovalent for the same epitope.
Heterodimeric antibodies, which are also asymmetric antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the “knobs-into-holes” format. This format is specific to the heavy chain part of the constant region in antibodies. The “knobs” part is engineered by replacing a small amino acid with a larger one. It fits into the “hole”, which is engineered by replacing a large amino acid with a smaller one. What connects the “knobs” to the “holes” are the disulfide bonds between each chain. The “knobs-into-holes” shape facilitates antibody-dependent cell-mediated cytotoxicity. Single-chain variable fragments (scFv) consist of the variable domains of the heavy and light chains, connected by a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it specificity. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing it to bind two different antigens. The “knobs-into-holes” format enhances heterodimer formation but does not suppress homodimer formation.
To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinatorial design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms.
Heterodimeric antibodies have a greater range of shapes they can take, and the drugs attached to the arms do not have to be the same on each arm, allowing different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change in shape from the natural form should lead to decreased functionality.
Disease diagnosis and therapy
Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago and the B cells generating these specific antibodies have naturally decayed. In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of a patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis.
Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune-mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and for antibody screening in antenatal women. Practically, several immunodiagnostic methods based on detection of antigen-antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over-the-counter pregnancy tests.
Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer. Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies into the affected individual.
Rhesus factor, also known as Rhesus D (RhD) antigen, is an antigen found on red blood cells; individuals that are Rhesus-positive (Rh+) have this antigen on their red blood cells and individuals that are Rhesus-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn.
Rho(D) immune globulin antibodies are specific for human Rhesus D (RhD) antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when a Rhesus-negative mother has a Rhesus-positive fetus. Treatment of the mother with anti-RhD antibodies prior to and immediately after trauma and delivery destroys the fetal Rh antigen that has entered her system. Importantly, this occurs before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rhesus antigens of the current or subsequent babies. Rho(D) immune globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself.
Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse for large quantities of antibody. Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography.
In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cell express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISPOT techniques.
Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore able to be tested and verified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. A paper published in F1000 in 2014 provided researchers with a guide for reporting research antibody use.
Regulatory validation of monoclonal antibody products for human use
Production and testing:
Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include:
- The demonstration that the process is able to reliably produce antibody of good quality (the process should be validated)
- The efficiency of the antibody purification (all impurities and viruses must be eliminated)
- The characterization of the purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, etc.)
- Virus clearance studies
Before clinical trials, studies of product safety and feasibility have to be performed:
- Product safety testing: sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, etc. Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the dangerous potential of the product.
- Feasibility testing: pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small, specific patient population (in vitro or in vivo testing).
Preclinical studies:
- Testing cross-reactivity of the antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate should be determined with quick-frozen adult tissues) or in vivo (with appropriate animal models).
- Preclinical pharmacology and toxicity testing: preclinical safety testing of the antibody is designed to identify possible toxicities in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible.
- Animal toxicity studies: acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing
- Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities (AUC, pharmacodynamics, biodistribution, etc.), and to evaluate potential clinical effects
The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen binding affinity, and identifying the epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is a novel antibody Fv region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen.
The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims.
The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody has formal analogy to the word antitoxin and a similar concept to Immunkörper.
The study of antibodies began in 1890 when Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. His idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing, a process that he named opsonization.
In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies.
Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey identified IgD; and Kimishige Ishizaka and Teruko Ishizaka identified IgE as a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies.
- Antibody mimetic
- Anti-mitochondrial antibodies
- Anti-nuclear antibodies
- Humoral immunity
- Immunosuppressive drug
- Intravenous immunoglobulin (IVIg)
- Magnetic immunoassay
- Monoclonal antibody
- Neutralizing antibody
- Secondary antibodies
- Single-domain antibody
- Slope spectroscopy
- Charles Janeway (2001). Immunobiology. (5th ed.). Garland Publishing. ISBN 0-8153-3642-X. (electronic full text via NCBI Bookshelf).
- Litman GW, Rast JP, Shamblott MJ, Haire RN, Hulst M, Roess W, Litman RT, Hinds-Frey KR, Zilch A, Amemiya CT (January 1993). "Phylogenetic diversification of immunoglobulin genes and the antibody repertoire". Mol. Biol. Evol. 10 (1): 60–72. PMID 8450761.
- Pier GB, Lyczak JB, Wetzler LM (2004). Immunology, Infection, and Immunity. ASM Press. ISBN 1-55581-246-5.
- Borghesi L, Milcarek C (2006). "From B cell to plasma cell: regulation of V(D)J recombination and antibody secretion". Immunol. Res. 36 (1–3): 27–32. doi:10.1385/IR:36:1:27. PMID 17337763.
- Parker D (1993). "T cell-dependent B cell activation". Annu Rev Immunol 11 (1): 331–360. doi:10.1146/annurev.iy.11.040193.001555. PMID 8476565.
- Rhoades RA, Pflanzer RG (2002). Human Physiology (4th ed.). Thomson Learning. ISBN 0-534-42174-1.
- Market E, Papavasiliou FN (October 2003). "V(D)J recombination and the evolution of the adaptive immune system". PLoS Biol. 1 (1): E16. doi:10.1371/journal.pbio.0000016. PMC 212695. PMID 14551913.
- Diaz M, Casali P (2002). "Somatic immunoglobulin hypermutation". Curr Opin Immunol 14 (2): 235–240. doi:10.1016/S0952-7915(02)00327-8. PMID 11869898.
- Parker D (1993). "T cell-dependent B cell activation". Annu. Rev. Immunol. 11 (1): 331–360. doi:10.1146/annurev.iy.11.040193.001555. PMID 8476565.
- Wintrobe, Maxwell Myer (2004). John G. Greer, John Foerster, John N Lukens, George M Rodgers, Frixos Paraskevas, ed. Wintrobe's clinical hematology (11 ed.). Hagerstown, MD: Lippincott Williams & Wilkins. pp. 453–456. ISBN 978-0-7817-3650-3.
- Tolar P, Sohn HW, Pierce SK (February 2008). "Viewing the antigen-induced initiation of B-cell activation in living cells". Immunol. Rev. 221 (1): 64–76. doi:10.1111/j.1600-065X.2008.00583.x. PMID 18275475.
- Woof J, Burton D (2004). "Human antibody-Fc receptor interactions illuminated by crystal structures.". Nat Rev Immunol 4 (2): 89–99. doi:10.1038/nri1266. PMID 15040582.
- Underdown B, Schiff J (1986). "Immunoglobulin A: strategic defense initiative at the mucosal surface". Annu Rev Immunol 4 (1): 389–417. doi:10.1146/annurev.iy.04.040186.002133. PMID 3518747.
- Geisberger R, Lamers M, Achatz G (2006). "The riddle of the dual expression of IgM and IgD". Immunology 118 (4): 889–898. doi:10.1111/j.1365-2567.2006.02386.x. PMC 1782314. PMID 16895553.
- Chen K, Xu W, Wilson M, He B, Miller NW, Bengtén E, Edholm ES, Santini PA, Rath P, Chiu A, Cattalini M, Litzman J, B Bussel J, Huang B, Meini A, Riesbeck K, Cunningham-Rundles C, Plebani A, Cerutti A (2009). "Immunoglobulin D enhances immune surveillance by activating antimicrobial, proinflammatory and B cell-stimulating programs in basophils". Nature Immunology 10 (8): 889–898. doi:10.1038/ni.1748. PMC 2785232. PMID 19561614.
- Goding J (1978). "Allotypes of IgM and IgD receptors in the mouse: a probe for lymphocyte differentiation". Contemp Top Immunobiol 8: 203–43. doi:10.1007/978-1-4684-0922-2_7. ISBN 978-1-4684-0924-6. PMID 357078.
- Zhang, Cecilia; Du Pasquier, Louis; Hsu, Ellen (2013-08-09). "Shark IgW C region diversification through RNA processing and isotype switching". Journal of Immunology 191 (6): 3410–3418. doi:10.4049/jimmunol.1301257. PMID 23935192.
- Mattu T, Pleass R, Willis A, Kilian M, Wormald M, Lellouch A, Rudd P, Woof J, Dwek R (1998). "The glycosylation and structure of human serum IgA1, Fab, and Fc regions and the role of N-glycosylation on Fc alpha receptor interactions". J Biol Chem 273 (4): 2260–2272. doi:10.1074/jbc.273.4.2260. PMID 9442070.
- Roux K (1999). "Immunoglobulin structure and function as revealed by electron microscopy". Int Arch Allergy Immunol 120 (2): 85–99. doi:10.1159/000024226. PMID 10545762.
- Barclay A (2003). "Membrane proteins with immunoglobulin-like domains – a master superfamily of interaction molecules". Semin Immunol 15 (4): 215–223. doi:10.1016/S1044-5323(03)00047-2. PMID 14690046.
- Putnam FW, Liu YS, Low TL (1979). "Primary structure of a human IgA1 immunoglobulin. IV. Streptococcal IgA1 protease, digestion, Fab and Fc fragments, and the complete amino acid sequence of the alpha 1 heavy chain". J Biol Chem 254 (8): 2865–74. PMID 107164.
- Al-Lazikani B, Lesk AM, Chothia C (1997). "Standard conformations for the canonical structures of immunoglobulins". J Mol Biol 273 (4): 927–948. doi:10.1006/jmbi.1997.1354. PMID 9367782.
- North B, Lehmann A, Dunbrack RL (2010). "A new clustering of antibody CDR loop conformations". J Mol Biol 406 (2): 228–256. doi:10.1016/j.jmb.2010.10.030. PMC 3065967. PMID 21035459.
- Nikoloudis D, Pitts JE, Saldanha JW (2014). "A complete, multi-level conformational clustering of antibody complementarity-determining regions". PeerJ 2 (e456). doi:10.7717/peerj.456. PMC 4103072. PMID 25071986.
- Heyman B (1996). "Complement and Fc-receptors in regulation of the antibody response". Immunol Lett 54 (2–3): 195–199. doi:10.1016/S0165-2478(96)02672-7. PMID 9052877.
- Borghesi L, Milcarek C (2006). "From B cell to plasma cell: regulation of V(D)J recombination and antibody secretion". Immunol Res 36 (1–3): 27–32. doi:10.1385/IR:36:1:27. PMID 17337763.
- Ravetch J, Bolland S (2001). "IgG Fc receptors". Annu Rev Immunol 19 (1): 275–290. doi:10.1146/annurev.immunol.19.1.275. PMID 11244038.
- Rus H, Cudrici C, Niculescu F (2005). "The role of the complement system in innate immunity". Immunol Res 33 (2): 103–112. doi:10.1385/IR:33:2:103. PMID 16234578.
- Racaniello, Vincent (6 October 2009). "Natural antibody protects against viral infection". Virology Blog. Archived from the original on 17 November 2010. Retrieved 22 January 2010.
- Milland J, Sandrin MS (December 2006). "ABO blood group and related antigens, natural antibodies and transplantation". Tissue Antigens 68 (6): 459–466. doi:10.1111/j.1399-0039.2006.00721.x. PMID 17176435.
- Mian I, Bradwell A, Olson A (1991). "Structure, function and properties of antibody binding sites". J Mol Biol 217 (1): 133–151. doi:10.1016/0022-2836(91)90617-F. PMID 1988675.
- Fanning LJ, Connor AM, Wu GE (1996). "Development of the immunoglobulin repertoire". Clin. Immunol. Immunopathol. 79 (1): 1–14. doi:10.1006/clin.1996.0044. PMID 8612345.
- Nemazee D (2006). "Receptor editing in lymphocyte development and central tolerance". Nat Rev Immunol 6 (10): 728–740. doi:10.1038/nri1939. PMID 16998507.
- Peter Parham. The Immune System. 2nd ed. Garland Science: New York, 2005. pp. 47–62.
- Mraz, M.; Dolezalova, D.; Plevova, K.; Stano Kozubik, K.; Mayerova, V.; Cerna, K.; Musilova, K.; Tichy, B.; Pavlova, S.; Borsky, M.; Verner, J.; Doubek, M.; Brychtova, Y.; Trbusek, M.; Hampl, A.; Mayer, J.; Pospisilova, S. (2012). "MicroRNA-650 expression is influenced by immunoglobulin gene rearrangement and affects the biology of chronic lymphocytic leukemia". Blood 119 (9): 2110–2113. doi:10.1182/blood-2011-11-394874. PMID 22234685.
- Bergman Y, Cedar H (2004). "A stepwise epigenetic process controls immunoglobulin allelic exclusion". Nat Rev Immunol 4 (10): 753–761. doi:10.1038/nri1458. PMID 15459667.
- Honjo T, Habu S (1985). "Origin of immune diversity: genetic variation and selection". Annu Rev Biochem 54 (1): 803–830. doi:10.1146/annurev.bi.54.070185.004103. PMID 3927822.
- Or-Guil M, Wittenbrink N, Weiser AA, Schuchhardt J (2007). "Recirculation of germinal center B cells: a multilevel selection strategy for antibody maturation". Immunol. Rev. 216: 130–41. doi:10.1111/j.1600-065X.2007.00507.x. PMID 17367339.
- Neuberger M, Ehrenstein M, Rada C, Sale J, Batista F, Williams G, Milstein C (March 2000). "Memory in the B-cell compartment: antibody affinity maturation". Philos Trans R Soc Lond B Biol Sci 355 (1395): 357–360. doi:10.1098/rstb.2000.0573. PMC 1692737. PMID 10794054.
- Stavnezer J, Amemiya CT (2004). "Evolution of isotype switching". Semin. Immunol. 16 (4): 257–275. doi:10.1016/j.smim.2004.08.005. PMID 15522624.
- Durandy A (2003). "Activation-induced cytidine deaminase: a dual role in class-switch recombination and somatic hypermutation". Eur. J. Immunol. 33 (8): 2069–2073. doi:10.1002/eji.200324133. PMID 12884279.
- Casali P, Zan H (2004). "Class switching and Myc translocation: how does DNA break?". Nat. Immunol. 5 (11): 1101–1103. doi:10.1038/ni1104-1101. PMID 15496946.
- Lieber MR, Yu K, Raghavan SC (2006). "Roles of nonhomologous DNA end joining, V(D)J recombination, and class switch recombination in chromosomal translocations". DNA Repair (Amst.) 5 (9–10): 1234–1245. doi:10.1016/j.dnarep.2006.05.013. PMID 16793349.
- page 22 in: Shoenfeld, Yehuda.; Meroni, Pier-Luigi.; Gershwin, M. Eric (2007). Autoantibodie. Amsterdam; Boston: Elsevier. ISBN 978-0-444-52763-9.
- Farlex dictionary: monovalent Citing: The American Heritage Science Dictionary, Copyright 2005
- Farlex dictionary > polyvalent Citing: The American Heritage Medical Dictionary. 2004
- Gunasekaran L (2010). "Enhancing antibody Fc heterodimer formation through electrostatic steering effects: applications to bispecific molecules and monovalent IgG". The Journal of Biological Chemistry 285 (25): 19637–19646. doi:10.1074/jbc.M110.117382.
- Muller K.M (1998). "The first constant domain (CH1 and CL) of an antibody used as heterodimerization domain for bispecific miniantibodies". FEBS Letters 422 (2): 259–264. doi:10.1016/S0014-5793(98)00021-0.
- Gao C (1999). "Making artificial antibodies: A format for phage display of combinatorial heterodimeric arrays". PNAS 96 (11): 6025–6030. doi:10.1073/pnas.96.11.6025.
- "Animated depictions of how antibodies are used in ELISA assays". Cellular Technology Ltd.—Europe. Archived from the original on 17 November 2010. Retrieved 8 May 2007.
- "Animated depictions of how antibodies are used in ELISPOT assays". Cellular Technology Ltd.—Europe. Archived from the original on 17 November 2010. Retrieved 8 May 2007.
- Stern P (2006). "Current possibilities of turbidimetry and nephelometry". Klin Biochem Metab 14 (3): 146–151. Archived from the original on 17 November 2010.
- Dean, Laura (2005). "Chapter 4: Hemolytic disease of the newborn". Blood Groups and Red Cell Antigens. NCBI Bethesda (MD): National Library of Medicine (US).
- Feldmann M, Maini R (2001). "Anti-TNF alpha therapy of rheumatoid arthritis: what have we learned?". Annu Rev Immunol 19 (1): 163–196. doi:10.1146/annurev.immunol.19.1.163. PMID 11244034.
- Doggrell S (2003). "Is natalizumab a breakthrough in the treatment of multiple sclerosis?". Expert Opin Pharmacother 4 (6): 999–1001. doi:10.1517/14656566.4.6.999. PMID 12783595.
- Krueger G, Langley R, Leonardi C, Yeilding N, Guzzo C, Wang Y, Dooley L, Lebwohl M (2007). "A human interleukin-12/23 monoclonal antibody for the treatment of psoriasis". N Engl J Med 356 (6): 580–592. doi:10.1056/NEJMoa062382. PMID 17287478.
- Plosker G, Figgitt D (2003). "Rituximab: a review of its use in non-Hodgkin's lymphoma and chronic lymphocytic leukaemia". Drugs 63 (8): 803–843. doi:10.2165/00003495-200363080-00005. PMID 12662126.
- Vogel C, Cobleigh M, Tripathy D, Gutheil J, Harris L, Fehrenbacher L, Slamon D, Murphy M, Novotny W, Burchmore M, Shak S, Stewart S (2001). "First-line Herceptin monotherapy in metastatic breast cancer". Oncology 61 (Suppl. 2): 37–42. doi:10.1159/000055400. PMID 11694786.
- LeBien TW (1 July 2000). "Fates of human B-cell precursors". Blood 96 (1): 9–23. PMID 10891425. Archived from the original on 17 November 2010.
- Ghaffer A (26 March 2006). "Immunization". Immunology — Chapter 14. University of South Carolina School of Medicine. Archived from the original on 17 November 2010. Retrieved 6 June 2007.
- Urbaniak S, Greiss M (2000). "RhD haemolytic disease of the fetus and the newborn". Blood Rev 14 (1): 44–61. doi:10.1054/blre.1999.0123. PMID 10805260.
- Fung Kee Fung K; Eason E; Crane J; Armson A; De La Ronde S; Farine D; Keenan-Lindsay L; Leduc L; Reid GJ; Aerde JV; Wilson RD; Davies G; Désilets VA; Summers A; Wyatt P; Young DC; Maternal-Fetal Medicine Committee; Genetics Committee (2003). "Prevention of Rh alloimmunization". J Obstet Gynaecol Can 25 (9): 765–73. PMID 12970812.
- Tini M, Jewell UR, Camenisch G, Chilov D, Gassmann M (2002). "Generation and application of chicken egg-yolk antibodies". Comp. Biochem. Physiol., Part a Mol. Integr. Physiol. 131 (3): 569–574. doi:10.1016/S1095-6433(01)00508-6. PMID 11867282.
- Cole SP, Campling BG, Atlaw T, Kozbor D, Roder JC (1984). "Human monoclonal antibodies". Mol. Cell. Biochem. 62 (2): 109–20. doi:10.1007/BF00223301. PMID 6087121.
- Kabir S (2002). "Immunoglobulin purification by affinity chromatography using protein A mimetic ligands prepared by combinatorial chemical synthesis". Immunol Invest 31 (3–4): 263–278. doi:10.1081/IMM-120016245. PMID 12472184.
- Brehm-Stecher B, Johnson E (2004). "Single-cell microbiology: tools, technologies, and applications". Microbiol Mol Biol Rev 68 (3): 538–559. doi:10.1128/MMBR.68.3.538-559.2004. PMC 515252. PMID 15353569. Archived from the original on 17 November 2010.
- Williams N (2000). "Immunoprecipitation procedures". Methods Cell Biol. Methods in Cell Biology 62: 449–453. doi:10.1016/S0091-679X(08)61549-6. ISBN 978-0-12-544164-3. PMID 10503210.
- Kurien B, Scofield R (2006). "Western blotting". Methods 38 (4): 283–293. doi:10.1016/j.ymeth.2005.11.007. PMID 16483794.
- Scanziani E (1998). "Immunohistochemical staining of fixed tissues". Methods Mol Biol 104: 133–140. doi:10.1385/0-89603-525-5:133. ISBN 978-0-89603-525-6. PMID 9711649.
- Reen DJ. (1994). "Enzyme-linked immunosorbent assay (ELISA)". Methods Mol Biol. 32: 461–466. doi:10.1385/0-89603-268-X:461. ISBN 0-89603-268-X. PMID 7951745.
- Kalyuzhny AE (2005). "Chemistry and biology of the ELISPOT assay". Methods Mol Biol. 302: 015–032. doi:10.1385/1-59259-903-6:015. ISBN 1-59259-903-6. PMID 15937343.
- "On the reproducibility of science: unique identification of research resources in the biomedical literature". PeerJ. 2 September 2013. Retrieved 1 September 2014.
- "Reporting research antibody use: how to increase experimental reproducibility". F1000. 23 August 2013. Retrieved 1 September 2014.
- Whitelegg N.R.J., Rees A.R. (2000). "WAM: an improved algorithm for modeling antibodies on the WEB". Protein Engineering 13 (12): 819–824. doi:10.1093/protein/13.12.819. PMID 11239080. Archived from the original on 17 November 2010.
- Marcatili P, Rosi A,Tramontano A (2008). "PIGS: automatic prediction of antibody structures". Bioinformatics 24 (17): 1953–1954. doi:10.1093/bioinformatics/btn341. PMID 18641403. Archived from the original on 17 November 2010.
- Sivasubramanian A, Sircar A, Chaudhury S, Gray J J (2009). "Toward high-resolution homology modeling of antibody Fv regions and application to antibody–antigen docking". Proteins 74 (2): 497–514. doi:10.1002/prot.22309. PMC 2909601. PMID 19062174. Archived from the original on 17 November 2010.
- Park, Hyeongsu. "Written Description Problems of the Monoclonal Antibody Patents after Centocor v. Abbott". jolt.law.harvard.edu. Retrieved 12 Dec 2014.
- Lindenmann, Jean (1984). "Origin of the Terms 'Antibody' and 'Antigen'". Scand. J. Immunol. 19 (4): 281–5. doi:10.1111/j.1365-3083.1984.tb00931.x. PMID 6374880. Archived from the original on 17 November 2010.
- Padlan, Eduardo (February 1994). "Anatomy of the antibody molecule". Mol. Immunol. 31 (3): 169–217. doi:10.1016/0161-5890(94)90001-9. PMID 8114766.
- "New Sculpture Portraying Human Antibody as Protective Angel Installed on Scripps Florida Campus". Archived from the original on 17 November 2010. Retrieved 12 December 2008.
- Pescovitz, David. "Protein sculpture inspired by Vitruvian Man". Archived from the original on 17 November 2010. Retrieved 12 December 2008.
- "Emil von Behring — Biography". Archived from the original on 17 November 2010. Retrieved 5 June 2007.
- AGN (1931). "The Late Baron Shibasaburo Kitasato". Canadian Medical Association Journal 25 (2): 206. PMC 382621. PMID 20318414.
- Winau F, Westphal O, Winau R (2004). "Paul Ehrlich—in search of the magic bullet". Microbes Infect. 6 (8): 786–789. doi:10.1016/j.micinf.2004.04.003. PMID 15207826.
- Silverstein AM (2003). "Cellular versus humoral immunology: a century-long dispute". Nat. Immunol. 4 (5): 425–428. doi:10.1038/ni0503-425. PMID 12719732.
- Van Epps HL (2006). "Michael Heidelberger and the demystification of antibodies". J. Exp. Med. 203 (1): 5. doi:10.1084/jem.2031fta. PMC 2118068. PMID 16523537. Archived from the original on 17 November 2010.
- Marrack, JR (1938). Chemistry of antigens and antibodies (2nd ed.). London: His Majesty's Stationery Office. OCLC 3220539.
- "The Linus Pauling Papers: How Antibodies and Enzymes Work". Archived from the original on 17 November 2010. Retrieved 5 June 2007.
- Silverstein AM (2004). "Labeled antigens and antibodies: the evolution of magic markers and magic bullets". Nat. Immunol. 5 (12): 1211–1217. doi:10.1038/ni1140. PMID 15549122. Archived from the original on 18 December 2009.
- Edelman GM, Gally JA (1962). "The nature of Bence-Jones proteins. Chemical similarities to polypeptide chains of myeloma globulins and normal gamma-globulins". J. Exp. Med. 116 (2): 207–227. doi:10.1084/jem.116.2.207. PMC 2137388. PMID 13889153.
- Stevens FJ, Solomon A, Schiffer M (1991). "Bence Jones proteins: a powerful tool for the fundamental study of protein chemistry and pathophysiology". Biochemistry 30 (28): 6803–6805. doi:10.1021/bi00242a001. PMID 2069946.
- Raju TN (1999). "The Nobel chronicles. 1972: Gerald M Edelman (b 1929) and Rodney R Porter (1917–85)". Lancet 354 (9183): 1040. doi:10.1016/S0140-6736(05)76658-7. PMID 10501404.
- Hochman J, Inbar D, Givol D (1973). "An active antibody fragment (Fv) composed of the variable portions of heavy and light chains". Biochemistry 12 (6): 1130–1135. doi:10.1021/bi00730a018. PMID 4569769.
- Tomasi TB (1992). "The discovery of secretory IgA and the mucosal immune system". Immunol. Today 13 (10): 416–418. doi:10.1016/0167-5699(92)90093-M. PMID 1343085.
- Preud'homme JL; Petit I; Barra A; Morel F; Lecron JC; Lelièvre E (2000). "Structural and functional properties of membrane and secreted IgD". Mol. Immunol. 37 (15): 871–887. doi:10.1016/S0161-5890(01)00006-2. PMID 11282392.
- Johansson SG (2006). "The discovery of immunoglobulin E". Allergy and asthma proceedings : the official journal of regional and state allergy societies 27 (2 Suppl 1): S3–6. PMID 16722325.
- Hozumi N, Tonegawa S (1976). "Evidence for somatic rearrangement of immunoglobulin genes coding for variable and constant regions". Proc. Natl. Acad. Sci. U.S.A. 73 (10): 3628–3632. doi:10.1073/pnas.73.10.3628. PMC 431171. PMID 824647.
- Wikimedia Commons has media related to Antibodies.
- Mike's Immunoglobulin Structure/Function Page at University of Cambridge
- Antibodies as the PDB molecule of the month Discussion of the structure of antibodies at RCSB Protein Data Bank
- Microbiology and Immunology On-line Textbook at University of South Carolina
- A hundred years of antibody therapy History and applications of antibodies in the treatment of disease at University of Oxford
- How Lymphocytes Produce Antibody from Cells Alive!
- Antibody applications Fluorescent antibody image library, University of Birmingham
Sea level rise
Since at least the start of the 20th century, the average global sea level has been rising. Between 1900 and 2016, the sea level rose by 16–21 cm (6.3–8.3 in). More precise data gathered from satellite radar measurements reveal an accelerating rise of 7.5 cm (3.0 in) from 1993 to 2017, which is a trend of roughly 30 cm (12 in) per century. This acceleration is due mostly to human-caused global warming, which is driving thermal expansion of seawater and the melting of land-based ice sheets and glaciers. Between 1993 and 2018, thermal expansion of the oceans contributed 42% to sea level rise; the melting of temperate glaciers, 21%; Greenland, 15%; and Antarctica, 8%. Climate scientists expect the rate to further accelerate during the 21st century.
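The "roughly 30 cm per century" figure follows directly from the satellite numbers quoted above. The quick Python check below treats the 1993–2017 rise as linear, which is a simplification that ignores the acceleration just described.

```python
# Quick check of the century-scale trend implied by the satellite record
# quoted above (7.5 cm between 1993 and 2017), assuming a linear rise.
rise_mm = 75.0
years = 2017 - 1993                      # 24 years

rate_mm_per_year = rise_mm / years
rate_cm_per_century = rate_mm_per_year * 100 / 10
print(f"{rate_mm_per_year:.1f} mm/yr, i.e. about {rate_cm_per_century:.0f} cm per century")
# -> roughly 3.1 mm/yr and ~31 cm per century, matching "roughly 30 cm (12 in)"
```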
Projecting future sea level is challenging, due to the complexity of many aspects of the climate system. As climate research into past and present sea levels leads to improved computer models, projections have consistently increased. For example, in 2007 the Intergovernmental Panel on Climate Change (IPCC) projected a high-end estimate of 60 cm (2 ft) through 2099, but their 2014 report raised the high-end estimate to about 90 cm (3 ft). A number of later studies have concluded that a global sea level rise of 200 to 270 cm (6.6 to 8.9 ft) this century is "physically plausible". A conservative estimate of the long-term projections is that each degree Celsius of temperature rise triggers a sea level rise of approximately 2.3 metres (7.5 ft), equivalent to about 1.3 metres (4.2 ft) per degree Fahrenheit, over a period of two millennia: an example of climate inertia.
The sea level will not rise uniformly everywhere on Earth, and it will even drop in some locations. Local factors include tectonic effects and subsidence of the land, tides, currents and storms. Sea level rises can influence human populations considerably in coastal and island regions. Widespread coastal flooding is expected with several degrees of warming sustained for millennia. Further effects are higher storm-surges and more dangerous tsunamis, displacement of populations, loss and degradation of agricultural land and damage in cities. Natural environments like marine ecosystems are also affected, with fish, birds and plants losing parts of their habitat.
Societies can respond to sea level rise in three different ways: to retreat, to accommodate and to protect. Sometimes these adaptation strategies go hand in hand, but at other times choices have to be made among different strategies. Ecosystems that adapt to rising sea levels by moving inland might not always be able to do so, due to natural or artificial barriers.
Past changes in sea level
Understanding past sea level is important for the analysis of current and future changes. In the recent geological past, changes in land ice and thermal expansion from increased temperatures have been the dominant reasons for sea level rise. The last time the Earth was 2 °C (3.6 °F) warmer than pre-industrial temperatures, sea levels were at least 5 metres (16 ft) higher than now: this was during the last interglacial, when warming was caused by changes in the amount of sunlight received as a result of slow variations in the Earth's orbit. The warming was sustained over a period of thousands of years, and the magnitude of the rise in sea level implies a large contribution from the Antarctic and Greenland ice sheets.
Since the last glacial maximum about 20,000 years ago, the sea level has risen by more than 125 metres (410 ft), with rates varying from less than 1 mm/year to more than 40 mm/year, as a result of melting ice sheets over Canada and Eurasia. Rapid disintegration of ice sheets led to so-called 'meltwater pulses', periods during which sea level rose rapidly. The rate of rise started to slow down about 8,200 years before present; the sea level was almost constant over the last 2,500 years, before the recent rising trend that started at the end of the 19th century or the beginning of the 20th.
Sea level measurement
Sea level changes can be driven either by variations in the amount of water in the oceans, the volume of the ocean, or by changes of the land compared to the sea surface. The different techniques used to measure changes in sea level do not all measure exactly the same thing. Tide gauges can only measure relative sea level, whilst satellites can also measure absolute sea level changes. To get precise measurements for sea level, researchers studying the ice and the oceans on our planet factor in ongoing deformations of the solid Earth, in particular land masses still rising from the retreat of past ice masses, as well as changes in the Earth's gravity and rotation.
Since the launch of TOPEX/Poseidon in 1992, altimetric satellites have been recording the changes in sea level. Those satellites can measure the hills and valleys in the sea caused by currents and detect trends in their height. To measure the distance to the sea surface, the satellites send a microwave pulse to the ocean's surface and record the time it takes to return. Microwave radiometers correct for the additional delay caused by water vapor in the atmosphere. Combining these data with the precisely known location of the spacecraft makes it possible to determine sea-surface height to within a few centimeters (about one inch). Current rates of sea level rise from satellite altimetry have been estimated to be 3.0 ± 0.4 millimetres (0.118 ± 0.016 in) per year for the period 1993–2017. Earlier satellite measurements were slightly at odds with tide gauge measurements. A small calibration error for the TOPEX/Poseidon satellite was eventually identified as having caused a slight overestimation of the 1992–2005 sea levels, which masked the ongoing acceleration of sea level rise.
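At its core, the altimetry range measurement described above is a time-of-flight calculation: the satellite's height above the sea surface is half the round-trip travel time multiplied by the speed of light, with corrections applied afterwards. The Python sketch below uses invented numbers for the round-trip time, the delay correction and the orbit height; real processing involves many further corrections not shown here.

```python
# Minimal time-of-flight range calculation of the kind used in satellite
# altimetry. All numbers are hypothetical and chosen only for illustration;
# real processing applies many further corrections (ionosphere, sea-state
# bias, tides, precise orbit determination, ...).
C = 299_792_458.0  # speed of light, m/s

def range_to_sea_surface(round_trip_s: float, wet_tropo_delay_m: float = 0.0) -> float:
    """Satellite-to-sea-surface distance: half the round trip, minus a delay correction."""
    return C * round_trip_s / 2.0 - wet_tropo_delay_m

round_trip = 0.00891          # hypothetical round-trip time in seconds (~1,336 km altitude)
delay_correction = 0.15       # hypothetical wet-troposphere correction in metres

rng = range_to_sea_surface(round_trip, delay_correction)
orbit_height = 1_335_712.0    # hypothetical satellite height above the reference ellipsoid, m
sea_surface_height = orbit_height - rng
print(f"range: {rng:,.1f} m; sea-surface height: {sea_surface_height:,.1f} m")
```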
Satellites are useful for measuring regional variations in sea level, such as the substantial rise between 1993 and 2012 in the western tropical Pacific. This sharp rise has been linked to increasing trade winds, which occur when the Pacific Decadal Oscillation (PDO) and the El Niño–Southern Oscillation (ENSO) change from one state to the other. The PDO is a basin-wide climate pattern consisting of two phases, each commonly lasting 10 to 30 years, while the ENSO has a shorter period of 2 to 7 years.
Another important source of sea-level observations is the global network of tide gauges. Compared to the satellite record, this record has major spatial gaps but covers a much longer period of time. Coverage of tide gauges started primarily in the Northern Hemisphere, with data for the Southern Hemisphere remaining scarce up to the 1970s. The longest running sea-level measurements, NAP or Amsterdam Ordnance Datum established in 1675, are recorded in Amsterdam, the Netherlands. In Australia record collection is also quite extensive, including measurements by an amateur meteorologist beginning in 1837 and measurements taken from a sea-level benchmark struck on a small cliff on the Isle of the Dead near the Port Arthur convict settlement in 1841.
This network was used, in combination with satellite altimeter data, to establish that global mean sea-level rose 19.5 cm (7.7 in) between 1870 and 2004 at an average rate of about 1.44 mm/yr (1.7 mm/yr during the 20th century). Data collected by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia show the current global mean sea level trend to be 3.2 mm (0.13 in) per year, a doubling of the rate during the 20th century. This is an important confirmation of climate change simulations which predicted that sea level rise would accelerate in response to global warming.
Some regional differences are also visible in the tide gauge data. Some of the recorded regional differences are due to differences in the actual sea level, while others are due to vertical land movements. In Europe, for instance, considerable variation is found because some land areas are rising while others are sinking. Since 1970, most tidal stations have measured higher seas, but sea levels have dropped along the northern Baltic Sea due to post-glacial rebound.
The three main reasons warming causes global sea level to rise are: oceans expand, ice sheets lose ice faster than it forms from snowfall, and glaciers at higher altitudes also melt. Sea level rise since the start of the 20th century has been dominated by retreat of glaciers and expansion of the ocean, but the contributions of the two large ice sheets (Greenland and Antarctica) are expected to increase in the 21st century. The ice sheets store most of the land ice (∼99.5%), with a sea-level equivalent (SLE) of 7.4 m (24 ft) for Greenland and 58.3 m (191 ft) for Antarctica.
Each year about 8 mm (0.31 in) of precipitation (liquid equivalent) falls on the ice sheets in Antarctica and Greenland, mostly as snow, which accumulates and over time forms glacial ice. Much of this precipitation began as water vapor evaporated from the ocean surface. Some of the snow is blown away by wind or disappears from the ice sheet by melt or by directly changing into a gas. The rest of the snow slowly changes into ice. This ice can flow to the edges of the ice sheet and return to the ocean by melting at the edge or in the form of icebergs. If precipitation, surface processes and ice loss at the edge balance each other, sea level remains the same. However scientists have found that ice is being lost, and at an accelerating rate.
Most of the additional heat trapped in the Earth's climate system by global warming is stored in oceans. They store more than 90% of the extra heat and act as a buffer against the effects of climate change. The heat needed to raise the average temperature of the entire world ocean by 0.01 °C would, if added to the atmosphere instead, increase atmospheric temperature by approximately 10 °C. Thus, a small change in the mean temperature of the ocean represents a very large change in the total heat content of the climate system.
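That comparison can be checked with rough numbers. The ocean mass, atmospheric mass and specific heat capacities used in the sketch below are standard approximate values, not figures taken from the text.

```python
# Rough check of the ocean/atmosphere heat-capacity comparison above, using
# standard approximate values (not figures from the text).
ocean_mass_kg = 1.4e21    # approximate mass of the world ocean
ocean_cp = 3990.0         # specific heat of seawater, J/(kg*K)
atmos_mass_kg = 5.1e18    # approximate mass of the atmosphere
atmos_cp = 1005.0         # specific heat of air at constant pressure, J/(kg*K)

heat_joules = ocean_mass_kg * ocean_cp * 0.01                 # warm the whole ocean by 0.01 C
equivalent_atmos_warming = heat_joules / (atmos_mass_kg * atmos_cp)
print(f"{heat_joules:.1e} J, equivalent to ~{equivalent_atmos_warming:.0f} C of atmospheric warming")
# -> about 11 C, consistent with the "approximately 10 C" stated above
```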
When the ocean gains heat, the water expands and sea level rises. The amount of expansion varies with both water temperature and pressure: for a given amount of warming, warmer water and water under greater pressure (due to depth) expand more than cooler water and water under less pressure. This means that cold Arctic Ocean water will expand less than warm tropical water. Because different climate models have slightly different patterns of ocean heating, they do not agree fully in their predictions for the contribution of ocean heating to sea level rise. Heat gets transported into deeper parts of the ocean by winds and currents, and some of it reaches depths of more than 2,000 m (6,600 ft).
The large volume of ice on the Antarctic continent stores around 70% of the world's fresh water. The Antarctic ice sheet mass balance is affected by snowfall accumulation and by ice discharge along the periphery. Under the influence of global warming, melt at the base of the ice sheet increases. Simultaneously, the capacity of the atmosphere to carry precipitation increases with temperature, so that precipitation, in the form of snowfall, increases in global and regional models. The additional snowfall causes increased ice flow of the ice sheet into the ocean, so that the mass gain due to snowfall is partially compensated. Snowfall increased over the last two centuries, but no increase was found in the interior of Antarctica over the last four decades. Based on changes in Antarctica's ice mass balance over millions of years, due to natural climate fluctuations, researchers concluded that sea ice acts as a barrier to warmer waters surrounding the continent. Consequently, the loss of sea ice is a major driver of the instability of the entire ice sheet.
Different satellite methods for measuring ice mass and change are in good agreement, and combining methods leads to more certainty about how the East Antarctic Ice Sheet, the West Antarctic Ice Sheet, and the Antarctic Peninsula evolve. A 2018 systematic review study estimated that ice loss across the entire continent was 43 gigatons (Gt) per year on average during the period from 1992 to 2002, but has accelerated to an average of 220 Gt per year during the five years from 2012 to 2017. Most of the melt comes from the West Antarctic Ice Sheet, but the Antarctic Peninsula and East Antarctic Ice Sheet also contribute. The sea-level rise due to Antarctica has been estimated to be 0.25 mm per year from 1993–2005, and 0.42 mm per year from 2005 to 2015. All datasets generally show an acceleration of mass loss from the Antarctic ice-sheet, but with year-to-year variations.
The world's largest potential source of sea level rise is the East Antarctic Ice Sheet, which holds enough ice to raise global sea levels by 53.3 m (175 ft). The ice sheet has historically been considered to be relatively stable and has therefore attracted less scientific attention and observations compared to West Antarctica. A combination of satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance suggests the overall mass balance of the East Antarctic Ice Sheet was relatively steady or slightly positive for much of the period 1992–2017. A 2019 study however, using different methodology, concluded that East Antarctica is losing significant amounts of ice mass. The lead scientist Eric Rignot told CNN: "melting is taking place in the most vulnerable parts of Antarctica ... parts that hold the potential for multiple meters of sea level rise in the coming century or two."
Methods agree that the Totten Glacier has lost ice in recent decades in response to ocean warming and possibly a reduction in local sea ice cover. Totten Glacier is the primary outlet of the Aurora Subglacial Basin, a major ice reservoir in East Antarctica that could rapidly retreat due to hydrological processes. The global sea level potential of 3.5 m (11 ft) flowing through Totten Glacier alone is of similar magnitude to the entire probable contribution of the West Antarctic Ice Sheet. The other major ice reservoir on East Antarctica that might rapidly retreat is the Wilkes Basin which is subject to marine ice sheet instability. Ice loss from these outlet glaciers is possibly compensated by accumulation gains in other parts of Antarctica.
Even though East Antarctica contains the largest potential source of sea level rise, it is West Antarctica that currently experiences a net outflow of ice, causing sea levels to rise. Data from different satellites covering 1992 to 2017 show that melt increased significantly over this period. Antarctica as a whole has caused a total of 7.6 ± 3.9 mm (0.30 ± 0.15 in) of sea level rise. Given that the mass balance of the East Antarctic Ice Sheet was relatively steady, the major contributor was West Antarctica. Significant acceleration of outflow glaciers in the Amundsen Sea Embayment may have contributed to this increase. In contrast to East Antarctica and the Antarctic Peninsula, temperatures on West Antarctica have increased significantly, with a trend between 0.08 °C (0.14 °F) and 0.96 °C (1.7 °F) per decade between 1976 and 2012.
Multiple types of instability are at play in West Antarctica. One is the Marine Ice Sheet Instability, where the bedrock on which parts of the ice sheet rest lies deeper inland. This means that when a part of the ice sheet melts, a thicker part of the ice sheet is exposed to the ocean, which may lead to additional ice loss. The second involves melting of the ice shelves, the floating extensions of the ice sheet, which leads to a process named the Marine Ice Cliff Instability. Because ice shelves function as a buttress to the ice sheet, their melt leads to additional ice flow. Melt of ice shelves is accelerated when surface melt creates crevasses and these crevasses cause fracturing.
The Thwaites and Pine Island glaciers have been identified as potentially prone to these processes, since the bedrock topography beneath both glaciers gets deeper farther inland, exposing them to more warm water intrusion at the grounding line. With continued melt and retreat they contribute to raising global sea levels. Most of the bedrock underlying the West Antarctic Ice Sheet lies well below sea level. A rapid collapse of the West Antarctic Ice Sheet could raise sea level by 3.3 metres (11 ft).
Most ice on Greenland is part of the Greenland ice sheet which is 3 km (2 mi) at its thickest. The rest of the ice on Greenland is part of isolated glaciers and ice caps. The sources contributing to sea level rise from Greenland are from ice sheet melting (70%) and from glacier calving (30%). Dust, soot, and microbes and algae living on parts of the ice sheet further enhance melting by darkening its surface and thus absorbing more thermal radiation; these regions grew by 12% between 2000 and 2012, and are likely to expand further. Average annual ice loss in Greenland more than doubled in the early 21st century compared to the 20th century. Some of Greenland's largest outlet glaciers, such as Jakobshavn Isbræ and Kangerlussuaq Glacier, are flowing faster into the ocean.
A study published in 2017 concluded that Greenland's peripheral glaciers and ice caps crossed an irreversible tipping point around 1997, and will continue to melt. The Greenland ice sheet and its glaciers and ice caps are the largest contributor to sea level rise from land ice sources (excluding thermal expansion), combined accounting for 71 percent, or 1.32 mm per year during the 2012–2016 period.
Estimates of the future contribution to sea level rise from Greenland range from 0.3 to 3 metres (1 to 10 ft) for the year 2100. The contribution of the Greenland ice sheet to sea level over the next couple of centuries can be very high due to a self-reinforcing cycle (a so-called positive feedback). After an initial period of melting, the height of the ice sheet will have lowered. As air temperature increases closer to the sea surface, more melt starts to occur. This melting may be further accelerated because the surface of melting ice is darker and therefore absorbs more sunlight. There is a threshold in surface warming beyond which a partial or near-complete melting of the Greenland ice sheet occurs. Different research has put this threshold value as low as 1 °C (1.8 °F), and certainly no higher than 4 °C (7.2 °F), above pre-industrial temperatures.
Less than 1% of glacier ice is in mountain glaciers, compared to 99% in Greenland and Antarctica. Still, mountain glaciers have contributed appreciably to historical sea level rise and are set to contribute a smaller, but still significant, fraction of sea level rise in the 21st century. The roughly 200,000 glaciers on Earth are spread out across all continents. Different glaciers respond differently to increasing temperatures. For instance, valley glaciers that have a shallow slope retreat under even mild warming. Every glacier has a height above which there is net gain in mass and below which it loses mass. If that height changes even slightly, there are large consequences for glaciers with a shallow slope. Many glaciers drain into the ocean and ice loss can therefore increase when ocean temperatures increase.
Observational and modelling studies of mass loss from glaciers and ice caps indicate a contribution to sea level rise of 0.2–0.4 mm per year, averaged over the 20th century. Over the 21st century this is expected to rise, with glaciers contributing 7 to 24 cm (3 to 9 in) to global sea levels. Glaciers contributed around 40% of sea level rise during the 20th century, with estimates for the 21st century of around 30%.
Sea ice melt contributes very slightly to global sea level rise. If the melt water from ice floating in the sea were exactly the same as sea water then, according to Archimedes' principle, no rise would occur. However, melted sea ice contains less dissolved salt than sea water and is therefore less dense: although the melted sea ice weighs the same as the sea water it displaced while it was ice, its volume is slightly greater. If all floating ice shelves and icebergs were to melt, sea level would rise by only about 4 cm (1.6 in).
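A rough back-of-the-envelope sketch of this effect, using typical densities for seawater and fresh water (the density values are assumed here, not taken from the text):

```python
# Why melting floating ice raises sea level only slightly (illustrative densities).
RHO_SEAWATER = 1025.0    # kg/m^3, typical surface seawater
RHO_FRESHWATER = 1000.0  # kg/m^3, melted (nearly salt-free) ice

def extra_volume_per_m3_displaced():
    """Extra water volume created when floating ice melts.

    By Archimedes' principle the ice displaces seawater of equal mass.
    When it melts it becomes fresh water of that same mass, but fresh
    water is less dense, so it occupies slightly more volume.
    """
    mass = RHO_SEAWATER * 1.0            # mass of 1 m^3 of displaced seawater
    melted_volume = mass / RHO_FRESHWATER
    return melted_volume - 1.0           # extra volume per m^3 displaced

print(f"{extra_volume_per_m3_displaced():.3f} m^3 extra per m^3 of displaced seawater")
# ~0.025, i.e. roughly a 2.5% volume excess, which is why melting all floating ice
# would add only a few centimetres to global sea level.
```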
Land water storage
Humans affect how much water is stored on land. Building dams prevents large masses of water from flowing into the sea and therefore increases the storage of water on land. On the other hand, humans extract water from lakes, wetlands and underground reservoirs for food production, and this water eventually reaches the sea and raises its level. The hydrological cycle is also influenced by climate change and deforestation, which can add further positive or negative contributions to sea level rise. In the 20th century these processes roughly balanced, but dam building has slowed down and is expected to stay low for the 21st century.
There are broadly two ways of modelling sea level rise and making future projections. In process-based modelling, all relevant and well-understood physical processes are included in a physical model: an ice-sheet model is used to calculate the contributions of ice sheets, and a general circulation model is used to compute rising sea temperatures and the resulting expansion. A disadvantage of this method is that not all relevant processes may be understood well enough. Alternatively, some scientists use semi-empirical techniques, which combine basic physical modelling with geological data from the past to determine likely sea level responses to a warming world. Semi-empirical sea level models rely on statistical relationships between observed (contributions to) global mean sea level and global mean temperature. This type of modelling was partly motivated by the fact that, in previous assessments by the Intergovernmental Panel on Climate Change (IPCC), most physical models underestimated the amount of sea level rise compared with 20th-century observations.
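As a rough illustration of the semi-empirical approach, the sketch below integrates a rate of rise taken to be proportional to warming above a baseline temperature. The sensitivity coefficient and the temperature pathway are purely illustrative assumptions, not fitted values from the literature:

```python
# Minimal sketch of a semi-empirical sea level model: the rate of rise is taken to be
# proportional to warming above a baseline, with the proportionality constant normally
# fitted to past observations. Coefficient and pathway below are illustrative only.

A_MM_PER_YR_PER_DEGC = 3.4   # assumed sensitivity: mm of rise per year per degC of warming

def project_sea_level(warming_c, a=A_MM_PER_YR_PER_DEGC):
    """Integrate dS/dt = a * T(t) in yearly steps, where T is warming above the baseline."""
    level_mm = 0.0
    levels = []
    for t in warming_c:
        level_mm += a * t          # one year of rise at the current warming level
        levels.append(level_mm)
    return levels

# Toy temperature pathway: warming increases linearly from 1.0 to 3.0 degC over 100 years.
pathway = [1.0 + 2.0 * year / 100 for year in range(101)]
print(f"Projected rise after 100 years: {project_sea_level(pathway)[-1] / 10:.0f} cm")
```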
Projections for the 21st century
In its fifth assessment report (2013), the Intergovernmental Panel on Climate Change (IPCC) estimated how much sea level is likely to rise in the 21st century under different levels of greenhouse gas emissions. These projections are based on well-understood factors that contribute to sea level rise, but exclude processes that are less well understood. If countries make rapid cuts to emissions (the RCP2.6 scenario), the IPCC deems it likely (67% confidence) that sea level will rise by 26–55 cm (10–22 in). If emissions remain very high, the IPCC projects sea level will rise by 52–98 cm (20–39 in).
Since the publication of the 2013 IPCC assessment, attempts have been made to include more physical processes and to develop models that can project sea level rise using paleoclimate data. This has typically led to higher estimates of sea level rise. For instance, a 2016 study led by Jim Hansen concluded that, based on past climate change data, sea level rise could accelerate exponentially in the coming decades, with a doubling time of 10, 20 or 40 years, raising the ocean by several metres in 50, 100 or 200 years respectively. However, Greg Holland of the National Center for Atmospheric Research, who reviewed the study, noted: "There is no doubt that the sea level rise, within the IPCC, is a very conservative number, so the truth lies somewhere between IPCC and Jim."
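A back-of-the-envelope sketch of the doubling-time argument: if the rate of rise doubles every N years, the cumulative rise grows roughly exponentially. The starting rate of 3.4 mm per year is an assumed, illustrative value, not a figure from the study:

```python
# Cumulative sea level rise when the annual rate doubles every `doubling_time_yr` years.
import math

def cumulative_rise_m(initial_rate_mm_per_yr, doubling_time_yr, years):
    """Integral of an exponentially growing rate: r0 * Td/ln(2) * (2^(t/Td) - 1), in metres."""
    growth = doubling_time_yr / math.log(2)
    return initial_rate_mm_per_yr * growth * (2 ** (years / doubling_time_yr) - 1) / 1000.0

for doubling, horizon in [(10, 50), (20, 100), (40, 200)]:
    rise = cumulative_rise_m(3.4, doubling, horizon)
    print(f"doubling every {doubling} yr -> ~{rise:.1f} m after {horizon} years")
```

Under these assumed numbers the cumulative rise reaches the order of metres on the quoted timescales, which is the qualitative point of the doubling-time scenario.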
In addition, a 2017 study whose scenario assumes high fossil fuel use and strong economic growth during this century projects a median sea level rise of up to 132 cm (4.3 ft) by 2100, and as much as 189 cm (6.2 ft) in an extreme scenario. This could mean a rate of sea level rise of up to 19 mm (0.75 in) per year by the end of the century. The study also concluded that meeting the Paris climate agreement emissions scenario would result in a median 52 cm (20 in) of sea level rise by 2100.
According to the Fourth (2017) National Climate Assessment (NCA) of the United States, it is very likely that sea level will rise between 30 and 130 cm (1.0–4.3 ft) by 2100 compared with the year 2000. A rise of 2.4 m (8 ft) is physically possible under a high emission scenario, but the authors were unable to say how likely it is. This worst-case scenario can only come about with a large contribution from Antarctica, a region that is difficult to model.
The possibility of a collapse of the West-Antarctic ice sheet and subsequent rapid sea level rise was suggested back in the 1970s. For instance, Mercer published a study in 1978 predicting that anthropogenic carbon dioxide warming and its potential effects on climate in the 21st century could cause a sea level rise of around 5 metres (16 ft) from melting of the West Antarctic ice-sheet alone.
A 2019 study projected that in a low emission scenario sea level will rise 30 centimetres by 2050 and 69 centimetres by 2100, relative to the level in 2000; in a high emission scenario the rise will be 34 cm by 2050 and 111 cm by 2100. There is a chance that the rise will exceed 2 metres by 2100 in the high emission scenario, which would displace 187 million people.
Long-term sea level rise
There is widespread consensus among climate scientists that substantial long-term sea level rise will continue for centuries to come even if temperatures stabilise. Models are able to reproduce paleo records of sea level rise, which provides confidence in their application to long-term future change.
Both the Greenland ice sheet and Antarctica have tipping points for warming levels that could be reached before the end of the 21st century. Crossing such tipping points means that ice-sheet changes are potentially irreversible: a return to pre-industrial temperatures may not stabilise the ice sheet once the tipping point has been crossed. Quantifying the exact temperature change at which this happens remains controversial. For Greenland, estimates roughly range between 1 and 4 °C (2 to 7 °F) above pre-industrial, and the lower of these values has already been passed.
Melting of the Greenland ice sheet could contribute an additional 4 to 7.5 m (13 to 25 ft) over many thousands of years. A 2013 study estimated a commitment of 2.3 m (7 ft 7 in) of sea level rise for each degree of temperature rise within the next 2,000 years. More recent research, especially into Antarctica, indicates that this is probably a conservative estimate and that true long-term sea level rise might be higher. Warming beyond the 2 °C (3.6 °F) target could lead to rates of sea level rise dominated by ice loss from Antarctica. Continued carbon dioxide emissions from fossil fuel sources could cause additional tens of metres of sea level rise over the next millennia; the fossil fuel available on Earth is enough to ultimately melt the entire Antarctic ice sheet, causing about 58 m (190 ft) of sea level rise. After 500 years, sea level rise from thermal expansion alone may have reached only half of its eventual level, which models suggest may lie in the range of 0.5 to 2 m (2 to 7 ft).
Regional sea level change
Sea level rise is not uniform around the globe. Some land masses are moving up or down as a consequence of subsidence (land sinking or settling) or post-glacial rebound (land rising as the weight of ice is removed after melting), so local sea level rise may be higher or lower than the global average. There are even regions near current and former glaciers and ice sheets where sea level falls. Furthermore, the gravitational effects of changing ice masses and spatially varying patterns of warming lead to differences in the distribution of sea water around the globe. The gravitational effect comes into play when a large ice sheet melts: with the loss of mass, its gravitational pull weakens and water levels near the ice sheet may drop, while farther away water levels increase by more than the average. In this light, melt in Greenland has a different fingerprint on regional sea level than melt in Antarctica.
Many ports, urban conglomerations, and agricultural regions are built on river deltas, where subsidence of land contributes to a substantially increased relative sea level rise. This is caused by both unsustainable extraction of groundwater (in some places also by extraction of oil and gas), and by levees and other flood management practices that prevent accumulation of sediments from compensating for the natural settling of deltaic soils. Total human-caused subsidence in the Rhine-Meuse-Scheldt delta (Netherlands) is estimated at 3 to 4 m (10 to 13 ft), over 3 m (10 ft) in urban areas of the Mississippi River Delta (New Orleans), and over 9 m (30 ft) in the Sacramento-San Joaquin River Delta. Isostatic rebound causes relative sea level fall around the Hudson Bay in Canada and the northern Baltic.
The Atlantic is set to warm at a faster pace than the Pacific. This has consequences for Europe and the U.S. East Coast, which are expected to experience sea level rise 3–4 times the global average. A downturn of the Atlantic meridional overturning circulation (AMOC) has also been tied to extreme regional sea level rise on the US Northeast Coast.
Current and future sea level rise is set to have a number of impacts, particularly on coastal systems. Such impacts include increased coastal erosion, higher storm-surge flooding, inhibition of primary production processes, more extensive coastal inundation, changes in surface water quality and groundwater characteristics, increased loss of property and coastal habitats, increased flood risk and potential loss of life, loss of non-monetary cultural resources and values, impacts on agriculture and aquaculture through decline in soil and water quality, and loss of tourism, recreation, and transportation functions. Many of these impacts are detrimental. Owing to the great diversity of coastal environments; regional and local differences in projected relative sea level and climate changes; and differences in the resilience and adaptive capacity of ecosystems, sectors, and countries, the impacts will be highly variable in time and space. River deltas in Africa and Asia and small island states are particularly vulnerable to sea-level rise.
Globally, tens of millions of people will be displaced in the latter decades of the century if greenhouse gas emissions are not reduced drastically. Many coastal areas have large population growth, which puts more people at risk from sea level rise. Rising seas pose direct risks, such as the flooding of unprotected homes, and indirect threats from higher storm surges, tsunamis and king tides. Asia has the largest population at risk from sea level rise, with countries such as Bangladesh, China, India, Indonesia, and Vietnam having very densely populated coastal areas. The effects of displacement depend heavily on how successful governments are in implementing defences against the rising sea, with particular concern for the poorest countries, such as those in sub-Saharan Africa, and island nations.
Ten per cent of the world's population live in coastal areas that are less than 10 metres (33 ft) above sea level, and two thirds of the world's cities with over five million people are located in these low-lying coastal areas. Future sea level rise could lead to potentially catastrophic difficulties for shore-based communities in the coming centuries: for example, millions of people will be affected in cities such as Miami, Rio de Janeiro, Osaka and Shanghai if warming follows the current trajectory towards 3 °C (5.4 °F). The Egyptian city of Alexandria faces a similar situation, where hundreds of thousands of people living in its low-lying areas may already have to be relocated in the coming decade. However, modest increases in sea level are likely to be offset where cities adapt by constructing sea walls or by relocating. Miami has been listed as "the number-one most vulnerable city worldwide" in terms of potential damage to property from storm-related flooding and sea-level rise.
Food production in coastal areas is affected by rising sea levels as well. Due to flooding and salt water intrusion into the soil, the salinity of agricultural land near the sea increases, posing problems for crops that are not salt-resistant. Salt intrusion into fresh irrigation water poses a second problem for irrigated crops. Newly developed salt-resistant crop variants are currently more expensive than the crops they are set to replace. Farmland in the Nile Delta is affected by salt water flooding, and there is now more salt in the soil and irrigation water of the Red River Delta and the Mekong Delta in Vietnam. Bangladesh and China are affected in a similar way, particularly their rice production.
Atolls and low-lying coastal areas on islands are particularly vulnerable to sea level rise. Possible impacts include coastal erosion, flooding and salt intrusion into soils and freshwater. It is difficult to assess how much of past erosion and floods have been caused by sea level change, compared to other environmental events such as hurricanes. Adaptation to sea level rise is costly for small island nations as a large portion of their population lives in areas that are at risk.
The Maldives, Tuvalu, and other low-lying countries are among the areas at the highest level of risk. At current rates, sea level would be high enough to make the Maldives uninhabitable by 2100. Geomorphological events such as storms tend to have larger impacts on reef islands than sea level rise, as observed, for instance, on one of the Marshall Islands. Such events cause immediate erosion followed by a regrowth process that may last from decades to centuries, sometimes even resulting in land areas larger than before the storm. With an expected rise in the frequency and intensity of storms, storms may become more important than sea level rise in determining island shape and size. Five of the Solomon Islands have disappeared due to the combined effects of sea level rise and stronger trade winds pushing water into the Western Pacific.
If all the islands of an island nation became uninhabitable or were completely submerged by the sea, the state itself would be dissolved. Once this happens, all rights to the surrounding sea are lost. This area can be significant, as rights extend to a radius of 224 nautical miles (415 km; 258 mi) around the entire island state. Any resources within this area, such as oil, minerals and metals, could then be extracted and sold by anyone without any payment to the (now dissolved) island state.
Coastal ecosystems are facing drastic changes as a consequence of rising sea levels. Many systems might ultimately be lost when sea levels rise too much or too fast. Some ecosystems can move land inward with the high-water mark, but many are prevented from migrating due to natural or artificial barriers. This coastal narrowing, sometimes called 'coastal squeeze' when considering human-made barriers, could result in the loss of habitats such as mudflats and marshes. Mangroves and tidal marshes adjust to rising sea levels by building vertically using accumulated sediment and organic matter. If sea level rise is too rapid, they will not be able to keep up and will instead be submerged. As both ecosystems protect against storm surges, waves and tsunamis, losing them makes the effects of sea level rise worse. Human activities, such as dam building, may restrict sediment supplies to wetlands, and thereby prevent natural adaptation processes. The loss of some tidal marshes is unavoidable as a consequence.
When seawater reaches inland, problems related to contaminated soils may occur, and fish, birds, and coastal plants could lose parts of their habitat. Coral, important for bird and fish life, needs to grow vertically to remain close to the sea surface in order to get enough energy from sunlight. So far it has been able to keep pace with rising seas through vertical growth, but it might not be able to do so in the future. In 2016 it was reported that the Bramble Cay melomys, which lived on a Great Barrier Reef island, had probably become extinct because of inundation due to sea level rise. This was confirmed by the federal government of Australia when it declared the Bramble Cay melomys extinct in February 2019, making it the first known mammal to go extinct as a result of sea level rise.
Adaptation options to sea level rise can be broadly classified into retreat, accommodate and protect. Retreating means moving people and infrastructure to less exposed areas and preventing further development in areas at risk. This type of adaptation is potentially disruptive, as the displacement of people might lead to tensions. Accommodation options are measures that make societies more flexible to sea level rise; examples are cultivating food crops that tolerate a high salt content in the soil and introducing building standards that require buildings to be raised and to suffer less damage if a flood does occur. Finally, areas can be protected by constructing dams and dikes and by improving natural defences. These adaptation options can be further divided into hard and soft. Hard adaptation relies mostly on capital-intensive, human-built infrastructure and involves large-scale changes to human societies and ecological systems; because of its large scale, it is often not flexible. Soft adaptation involves strengthening natural defences and local adaptation strategies, and using simple, modular technology that can be locally owned. The two types of adaptation may be complementary or mutually exclusive.
Many countries are developing concrete plans for adaptation. An example is the extension of the Delta Works in the Netherlands, a country that sits partially below sea level and is subsiding. In 2008 the Dutch Delta Commission advised in a report that the Netherlands would need a massive new building programme to strengthen the country's water defences against the anticipated effects of global warming over the following 190 years. This included drawing up worst-case plans for evacuations. The plan also included more than €100 billion (US$118 billion) in new spending through to the year 2100 on precautionary measures, such as broadening coastal dunes and strengthening sea and river dikes. The commission said the country must plan for a rise in the North Sea of up to 1.3 metres (4 ft 3 in) by 2100 and a rise of 2–4 metres (7–13 ft) by 2200.
Miami Beach is spending $500 million from 2015 to 2020 to address sea level rise. Actions include a pump drainage system and raising roadways and sidewalks. U.S. coastal cities also conduct so-called beach nourishment, also known as beach replenishment, in which mined sand is trucked in and added to the shore. Some island nations, such as the Republic of Maldives, Kiribati and Tuvalu, are considering international migration of their populations in response to rising seas. Moving to a different country is not an easy solution, as those who move need a steady income and a social network in their new country. It might be easier to adapt locally by moving further inland and increasing the sediment supply needed for natural erosion protection. In the island nation of Fiji, residents are restoring coral reefs and mangroves to protect themselves against flooding and erosion, which is estimated to be more cost-efficient than building sea walls.
In 2019 the president of Indonesia announced that the country's capital would be moved from Jakarta. Jakarta is one of the cities most vulnerable to sea level rise in the world and is sinking at a rate of approximately 25 centimetres per year, largely due to groundwater extraction and the weight of its buildings. There are concerns that the move will harm Indonesia's tropical forests.
- January 2017 analysis from NOAA: Global and Regional Sea Level Rise Scenarios for the United States
- USGCRP (2017). "Climate Science Special Report. Chapter 12: Sea Level Rise". science2017.globalchange.gov. Retrieved 2018-12-27.
- WCRP Global Sea Level Budget Group (2018). "Global sea-level budget 1993–present". Earth System Science Data. 10 (3): 1551–1590. doi:10.5194/essd-10-1551-2018. "This corresponds to a mean sea-level rise of about 7.5 cm over the whole altimetry period. More importantly, the GMSL curve shows a net acceleration, estimated to be at 0.08 mm/yr²."
- Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda (2016). "Future sea level rise constrained by observations and long-term commitment". Proceedings of the National Academy of Sciences. 113 (10): 2597–602. doi:10.1073/pnas.1500515113. ISSN 0027-8424. PMC 4791025. PMID 26903648.
- Climate Change 2014 Synthesis Report Fifth Assessment Report, AR5 (Report). Intergovernmental Panel on Climate Change. 2014. Under all RCP scenarios, the rate of sea level rise will very likely exceed the rate of 2.0 [1.7–2.3] mm/yr observed during 1971–2010
- IPCC, "Summary for Policymakers", Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, 2007, page 13-14"Models used to date do not include uncertainties in climate-carbon cycle feedback nor do they include the full effects of changes in ice sheet flow, because a basis in published literature is lacking."
- Mooney, Chris. "Scientists keep upping their projections for how much the oceans will rise this century". Washington Post.
- Ice sheet contributions to future sea-level rise from structured expert judgment
- Global and Regional Sea Level Rise Scenarios for the United States (PDF) (Report) (NOAA Technical Report NOS CO-OPS 083 ed.). National Oceanic and Atmospheric Administration. January 2017. p. vi. Retrieved 24 August 2018."The projections and results presented in several peer-reviewed publications provide evidence to support a physically plausible GMSL rise in the range of 2.0 meters (m) to 2.7 m, and recent results regarding Antarctic ice-sheet instability indicate that such outcomes may be more likely than previously thought."
- "The strange science of melting ice sheets: three things you didn't know". The Guardian. 12 September 2018.
- Bindoff, N.L.; Willebrand, J.; Artale, V.; Cazenave, A.; Gregory, J.; Gulev, S.; Hanawa, K.; Le Quéré, C.; Levitus, S.; Nojiri, Y.; Shum, C.K.; Talley, L.D.; Unnikrishnan, A. (2007), "Section 5.5.1: Introductory Remarks", in IPCC AR4 WG1 2007 (ed.), Chapter 5: Observations: Ocean Climate Change and Sea Level, ISBN 978-0-521-88009-1, retrieved 25 January 2017
- Box SYN-1: Sustained warming could lead to severe impacts, p. 5, in: Synopsis, in National Research Council 2011
- IPCC TAR WG1 2001.
- "Sea level to increase risk of deadly tsunamis". UPI. 2018.
- Holder, Josh; Kommenda, Niko; Watts, Jonathan. "The three-degree world: cities that will be drowned by global warming". The Guardian. ISSN 0261-3077. Retrieved 2018-12-28.
- "Sea Level Rise". National Geographic. January 13, 2017.
- Thomsen, Dana C.; Smith, Timothy F.; Keys, Noni (2012). "Adaptation or Manipulation? Unpacking Climate Change Response Strategies". Ecology and Society. 17 (3). JSTOR 26269087.
- "Sea level rise poses a major threat to coastal ecosystems and the biota they support". birdlife.org. Birdlife International. 2015.
- Church, J.A.; Clark, P.U. (2013). "Sea Level Change". In Stocker, T.F.; et al. (eds.). Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
- Lambeck, Kurt; Rouby, Hélène; Purcell, Anthony; Sun, Yiying; Sambridge, Malcolm (2014). "Sea level and global ice volumes from the Last Glacial Maximum to the Holocene". Proceedings of the National Academy of Sciences. 111 (43): 15296–15303. doi:10.1073/pnas.1411762111. ISSN 0027-8424. PMC 4217469. PMID 25313072.
- Jones, Richard Selwyn (8 July 2019). "One of the most striking trends – over a century of global-average sea level change". Richard Selwyn Jones. Archived from the original on 30 July 2019. (link to image). For sea level change data, Jones cites Church, J. A.; White, N. J. (September 2011). "Sea-Level Rise from the Late 19th to the Early 21st Century". Surv Geophys. Springer Netherlands. 32 (4–5): 585–602. doi:10.1007/s10712-011-9119-1.
- Rovere, Alessio; Stocchi, Paolo; Vacchi, Matteo (2016). "Eustatic and Relative Sea Level Changes". Current Climate Change Reports.
- "Ocean Surface Topography from Space". NASA/JPL.
- "Jason-3 Satellite - Mission". www.nesdis.noaa.gov. Retrieved 2018-08-22.
- Nerem, R. S.; Beckley, B. D.; Fasullo, J. T.; Hamlington, B. D.; Masters, D.; Mitchum, G. T. (2018). "Climate-change–driven accelerated sea-level rise detected in the altimeter era". Proceedings of the National Academy of Sciences. 115 (9): 2022–2025. doi:10.1073/pnas.1717312115. ISSN 0027-8424. PMC 5834701. PMID 29440401.
- Michael Le Page (11 May 2015). "Apparent slowing of sea level rise is artefact of satellite data".
- Merrifield, Mark A.; Thompson, Philip R.; Lander, Mark (2012). "Multidecadal sea level anomalies and trends in the western tropical Pacific". Geophysical Research Letters. 39 (13): n/a. doi:10.1029/2012gl052032. ISSN 0094-8276.
- Mantua, N.J.; Hare, S.R.; Zhang, Y.; Wallace, J.M.; Francis, R.C. (1997). "A Pacific interdecadal climate oscillation with impacts on salmon production". Bulletin of the American Meteorological Society. 78 (6): 1069–79. Bibcode:1997BAMS...78.1069M. doi:10.1175/1520-0477(1997)078<1069:APICOW>2.0.CO;2. ISSN 1520-0477.
- Rhein, Monika; Rintoul, Stephan (2013). "Observations: Ocean" (PDF). IPCC AR5 WGI. New York: Cambridge University Press. p. 285.
- "Other Long Records not in the PSMSL Data Set". PSMSL. Retrieved 11 May 2015.
- Hunter, John; R. Coleman; D. Pugh (2003). "The Sea Level at Port Arthur, Tasmania, from 1841 to the Present". Geophysical Research Letters. 30 (7): 1401. Bibcode:2003GeoRL..30.1401H. doi:10.1029/2002GL016813.
- Church, J.A.; White, N.J. (2006). "20th century acceleration in global sea-level rise". Geophysical Research Letters. 33 (1): L01602. Bibcode:2006GeoRL..33.1602C. CiteSeerX 10.1.1.192.1792. doi:10.1029/2005GL024826.
- "Historical sea level changes: Last decades". www.cmar.csiro.au. Retrieved 2018-08-26.
- Neil, White. "Historical Sea Level Changes". CSIRO. Retrieved 25 April 2013.
- "Global and European sea level". European Environmental Agency. 27 November 2017. Retrieved 11 January 2019.
- Lewis, Tanya (23 September 2013). "Sea level rise overflowing estimates". Science News.
- Morlighem, Mathieu; Wessem, Melchior J. van; Broeke, Michiel van den; Scheuchl, Bernd; Mouginot, Jérémie; Rignot, Eric (2019). "Four decades of Antarctic Ice Sheet mass balance from 1979–2017". Proceedings of the National Academy of Sciences. 116 (4): 1095–1103. doi:10.1073/pnas.1812883116. ISSN 0027-8424. PMC 6347714. PMID 30642972.
- Levitus, S., Boyer, T., Antonov, J., Garcia, H., and Locarnini, R. (2005) Ocean Warming 1955–2003 Archived 17 July 2009 at the Wayback Machine. Poster presented at the U.S. Climate Change Science Program Workshop, 14–16 November 2005, Arlington VA, Climate Science in Support of Decision-Making; Last viewed 22 May 2009 .
- Kuhlbrodt, T; Gregory, J.M. (2012). "Ocean heat uptake and its consequences for the magnitude of sea level rise and climate change". Geophysical Research Letters. 39 (18). doi:10.1029/2012GL052952.
- Upton, John (2016-01-19). "Deep Ocean Waters Are Trapping Vast Stores of Heat". Scientific American. Retrieved 2019-02-01.
- "How Stuff Works: polar ice caps". howstuffworks.com. 2000-09-21. Retrieved 2006-02-12.
- Winkelmann, R.; Levermann, A.; Martin, M. A.; Frieler, K. (2012-12-12). "Increased future ice discharge from Antarctica owing to higher snowfall". Nature. 492 (7428): 239–242. doi:10.1038/nature11616. ISSN 0028-0836. PMID 23235878.
- "Antarctica ice melt has accelerated by 280% in the last 4 decades". CNN. Retrieved January 14, 2019.
- Shepherd, Andrew; Ivins, Erik; et al. (IMBIE team) (2012). "A Reconciled Estimate of Ice-Sheet Mass Balance". Science. 338 (6111): 1183–1189. Bibcode:2012Sci...338.1183S. doi:10.1126/science.1228102. hdl:2060/20140006608. PMID 23197528.
- Shepherd, Andrew; Ivins, Erik; et al. (IMBIE team) (2018). "Mass balance of the Antarctic Ice Sheet from 1992 to 2017". Nature. 558 (7709): 219–222. doi:10.1038/s41586-018-0179-y. PMID 29899482.
- Fretwell, P.; Pritchard, H. D.; Vaughan, D. G.; Bamber, J. L.; Barrand, N. E.; Bell, R.; Bianchi, C.; Bingham, R. G.; Blankenship, D. D. (2013). "Bedmap2: improved ice bed, surface and thickness datasets for Antarctica". The Cryosphere. 7 (1): 375–393. doi:10.5194/tc-7-375-2013. ISSN 1994-0424.
- IMBIE team (2018). "Mass balance of the Antarctic Ice Sheet from 1992 to 2017" (PDF). Nature. 558 (7709): 219–222. doi:10.1038/s41586-018-0179-y. ISSN 0028-0836. PMID 29899482.
- Greene, Chad A.; Blankenship, Donald D.; Gwyther, David E.; Silvano, Alessandro; Wijk, Esmee van (2017). "Wind causes Totten Ice Shelf melt and acceleration". Science Advances. 3 (11): e1701681. doi:10.1126/sciadv.1701681. ISSN 2375-2548. PMC 5665591. PMID 29109976.
- Roberts, Jason; Galton-Fenzi, Benjamin K.; Paolo, Fernando S.; Donnelly, Claire; Gwyther, David E.; Padman, Laurie; Young, Duncan; Warner, Roland; Greenbaum, Jamin (2017). "Ocean forced variability of Totten Glacier mass loss". Geological Society, London, Special Publications. 461 (1): 175–186. doi:10.1144/sp461.6. ISSN 0305-8719.
- Greene, Chad A.; Young, Duncan A.; Gwyther, David E.; Galton-Fenzi, Benjamin K.; Blankenship, Donald D. (2018). "Seasonal dynamics of Totten Ice Shelf controlled by sea ice buttressing". The Cryosphere. 12 (9): 2869–2882. doi:10.5194/tc-12-2869-2018. ISSN 1994-0416.
- Pollard, David; DeConto, Robert; Alley, Richard (2015). "Potential Antarctic Ice Sheet retreat driven by hydrofracturing and ice cliff failure". Earth and Planetary Science Letters. 412: 112–121. doi:10.1016/j.epsl.2014.12.035. ISSN 0012-821X.
- Siegert, M. J.; Ommen, T. D. van; Warner, R. C.; Schroeder, D. M.; Legresy, B.; Aitken, A. R. A.; Roberts, J. L.; Richter, T. G.; Young, D. A. (2015). "Ocean access to a cavity beneath Totten Glacier in East Antarctica". Nature Geoscience. 8 (4): 294–298. doi:10.1038/ngeo2388. ISSN 1752-0908.
- Rignot, E.; Bamber, J. L.; Van Den Broeke, M. R.; Davis, C.; Li, Y.; Van De Berg, W. J.; Van Meijgaard, E. (2008). "Recent Antarctic ice mass loss from radar interferometry and regional climate modelling". Nature Geoscience. 1 (2): 106–110. Bibcode:2008NatGe...1..106R. doi:10.1038/ngeo102. PMC 4032514. PMID 24891394.
- Schellnhuber, Hans Joachim; Franzke, Christian L. E.; Bunde, Armin; Ludescher, Josef (2016). "Long-term persistence enhances uncertainty about anthropogenic warming of Antarctica". Climate Dynamics. 46 (1–2): 263–271. doi:10.1007/s00382-015-2582-5. ISSN 1432-0894.
- Roe, Gerard H.; Seroussi, Hélène; Robel, Alexander A. (2019-07-03). "Marine ice sheet instability amplifies and skews uncertainty in projections of future sea-level rise". Proceedings of the National Academy of Sciences: 201904822. doi:10.1073/pnas.1904822116. ISSN 0027-8424. PMID 31285345.
- Pattyn, Frank (2018). "The paradigm shift in Antarctic ice sheet modelling". Nature Communications. 9 (1): 2728. doi:10.1038/s41467-018-05003-z. ISSN 2041-1723. PMC 6048022. PMID 30013142.
- "After Decades of Losing Ice, Antarctica Is Now Hemorrhaging It". The Atlantic. 2018.
- "Marine ice sheet instability". AntarcticGlaciers.org. 2014.
- Pollard, David; et al. (2015). "Potential Antarctic Ice Sheet retreat driven by hydrofracturing and ice cliff failure". Earth and Planetary Science Letters. 412: 112–121. doi:10.1016/j.epsl.2014.12.035.
- Bamber J.L.; Riva R.E.M.; Vermeersen B.L.A.; LeBroq A.M. (2009). "Reassessment of the potential sea-level rise from a collapse of the West Antarctic Ice Sheet". Science. 324 (5929): 901–3. Bibcode:2009Sci...324..901B. doi:10.1126/science.1169335. PMID 19443778.
- Joughin, Ian; Alley, Richard B. (2011). "Stability of the West Antarctic ice sheet in a warming world". Nature Geoscience. 4 (8): 506–513. doi:10.1038/ngeo1194. ISSN 1752-0894.
- "NASA Earth Observatory - Newsroom". earthobservatory.nasa.gov. 18 January 2019.
- Bob Berwyn (2018). "What's Eating Away at the Greenland Ice Sheet?". Inside Climate News.
- Kjær, Kurt H.; Willerslev, Eske; Andresen, Camilla S.; Schomacker, Anders; Nuth, Christopher; Siggaard-Andersen, Marie-Louise; Broeke, Michiel van den; Colgan, William; Bamber, Jonathan L. (2015). "Spatial and temporal distribution of mass loss from the Greenland Ice Sheet since AD 1900". Nature. 528 (7582): 396–400. doi:10.1038/nature16183. ISSN 1476-4687. PMID 26672555.
- Joughin, I; et al. (2004). "Large fluctuations in speed on Greenland's Jakobshavn Isbræ glacier". Nature. 432 (7017): 608–610. Bibcode:2004Natur.432..608J. doi:10.1038/nature03130. PMID 15577906.
- Connor, Steve (2005). "Melting Greenland glacier may hasten rise in sea level". The Independent. London. Retrieved 2010-04-30.
- Noël; et al. (2017). "A tipping point in refreezing accelerates mass loss of Greenland's glaciers and ice caps". Nature Communications. 8: 14730. doi:10.1038/ncomms14730. PMC 5380968. PMID 28361871.
- Mosbergen, Dominique (2017). "Greenland's Coastal Ice Caps Have Melted Past The Point Of No Return". Huffington Post.
- Bamber; et al. (2018). "The land ice contribution to sea level during the satellite era". Environmental Research Letters. 13 (6): 063008. Bibcode:2018ERL....13f3008B. doi:10.1088/1748-9326/aac2f0.
- Robinson, Alexander; Calov, Reinhard; Ganopolski, Andrey (2012). "Multistability and critical thresholds of the Greenland ice sheet". Nature Climate Change. 2 (6): 429–432. doi:10.1038/nclimate1449. ISSN 1758-678X.
- Radić, Valentina; Hock, Regine (2011). "Regionally differentiated contribution of mountain glaciers and ice caps to future sea-level rise". Nature Geoscience. 4 (2): 91–94. doi:10.1038/ngeo1052. ISSN 1752-0894.
- Huss, Matthias; Hock, Regine (2015). "A new model for global glacier change and sea-level rise". Frontiers in Earth Science. 3. doi:10.3389/feart.2015.00054. ISSN 2296-6463.
- Vaughan, David G.; Comiso, Josefino C (2013). "Observations: Cryosphere" (PDF). IPCC AR5 WGI. New York: Cambridge University Press.
- Dyurgerov, Mark. (2002). Glacier Mass Balance and Regime: Data of Measurements and Analysis. INSTAAR Occasional Paper No. 55, ed. M. Meier and R. Armstrong. Boulder, CO: Institute of Arctic and Alpine Research, University of Colorado. Distributed by National Snow and Ice Data Center, Boulder, CO. A shorter discussion is at
- Noerdlinger, P.D.; Brower, K.R (2007). "The melting of floating ice raises the ocean level". Geophysical Journal International. 170 (1): 145–150. doi:10.1111/j.1365-246X.2007.03472.x.
- Wada, Yoshihide; Reager, John T.; Chao, Benjamin F.; Wang, Jida; Lo, Min-Hui; Song, Chunqiao; Li, Yuwen; Gardner, Alex S. (2016). "Recent Changes in Land Water Storage and its Contribution to Sea Level Variations". Surveys in Geophysics. 38 (1): 131–152. doi:10.1007/s10712-016-9399-6.
- This article incorporates public domain material from the NOAA document: NOAA GFDL, Geophysical Fluid Dynamics Laboratory – Climate Impact of Quadrupling CO2, Princeton, NJ, USA: NOAA GFDL
- Hoegh-Guldberg, O.; Jacob, Daniela; Taylor, Michael (2018). "Impacts of 1.5°C of Global Warming on Natural and Human Systems" (PDF). Special Report: Global Warming of 1.5 ºC. In Press.
- J. Hansen; M. Sato; P. Hearty; R. Ruedy; M. Kelley; V. Masson-Delmotte; G. Russell; G. Tselioudis; J. Cao; E. Rignot; I. Velicogna; E. Kandiano; K. von Schuckmann; P. Kharecha; A. N. Legrande; M. Bauer; K.-W. Lo (2016). "Ice melt, sea level rise and superstorms: evidence from paleoclimate data, climate modeling, and modern observations that 2 °C global warming could be dangerous". Atmospheric Chemistry and Physics. 16 (6): 3761–3812. arXiv:1602.01393. doi:10.5194/acp-16-3761-2016.
- "James Hansen's controversial sea level rise paper has now been published online". The Washington Post. 2015.
- Chris Mooney (October 26, 2017). "New science suggests the ocean could rise more — and faster — than we thought". The Chicago Tribune.
- Alexander Nauels; Joeri Rogelj; Carl-Friedrich Schleussner; Malte Meinshausen; Matthias Mengel (2017). "Linking sea level rise and socioeconomic indicators under the Shared Socioeconomic Pathways". Environmental Research Letters. 12 (11): 114002. Bibcode:2017ERL....12k4002N. doi:10.1088/1748-9326/aa92b6.
- Mercer, J. H. (1978). "West Antarctic ice sheet and CO2 greenhouse effect: a threat of disaster". Nature. 271 (5643): 321–325. doi:10.1038/271321a0. ISSN 1476-4687.
- L. Bamber, Jonathan; Oppenheimer, Michael; E. Kopp, Robert; P. Aspinall, Willy; M. Cooke, Roger (May 2019). "Ice sheet contributions to future sea-level rise from structured expert judgment". Proceedings of the National Academy of Sciences. 116 (23): 11195–11200. doi:10.1073/pnas.1817205116.
- America's Climate Choices: Panel on Advancing the Science of Climate Change, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, National Research Council of the National Academies (2010). "7 Sea Level Rise and the Coastal Environment". Advancing the Science of Climate Change. Washington, D.C.: The National Academies Press. p. 245. ISBN 978-0-309-14588-6. Retrieved 2011-06-17.
- Broeke, Michiel van den; Trusel, Luke D.; Seroussi, Hélène; Robinson, Alexander; Payne, Antony J.; Nowicki, Sophie; Lenaerts, Jan T. M.; Munneke, Peter Kuipers; Golledge, Nicholas R. (2018). "The Greenland and Antarctic ice sheets under 1.5 °C global warming". Nature Climate Change. 8 (12): 1053–1061. doi:10.1038/s41558-018-0305-8. ISSN 1758-6798.
- Levermann, Anders; Clark, Peter U.; Marzeion, Ben; Glenn A. Milne; David Pollard; Valentina Radić; Alexander Robinson (2013). "The multimillennial sea-level commitment of global warming". PNAS. 110 (34): 13745–50. Bibcode:2013PNAS..11013745L. doi:10.1073/pnas.1219414110. PMC 3752235. PMID 23858443.
- Ricarda Winkelmann; Anders Levermann; Andy Ridgwell; Ken Caldeira (2015). "Combustion of available fossil fuel resources sufficient to eliminate the Antarctic Ice Sheet". Science Advances. 1 (8): e1500589. Bibcode:2015SciA....1E0589W. doi:10.1126/sciadv.1500589. PMC 4643791. PMID 26601273.
- Solomon, S.; Plattner, G.K.; Knutti, R.; Friedlingstein, P. (2009). "Irreversible climate change due to carbon dioxide emissions". Proc. Natl. Acad. Sci. U.S.A. 106 (6): 1704–9. Bibcode:2009PNAS..106.1704S. doi:10.1073/pnas.0812721106. PMC 2632717. PMID 19179281.
- Katsman, Caroline A.; Sterl, A.; Beersma, J. J.; van den Brink, H. W.; Church, J. A.; Hazeleger, W.; Kopp, R. E.; Kroon, D.; Kwadijk, J. (2011). "Exploring high-end scenarios for local sea level rise to develop flood protection strategies for a low-lying delta—the Netherlands as an example". Climatic Change. 109 (3–4): 617–645. doi:10.1007/s10584-011-0037-5. ISSN 0165-0009.
- Bucx et al. 2010, p. 88; Tessler et al. 2015, p. 638
- Bucx et al. 2010, pp. 81, 88, 90
- Cazenave, Anny; Nicholls, Robert J. (2010). "Sea-Level Rise and Its Impact on Coastal Zones". Science. 328 (5985): 1517–1520. doi:10.1126/science.1185782. ISSN 0036-8075. PMID 20558707.
- "Why the U.S. East Coast could be a major 'hotspot' for rising seas". The Washington Post. 2016.
- Jianjun Yin & Stephen Griffies (March 25, 2015). "Extreme sea level rise event linked to AMOC downturn". CLIVAR.
- Mimura, Nobuo (2013). "Sea-level rise caused by climate change and its implications for society". Proceedings of the Japan Academy. Series B, Physical and Biological Sciences. 89 (7): 281–301. doi:10.2183/pjab.89.281. ISSN 0386-2208. PMC 3758961. PMID 23883609.
- McLeman, Robert (2018). "Migration and displacement risks due to mean sea-level rise". Bulletin of the Atomic Scientists. 74 (3): 148–154. doi:10.1080/00963402.2018.1461951. ISSN 0096-3402.
- Nicholls, Robert J.; Marinova, Natasha; Lowe, Jason A.; Brown, Sally; Vellinga, Pier; Gusmão, Diogo de; Hinkel, Jochen; Tol, Richard S. J. (2011). "Sea-level rise and its possible impacts given a 'beyond 4°C world' in the twenty-first century". Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 369 (1934): 161–181. doi:10.1098/rsta.2010.0291. ISSN 1364-503X. PMID 21115518.
- McGranahan, Gordon; Balk, Deborah; Anderson, Bridget (2007). "The rising tide: assessing the risks of climate change and human settlements in low elevation coastal zones". Environment and Urbanization. 19 (1): 17–37. doi:10.1177/0956247807076960. ISSN 0956-2478.
- Michaelson, Ruth (25 August 2018). "Houses claimed by the canal: life on Egypt's climate change frontline". The Guardian. Retrieved 30 August 2018.
- "IPCC's New Estimates for Increased Sea-Level Rise". Yale. 2013.
- Jeff Goodell (June 20, 2013). "Goodbye, Miami". Rolling Stone. Retrieved June 21, 2013. "The Organization for Economic Co-operation and Development lists Miami as the number-one most vulnerable city worldwide in terms of property damage, with more than $416 billion in assets at risk to storm-related flooding and sea-level rise."
- Nagothu, Udaya Sekhar (2017-01-18). "Food security threatened by sea-level rise". Nibio. Retrieved 2018-10-21.
- "Potential Impacts of Sea-Level Rise on Populations and Agriculture". www.fao.org. Retrieved 2018-10-21.
- Nurse, Leonard A.; McLean, Roger (2014). "29: Small Islands" (PDF). In Barros, VR; Field (eds.). AR5 WGII. Cambridge University Press.
- Megan Angelo (1 May 2009). "Honey, I Sunk the Maldives: Environmental changes could wipe out some of the world's most well-known travel destinations".
- Kristina Stefanova (19 April 2009). "Climate refugees in Pacific flee rising sea".
- Ford, Murray R.; Kench, Paul S. (2016). "Spatiotemporal variability of typhoon impacts and relaxation intervals on Jaluit Atoll, Marshall Islands". Geology. 44 (2): 159–162. Bibcode:2016Geo....44..159F. doi:10.1130/g37402.1.
- Klein, Alice. "Five Pacific islands vanish from sight as sea levels rise". New Scientist. Retrieved 2016-05-09.
- Alfred Henry Adriaan Soons (1989). Zeegrenzen en zeespiegelrijzing : volkenrechtelijke beschouwingen over de effecten van het stijgen van de zeespiegel op grenzen in zee : rede, uitgesproken bij de aanvaarding van het ambt van hoogleraar in het volkenrecht aan de Rijksuniversiteit te Utrecht op donderdag 13 april 1989 [Sea borders and rising sea levels: international law considerations about the effects of rising sea levels on borders at sea: speech, pronounced with the acceptance of the post of professor in international law at the University of Utrecht on 13 April 1989] (in Dutch). Kluwers. ISBN 978-90-268-1925-4.
- Pontee, Nigel (2013). "Defining coastal squeeze: A discussion". Ocean & Coastal Management. 84: 204–207. doi:10.1016/j.ocecoaman.2013.07.010. ISSN 0964-5691.
- Krauss, Ken W.; McKee, Karen L.; Lovelock, Catherine E.; Cahoon, Donald R.; Saintilan, Neil; Reef, Ruth; Chen, Luzhen (2013). "How mangrove forests adjust to rising sea level". New Phytologist. 202 (1): 19–34. doi:10.1111/nph.12605. ISSN 0028-646X. PMID 24251960.
- Crosby, Sarah C.; Sax, Dov F.; Palmer, Megan E.; Booth, Harriet S.; Deegan, Linda A.; Bertness, Mark D.; Leslie, Heather M. (2016). "Salt marsh persistence is threatened by predicted sea-level rise". Estuarine, Coastal and Shelf Science. 181: 93–99. doi:10.1016/j.ecss.2016.08.018. ISSN 0272-7714.
- Spalding M.; McIvor A.; Tonneijck F.H.; Tol S.; van Eijk P. (2014). "Mangroves for coastal defence. Guidelines for coastal managers & policy makers" (PDF). Wetlands International and The Nature Conservancy.
- Weston, Nathaniel B. (2013). "Declining Sediments and Rising Seas: an Unfortunate Convergence for Tidal Wetlands". Estuaries and Coasts. 37 (1): 1–23. doi:10.1007/s12237-013-9654-8. ISSN 1559-2723.
- Wong, Poh Poh; Losado, I.J.; Gattuso, J.-P.; Hinkel, Jochen (2014). "Coastal Systems and Low-Lying Areas" (PDF). Climate Change 2014: Impacts, Adaptation, and Vulnerability. New York: Cambridge University Press.
- Smith, Lauren (2016-06-15). "Extinct: Bramble Cay melomys". Australian Geographic. Retrieved 2016-06-17.
- Hannam, Peter (2019-02-19). "'Our little brown rat': first climate change-caused mammal extinction". The Sydney Morning Herald. Retrieved 2019-06-25.
- Fletcher, Cameron; Taylor, BM; Rambaldi, AN; Harman, BP; Heyenga, S; Ganegodage, KR; Lipkin, F; McAllister, RRJ (2013). Costs and coasts: an empirical assessment of physical and institutional climate adaptation pathways (PDF). Cold Coast: the National Climate Change Adaptation Research Facility.
- Sovacool, Benjamin K. (2011). "Hard and soft paths for climate change adaptation" (PDF). Climate Policy. 11.
- Kimmelman, Michael; Haner, Josh (2017-06-15). "The Dutch Have Solutions to Rising Seas. The World Is Watching". The New York Times. ISSN 0362-4331. Retrieved 2019-02-02.
- "Dutch draw up drastic measures to defend coast against rising seas". New York Times. 3 September 2008.
- "$500 million, 5-year plan to help Miami Beach withstand sea-level rise". Homeland security news wire. 6 April 2015.
- "Climate Change, Sea Level Rise Spurring Beach Erosion". Climate Central. 2012.
- Grecequet, Martina; Noble, Ian; Hellmann, Jessica (2017-11-16). "Many small island nations can adapt to climate change with global support". The Conversation. Retrieved 2019-02-02.
- "Adaptation to Sea Level Rise". UN Environment. 2018-01-11. Retrieved 2019-02-02.
- Englander, John (May 3, 2019). "As seas rise, Indonesia is moving its capital city. Other cities should take note". The Washington Post. Retrieved 5 May 2019.
- Rosane, Olivia (May 3, 2019). "Indonesia Will Move its Capital from Fast-Sinking Jakarta". Ecowatch. Retrieved 5 May 2019.
- IPCC AR4 WG1 (2007), Solomon, S.; Qin, D.; Manning, M.; Chen, Z.; Marquis, M.; Averyt, K.B.; Tignor, M.; Miller, H.L. (eds.), Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88009-1 (pb: 978-0-521-70596-7).
- IPCC AR4 WG2 (2007), Parry, M.L.; Canziani, O.F.; Palutikof, J.P.; van der Linden, P.J.; Hanson, C.E. (eds.), Climate Change 2007: Impacts, Adaptation and Vulnerability, Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88010-7 (pb: 978-0-521-70597-4).
- IPCC AR5 WG2 (2014), C.B.Field; V.R.Barros; D.J.Dokken; K.J.Mach; M.D. Mastrandrea (eds.), Climate Change 2014: Impacts, Adaptation and Vulnerability, Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press
- IPCC AR5 WG1 (2013), Stocker, T.F.; Qin, G.; Plattner, K.; Tignor, M.; Allen, S.K.; Boschung, J.; Nauels, A.; Xia, Y. (eds.), Climate Change 2013: The Physical Science Basis, Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88009-1 (pb: 978-0-521-70596-7).
- USGCRP (2017), Wuebbles, D.J.; Fahey, D.W.; Hibbard, K.A.; Dokken, D.J.; Stewart, B.C.; T.K., Maycock (eds.), Climate Science Special Report: Fourth National Climate Assessment, Volume I
- Bucx, T.; Marchand, M.; Makaske, A.; van de Guchte, C. (December 2010), Comparative assessment of the vulnerability and resilience of 10 deltas: synthesis report, Delta Alliance report number 1, Delft-Wageningen, The Netherlands: Delta Alliance International, ISBN 978-94-90070-39-7
- Hanson, S.; Nicholls, R.; Ranger, N.; Hallegatte, S.; Corfee-Morlot, J.; Herweijer, C.; Chateau, J. (2011), "A global ranking of port cities with high exposure to climate extremes", Climatic Change, 104 (1): 89–111, doi:10.1007/s10584-010-9977-4
- Tessler, Z. D.; Vörösmarty, C. J.; Grossberg, M.; Gladkova, I.; Aizenman, H.; Syvitski, J. P. M.; Foufoula-Georgiou, E. (2015), "Profiling risk and sustainability in coastal deltas of the world", Science, 349 (6248): 638–43, Bibcode:2015Sci...349..638T, doi:10.1126/science.aab3574, PMID 26250684
- Byravan, S.; Rajan, S. C. (2010). "The ethical implications of sea-level rise due to climate change". Ethics and International Affairs. 24 (3): 239–60. doi:10.1111/j.1747-7093.2010.00266.x.
- Emery, K.O. & D. G. Aubrey (1991). Sea levels, land levels, and tide gauges. New York: Springer-Verlag. ISBN 978-0-387-97449-1.
- Menefee, Samuel Pyeatt (1991). "Half Seas Over: The Impact of Sea Level Rise on International Law and Policy". U.C.L.A. Journal of Environmental Law & Policy. 9: 175–218.
- Warrick, R.A.; Provost, C.L.; Meier, M.F.; Oerlemans, J.; Woodworth, P.L. (1996). "Changes in sea level". In Houghton, John Theodore (ed.). Climate Change 1995: The Science of Climate Change. Cambridge, UK: Cambridge University Press. pp. 359–405. ISBN 978-0-521-56436-6.
- National Snow and Ice Data Center (February 19, 2018), "Contribution of the Cryosphere to Changes in Sea Level". Accessed October 7, 2018
- Maumoon Abdul Gayoom. "Address by his Excellency Mr. Maumoon Abdul Gayoom, President of the Republic of Maldives, at the nineteenth special session of the United Nations General Assembly for the purpose of an overall review and appraisal of the implementation of Agenda 21 – June 24, 1997". Archived from the original on June 13, 2006. Retrieved 2006-01-06.
- Pilkey, O.H. and Young, R, The Rising Sea, Shearwater, 2009 ISBN 978-1-59726-191-3
- Douglas, Bruce C. (1995). "Global sea level change: Determination and interpretation". Reviews of Geophysics. 33: 1425–1432. Bibcode:1995RvGeo..33.1425D. doi:10.1029/95RG00355.
- Angela Williams (2008). "Turning the Tide: Recognizing Climate Change Refugees in International Law". Law & Policy. 30 (4).
- "Why does sea level rise threaten marine ecosystems?". Marine Conservation Institute. Retrieved 2018-10-07.
- NASA Satellite Data 1993-present
- Fourth National Climate Assessment Sea Level Rise Key Message
- Incorporating Sea Level Change Scenarios at the Local Level Outlines eight steps a community can take to develop site-appropriate scenarios
- The Global Sea Level Observing System (GLOSS)
- Sea Level Rise Viewer (NOAA)
- National Geographic film on YouTube, based on the 2007 book Six Degrees: Our Future on a Hotter Planet
Cosmic Microwave Background Radiation
The Cosmic Microwave Background Radiation (CMBR) or Cosmic Background Radiation (CBR) is the afterglow from the early universe and provides strong evidence for the theory of a hot Big Bang. This article looks at what the CBR is, how it was detected and why it is important for cosmology.
The Cosmic Microwave Background Radiation (CBR) was discovered by accident at the Bell Labs Horn Antenna by Penzias and Wilson in 1965. While working with microwave communication technology, Penzias and Wilson found a background noise, uniform in all directions, which they could not account for. Dicke and Peebles of Princeton University, who were working on the hot Big Bang theory at the time, realised that this "noise" was the radiation left over from the Big Bang. The radiation was measured at a temperature of 2.725 K and was found to be the same in every direction (isotropic).
The temperature of the CBR was first accurately measured by the Cosmic Background Explorer (COBE) satellite in 1990. This showed that the CBR has a blackbody spectrum (as predicted by Big Bang theory) and that there are very tiny variations (±0.003 K) across the visible universe. These slight variations reflect conditions in the early universe and have been interpreted as evidence of a "lumpy" universe, also predicted by the Big Bang theory; they are thought to be the seeds of galaxies and other large structures in the universe. The CBR provides very strong evidence for the hot Big Bang theory, as it indicates that the early universe was once very hot.
COBE microwave measurements show a redshift and blueshift pattern indicating that our galaxy is travelling through space. From these readings we can deduce that the speed of our galaxy relative to the CBR is around 600 km s⁻¹ towards the Virgo cluster.
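As a rough check on this figure, our motion produces a dipole pattern in the measured temperature whose amplitude is about T0 × v/c (to first order in v/c). The sketch below computes it from the values quoted above:

```python
# Approximate temperature dipole produced by our motion relative to the CBR,
# using the first-order Doppler relation dT ~ T0 * v / c.
T0_K = 2.725          # mean CBR temperature quoted in the text
V_KM_S = 600.0        # quoted speed of our galaxy relative to the CBR
C_KM_S = 299_792.458  # speed of light

dipole_mK = T0_K * (V_KM_S / C_KM_S) * 1000.0
print(f"Dipole amplitude: about {dipole_mK:.1f} mK")  # a few millikelvin warmer/cooler along the motion
```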
Maps of these measurements are usually shown as a false-colour image of the entire sky projected onto an oval (similar to a map of the Earth). The Milky Way extends horizontally across the centre of the image and has been subtracted from the results. The variations in CBR temperature are shown as different colours; these variations are the "lumps" (technically, density ripples) in the early universe. As the early universe expanded, these clumps of matter went on to form galaxies and other large structures.
What is the Cosmic Background Radiation?
The microwave background radiation is believed to be the "visible" remnant from the Big Bang. In the very early stages of the universe the energy density was far greater than the mass density (there was more energy per unit volume than matter). This stage is called the radiation era (energy = radiation). During the radiation era, photons were flying around with so much energy that they ionised any matter that tried to form.
As the universe expanded, the radiation cooled and matter was able to form without being ionised, so the matter density grew and became dominant. This stage is known as the matter era. The transitional phase between the two eras is called the age of recombination, during which the universe had expanded and cooled to around 3000 K. From this point photons could exist without ionising atoms and instead began travelling freely across the universe.
Light travels at a finite speed and takes time to reach us: when we look at the Sun, for example, we see it as it was 8 minutes ago. When we look at the Andromeda galaxy, the light has taken about 2.2 million years to reach our galaxy, so we see Andromeda as it was 2.2 million years ago. In exactly the same way, when we look at the CBR we see the universe as it was 13.7 billion years ago, just after recombination. Because of the expansion of the universe, the light originally emitted has been redshifted into microwave wavelengths, which is why the CBR is observed in the microwave part of the spectrum.
Why is the Cosmic Background Radiation Important?
The temperature of the CBR today has been measured at 2.725K. Because the universe is expanding, we can express the temperature of the universe through time as a function of the scale factor.
T = T0 / a    (Equation 32 - temperature as a function of the scale factor a)
This can also be expressed in terms of redshift:
T = T0 (1 + z)    (Equation 33 - temperature as a function of redshift)
where T0 represents the current temperature of the universe. This is a far more useful form because we do not always know a time, but we can measure redshift from observations of spectra. For example, at a redshift of z = 1 the temperature of the universe was T = 2.725 K × (1 + 1) = 5.45 K.
We can estimate the redshift of the CBR to be approximately z = 1500, so substituting this value into Equation 33 will give the temperature of the universe at the time the CBR was formed. This gives:
T = T0 (1 + z) = 2.725 K × (1 + 1500) ≈ 4090 K    (Equation 34 - temperature of the universe when the CBR was emitted)
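The same relation is easy to evaluate for any redshift; the short sketch below reproduces the two values used in the text:

```python
# Temperature of the universe as a function of redshift, following Equation 33.
T0_K = 2.725  # present-day CBR temperature

def temperature_at_redshift(z):
    """T(z) = T0 * (1 + z): the radiation temperature scales with (1 + z)."""
    return T0_K * (1 + z)

print(temperature_at_redshift(1))     # 5.45 K, the value quoted for z = 1
print(temperature_at_redshift(1500))  # ~4090 K, around the time the CBR was emitted
```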
Importance of the CBR to Modern Cosmology
Observations of the CBR have also revealed a pattern in its polarisation. This polarisation was an important discovery because it had been predicted by the gravitational instability theory, which holds that slight variations in the density of the early universe grow, over time, into larger structures. The polarisation not only corroborated the measurements of tiny lumps in the CBR, it also validated our current understanding and theories of the universe.
The Wilkinson Microwave Anisotropy Probe (WMAP), a successor to COBE, has extended the original measurements with an even greater resolution, which allows us to see greater detail in the variations of the CBR temperature and provides accurate data for models of the shape, content and evolution of the universe. This data can then be used to test the Big Bang theory, inflation theory and any other theory of the formation of the universe.
Analysis of the CBR by WMAP has revealed that there are distinct peaks in its power spectrum. These peaks have been attributed to dark matter and have given evidence to support the inflationary theory of the universe.
The existence of the CBR not only confirms that our models for the evolution of the early universe are valid; through its analysis we are also able to refine values for cosmological constants and ultimately move one step closer to understanding the universe in which we live.
Future of CBR analysis
Scheduled for launch in April 2009, the Planck Surveyor will pick up where WMAP left off. It will record and analyse the CBR at a higher resolution and will investigate the polarisation of the CBR, gravitational lensing, the geometry of the universe, and the cataloguing of galaxy clusters through the Sunyaev-Zel'dovich effect (distortions caused by high-energy electrons).
We eagerly await the first results from Planck Surveyor and the implications for Cosmology!
Binary – base 2. We're used to decimal, which is base 10, meaning we can use the digits 0 through 9. In base 2, or binary, we use only 0 and 1. The reason computers use binary is that people tried and failed to make stable base 10 vacuum tubes back in the day. It's much easier to represent just two possible states instead of 10: with fluctuations in voltage, a 3 could look like a 4, an 8 could look like a 7, and so on. A binary digit can mean 0 or 1, false or true, no or yes, or it can be one bit in something larger. A 0 or 1 in binary is called a bit. ASCII uses 8 (or sometimes 7) bits, or one byte, to represent basic text. Unicode, in its common UTF-16 encoding, uses 2 bytes, or 16 bits, to represent basic alphanumeric text as well as many other kinds of characters, such as other languages and emojis.
Binaries – not to be confused with binary (the base 2 number system). When people refer to software as binaries, they are referring to compiled executables; in other words, finished programs you can run. To download a binary doesn't mean to download any old random 1s and 0s; it means you're downloading code that has been compiled into something your computer can execute. A common shortening of binary is bin.
Octal – base 8, using the numbers 0-7.
Hexadecimal – base 16, often used as an easier way of showing binary data, such as a memory address. Hexadecimal is used much more often than octal for displaying things. Data is not stored as hexadecimal; we only view hexadecimal representations of binary data. 2^4 is 16, so a single hexadecimal character can be used to represent 4 binary bits. Hexadecimal uses the numbers 0-9 and the letters A-F. A is 10, B is 11, C is 12, and so on. An example of hexadecimal is 0x416C616E. The 0x is a hexadecimal prefix which tells you that the stuff following should be interpreted as hex.
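As a quick illustration (this snippet is mine, not part of the original glossary), Python's built-in conversions show the same value in different bases and decode the example 0x416C616E as ASCII bytes:

value = 0x416C616E                                # a hexadecimal literal
print(bin(value))                                  # the same value written out in binary
print(hex(value))                                  # 0x416c616e again
print(value.to_bytes(4, "big").decode("ascii"))    # every two hex digits form one byte; this prints "Alan"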
Character encoding – a way of representing text using combinations of 1s and 0s. Some examples of character encoding are UTF-8 and UTF-16.
ASCII – a way of encoding characters, just basic text, numbers, and simple punctuation. An ASCII character is 8 bits, or one byte. There is also an exception, called 7-bit ASCII, but I won’t concentrate on that.
Unicode – unlike ASCII, Unicode has support for thousands of unique characters. It includes all that’s in ASCII, in addition to other language characters, as well as emojis.
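As a short illustrative example (not from the original text), the same word takes up a different number of bytes depending on which encoding you pick:

word = "héllo"
print(word.encode("utf-8"))                             # é becomes two bytes: b'h\xc3\xa9llo'
print(word.encode("ascii", errors="replace"))           # plain ASCII cannot represent é, so it gets replaced
print(len(word.encode("utf-8")), len(word.encode("utf-16-le")))  # 6 bytes in UTF-8 versus 10 in UTF-16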
Base64 – using plain alphanumeric characters to represent binary data. Base64 consists of 64 different characters – A-Z, a-z, 0-9, /, and +. Any data that can be stored on a computer can be represented in base64, but it’s typically used for binary data (or steganography/hacking, but that’s another topic entirely). Base64 makes text or files look like gibberish, but don’t be fooled – base64 is not encryption. It is trivially easy to decode base64-encoded data.
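A quick illustrative demonstration (mine, not the author's) of how easily base64 round-trips in Python:

import base64

encoded = base64.b64encode(b"hunter2")      # bytes in, base64 bytes out
print(encoded)                               # b'aHVudGVyMg==' - looks like gibberish, but it is not encrypted
print(base64.b64decode(encoded))             # b'hunter2' - trivially recovered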
Text vs. binary data – there are two main classifications of data: text and binary. Text data is pretty straightforward. Binary data uses formats that are not human-readable but make sense to programs. An example of text data would be a .txt file, but something like a jpg image file is stored using a binary data format. If you tried to view a jpg in a text editor, it would look like gibberish; a hex editor at least shows you the raw bytes.
File IO – file IO means file input and output. Your first programs might not deal with opening or writing to files, but eventually you will need to, because values that exist only in RAM disappear when the program ends. You want to save and load things now and then. You can make a file, delete a file, open a file, close a file, overwrite a file, or append to a file. When dealing with file IO, you will need to add exception handling to your code. Maybe your code will open a file called list_of_bands.txt. But what if that file doesn't exist? You need to account for things like that.
When a file is opened, it may be locked so that nothing else can use it (exactly how depends on the operating system and the mode it was opened in), so you want to make sure to close a file when you're done with it. Once a file is open, you can do things like read its lines and store them as strings, or whatever you want to do with them. Just beware of running into the EOF, or End Of File: continuing to parse a file when there's nothing more in it can lead to problems.
You can perform actions like find and replace, or parse the whole file in its entirety. You can also append to a file, which is to add stuff to the end without deleting the rest of the contents.
If you write to a file, make sure you know the difference between overwriting and appending; otherwise you might delete all the existing contents, and that could be a bad thing. If you don't have permission to edit a file, you can get an error from that too.
There are different modes for opening a file. You can open something as read-only, or as writing to it. If all you want to do is load contents from a file into variables in RAM, then open it as read-only.
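Here is a minimal sketch of those points in Python (the filename comes from the earlier example; everything else is illustrative): open the file read-only, handle the missing-file case, and let the with-block close the file for you.

try:
    with open("list_of_bands.txt", "r") as f:          # "r" means read-only mode
        lines = [line.rstrip("\n") for line in f]       # load every line into a list of strings
except FileNotFoundError:
    print("list_of_bands.txt does not exist yet")       # the situation the text warns about
else:
    print("loaded", len(lines), "band names")

Opening the same file with "a" instead appends to the end, while "w" overwrites it, which is the distinction described above.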
Common types of files you will deal with for file IO include .txt, .csv, .xml, and .json. These are all text-based files.
If you want to read or write other kinds of files, such as GIFs, that is referred to as binary IO, because you are dealing with binary data formats.
If you study computer science, you might have to write a file parser from scratch, but in the real world, you can use a parser that is either built into a standard library or on GitHub or something.
File IO isn’t the only kind of IO though. The kind of IO you will start with in programming is outputting text to the window and getting input from the keyboard.
Absolute vs. relative path – cat.jpg is a relative path in the current directory.
../images/cat.jpg is a relative path that goes up one level and then down into the images folder.
A relative path is interpreted in relation to the current directory. An absolute path is an entire path, such as /Users/bob/Documents/images/cat.jpg
When you are making a website, don’t use absolute paths. Use relative ones. You might also want to make folders for separate things to keep it all organized.
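As a small illustrative snippet (the paths are made up), Python's pathlib shows the difference, resolving a relative path against the current directory:

from pathlib import Path

rel = Path("../images/cat.jpg")        # relative: up one level, then into images
print(rel.is_absolute())               # False
print(rel.resolve())                   # the absolute path, built from the current working directory
print(Path("/Users/bob/Documents/images/cat.jpg").is_absolute())  # True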
input() – the way to get user input in Python is with the input() function. You need to assign the return value of the function to a variable if you want to keep it.
user_name = input("Enter your name: ")
print("Your name is " + user_name)
Parser – a parser is a program (or part of one) that reads through structured data and pulls out the pieces you care about. I've written a CSV parser that parses through CSV-structured files; I used it for something that required sorting and searching through football players. Quite often, languages will have built-in parsers for numerous kinds of files, such as CSV or JSON. You don't have to reinvent the wheel when you can just read your programming language's documentation and see if it already has what you need.
Generally speaking, you might want to write or implement a parser to load the contents of a file into RAM or to access select parts of it rather than all of it. Parsers go through the contents of a file, and they need to stop at the EOF, or End Of File.
If someone makes a video game, and they want to add the ability to save and load games, they need to create their own file structure for a game save. They might implement this using XML or JSON, and then create a parser for it to load the save file into RAM so that the user can play where they left off the last time they saved.
EOF – end of file. When you’ve reached the EOF, it’s time to stop parsing a file.
CSV – comma-separated values. A straightforward file format. You can open a CSV file in a program such as Microsoft Excel or LibreOffice Calc, and it will look like a spreadsheet, with rows and columns.
Here’s an example of CSV:
joe,23,555 main street
alice,39,456 country road
bob,27,123 prairie lane
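Assuming those three lines are saved in a file called people.csv (a made-up name), a minimal Python sketch using the built-in csv module handles the comma delimiter for you:

import csv

with open("people.csv", newline="") as f:
    for name, age, address in csv.reader(f):    # each row comes back split on the commas
        print(name, "is", age, "and lives at", address)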
After a certain point, you might have too much data for a simple CSV file. At that point, you will want to move on and use databases instead, which are like spreadsheets on steroids. But to start, or for very simple programs, CSV is fine.
Delimiter – a way of breaking things up, or defining how things are broken up. When you hear the word delimiter, think separator. Comma-separated values (a.k.a. CSV), for example, are values delimited by commas. Command line arguments are delimited by spaces, such as this:
adder_program 4 5 7
In the above example, the arguments are 4, 5, and 7, because there are spaces in between them. But what about in the following example?
adder_program 45 7
In the above example, the command line arguments being passed to adder_program are 45 and 7. Because there is no space between the 4 and the 5, the shell treats them as part of the same argument, as there is no delimiter between the digits.
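Here is a rough sketch of what adder_program itself could look like in Python (the program is hypothetical); by the time your code runs, the shell has already split the arguments on the spaces:

import sys

numbers = [int(arg) for arg in sys.argv[1:]]   # sys.argv[0] is the program's own name
print(sum(numbers))                             # "adder_program 4 5 7" prints 16; "adder_program 45 7" prints 52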
XML – Extensible Markup Language. XML is a data exchange format, kind of like JSON. A lot of older software uses XML, whereas a lot of newer things use JSON instead; XML is still widely used, but it's on its way out, while JSON is on the up and up.
XML looks like HTML, but it’s not. You can make whatever tags you want. It’s just a way to structure your data with tags. Many languages will have built-in parsers for XML, so as long as you import it in your program, such as in Python, you can easily extract the exact piece of the file that you want.
Let’s say you’re making blog software and it takes XML data and combines it with a template. You want to structure your data in a way that makes sense for the given use-case.
<article>
<title>Cats are great</title>
<caption>Picture of a cool cat</caption>
<body>Cats are awesome. This is XML.</body>
</article>
I could make any tag I want, even a <cat> tag or <whatever> tag if I really felt like it. Just like how you can name variables in a programming language, you can use whatever identifiers you want in XML. And please don't get the wrong idea: the above XML example is not HTML, nor is it a web page. It is merely a structured representation of data. In web development, you want to separate content from layout, and XML is one way you can achieve that. XML says what the content of the page will be, but it says absolutely nothing about what the page will look like. There's also something called XHTML, which reformulated HTML 4 as XML; however, XHTML has become obsolete and has been replaced by HTML5. Not only that, but JSON is becoming more popular than XML these days.
For the blog software I was talking about before, if you wanted to put a particular part of the XML into the template, you’d do a find and replace in the HTML template that would retrieve the title to put it into the title placeholder, and so on for all the other parts of it too.
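Here is a rough sketch of that idea (the <article> root tag, the template string, and the whole workflow are illustrative assumptions) using Python's built-in xml.etree.ElementTree parser:

import xml.etree.ElementTree as ET

xml_text = """<article>
  <title>Cats are great</title>
  <caption>Picture of a cool cat</caption>
  <body>Cats are awesome. This is XML.</body>
</article>"""

article = ET.fromstring(xml_text)                        # parse the XML into a tree of elements
template = "<h1>{title}</h1>\n<p>{body}</p>"             # a stand-in for the HTML template
page = template.format(title=article.find("title").text,
                       body=article.find("body").text)   # drop the XML content into the placeholders
print(page)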
XML is like a dictionary, but it’s a little repetitious because you say each tag twice – once for opening and once for closing. I prefer JSON.
Here is a JSON example of the same article data from the XML example:
{
"title": "Cats are great",
"caption": "Picture of a cool cat",
"body": "Cats are awesome. This is JSON."
}
Compared to XML, it seems a little less repetitious. It also looks a little more like JS instead of HTML. You can also perform nesting, or have arrays and whatnot too. JSON schemas don’t have to be strict. They can be flexible, and as long as your JSON has certain things, it doesn’t always have to be precisely the same as your schema (depending on how you set it up). But depending on what you’re doing, you might want to be more rigid.
From PyPI (the Python package index), you can use this command to install a JSON schema/validation package:
pip install jsonschema
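As a rough sketch (the schema and the article data below are invented for illustration), validating the blog-post JSON from earlier against a schema with that package looks something like this:

import json
from jsonschema import validate, ValidationError   # installed with: pip install jsonschema

schema = {
    "type": "object",
    "required": ["title", "caption", "body"],       # an article must at least have these keys
    "properties": {
        "title": {"type": "string"},
        "caption": {"type": "string"},
        "body": {"type": "string"},
    },
}
article = json.loads('{"title": "Cats are great", "caption": "Picture of a cool cat", "body": "Cats are awesome. This is JSON."}')
try:
    validate(instance=article, schema=schema)        # raises ValidationError if the JSON does not fit
    print("valid article")
except ValidationError as err:
    print("invalid article:", err.message)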
My WordPress websites have JSON APIs built in, thanks to the CMS. I didn't need to lift a finger; it's already there. Here is an example of a website I've made, and how it has an API: https://smartfinancialresearch.com/wp-json/
Schema – a defined structure for something, such as JSON. If I have a JSON file for an article for a blog, the schema might define that there will be a title, date, author, and body text for it. If a file doesn’t have the right structure, it can’t be validated with the schema. Databases also use schemas.
Validation – to validate something is to check it against a set of expectations. In the context of schemas, to validate JSON against a schema is to see if it meets the criteria for how the JSON is supposed to be structured.
Database – there are many different types of databases, but I'm going to concentrate on relational databases here, as they are the most common. A relational database is made up of multiple tables, each of which can be designated for a separate purpose. A table consists of rows and columns. You can make whatever categories you want for the columns, such as a user table with columns for username, password (which should be hashed instead of being stored in an insecure way), email, and so on. After making the table with columns, you can specify further info about what kind of data should be in each column, and whether it's optional or not. A record, or row, is an entry in the table, such as when a user makes a new account on your website. Each row should have a different primary key to uniquely identify it from other rows.
With database tables, you can use SQL queries to do things like drop a table, select information from a table, search, update, and delete. Some database engines include MySQL, PostgreSQL, and SQLite; tools like phpMyAdmin and MySQL Workbench help you manage them, and bundles such as Bitnami's LAMP VM or local WAMP/MAMP stacks package a database together with a web server.
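To make the table, row, and query ideas concrete, here is a small hypothetical sketch using Python's built-in sqlite3 module (the table, columns, and data are invented; real passwords should be hashed, as noted above):

import sqlite3

conn = sqlite3.connect(":memory:")      # a throwaway in-memory database, just for the example
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT NOT NULL, email TEXT)")
conn.execute("INSERT INTO users (username, email) VALUES (?, ?)",
             ("alice", "alice@example.com"))          # ? placeholders guard against SQL injection
conn.commit()
for row in conn.execute("SELECT id, username, email FROM users"):
    print(row)                                        # each row comes back as a tuple
conn.close()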
PRACTICE PROBLEMS SURFACE AREA ANSWERS
Surface area word problems (practice) | Khan Academy
Practice: Surface area word problems. Practice: Volume and surface area word problems. Surface area review. Solve word problems that involve surface area of pyramids and prisms.
Surface area | High school geometry (practice) | Khan Academy
Practice finding the surface area of 3D objects.
IXL | Surface area of cubes and prisms | 7th grade math
Improve your math knowledge with free questions in "Surface area of cubes and prisms" and thousands of other math skills.
Ninth grade Lesson Surface Area and Area Differentiation
Next, I give my students a worksheet with several surface area problems. Since these are multistep problems, I think it is important for students to gain as much practice as possible before moving on. I tell students they must attend to precision, drawing each type of face when working through these problems. (Author: Jessica Uy)
Surface Area Worksheets
Worksheets cover the surface area of cones, spheres and hemispheres, pyramids, and mixed and compound shapes. Each cone worksheet has 9 problems finding the surface area of a cone, with additional moderate-level sheets. The sphere-and-hemisphere worksheets mix both shapes, with nine questions in each handout, and also come in moderate and difficult versions. See more on mathworksheets4kids.
Geometry 2 – Unit Seven: Surface Area & Volume, Practice
Geometry 2 – Unit Seven: Surface Area & Volume, Practice (PDF). In Problems #1-#4, find the surface area and volume of each prism: 1. a cube, 9 cm × 9 cm × 9 cm; 2. a rectangular prism, 11 mm × 5 mm × 3 mm; 3. a triangular prism (10 ft, 14.5 ft, 6 ft); 4. a triangular prism (16 in, 30 in, 9 in). 5. A rectangular prism has a surface area of 448 cm². Its length is 14
Calculus II - Surface Area (Practice Problems)
Here is a set of practice problems to accompany the Surface Area section of the Applications of Integrals chapter of the notes for Paul Dawkins Calculus II course at Lamar University.
Quiz & Worksheet - Lateral Surface Area | Study
About This Quiz & Worksheet. Review the lateral surface area of different shapes with this quiz. This quiz also includes problems related to lateral surface area that you will need to solve.
Surface Area of a Sphere (Solutions, Worksheets, Examples
Calculate the surface area of a sphere, calculate the surface area of a hemisphere, solve problems about the surface area of spheres, and prove the formula for the surface area of a sphere. Surface area topics covered: spheres, rectangular solids, prisms, cylinders, cones, and pyramids.
Volume and Surface Area Questions with Answers
If you have difficulty solving these questions, you can visit the surface area and volume problems with solutions to work through them. You should also learn how to use the formulas for surface area and volume, with worked examples, for competitive exams.
GIS Vegetation Mapping of the Jama Coaque Reserve
Third Millennium Alliance
Tropical rainforests provide a significant amount of resources compared with any other ecosystem on Earth, and because of their fragility they are the subject of important conservation efforts. Biodiversity conservation efforts are also important due to the forests' high number of endemic species and human-caused habitat loss (Helmer et al., 2002). The importance of tropical rainforests can be exemplified in several ways. Their high species richness offers a greater opportunity and capacity for change, which is important in the unpredictable environment of the rainforest (Brunig, 1977). In addition, tropical forests play a major role in the nutrient cycle because the vegetation acts as a filter of food and nutrients. Stands of long-lived trees both store a large amount of nutrients in their trunks and release nutrients through the annual fall of leafy wood litter (Brunig, 1977). Their efficiency in conserving and filtering nutrients is evident from the low amounts of bioelements in the stream water (Brunig, 1977).
Human interference has had a negative effect on tropical rainforests. Degradation of rainforests by deforestation cannot be reversed, and depending on the crop that is planted, it may take hundreds of years to rebuild the soil's ecosystem (Brunig, 1977). In many cases the complex nutrient cycle once held in place by the rainforest can never be regained (Brunig, 1977). Simple man-made forests with a reduced canopy cover will alter the capacity of exchange with the environment, thereby altering the delicate balance tropical rainforests have created over many years (Brunig, 1977). Less vegetation cover results in reduced cloud contact and increased evapotranspiration (Lawton et al., 2001). This, along with warmer temperatures caused by climate change, will affect the location of cloud forests and all the endemic species that live within them (Still et al., 1999). If humans convert forests to vegetation cover types with a lower biomass, then large amounts of carbon dioxide will be released into the atmosphere, affecting not just local areas, but regional and global ones as well (Brunig, 1977).
This study focuses on the forests located in the Jama Coaque Reserve in the coastal province of Manabí, Ecuador. The reserve is a biodiversity hotspot and is home to some of the last remaining patches of both the tropical moist evergreen forest and the premontane cloud forest. Tropical moist forests are richer in biodiversity than cloud forests and occur at lower elevations. They are the most structurally complex and diverse of the land ecosystems and are found in regions where the average temperatures of the three warmest and three coldest months do not differ by more than 5°C ("What is a tropical rainforest?", 2015). Rainfall is evenly distributed, which allows for the growth of broad-leaved evergreen trees ("What is a tropical rainforest?", 2015). However, many tropical moist forests have distinct dry and rainy seasons. Another important characteristic of tropical moist forests is their multiple layers of vegetation, from understory shrubs to trees of more than 40 m in height ("What is a tropical rainforest?", 2015).
Cloud forests are a subset of tropical rainforests that occur where mountains are
frequently enveloped by orographic clouds and convective rainfall (Still et al., 1999).
Their immersion in clouds reduces solar radiation, increases humidity, and reduces
transpiration (Lawton et al., 2001). They are also particularly important in local and
regional watersheds because they regulate the seasonal release of precipitation (Still et
al., 1999). For example, horizontal precipitation on vegetation provides a source of water
during the dry season by allowing water droplets to condense directly on plants that are
surrounded by clouds (Still et al., 1999).
Vegetation mapping, which is the main focus of this project, is important because
it provides information for managing landscapes in order to sustain their biodiversity and
the structure and function of their ecosystems (Helmer et al., 2002). Maps are critical for biodiversity planning because vegetation is linked to species composition and habitat types, and as ecosystems change over time, so will their inhabitants. Vegetation mapping at the Jama Coaque Reserve is also important because it will aid in the planning and execution of future research projects.
The problem with mapping tropical cloud forests, however, is that typical methods such as remote sensing and satellite imagery are difficult to use, for a few reasons. First, these forest environments typically have a complex topography. This
means that ecological zones and illumination angles change over small areas leading to
spectral confusion in which varied vegetation communities have similar spectral
signatures (Helmer et al., 2002). Second, the persistent and intermittent cloud cover
requires the use of satellite images from different dates further complicating image-based
mapping (Helmer et al., 2002). Finally, the tendency to group similar but distinct
ecological zones together may extend the apparent distribution of endemic species with
narrow distribution (Helmer et al., 2002).
The first goal of this project is to create a map that delineates the boundaries
between the five main types of vegetation cover at the Jama Coaque Reserve (JCR):
primary and secondary tropical moist evergreen forests, primary and secondary
premontane cloud forests, and agroforestry plots. The second goal is to determine
the average elevation at which the cloud forest begins. It has previously been predicted
by members of the reserve that the boundary occurs at approximately 525m of elevation.
Methodology and Data Analysis
The first step in the mapping process was learning how to identify the five
different forest types in the reserve. Table 1, created by the Fall Session of 2014 intern
Justine Revenaz, demonstrates how to recognize the differences between the main
vegetation types in the field. In addition to the identification table, both the EcoHike and
hikes led by the local trail guide helped with learning how to recognize the different forest types in the field.
TABLE 1. Description of the five different forest types at the Jama Coaque Reserve.
Modified from Justine Revenaz (2014).
The primary tropical moist evergreen forest is characterized by larger, older trees
and a high canopy. It also has a short understory that is relatively easy to walk through.
On the other hand, the secondary tropical moist forest is composed of mostly young trees
and saplings. It has a lower canopy than the primary moist forest and an impenetrable
understory. The transition to the premontane cloud forest occurs with the increased
presence of moss and epiphytes on trees and an increased amount of ferns and shrubs on
the ground. Selaginella, an ancient genus and a fern ally, is abundant in the reserve and
is a common understory plant (Kricher, 1999). In the cloud forest the canopy is dense and
the trees are smaller allowing little to no sunlight to shine through. The secondary
premontane cloud forest has a slightly reduced amount of epiphytes. The trees tend to be
smaller, allowing more light to shine through the canopy. Groupings of crop plants mark the agroforestry plots. Usually these include stands or rows of banana, cacao, orange,
citrus, lime, or coffee plants.
Mapping began by hiking the trails within the reserve. Every five minutes a GPS
waypoint was marked denoting the type of forest surrounding the trail. The vegetation
type at the point was analyzed based on Table 1, and the data was stored in the GPS
according to Table 2. If the type of forest changed within those five minutes, the timer
would stop and a waypoint of the new vegetation type would be marked. The counting
would then be reset to time zero and a new point would be taken after an additional five
minutes. If an agroforestry patch was only composed of a few trees then only one point
would be marked in the middle, but if it was a larger plot a waypoint would be marked at
TABLE 2. GPS Waypoint Forest Classifications
Primary tropical moist evergreen: TMP
Secondary tropical moist evergreen: TMS
Primary premontane cloud: PC
Secondary premontane cloud: PS
Agroforestry patch: AGR
Transition zone: TRNS
Modified from Justine Revenaz (2014).
In order to create the final maps the waypoints were entered as a vector layer into
the QGIS software and combined with other layers of the reserve such as trails, elevation,
and the theorized cloud forest. The forest type layers were created by first overlapping
the trail and GPS waypoint layers. The split tool was then used to divide the trail layer
into multiple layers wherever one forest section stopped and a new one started. This was
determined by looking at the GPS waypoints.
Two vegetation maps were created of the reserve. The first (Fig. 1) is a map of the
trails of the reserve color-coded by forest type and it includes all of the GPS waypoints
taken. It also includes the area previously theorized by reserve members to be the cloud
forest. The area of theorized cloud forest is considered any area above 525m. The second
map (Fig. 2) is similar to the first but does not include the waypoints. A box plot was also
created that shows the range in elevation for each forest type (Fig. 3).
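As an illustration of how a figure like Figure 3 could be reproduced (the file name and column names below are assumptions, and the original analysis may well have used different tools), the waypoint elevations can be grouped by forest code and plotted in Python with pandas and matplotlib:

import pandas as pd
import matplotlib.pyplot as plt

# Assumed export of the GPS waypoints: one row per waypoint,
# with its Table 2 forest code and its elevation in metres.
waypoints = pd.read_csv("jcr_waypoints.csv")                     # columns assumed: forest_type, elevation_m
print(waypoints.groupby("forest_type")["elevation_m"].mean())    # average elevation per forest type
waypoints.boxplot(column="elevation_m", by="forest_type")        # one box per forest type, as in Figure 3
plt.show()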
The beginning of the cloud forest ranges from 450.21m to 560.95m and begins at
an average of 518.38m. On both the Ronquillo and Emergentes trails on the north side of
the reserve, the cloud forest begins at a much lower elevation than the theorized value
(Figure 1). Removing those two data points, the average beginning of the cloud forest is
Figure 1. Map of the Jama Coaque Reserve trail system color-coded by vegetation type. Included are all of the GPS waypoints taken during data collection and the area that is theorized to be cloud forest. The forest color-coding was created by connecting the data collected by the GPS.
Figure 2. Map of the forest types along the trails of the Jama Coaque Reserve without the waypoints.
Figure 3. Box plot showing the range in elevations for each forest type. The points on the left of each box represent each of the GPS waypoints taken for that forest type. The median is shown as a dark line and the mean is shown as a dashed line within the box. The whiskers represent the minimum and maximum values of the datasets. The average elevation for agroforestry is 421.57m, for secondary moist is 359.18m, for primary moist is 381.48m, for transition is 484.04m, for secondary cloud is 619.23m, and for primary cloud is 579.97m.
Discussion and Conclusion
The average beginning of the cloud forest at 518.38m agrees with the previous
predictions that the cloud forest begins at around 525m. This trend does not apply to the
trails at the north side of the reserve. For example, at the Emergentes trail the cloud forest
begins at 450.21m and at the intersection of the Ronquillo and El Dorado trails the cloud
forest begins at 467.83m. One possible explanation is that the mountain ridge is lower at
this part of the reserve. This means that orographic precipitation may also occur at a lower elevation, therefore affecting the location of the cloud forest. Another possible
explanation is that wind patterns on the northern end of the reserve are different enough
that they lead to cloud formation at a lower elevation. More research is needed in order to
provide a more detailed explanation.
Both the box plot and the maps display a general pattern of increasing elevation
amongst the different forest types. From lower to higher elevation the pattern goes
secondary tropical moist forest, primary tropical moist forest, transition zone, primary
cloud forest, and secondary cloud forest. The secondary cloud forest overlaps with the
primary cloud forest but is still located at higher elevations. This is due to the fact that the
highest points in the reserve are located at the ridge, which in many locations is also the
border of the reserve. The secondary cloud forest is mostly located in these areas of high
elevation where there is a neighboring pastureland or agricultural field.
Future research can be done in order to create a more detailed map of the reserve.
The type of mapping that has been done in this paper, which involves collecting GPS
points along the trails, has illuminated vertical patterns of forest change. In the future,
mapping in between the trails is needed in order to see if any patterns emerge laterally.
One possibility is to use the same method and collect GPS points by foot, but this time
traveling in between the trails. The main problem with this method is accessibility since
much of the forest floor is too dense to traverse by foot. Another possibility is the use of a
drone or unmanned aerial vehicle (UAV). Sugiura et al. (2005) developed a system that
involved mounting an imaging sensor on a UAV in order to generate a map of
crop status. Lucieer et al. (2010) also successfully used a UAV that provided ultra-high
resolution spatial data to map moss beds in Antarctica. In order to map the Jama Coaque reserve using UAVs, the variable topography and vegetation of the region
would need to be taken into account first.
References
Brunig, E. F. (1977). The Tropical Rain Forest: A Wasted Asset or an Essential Biospheric Resource? Ambio, 6(4), 187-191.
Helmer, E. H., Ramos, O., Lopez, T., Quiñones, M., & Diaz, W. (2002). Mapping the Forest Type and Land Cover of Puerto Rico, a Component of the Caribbean Biodiversity Hotspot. Caribbean Journal of Science, 38(3-4), 165-183.
Kricher, J. C. (1999). A neotropical companion: an introduction to the animals, plants, and ecosystems of the New World tropics. Princeton University Press.
Lawton, R. O., Nair, U. S., Pielke, R. A., & Welch, R. M. (2001). Climatic impact of tropical lowland deforestation on nearby montane cloud forests. Science, 294(5542), 584-587.
Lucieer, A., Robinson, S. A., & Turner, D. (2010). Using an unmanned aerial vehicle (UAV) for ultra-high resolution mapping of Antarctic moss beds.
Revenaz, J. (2014). Establishing a vegetation mapping strategy for the Jama Coaque Reserve. Unpublished manuscript, Third Millennium Alliance, Ecuador.
Still, C. J., Foster, P. N., & Schneider, S. H. (1999). Simulating the effects of climate change on tropical montane cloud forests. Nature, 398(6728), 608.
Sugiura, R., Noguchi, N., & Ishii, K. (2005). Remote-sensing technology for vegetation monitoring using an unmanned helicopter. Biosystems Engineering, 90(4), 369-
What is a tropical rainforest? (2015). Retrieved from Rainforest Conservation Fund
Table of Contents
1. Fundamental Concepts of Algebra. Real Numbers. Exponents and Radicals. Algebraic Expressions. Fractional Expressions. Chapter 1 Review Exercises. Chapter 1 Discussion Exercises.
2. Equations and Inequalities. Equations. Applied Problems. Quadratic Equations. Complex Numbers. Other Types of Equations. Inequalities. More on Inequalities. Chapter 2 Review Exercises. Chapter 2 Discussion Exercises.
3. Functions and Graphs. Rectangular Coordinate Systems. Graphs of Equations. Lines. Definition of Function. Graphs of Functions. Quadratic Functions. Operations on Functions. Inverse Functions. Variation. Chapter 3 Review Exercises. Chapter 3 Discussion Exercises.
4. Polynomial and Rational Functions. Polynomial Functions of Degree Greater than 2. Properties of Division. Zeros of Polynomials. Complex and Rational Zeros of Polynomials. Rational Functions. Chapter 4 Review Exercises. Chapter 4 Discussion Exercises.
5. Exponential and Logarithmic Functions. Exponential Functions. The Natural Exponential Function. Logarithmic Functions. Properties of Logarithms. Exponential and Logarithmic Equations. Chapter 5 Review Exercises. Chapter 5 Discussion Exercises.
6. The Trigonometric Functions. Angles. Trigonometric Functions of Angles. Trigonometric Functions of Real Numbers. Values of Trigonometric Functions. Trigonometric Graphs. Additional Trigonometric Graphs. Applied Problems. Chapter 6 Review Exercises. Chapter 6 Discussion Exercises.
7. Analytic Trigonometry. Verifying Trigonometric Identities. Trigonometric Equations. The Addition and Subtraction Formulas. Multiple-Angle Formulas. Product-to-Sum and Sum-to-Product Formulas. The Inverse Trigonometric Functions. Chapter 7 Review Exercises. Chapter 7 Discussion Exercises.
8. Applications of Trigonometry. The Law of Sines. The Law of Cosines. Trigonometric Form for Complex Numbers. De Moivre's Theorem and nth Roots of Complex Numbers. Vectors. The Dot Product. Chapter 8 Review Exercises. Chapter 8 Discussion Exercises.
9. Systems of Equations and Inequalities. Systems of Equations. Systems of Linear Equations in Two Variables. Systems of Linear Equations in More than Two Variables. Partial Fractions. Systems of Inequalities. Linear Programming. The Algebra of Matrices. The Inverse of a Matrix. Determinants. Properties of Determinants. Chapter 9 Review Exercises. Chapter 9 Discussion Exercises.
10. Sequences, Series, and Probability. Infinite Sequences and Summation Notation. Arithmetic Sequences. Geometric Sequences. Mathematical Induction. The Binomial Theorem. Permutations. Distinguishable Permutations and Combinations. Probability. Chapter 10 Review Exercises. Chapter 10 Discussion Exercises.
11. Topics from Analytic Geometry. Parabolas. Ellipses. Hyperbolas. Plane Curves and Parametric Equations. Polar Coordinates. Polar Equations of Conics. Chapter 11 Review Exercises. Chapter 11 Discussion Exercises.
Appendixes. Answers to Selected Exercises. Index of Applications. Index.
Algebra and Trigonometry with Analytic Geometry: (Classic Edition with CD-ROM and InfoTrac) / Edition 10by Earl Swokowski, Jeffrey A. Cole
Pub. Date: 02/28/2002
Publisher: Cengage Learning
This alternate version of ALGEBRA AND TRIGONOMETRY WITH ANALYTIC GEOMETRY (Classic Edition with CD-ROM), Tenth Edition is for IUPUI and Purdue Universities ONLY. Order this version if you are a qualifying customer. Other customers should order the standard version ALGEBRA AND TRIGONOMETRY WITH ANALYTIC GEOMETRY (with CD-ROM), Tenth Edition, ISBN: 0-534-39050-1, by Earl W. Swokowski and Jeffery A. Cole.
- Publisher: Cengage Learning
- Edition description: Classic with CD-ROM & InfoTrac
- Product dimensions: 7.87(w) x 9.45(h) x (d)
Atlantic slave trade
The Atlantic slave trade or transatlantic slave trade involved the transportation by slave traders of enslaved African people, mainly to the Americas. The slave trade regularly used the triangular trade route and its Middle Passage, and existed from the 16th to the 19th centuries. The vast majority of those who were enslaved and transported in the transatlantic slave trade were people from central and western Africa, who had been sold by other West Africans to Western European slave traders (with a small number being captured directly by the slave traders in coastal raids), who brought them to the Americas. The South Atlantic and Caribbean economies especially were dependent on the supply of secure labour for the production of commodity crops, making goods and clothing to sell in Europe. This was crucial to those western European countries which, in the late 17th and 18th centuries, were vying with each other to create overseas empires.
The Portuguese, in the 16th century, were the first to engage in the Atlantic slave trade. In 1526, they completed the first transatlantic slave voyage to Brazil, and other European countries soon followed. Shipowners regarded the slaves as cargo to be transported to the Americas as quickly and cheaply as possible, there to be sold to work on coffee, tobacco, cocoa, sugar and cotton plantations, gold and silver mines, rice fields, construction industry, cutting timber for ships, in skilled labour, and as domestic servants. The first Africans imported to the English colonies were classified as "indentured servants", like workers coming from England, and also as "apprentices for life". By the middle of the 17th century, slavery had hardened as a racial caste, with the slaves and their offspring being legally the property of their owners, and children born to slave mothers were also slaves. As property, the people were considered merchandise or units of labour, and were sold at markets with other goods and services.
The major Atlantic slave trading nations, ordered by trade volume, were: the Portuguese, the British, the French, the Spanish, and the Dutch Empires. Several had established outposts on the African coast where they purchased slaves from local African leaders. These slaves were managed by a factor who was established on or near the coast to expedite the shipping of slaves to the New World. Slaves were kept in a factory while awaiting shipment. Current estimates are that about 12 to 12.8 million Africans were shipped across the Atlantic over a span of 400 years, although the number purchased by the traders was considerably higher, as the passage had a high death rate. Near the beginning of the 19th century, various governments acted to ban the trade, although illegal smuggling still occurred. In the early 21st century, several governments issued apologies for the transatlantic slave trade.
The Atlantic slave trade developed after trade contacts were established between the "Old World" (Afro-Eurasia) and the "New World" (the Americas). For centuries, tidal currents had made ocean travel particularly difficult and risky for the ships that were then available, and as such there had been very little, if any, maritime contact between the peoples living in these continents. In the 15th century, however, new European developments in seafaring technologies resulted in ships being better equipped to deal with the tidal currents, and could begin traversing the Atlantic Ocean. Between 1600 and 1800, approximately 300,000 sailors engaged in the slave trade visited West Africa. In doing so, they came into contact with societies living along the west African coast and in the Americas which they had never previously encountered. Historian Pierre Chaunu termed the consequences of European navigation "disenclavement", with it marking an end of isolation for some societies and an increase in inter-societal contact for most others.
Historian John Thornton noted, "A number of technical and geographical factors combined to make Europeans the most likely people to explore the Atlantic and develop its commerce". He identified these as being the drive to find new and profitable commercial opportunities outside Europe as well as the desire to create an alternative trade network to that controlled by the Muslim Empire of the Middle East, which was viewed as a commercial, political and religious threat to European Christendom. In particular, European traders wanted to trade for gold, which could be found in western Africa, and also to find a maritime route to "the Indies" (India), where they could trade for luxury goods such as spices without having to obtain these items from Middle Eastern Islamic traders.
Although many of the initial Atlantic naval explorations were led by Iberians, members of many European nationalities were involved, including sailors from Portugal, Spain, the Italian kingdoms, England, France and the Netherlands. This diversity led Thornton to describe the initial "exploration of the Atlantic" as "a truly international exercise, even if many of the dramatic discoveries were made under the sponsorship of the Iberian monarchs". That leadership later gave rise to the myth that "the Iberians were the sole leaders of the exploration".
Slavery was prevalent in many parts of Africa for many centuries before the beginning of the Atlantic slave trade. There is evidence that enslaved people from some parts of Africa were exported to states in Africa, Europe, and Asia prior to the European colonization of the Americas.
The Atlantic slave trade was not the only slave trade from Africa, although it was the largest in volume and intensity. As Elikia M'bokolo wrote in Le Monde diplomatique:
The African continent was bled of its human resources via all possible routes. Across the Sahara, through the Red Sea, from the Indian Ocean ports and across the Atlantic. At least ten centuries of slavery for the benefit of the Muslim countries (from the ninth to the nineteenth) ... Four million enslaved people exported via the Red Sea, another four million through the Swahili ports of the Indian Ocean, perhaps as many as nine million along the trans-Saharan caravan route, and eleven to twenty million (depending on the author) across the Atlantic Ocean.
According to John K. Thornton, Europeans usually bought enslaved people who were captured in endemic warfare between African states. Some Africans had made a business out of capturing Africans from neighboring ethnic groups or war captives and selling them. A reminder of this practice is documented in the Slave Trade Debates of England in the early 19th century: "All the old writers ... concur in stating not only that wars are entered into for the sole purpose of making slaves, but that they are fomented by Europeans, with a view to that object." People living around the Niger River were transported from these markets to the coast and sold at European trading ports in exchange for muskets and manufactured goods such as cloth or alcohol. However, the European demand for slaves provided a large new market for the already existing trade. While those held in slavery in their own region of Africa might hope to escape, those shipped away had little chance of returning to Africa.
European colonization and slavery in West Africa
Upon discovering new lands through their naval explorations, European colonisers soon began to migrate to and settle in lands outside their native continent. Off the coast of Africa, European migrants, under the directions of the Kingdom of Castile, invaded and colonised the Canary Islands during the 15th century, where they converted much of the land to the production of wine and sugar. Along with this, they also captured native Canary Islanders, the Guanches, to use as slaves both on the Islands and across the Christian Mediterranean.
As historian John Thornton remarked, "the actual motivation for European expansion and for navigational breakthroughs was little more than to exploit the opportunity for immediate profits made by raiding and the seizure or purchase of trade commodities". Using the Canary Islands as a naval base, Europeans, at the time primarily Portuguese traders, began to move their activities down the western coast of Africa, performing raids in which slaves would be captured to be later sold in the Mediterranean. Although initially successful in this venture, "it was not long before African naval forces were alerted to the new dangers, and the Portuguese [raiding] ships began to meet strong and effective resistance", with the crews of several of them being killed by African sailors, whose boats were better equipped at traversing the west African coasts and river systems.
By 1494, the Portuguese king had entered agreements with the rulers of several West African states that would allow trade between their respective peoples, enabling the Portuguese to "tap into" the "well-developed commercial economy in Africa ... without engaging in hostilities". "Peaceful trade became the rule all along the African coast", although there were some rare exceptions when acts of aggression led to violence. For instance, Portuguese traders attempted to conquer the Bissagos Islands in 1535. In 1571 Portugal, supported by the Kingdom of Kongo, took control of the south-western region of Angola in order to secure its threatened economic interest in the area. Although Kongo later joined a coalition in 1591 to force the Portuguese out, Portugal had secured a foothold on the continent that it continued to occupy until the 20th century. Despite these occasional incidents of violence between African and European forces, many African states ensured that any trade proceeded on their own terms, for instance by imposing customs duties on foreign ships. In 1525, the Kongolese king, Afonso I, seized a French vessel and its crew for illegally trading on his coast.
Historians have widely debated the nature of the relationship between these African kingdoms and the European traders. The Guyanese historian Walter Rodney (1972) has argued that it was an unequal relationship, with Africans being forced into a "colonial" trade with the more economically developed Europeans, exchanging raw materials and human resources (i.e. slaves) for manufactured goods. He argued that it was this economic trade agreement dating back to the 16th century that led to Africa being underdeveloped in his own time. These ideas were supported by other historians, including Ralph Austen (1987). This idea of an unequal relationship was contested by John Thornton (1998), who argued that "the Atlantic slave trade was not nearly as critical to the African economy as these scholars believed" and that "African manufacturing [at this period] was more than capable of handling competition from preindustrial Europe". However, Anne Bailey, commenting on Thornton's suggestion that Africans and Europeans were equal partners in the Atlantic slave trade, wrote:
[T]o see Africans as partners implies equal terms and equal influence on the global and intercontinental processes of the trade. Africans had great influence on the continent itself, but they had no direct influence on the engines behind the trade in the capital firms, the shipping and insurance companies of Europe and America, or the plantation systems in Americas. They did not wield any influence on the building manufacturing centers of the West.
16th, 17th and 18th centuries
The Atlantic slave trade is customarily divided into two eras, known as the First and Second Atlantic Systems.
The First Atlantic system was the trade of enslaved Africans to, primarily, South American colonies of the Portuguese and Spanish empires; it accounted for slightly more than 3% of all Atlantic slave trade. It started (on a significant scale) in about 1502 and lasted until 1580 when Portugal was temporarily united with Spain. While the Portuguese were directly involved in trading enslaved peoples, the Spanish empire relied on the asiento system, awarding merchants (mostly from other countries) the license to trade enslaved people to their colonies. During the first Atlantic system, most of these traders were Portuguese, giving them a near-monopoly during the era. Some Dutch, English, and French traders also participated in the slave trade. After the union, Portugal came under Spanish legislation that prohibited it from directly engaging in the slave trade as a carrier. It became a target for the traditional enemies of Spain, losing a large share of the trade to the Dutch, English, and French.
The Second Atlantic system was the trade of enslaved Africans by mostly English, Portuguese, French and Dutch traders. The main destinations of this phase were the Caribbean colonies and Brazil, as European nations built up economically slave-dependent colonies in the New World. Slightly more than 3% of the enslaved people exported from Africa were traded between 1450 and 1600, and 16% in the 17th century.
It is estimated that more than half of the entire slave trade took place during the 18th century, with the British, Portuguese and French being the main carriers of nine out of ten slaves abducted in Africa. By the 1690s, the English were shipping the most slaves from West Africa. They maintained this position during the 18th century, becoming the biggest shippers of slaves across the Atlantic. By the 18th century, Angola had become one of the principal sources of the Atlantic slave trade.
Following the British and United States' bans on the African slave trade in 1808, it declined, but the period after still accounted for 28.5% of the total volume of the Atlantic slave trade. Between 1810 and 1860, over 3.5 million slaves were transported, with 850,000 in the 1820s.
A burial ground in Campeche, Mexico, suggests slaves had been brought there not long after Hernán Cortés completed the subjugation of Aztec and Mayan Mexico in the 16th century. The graveyard had been in use from approximately 1550 to the late 17th century.
The first side of the triangle was the export of goods from Europe to Africa. A number of African kings and merchants took part in the trading of enslaved people from 1440 to about 1833. For each captive, the African rulers would receive a variety of goods from Europe. These included guns, ammunition, and other factory-made goods. The second leg of the triangle exported enslaved Africans across the Atlantic Ocean to the Americas and the Caribbean Islands. The third and final part of the triangle was the return of goods to Europe from the Americas. The goods were the products of slave-labour plantations and included cotton, sugar, tobacco, molasses and rum. Sir John Hawkins, considered the pioneer of the British slave trade, was the first to run the Triangular trade, making a profit at every stop.
Labour and slavery
The Atlantic slave trade was the result of, among other things, labour shortage, itself in turn created by the desire of European colonists to exploit New World land and resources for capital profits. Native peoples were at first utilized as slave labour by Europeans until a large number died from overwork and Old World diseases. Alternative sources of labour, such as indentured servitude, failed to provide a sufficient workforce. Many crops could not be sold for profit, or even grown, in Europe. Exporting crops and goods from the New World to Europe often proved to be more profitable than producing them on the European mainland. A vast amount of labour was needed to create and sustain plantations that required intensive labour to grow, harvest, and process prized tropical crops. Western Africa (part of which became known as "the Slave Coast"), Angola and nearby Kingdoms and later Central Africa, became the source for enslaved people to meet the demand for labour.
The basic reason for the constant shortage of labour was that, with large amounts of cheap land available and lots of landowners searching for workers, free European immigrants were able to become landowners themselves after a relatively short time, thus increasing the need for workers.
Thomas Jefferson attributed the use of slave labour in part to the climate, and the consequent idle leisure afforded by slave labour: "For in a warm climate, no man will labour for himself who can make another labour for him. This is so true, that of the proprietors of slaves a very small proportion indeed are ever seen to labour."
African participation in the slave trade
Africans played a direct role in the slave trade, selling their captives or prisoners of war to European buyers. The prisoners and captives who were sold were usually from neighbouring or enemy ethnic groups. These captive slaves were considered "other", not part of the people of the ethnic group or "tribe"; African kings held no particular loyalty to them. Sometimes criminals would be sold so that they could no longer commit crimes in that area. Most other slaves were obtained from kidnappings, or through raids that occurred at gunpoint through joint ventures with the Europeans. But some African kings refused to sell any of their captives or criminals. King Jaja of Opobo, a former slave, refused to do business with the slavers completely.
Africans also participated in the slave trade through intermarriage, or cassare, meaning "to set up house". It is derived from the Portuguese word "casar", meaning "to marry". Cassare created political and economic bonds between European and African slave traders. Cassare was a pre-European practice used to integrate the "other" from a differing African tribe. Powerful West African groups used these marriages as alliances to strengthen their trade networks with European men, marrying off African women from families with ties to the slave trade. Early on in the Atlantic slave trade, these marriages were common. The marriages were even performed using African customs, which Europeans did not object to, seeing how important the connections were.
European participation in the slave trade
Although Europeans were the market for slaves, they rarely entered the interior of Africa, due to fear of disease and fierce African resistance. In Africa, convicted criminals could be punished by enslavement, a punishment which became more prevalent as slavery became more lucrative. Since most of these nations did not have a prison system, convicts were often sold or used in the scattered local domestic slave market.
As of 1778, Thomas Kitchin estimated that Europeans were bringing an estimated 52,000 slaves to the Caribbean yearly, with the French bringing the most Africans to the French West Indies (13,000 out of the yearly estimate). The Atlantic slave trade peaked in the last two decades of the 18th century, during and following the Kongo Civil War. Wars among tiny states along the Niger River's Igbo-inhabited region and the accompanying banditry also spiked in this period. Another reason for surplus supply of enslaved people was major warfare conducted by expanding states, such as the kingdom of Dahomey, the Oyo Empire, and the Asante Empire.
Slavery in Africa and the New World contrasted
Forms of slavery varied both in Africa and in the New World. In general, slavery in Africa was not heritable – that is, the children of slaves were free – while in the Americas, children of slave mothers were considered born into slavery. This was connected to another distinction: slavery in West Africa was not reserved for racial or religious minorities, as it was in European colonies, although the case was otherwise in places such as Somalia, where Bantus were taken as slaves for the ethnic Somalis.
The treatment of slaves in Africa was more variable than in the Americas. At one extreme, the kings of Dahomey routinely slaughtered slaves in hundreds or thousands in sacrificial rituals, and slaves as human sacrifices were also known in Cameroon. On the other hand, slaves in other places were often treated as part of the family, "adopted children", with significant rights including the right to marry without their masters' permission. Scottish explorer Mungo Park wrote:
The slaves in Africa, I suppose, are nearly in the proportion of three to one to the freemen. They claim no reward for their services except food and clothing, and are treated with kindness or severity, according to the good or bad disposition of their masters ... The slaves which are thus brought from the interior may be divided into two distinct classes – first, such as were slaves from their birth, having been born of enslaved mothers; secondly, such as were born free, but who afterwards, by whatever means, became slaves. Those of the first description are by far the most numerous ...
In the Americas, slaves were denied the right to marry freely and masters did not generally accept them as equal members of the family. New World slaves were considered the property of their owners, and slaves convicted of revolt or murder were executed.
Slave market regions and participation
There were eight principal areas used by Europeans to buy and ship slaves to the Western Hemisphere. The number of enslaved people sold to the New World varied throughout the slave trade. As for the distribution of slaves from regions of activity, certain areas produced far more enslaved people than others. Between 1650 and 1900, 10.24 million enslaved Africans arrived in the Americas from the following regions in the following proportions:
- Senegambia (Senegal and the Gambia): 4.8%
- Upper Guinea (Guinea-Bissau, Guinea and Sierra Leone): 4.1%
- Windward Coast (Liberia and Ivory Coast): 1.8%
- Gold Coast (Ghana and east of Ivory Coast): 10.4%
- Bight of Benin (Togo, Benin and Nigeria west of the Niger Delta): 20.2%
- Bight of Biafra (Nigeria east of the Niger Delta, Cameroon, Equatorial Guinea and Gabon): 14.6%
- West Central Africa (Republic of Congo, Democratic Republic of Congo and Angola): 39.4%
- Southeastern Africa (Mozambique and Madagascar): 4.7%
Although the slave trade was largely global, there was considerable intracontinental slave trade in which 8 million people were enslaved within the African continent. Of those who did move out of Africa, 8 million were forced out of Eastern Africa to be sent to Asia.
African kingdoms of the era
There were over 173 city-states and kingdoms in the African regions affected by the slave trade between 1502 and 1853 when Brazil became the last Atlantic import nation to outlaw the slave trade. Of those 173, no fewer than 68 could be deemed nation states with political and military infrastructures that enabled them to dominate their neighbours. Nearly every present-day nation had a pre-colonial predecessor, sometimes an African Empire with which European traders had to barter.
The different ethnic groups brought to the Americas closely correspond to the regions of heaviest activity in the slave trade. Over 45 distinct ethnic groups were taken to the Americas during the trade. Of these, the ten most prominent, according to slave documentation of the era, are listed below.
- The BaKongo of the Democratic Republic of Congo and Angola
- The Mandé of Upper Guinea
- The Gbe speakers of Togo, Ghana, and Benin (Adja, Mina, Ewe, Fon)
- The Akan of Ghana and Ivory Coast
- The Wolof of Senegal and the Gambia
- The Igbo of southeastern Nigeria
- The Mbundu of Angola (includes both Ambundu and Ovimbundu)
- The Yoruba of southwestern Nigeria
- The Chamba of Cameroon
- The Makua of Mozambique
The transatlantic slave trade resulted in a vast and still unknown loss of life for African captives both in and outside the Americas. Approximately 1.2 to 2.4 million Africans died during their transport to the New World. More died soon after their arrival. The number of lives lost in the procurement of slaves remains a mystery but may equal or exceed the number who survived to be enslaved.
The savage nature of the trade led to the destruction of individuals and cultures. The following figures do not include deaths of enslaved Africans as a result of their labour, slave revolts, or diseases suffered while living among New World populations.
Historian Ana Lucia Araujo has noted that the process of enslavement did not end with arrival on the American shores; the different paths taken by the individuals and groups who were victims of the Atlantic slave trade were influenced by different factors—including the disembarking region, the kind of work performed, gender, age, religion, and language.
Estimates by Patrick Manning are that about 12 million slaves entered the Atlantic trade between the 16th and 19th centuries, but about 1.5 million died on board ship. About 10.5 million slaves arrived in the Americas. Besides the slaves who died on the Middle Passage, more Africans likely died during the slave raids in Africa and the forced marches to ports. Manning estimates that 4 million died inside Africa after capture, and many more died young. Manning's estimate covers the 12 million who were originally destined for the Atlantic, as well as the 6 million destined for Asian slave markets and the 8 million destined for African markets. Of the slaves shipped to the Americas, the largest share went to Brazil and the Caribbean.
According to Kimani Nehusi, the presence of European slavers affected the way in which the legal code in African societies responded to offenders. Crimes traditionally punishable by some other form of punishment became punishable by enslavement and sale to slave traders. According to David Stannard's American Holocaust, 50% of African deaths occurred in Africa as a result of wars between native kingdoms, which produced the majority of slaves. This includes not only those who died in battles but also those who died as a result of forced marches from inland areas to slave ports on the various coasts. The practice of enslaving enemy combatants and their villages was widespread throughout Western and West Central Africa, although wars were rarely started to procure slaves. The slave trade was largely a by-product of tribal and state warfare as a way of removing potential dissidents after victory or financing future wars. However, some African groups proved particularly adept and brutal at the practice of enslaving, such as Oyo, Benin, Igala, Kaabu, Asanteman, Dahomey, the Aro Confederacy and the Imbangala war bands.
In letters written to King João III of Portugal, the Manikongo, Nzinga Mbemba Afonso, wrote that the inflow of Portuguese merchandise was fuelling the trade in Africans. He asked the King of Portugal to stop sending merchandise and to send only missionaries. In one of his letters he writes:
Each day the traders are kidnapping our people—children of this country, sons of our nobles and vassals, even people of our own family. This corruption and depravity are so widespread that our land is entirely depopulated. We need in this kingdom only priests and schoolteachers, and no merchandise, unless it is wine and flour for Mass. It is our wish that this Kingdom not be a place for the trade or transport of slaves ...
Many of our subjects eagerly lust after Portuguese merchandise that your subjects have brought into our domains. To satisfy this inordinate appetite, they seize many of our black free subjects ... They sell them. After having taken these prisoners [to the coast] secretly or at night ... As soon as the captives are in the hands of white men they are branded with a red-hot iron.
Before the arrival of the Portuguese, slavery had already existed in Kongo. Afonso believed that the slave trade should be subject to Kongo law. When he suspected the Portuguese of receiving illegally enslaved persons to sell, he wrote to King João III in 1526 imploring him to put a stop to the practice.
The kings of Dahomey sold war captives into transatlantic slavery; they would otherwise have been killed in a ceremony known as the Annual Customs. As one of West Africa's principal slave states, Dahomey became extremely unpopular with neighbouring peoples. Like the Bambara Empire to the east, the Khasso kingdoms depended heavily on the slave trade for their economy. A family's status was indicated by the number of slaves it owned, leading to wars for the sole purpose of taking more captives. This trade led the Khasso into increasing contact with the European settlements of Africa's west coast, particularly the French. Benin grew increasingly rich during the 16th and 17th centuries on the slave trade with Europe; slaves from enemy states of the interior were sold and carried to the Americas in Dutch and Portuguese ships. The Bight of Benin's shore soon came to be known as the "Slave Coast".
King Gezo of Dahomey said in the 1840s:
The slave trade is the ruling principle of my people. It is the source and the glory of their wealth ... the mother lulls the child to sleep with notes of triumph over an enemy reduced to slavery ...
In 1807, the UK Parliament passed the Bill that abolished the trading of slaves. The King of Bonny (now in Nigeria) was horrified at the conclusion of the practice:
We think this trade must go on. That is the verdict of our oracle and the priests. They say that your country, however great, can never stop a trade ordained by God himself.
After being marched to the coast for sale, enslaved people were held in large forts called factories. The amount of time in factories varied, but Milton Meltzer states in Slavery: A World History that around 4.5% of deaths attributed to the transatlantic slave trade occurred during this phase. In other words, over 820,000 people are believed to have died in African ports such as Benguela, Elmina, and Bonny, reducing the number of those shipped to 17.5 million.
After being captured and held in the factories, slaves entered the infamous Middle Passage. Meltzer's research puts this phase of the slave trade's overall mortality at 12.5%. The deaths were the result of brutal treatment and poor care from the time of capture and throughout the voyage. Around 2.2 million Africans died during these voyages, packed into tight, unsanitary spaces on ships for months at a time. Measures were taken to stem the onboard mortality rate, such as enforced "dancing" (as exercise) above deck and the practice of force-feeding enslaved persons who tried to starve themselves. The conditions on board also resulted in the spread of fatal diseases. Other deaths were suicides by slaves who jumped overboard. Slave traders would try to fit anywhere from 350 to 600 slaves on one ship. Before the African slave trade was completely banned by participating nations in 1853, 15.3 million enslaved people had arrived in the Americas.
Raymond L. Cohn, an economics professor whose research has focused on economic history and international migration, has researched the mortality rates among Africans during the voyages of the Atlantic slave trade. He found that mortality rates decreased over the history of the slave trade, primarily because the length of time necessary for the voyage was declining. "In the eighteenth century many slave voyages took at least 2½ months. In the nineteenth century, 2 months appears to have been the maximum length of the voyage, and many voyages were far shorter. Fewer slaves died in the Middle Passage over time mainly because the passage was shorter."
Despite the vast profits of slavery, the ordinary sailors on slave ships were badly paid and subject to harsh discipline. Mortality of around 20% was expected in a ship's crew during the course of a voyage; this was due to disease, flogging, overwork or slave uprisings. Disease (malaria or yellow fever) was the most common cause of death among sailors. A high crew mortality rate on the return voyage was in the captain's interests as it reduced the number of sailors who had to be paid on reaching the home port.
The slave trade was hated by many sailors and those who joined the crews of slave ships often did so through coercion or because they could find no other employment.
Meltzer also states that 33% of Africans would have died in the first year at the seasoning camps found throughout the Caribbean. Jamaica held one of the most notorious of these camps. Dysentery was the leading cause of death. Around 5 million Africans died in these camps, reducing the number of survivors to about 10 million.
Notable diseases not known to have been present in the Americas before 1492 include smallpox, malaria, bubonic plague, typhus, influenza, measles, diphtheria, yellow fever, and whooping cough. During the Atlantic slave trade that followed the discovery of the New World, such diseases were capable of devastating populations like the Native Americans within as little as a few weeks or months.
Smallpox was one of the epidemics that accompanied the Atlantic slave trade from the 15th to the 18th centuries. Diseases like smallpox caused a significant decrease in the indigenous population of the New World. One explanation for this extensive population decline is immunity: the native population had no prior exposure to the disease and therefore no acquired resistance to it. European colonizers and the African slaves brought to the New World, by contrast, had usually been exposed to smallpox as children, a common childhood illness that left survivors with immunity. As Native Americans became unable to work the lands (the labour demanded of them included mining gold and silver), European colonizers such as the Portuguese took advantage of their access to regions of the African continent, such as Angola, to extract another source of labour; Africans proved to be prime candidates because they had survived the disease, also known as variola intermedius, while the Natives continued to fall to the various illnesses.
Effects of smallpox included fever and bodily eruptions, and the disease was most noticeable in the disfigurement of the face, hands, and feet. Smallpox is a viral disease, contracted through contact with an exposed individual and through the air, and no drug treatments for it were available. Some Europeans, who believed the plague of syphilis in Europe to be the fault of the Amerindians, saw smallpox as European revenge against the Natives.
For many diseases, African and Eurasian populations had already acquired immunity, the ability to resist an infection, through prior exposure as children, which made them less likely to contract the same illness again. Upon arrival, these diseases were transmitted to the Native populations, who had no such immunity, having lived in climates where the germs and pathogens behind these diseases were not common. Exposure to such an illness as an adult often produced more severe effects than exposure in childhood.
Evolutionary history also played a role in immunity to the diseases of the slave trade. Compared to Africans and Europeans, New World populations had no history of exposure to these diseases, and therefore no genetic resistance had developed through adaptation over generations.
Levels and extent of immunity vary from disease to disease. For smallpox and measles, for example, those who survive acquire the cellular immunity to resist the disease for the rest of their lives and cannot contract it again. For other diseases, immunity does not guarantee that an individual cannot be reinfected.
Because knowledge of the causes and effects of the diseases surrounding the slave trade was limited, few if any methods of inoculation were available at the time. In the late 16th century, as smallpox became more prevalent, some forms of inoculation, sometimes referred to as variolation, existed in Africa and the Middle East. One practice involved Arab traders in Africa "buying off" the disease: a cloth that had previously been exposed to the sickness was tied to a child's arm to increase immunity. Another practice involved taking pus from a smallpox scab and putting it into a cut on a healthy individual, in the hope of producing a mild case of the disease rather than a fatal one later.
As epidemiology has advanced, causes and effective treatments have been discovered for historically destructive diseases such as syphilis, which today can be treated with simple antibiotics such as penicillin and other medications that inhibit and rid the body of harmful bacteria.
The trade of enslaved Africans in the Atlantic has its origins in the explorations of Portuguese mariners down the coast of West Africa in the 15th century. Before that, contact with African slave markets was made to ransom Portuguese who had been captured in the intense attacks by North African Barbary pirates on Portuguese ships and coastal villages, which were frequently left depopulated. The first Europeans to use enslaved Africans in the New World were the Spaniards, who sought auxiliaries for their conquest expeditions and labourers on islands such as Cuba and Hispaniola. The alarming decline in the native population had spurred the first royal laws protecting them (Laws of Burgos, 1512–13). The first enslaved Africans arrived in Hispaniola in 1501. After Portugal succeeded in establishing sugar plantations (engenhos) in northern Brazil around 1545, Portuguese merchants on the West African coast began to supply enslaved Africans to the sugar planters. While at first these planters relied almost exclusively on the native Tupani for slave labour, after 1570 they began importing Africans, as a series of epidemics had decimated the already destabilized Tupani communities. By 1630, Africans had replaced the Tupani as the largest contingent of labour on Brazilian sugar plantations. This ended the European medieval household tradition of slavery, resulted in Brazil receiving the most enslaved Africans, and established sugar cultivation and processing as the reason that roughly 84% of these Africans were shipped to the New World.
As Britain rose in naval power and settled continental North America and some islands of the West Indies, the British became the leading slave traders. At one stage the trade was the monopoly of the Royal African Company, operating out of London, but following the loss of the company's monopoly in 1689, Bristol and Liverpool merchants became increasingly involved in the trade. By the late 17th century, one out of every four ships that left Liverpool harbour was a slave trading ship. Much of the wealth on which the city of Manchester and surrounding towns was built in the late 18th century, and for much of the 19th century, was based on the processing of slave-picked cotton and the manufacture of cloth. Other British cities also profited from the slave trade. Birmingham, the largest gun-producing town in Britain at the time, supplied guns to be traded for slaves. 75% of all sugar produced in the plantations was sent to London, and much of it was consumed in the highly lucrative coffee houses there.
New World destinations
The first slaves to arrive as part of a labour force in the New World reached the island of Hispaniola (now Haiti and the Dominican Republic) in 1502. Cuba received its first four slaves in 1513. Jamaica received its first shipment of 4000 slaves in 1518. Slave exports to Honduras and Guatemala started in 1526.
The first enslaved Africans to reach what would become the United States arrived in July 1526 as part of a Spanish attempt to colonize San Miguel de Gualdape. By November the 300 Spanish colonists had been reduced to 100, and their slaves from 100 to 70. The enslaved people revolted in 1526 and joined a nearby Native American tribe, while the Spanish abandoned the colony altogether (1527). The area of the future Colombia received its first enslaved people in 1533. El Salvador, Costa Rica and Florida began participating in the slave trade in 1541, 1563 and 1581, respectively.
The 17th century saw an increase in shipments. Africans arrived in the English colony of Jamestown, Virginia in 1619. The first kidnapped Africans in English North America were classed as indentured servants and freed after seven years. Virginia law codified chattel slavery in 1656, and in 1662 the colony adopted the principle of partus sequitur ventrem, which classified children of slave mothers as slaves, regardless of paternity. Irish immigrants took slaves to Montserrat in 1651, and in 1655 slaves were shipped to Belize.
- British America (minus North America): 18.4%
- British North America: 6.45%
- Dutch West Indies: 2.0%
- Danish West Indies: 0.3%
The number of Africans who arrived in each region is calculated from the total number of slaves imported, about 10,000,000.
Economics of slavery
In 18th-century France, returns for investors in plantations averaged around 6%; compared with 5% for most domestic alternatives, this represented a 20% profit advantage. Risks, both maritime and commercial, were significant for individual voyages. Investors mitigated them by buying small shares of many ships at the same time; in that way, they were able to diversify away a large part of the risk. Between voyages, ship shares could be freely bought and sold.
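Read this way, the 20% figure is the relative gap between the two rates of return rather than a difference in percentage points; a quick check (an illustrative calculation, not drawn from the cited sources) is:

$$
\frac{0.06 - 0.05}{0.05} = 0.20 = 20\%
$$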
By far the most financially profitable West Indian colonies in 1800 belonged to the United Kingdom. After entering the sugar colony business late, British naval supremacy and control over key islands such as Jamaica, Trinidad, the Leeward Islands and Barbados, and the territory of British Guiana, gave it an important edge over all competitors; while many Britons did not make gains, a handful of individuals made small fortunes. This advantage was reinforced when France lost its most important colony, St. Domingue (western Hispaniola, now Haiti), to a slave revolt in 1791, and when, after 1793, France supported revolts against its rival Britain in the name of liberty. Before 1791, British sugar had to be protected to compete against cheaper French sugar.
After 1791, the British islands produced the most sugar, and the British people quickly became the largest consumers. West Indian sugar became ubiquitous as an additive to Indian tea. It has been estimated that the profits of the slave trade and of West Indian plantations created up to one in every twenty pounds circulating in the British economy at the time of the Industrial Revolution in the latter half of the 18th century.
Historian Walter Rodney has argued that at the start of the slave trade in the 16th century, although there was a technological gap between Europe and Africa, it was not very substantial. Both continents were using Iron Age technology. The major advantage that Europe had was in ship building. During the period of slavery, the populations of Europe and the Americas grew exponentially, while the population of Africa remained stagnant. Rodney contended that the profits from slavery were used to fund economic growth and technological advancement in Europe and the Americas. Based on earlier theories by Eric Williams, he asserted that the industrial revolution was at least in part funded by agricultural profits from the Americas. He cited examples such as the invention of the steam engine by James Watt, which was funded by plantation owners from the Caribbean.
Other historians have attacked both Rodney's methodology and his accuracy. Joseph C. Miller has argued that the social change and demographic stagnation (which he researched using the example of West Central Africa) were caused primarily by domestic factors. Joseph Inikori provided a new line of argument, estimating counterfactual demographic developments had the Atlantic slave trade not existed. Patrick Manning has shown that the slave trade did have a profound impact on African demographics and social institutions, but criticized Inikori's approach for not taking other factors (such as famine and drought) into account, and thus being highly speculative.
Effect on the economy of West Africa
No scholars dispute the harm done to the enslaved people themselves, but the effect of the trade on African societies is much debated, due to the apparent influx of goods to Africans. Proponents of the slave trade, such as Archibald Dalzel, argued that African societies were robust and not much affected by the trade. In the 19th century, European abolitionists, most prominently Dr. David Livingstone, took the opposite view, arguing that the fragile local economies and societies were being severely harmed by the trade.
Although the negative effects of slavery on the economies of Africa have been well documented, namely the significant decline in population, some African rulers likely saw an economic benefit in trading their subjects with European slave traders. With the exception of Portuguese-controlled Angola, coastal African leaders "generally controlled access to their coasts, and were able to prevent direct enslavement of their subjects and citizens". Thus, as African scholar John Thornton argues, African leaders who allowed the continuation of the slave trade likely derived an economic benefit from selling their subjects to Europeans. The Kingdom of Benin, for instance, participated in the African slave trade, at will, from 1715 to 1735, surprising Dutch traders, who had not expected to buy slaves in Benin. The benefit derived from trading slaves for European goods was enough to make the Kingdom of Benin rejoin the trans-Atlantic slave trade after centuries of non-participation. Such benefits included military technology (specifically guns and gunpowder), gold, or simply maintaining amicable trade relationships with European nations. The slave trade was, therefore, a means for some African elites to gain economic advantages. Historian Walter Rodney estimates that by c. 1770, the King of Dahomey was earning about £250,000 per year by selling captive African soldiers and enslaved people to European slave traders. Many West African countries also already had a tradition of holding slaves, which was expanded into trade with Europeans.
The Atlantic trade brought new crops to Africa, as well as more efficient currencies, which were adopted by West African merchants. This can be interpreted as an institutional reform which reduced the cost of doing business. But the developmental benefits were limited as long as the business included slaving.
Both Thornton and Fage contend that while African political elite may have ultimately benefited from the slave trade, their decision to participate may have been influenced more by what they could lose by not participating. In Fage's article "Slavery and the Slave Trade in the Context of West African History", he notes that for West Africans "... there were really few effective means of mobilizing labour for the economic and political needs of the state" without the slave trade.
Effects on the British economy
Historian Eric Williams argued in 1944 that the profits Britain received from its sugar colonies, and from the slave trade between Africa and the Caribbean, were a major factor in financing Britain's industrial revolution. However, he argued that by the time of its abolition in 1833, slavery had lost its profitability and it was in Britain's economic interest to ban it.
Other researchers and historians have strongly contested what has come to be referred to in academia as the "Williams thesis". David Richardson has concluded that the profits from the slave trade amounted to less than 1% of domestic investment in Britain. Economic historian Stanley Engerman finds that even without subtracting the associated costs of the slave trade (e.g., shipping costs, slave mortality, mortality of British people in Africa, defense costs) or reinvestment of profits back into the slave trade, the total profits from the slave trade and of West Indian plantations amounted to less than 5% of the British economy during any year of the Industrial Revolution. Engerman's 5% figure gives as much benefit of the doubt as possible to the Williams argument, not solely because it does not take into account the associated costs of the slave trade to Britain, but also because it carries the full-employment assumption from economics and treats the gross value of slave trade profits as a direct contribution to Britain's national income. Historian Richard Pares, in an article written before Williams' book, dismissed the influence of wealth generated from the West Indian plantations upon the financing of the Industrial Revolution, stating that whatever substantial flow of investment there was from West Indian profits into industry occurred after emancipation, not before.
Seymour Drescher and Robert Anstey argue the slave trade remained profitable until the end, and that moralistic reform, not economic incentive, was primarily responsible for abolition. They say slavery remained profitable in the 1830s because of innovations in agriculture.
Karl Marx in his influential economic history of capitalism Das Kapital wrote that "... the turning of Africa into a warren for the commercial hunting of black-skins, signaled the rosy dawn of the era of capitalist production". He argued that the slave trade was part of what he termed the "primitive accumulation" of capital, the 'non-capitalist' accumulation of wealth that preceded and created the financial conditions for Britain's industrialisation.
The demographic effects of the slave trade are a controversial and highly debated issue. Although scholars such as Paul Adams and Erick D. Langer have estimated that sub-Saharan Africa represented about 18 percent of the world's population in 1600 and only 6 percent in 1900, the reasons for this demographic shift remain the subject of much debate. In addition to the depopulation Africa experienced because of the slave trade, African nations were left with severely imbalanced gender ratios, with females comprising up to 65 percent of the population in hard-hit areas such as Angola. Moreover, many scholars (such as Barbara N. Ramusack) have suggested a link between the prevalence of prostitution in Africa today and the temporary marriages that were enforced during the course of the slave trade.
Walter Rodney argued that the export of so many people had been a demographic disaster which left Africa permanently disadvantaged when compared to other parts of the world, and it largely explains the continent's continued poverty. He presented numbers showing that Africa's population stagnated during this period, while those of Europe and Asia grew dramatically. According to Rodney, all other areas of the economy were disrupted by the slave trade as the top merchants abandoned traditional industries in order to pursue slaving, and the lower levels of the population were disrupted by the slaving itself.
Others have challenged this view. J. D. Fage compared the demographic effect on the continent as a whole. David Eltis has compared the numbers to the rate of emigration from Europe during this period: in the 19th century alone over 50 million people left Europe for the Americas, a far higher rate than the number ever taken from Africa.
Other scholars accused Walter Rodney of mischaracterizing the trade between Africans and Europeans. They argue that Africans, or more accurately African elites, deliberately let European traders join in an already large trade in enslaved people and that they were not patronized.
As Joseph E. Inikori argues, the history of the region shows that the effects were still quite deleterious. He argues that the African economic model of the period was very different from the European model, and could not sustain such population losses. Population reductions in certain areas also led to widespread problems. Inikori also notes that after the suppression of the slave trade Africa's population almost immediately began to rapidly increase, even prior to the introduction of modern medicines.
Legacy of racism
Walter Rodney states,
The role of slavery in promoting racist prejudice and ideology has been carefully studied in certain situations, especially in the USA. The simple fact is that no people can enslave another for four centuries without coming out with a notion of superiority, and when the colour and other physical traits of those peoples were quite different it was inevitable that the prejudice should take a racist form.
End of the Atlantic slave trade
In Britain, America, Portugal and in parts of Europe, opposition developed against the slave trade. Davis says that abolitionists assumed "that an end to slave imports would lead automatically to the amelioration and gradual abolition of slavery". In Britain and America, opposition to the trade was led by the Religious Society of Friends (Quakers) and establishment Evangelicals such as William Wilberforce. Many people joined the movement and began to protest against the trade, but they were opposed by the owners of the colonial holdings. Following Lord Mansfield's decision in 1772, slaves became free upon entering the British Isles. Under the leadership of Thomas Jefferson, the new state of Virginia in 1778 became the first state and one of the first jurisdictions anywhere to stop the importation of slaves for sale; it made it a crime for traders to bring in slaves from out of state or from overseas for sale, while migrants from other states were allowed to bring their own slaves. The new law freed all slaves brought in illegally after its passage and imposed heavy fines on violators. Denmark, which had been active in the slave trade, was the first country to ban the trade through legislation in 1792, which took effect in 1803. Britain banned the slave trade in 1807, imposing stiff fines for any slave found aboard a British ship (see Slave Trade Act 1807). The Royal Navy moved to stop other nations from continuing the slave trade and declared that slaving was equal to piracy and was punishable by death. The United States Congress passed the Slave Trade Act of 1794, which prohibited the building or outfitting of ships in the U.S. for use in the slave trade. In 1807 Congress outlawed the importation of slaves beginning on 1 January 1808, the earliest date permitted by the United States Constitution for such a ban.
William Wilberforce was a driving force in the British Parliament in the fight against the slave trade in the British Empire. On 22 February 1807, the House of Commons passed a motion by 283 votes to 16 to abolish the Atlantic slave trade. The United States abolished its Atlantic slave trade the same year, but not its internal slave trade, which became the dominant feature of American slavery until the 1860s. In 1805 a British Order-in-Council had restricted the importation of slaves into colonies that had been captured from France and the Netherlands. Britain continued to press other nations to end their trade: in 1810 an Anglo-Portuguese treaty was signed whereby Portugal agreed to restrict its trade into its colonies; an 1813 Anglo-Swedish treaty saw Sweden outlaw its slave trade; in the Treaty of Paris of 1814 France agreed with Britain that the trade was "repugnant to the principles of natural justice" and agreed to abolish the slave trade within five years; and in the 1814 Anglo-Dutch treaty the Netherlands outlawed its slave trade.
The Royal Navy's West Africa Squadron, established in 1808, grew by 1850 to a force of some 25 vessels, which were tasked with combating slavery along the African coast. Between 1807 and 1860, the Royal Navy's Squadron seized approximately 1,600 ships involved in the slave trade and freed 150,000 Africans who were aboard these vessels. Several hundred slaves a year were transported by the navy to the British colony of Sierra Leone, where they were made to serve as "apprentices" in the colonial economy until the Slavery Abolition Act 1833.
The last recorded slave ship to land on U.S. soil was the Clotilde, which in 1859 illegally smuggled a number of Africans into the town of Mobile, Alabama. The Africans on board were sold as slaves; however, slavery in the U.S. was abolished five years later following the end of the American Civil War in 1865. The last survivor of the voyage was Cudjoe Lewis, who died in 1935. The last country to ban the Atlantic slave trade was Brazil, in 1831. However, a vigorous illegal trade continued to ship large numbers of enslaved people to Brazil and also to Cuba until the 1860s, when British enforcement and further diplomacy finally ended the Atlantic slave trade. In 1870 Portugal ended the last trade route with the Americas, in which the last country importing slaves was Brazil. In Brazil, however, slavery itself was not ended until 1888, making it the last country in the Americas to end involuntary servitude.
The historian Walter Rodney contends that it was a decline in the profitability of the triangular trades that made it possible for certain basic human sentiments to be asserted at the decision-making level in a number of European countries, Britain being the most crucial because it was the greatest carrier of African captives across the Atlantic. Rodney states that changes in productivity, technology, and patterns of exchange in Europe and the Americas informed the decision by the British to end their participation in the trade in 1807. In 1809 President James Madison outlawed the slave trade with the United States.
Nevertheless, Michael Hardt and Antonio Negri argue that abolition was neither a strictly economic nor a strictly moral matter. First, slavery was (in practice) still beneficial to capitalism, providing not only an influx of capital but also instilling the discipline of hardship into workers (a form of "apprenticeship" to the capitalist industrial plant). The more "recent" argument of a "moral shift" (the basis of the previous lines of this article) is described by Hardt and Negri as an "ideological" apparatus intended to eliminate the sentiment of guilt in western society. Although moral arguments did play a secondary role, they usually had major resonance when used as a strategy to undercut competitors' profits. This argument holds that Eurocentric history has been blind to the most important element in this fight for emancipation: the constant revolt and antagonism of slave revolts, the most important of which was the Haitian Revolution. The shock of this revolution in 1804 certainly introduced an essential political argument into the ending of the slave trade, which followed only three years later.
The African diaspora created by slavery has been a complex, interwoven part of American history and culture. In the United States, the success of Alex Haley's book Roots: The Saga of an American Family, published in 1976, and the subsequent television miniseries based upon it, Roots, broadcast on the ABC network in January 1977, led to an increased interest in and appreciation of African heritage amongst the African-American community. Their influence led many African Americans to begin researching their family histories and making visits to West Africa. In turn, a tourist industry grew up to supply them. One notable example is the Roots Homecoming Festival held annually in the Gambia, in which rituals are held through which African Americans can symbolically "come home" to Africa. Disputes have, however, developed between African Americans and African authorities over how to display historic sites that were involved in the Atlantic slave trade, with prominent voices among the former criticising the latter for not displaying such sites sensitively, but instead treating them as a commercial enterprise.
"Back to Africa"
In 1816, a group of wealthy European-Americans, some of whom were abolitionists and others who were racial segregationists, founded the American Colonization Society with the express desire of returning African Americans who were in the United States to West Africa. In 1820, they sent their first ship to Liberia, and within a decade around two thousand African Americans had been settled in the west African country. Such re-settlement continued throughout the 19th century, increasing following the deterioration of race relations in the southern states of the US following Reconstruction in 1877.
The Rastafari movement, which originated in Jamaica, where 92% of the population are descended from victims of the Atlantic slave trade, has made great efforts to publicize the slavery and to ensure it is not forgotten, especially through reggae music.
In 1998, UNESCO designated 23 August as International Day for the Remembrance of the Slave Trade and its Abolition. Since then there have been a number of events recognizing the effects of slavery.
At the 2001 World Conference Against Racism in Durban, South Africa, African nations demanded a clear apology for slavery from the former slave-trading countries. Some nations were ready to express an apology, but the opposition, mainly from the United Kingdom, Portugal, Spain, the Netherlands, and the United States, blocked attempts to do so. Fear of claims for monetary compensation may have been one of the reasons for the opposition. As of 2009, efforts are underway to create a UN Slavery Memorial as a permanent remembrance of the victims of the Atlantic slave trade.
In 1999, President Mathieu Kerekou of Benin (formerly the Kingdom of Dahomey) issued a national apology for the role Africans played in the Atlantic slave trade. Luc Gnacadja, minister of environment and housing for Benin, later said: "The slave trade is a shame, and we do repent for it." Researchers estimate that 3 million slaves were exported out of the Slave Coast bordering the Bight of Benin.
On 30 January 2006, Jacques Chirac (the then French President) said that 10 May would henceforth be a national day of remembrance for the victims of slavery in France, marking the day in 2001 when France passed a law recognising slavery as a crime against humanity.
At a UN conference on the Atlantic slave trade in 2001, the Dutch Minister for Urban Policy and Integration of Ethnic Minorities, Roger van Boxtel, said that the Netherlands "recognizes the grave injustices of the past." On 1 July 2013, at the 150th anniversary of the abolition of slavery in the Dutch West Indies, the Dutch government expressed "deep regret and remorse" for the involvement of the Netherlands in the Atlantic slave trade. The Dutch government has stopped short of a formal apology for its involvement, as an apology would imply that it considers its own past actions unlawful, which could lead to litigation for monetary compensation by descendants of the enslaved.
In 2009, the Civil Rights Congress of Nigeria wrote an open letter to all African chieftains who had participated in the trade, calling for an apology for their role in the Atlantic slave trade: "We cannot continue to blame the white men, as Africans, particularly the traditional rulers, are not blameless. In view of the fact that the Americans and Europe have accepted the cruelty of their roles and have forcefully apologized, it would be logical, reasonable and humbling if African traditional rulers ... [can] accept blame and formally apologize to the descendants of the victims of their collaborative and exploitative slave trade."
In 1998, President Yoweri Museveni of Uganda called on tribal chieftains to apologize for their involvement in the slave trade: "African chiefs were the ones waging war on each other and capturing their own people and selling them. If anyone should apologise it should be the African chiefs. We still have those traitors here even today."
On 9 December 1999, Liverpool City Council passed a formal motion apologizing for the City's part in the slave trade. It was unanimously agreed that Liverpool acknowledges its responsibility for its involvement in three centuries of the slave trade. The City Council has made an unreserved apology for Liverpool's involvement and the continual effect of slavery on Liverpool's black communities.
On 27 November 2006, British Prime Minister Tony Blair made a partial apology for Britain's role in the African slave trade. However, African rights activists denounced it as "empty rhetoric" that failed to address the issue properly, feeling that the apology stopped short of a full one in order to prevent any legal claims. Blair apologized again on 14 March 2007.
On 24 August 2007, Ken Livingstone (Mayor of London) apologized publicly for London's role in the slave trade. "You can look across there to see the institutions that still have the benefit of the wealth they created from slavery", he said, pointing towards the financial district, before breaking down in tears. He said that London was still tainted by the horrors of slavery. Jesse Jackson praised Mayor Livingstone and added that reparations should be made.
On 24 February 2007, the Virginia General Assembly passed House Joint Resolution Number 728 acknowledging "with profound regret the involuntary servitude of Africans and the exploitation of Native Americans, and call for reconciliation among all Virginians". With the passing of that resolution, Virginia became the first of the 50 United States to acknowledge through the state's governing body their state's involvement in slavery. The passing of this resolution came on the heels of the 400th-anniversary celebration of the city of Jamestown, Virginia, which was the first permanent English colony to survive in what would become the United States. Jamestown is also recognized as one of the first slave ports of the American colonies. On 31 May 2007, the Governor of Alabama, Bob Riley, signed a resolution expressing "profound regret" for Alabama's role in slavery and apologizing for slavery's wrongs and lingering effects. Alabama is the fourth state to pass a slavery apology, following votes by the legislatures in Maryland, Virginia, and North Carolina.
On 30 July 2008, the United States House of Representatives passed a resolution apologizing for American slavery and subsequent discriminatory laws. The language included a reference to the "fundamental injustice, cruelty, brutality and inhumanity of slavery and Jim Crow" segregation. On 18 June 2009, the United States Senate issued an apologetic statement decrying the "fundamental injustice, cruelty, brutality, and inhumanity of slavery". The news was welcomed by President Barack Obama.
- Arab slave trade
- Atlantic history
- Atlantic slave trade to Brazil
- Barbary slave trade
- Bristol slave trade
- History of slavery
- Indian indenture system
- Slave Trade Acts
- Slavery in Africa
- Slavery in Canada
- Slavery in the colonial United States
- Slavery in contemporary Africa
- Slavery in the United States
- United States labor law
- "The capture and sale of slaves". Liverpool: International Slavery Museum. Retrieved 14 October 2015.
- Mannix, Daniel (1962). Black Cargoes. The Viking Press. pp. Introduction–1–5.
- Weber, Greta (June 5, 2015). "Shipwreck Shines Light on Historic Shift in Slave Trade". National Geographic Society. Retrieved June 8, 2015.
- Klein, Herbert S., and Jacob Klein. The Atlantic Slave Trade. Cambridge University Press, 1999, pp. 103–139.
- Ronald Segal, The Black Diaspora: Five Centuries of the Black Experience Outside Africa (New York: Farrar, Straus and Giroux, 1995), ISBN 0-374-11396-3, p. 4. "It is now estimated that 11,863,000 slaves were shipped across the Atlantic." (Note in original: Paul E. Lovejoy, "The Impact of the Atlantic Slave Trade on Africa: A Review of the Literature", in Journal of African History 30 (1989), p. 368.)
- Meredith, Martin (2014). The Fortunes of Africa. New York: PublicAffairs. p. 191. ISBN 9781610396356.
- Eltis, David and Richardson, David, "The Numbers Game". In: Northrup, David: The Atlantic Slave Trade, 2nd edn, Houghton Mifflin Co., 2002, p. 95.
- Basil Davidson. The African Slave Trade.
- Thornton 1998, pp. 15–17.
- Christopher 2006, p. 127.
- Thornton 1998, p. 13.
- Chaunu 1969, pp. 54–58.
- Thornton 1998, p. 24.
- Thornton 1998, pp. 24–26.
- Thornton 1998, p. 27.
- "Historical survey, Slave societies". Encyclopædia Britannica. Archived from the original on 2014-10-06.
- Ferro, Marc (1997). Colonization: A Global History. Routledge, p. 221, ISBN 978-0-415-14007-2.
- "Slave trade: a root of contemporary African Crisis", Africa Economic Analysis 2000.
- Elikia M'bokolo, "The impact of the slave trade on Africa", Le Monde diplomatique, 2 April 1998.
- Thornton, p. 112.
- Thornton, p. 310.
- Slave Trade Debates 1806, Colonial History Series, Dawsons of Pall Mall, London 1968, pp. 203–204.
- Thornton, p. 45.
- Thornton, p. 94.
- Thornton 1998, pp. 28–29.
- Thornton 1998, p. 31.
- Thornton 1998, pp. 29–31.
- Thornton 1998, pp. 37.
- Thornton 1998, p. 38.
- Thornton 1998, p. 39.
- Thornton 1998, p. 40.
- Rodney 1972, pp. 95–113.
- Austen 1987, pp. 81–108.
- Thornton 1998, p. 44.
- Anne C. Bailey, African Voices of the Atlantic Slave Trade: Beyond the Silence and the Shame, Beacon Press, 2005, p. 62.
- Anstey, Roger: The Atlantic Slave Trade and British abolition, 1760–1810. London: Macmillan, 1975, p. 5.
- P. C. Emmer, The Dutch in the Atlantic Economy, 1580–1880. Trade, Slavery and Emancipation (1998), p. 17.
- Klein 2010.
- Keith Bradley; Paul Cartledge (2011). The Cambridge World History of Slavery. Cambridge University Press. p. 583. ISBN 0-521-84066-X.
- Hair & Law 1998, p. 257.
- Christopher 2006, p. 6.
- Domingues da Silva, Daniel B. (1 January 2013). "The Atlantic Slave Trade from Angola: A Port-by-Port Estimate of Slaves Embarked, 1701–1867". The International Journal of African Historical Studies. 46 (1).
- Lovejoy, Paul E., "The Volume of the Atlantic Slave Trade. A Synthesis". In: Northrup, David (ed.): The Atlantic Slave Trade. D.C. Heath and Company, 1994.
- "Skeletons Discovered: First African Slaves in New World", 31 January 2006, LiveScience.com. Accessed September 27, 2006.
- Inikori, Joseph E.; Engerman, Stanley L. The Atlantic Slave Trade: Effects on Economies, Societies and Peoples in Africa, the Americas, and Europe.
- "Smallpox Through History". Archived from the original on 2009-10-31.
- "History Kingdom of Kongo". www.africafederation.net.
- Solow, Barbara (ed.). Slavery and the Rise of the Atlantic System, Cambridge: Cambridge University Press, 1991.
- "Notes on the State of Virginia Query 18".
- Ipsen, Pernille (2015). Daughters of the Trade: Atlantic Slavers and Interracial Marriage on the Gold Coast. University of Pennsylvania Press. pp. 1, 21, 31. ISBN 978-0-8122-4673-5.
- "Historical survey > The international slave trade".
- Kitchin, Thomas (1778). The Present State of the West-Indies: Containing an Accurate Description of What Parts Are Possessed by the Several Powers in Europe. London: R. Baldwin. p. 21.
- Thornton, p. 304.
- Thornton, p. 305.
- Thornton, p. 311.
- Thornton, p. 122.
- Howard Winant (2001), The World is a Ghetto: Race and Democracy Since World War II, Basic Books, p. 58.
- Catherine Lowe Besteman, Unraveling Somalia: Race, Class, and the Legacy of Slavery (University of Pennsylvania Press: 1999), pp. 83–84.
- Kevin Shillington, ed. (2005), Encyclopedia of African History, CRC Press, vol. 1, pp. 333–34; Nicolas Argenti (2007), The Intestines of the State: Youth, Violence and Belated Histories in the Cameroon Grassfields, University of Chicago Press, p. 42.
- Rights & Treatment of Slaves Archived 2010-12-23 at the Wayback Machine. Gambia Information Site.
- Mungo Park, Travels in the Interior of Africa v. II, Chapter XXII – War and Slavery.
- The Negro Plot Trials: A Chronology. Archived 2010-07-22 at the Wayback Machine
- Lovejoy, Paul E. Transformations in Slavery. Cambridge University Press, 2000.
- Inikori, Joseph (1992). The Atlantic Slave Trade: Effects on Economies, Societies and Peoples in Africa, the Americas, and Europe. Duke University Press. p. 120.
- Midlo Hall, Gwendolyn (2007). Slavery and African Ethnicities in the Americas. University of North Carolina Press. ISBN 978-0-8078-5862-2. Retrieved 2011-01-24.
- Quick guide: The slave trade; Who were the slaves? BBC News, 15 March 2007.
- Stannard, David. American Holocaust. Oxford University Press, 1993.
- Paths of the Atlantic Slave Trade: Interactions, Identities, and Images.
- Patrick Manning, "The Slave Trade: The Formal Dermographics of a Global System" in Joseph E. Inikori and Stanley L. Engerman (eds), The Atlantic Slave Trade: Effects on Economies, Societies and Peoples in Africa, the Americas, and Europe (Duke University Press, 1992), pp. 117–44, online at pp. 119–20.
- Maddison, Angus. Contours of the world economy 1–2030 AD: Essays in macro-economic history. Oxford University Press, 2007.
- Gomez, Michael A. Exchanging Our Country Marks. Chapel Hill, 1998
- Thornton, John. Africa and Africans in the Making of the Atlantic World, 1400–1800, Cambridge University Press, 1998.
- Stride, G. T., and C. Ifeka. Peoples and Empires of West Africa: West Africa in History 1000–1800. Nelson, 1986.
- Hochschild, Adam (1998). King Leopold's Ghost: A Story of Greed, Terror, and Heroism in Colonial Africa. Houghton Mifflin Books. ISBN 0-618-00190-5.
- Winthrop, reading by John Thornton, "African Political Ethics and the Slave Trade" Archived March 16, 2010, at the Wayback Machine, Millersville College.
- Museum Theme: The Kingdom of Dahomey, Musee Ouidah.
- "Dahomey (historical kingdom, Africa)", Encyclopædia Britannica.
- "Benin seeks forgiveness for role in slave trade", Final Call, 8 October 2002.
- Le Mali précolonial. Archived 2011-12-01 at the Wayback Machine
- The Story of Africa, BBC.
- "The Anglo-American Magazine". V. July–December 1854. Retrieved 2 July 2014.
- African Slave Owners, BBC.
- Meltzer, Milton. Slavery: A World History. Da Capo Press, 1993.
- Wolfe, Brendan. "Slave Ships and the Middle Passage". encyclopediavirginia.org. Retrieved 24 March 2016.
- "Raymond L. Cohn". Archived from the original on 2007-06-22.
- Cohn, Raymond L. "Deaths of Slaves in the Middle Passage", Journal of Economic History, September 1985.
- Bernard Edwards; Bernard Edwards (Captain.) (2007). Royal Navy Versus the Slave Traders: Enforcing Abolition at Sea 1808–1898. Pen & Sword Books. pp. 26–27. ISBN 978-1-84415-633-7.
- Hochschild, Adam (2005). Bury the Chains: Prophets, Slaves, and Rebels in the First Human Rights Crusade. Houghton Mifflin. p. 94. ISBN 0618104690.
- Marcus Rediker (4 October 2007). The Slave Ship: A Human History. Penguin Publishing Group. p. 138. ISBN 978-1-4406-2084-3.
- Kiple, Kenneth F. (2002). The Caribbean Slave: A Biological History. Cambridge University Press. p. 65. ISBN 0-521-52470-9.
- Kelton, Paul (2007). Epidemics and Enslavement: Biological Catastrophe in the Native Southeast, 1492–1715. Lincoln: University of Nebraska Press. ISBN 9780803215573. OCLC 182560175.
- Krieg, Joann P. (1992). Epidemics in the Modern World. New York: Twayne Publishers. ISBN 0805788522. OCLC 25710386.
- Watts, Sheldon J. (1997). Epidemics and History: Disease, Power, and Imperialism. New Haven: Yale University Press. ISBN 0585356203. OCLC 47009810.
- Diamond, Jared; Panosian, Claire (2006). Hämäläinen, Pekka (ed.). When Disease Makes History: Epidemics and Great Historical Turning Points. Helsinki: Helsinki University Press. pp. 18–19, 25. ISBN 9515706408.
- "BBC – History – British History in depth: British Slaves on the Barbary Coast".
- Health In Slavery. Archived 2006-10-03 at the Wayback Machine
- "European traders". International Slavery Museum. Retrieved 7 July 2014.
- Elkins, Stanley: Slavery. New York: Universal Library, 1963, p. 48.
- Rawley, James: London, Metropolis of the Slave Trade, 2003.
- Anstey, Roger: The Atlantic Slave Trade and British Abolition, 1760–1810. London: Macmillan, 1975.
- "Slave-grown cotton in greater Manchester", Revealing Histories.
- Williams, David J. (2005). "The Birmingham Gun Trade and The American System of Manufactures" (PDF). Trans. Newcomen Soc. Archived from the original (PDF) on 2015-10-21. Retrieved 3 October 2015.
- Wynter, Sylvia (1984a). "New Seville and the Conversion Experience of Bartolomé de Las Casas: Part One". Jamaica Journal. 17 (2): 25–32.
- Dauenhauer, Nora Marks; Richard Dauenhauer; Lydia T. Black (2008). Anóoshi Lingít Aaní Ká, Russians in Tlingit America: The Battles of Sitka, 1802 and 1804. Seattle: University of Washington Press. pp. XXVI. ISBN 978-0-295-98601-2.
- Stephen D. Behrendt, David Richardson, and David Eltis, W. E. B. Du Bois Institute for African and African-American Research, Harvard University. Based on "records for 27,233 voyages that set out to obtain slaves for the Americas". Stephen Behrendt (1999). "Transatlantic Slave Trade". Africana: The Encyclopedia of the African and African American Experience. New York: Basic Civitas Books. ISBN 0-465-00071-1.
- Curtin, The Atlantic Slave Trade, 1972, p. 88.
- Daudin 2004.
- "Haiti, 1789 to 1806". www.fsmitha.com.
- Digital History. Archived February 26, 2009, at the Wayback Machine
- UN report. Archived January 1, 2016, at the Wayback Machine
- Walter Rodney, How Europe Underdeveloped Africa. ISBN 0950154644.
- Manning, Patrick: "Contours of Slavery and Social change in Africa". In: Northrup, David (ed.): The Atlantic Slave Trade. D.C. Heath & Company, 1994, pp. 148–160.
- Thornton, John. A Cultural History of the Atlantic World 1250–1820. 2012, p. 64.
- Fage, J. D. "Slavery and the Slave Trade in the Context of West African History", The Journal of African History, Vol. 10. No 3, 1969, p. 400.
- Baten, Jörg (2016). A History of the Global Economy. From 1500 to the Present. Cambridge University Press. p. 321. ISBN 9781107507180.
- Eric Williams, Capitalism & Slavery (University of North Carolina Press, 1944), pp. 98–107, 169–177.
- David Richardson, "The British Empire and the Atlantic Slave Trade, 1660–1807," in P. J. Marshall, ed. The Oxford History of the British Empire: Volume II: The Eighteenth Century (1998), pp. 440–64.
- Stanley L. Engerman. "The Slave Trade and British Capital Formation in the Eighteenth Century". 46: 430–443. JSTOR 3113341.
- Richard Pares. "The Economic Factors in the History of the Empire". 7: 119–144. JSTOR 2590147.
- J.R. Ward, "The British West Indies in the Age of Abolition," in P. J. Marshall, ed. The Oxford History of the British Empire: Volume II: The Eighteenth Century (1998), pp. 415–39.
- Marx, Karl. "Chapter Thirty-One: Genesis of the Industrial Capitalist". Karl Marx: Capital Volume One. Retrieved 21 February 2014.
the turning of Africa into a warren for the commercial hunting of black-skins, signalised the rosy dawn of the era of capitalist production. These idyllic proceedings are the chief momenta of primitive accumulation.
- Adams, Paul; et al. (2000). Experiencing World History. New York: New York University Press. p. 334.
- Ramusack, Barbara (1999). Women in Asia: Restoring Women to History. Indiana University Press. p. 89.
- David Eltis, Economic Growth and the Ending of the Transatlantic Slave Trade.
- Thornton, John. Africa and Africans in the Making of the Atlantic World, 1400–1800. Cambridge University Press, 1992.
- Joseph E. Inikori, "Ideology versus the Tyranny of Paradigm: Historians and the Impact of the Atlantic Slave Trade on African Societies", African Economic History, 1994.
- Williams, Eric (1994) . Capitalism and Slavery. p. 7.
- David Brion Davis, The Problem of Slavery in the Age of Revolution: 1770–1823 (1975), p. 129.
- Library of Society of Friends Subject Guide: Abolition of the Slave Trade.
- Paul E. Lovejoy (2000). Transformations in Slavery: a history of slavery in Africa, Cambridge University Press, p. 290.
- John E. Selby and Don Higginbotham, The Revolution in Virginia, 1775–1783 (2007), p. 158.
- Erik S. Root, All Honor to Jefferson?: The Virginia Slavery Debates and the Positive Good Thesis (2008), p. 19.
- Bill to Prevent the Importation of Slaves 16 June 1777
- "Danish decision to abolish transatlantic slave trade in 1792". Archived from the original on 2016-09-21. Retrieved 2016-09-21.
- Marcyliena H. Morgan (2002). Language, Discourse and Power in African American Culture, Cambridge University Press, 2002, p. 20.
- Huw Lewis-Jones, "The Royal Navy and the Battle to End Slavery", BBC, 17 February 2011.
- Jo Loosemore, "Sailing against slavery", BBC, 24 September 2014.
- Caroline Davies, "William Wilberforce 'condoned slavery', Colonial Office papers reveal...Rescued slaves forced into unpaid 'apprenticeships'", The Guardian, 2 August 2010.
- "Navy News, June 2007". Retrieved 2008-02-09.
- "Question of the Month – Jim Crow Museum at Ferris State University".
- Diouf, Sylvianne (2007). Dreams of Africa in Alabama: The Slave Ship Clotilda and the Story of the Last Africans Brought to America. Oxford University Press. ISBN 0-19-531104-3.
- Hardt, M., and A. Negri(2000), Empire, Cambridge, Mass, Harvard University Press, pp. 114–128.
- "Africans in America". www.pbs.org.
- Handley 2006, pp. 21–23.
- Handley 2006, pp. 23–25.
- Osei-Tutu 2006.
- Handley 2006, p. 21.
- "Reggae and slavery", BBC, 9 October 2009.
- "Ending the Slavery Blame-Game", The New York Times, 22 April 2010.
- "Benin Officials Apologize For Role In U.S. Slave Trade". Chicago Tribune, 1 May 2000.
- "Chirac names slavery memorial day". BBC News, 30 January 2006. Accessed 22 July 2009.
- van Heeteren, Renee (25 June 2013). "BNR Juridische Zaken | 'Geen excuses want slavernij mogelijk niet onrechtmatig'" (in Dutch). BNR Nieuwsradio. Retrieved 8 March 2018.
- Smith, David. "African chiefs urged to apologise for slave trade". BBC News. Retrieved 1 March 2014.
- National Museums Liverpool, "Liverpool and the transatlantic slave trade". Accessed 31 August 2010.
- "Blair 'sorrow' over slave trade". BBC News, 27 November 2006. Accessed 15 March 2007.
- "Blair 'sorry' for UK slavery role". BBC News, 14 March 2007. Accessed 15 March 2007.
- "Livingstone breaks down in tears at slave trade memorial". Daily Mail, 24 August 2007. Accessed 22 July 2009.
- Muir, Hugh (24 August 2007). "Livingstone weeps as he apologises for slavery". The Guardian. Retrieved 30 July 2014.
- House Joint Resolution Number 728. Commonwealth of Virginia. Accessed 22 July 2009.
- Associated Press. "Alabama Governor Joins Other States in Apologizing For Role in Slavery" Archived 2013-05-22 at the Wayback Machine. Fox News, 31 May 2007. Accessed 22 July 2009.
- Fears, Darryl. "House Issues An Apology For Slavery". The Washington Post, 30 July 2008, p. A03. Accessed 22 July 2009.
- Agence France-Presse. "Obama praises 'historic' Senate slavery apology". Google News, 18 June 2009. Accessed 22 July 2009.
- Austen, Ralph (1987). African Economic History: Internal Development and External Dependency. London: James Currey. ISBN 978-0-85255-009-0.
- Christopher, Emma (2006). Slave Ship Sailors and Their Captive Cargoes, 1730–1807. Cambridge: Cambridge University Press. ISBN 0-521-67966-4.
- Rodney, Walter (1972). How Europe Underdeveloped Africa. London: Bogle L'Ouverture. ISBN 978-0-9501546-4-0.
- Thornton, John (1998). Africa and Africans in the Making of the Atlantic World, 1400–1800 (2nd ed.). New York: Cambridge University Press. ISBN 978-0-521-62217-2.
- Handley, Fiona J. L. (2006). "Back to Africa: Issues of hosting "Roots" tourism in West Africa". African Re-Genesis: Confronting Social Issues in the Diaspora. London: UCL Press: 20–31.
- Osei-Tutu, Brempong (2006). "Contested Monuments: African-Americans and the commoditization of Ghana's slave castles". African Re-Genesis: Confronting Social Issues in the Diaspora. London: UCL Press: 09–19.
- Anstey, Roger: The Atlantic Slave Trade and British Abolition, 1760–1810. London: Macmillan, 1975. ISBN 0-333-14846-0.
- Blackburn, Robin (2011). The American Crucible: Slavery, Emancipation and Human Rights. London & New York: Verso. ISBN 978-1-84467-569-2.
- Clarke, Dr. John Henrik: Christopher Columbus and the Afrikan Holocaust: Slavery and the Rise of European Capitalism. Brooklyn, NY: A & B Books, 1992. ISBN 1-881316-14-9.
- Curtin, Philip D. (1969). The Atlantic Slave Trade. Madison: University of Wisconsin Press. ISBN 9780299054007. OCLC 46413.
- Daudin, Guillaume (2004). "Profitability of Slave and Long-Distance Trading in Context: The Case of Eighteenth-Century France". The Journal of Economic History. 64 (1): 144–171. doi:10.1017/S0022050704002633. ISSN 1471-6372.
- Drescher, Seymour (1999). From Slavery to Freedom : Comparative Studies in the Rise and Fall of Atlantic Slavery. New York: New York University Press. ISBN 0333737482. OCLC 39897280.
- Eltis, David: "The volume and structure of the transatlantic slave trade: a reassessment", William and Mary Quarterly (2001): 17-46. in JSTOR
- Emmer, Pieter C.: The Dutch in the Atlantic Economy, 1580–1880. Trade, Slavery and Emancipation. Variorum Collected Studies Series CS614. Aldershot [u.a.]: Variorum, 1998. ISBN 0-86078-697-8.
- Eli Faber (1998). Jews, Slaves, and the Slave Trade: Setting the Record Straight. NYU Press. ISBN 9780814728796., argues the role was minimal
- Gleeson, David T., and Simon Lewis (eds): Ambiguous Anniversary: The Bicentennial of the International Slave Trade Bans (University of South Carolina Press; 2012) 207 pp.
- Gomez, Michael Angelo: Exchanging Our Country Marks (The Transformation of African Identities in the Colonial and AnteBellum South). Chapel Hill, N.C.: The University of North Carolina Press, 1998. ISBN 0-8078-4694-5.
- Guasco, Michael. Slaves and Englishmen: Human Bondage in the Early Modern Atlantic. Philadelphia, PA: University of Pennsylvania Press, 2014.
- Hall, Gwendolyn Midlo: Slavery and African Ethnicities in the Americas: Restoring the Links. Chapel Hill, N.C.: The University of North Carolina Press, 2006. ISBN 0-8078-2973-0.
- Horne, Gerald: The Deepest South: The United States, Brazil, and the African Slave Trade. New York, NY: New York University Press, 2007. ISBN 978-0-8147-3688-3, ISBN 978-0-8147-3689-0.
- Inikori, Joseph E., and Stanley L. Engerman (eds) (1992). The Atlantic Slave Trade: Effects on Economies, Societies and Peoples in Africa, the Americas, and Europe. Duke UP. ISBN 0822382377.CS1 maint: Uses authors parameter (link)
- Klein, Herbert S.: The Atlantic Slave Trade (2nd edn, 2010).
- Lindsay, Lisa A. Captives as Commodities: The Transatlantic Slave Trade. Prentice Hall, 2008. ISBN 978-0-13-194215-8
- McMillin, James A. The Final Victims: Foreign Slave Trade to North America, 1783–1810, (Includes database on CD-ROM) ISBN 978-1-57003-546-3
- Meltzer, Milton: Slavery: A World History. New York: Da Capo Press, 1993. ISBN 0-306-80536-7.
- Northrup, David: The Atlantic Slave Trade (3rd edn, 2010)
- Rawley, James A., and Stephen D. Behrendt: The Transatlantic Slave Trade: A History (University of Nebraska Press, 2005)
- Rediker, Marcus (2007). The Slave Ship: A Human History. New York, NY: Viking Press. ISBN 978-0-670-01823-9. Archived from the original on 2012-03-31.
- Rodney, Walter: How Europe Underdeveloped Africa. Washington, D.C.: Howard University Press; Revised edn, 1981. ISBN 0-88258-096-5.
- Rodriguez, Junius P. (ed.), Encyclopedia of Emancipation and Abolition in the Transatlantic World. Armonk, NY: M.E. Sharpe, 2007. ISBN 978-0-7656-1257-1.
- Solow, Barbara (ed.), Slavery and the Rise of the Atlantic System. Cambridge: Cambridge University Press, 1991. ISBN 0-521-40090-2.
- Thomas, Hugh: The Slave Trade: The History of the Atlantic Slave Trade 1440–1870. London: Picador, 1997. ISBN 0-330-35437-X.; comprehensive history
- Thornton, John: Africa and Africans in the Making of the Atlantic World, 1400–1800, 2nd edn Cambridge University Press, 1998. ISBN 0-521-62217-4, ISBN 0-521-62724-9, ISBN 0-521-59370-0, ISBN 0-521-59649-1.
- Williams, Eric (1994) . Capitalism & Slavery. Chapel Hill: University of North Carolina Press. ISBN 0-8078-2175-6.
- Araujo, Ana Lucia. Public Memory of Slavery: Victims and Perpetrators in the South Atlantic Cambria Press, 2010. ISBN 9781604977141
|Wikimedia Commons has media related to Slavery.|
|Wikivoyage has a travel guide for Atlantic slave trade.| |
Motion - Kinematics
Kinematics is the branch of mechanics concerned with the motions of objects without being concerned with the forces that cause the motion. In this latter respect it differs from dynamics, which is concerned with the forces that affect motion. There are three basic concepts in kinematics - speed, velocity and acceleration.
Speed
The speed of an object is how fast it is moving (the same as the ordinary, everyday definition). Speed in physics is defined as the rate of change of distance with time, without regard to direction.
The standard equation for average speed is v_avg = Δd / Δt, where
- v_avg stands for average speed
- Δd stands for distance = ending position − starting position
- Δt stands for time = ending time − beginning time
For example, if a bus takes two hours to travel 100 km, then it has an average speed of 100 km ÷ 2 h = 50 km/h.
Velocity
Velocity is defined as the rate of change of position of a body in a given direction. The velocity of an object (such as a bus) is how fast it is moving in a particular direction. To specify the velocity, both a speed and a direction must be given. Continuing with the bus from the example above, if it is moving east, then its velocity is 50 km/h, east.
Acceleration
Acceleration is the rate of change of velocity. Recalling the definition of velocity, this could mean a change in speed or direction. So, if the bus goes around a curve without slowing down, still traveling at 50 km/hr, but now turning toward the south (say), then it is accelerating, even though its speed isn't changing.
Acceleration will prove to be an important topic when it comes to dynamics, which is concerned with the forces that make objects move.
Uniform motion
The simplest type of motion is where the change in distance is the same for every second; in other words, the speed is constant. We call this uniform motion, and its distance-versus-time graph is a straight line.
Notice that the speed is the slope of the graph (rise over run). The graph has the same slope at all times - the speed is constant. When the distance at time zero is zero, the deltas (change in) may be omitted and the formula v = d/t used. This is often written as d = vt. The letter v is commonly used for speed, which is the magnitude (size) of velocity.
The velocity-versus-time graph is horizontal because the velocity is constant, the same at all times. Notice that the area under that graph, a rectangle, is vt, which is the distance.
We have found relationships among the quantities:
- The slope on the distance versus time graph is the velocity
- The area under the v vs t graph is the distance
The formula only applies to uniform motion, but the graphical relationships turn out to be true for all kinds of motion.
Example 1
Sound travels at a velocity of about 340 m/s in air. How far will it go in one tenth of a second?
Example 2
Light travels at 3.0 x 10^8 m/s. How long will it take to go 10 km?
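A minimal worked sketch (in Python, not part of the original lesson) for Examples 1 and 2, using the uniform-motion formula d = vt:

```python
# Example 1: distance travelled by sound in 0.1 s.
v_sound = 340.0          # m/s
t = 0.1                  # s
print("Example 1: distance =", v_sound * t, "m")      # 34.0 m

# Example 2: time for light to travel 10 km.
v_light = 3.0e8          # m/s
d = 10_000.0             # 10 km in metres
print("Example 2: time =", d / v_light, "s")          # ~3.3e-05 s
```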
Example 3
Three year old Johnny takes off at 2 m/s. Five seconds later, his mom goes after him at 3 m/s. How far does Johnny get before his mother catches him?
- Solution 1
- It looks complicated, so let us picture it with a graph first. On this graph, the red line shows Johnny's motion and the blue line shows Mom's motion.
- You should be able to understand why the red line is drawn where it is - it starts at distance zero at time zero, and its slope must be 2 m/s. Mom's line (blue) starts 5 s later and has a slope of 3. The two meet at approximate time 15 s after Johnny starts, and distance 30 m. Graphical solutions are always approximate.
- Solution 2
Write d = vt for Johnny and again for Mom. We are interested in the time when their distances are equal.
d = vt (for Johnny)    d = vt (for Mom)
2t = 3(t − 5)    because Mom's running time is 5 s less
2t = 3t − 15     expand the bracket
0 = t − 15       subtract 2t from both sides
t = 15           add 15 to both sides
In 15 seconds, Johnny goes 30 m. Mom travels for 10 seconds at 3 m/s so her d = vt = 3x10 = 30 m is the same. The exact answer is 30 meters.
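For completeness, a short numerical check of Example 3 (assuming the speeds and the 5 s head start given above):

```python
# Johnny runs at 2 m/s from t = 0; Mom runs at 3 m/s starting 5 s later.
v_johnny, v_mom, head_start = 2.0, 3.0, 5.0

# d_johnny = 2t and d_mom = 3(t - 5) are equal when t = v_mom * head_start / (v_mom - v_johnny).
t_catch = v_mom * head_start / (v_mom - v_johnny)
print("Catch-up time:", t_catch, "s")          # 15.0 s
print("Distance:", v_johnny * t_catch, "m")    # 30.0 m
```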
Motion with constant acceleration
The next simplest type of motion is where the velocity (speed) is steadily increasing. Velocity is the slope on the distance vs time graph, so the graph must be curved upward. As time increases, the slope or velocity increases.
It is fascinating to see that the area under the velocity graph is equal to the distance in this case, too. We use the math notation A(t) to represent the area under the graph from time zero up to time t. A(t+Δt) is the area up to time t + Δt. The difference between these two areas is the blue shaded area in the diagram above:
A(t+Δt) − A(t) = shaded area = v(t) × Δt. This second equality is not quite right because v(t) × Δt is only the rectangular area, but the small triangle does not matter as we take Δt to be very small. Mathematicians say, "in the limit as Δt tends to zero."
Dividing both sides by Δt, this is [A(t+Δt) − A(t)] / Δt = v(t), so the rate of change of the area equals the velocity.
Velocity is the slope or rate of change of distance, and it is the rate of change of the area, so distance and the area under the velocity graph must be the same thing. It is probably more convincing to demonstrate this for yourself with some numerical data for the distance versus time. Any data will do; the relationship turns out to be true for any kind of motion.
Using the idea that distance is the area under the velocity graph, we can now find a formula for distance when the motion is accelerated: starting from rest, the area under the straight-line velocity graph is a triangle, so d = ½at².
- Example 4
- A falling object accelerates at 9.81 m/s². If a stone is dropped (no initial speed) over a cliff and takes 2 s to fall, how high is the cliff?
- Example 5
- If a car accelerates at 2 m/s², how long will it take to travel 100 m? (A worked sketch of both examples follows below.)
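A worked sketch of Examples 4 and 5, assuming motion from rest so that d = ½at²:

```python
import math

# Example 4: stone dropped from rest, a = 9.81 m/s^2, falling for t = 2 s.
a, t = 9.81, 2.0
height = 0.5 * a * t**2
print("Cliff height:", height, "m")                # 19.62 m

# Example 5: car from rest at 2 m/s^2 covering 100 m; solve 100 = 0.5 * a * t^2 for t.
a, d = 2.0, 100.0
t = math.sqrt(2 * d / a)
print("Time to travel 100 m:", t, "s")             # 10.0 s
```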
Acceleration is defined as the rate of change of velocity, or the slope on the velocity graph. This gives us a = Δv/Δt, or Δv = aΔt. Usually the motion begins at time zero, so Δt is the same as t, but there is often an initial velocity v₀, so Δv = v − v₀. Thus the formula can be written as v − v₀ = at, or most commonly as v = v₀ + at.
- Example 6
- If a car moving at 12 m/s accelerates at 2 m/s² for 10 seconds while passing a truck, how fast will it be going after the 10 s?
While the formulas are most convenient for most problems, it is worth remembering the graphical relationships, which can be useful for harder problems.
- Example 7
- A bullet moving at 300 m/s hits a wooden block and travels 0.015 m while decelerating in the wood. Assume that the motion is approximately uniformly decelerated. Calculate the acceleration of the bullet while stopping.
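A hedged sketch for Example 7: assuming uniform deceleration, the standard relation v² = v₀² + 2ad with final speed v = 0 gives the acceleration directly:

```python
# Bullet entering wood at 300 m/s and stopping over 0.015 m, assumed uniform deceleration.
v0, d = 300.0, 0.015           # m/s, m
a = (0.0**2 - v0**2) / (2 * d)
print("Acceleration:", a, "m/s^2")    # -3,000,000 m/s^2 (a very large deceleration)
```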
As Huygens parachuted to the surface of Titan in January 2005, a battery of telescopes around the world were watching or listening.
The results of those observations are now being collected together and published for the first time. The work gives valuable additional context within which to interpret the 'ground truth' returned by Huygens.
Hundreds of scientists, working at 25 radio and optical telescopes situated mainly around the Pacific, from where Titan would be visible at the time of Huygens descent, observed the moon before, during and after the Huygens descent. It was one of the largest ground-based observational campaigns ever to take place in support of a space mission.
The first observations began well over a year before Huygens entered the alien world's atmosphere, when scientists used the fact that Titan would pass directly in front of two distant stars. By watching the way the light faded from the stars, scientists analysed the density, wind and temperature of Titan’s atmosphere. It helped to build confidence by confirming that the atmosphere was similar to their expectations.
A year later, telescopes monitored Titan's atmosphere and its surface at infrared wavelengths for the days and weeks around the Huygens descent. Even now, those observations are of critical importance to the scientists as they continue to interpret the data returned by the probe. "We wanted to know whether the day of the descent was a special day or not on Titan, so that we can place the Huygens data in the correct context," says Olivier Witasse, a Huygens scientist at ESA's European Space Research and Technology Centre (ESTEC) in The Netherlands.
Radio telescopes were used to track Huygens. Both Single-Dish Doppler-tracking, and a Very Long Baseline Interferometry (VLBI) observation that included 17 telescopes, were planned. Doppler-tracking was expected to complement the radio experiment onboard Huygens that used the probe-orbiter link. The VLBI project was initiated about two years before the Huygens entry as a test experiment. No one could predict for certain that the Huygens signal would be detectable but, if it were detected, it would provide unique information.
"One goal of the VLBI observation was to reconstruct the probe's descent trajectory to an accuracy of ten kilometres. At Titan's distance of more than 1 billion kilometres, this is the equivalent of determining positions with an accuracy of just three metres on our own Moon. Another goal was to demonstrate this as a new technique for future missions," says Jean-Pierre Lebreton, Huygens Project Scientist.
The radio experiments worked beyond expectations and even proved to be a 'safety net' when the reception of Huygens' second communications channel failed during the descent. The data from several of Huygens’ six experiments was lost, including that required for the Huygens radio experiment to track the winds during the whole descent. The Doppler-tracking data from the Green Bank Telescope (West Virginia, America) and from Parkes (Australia) provided real-time information about the probe's drift in the winds. The processing of the VLBI data set is not yet completed but initial results look very promising.
The combined analysis of the Huygens data with that acquired by the Cassini orbiter in the past two years allowed scientists to reconstruct the movement of the probe precisely. They pinpointed its landing to 10.33 degrees south and 192.32 degrees west. The VLBI data set will provide an independent reconstruction of the trajectory. It should help to confirm and most likely refine the whole descent trajectory and the coordinates of the landing site.
Note to editors:
The results appeared on 27 July 2006 in the Journal of Geophysical Research, in the article "Overview of the coordinated ground-based observations of Titan during the Huygens mission", by O. Witasse, J-P. Lebreton et al. (111, doi: 10.1029/2005JE002640). This issue of JGR contains 12 companion papers on results about Titan.
Huygens will certainly not be the last mission to benefit from a coordinated ground-based observational campaign. Promoting such observations is one of the main activities of the EU-sponsored Europlanet project. Jean-Pierre Lebreton initiated the Titan coordinated observation campaign before the birth of Europlanet. "The Huygens experience is directly injected into the Europlanet activities," says Lebreton.
Europlanet's objective is to draw together planetary science research activities in Europe. "Europlanet's long-term goal is to pull together European resources to capitalise on the investments made in space missions. We will do this by promoting complementary and coordinated activities such as ground based observations, laboratory work, modelling and theory. Access to relevant archived laboratory data and ground-based data should become as easy as access to archived space-based planetary mission data," added Lebreton.
There is already an ongoing effort associated with ESA's comet chaser, Rosetta, which was launched on 2 March 2004. Rosetta's target was chosen after a prolonged series of ground-based observations. Even now, telescopes continue to monitor comet Churyumov-Gerasimenko as Rosetta cruises towards its 2014 rendezvous. A major coordinated ground-based campaign is expected when Rosetta reaches the comet.
For more information:
Jean-Pierre Lebreton, ESA Huygens Project Scientist
Email: jean-pierre.lebreton @ esa.int
Olivier Witasse, ESA Huygens scientist
Email: olivier.witasse @ esa.int
Water exists even on the sunlit Moon, a new study shows — trapped in glass beads but still extractable for lunar exploration.
Permanently dark craters are no longer the only places to find water on the Moon. NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) has found evidence of water molecules in the Moon's sunlit Southern Highlands.
A separate analysis of lunar topography found that areas as small as a centimeter across — but adding up to 8,000 square kilometers (3,000 square miles) — could act as cold traps where water ice collects.
Both discoveries appearing today in Nature Astronomy give new insight into water distribution on the lunar surface. But they also require follow-up study to assess their implications for future exploration of the Moon.
Water Molecules on the Moon
Spacecraft observations have revealed the presence of water ice in permanently shadowed craters at the lunar poles, where temperatures stand below the 110 K (-262°F) vacuum sublimation point of water ice.
Other studies have also found that the spectral fingerprints of hydrogen at near-infrared wavelengths of 2.8 to 3 microns are widespread on the sunlit part of the Moon. However, while those features can come from water, they can also come from hydrogen-oxygen bonds in minerals containing hydroxyl (OH) groups. So they are not proof of water molecules. Sunlit regions are exposed to harsh ultraviolet radiation that breaks up water molecules but does not split hydroxyl groups.
Seeking to resolve that ambiguity, Casey Honniball (now at NASA Goddard Space Flight Center) and colleagues searched for spectral features present only in molecular water and identified an emission band at 6 microns. Finding an instrument to make the observations was much harder, as our atmosphere is completely opaque at that wavelength. No spacecraft could perform spectroscopic imaging of the Moon at that wavelength either.
Honniball’s only option was FORCAST, the faint-object infrared camera on SOFIA, the 2.5-meter telescope that flies aboard a modified 747 above 99% of the atmosphere's water vapor. SOFIA had never before observed the Moon.
She had only 20 minutes of observing time on a nine-hour flight in 2018, but she was able to observe two regions: one near 60°S (including Clavius Crater), where high hydrogen levels had previously been measured, and one at lower northern latitudes in Mare Serenitatis, the Sea of Serenity, where hydrogen levels were low. In Nature Astronomy, her group reports that the higher-latitude, high-hydrogen site has water molecules at concentrations 100 to 400 parts per million higher than what was found in the more equatorial site.
How did water molecules withstand destruction by the harsh ultraviolet light? Honniball’s team compared spectra of the Clavius site with those of meteorites that had interacted with water and of basalts from mid-ocean ridges on Earth. The researchers suggest that perhaps water is trapped in impact glasses formed after micrometeorites smash onto the lunar surface. The impact energy vaporizes both the impactor and some of the rock it’s impacting; that vapor cools quickly to form glass. She thinks already-present hydroxyl groups combine to form water molecules that are then trapped in the glass, which protects them from incident ultraviolet light.
“They clearly detect molecular H2O on the sunlit lunar surface for the first time,” says planetary scientist Paul Hayne (University of Colorado, Boulder). (Hayne was not involved in the SOFIA observations but published a separate analysis of cold traps in the same issue of Nature Astronomy). The presence of water molecules is important because they can be extracted from mineral grains much more readily than hydroxyl groups can.
These results also are good news for SOFIA, which NASA proposed cancelling in February because of its high operating costs. At $82 million a year, SOFIA's costs are second only to the Hubble Space Telescope. Congress later restored SOFIA to the budget.
Micro Cold Traps
In a separate study that appears in the same issue of Nature Astronomy, Hayne and colleagues reported the existence of large numbers of permanently shadowed cold traps as small as one centimeter on the lunar surface.
The researchers analyzed more than 5,000 images taken by NASA’s Lunar Reconnaissance Orbiter to identify the distribution of shadows where sunlight was striking the surface at steep angles. They then calculated how many of these regions would remain dark and cold enough to retain ice even if heated by adjacent sunlit areas.
“There are tens of billions of these micro cold traps from 1 cm to 1 m, which are widely distributed from 80° latitude to the poles,” Hayne told Sky & Telescope. Accounting for these smaller cold traps increases the total known area of cold traps about 20%, to about 40,000 square kilometers or 0.15% of the lunar surface.
Both studies raise further questions. So far, SOFIA has surveyed only two small areas for water. Honniball has two more hours of observations scheduled and has requested 72 additional hours to learn how water is distributed across the Moon’s sunlit surface.
Meanwhile, Hayne, who is principal investigator for the Lunar Compact Infrared Imaging System (L-CIRiS) due to fly on NASA's polar lander in 2022, will use the instrument’s panoramic images of the landing site to measure the size, abundance and temperatures of micro-cold traps. Further study should answer important questions about how water and other volatiles migrate within the inner solar system and across the lunar surface.
Basic Geometric Concepts
Geometry is the branch of mathematics that studies the properties, measurements, and relationships of points, lines, angles, surfaces, and solids. As an introductory look into the world of geometry, this page will cover several basic geometric concepts including points, lines, angles, and shapes, as well as the principles of congruence and similarity.
Points, Lines, and Angles
The most basic geometric elements are points, lines, and angles. A point represents a location in space, a line is an infinite series of points extending in two directions, and an angle is formed by two rays that share a common endpoint, which is called the vertex of the angle.
In geometry, a point is usually represented by a dot and labeled with a capital letter.
A line is usually represented by a straight line with two arrowheads indicating that it extends infinitely in both directions. A line can be named by any two points on the line.
Angles are usually named by three points, with the vertex point in the middle, or simply by a single lowercase letter. The measure of an angle is given in degrees.
Shapes and Figures
Geometric figures or shapes are collections of points that form lines, angles, surfaces, and solids. Some of the most common geometric figures include circles, triangles, rectangles, squares, and polygons.
A circle is a set of points in a plane that are all the same distance from a fixed point called the center. The distance from the center to any point on the circle is called the radius.
A triangle is a polygon with three sides. There are various types of triangles, including equilateral (all sides are equal), isosceles (two sides are equal), and scalene (no sides are equal).
Rectangles and Squares:
A rectangle is a four-sided polygon where all angles are right angles. If all sides are also equal, then it is a square.
A polygon is a closed figure formed by a finite number of line segments. Polygons can have any number of sides - three sides form a triangle, four sides form a quadrilateral, five sides form a pentagon, and so on.
Congruence and Similarity
In geometry, congruence and similarity are concepts that compare two figures based on their shape and size. Two geometric figures are congruent if they have the same shape and size, while they are similar if they have the same shape but not necessarily the same size.
Two triangles are congruent, for example, if their corresponding sides and angles are equal. Congruence is often used in geometry to prove facts about figures.
Two figures are similar if their corresponding angles are equal and their corresponding sides are proportional. Similarity is often used in problems involving scaling or proportion.
Perimeter and Area
The perimeter of a geometric figure is the distance around it, while the area is the amount of space inside it. Different formulas are used to calculate the perimeter and area of different geometric figures.
The perimeter of a rectangle, for example, is calculated as twice the sum of its length and width. The perimeter of a circle, called the circumference, is calculated as 2π times the radius.
The area of a rectangle is calculated as the product of its length and width. The area of a circle is calculated as π times the square of the radius.
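These perimeter and area formulas can be checked with a short calculation; the side lengths and radius below are arbitrary example values:

```python
import math

# Rectangle: perimeter = 2(length + width), area = length * width.
length, width = 5.0, 3.0
print("Rectangle perimeter:", 2 * (length + width))   # 16.0
print("Rectangle area:", length * width)              # 15.0

# Circle: circumference = 2*pi*r, area = pi*r^2.
r = 2.0
print("Circumference:", 2 * math.pi * r)              # ~12.57
print("Circle area:", math.pi * r**2)                 # ~12.57
```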
Volume and Surface Area
For three-dimensional figures, or solids, we also consider the concepts of volume and surface area. The volume is the amount of space inside a solid, while the surface area is the total area of the surface of the solid.
The volume of a cube, for example, is calculated as the cube of its side length. The volume of a sphere is calculated as four-thirds π times the cube of the radius.
The surface area of a cube is calculated as six times the square of its side length. The surface area of a sphere is calculated as 4π times the square of the radius.
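Similarly, the volume and surface-area formulas for a cube and a sphere, again with arbitrary example values:

```python
import math

# Cube: volume = s^3, surface area = 6*s^2.
s = 3.0
print("Cube volume:", s**3)                            # 27.0
print("Cube surface area:", 6 * s**2)                  # 54.0

# Sphere: volume = (4/3)*pi*r^3, surface area = 4*pi*r^2.
r = 2.0
print("Sphere volume:", 4.0 / 3.0 * math.pi * r**3)    # ~33.51
print("Sphere surface area:", 4 * math.pi * r**2)      # ~50.27
```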
Coordinate geometry, or analytic geometry, involves the study of geometry using a coordinate system. This section involves understanding the concepts of points, lines, and distances in the Cartesian coordinate system.
In a Cartesian coordinate system, a point is represented by a pair of numerical coordinates which are the distances from the point to two fixed perpendicular directed lines, measured in the same unit of length.
A line in a Cartesian coordinate system can be represented by an equation in two variables, typically x and y. The slope of the line represents how steep the line is, while the y-intercept represents the point where the line crosses the y-axis.
The distance between two points in a Cartesian coordinate system can be calculated using the distance formula, which is derived from the Pythagorean theorem.
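A small sketch of the distance formula, together with the slope and y-intercept of the line through two points; the coordinates are arbitrary examples (and the slope function assumes the x-coordinates differ, i.e. the line is not vertical):

```python
import math

def distance(p, q):
    # Distance formula, derived from the Pythagorean theorem.
    return math.hypot(q[0] - p[0], q[1] - p[1])

def slope_intercept(p, q):
    # Slope m and y-intercept b of the line through p and q (assumes p[0] != q[0]).
    m = (q[1] - p[1]) / (q[0] - p[0])
    b = p[1] - m * p[0]
    return m, b

a, b = (1.0, 2.0), (4.0, 6.0)
print("Distance:", distance(a, b))                 # 5.0
print("Slope, intercept:", slope_intercept(a, b))  # (1.333..., 0.666...)
```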
By understanding these basic geometric concepts, you will be better prepared to explore more advanced topics in geometry and to apply your knowledge to real-world situations.
Stellar-mass black holes — which weigh between a few and 100 times the mass of the Sun — speckle the universe. In our Milky Way alone, there are an estimated ten million to one billion stellar-mass black holes. That sounds like a lot, until you consider there are an estimated 100 to 400 billion stars in our galaxy.
But what exactly are stellar-mass black holes? And how do these mysterious voids in space differ from their supersized cousins?
Created by destruction
Not every star has the potential to become a black hole; only the most massive reach this coveted status. The smallest stellar-mass black holes come from stars packed with at least 2 to 3 times the mass of our Sun. (If you’re wondering, our petite Sun is too small to collapse into a black hole and instead will one day become a white dwarf).
Stars in the primes of their lives, like the Sun, burn hydrogen in their cores through a process known as nuclear fusion. This converts the hydrogen to helium and creates an outward pressure that counteracts the inward force of gravity. Following this hydrogen-burning phase, the most massive stars are hot enough to burn through their helium (just like less massive stars), then carbon, neon, oxygen, and, finally, silicon. After silicon, however, the star’s core is basically a hunk of iron, at which point no further energy can be unlocked through nuclear fusion. At this point, the inward crush of gravity has the upper hand.
In the most basic sense, the outer shell of the star, with no internal pressure to support it, implodes. For stars slightly more massive than the Sun, those collapsing outer layers rebound off the star’s core, detonating it as a supernova. But in the case of the most massive stars, nothing can stop the crushing collapse. Such stars are destined to become stellar-mass black holes upon their deaths.
But stellar old age isn’t the only way to form a black hole.
A white dwarf or neutron star remnant from a smaller star can also become a stellar-mass black hole, but it needs some help. It must syphon enough material from a nearby binary companion that it eventually climbs above the mass threshold needed to collapse into a black hole. Alternatively, the merger of a binary neutron star system could also create an object too massive to sustain itself as anything except a black hole.
There are also supermassive black holes, which weigh in at millions to billions of times the mass of the Sun. These gravitational Goliaths reside in the centers of most, if not all, galaxies. But although they are well documented, exactly how they first formed is still up for debate.
Anatomy of a monster
To really understand a black hole, you need to understand its anatomy.
At the center of a black hole lies the singularity, a theoretical point in space which has zero volume but contains all of the object’s mass. It is here that the black hole truly lives.
German theoretical physicist Karl Schwarzschild was the first to use Einstein’s theory of general relativity to show this point was mathematically possible. Encapsulating the singularity lies what most people picture when they think of a black hole: the event horizon. This is the spherical boundary beyond which nothing, not even light, can escape a black hole’s clutches.
However, contrary to the name, black holes are not (always) entirely invisible.
For years, these objects were only theorized because, by their nature, nothing can escape their clutches; they remained hidden. The only data that can be collected directly from a black hole is its spin and size. To find a black hole, researchers instead had to look at how these monsters interact with matter.
Ravenous, black holes consume anything that gets too close. And often, their plates overflow. That’s when the excess matter they’ve seized creates a hot swirling pool of doom around it called an accretion disk. This disk pulverizes everything within it, from gas to dust to asteroids to planets, and the material continues circling the black hole until it eventually gobbles it up. Because the dense rings of material can race around black holes at significant fractions of the speed of light, they get so hot that they emit X-rays, giving away the black hole’s position.
More recently, a secondary mode of detection was added to researcher’s toolbox, too: gravitational waves. When two black holes — or even a black hole and neutron star — collide, their merger sends out faint ripples in the fabric of space-time that scientists can now detect. Dozens of merging stellar-mass black holes have already been detected this way since 2015.
Still, an untold number of black holes (both stellar-mass and supermassive) lurk throughout the cosmos, influencing everything from nearby stars to galaxies themselves. But over the next few years, with the help of larger, more powerful telescopes and advanced gravitational-wave observatories like LIGO, scientists hope to continue exploring these mysterious and massive enigmas.
Question: Explain the role of the price mechanism in allocating resources in an economy
As resources are scarce relative to the insatiable demands of human wants, economies are concerned with basic questions of allocation. The free market price mechanism (the forces of demand and supply) answers the questions of what and how much to produce, for whom to produce, and how to produce.
In a market, resources are allocated based on demand and supply, in which prices play a signalling function as they allocate resources to the production of different types of goods. The price also acts as a signalling mechanism between buyers and sellers, telling them what and how much to produce.
What and how much to produce?
Resources are limited and cannot produce enough goods and services to satisfy human wants which are unlimited.
The economy must make a choice on the types of goods and services that it wants to make available to the country. For example, an economy has to decide on the different types of goods to produce; this is determined jointly by producers and consumers, acting in their own self-interest, through the signalling role of prices. Price shows how much consumers are willing and able to pay, signalled by the demand curve, while how much producers are willing and able to produce is shown by the supply curve.
In this way, the price acts as a signal telling the producers what to produce and how much of the good to produce.
The price mechanism thus determines the allocation of resources among various goods. If the market is in disequilibrium, it will adjust until the equilibrium price and quantity are reached and the satisfaction of both buyers and sellers is maximised. For example, when Qd > Qs for rice, ceteris paribus, a shortage results. There will be upward pressure on the price, and the price increase will signal an increase in profit, which leads to a reallocation of resources into the production of that good.
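As a purely numerical illustration (not part of the original answer), the adjustment toward equilibrium can be seen with hypothetical linear demand and supply curves; all numbers below are made up:

```python
# Hypothetical linear curves: Qd = 100 - 2P, Qs = -20 + 4P. Equilibrium is where Qd = Qs.

def quantity_demanded(p):
    return 100 - 2 * p

def quantity_supplied(p):
    return -20 + 4 * p

# Solve 100 - 2P = -20 + 4P  ->  120 = 6P  ->  P* = 20, Q* = 60.
p_eq = (100 + 20) / (2 + 4)
q_eq = quantity_demanded(p_eq)
print("Equilibrium price:", p_eq, "equilibrium quantity:", q_eq)

# At a price below equilibrium, Qd > Qs: the shortage puts upward pressure on the price.
p = 15
print("Shortage at P=15:", quantity_demanded(p) - quantity_supplied(p))
```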
For whom to produce?
The price mechanism also determines for whom goods and services are produced. This is shown by the demand curve, which signifies consumers' willingness and ability to pay. In a way it represents their economic "dollar votes" and shows that producers should produce for these consumers.
Because resources are scarce, no society can satisfy all the wants of its people. How, then, is the limited supply of final goods and services allocated among people?
Price acts as a mechanism in a market economy and distributes the output only to people who are able and willing to pay for the good.
This in turn depends on the purchasing power and the value that people place on the good. Consumers pay for and consume goods to maximise consumer welfare, while producers try to maximise profits. Using the DD/SS diagram above, the equilibrium output at Qe will be allocated to consumers who are willing and able to pay at least the equilibrium price, which is set at Pe.
How to produce?
Prices of resources and factors of production also address the question of how to produce various goods and services. An economy can choose to produce using various factors of production, such as labour (human) or capital (machines). The prices of resources guide firms' production methods, and firms choose the resources that are cheapest and incur the lowest opportunity cost.
In the factor market, producers demand resources and the factor owners (households) supply them. The allocation of resources among competing uses is based on the prices of those resources.
For example, a manufactured good can be produced either by capital-intensive methods (where there is little use of labour and greater use of machines), which are more efficient, or by labour-intensive methods (where greater use is made of labour).
A firm's main aim is to reduce the cost of production, as guided by the relative prices of the factors of production. In countries like China, where resources such as land and labour are abundant, firms tend to engage in labour-intensive production because it is cheaper and easier to produce using such methods.
What is IP Routing?
IP routing is the application of routing methodologies to IP networks. IP networks use the internet protocol suite, which is a set of communication protocols used in the internet and computer networks. An IP router, or IP network node, is an element that determines a suitable path for a network packet to traverse the IP network from its source to its destination. Although a router is physically connected to fiber optics, copper-based cabling or wireless media, IP routing is essentially about computing a route and transporting the digital bits across an IP network.
The network industry uses something called the OSI stack to conceptualize the seven layers that contribute to the act of communication. The physical layer or layer one (L1) comprises the physical media (e.g., copper cable). The data link layer (L2) establishes and terminates the link between two nodes (e.g., Ethernet). IP operates at the network layer or L3, which presupposes that L1 and L2 are in place and doing their jobs. The router’s job is to determine the route and forwarding path of the data bits.
When we talk about ‘data’, we generally mean user data, which can be any kind of digital information from any application. For instance, the data could be coming from an email application, a video conferencing application, or a voice application. It could also be data used to control a relay in an electrical transmission network or to tell an autonomous mining truck to stop. There are literally millions of applications that currently use IP to move their application-specific data across the network.
IP performs two functions with this user application data. First it encapsulates it into packets. The packet is made up of the ‘payload’, which is the user data, and a header, which has the destination address information. Think of the latter like the information form you might fill out at UPS when sending a package. It has the addresses that the packet comes from and where it is going, as well as other information related to the ‘payload’ contents.
Routing protocols are used to dynamically configure packet-forwarding tables to direct the IP packets to the next available IP router or network node on its path to the desired destination. To coordinate these actions, the IP routers talk to one another over what is often called the control plane. One way to think of it is that the control plane performs routing, while the data plane performs the forwarding.
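As a rough illustration of the data-plane forwarding step, the sketch below performs a longest-prefix-match lookup in a tiny forwarding table. The table entries and interface names are invented for the example; real routers implement this step in specialized hardware and populate the table via routing protocols on the control plane.

```python
import ipaddress

# Toy forwarding table: prefix -> outgoing interface (entries are illustrative only).
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def lookup(destination: str) -> str:
    """Return the interface for the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, iface) for net, iface in forwarding_table.items() if addr in net]
    net, iface = max(matches, key=lambda m: m[0].prefixlen)
    return iface

print(lookup("10.1.2.3"))    # eth1 (the /16 beats the /8)
print(lookup("192.0.2.7"))   # eth2 (falls through to the default route)
```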
IP was initially designed as a “best-effort” protocol, where quality of service (QoS) was not seen as critical. This made IP ideal for applications where latency isn’t critical, such as sending an email. It was less suitable for latency-sensitive applications such as a telephone conversation, where skips and stutters caused by late arriving packets degrade the experience, or a mission-critical application such as an autonomous vehicle or an electrical relay which cannot safely suffer from latency or delay.
“Best effort” was initially sufficient given the nature of the traffic transitioning the internet or computer networks. However, as applications have evolved with the introduction of more and more real-time traffic and interactive applications, QoS cannot be ignored. Services cannot be offered without the guarantee of QoS. This has led to the development of QoS mechanisms and protocols that enable IP networks to support the demanding needs of the multimedia and cloud services era.
Everything connected to everything else
As we look forward to services and applications only limited by human imagination, the reliance on our IP networks will only increase. As more and more applications share the same network infrastructure and are increasingly delivered using highly scalable clouds, the network needs to become much more agile, scalable, and resilient.
One of the focuses for IP networks is the continued shift to mission-critical and business-critical vertical applications. Access will be from evolving wireless and wireline access networks, which will connect to powerful IP edge routers that connect edge clouds and central clouds using data center interconnection services.
This means that IP routers must become faster, handling data center interconnection services at 400 and 800 Gb/s, something that is beginning to happen today. This requires highly specialized network processors like the Nokia FP5 to analyze incoming packets, manage services and add encryption at very high line speeds while reducing the power they use.
Whatever the future brings, IP routing is here to stay.
We create technology that helps the world act together.
As a trusted partner for critical networks, we are committed to innovation and technology leadership across mobile, fixed and cloud networks. We create value with intellectual property and long-term research, led by the award-winning Nokia Bell Labs.
Adhering to the highest standards of integrity and security, we help build the capabilities needed for a more productive, sustainable and inclusive world.
This article extends and investigates the ideas in the problem "Stretching Fractions".
The Pythagoreans noticed that nice simple ratios of string length made nice sounds together.
Can you see how to build a harmonic triangle? Can you work out the next two rows?
The Egyptians expressed all fractions as the sum of different unit fractions. Here is a chance to explore how they could have written different fractions.
Can you tangle yourself up and reach any fraction?
Can all unit fractions be written as the sum of two unit fractions?
It would be nice to have a strategy for disentangling any tangled ropes...
Twice a week I go swimming and swim the same number of lengths of the pool each time. As I swim, I count the lengths I've done so far, and make it into a fraction of the whole number of lengths I. . . .
Pick two rods of different colours. Given an unlimited supply of rods of each of the two colours, how can we work out what fraction the shorter rod is of the longer one?
Consider the equation 1/a + 1/b + 1/c = 1 where a, b and c are natural numbers and 0 < a < b < c. Prove that there is only one set of values which satisfy this equation.
Take a line segment of length 1. Remove the middle third. Remove the middle thirds of what you have left. Repeat infinitely many times, and you have the Cantor Set. Can you picture it?
In this problem, we have created a pattern from smaller and smaller squares. If we carried on the pattern forever, what proportion of the image would be coloured blue?
There are some water lilies in a lake. The area that they cover doubles in size every day. After 17 days the whole lake is covered. How long did it take them to cover half the lake?
Using some or all of the operations of addition, subtraction, multiplication and division and using the digits 3, 3, 8 and 8 each once and only once make an expression equal to 24.
Find the maximum value of 1/p + 1/q + 1/r where this sum is less than 1 and p, q, and r are positive integers.
An activity based on the game 'Pelmanism'. Set your own level of challenge and beat your own previous best score.
Each letter represents a different positive digit AHHAAH / JOKE = HA What are the values of each of the letters?
The Egyptians expressed all fractions as the sum of different unit fractions. The Greedy Algorithm might provide us with an efficient way of doing this.
Using an understanding that 1:2 and 2:3 were good ratios, start with a length and keep reducing it to 2/3 of itself. Each time that took the length under 1/2 they doubled it to get back within range.
A Sudoku with clues as ratios or fractions.
At the beginning of the night three poker players; Alan, Bernie and Craig had money in the ratios 7 : 6 : 5. At the end of the night the ratio was 6 : 5 : 4. One of them won $1 200. What were the. . . .
Whenever a monkey has peaches, he always keeps a fraction of them each day, gives the rest away, and then eats one. How long could he make his peaches last for?
A personal investigation of Conway's Rational Tangles. What were the interesting questions that needed to be asked, and where did they lead?
Take a look at the video and try to find a sequence of moves that will take you back to zero.
What fractions can you find between the square roots of 65 and 67?
Imagine a strip with a mark somewhere along it. Fold it in the middle so that the bottom reaches back to the top. Stretch it out to match the original length. Now where's the mark?
I need a figure for the fish population in a lake, how does it help to catch and mark 40 fish ?
What is the total area of the first two triangles as a fraction of the original A4 rectangle? What is the total area of the first three triangles as a fraction of the original A4 rectangle? If. . . .
In a certain community two thirds of the adult men are married to three quarters of the adult women. How many adults would there be in the smallest community of this type?
The scale on a piano does something clever: the ratio (interval) between any adjacent points on the scale is equal. If you play any note, twelve points higher will be exactly an octave higher.
Take a line segment of length 1. Remove the middle third. Remove the middle thirds of what you have left. Repeat infinitely many times, and you have the Cantor Set. Can you find its length?
There are lots of ideas to explore in these sequences of ordered fractions.
According to Plutarch, the Greeks found all the rectangles with integer sides, whose areas are equal to their perimeters. Can you find them? What rectangular boxes, with integer sides, have. . . .
Who first used fractions? Were they always written in the same way? How did fractions reach us here? These are the sorts of questions which this article will answer for you.
There are three tables in a room with blocks of chocolate on each. Where would be the best place for each child in the class to sit if they came in one at a time?
Written for teachers, this article describes four basic approaches children use in understanding fractions as equal parts of a whole.
Can you work out which drink has the stronger flavour?
What fractions of the largest circle are the two shaded regions?
At the corner of the cube circular arcs are drawn and the area enclosed shaded. What fraction of the surface area of the cube is shaded? Try working out the answer without recourse to pencil and. . . .
In a race the odds are: 2 to 1 against the rhinoceros winning and 3 to 2 against the hippopotamus winning. What are the odds against the elephant winning if the race is fair?
Mike and Monisha meet at the race track, which is 400m round. Just to make a point, Mike runs anticlockwise whilst Monisha runs clockwise. Where will they meet on their way around and will they ever. . . .
Can you work out the parentage of the ancient hero Gilgamesh?
The key hypothesis this work examines is: "The feasibility of performing flip turns, a well-known attribute of a bat-like landing maneuver, through the manipulation of inertial dynamics while excluding aerodynamic forces." Bats (and birds) possess none of the energy-hungry motors widely used in flying robots for thrust vectoring, yet they are more capable than any of these systems where agility and, of course, energy efficiency of flight are concerned.
Flying vertebrates combine manipulations of inertial dynamics and aerodynamics to showcase extremely agile maneuvers. Unlike rotary- and fixed-wing systems, wherein aerodynamic surfaces (e.g., ailerons, rudders, propellers) serve the sole purpose of aerodynamic force adjustment, the wings (also called appendages) of birds and bats play more sophisticated roles. It is known that birds perform zero-angular-momentum turns by making differential adjustments (e.g., collapsing armwings) in the inertial forces led by one wing versus the other, and bats apply a similar mechanism to perform sharp banking turns.
Among these maneuvers, landing (or perching), which flying vertebrates perform in one way or another for a variety of reasons (e.g., transitioning to walking, resting on a perch, hanging from the ceiling of a cave), is an interesting maneuver from which to take inspiration for aerial robot designs. Perching birds rotate their wings so that aerodynamic drag is increased by creating a high-pressure region on the inside of the wings and a low-pressure region behind them. This brings the wings to a stalled condition, at which point the generated lift is zero and the animal falls naturally while employing its legs as landing gear.
Bats do it in a radically different way. After the self-created stalled condition, they perform an acrobatic heels-above-head maneuver that involves catapulting the lower body, much as a freestyle swimmer executes a flip turn.
Perching insects and birds have been a source of inspiration, and bio-mimicry of them has led to interesting robot designs in recent years. Remarkably, bio-mimicry of bat-like landing has been overlooked, mainly because not only are aerodynamic adjustments involved but unique design provisions are also required to allow for the manipulation of inertial dynamics.
Apart from the ordeals associated with the design and control of a robot that can land like a biological bat, a number of unique applications can be identified for these systems. For instance, in scenarios where maintaining a high vantage point for extended times is demanded (surveillance and reconnaissance) and a limited power budget does not allow hovering for extended periods, hanging from elevated structures such as the steel frames of buildings can allow these systems not only to accomplish their missions safely but also to harvest energy during the period in which they are most vulnerable.
Beyond these applications, a bat-style landing maneuver is extremely rich in dynamics and control, yet its characteristics have been overlooked; most attention has been paid to simpler regimes such as hovering and straight flight. While mathematical models of insect-style, rotary- and fixed-wing robots of varying size and complexity are relatively well developed, models of airborne vertebrate locomotion remain largely open because of the complex body articulation involved in their flight.
The mainstream school of thought inspired by insect flight has conceptualized the wing as a massless, nearly planar, rigid structure that translates through space as a whole or in two to three rigid parts. In this view, wings have no inertial effect, flap fast enough to yield two-time-scale dynamics, permit quasi-static descriptions of the external forces, and lead to a tractable dynamical system. Unfortunately, these paradigms fail to provide insight into airborne vertebrate locomotion; an ingredient of a more complete and biologically realistic model is missing, namely the manipulation of inertial dynamics, which remains under-appreciated in existing paradigms.
The objective of this work is to demonstrate closed-loop aerial body reorientation and preparation for landing through the manipulation of inertial dynamics, using a tiny ballistic robot whose characteristics are carefully scaled to match those of a small bat. It is shown that, despite a number of prohibitive restrictions and the lack of the multi-thruster designs typically found in quadrotors, extremely fast body reorientation and preparation for landing through inertial dynamics manipulation is not only possible but also an effective, biologically meaningful solution.
A brief overview of the system will be presented followed by a hybrid-model description of the landing maneuver. A closed-loop controller will be designed and the simulation and experimental results will be reported at the end followed by final remarks and conclusion.
II A hybrid-model description of upside-down perching
A little reflection reveals, however, that the dynamics of a perching bat, like other complex animal behaviors that emerge from interactions between neural and sensory-motor systems, can be described with tractable mathematical models. The legitimate question posed here is: "Can a hybrid model capture the dynamics?" What encourages me to pose this question stems from inspecting high-speed images of the landing maneuver of bats: in a time envelope not exceeding about one fifth of a second before landing completion (touch-down), the appendages are fully retracted, strongly suggesting that aerodynamic forces are unlikely to be as important as other forces. Based on a similar observation, a prior simulation-based study concluded that inertial dynamics manipulation is likely the dominant player in a landing bat.
Two modes are assumed: mode (1), in which aerodynamics dominate (inertial forces are not negligible but are less effective), and mode (2), in which inertial dynamics dominate. In a perching bat, the transition from one mode to the other, triggered by proprioceptive sensing and neural reflexes, occurs at remarkably high speed.
This hybrid model allows each mode to be examined individually. Fluid-structure interactions are hard to model; the aerial flip turns (backspins), however, are mainly the result of differential inertial forces and are rigorously explainable by mathematical models. To describe mode (1), current schemes can take a number of experimentally validated and successful forms. The focus here is therefore mode (2) and the question of what the best strategy is to prepare for landing upside-down on steel frames.
One may ask how this transition – i.e., from mode (1) to mode (2) – occurs in the first place, or how the aerodynamic forces could vanish so quickly. I draw inspiration from biological examples, where a variety of deployable morpho-functional systems can be considered. In bats, the transition occurs much as in birds: the inside of the wings is faced towards the incoming flow to initiate a stalled condition at which the lift force vanishes, followed by swiftly collapsing the wings at the onset of the landing maneuver, which very likely all but eliminates the effects of delayed stall and aerodynamic resistance during the backspin motion.
Without such a mediolateral wing movement, in high-angle-of-attack maneuvers the aerodynamic forces remain roughly normal to the wing surface at all times, resulting in large pressure forces that dominate the viscous shear forces acting parallel to the wings. Because fixed-wing systems have no retraction mechanism in their aerodynamic control surfaces, they cannot easily backspin. For rotary-wing systems to be able to perform a backspin, they require thruster units that can generate positive and negative aerodynamic force relative to the body, which can be achieved only through a number of costly approaches. Another fluid-structure-interaction issue can arise here: the "ceiling effects" are unknown to us, but I believe this phenomenon can cause instability in much the same way that ground effects lead to control issues and instability for rotary-wing systems hovering near the ground.
An informal comparison of a landing bat and a perching rotorcraft is much like comparing a pitcher and a robotic arm throwing a baseball. One employs natural dynamics to perfection to pitch the ball; the other is carefully restricted to predefined joint trajectories, completely suppressing the natural dynamics. Of course, unlike a fully actuated manipulator, a quadrotor is underactuated and its internal dynamics are invariant with respect to the supervisory controller, which allows only a limited contribution from the natural dynamics.
III Brief Overview of the "Harpoon" System
The hypothesis explained earlier is tested with the help of a self-sustained ballistic robot called "Harpoon" (shown in Fig. 1). This robot is considered the landing gear of a morpho-functional machine. Here, the objective is to reorient the robot towards an imaginary ceiling surface and extend (or shoot) the landing gear towards the ceiling. This scenario was designed after carefully observing high-speed images of bat landing maneuvers; once successfully reconstructed, it captures the dynamics inspected in [3, 12, 2, 8].
The Harpoon platform is laser-cut from a thin carbon-fiber plate and hosts two actuators, two motor controllers, a receiver and a processor. The overall design allows for limited self-sustained sensing, actuation, computation and communication between the robot and a master computer. A brushless motor adjusts the angle of a bob (appendage) with respect to the body; this bob is designed to precisely capture the body-to-appendage ratio found in bats. A trigger mechanism releases the landing gear once it faces the surface. Since the projectile is charged with elastic energy well beyond the output capability of the trigger actuator, a mechanism with a 1:40 mechanical advantage was designed and fabricated at sub-millimeter resolution, as shown in Fig. 1.
The orientation angles, i.e., the Euler angles roll, pitch and yaw, are estimated at 200 Hz using an OptiTrack system with six cameras in an 8 m³ space and are fed back to a discrete controller with known asymptotic stability properties. The computed control inputs are then transmitted to the robot over a wireless communication bus in Ethernet frames. This architecture provides robust communication for fast sensing and actuation. The computations take place off-board, which allows for expensive real-time processing otherwise impossible with the limited on-board processing capabilities.
Additionally, a computer-automated mechanism, called the releaser, ensures the robot is released with zero angular momentum, so the ballistic motion is of a zero-angular-momentum nature. This condition may restrict the boundaries of the trajectories coming from mode (1); however, a change in the mechanism design can easily provide non-zero-angular-momentum releases, which would allow the study of non-zero-angular-momentum backspins. To completely suppress aerodynamic effects, wings are excluded from the model; the host morpho-functional machine will be unveiled in future work.
IV Manipulation of inertial dynamics
The system dynamics and the equations of motion are described in a coordinate-free fashion, i.e., as a system on a manifold, without recourse to local coordinate charts. The kinetic energy is treated as a Riemannian metric, and the associated Riemannian interconnections are accounted for when writing the discrete Hamilton's principle. The discrete-time model on the manifold, with body-fixed forces and invariant kinetic energy, is built on a Lie group variational integrator which, unlike that of a single rigid body, results in a nontrivial dynamical model.
The numerical integrator obtained from the discrete variational principle exhibits excellent geometric conservation properties, which is why it is used here. The robot attitude thus automatically evolves on the rotation group of special orthogonal matrices, whereas the required angular velocity of the appendage is computed at the level of the Lie algebra and the matrix exponential is employed to update the reference solutions.
Finally, an almost-globally stabilizing, geometric, discrete-time controller based on prior work is applied to regulate the attitude in preparation for landing.
IV-A Notation and Assumptions
It is assumed that a body coordinate frame is fixed to the robot; its position, orientation and angular velocity are denoted by , and , respectively. Another body coordinate frame is attached to the appendage, whose orientation and angular velocity are denoted by and , respectively. Therefore, the configuration space (C-space) is . The physical properties, including the inertia matrix and mass, are denoted by and . A small change in is denoted by , and denotes variations on . The Lagrangian functional, kinetic energy and potential energy are denoted by , and , respectively. Other widely used notations are adopted, such as for the wedge operator, for the discrete value of at the k-th sample, for the trace of a matrix, for the transpose operator, for the eigenvalues of a matrix and for the variation operator. The control input is denoted by , the identity matrix by , is the z-axis unit vector, and are the distances between the joint and the centers of mass of the body and the appendage.
IV-B Moser-Veselov Description of the Discrete-time, Constrained Dynamics
Sensing and actuation occur in discrete time; therefore, the discrete-time version of the dynamical system on is considered, which gives rise to the constrained dynamics investigated in prior work. Indirect methods based on Rodrigues' approach to approximating the matrix exponential, and first-order approximations of variations on Lie groups, are also considered and explained in this section.
IV-B1 Schur-form update law for and
The discretized version of Hamilton’s principle determines the equations of motion from a variational problem. The Lie group variational dynamics for the system of body and appendage when external forces are absent are given by:
where accounts for the inertial-dynamics contributions from the appendage. In Eqs. 1, 2, 3 and 4, the solution of the discrete-time dynamical system is obtained by first solving for , which involves solving the continuous-time algebraic Riccati equation (ARE) given by Eq. 1. However, the solutions to the ARE are not unique, and for these solutions to be special orthogonal the convex constraint given by Eq. 4 must be satisfied. In general, a change in the orientation of the body cannot be uniquely mapped to , as these are minuscule changes in . To obtain , the following steps are applied. The Hamiltonian matrix is given by
and after computing the Schur form of the Hamiltonian matrix , it is partitioned as . Hence, the special orthogonal solutions to the ARE are given by
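Although the paper's specific matrices are not reproduced above, the Schur-based procedure it refers to is the standard Laub method for algebraic Riccati equations. The following is a minimal, generic Python sketch of that method; the matrices A, B, Q, R, the partition, and the example values are illustrative assumptions, not quantities taken from the paper.

```python
# Generic sketch: solving a continuous-time ARE  A'X + XA - X B R^{-1} B' X + Q = 0
# via the Schur (Laub) method: build the Hamiltonian matrix, order its Schur form
# so stable eigenvalues come first, partition, and form the stabilizing solution.
import numpy as np
from scipy.linalg import schur, inv

def care_via_schur(A, B, Q, R):
    n = A.shape[0]
    G = B @ inv(R) @ B.T
    # Hamiltonian matrix associated with the ARE
    H = np.block([[A, -G],
                  [-Q, -A.T]])
    # Ordered real Schur form: left-half-plane eigenvalues sorted to the top-left
    T, U, _ = schur(H, sort='lhp')
    U11, U21 = U[:n, :n], U[n:, :n]
    # The stable invariant subspace yields the stabilizing solution
    return U21 @ inv(U11)

# Illustrative data only
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
X = care_via_schur(A, B, Q, R)
# Residual check: the ARE should be satisfied to numerical precision
print(np.allclose(A.T @ X + X @ A - X @ B @ inv(R) @ B.T @ X + Q, 0, atol=1e-8))
```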
IV-B2 Rodrigues' approximation to the variations in and
In addition to the Schur-form solutions, another approach is considered, in which special orthogonal solutions to Eqs. 1-4 are iteratively resolved by expressing as the exponential of , where belongs to . Rodrigues' formula for the changes in the body orientation gives
and the ARE given in Eq. 1 is re-written into the equivalent vector form given by
Rodrigues' approach was considered in addition to the Schur method to guard against the numerical difficulties encountered when attempting to use Eqs. 1-4 in a real-time reference-trajectory governor. Eqs. 1-4 reflect the symplectic geometry of the problem at hand, and with them come computational issues in finding the invariant subspaces of a Hamiltonian matrix; other methods that exploit the particular structure of Hamiltonian matrices are no less costly. It is worth noting – in this experimentally motivated work – that extra care must be taken with , as not all sampling rates (time intervals denoted by ) can guarantee the existence and uniqueness of the solutions of Eqs. 1-4. Intuitively this makes sense, as a relatively large time interval leads to an approximation of that violates the topological structure of . The following equation helps illustrate this problem
Obviously, cannot take arbitrary values. This equation is the direct result of defining such that . Using the kinematic relationship and considering an approximation for , Eq. 7 is obtained.
Almost all of these methods yield unique positive-definite solutions of the ARE when the sample intervals are small; showing how the size of the sample interval can destroy the property that the Hamiltonian matrix has no purely imaginary eigenvalues is, however, well beyond the scope of this work.
Although the necessary and sufficient convex quadratic constraint in Eq. 4 is a strong condition that secures the existence of such a solution, its dependence on the sampling time interval can be limiting.
When is positive definite, Eq. 1 has a unique positive semi-definite solution , i.e., has positive real parts. Consequently, is the special orthogonal matrix being sought. When is positive definite, an invertible matrix exists such that . Consequently, and are stabilizable and detectable, respectively, which results in having positive real parts. Since are of the form , they have negative real parts. Therefore, and are stabilizable; that is, is stabilizable and is detectable.
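For readers unfamiliar with the exponential-map machinery used throughout this subsection, the following is a minimal Python sketch of the wedge (hat) operator and Rodrigues' formula; the variable names and the sample attitude update are illustrative assumptions rather than the paper's actual update law.

```python
# Minimal sketch: Rodrigues' formula for the SO(3) exponential map, i.e. turning a
# small rotation vector in the Lie algebra so(3) into an exact rotation matrix.
import numpy as np

def hat(w):
    """Map a 3-vector to the corresponding skew-symmetric ('wedge') matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rodrigues' formula: matrix exponential of hat(w)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# e.g. update a reference attitude R_k by a small body-frame rotation w*dt
R_k = np.eye(3)
R_next = R_k @ expm_so3(np.array([0.0, 0.0, 0.1]))
print(np.allclose(R_next.T @ R_next, np.eye(3)))  # the update stays on SO(3)
```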
IV-B3 Baseline coordinate-free system on the manifold
Based on Hamilton's principle, the action integral is formed from the Lagrangian and its variations vanish. Because the configuration space is a Lie group, these variations must be consistent with its geometry; therefore, the varied in continuous time is given by
and in discrete-time mode is given by
Consequently, the variations of the action integral are obtained. In deriving the variations of the continuous-time action integral, is used. Note that the continuous- and discrete-time variations vanish at the time boundaries, that is, and for the continuous system and and for the discrete system. From Hamilton's principle, the above continuous- and discrete-time variations of the action integral must vanish for all variations in the Lie algebra. Hence, the continuous-time system on the manifold is given by:
where and are the states and control inputs, respectively; and are obtained after sorting the variations and applying some basic algebra. The symmetric matrix is given by
and is given by
where , , , and for . The discretization of this continuous dynamics (Eqs. 1-4) is achieved by applying the discrete-time variations explained above together with Hamilton's principle. Next, I briefly describe the closed-loop feedback used to regulate the body orientation.
IV-C Discrete feedback control
A scalar measure of the attitude error is the rotation angle about the eigen-axis needed to rotate the robot from its current attitude to the desired attitude, and is given by . This metric is used to determine the time at which to launch the landing gear.
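A scalar eigen-axis error of this kind is commonly computed from the trace of the relative rotation; the short Python sketch below shows one standard form, assumed here purely for illustration (the names and the trigger threshold are ours, not the paper's).

```python
# Hedged sketch: eigen-axis attitude-error angle between a current rotation R and
# a desired rotation R_d, and a simple threshold test for triggering the landing gear.
import numpy as np

def attitude_error_angle(R, R_d):
    """Rotation angle (rad) about the eigen-axis taking R to R_d."""
    c = 0.5 * (np.trace(R_d.T @ R) - 1.0)
    return np.arccos(np.clip(c, -1.0, 1.0))

R = np.eye(3)
R_d = np.array([[-1.0, 0.0, 0.0],
                [0.0, -1.0, 0.0],
                [0.0, 0.0, 1.0]])   # 180 deg about the z-axis
err = np.degrees(attitude_error_angle(R, R_d))
print(err, err < 5.0)  # ~180 deg error, so the (hypothetical) trigger stays off
```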
The controller applied here does not require knowledge of the mass and inertia of the robot body and appendage. A major limitation in designing the closed-loop feedback for this constrained dynamics is that not all arbitrary reference trajectories can be enforced by the tracking controller; the tracking problem is subject to the constraints given by Eq. 4.
The issue of constraint satisfaction in nonlinear systems, affine or non-affine in control, and its separation from the design of the closed-loop system, has been addressed extensively. Almost all of these works do so by introducing a reference governor, an auxiliary nonlinear device that operates between the reference command and the input to the closed-loop system. A similar approach is adopted here.
The control input with known stability properties is given by
where is the feedback gain and can be calculated by the following relation,
where is a symmetric positive-definite matrix and is the reference input for the body orientation, resolved by finding in a nonlinear optimizer subject to the following constraints:
Note that is the desired body orientation before considering the convex constraint given by Eq. 4.
V Numerical & Experimental Results
The appendage inertial model is scaled to account for 16% of the total body mass according to [8, 7]. A total estimated body mass of 30 g is considered, obtained after assuming a density of according to , with the bobs equidistant from the body joint (shown in Fig. 2). A simple body geometry is used so that the inertia can be matched seamlessly.
When only the dynamics restricted to the longitudinal plane are considered, the C-space is the two-sphere , where is a unit vector (note that the body and appendage angles are embedded in ). The tangent space at , namely , is identified with , where represents the angular velocity.
A fixed desired body orientation is constructed according to the Euler convention by ramping the pitch angle to 120 and 180 deg and is updated by the reference governor subject to the constraints explained above (shown in Figs. 3(a) and 3(b)). Samples (purple dots) are considered by the optimizer in the neighborhood of the desired angles (black dots) before the actual trajectories are resolved; note that, because of clutter in the raster image, only some of the purple dots are plotted. In Fig. 3(c), the orientations of a 3D rigid body representing the robot body (the appendage is not shown) are shown along the time axis for simultaneous tracking of roll and pitch angles of -20 and 60 deg, respectively. In Fig. 3(d), the simulated Euler angles are shown.
In the experiments, only the longitudinal motion is considered. The framework was tested in a series of experiments involving an off-board computer, a motion capture system and high-speed imaging equipment. In Fig. 3, snapshots from one experiment are shown. The computer-automated releaser mechanism releases the robot with zero angular momentum in a free-fall configuration while the motion capture system captures the body and appendage orientations. The commanded control inputs (shown in Fig. 4) are resolved and transmitted to the robot for a desired body orientation angle of 180 deg.
The ability of animals to reorient their bodies mid-air without the benefit of net external torques is well known, yet it has remarkably been overlooked in aerial drone design. Instead, current designs rely on thrust vectoring for agile aerial robotics, which is known to be limited by the laws of aerodynamics. This work used a tiny robot called Harpoon, whose prohibitive size and payload restrictions rule out the multi-thruster designs typically found in quadrotors, and demonstrated that the extremely fast body reorientation and preparation for upside-down landing unique to bats is possible through closed-loop manipulation of the robot's inertial dynamics. To prepare for landing, a rubber-band-propelled landing gear and an associated trigger mechanism were proposed. The closed-loop manipulation of inertial dynamics was based on a symplectic description of the dynamical system (body and appendage), which is known to exhibit excellent geometric conservation properties.
- A distributed control model for the air-righting of a cat. pp. 9 (en). Cited by: §III.
- (2013) Glide performance and aerodynamics of non-equilibrium glides in northern flying squirrels (Glaucomys sabrinus). Journal of The Royal Society Interface 10 (80), pp. 20120794. Cited by: §III.
- (2010) Fruit flies modulate passive wing pitching to generate in-flight turns. Physical Review Letters 104 (14), pp. 148101. Cited by: §II, §III.
- (1986) A symplectic QR-like algorithm for the solution of the real algebraic Riccati equation. IEEE Transactions on Automatic Control 31 (12), pp. 1104–1113. Cited by: §IV-B1.
- (2002) Nonlinear tracking control in the presence of state and control constraints: a generalized reference governor. Automatica 38 (12), pp. 2063–2073. Cited by: §IV-C.
- Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. Vol. 31, Springer Science & Business Media. Cited by: §IV.
- (2011) Whole-body kinematics of a fruit bat reveal the influence of wing inertia on body accelerations. Journal of Experimental Biology 214 (9), pp. 1546–1553 (en). Cited by: §V.
- (2008) Kinematics of slow turn maneuvering in the fruit bat Cynopterus brachyotis. The Journal of Experimental Biology 211 (Pt), pp. 3478–3489. Cited by: §III, §V.
- (1991) Discrete versions of some classical integrable systems and factorization of matrix polynomials. Communications in Mathematical Physics 139 (2), pp. 217–243. Cited by: §IV-B1, §IV-B.
- (2017) A biomimetic robotic platform to study flight specializations of bats. Science Robotics 2 (3). Cited by: §II.
- (2012) Upstroke wing flexion and the inertial cost of bat flight. Proceedings of the Royal Society B: Biological Sciences 279 (1740), pp. 2945–2950. Cited by: §I.
- (2009) Bats go head-under-heels: the biomechanics of landing on a ceiling. The Journal of Experimental Biology 212 (Pt), pp. 945–953. Cited by: §III.
- (2009) Inertia-free spacecraft attitude tracking with disturbance rejection and almost global stabilization. Journal of Guidance, Control, and Dynamics 32 (4), pp. 1167–1178. Cited by: §IV-C, §IV.
- (2003) Deployable Structures in Biology. In Morpho-functional Machines: The New Species, F. Hara and R. Pfeifer (Eds.), pp. 23–40 (en). Cited by: §II.
Humanity is building powerful rockets like the SpaceX BFR and NASA Space Launch System that can take a payload far away from Earth. However, making the return trip means you have to lug a lot more fuel with you. Efforts to send humans to Mars in the coming decades would be helped if we could make fuel on the red planet. That may be more feasible than we thought. NASA team lead Kurt Leucht has explained how the agency might use Martian soil to make the fuel astronauts need to get home after a mission.
According to Leucht, it’s best to make whatever you can at the destination because of the inescapable realities of physics. The “gear-ratio” for Mars is 226:1, meaning every kilogram of material you send requires a rocket to burn 225 kilograms of fuel. That’s true for any material — water, food, scientific equipment, people, and even reserve fuel for the return trip. With payloads being so expensive, it makes sense to produce whatever you can on Mars. This is known as in situ resource utilization (ISRU).
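As a rough back-of-the-envelope illustration of the gear-ratio idea, here is a hypothetical calculation using the article's 226:1 figure (numbers are purely illustrative):

```python
# Illustration only: with a Mars "gear ratio" of 226:1, each delivered kilogram
# implies roughly 225 kg of propellant burned on the way there.
gear_ratio = 226          # total launch mass : delivered mass (article's figure)
payload_kg = 1000         # e.g. one metric ton of cargo
propellant_kg = payload_kg * (gear_ratio - 1)
print(f"Delivering {payload_kg} kg to Mars burns about {propellant_kg:,} kg of fuel")
```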
If you’re determined to make fuel on Mars, you’ll want to find a source of water. Water molecules contain hydrogen and oxygen, which you can split up to make fuel. You won’t come across many large chunks of water ice on Mars (the poles are mostly carbon dioxide ice), but the soil might have more than enough. Under the dusty surface layer, many regions of Mars have significant deposits of water. Leucht notes that gypsum sand dunes in the lower latitudes are about 8 percent water.
NASA calls the process of making fuel from Martian regolith “dust-to-thrust,” and it’s working on robots that can potentially do all the heavy lifting before humans even land on Mars. The Regolith Advanced Surface Systems Operations Robot (RASSOR) uses two opposing bucket drums with multiple digging scoops to gather up material as the wheels drive the robot slowly forward. NASA designed RASSOR to operate in a low-gravity environment — the drums spin in opposite directions to cancel out most of the digging force.
Collecting the regolith is just the start. NASA plans to build an autonomous chemical refinery that can process the soil to extract water and split it with an electrolyzer into hydrogen and oxygen. The oxygen is useful for human respiration in addition to rocket fuel. In either case, it can be stored in liquid form. Liquid hydrogen is harder to store, but NASA plans to turn it into liquid methane until it’s needed. Mars’ atmosphere is mostly carbon dioxide, which should serve as a good source of the necessary carbon.
NASA has demonstrated several parts of this system on Earth using simulated Martian regolith. The agency estimates the dust-to-thrust system would need to produce seven metric tons of liquid methane and about 22 metric tons of liquid oxygen in 16 months to be viable. Scientists still need to identify the best landing areas and refine the machinery to know if it’s possible to hit those goals with current technology, but ISRU is where NASA’s Mars exploration is moving.
Alternative 3D Coordinates - Latitude and Longitude
Here are some coordinate systems that are used in mathematics and science when working in 3D space.
Latitude And Longitude
Latitude and longitude angles uniquely define the positions of points on the surface of a sphere or in the sky. The scheme for geographic locations on the earth is illustrated in Fig. 10-15A. The polar axis connects two specified points at antipodes on the sphere. These points are assigned latitude θ = 90° (north pole) and θ = −90° (south pole). The equatorial axis runs outward from the center of the sphere at a 90° angle to the polar axis. It is assigned longitude φ = 0°.
Latitude θ is measured positively (north) and negatively (south) relative to the plane of the equator. Longitude φ is measured counterclockwise (positively) and clockwise (negatively) relative to the equatorial axis. The angles are restricted as follows:
− 90° ≤ θ ≤ 90°
− 180° < φ ≤ 180°
On the earth’s surface, the half-circle connecting the 0° longitude line with the poles passes through Greenwich, England (not Greenwich Village in New York City!) and is known as the Greenwich meridian or the prime meridian . Longitude angles are defined with respect to this meridian.
Space and Time - Celestial Coordinates and Hours, Minutes, and Seconds
Celestial Coordinates - Celestial Latitude and Longitude
Celestial latitude and celestial longitude are extensions of the earth’s latitude and longitude into the heavens. The same set of coordinates used for geographic latitude and longitude applies to this system. An object whose celestial latitude and longitude coordinates are (θ,φ) appears at the zenith in the sky (directly overhead) from the point on the earth’s surface whose latitude and longitude coordinates are (θ,φ) .
Celestial Coordinates - Declination and Right Ascension
Declination and right ascension define the positions of objects in the sky relative to the stars. Figure 10-15B applies to this system. Declination (θ) is identical to celestial latitude. Right ascension (φ) is measured eastward from the vernal equinox (the position of the sun in the heavens at the moment spring begins in the northern hemisphere). The angles are restricted as follows:
− 90° ≤ θ ≤ 90°
0° ≤ φ < 360°
Hours, Minutes, And Seconds
Astronomers use a peculiar scheme for right ascension. Instead of expressing the angles of right ascension in degrees or radians, they use hours, minutes, and seconds based on 24 hours in a complete circle (corresponding to the 24 hours in a day). That means each hour of right ascension is equivalent to 15°. As if that isn’t confusing enough, the minutes and seconds of right ascension are not the same as the more familiar minutes and seconds of arc, the fractional-degree units that go by the same names. One minute of right ascension is 1/60 of an hour, or ¼ of a degree, and one second of right ascension is 1/60 of a minute, or 1/240 of a degree.
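Since these conversions trip people up, here is a small, hypothetical helper (the function name is ours) that converts right ascension given in hours, minutes, and seconds to degrees using the relations above.

```python
# 1 hour of right ascension = 15 deg, 1 minute = 1/4 deg, 1 second = 1/240 deg.
def ra_to_degrees(hours, minutes, seconds):
    return hours * 15.0 + minutes * (15.0 / 60.0) + seconds * (15.0 / 3600.0)

print(ra_to_degrees(6, 45, 9))   # Sirius sits near RA 6h 45m 09s -> roughly 101.3 deg
```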
Figures 10-16A and 10-16B show two systems of cylindrical coordinates for specifying the positions of points in three-space.
Schematic for Mathematicians, Engineers, and Scientists
In the system shown in Fig. 10-16A, we start with Cartesian xyz -space. Then an angle θ is defined in the xy -plane, measured in degrees or radians (but usually radians) counterclockwise from the positive x axis, which is called the reference axis . Given a point P in space, consider its projection P′ onto the xy -plane. The position of P is defined by the ordered triple (θ, r,h ) . In this ordered triple, θ represents the angle measured counterclockwise between P′ and the positive x axis in the xy -plane, r represents the distance or radius from P′ to the origin, and h represents the distance, called the altitude or height, of P above the xy -plane. (If h is negative, then P is below the xy -plane.) This scheme for cylindrical coordinates is preferred by mathematicians, and also by some engineers and scientists.
Schematic for Navigators and Aviators
In the system shown in Fig. 10-16B, we again start with Cartesian xyz -space. The xy -plane corresponds to the surface of the earth in the vicinity of the origin, and the z axis runs straight up (positive z values) and down (negative z values). The angle θ is defined in the xy -plane in degrees (but never radians) clockwise from the positive y axis, which corresponds to geographic north. Given a point P in space, consider its projection P′ onto the xy -plane. The position of P is defined by the ordered triple (θ,r,h) , where θ represents the angle measured clockwise between P′ and geographic north, r represents the distance or radius from P′ to the origin, and h represents the altitude or height of P above the xy -plane. (If h is negative, then P is below the xy -plane.) This scheme is preferred by navigators and aviators.
Fig. 10-16 . (A) Mathematician’s form of cylindrical coordinates for defining points in three-space. (B) Astronomer’s and navigator’s form of cylindrical coordinates for defining points in three-space.
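To make the two conventions concrete, here is a short, illustrative sketch (function names are ours, not the book's) converting each ordered triple (θ, r, h) to Cartesian (x, y, z).

```python
import math

def cyl_math_to_xyz(theta_rad, r, h):
    """Mathematician's form: theta measured counterclockwise from the +x axis."""
    return (r * math.cos(theta_rad), r * math.sin(theta_rad), h)

def cyl_nav_to_xyz(theta_deg, r, h):
    """Navigator's form: theta in degrees, clockwise from the +y axis (north)."""
    t = math.radians(theta_deg)
    return (r * math.sin(t), r * math.cos(t), h)

print(cyl_math_to_xyz(math.pi / 2, 10.0, 3.0))  # (~0, 10, 3)
print(cyl_nav_to_xyz(90.0, 10.0, 3.0))          # (10, ~0, 3): bearing 90 deg is due east
```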
Figures 10-17A to 10-17C show three systems of spherical coordinates for defining points in space. The first two are used by astronomers and aerospace scientists, while the third one is of use to navigators and surveyors.
Angles Represent Declination and Right Ascension - Astronomers and Aerospace Scientists
In the scheme shown in Fig. 10-17A, the location of a point P is defined by the ordered triple (θ,φ, r ) such that θ represents the declination of P, φ represents the right ascension of P , and r represents the distance or radius from P to the origin. In this example, angles are specified in degrees (except in the case of the astronomer’s version of right ascension, which is expressed in hours, minutes, and seconds as defined earlier in this chapter). Alternatively, the angles can be expressed in radians. This system is fixed relative to the stars.
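A similar illustrative sketch for the system of Fig. 10-17A, assuming θ is the declination and φ the right ascension, both taken in degrees here for convenience:

```python
import math

def spherical_to_xyz(dec_deg, ra_deg, r):
    """Convert (declination, right ascension, radius) to Cartesian coordinates."""
    d, a = math.radians(dec_deg), math.radians(ra_deg)
    return (r * math.cos(d) * math.cos(a),
            r * math.cos(d) * math.sin(a),
            r * math.sin(d))

print(spherical_to_xyz(90.0, 0.0, 1.0))   # north celestial pole -> (0, 0, 1)
print(spherical_to_xyz(0.0, 90.0, 1.0))   # on the equator at RA 90 deg -> (0, 1, 0)
```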
NCERT Exercise 13.2 Class 8 solution – students can click here to download the entire PDF chapter-wise. The free PDF of the NCERT solutions for Class 8 can be downloaded here, and all exercises are available. These solutions are available in downloadable PDF format and will help students clear their doubts about the topics covered in the exercise. The NCERT textbook provides plenty of questions for students to solve and practise, and solving and practising them is enough to score high in the Class 8 examinations. Students should therefore make sure that they practise every problem given in the textbook. The Exercise 13.2 Class 8 solution PDF can be downloaded free here.
Exercise 13.2 class 8 solution – what is a line graph
A line graph, also known as a line chart, is a type of data visualization that is used to represent and display data points over a continuous interval or time series. Line graphs are particularly useful for showing how one variable (often called the dependent variable) changes in relation to another variable (usually the independent variable) as it progresses in a sequential order. Here are the key characteristics and components of a line graph:
Continuous Data Representation: Line graphs are best suited for data that varies continuously, such as data points collected at regular time intervals or data along a continuous range. The x-axis typically represents the independent variable, which is usually time, distance, or another continuous variable.
Data Points: Data is plotted as points on the graph, with each point representing a specific measurement or observation. These points are marked using dots, crosses, or other symbols.
Lines: The data points are connected with straight lines. These lines show the progression and trends in the data. By connecting the data points, line graphs allow for the visualization of how the dependent variable changes as the independent variable increases or decreases.
Axis Labels: The x-axis and y-axis are labeled with titles that describe what each axis represents. The labels often include units of measurement.
Title: The graph typically has a title that provides an overview of what the graph represents. It may also include a brief description of the data.
Exercise 13.2 class 8 solution -exercise preview
The Exercise 13.2 Class 8 solution preview is given below.
Exercise 13.2 class 8 solution – solution pdf
Students can view or download the PDF from here. Click at the bottom to scroll the PDF pages. We provide the Exercise 13.2 Class 8 solution to help students work efficiently.
Exercise 13.2 class 8 solution- line graph examples
A line graph displays data that changes continuously over periods of time. For example, when a person falls sick, the doctor maintains a record of his or her body temperature, taken after every fixed interval of time. Study the following table; its pictorial representation is known as a time-temperature graph.
Time is shown on the x-axis and temperature on the y-axis.
Plot the points (6,102), (8, 100), (10, 101), (12, 100), (2,99), (4,98), (6,98)
What does this graph tell you? For example, you can see that the temperature is highest (102°F) at 6 a.m., decreases until 8 a.m., rises again up to 10 a.m., then decreases up to 4 p.m. and remains constant up to 6 p.m.
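For readers who want to reproduce the graph on a computer, here is a short, optional Python (matplotlib) sketch using the points listed above.

```python
import matplotlib.pyplot as plt

times = ["6 am", "8 am", "10 am", "12 noon", "2 pm", "4 pm", "6 pm"]
temps = [102, 100, 101, 100, 99, 98, 98]   # the points plotted above

plt.plot(times, temps, marker="o")
plt.xlabel("Time")
plt.ylabel("Body temperature (deg F)")
plt.title("Time-temperature graph")
plt.show()
```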
Example 13.5 Study the graph shown in figure 13.10
and answer the following questions.
(i) What information does the graph show?
(ii) What is the time when the temperature is 99° F?
(iii) The temperature was the same at two times during the given period. What are those times?
(iv) What is the temperature at 6 p.m.?
Solution. (i) The graph shows the body temperature of a person recorded after every 2 hours.
(ii) The temperature is 99° F at 10 a.m.
(iii) The temperature was the same at 12 noon and 4 p.m.
(iv) The temperature is 97.5° F at 6 p.m.
Example 13.6 The given graph (fig 13.11) describes the distance of a car from city A at different times when it was travelling from city A to city B, which are 500 km apart. Study the graph and answer the following.
(i) When did the car start its journey?
(ii) How far did the car go in the first hour?
(iii) Did the car stop for some duration during its journey? For what time duration did the car stop?
(iv) When did the car reach B?
(v) What was the total distance travelled by the car in the first five hours?
Solution. (i) The car started from point A at 8 a.m.
(ii) The car travelled 100 km in the first hour.
(iii) Yes, the car stopped during the journey. It stopped from 11 a.m. to 1 p.m., as no distance was travelled between these hours.
(iv) The car reached point B at 3 p.m.
(v) The total distance travelled in the first five hours, i.e. up to 1 p.m., is 300 km.
Example 13.7 The given graph (fig 13.12) represents the total runs scored by two batsmen, A and B, during each of five different matches in 2017. Study the graph and answer the following.
(i) What information is given on the two axes?
(ii) Which line shows the runs scored by batsman B?
(iii) Did batsmen A and B score the same runs in any match?
(iv) Which player is more consistent? (Give a reason.)
Solution. (i) The horizontal axis (x-axis) indicates the matches played during the year 2017 and the vertical axis (y-axis) shows the runs scored by the two batsmen.
(ii) The dark red line shows the runs scored by batsman B.
(iv) Batsman B is more consistent, as the graph of batsman A has many ups and downs, whereas batsman B shows a nearly uniform performance in all five matches.
Stars of different mass and age have varying internal structures. Stellar structure models describe the internal structure of a star in detail and make detailed predictions about the luminosity, the color and the future evolution of the star.
Convection is the dominant mode of energy transport when the temperature gradient is steep enough so that a given parcel of gas within the star will continue to rise if it rises slightly via an adiabatic process. In this case, the rising parcel is buoyant and continues to rise if it is warmer than the surrounding gas; if the rising particle is cooler than the surrounding gas, it will fall back to its original height. In regions with a low temperature gradient and a low enough opacity to allow energy transport via radiation, radiation is the dominant mode of energy transport.
The internal structure of a main sequence star depends upon the mass of the star.
In solar mass stars (0.3–1.5 solar masses), including the Sun, hydrogen-to-helium fusion occurs primarily via proton-proton chains, which do not establish a steep temperature gradient. Thus, radiation dominates in the inner portion of solar mass stars. The outer portion of solar mass stars is cool enough that hydrogen is neutral and thus opaque to ultraviolet photons, so convection dominates. Therefore, solar mass stars have radiative cores with convective envelopes in the outer portion of the star.
In massive stars (greater than about 1.5 solar masses), the core temperature is above about 1.8×107 K, so hydrogen-to-helium fusion occurs primarily via the CNO cycle. In the CNO cycle, the energy generation rate scales as the temperature to the 17th power, whereas the rate scales as the temperature to the 4th power in the proton-proton chains. Due to the strong temperature sensitivity of the CNO cycle, the temperature gradient in the inner portion of the star is steep enough to make the core convective. In the outer portion of the star, the temperature gradient is shallower but the temperature is high enough that the hydrogen is nearly fully ionized, so the star remains transparent to ultraviolet radiation. Thus, massive stars have convective cores with radiative envelopes.
Equations of stellar structure
The simplest commonly used model of stellar structure is the spherically symmetric quasi-static model, which assumes that a star is in a steady state and that it is spherically symmetric. It contains four basic first-order differential equations: two represent how matter and pressure vary with radius; two represent how temperature and luminosity vary with radius.
In forming the stellar structure equations (exploiting the assumed spherical symmetry), one considers the matter density , temperature , total pressure (matter plus radiation) , luminosity , and energy generation rate per unit mass in a spherical shell of a given thickness at a distance from the center of the star. The star is assumed to be in local thermodynamic equilibrium (LTE), so the temperature is identical for matter and photons. Although LTE does not strictly hold, because the temperature a given shell "sees" below itself is always hotter than the temperature above it, this approximation is normally excellent because the photon mean free path is much smaller than the length over which the temperature varies considerably.
Integrating the mass continuity equation from the star center () to the radius of the star () yields the total mass of the star.
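In its standard textbook form (quoted here for reference, with m(r) the mass interior to radius r and ρ the density), the mass continuity equation and its integral read

\[ \frac{dm}{dr} = 4\pi r^{2}\rho, \qquad M = \int_{0}^{R} 4\pi r^{2}\rho \, dr . \]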
Considering the energy leaving the spherical shell yields the energy equation:
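In standard textbook notation, with l the luminosity, ε the nuclear energy generation rate per unit mass and ε_ν the corresponding neutrino losses, this reads

\[ \frac{dl}{dr} = 4\pi r^{2}\rho\,(\epsilon - \epsilon_{\nu}), \]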
where ε_ν is the luminosity produced in the form of neutrinos (which usually escape the star without interacting with ordinary matter) per unit mass. Outside the core of the star, within which the nuclear reactions occur, no energy is generated, so the luminosity is constant.
The energy transport equation takes differing forms depending upon the mode of energy transport. For conductive luminosity transport (appropriate for a white dwarf), the energy equation is
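(quoting the standard textbook form)

\[ \frac{dT}{dr} = -\frac{1}{k}\,\frac{l}{4\pi r^{2}}, \]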
where k is the thermal conductivity.
In the case of radiative energy transport, appropriate for the inner portion of a solar mass main sequence star and the outer envelope of a massive main sequence star,
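the standard textbook form of the temperature gradient (with κ the opacity and σ the Stefan–Boltzmann constant) is

\[ \frac{dT}{dr} = -\frac{3\kappa\rho l}{64\pi\sigma r^{2}T^{3}} . \]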
The case of convective luminosity transport (appropriate for non-radiative portions of main sequence stars and all of giants and low mass stars) does not have a known rigorous mathematical formulation, and involves turbulence in the gas. Convective energy transport is usually modeled using mixing length theory. This treats the gas in the star as containing discrete elements which roughly retain the temperature, density, and pressure of their surroundings but move through the star as far as a characteristic length, called the mixing length. For a monatomic ideal gas, when the convection is adiabatic, meaning that the convective gas bubbles don't exchange heat with their surroundings, mixing length theory yields
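(in the standard textbook form, quoted here for reference)

\[ \frac{dT}{dr} = \left(1 - \frac{1}{\gamma}\right)\frac{T}{P}\,\frac{dP}{dr}, \]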
where γ is the adiabatic index, the ratio of specific heats in the gas. (For a fully ionized ideal gas, γ = 5/3.) When the convection is not adiabatic, the true temperature gradient is not given by this equation. For example, in the Sun the convection at the base of the convection zone, near the core, is adiabatic, but that near the surface is not. The mixing length theory contains two free parameters which must be set to make the model fit observations, so it is a phenomenological theory rather than a rigorous mathematical formulation.
Also required are the equations of state, relating the pressure, opacity and energy generation rate to other local variables appropriate for the material, such as temperature, density, chemical composition, etc. Relevant equations of state for pressure may have to include the perfect gas law, radiation pressure, pressure due to degenerate electrons, etc. Opacity cannot be expressed exactly by a single formula. It is calculated for various compositions at specific densities and temperatures and presented in tabular form. Stellar structure codes (meaning computer programs calculating the model's variables) either interpolate in a density-temperature grid to obtain the opacity needed, or use a fitting function based on the tabulated values. A similar situation occurs for accurate calculations of the pressure equation of state. Finally, the nuclear energy generation rate is computed from particle physics experiments, using reaction networks to compute reaction rates for each individual reaction step and equilibrium abundances for each isotope in the gas.
Combined with a set of boundary conditions, a solution of these equations completely describes the behavior of the star. Typical boundary conditions set the values of the observable parameters appropriately at the surface () and center () of the star: , meaning the pressure at the surface of the star is zero; , there is no mass inside the center of the star, as required if the mass density remains finite; , the total mass of the star is the star's mass; and , the temperature at the surface is the effective temperature of the star.
Although stellar evolution models nowadays describe the main features of color-magnitude diagrams, important improvements must still be made to remove uncertainties linked to the limited knowledge of transport phenomena. The most difficult challenge remains the numerical treatment of turbulence. Some research teams are developing simplified modelling of turbulence in 3D calculations.
The above simplified model is not adequate without modification in situations when the composition changes are sufficiently rapid. The equation of hydrostatic equilibrium may need to be modified by adding a radial acceleration term if the radius of the star is changing very quickly, for example if the star is radially pulsating. Also, if the nuclear burning is not stable, or the star's core is rapidly collapsing, an entropy term must be added to the energy equation.
- Hansen, Kawaler & Trimble (2004, §5.1.1)
- Hansen, Kawaler & Trimble (2004, Tbl. 1.1)
- Hansen, Kawaler & Trimble (2004, §2.2.1)
- This discussion follows those of, e. g., Zeilik & Gregory (1998, §16-1–16-2) and Hansen, Kawaler & Trimble (2004, §7.1)
- Hansen, Kawaler & Trimble (2004, §5.1)
- Ostlie, Dale A. and Carrol, Bradley W., An introduction to Modern Stellar Astrophysics, Addison-Wesley (2007)
- Iglesias, C.A.; Rogers, F.J. (June 1996), "Updated Opal Opacities", Astrophysical Journal 464: 943–+, Bibcode:1996ApJ...464..943I, doi:10.1086/177381.
- Rauscher, T.; Heger, A.; Hoffman, R.D.; Woosley, S.E. (September 2002), "Nucleosynthesis in Massive Stars with Improved Nuclear and Stellar Physics", The Astrophysical Journal 576 (1): 323–348, arXiv:astro-ph/0112478, Bibcode:2002ApJ...576..323R, doi:10.1086/341728.
- Moya, A.; Garrido, R. (August 2008), "Granada oscillation code (GraCo)", Astrophysics and Space Science 316 (1–4): 129–133, arXiv:0711.2590, Bibcode:2008Ap&SS.316..129M, doi:10.1007/s10509-007-9694-2.
- Mueller, E. (July 1986), "Nuclear-reaction networks and stellar evolution codes – The coupling of composition changes and energy release in explosive nuclear burning", Astronomy and Astrophysics 162: 103–108, Bibcode:1986A&A...162..103M.
- Kippenhahn, R.; Weigert, A. (1990), Stellar Structure and Evolution, Springer-Verlag
- Hansen, Carl J.; Kawaler, Steven D.; Trimble, Virginia (2004), Stellar Interiors (2nd ed.), Springer, ISBN 0-387-20089-4
- Kennedy, Dallas C.; Bludman, Sidney A. (1997), "Variational Principles for Stellar Structure", Astrophysical Journal 484 (1): 329, arXiv:astro-ph/9610099, Bibcode:1997ApJ...484..329K, doi:10.1086/304333
- Weiss, Achim; Hillebrandt, Wolfgang; Thomas, Hans-Christoph; Ritter, H. (2004), Cox and Giuli's Principles of Stellar Structure, Cambridge Scientific Publishers
- Zeilik, Michael A.; Gregory, Stephan A. (1998), Introductory Astronomy & Astrophysics (4th ed.), Saunders College Publishing, ISBN 0-03-006228-4
- opacity code retrieved November 2009
- The Yellow CESAM code, stellar evolution and structure FORTRAN source code
- EZ to Evolve ZAMS Stars a FORTRAN 90 software derived from Eggleton's Stellar Evolution Code, a web-based interface can be found here .
- Geneva Grids of Stellar Evolution Models (some of them including rotational induced mixing)
- The BaSTI database of stellar evolution tracks |
A pictorial or network solution could be drawn such that a dot
represents a person, and each line segment represents a handshake
between two people. (In the drawing below, this scheme has been used,
but color‑coding also shows that the first person (red) shakes hands
with eight people; then, the second person (blue) shakes hand with only
seven people, since he has already shaken hands with red; then, the
third person (yellow) shakes only six hands, because she has shaken
hands with red and blue; and so on.)
An organized list could also be used to show all the handshakes.
Note that every pair of numbers is included just once in the list
below; that is, if the pair 4‑6 is included, the pair 6‑4 is not also
included, because it represents the same handshake. Further, pairs with
the same number are not included, such as 7‑7, because they represent a
person shaking his or her own hand.
To allow varied approaches to be displayed, give each group a
transparency sheet and overhead marker so that they may create a visual
model to explain their solution to the class. Begin the discussion of
solution strategies with the physical model of the problem. Have
nine students stand in a line at the front of the class. The first student
walks down the line, shaking hands with each person, while the class
counts the number of handshakes aloud (8). She then sits down. The next
student walks down the line, shaking hands with each person, while the
class counts aloud (7). The next student shakes 6 hands, then 5, 4, 3,
2, and 1. The last student has no hands to shake, since he has already
shaken the hands of all people in line before him, so he just sits
down. The total number of handshakes is
8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 36.
Now ask, "How many handshakes occur when there are 30 people?
How many handshakes occur with the whole class? Do we want everyone in
the class to stand up, and continue counting out loud?" Probe student
thinking to see if there is a different, or more efficient, way that
would make sense when considering larger groups.
Have each group use their transparency to explain their
various ways to get the solution. To engage students in examining
varied representations for the same problem, ask, "Does this make sense
to you? How is this group’s explanation similar to your explanation?
How is it different?"
Once all students are convinced that nine Supreme Court
Justices have a total of 36 handshakes, extend the problem. Ask, "How
many handshakes occur with 10 people?" Using the table, students may
see that one more is added in each row than was added in the previous
row; therefore, for 10 people, there would be 36 + 9 = 45 handshakes.
To allow students to investigate the relationship between the number of people and the number of handshakes, let them explore the Handshake Online Activity linked below.
This interactive demonstration allows them to see a pictorial
representation of the situation as well as see the pattern of numbers
appear in a table. In particular, students can investigate the change
that occurs in the number of handshakes as the number of people
increases by 1, and noticing this change can be very powerful.
Handshake Online Activity
This is called a recursive relation, because the number of handshakes for n people can be described in terms of the number of handshakes that occurred for (n – 1) people.
Students may be comfortable adding on or computing manually for
groups up to 20 people. If that seems to be the case, and if students
are not looking for a generalized solution, pose the question, "What if
100 Senators greeted one another with a handshake when they met each
morning? How many handshakes would there be?" Distribute the
activity sheet, and allow time for students to complete the table and
discover relationships. (You might wish to display the activity sheet
as a transparency on the overhead projector and have the class work
together to fill in the first several rows. Many of the groups will
already have answers for the number of handshakes in groups of various sizes.)
Have various students explain the relationships they see. With
each suggestion, have the class decide if using that relationship will
allow them to determine the number of handshakes for 30, 100, or n people. Some possible relationships that students may see:
Add the number of previous people to their number of handshakes, and that will give the next number of handshakes;
For instance, there were 6 handshakes with 4 people; therefore, there are 6 + 4 = 10 handshakes for a group of 5 people.
The differences between the numbers in the second column form a linear pattern, 1, 2, 3, 4, ….
As a result of these discoveries, students should realize that the
number of handshakes for 30 people is 1 + 2 + 3 + … + 29 = 435. Value
all student suggestions, but keep probing to determine the number of
handshakes for 100 people.
To lead students to determine a closed‑form rule for the
relationship, have students look for a rule that uses multiplication,
and ask the following leading questions:
For 7 people, there are 21 handshakes. How is 7 related to 21? [Multiply by 3.]
For 9 people, there are 36 handshakes. How is 9 related to 36? [Multiply by 4.]
What about for 8 people? There are 28 handshakes. How is 8 related to 28? [Multiply by 3.5.]
Students should see that the number of handshakes is equal to the
previous number of people multiplied by the current number of people,
divided by 2. In algebraic terms, the formula is h = n(n – 1)/2.
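For teachers who want a quick computational check, here is a short, optional Python sketch comparing the recursive rule ("add the previous number of people to the previous total") with the closed-form rule discussed above.

```python
def handshakes_recursive(n):
    # n people: the last person to arrive shakes (n - 1) hands
    return 0 if n <= 1 else handshakes_recursive(n - 1) + (n - 1)

def handshakes_closed_form(n):
    return n * (n - 1) // 2

for people in (9, 10, 30, 100):
    print(people, handshakes_recursive(people), handshakes_closed_form(people))
# 9 -> 36, 10 -> 45, 30 -> 435, 100 -> 4950
```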
Another way to attain the solution is to use an organized table.
If there are nine people, then we can list the individuals along the
top row and left column, as shown below. The entries within the table,
then, indicate handshakes. However, the handshakes in yellow cells
indicate that a person shakes his or her own hand, so they should not
be counted; and, the entries in red cells are the mirror images of the
entries in blue cells, so they represent the same handshakes and only
half of them should be counted. For nine people, there are 81 entries
in the table, but we do not count the nine entries along the diagonal,
and we only count half of those remaining. This gives ½(81 – 9) = 36.
In general, for n people, there are n² entries in the table, and there are n entries along the diagonal. Therefore, the number of handshakes is ½(n² – n), which is equivalent to the algebraic formula stated above.
When students arrive at the formula, ask, "Does it matter if you
multiply first and then divide by 2? Can you divide by 2 first and then
multiply?" [Because of the commutative property, order does not
matter.] This is an important point, because students can use mental
math to perform calculations with this formula in three different ways:
- Multiply n by (n – 1), and then divide by 2;
- Divide n by 2 , and then multiply by (n – 1); or,
- Divide (n – 1) by 2 , and then multiply by n.
Students should decide which number to divide by 2, depending on whether n or (n – 1) is even. As an example, for 15 people, n = 15 and (n – 1) = 14, so it makes sense to divide 14 by 2 and then multiply by 15: 7 × 15 = 105. On the other hand, for 20 people, n = 20 and (n – 1) = 19, so it makes sense to divide 20 by 2 and then multiply by 19: 10 × 19 = 190.
As a final step, students can plot the relationship between
number of people and number of handshakes. Students should describe the
shape of the graph and answer the following questions:
- Is the relationship linear? [No, it is nonlinear.]
- How would you know from the table that the relationship is not linear? [There is not a constant rate of change.]
- How would you know from the variable expression that the relationship is not linear? [The variable n is multiplied by (n – 1), and the product contains n², which means the curve will be quadratic.]
- How would you know from the graph that the relationship is not linear? [The graph is a curve, not a straight line.]
By the end of this lesson, students will have used (or at least
seen) a solution involving a table, a verbal description, a pictorial
representation, and a variable expression. It may be important to
highlight this to students, and it would be good to encourage students
to use all of these various types of representations. Each
representation provides different information and may offer insight
when solving problems. |
Sep 9, 2022
Inflation 101: What is the inflation rate?
The inflation rate refers to the change in purchasing power of a given currency over time. Typically, prices rise, diminishing the value of currency. In periods marked by high inflation rates, rising prices may outpace wages, making it more difficult to buy the things you’re used to.
Keeping track of the inflation rate is a crucial part of your financial planning because it gives you an idea of how well your investments need to perform in order to maintain your standard of living during retirement. While your financial advisor needs to understand the ins and outs of inflation, it is helpful for you to understand as well. Read on to find out how inflation works and how it is calculated.
What does the inflation rate mean?
Inflation is the decline in purchasing power over time, and the inflation rate is a measure of how much prices have risen or fallen over a certain period of time. The inflation rate is given as a percentage. An annual 2% inflation rate, for example, means that the overall price of goods and services has risen 2% over the course of the year: something that cost $10 last year would cost $10.20 this year.
Inflation is natural in a healthy economy and is typically offset by rising wages. Goods and services may cost a bit more, but workers should generally be earning more as well. Any gap when inflation rises more than wages (or when there’s widespread unemployment) can cause a period of economic downturn, such as a recession.
The primary cause of inflation is an increase in a nation’s money supply. In the U.S., the Federal Reserve can influence the inflation rate by pumping more money into the financial system in the form of reserve account credits (a type of loan to banks).
The inflation rate can also increase due to rising costs of raw materials and production. This is called cost-push inflation. Conversely, demand-pull inflation can occur when demand is higher than supply. Prices may go up, but people are making money and ready to spend it.
What is an example of inflation?
Most of us have first-hand experience with inflation at places like the gas pump and grocery store. Here are two examples of how prices have changed over time:
Gasoline:
- Average price per gallon in July 1995: $1.25
- Average price per gallon in July 2022: $4.77
Milk:
- Average price per gallon in 1995: $2.48
- Average price per gallon in 2021: $3.55
While prices have definitely risen over the decades, wages have as well. In 1965, for example, the median family income was $6,900, compared to $79,900 in 2021.
How to calculate the inflation rate
The actual formula for calculating the inflation rate requires just two pieces of data: the price in the current or target year (T) and the price in the base year (B). From there, you can calculate the rate of change as follows:
((T – B)/B) x 100
The answer is the inflation rate as a percentage. Let’s find out the inflation rate of milk between 1995 and 2021 using the prices above.
((3.55-2.48)/2.48) x 100 = 43.15
That means the inflation rate for a gallon of milk over that 26-year period was 43.15%. On an annualized (compounded) basis, that works out to roughly 1.4% per year.
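For readers who want to reproduce these figures, here is a minimal sketch (Python, for illustration only; the function names are ours, not from any particular library) that applies the same formula and then annualizes the result by compounding:

```python
def inflation_rate(target_price, base_price):
    """Total percentage change between the base year and the target year."""
    return (target_price - base_price) / base_price * 100

def annualized_rate(target_price, base_price, years):
    """Equivalent constant yearly rate, assuming annual compounding."""
    return ((target_price / base_price) ** (1 / years) - 1) * 100

milk_1995, milk_2021 = 2.48, 3.55
print(round(inflation_rate(milk_2021, milk_1995), 2))       # ~43.15 (% over the whole period)
print(round(annualized_rate(milk_2021, milk_1995, 26), 2))  # ~1.39 (% per year, compounded)
```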
In economics, there are many indices used to track a variety of inflation rates. Here is a brief overview of the U.S. Bureau of Labor Statistics’ top inflation indexes.
Consumer Price Index
The Consumer Price Index (CPI) measures the average change over time in the prices paid by urban consumers for a broad basket of goods and services each month. This information is used to determine the overall pace of annual inflation.
Producer Price Index
The Producer Price Index (PPI) monitors the selling prices of domestic producers. It focuses on the sellers’ perspective rather than the consumers’. Goods, services, and construction products are all included in the PPI.
International Price Program
The International Price Program (IPP) looks at how import and export prices change between the U.S. and other countries.
The presence of inflation is normal and healthy for a growing economy, and we all see the impact it has on the wages we earn and the prices we pay. If you’re wondering how the level of inflation can impact your investment choices and financial planning considerations, that’s a good conversation to have with your investment advisor.
ABOUT THE AUTHOR
Dowling & Yahnke is a fee‐only registered investment adviser. Since 1991, Dowling & Yahnke has provided time-tested, objective financial planning advice and investment management services designed for the financial health and personal freedom of its clients. Located in San Diego, California, the Firm manages approximately $5.7 billion for more than 1,300 clients, primarily individuals, families, and nonprofit organizations.
Our team consists of highly-educated, experienced, and ethical professionals devoted to the highest standards of client service. We design custom wealth management solutions delivered with the highest level of personalized service.
This information is for educational purposes and is not intended to provide, and should not be relied upon for, accounting, legal, tax, insurance, or investment advice. This does not constitute an offer to provide any services, nor a solicitation to purchase securities. The contents are not intended to be advice tailored to any particular person or situation. We believe the information provided is accurate and reliable, but do not warrant it as to completeness or accuracy. This information may include opinions or forecasts, including investment strategies and economic and market conditions; however, there is no guarantee that such opinions or forecasts will prove to be correct, and they also may change without notice. We encourage you to speak with a qualified professional regarding your scenario and the then-current applicable laws and rules.
Different types of investments involve degrees of risk. Future performance of any investment or wealth management strategy, including those recommended by us, may not be profitable, suitable, or prove successful. Past performance is not indicative of future results. One cannot invest directly in an index or benchmark, and those do not reflect the deduction of various fees which would diminish results. Any index or benchmark performance figures are for comparison purposes only, and client account holdings will not directly correspond to any such data.
Our clients must, in writing, advise us of personal, financial, or investment objective changes and any restrictions desired on our services so that we may re-evaluate any previous recommendations and adjust our advisory services as needed. For current clients, please advise us immediately if you are not receiving monthly account statements from your custodian. We encourage you to compare your custodial statements to any information we provide to you.
Newton's laws of motion
Newton's laws of motion are three physical laws that, together, laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. More precisely, the first law defines the force qualitatively, the second law offers a quantitative measure of the force, and the third asserts that a single isolated force doesn't exist. These three laws have been expressed in several ways, over nearly three centuries,[i] and can be summarised as follows:
First law: In an inertial frame of reference, an object either remains at rest or continues to move at a constant velocity, unless acted upon by a force.
Second law: In an inertial frame of reference, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration a of the object: F = ma. (It is assumed here that the mass m is constant – see below.)
Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687. Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion.
- 1 Overview
- 2 Laws
- 3 History
- 4 Importance and range of validity
- 5 Relationship to the conservation laws
- 6 See also
- 7 References
- 8 Bibliography
- 9 External links
Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the object's body are neglected to focus on its motion more easily. This can be done when the object is small compared to the distances involved in its analysis, or the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star.
In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies. Leonhard Euler in 1750 introduced a generalisation of Newton's laws of motion for rigid bodies called Euler's laws of motion, later applied as well for deformable bodies assumed as a continuum. If a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies, independently of any particle structure.
Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; from this point of view, the second law holds only when the observation is made from an inertial reference frame, and therefore the first law cannot be proved as a special case of the second. Other authors do treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death.
In the given interpretation, mass, acceleration, momentum, and (most importantly) force are assumed to be externally defined quantities. This is the most common, but not the only, interpretation of the way one can consider the laws to be a definition of these quantities.
Newton's first law
The first law states that if the net force (the vector sum of all forces acting on an object) is zero, then the velocity of the object is constant. Velocity is a vector quantity which expresses both the object's speed and the direction of its motion; therefore, the statement that the object's velocity is constant is a statement that both its speed and the direction of its motion are constant.
The first law can be stated mathematically, when the mass is a non-zero constant, as ∑F = 0 ⇔ dv/dt = 0. In other words:
- An object that is at rest will stay at rest unless a force acts upon it.
- An object that is in motion will not change its velocity unless a force acts upon it.
This is known as uniform motion. An object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest (demonstrated when a tablecloth is skilfully whipped from under dishes on a tabletop and the dishes remain in their initial state of rest). If an object is moving, it continues to move without turning or changing its speed. This is evident in space probes that continuously move in outer space. Changes in motion must be imposed against the tendency of an object to retain its state of motion. In the absence of net forces, a moving object tends to move along a straight line path indefinitely.
Newton placed the first law of motion first in order to establish the frames of reference for which the other laws are applicable. The first law of motion postulates the existence of at least one frame of reference called a Newtonian or inertial reference frame, relative to which the motion of a particle not subject to forces is a straight line at a constant speed. Newton's first law is often referred to as the law of inertia. Thus, a condition necessary for the uniform motion of a particle relative to an inertial reference frame is that the total net force acting on it is zero. In this sense, the first law can be restated as:
In every material universe, the motion of a particle in a preferential reference frame Φ is determined by the action of forces whose total vanishes for all times when and only when the velocity of the particle is constant in Φ. That is, a particle initially at rest or in uniform motion in the preferential frame Φ continues in that state unless compelled by forces to change it.
Newton's first and second laws are valid only in an inertial reference frame. Any reference frame that is in uniform motion with respect to an inertial frame is also an inertial frame, i.e. Galilean invariance or the principle of Newtonian relativity.
Newton's second law
The second law states that the rate of change of momentum of a body is directly proportional to the force applied, and this change in momentum takes place in the direction of the applied force.
The second law can also be stated in terms of an object's acceleration. Since Newton's second law is valid only for constant-mass systems, m can be taken outside the differentiation operator by the constant factor rule in differentiation. Thus, F = dp/dt = d(mv)/dt = m(dv/dt) = ma,
where F is the net force applied, m is the mass of the body, and a is the body's acceleration. Thus, the net force applied to a body produces a proportional acceleration. In other words, if a body is accelerating, then there is a force on it. An application of this notation is the derivation of the constant G_c.
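As a quick numerical illustration of F = ma (a sketch with made-up figures, not drawn from the text above), the snippet below computes the net force needed to accelerate a 1,200 kg car at 2.5 m/s², and recovers the acceleration from that force:

```python
def net_force(mass_kg, acceleration_ms2):
    """Newton's second law for constant mass: F = m * a, in newtons."""
    return mass_kg * acceleration_ms2

def acceleration(force_n, mass_kg):
    """Rearranged form: a = F / m, in metres per second squared."""
    return force_n / mass_kg

print(net_force(1200, 2.5))      # 3000.0 N on a 1,200 kg car for 2.5 m/s^2
print(acceleration(3000, 1200))  # 2.5 m/s^2 recovered from the same force
```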
Consistent with the first law, the time derivative of the momentum is non-zero when the momentum changes direction, even if there is no change in its magnitude; such is the case with uniform circular motion. The relationship also implies the conservation of momentum: when the net force on the body is zero, the momentum of the body is constant. Any net force is equal to the rate of change of the momentum.
Any mass that is gained or lost by the system will cause a change in momentum that is not the result of an external force. A different equation is necessary for variable-mass systems (see below).
Newton's second law is an approximation that is increasingly worse at high speeds because of relativistic effects.
Since force is the time derivative of momentum, it follows that the impulse J delivered over a time interval equals the change in momentum: J = ∫F dt = Δp.
This relation between impulse and momentum is closer to Newton's wording of the second law.
Impulse is a concept frequently used in the analysis of collisions and impacts.
Variable-mass systems, like a rocket burning fuel and ejecting spent gases, are not closed and cannot be directly treated by making mass a function of time in the second law; that is, the following formula is wrong: F = d[m(t)v(t)]/dt = m(t) dv/dt + v(t) dm/dt.
The falsehood of this formula can be seen by noting that it does not respect Galilean invariance: a variable-mass object with F = 0 in one frame will be seen to have F ≠ 0 in another frame. The correct equation of motion for a body whose mass m varies with time by either ejecting or accreting mass is obtained by applying the second law to the entire, constant-mass system consisting of the body and its ejected/accreted mass; the result is F + u (dm/dt) = m (dv/dt),
where u is the velocity of the escaping or incoming mass relative to the body. From this equation one can derive the equation of motion for a varying mass system, for example, the Tsiolkovsky rocket equation. Under some conventions, the quantity u dm/dt on the left-hand side, which represents the advection of momentum, is defined as a force (the force exerted on the body by the changing mass, such as rocket exhaust) and is included in the quantity F. Then, by substituting the definition of acceleration, the equation becomes F = ma.
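To see how the variable-mass equation behaves numerically, here is an illustrative sketch (the thrust, mass, and burn figures are invented for the example) that integrates m dv/dt = u dm/dt for a rocket coasting in free space (F = 0) and compares the final speed with the Tsiolkovsky rocket equation Δv = u·ln(m0/m1):

```python
import math

# Hypothetical rocket: 1000 kg initial mass, 600 kg of propellant,
# exhaust speed 2500 m/s, burned at 10 kg/s, with no gravity or drag.
m0, propellant, u, mdot, dt = 1000.0, 600.0, 2500.0, 10.0, 0.001

m, v = m0, 0.0
while m > m0 - propellant:
    thrust = u * mdot        # the u * dm/dt term acts as thrust
    v += (thrust / m) * dt   # m dv/dt = u dm/dt when the external force F is zero
    m -= mdot * dt

print(round(v, 1))                                      # numerically integrated delta-v
print(round(u * math.log(m0 / (m0 - propellant)), 1))   # Tsiolkovsky: u * ln(m0 / m1)
```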
Newton's third law
The third law states that all forces between two objects exist in equal magnitude and opposite direction: if one object A exerts a force FA on a second object B, then B simultaneously exerts a force FB on A, and the two forces are equal in magnitude and opposite in direction: FA = −FB. The third law means that all forces are interactions between different bodies, or different regions within one body, and thus that there is no such thing as a force that is not accompanied by an equal and opposite force. In some situations, the magnitude and direction of the forces are determined entirely by one of the two bodies, say Body A; the force exerted by Body A on Body B is called the "action", and the force exerted by Body B on Body A is called the "reaction". This law is sometimes referred to as the action-reaction law, with FA called the "action" and FB the "reaction". In other situations the magnitude and directions of the forces are determined jointly by both bodies and it isn't necessary to identify one force as the "action" and the other as the "reaction". The action and the reaction are simultaneous, and it does not matter which is called the action and which is called reaction; both forces are part of a single interaction, and neither force exists without the other.
The two forces in Newton's third law are of the same type (e.g., if the road exerts a forward frictional force on an accelerating car's tires, then it is also a frictional force that Newton's third law predicts for the tires pushing backward on the road).
From a conceptual standpoint, Newton's third law is seen when a person walks: they push against the floor, and the floor pushes against the person. Similarly, the tires of a car push against the road while the road pushes back on the tires—the tires and road simultaneously push against each other. In swimming, a person interacts with the water, pushing the water backward, while the water simultaneously pushes the person forward—both the person and the water push against each other. The reaction forces account for the motion in these examples. These forces depend on friction; a person or car on ice, for example, may be unable to exert the action force to produce the needed reaction force.
Newton's 1st Law
From the original Latin of Newton's Principia:
“Lex I: Corpus omne perseverare in statu suo quiescendi vel movendi uniformiter in directum, nisi quatenus a viribus impressis cogitur statum illum mutare.”
Translated to English, this reads:
“Law I: Every body persists in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by force impressed.”
The ancient Greek philosopher Aristotle had the view that all objects have a natural place in the universe: that heavy objects (such as rocks) wanted to be at rest on the Earth and that light objects like smoke wanted to be at rest in the sky and the stars wanted to remain in the heavens. He thought that a body was in its natural state when it was at rest, and for the body to move in a straight line at a constant speed an external agent was needed continually to propel it, otherwise it would stop moving. Galileo Galilei, however, realised that a force is necessary to change the velocity of a body, i.e., acceleration, but no force is needed to maintain its velocity. In other words, Galileo stated that, in the absence of a force, a moving object will continue moving. (The tendency of objects to resist changes in motion was what Johannes Kepler had called inertia.) This insight was refined by Newton, who made it into his first law, also known as the "law of inertia"—no force means no acceleration, and hence the body will maintain its velocity. As Newton's first law is a restatement of the law of inertia which Galileo had already described, Newton appropriately gave credit to Galileo.
The law of inertia apparently occurred to several different natural philosophers and scientists independently, including Thomas Hobbes in his Leviathan. The 17th-century philosopher and mathematician René Descartes also formulated the law, although he did not perform any experiments to confirm it.
Newton's 2nd Law
Newton's original Latin reads:
“Lex II: Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur.”
This was translated quite closely in Motte's 1729 translation as:
“Law II: The alteration of motion is ever proportional to the motive force impress'd; and is made in the direction of the right line in which that force is impress'd.”
According to modern ideas of how Newton was using his terminology, this is understood, in modern terms, as an equivalent of:
The change of momentum of a body is proportional to the impulse impressed on the body, and happens along the straight line on which that impulse is impressed.
This may be expressed by the formula F = p', where p' is the time derivative of the momentum p. This equation can be seen clearly in the Wren Library of Trinity College, Cambridge, in a glass case in which Newton's manuscript is open to the relevant page.
Motte's 1729 translation of Newton's Latin continued with Newton's commentary on the second law of motion, reading:
If a force generates a motion, a double force will generate double the motion, a triple force triple the motion, whether that force be impressed altogether and at once, or gradually and successively. And this motion (being always directed the same way with the generating force), if the body moved before, is added to or subtracted from the former motion, according as they directly conspire with or are directly contrary to each other; or obliquely joined, when they are oblique, so as to produce a new motion compounded from the determination of both.
The sense or senses in which Newton used his terminology, and how he understood the second law and intended it to be understood, have been extensively discussed by historians of science, along with the relations between Newton's formulation and modern formulations.
Newton's 3rd Law
“Lex III: Actioni contrariam semper et æqualem esse reactionem: sive corporum duorum actiones in se mutuo semper esse æquales et in partes contrarias dirigi.”
Translated to English, this reads:
“Law III: To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.”
Newton's Scholium (explanatory comment) to this law:
Whatever draws or presses another is as much drawn or pressed by that other. If you press a stone with your finger, the finger is also pressed by the stone. If a horse draws a stone tied to a rope, the horse (if I may so say) will be equally drawn back towards the stone: for the distended rope, by the same endeavour to relax or unbend itself, will draw the horse as much towards the stone, as it does the stone towards the horse, and will obstruct the progress of the one as much as it advances that of the other. If a body impinges upon another, and by its force changes the motion of the other, that body also (because of the equality of the mutual pressure) will undergo an equal change, in its own motion, toward the contrary part. The changes made by these actions are equal, not in the velocities but in the motions of the bodies; that is to say, if the bodies are not hindered by any other impediments. For, as the motions are equally changed, the changes of the velocities made toward contrary parts are reciprocally proportional to the bodies. This law takes place also in attractions, as will be proved in the next scholium.
In the above, as usual, motion is Newton's name for momentum, hence his careful distinction between motion and velocity.
Newton used the third law to derive the law of conservation of momentum; from a deeper perspective, however, conservation of momentum is the more fundamental idea (derived via Noether's theorem from Galilean invariance), and holds in cases where Newton's third law appears to fail, for instance when force fields as well as particles carry momentum, and in quantum mechanics.
Importance and range of validity
Newton's laws were verified by experiment and observation for over 200 years, and they are excellent approximations at the scales and speeds of everyday life. Newton's laws of motion, together with his law of universal gravitation and the mathematical techniques of calculus, provided for the first time a unified quantitative explanation for a wide range of physical phenomena.
These three laws hold to a good approximation for macroscopic objects under everyday conditions. However, Newton's laws (combined with universal gravitation and classical electrodynamics) are inappropriate for use in certain circumstances, most notably at very small scales, very high speeds (in special relativity, the Lorentz factor must be included in the expression for momentum along with the rest mass and velocity) or very strong gravitational fields. Therefore, the laws cannot be used to explain phenomena such as conduction of electricity in a semiconductor, optical properties of substances, errors in non-relativistically corrected GPS systems and superconductivity. Explanation of these phenomena requires more sophisticated physical theories, including general relativity and quantum field theory.
In quantum mechanics, concepts such as force, momentum, and position are defined by linear operators that operate on the quantum state; at speeds that are much lower than the speed of light, Newton's laws are just as exact for these operators as they are for classical objects. At speeds comparable to the speed of light, the second law holds in the original form F = dp/dt, where F and p are four-vectors.
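As a small illustration of why the Lorentz factor matters at high speed (a sketch with arbitrary numbers, not part of the original article), the following compares classical momentum p = mv with relativistic momentum p = γmv:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def classical_momentum(mass, velocity):
    """Newtonian momentum p = m * v."""
    return mass * velocity

def relativistic_momentum(mass, velocity):
    """Relativistic momentum p = gamma * m * v, with Lorentz factor gamma."""
    gamma = 1.0 / math.sqrt(1.0 - (velocity / C) ** 2)
    return gamma * mass * velocity

m = 1.0  # kg
for v in (0.01 * C, 0.5 * C, 0.9 * C):
    print(f"v = {v / C:.2f}c  classical {classical_momentum(m, v):.3e}  "
          f"relativistic {relativistic_momentum(m, v):.3e}")
```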
Relationship to the conservation laws
In modern physics, the laws of conservation of momentum, energy, and angular momentum are of more general validity than Newton's laws, since they apply to both light and matter, and to both classical and non-classical physics.
This can be stated simply, "Momentum, energy and angular momentum cannot be created or destroyed."
Because force is the time derivative of momentum, the concept of force is redundant and subordinate to the conservation of momentum, and is not used in fundamental theories (e.g., quantum mechanics, quantum electrodynamics, general relativity, etc.). The standard model explains in detail how the three fundamental forces known as gauge forces originate out of exchange by virtual particles. Other forces, such as gravity and fermionic degeneracy pressure, also arise from the momentum conservation. Indeed, the conservation of 4-momentum in inertial motion via curved space-time results in what we call gravitational force in general relativity theory. The application of the space derivative (which is a momentum operator in quantum mechanics) to the overlapping wave functions of a pair of fermions (particles with half-integer spin) results in shifts of maxima of compound wavefunction away from each other, which is observable as the "repulsion" of the fermions.
Newton stated the third law within a world-view that assumed instantaneous action at a distance between material particles. However, he was prepared for philosophical criticism of this action at a distance, and it was in this context that he stated the famous phrase "I feign no hypotheses". In modern physics, action at a distance has been completely eliminated, except for subtle effects involving quantum entanglement. (In particular, this refers to Bell's theorem—that no local model can reproduce the predictions of quantum theory.) Despite only being an approximation, in modern engineering and all practical applications involving the motion of vehicles and satellites, the concept of action at a distance is used extensively.
The development of the second law of thermodynamics in the 19th century, building on Carnot's work, showed that not every physical quantity is conserved over time, undercutting any attempt to induce the opposite metaphysical view from Newton's laws. Hence, a "steady-state" worldview based solely on Newton's laws and the conservation laws does not take entropy into account.
- Euler's laws of motion
- Hamiltonian mechanics
- Lagrangian mechanics
- List of scientific laws named after people
- Mercury, orbit of
- Modified Newtonian dynamics
- Newton's law of universal gravitation
- Principle of least action
- Principle of relativity
- Reaction (physics)
- Principia. 1 (1729 translation ed.). p. 19.
- Browne, Michael E. (July 1999). Schaum's outline of theory and problems of physics for engineering and science (Series: Schaum's Outline Series). McGraw-Hill Companies. p. 58. ISBN 978-0-07-008498-8.
- Holzner, Steven (December 2005). Physics for Dummies. Wiley, John & Sons, Incorporated. p. 64. Bibcode:2005pfd..book.....H. ISBN 978-0-7645-5433-9.
- See the Principia on line at Andrew Motte Translation
- Andrew Motte translation of Newton's Principia (1687) Axioms or Laws of Motion
- Greiner, Walter (2003). Classical mechanics: point particles and relativity. New York: Springer. ISBN 978-0-387-21851-9.
- Zeidler, E. (1988). Nonlinear Functional Analysis and its Applications IV: Applications to Mathematical Physics. New York: Springer. ISBN 978-1-4612-4566-7.
- Wachter, Armin; Hoeber, Henning (2006). Compendium of theoretical physics. New York: Springer. ISBN 978-0-387-25799-0.
- Truesdell, Clifford A.; Becchi, Antonio; Benvenuto, Edoardo (2003). Essays on the history of mechanics: in memory of Clifford Ambrose Truesdell and Edoardo Benvenuto. New York: Birkhäuser. p. 207. ISBN 978-3-7643-1476-7.
[...] while Newton had used the word 'body' vaguely and in at least three different meanings, Euler realized that the statements of Newton are generally correct only when applied to masses concentrated at isolated points;
- Lubliner, Jacob (2008). Plasticity Theory (Revised Edition) (PDF). Dover Publications. ISBN 978-0-486-46290-5. Archived from the original (PDF) on 31 March 2010.
- Galili, I.; Tseitlin, M. (2003). "Newton's First Law: Text, Translations, Interpretations and Physics Education". Science & Education. 12 (1): 45–73. Bibcode:2003Sc&Ed..12...45G. doi:10.1023/A:1022632600805.
- Benjamin Crowell (2001). "4. Force and Motion". Newtonian Physics. ISBN 978-0-9704670-1-0. Archived from the original on 16 February 2007.
- In making a modern adjustment of the second law for (some of) the effects of relativity, m would be treated as the relativistic mass, producing the relativistic expression for momentum, and the third law might be modified if possible to allow for the finite signal propagation speed between distant interacting particles.
- NMJ Woodhouse (2003). Special relativity. London/Berlin: Springer. p. 6. ISBN 978-1-85233-426-0.
- Beatty, Millard F. (2006). Principles of engineering mechanics Volume 2 of Principles of Engineering Mechanics: Dynamics-The Analysis of Motion. Springer. p. 24. ISBN 978-0-387-23704-6.
- Thornton, Marion (2004). Classical dynamics of particles and systems (5th ed.). Brooks/Cole. p. 53. ISBN 978-0-534-40896-1.
- Plastino, Angel R.; Muzzio, Juan C. (1992). "On the use and abuse of Newton's second law for variable mass problems". Celestial Mechanics and Dynamical Astronomy. 53 (3): 227–232. Bibcode:1992CeMDA..53..227P. doi:10.1007/BF00052611. ISSN 0923-2958. "We may conclude emphasizing that Newton's second law is valid for constant mass only. When the mass varies due to accretion or ablation, [an alternate equation explicitly accounting for the changing mass] should be used."
- Halliday; Resnick. Physics. 1. p. 199. ISBN 978-0-471-03710-1.
It is important to note that we cannot derive a general expression for Newton's second law for variable mass systems by treating the mass in F = dP/dt = d(Mv) as a variable. [...] We can use F = dP/dt to analyze variable mass systems only if we apply it to an entire system of constant mass having parts among which there is an interchange of mass.[Emphasis as in the original]
Kleppner, Daniel; Robert Kolenkow (1973). An Introduction to Mechanics. McGraw-Hill. pp. 133–134. ISBN 978-0-07-035048-9.
Recall that F = dP/dt was established for a system composed of a certain set of particles[. ... I]t is essential to deal with the same set of particles throughout the time interval[. ...] Consequently, the mass of the system can not change during the time of interest.
- Hannah, J., Hillier, M.J., Applied Mechanics, p221, Pitman Paperbacks, 1971
- Raymond A. Serway; Jerry S. Faughn (2006). College Physics. Pacific Grove CA: Thompson-Brooks/Cole. p. 161. ISBN 978-0-534-99724-3.
- I. Bernard Cohen (Peter M. Harman & Alan E. Shapiro, Eds) (2002). The investigation of difficult things: essays on Newton and the history of the exact sciences in honour of D.T. Whiteside. Cambridge: Cambridge University Press. p. 353. ISBN 978-0-521-89266-7.
- W.J. Stronge (2004). Impact mechanics. Cambridge: Cambridge University Press. pp. 12 ff. ISBN 978-0-521-60289-1.
- Resnick; Halliday; Krane (1992). Physics, Volume 1 (4th ed.). p. 83.
- C Hellingman (1992). "Newton's third law revisited". Phys. Educ. 27 (2): 112–115. Bibcode:1992PhyEd..27..112H. doi:10.1088/0031-9120/27/2/011.
Quoting Newton in the Principia: It is not one action by which the Sun attracts Jupiter, and another by which Jupiter attracts the Sun; but it is one action by which the Sun and Jupiter mutually endeavour to come nearer together.
- Resnick & Halliday (1977). Physics (Third ed.). John Wiley & Sons. pp. 78–79.
Any single force is only one aspect of a mutual interaction between two bodies.
- Hewitt (2006), p. 75
- Isaac Newton, The Principia, A new translation by I.B. Cohen and A. Whitman, University of California press, Berkeley 1999.
- Thomas Hobbes wrote in Leviathan:
That when a thing lies still, unless somewhat else stir it, it will lie still forever, is a truth that no man doubts. But [the proposition] that when a thing is in motion it will eternally be in motion unless somewhat else stay it, though the reason be the same (namely that nothing can change itself), is not so easily assented to. For men measure not only other men but all other things by themselves. And because they find themselves subject after motion to pain and lassitude, [they] think every thing else grows weary of motion and seeks repose of its own accord, little considering whether it be not some other motion wherein that desire of rest they find in themselves, consists.
- Cohen, I.B. (1995). Science and the Founding Fathers: Science in the Political Thought of Jefferson, Franklin, Adams and Madison. New York: W.W. Norton. p. 117. ISBN 978-0393315103. Archived from the original on 22 March 2017.
- Cohen, I.B. (1980). The Newtonian Revolution: With Illustrations of the Transformation of Scientific Ideas. Cambridge, England: Cambridge University Press. pp. 183–184. ISBN 978-0521273800.
- According to Maxwell in Matter and Motion, Newton meant by motion "the quantity of matter moved as well as the rate at which it travels" and by impressed force he meant "the time during which the force acts as well as the intensity of the force". See Harman and Shapiro, cited below.
- See for example (1) I Bernard Cohen, "Newton's Second Law and the Concept of Force in the Principia", in "The Annus Mirabilis of Sir Isaac Newton 1666–1966" (Cambridge, Massachusetts: The MIT Press, 1967), pp. 143–185; (2) Stuart Pierson, "'Corpore cadente...': Historians Discuss Newton’s Second Law", Perspectives on Science, 1 (1993), pp. 627–658; and (3) Bruce Pourciau, "Newton's Interpretation of Newton's Second Law", Archive for History of Exact Sciences, vol.60 (2006), pp. 157–207; also an online discussion by G E Smith, in 5. Newton's Laws of Motion, s. 5 of "Newton's Philosophiæ Naturalis Principia Mathematica" in (online) Stanford Encyclopedia of Philosophy, 2007.
- This translation of the third law and the commentary following it can be found in the Principia on p. 20 of volume 1 of the 1729 translation Archived 25 April 2016 at the Wayback Machine.
- Newton, Principia, Corollary III to the laws of motion
- Crowell, Benjamin (2011). Light and Matter. Section 4.2, Newton's First Law, Section 4.3, Newton's Second Law, and Section 5.1, Newton's Third Law.
- Feynman, R.P.; Leighton, R.B.; Sands, M. (2005). The Feynman Lectures on Physics. Vol. 1 (2nd ed.). Pearson/Addison-Wesley. ISBN 978-0-8053-9049-0.
- Fowles, G.R.; Cassiday, G.L. (1999). Analytical Mechanics (6th ed.). Saunders College Publishing. ISBN 978-0-03-022317-4.
- Likins, Peter W. (1973). Elements of Engineering Mechanics. McGraw-Hill Book Company. ISBN 978-0-07-037852-0.
- Marion, Jerry; Thornton, Stephen (1995). Classical Dynamics of Particles and Systems. Harcourt College Publishers. ISBN 978-0-03-097302-4.
- NMJ Woodhouse (2003). Special Relativity. London/Berlin: Springer. p. 6. ISBN 978-1-85233-426-0.
- Newton, Isaac. "Axioms or Laws of Motion". Mathematical Principles of Natural Philosophy. 1, containing Book 1 (1729 English translation based on 3rd Latin edition (1726) ed.). p. 19.
- Newton, Isaac. "Axioms or Laws of Motion". Mathematical Principles of Natural Philosophy. 2, containing Books 2 & 3 (1729 English translation based on 3rd Latin edition (1726) ed.). p. 19.
- Thomson, W (Lord Kelvin); Tait, P G (1867). "242, Newton's laws of motion". Treatise on natural philosophy. 1.
On the night of August 7, 1996, astronomers Eric Elst and Guido Pizarro were observing what was previously thought to be an ordinary asteroid.
To their surprise, the object revealed a faint but distinct tail similar to that of a comet. Initially, this was written off as a debris cloud kicked up by a minor impact, but in 2002, when the supposed asteroid again reached perihelion (its closest approach to the Sun), it once again displayed a tenuous tail. The “asteroid” was then given the designation 133P/Elst-Pizarro. In 2005, two new asteroids were discovered to sport tails: P/2005 U1 and 118401. In 2008, yet another one of these odd objects was found (P/2008 R1). This new class of objects has been dubbed “Main Belt Comets (MBCs)”.
So where are these objects coming from?
A previous article on Universe Today explored the possibility that these objects formed like other asteroids in the main belt. After all, each of the objects has an orbit consistent with those of other, apparently normal asteroids. They orbit the Sun at similar distances and have similar orbital eccentricities and inclinations. So explaining these objects as bodies from the outer solar system that migrated just right into the asteroid belt seemed like little more than special pleading.
Furthermore, a 2008 study by Schorghofer at the University of Hawaii predicted that, if such an icy body were to form, it could avoid sublimation for several billion years provided it were covered with a few meters of dust and dirt, removing the objection that these objects would suffer an early death. (Keep in mind that, much like a melting snowball, the water will evaporate but the dirt won’t, so the dirt piles up quickly on the surface, making this entirely plausible!) However, if the ice were covered by such a layer of dust, it would take a collision to remove the dust and trigger the cometary appearance.
In a recent paper, Nader Haghighipour, also at the University of Hawaii, explores the viability of collisions to trigger this activation, as well as the stability of the orbits of these objects, to assess the expectation that they formed at the same time as other asteroids in the main belt.
For the orbital range in which three of the MBCs lie, it was predicted that “on average, one m[eter]-sized object collides … every 40,000 years.” They stress this is an upper limit since their simulation did not include other, nearby asteroids which would likely deplete the number of available impactors.
When they explored the orbital stability of these objects, they discovered that at least two of them are dynamically unstable and would eventually be ejected from their orbits on a timescale of 20 million years. As such, it would be unreasonable to expect such objects to have lasted for the nearly 5-billion-year history of the solar system, so an in-situ formation was ruled out. However, their orbital characteristics are similar to those of a group of asteroids known as the Themis family, suggesting they may have resulted from the same breakup of a larger body that created this group. This raises the question of whether more of these asteroids are secretly hiding water ice reservoirs and are just waiting for an impact to expose them.
Distinctly separate from this orbital family was P/2008 R1 which exists in an especially unstable orbit near one of the resonances from Jupiter. This suggests that this MBC was likely scattered to its present location, but from where remains to be determined.
So while such Main Belt Comets may not have formed exactly as they are now, they are likely to be in orbits not far removed from where they originally formed. This work also supported the earlier notion that minor impacts could reliably be expected to expose ice, allowing for the cometary tails. Whether or not more asteroids have tails tucked between their legs will be the target of future exploration.
Prior to statehood, Michigan was part of the Northwest Territory. During this period, Detroit was the territorial capital, and, instead of a state legislature, Michigan was governed by a unicameral body called the Territorial Council. See Christopher J. Carl, Michigan’s Four Constitutions, 1 (Legislative Service Bureau, Research Brief Series No. 13, 1994).
The Ordinance of 1787 was drafted as a "forever [and] unalterable" "compact between the original states, and the people and states in the said territory." It promised regions in the Northwest Territory future statehood. It also provided for religious freedom and protections such as due process and the writ of habeas corpus. The Ordinance of 1787 also prohibited slavery and cruel and unusual punishment.
MICHIGAN’S FOUR CONSTITUTIONS
States possess plenary power. Therefore, Michigan’s current constitution and its predecessors, like other state constitutions, differ from the federal constitution in that they do not, and need not, enumerate governmental powers. Despite this fundamental difference, most state constitutions draw heavily on the federal Constitution, and Michigan is no exception.
Michigan’s constitutions have all created or maintained three branches of government in the state: executive, legislative, and judicial. They have all established or maintained a bicameral legislature. Michigan’s constitutions have all provided the state with a bill of rights, in some form or another, enumerating state citizens’ individual rights and liberties.
The Michigan constitutions are democratic documents. Each was drafted by elected delegates and enacted by a vote of the state electorate. See Carl, supra, at 1. Except for court decisions finding provisions of a Michigan Constitution unconstitutional, nothing in a Michigan Constitution may be changed without voter approval. Id.
Because each constitution’s authority stems solely from its enactment by the people of Michigan, and not from the drafting process, courts use the rule of common understanding to interpret constitutional provisions. See Kuhn v. Secretary of State, 228 Mich. App. 319, 324; 579 N.W.2d 101, 104 (1998) (applying the rule of common understanding to interpret a provision of the Michigan Constitution). See also Kelley v. Riley, 417 Mich. 119, 137, 332 N.W.2d 353, 356 (1983). This means that, when construing the Michigan Constitution, courts must ask what meaning the general public would ascribe to the language in question. See Kelley, 417 Mich. at 137.
Justice Cooley described the rule as follows.
"A constitution is made for the people and by the people. The interpretation that should be given it is that which reasonable minds, the great mass of the people themselves, would give it. 'For as the Constitution does not derive its force from the convention which framed, but from the people who ratified it, the intent to be arrived at is that of the people, and it is not to be supposed that they have looked for any dark or abstruse meaning in the words employed, but rather that they have accepted them in the sense most obvious to the common understanding, and ratified the instrument in the belief that that was the sense designed to be conveyed.' (Cooley's Const Lim 81)." Traverse City School District v. Attorney General, 384 Mich. 390, 405; 185 N.W.2d 9, 14 (1971).
Despite giving deference to 'common understanding', courts may consult Constitutional Convention Records in order to ascertain the intent of a specific provision. See Kuhn, 228 Mich. App. at 324. Interpretations that support the document’s constitutionality are preferred. See Traverse City School District, 384 Mich. 390 at 406.
Over the years, Michigan voters have approved a total of four constitutions. Carl, supra, at 1. Two other proposed constitutions were rejected by voters in 1868 and 1874. Id.
In 1834, Governor Stevens T. Mason requested a census that demonstrated Michigan had the requisite population for statehood. See Susan P. Fino, THE MICHIGAN STATE CONSTITUTION: A REFERENCE GUIDE 5 (Greenwood Press, 1996). Once Michigan had decided to pursue statehood, it needed a state constitution. See Carl, supra, at 1. Accordingly, in 1835, ninety-one elected delegates met from May 11 to June 24 to draft what would become Michigan’s first constitution. Id. After drafting the 1835 Constitution, Michigan attained statehood in 1837.
Most of the delegates at the state’s first Constitutional Convention were Jacksonian Democrats. See Fino at 5. Many were farmers or tradesmen, although some east coast academics also participated. Id. These framers borrowed heavily from the language of other state constitutions, the New York and Connecticut Constitutions in particular. Id.
The 1835 Constitution authorized, inter alia:
- vesting all executive powers into an elected governor, see Mich. Const. art V, sections 1 and 3 (repealed 1850);
- a bicameral elected state legislature, see Mich. Const. art IV, sections 1 and 4 (repealed 1850); and
- a state supreme court, and justices appointed for seven year terms by the governor, upon "the advice and consent of the senate." See Mich. Const. art VI, sections 1 and 2 (repealed 1850).
The 1835 Constitution emphasized individual liberties, providing Michigan citizens with greater protection than the federal Bill of Rights. Article I was titled "Bill of Rights", and included detailed provisions regarding freedom of speech, religion, conscience, and the press, prohibiting unlawful search and seizure, and providing jury trials. See Mich. Const. art I (repealed 1850).
Other than Supreme Court Justices, who were to be appointed, the 1835 Constitution favored electing officials to office. Lower court judges were to be elected, as were county treasurers, sheriffs, and so on. In fact, Michigan’s historic preference for elected offices is rooted in its first constitution. See Fino at 7.
The 1835 Constitution handled apportionment similarly to the current Michigan Constitution. Senators and Representatives both were elected by constituents based on population (the sexism and racism built into the 1835 document notwithstanding). See Frank Ravitch, The Four Michigan Constitutions, in THE HISTORY OF MICHIGAN LAW 128 (Finkelman and Hershock, Eds., 2006).
The 1835 Constitution also authorized state spending on internal improvements. Mich. Const. art. XII, section 3 (repealed 1850).
Only fifteen years after enacting the 1835 Constitution, Michigan delegates met again to draft the 1850 Constitution. See Carl, supra, at 2. This new constitution was longer, more complex, and incorporated issues generally left to statutes, but voters embraced it enthusiastically because it addressed concerns that had developed among the electorate since 1835. Id.
Jacksonian democracy’s popularity had increased over the preceding fifteen years, and resonated with Michigan voters. See Id. See also Fino. Accordingly, the 1850 Constitution reflected the new political landscape by including provisions that increased government by the people. The 1850 Constitution, for example, dramatically increased the number of elected government positions, including:
- Supreme Court Justices, Mich. Const. art. VI, section 2 (repealed 1908);
- Secretary of State, Mich. Const. art. VIII, section 1 (repealed 1908);
- Superintendent of Public Instruction, Mich. Const. art. VIII, section 1 (repealed 1908);
- State Treasurer, Mich. Const. art. VIII, section 1 (repealed 1908);
- Commissioner of the Land Office, Mich. Const. art. VIII, section 1 (repealed 1908); and
- Auditor General, Mich. Const. art. VIII, section 1 (repealed 1908).
But drafters feared that frequent elections could wreak havoc on state policies. As a preventative measure they incorporated into the constitution numerous policy related provisions more suited to statutes. See Fino at 9. These policy statements covered a lot of territory, including the following.
- Prohibitions on lotteries, Mich. Const. art. IV, section 27 (repealed 1908).
- Limits on leases of agricultural land, Mich. Const. art. XVIII, section 12 (repealed 1908).
- The specific salaries of state officers, Mich. Const. art. IX, section 1 (repealed 1908).
- The dedication of educational funds, Mich. Const. art. XIII, section 2 (repealed 1908). See Fino at 9.
Mistrust of Banks and Internal Improvements
Throughout the 1830s, land speculation flourished in the United States, and Michigan in particular. See Malcolm J. Rohrbough, The Land Office Business: The Settlement and Administration of American Public Lands, 1789-1837, 221-249 (Oxford University Press, 1968). Speculation was accompanied by large increases in public and private debt. See Susan Fino, Perspectives: Federal Jurisprudence, State Autonomy: De Tocqueville or Disney? The Rehnquist Court’s Idea of Federalism, 66 Alb. L. Rev. 765, 769 (2003). And New York banks, which were central to the country’s infant banking system, strained to meet ever-increasing demands for specie. President Jackson’s “Specie Circular”, which required payment for public lands to be made in cash, provided the proverbial last straw that broke the national banking system’s back. See Peter Rousseau, Jacksonian Monetary Policy, Specie Flows, and the Panic of 1837, Journal of Economic History, 62(2) (2002).
The speculation bubble burst in 1837. Michigan’s internal improvements program foundered, nearly bankrupting the state. See Carl at 2. Subsequent bank failures, coupled with individuals’ personal financial crises and high unemployment, fueled mistrust of the financial sector. To address voters’ fundamental mistrust of banks and banking, drafters included in the 1850 constitution a provision requiring voter approval of any changes to banking regulations. See Carl, supra, at 2. Because the state’s internal improvements program was intertwined with events leading to the 1837 economic crisis, drafters also added a provision forbidding state involvement in internal improvements. See Mich. Const. art. XIV, section 9 (repealed 1908).
Another constitutional change impacted the proportionality of Michigan’s apportionment system. In particular, Article IV provided, in part, that “[e]ach county hereafter organized, with such territory as may be attached thereto, shall be entitled to a separate representative, when it has attained a population equal to a moiety of the ratio of representation.” Mich Const. art. IV, section 3 (repealed 1908). This meant that, as populations increased during the interims between constitutionally mandated reapportionments, certain urban districts went underrepresented on a per capita basis, while counties allocated a representative upon reaching a moiety were over represented. See Ravitch, supra, at 129.
As Michigan moved into the new century, its largely democratic government fell before the reformist-minded Republican Party led by Theodore Roosevelt. Fred M. Warner, the Republican governor of Michigan from 1905 to 1911, successfully called for a constitutional convention and a new constitution for a new century. See Fino, supra, at 13. As originally drafted, the 1908 Constitution made few significant changes to the 1850 Constitution. It did, however, reinsert as article II a bill of rights; the 1850 constitution had dispensed with the separate bill of rights included in the 1835 constitution, relegating important liberties to the status of miscellany. See Carl, supra, at 3. The 1908 Constitution also gave women property owners the vote on certain tax or bond matters, Mich. Const. art. X, section 25 (repealed 1963), although it did not fully enfranchise women until after the passage of the 19th Amendment to the U.S. Constitution. Id.
Other changes included the following.
- General reorganization (for the better)
- Authorization for wage and labor law legislation, including for women and children, Mich. Const. art. V, section 29 (repealed 1963)
- A gubernatorial line item veto on appropriations bills, Mich. Const. art. V, section 37 (repealed 1963)
- Greater local government autonomy, Mich. Const. art. X, sections 21-24 (repealed 1963)
A 1952 amendment to the 1908 Constitution had a negative impact on apportionment. It retained the earlier moiety provision, but counties entitled to additional representatives were awarded additional seats only when the “full ratio of representation was reached.” Mich. Const. art. V, section 3, as amended (1952)(repealed 1963). See also Ravitch, supra, at 130. Moreover, this amendment to the 1908 Constitution created an apportionment scheme for senatorial seats that resulted in unequal per capita representation that generally favored rural areas. Id.
Like earlier versions of the state constitution, the Constitution of 1963, our current state constitution, is a product of its time. As such, it reflects certain civil rights era values. For the first time, the convention included African American and women delegates. See Fino, supra, at 21. These delegates were cognizant of the Warren Court’s efforts on behalf of equal rights for African Americans. Id. at 23. As a result, they drafted a progressive equal protection guarantee. Id. See also Mich. Const. art. I, section 2. They also established the first constitutional Civil Rights Commission in the country. Carl, supra, at 4. The delegates were not so progressive, however, as to provide protection for women’s equal rights. Id. at 31.
Still, article I of Michigan’s 1963 Constitution specifically protects against racial discrimination and safeguards citizens’ civil and political rights. Some courts have argued that the language used in article I establishes protections greater than those of its federal counterpart, the Fourteenth Amendment. Id. See also NAACP v. City of Dearborn, 173 Mich. App. 602; 434 N.W.2d 444 (1988) (affirming that “Article I, section 2 of the Michigan Constitution [goes] beyond the limits of the Fourteenth Amendment by prohibiting all racial segregation, without regard to whether it was caused by a segregative purpose”). But see Harville v. State Plumbing and Heating, Inc., 218 Mich. App. 302; 553 N.W.2d 377 (1996) (asserting that Doe v. Dep’t of Social Services, 439 Mich. 650 (1992) “impliedly overruled” NAACP). According to the Harville Court, however, the drafters of the Michigan Constitution added specific language regarding discrimination to article I, section 2 in order to avoid the public accommodation and state action debate that was then playing out in the U.S. Supreme Court. 218 Mich. App. 302, 318.
Although the constitutional delegates were forward thinking on equal rights issues, they fell short elsewhere. They knew, for instance, that they needed to revisit legislative apportionment. During the time of the Constitutional Convention, the U.S. Supreme Court held, in Baker v. Carr, 369 U.S. 186, 235 (1962) that federal courts could hear and decide cases involving voters’ claims of malapportioned or debased votes and “denial of equal protection.” In light of Baker, the U.S. Supreme Court vacated and remanded a Michigan Supreme Court decision upholding the state’s apportionment scheme. Scholle v. Hare, 369 U.S. 429 (1962). On remand, the Michigan Supreme Court held that sections 2 and 4 of article V were unconstitutional. Scholle v. Hare, 367 Mich. 176, 116 N.W.2d 350 (1962).
The prospect of a federal court finding a 14th Amendment violation in a Michigan constitutional provision made delegates nervous. They wanted to avoid any potentially suspect system of apportionment. Fino, supra, at 23. Despite their caution, the delegates selected methods for apportioning house and senate seats (based on population for the house, and on population and land area for the senate) that, only one year later, were found to have violated the one-person, one-vote rule laid out in Reynolds v. Sims, 377 U.S. 533 (1964), and thus federal equal protection law. In re Apportionment of State Legislature, 413 Mich. 96; 321 N.W.2d 565 (1982).
Search and Seizure
The delegates also retained search and seizure language rejecting the “exclusionary rule” that they knew was unconstitutional. See Fino, supra, at 23. Article I, Section 11, contains language, added by amendment in 1935 (1935 Joint Resolution No. 1, ratified November 3, 1936), that states Section 11 "shall not be construed to bar from evidence in any criminal proceeding any narcotic drug, firearm, bomb, explosive or any other dangerous weapon, seized by a peace officer outside the curtilage of any dwelling house in this state." Mich. Const. art I, section 11. In 1961, however, the Supreme Court had held that the exclusionary rule applied to states. Mapp v. Ohio, 367 U.S. 643 (1961). Being familiar with Mapp, the delegates understood that the language in Section 11 rejecting the exclusionary rule was unconstitutional. But they left the questionable language intact. Several years later, the Michigan Supreme Court confirmed that, to the extent it deviates from federal law, Section 11 was indeed unconstitutional. People v. Pennington, 383 Mich. 611, 178 N.W.2d 471 (1970).
Like earlier constitutions, the 1963 Constitution also reflected the recent economic trials of the people of Michigan. Between 1908 and 1963, the population of Michigan had grown dramatically. See Albert Sturm, Constitution Making in Michigan 1961-1963, 21 (Institute of Public Administration, University of Michigan, 1963). Similarly, manufacturing and manufacturing jobs had increased. Id. But by 1953, jobs had peaked. Increasing automation in the automobile industry started a decline in employment to which the legislature failed to respond adequately, in large part because the legislature and the governor could not agree on a tax program. Id. But other factors, among them inequitable representation and the legislature’s lack of control over state funds, were also to blame. Id. By 1959, the state was in the midst of a full-blown financial crisis. Id. The crisis itself was resolved fairly quickly, but it left the people of Michigan anxious for constitutional reforms, and led to intense scrutiny of tax and funding provisions. See Fino, supra, at 22.
Other changes to the constitution were a response to a popular view that executive agencies should be consolidated into a less unwieldy structure. Carl at 3. Some more radical constituents even lobbied for a unicameral legislature. See Sturm, supra, at 21. As a result, the Constitution of 1963 reorganized the executive branch of state government. It also extended the terms of the Governor, Lieutenant Governor, Secretary of State, and Attorney General. Mich. Const. art. V, section 21.
The executive branch was not the only branch of government targeted for change. Senators’ terms were extended to be concurrent with the Governor’s, from two to four years. Mich. Const. art. IV, section 2. And delegates added several new provisions to Article VI, which relates to the judiciary. Section 2 reduced the number of justices from eight to seven. Mich. Const. art. VI, section 2. And section 8 established the Court of Appeals. Mich. Const. art. VI, section 8.
The 1963 Constitution was not universally applauded. Democratic groups, led primarily by former Governor Swainson, opposed the ratification of the new constitution. The AFL-CIO and the NAACP also expressed concerns. Supporters, led in part by former Governor Romney, succeeded in having the document ratified, however, and, despite its flaws, the 1963 Constitution, as amended, remains in place to this day.
Rocket propellant is the reaction mass of a rocket. This reaction mass is ejected at the highest achievable velocity from a rocket engine to produce thrust. The energy required can either come from the propellants themselves, as with a chemical rocket, or from an external source, as with ion engines.
Rockets create thrust by expelling mass rear-ward, at high velocity. The thrust produced can be calculated by multiplying the mass flow rate of the propellants by their exhaust velocity relative to the rocket (specific impulse). A rocket can be thought of as being accelerated by the pressure of the combusting gases against the combustion chamber and nozzle, not by "pushing" against the air behind or below it. Rocket engines perform best in outer space because of the lack of air pressure on the outside of the engine. In space it is also possible to fit a longer nozzle without suffering from flow separation.
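As a rough illustration of the relation just described, the short Python sketch below multiplies an assumed propellant mass flow rate by an assumed effective exhaust velocity to get ideal momentum thrust. The numbers are illustrative placeholders, not data for any real engine, and pressure-thrust terms are ignored.

```python
# Minimal sketch: ideal momentum thrust F = m_dot * v_e.
# All figures are illustrative assumptions, not data for a real engine.

def thrust(mass_flow_kg_s: float, exhaust_velocity_m_s: float) -> float:
    """Thrust from mass flow rate times effective exhaust velocity (pressure terms ignored)."""
    return mass_flow_kg_s * exhaust_velocity_m_s

m_dot = 250.0  # kg/s of propellant expelled (assumed)
v_e = 3000.0   # m/s effective exhaust velocity (assumed)
print(f"Thrust ≈ {thrust(m_dot, v_e) / 1000:.0f} kN")  # ≈ 750 kN
```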
Most chemical propellants release energy through redox chemistry, more specifically combustion. As such, both an oxidizing agent and a reducing agent (fuel) must be present in the mixture. Decomposition, such as that of highly unstable peroxide bonds in monopropellant rockets, can also be the source of energy.
In the case of bipropellant liquid rockets, a mixture of reducing fuel and oxidizing oxidizer is introduced into a combustion chamber, typically using a turbopump to overcome the pressure. As combustion takes place, the liquid propellant mass is converted into a huge volume of gas at high temperature and pressure. This exhaust stream is ejected from the engine nozzle at high velocity, creating an opposing force that propels the rocket forward in accordance with Newton's laws of motion.
Chemical rockets can be grouped by phase. Solid rockets use propellant in the solid phase, liquid fuel rockets use propellant in the liquid phase, gas fuel rockets use propellant in the gas phase, and hybrid rockets use a combination of solid and liquid or gaseous propellants.
In the case of solid rocket motors, the fuel and oxidizer are combined when the motor is cast. Propellant combustion occurs inside the motor casing, which must contain the pressures developed. Solid rockets typically have higher thrust, lower specific impulse, shorter burn times, and a higher mass than liquid rockets, and additionally cannot be stopped once lit.
In space, the maximum change in velocity that a rocket stage can impart on its payload is primarily a function of its mass ratio and its exhaust velocity. This relationship is described by the rocket equation. Exhaust velocity is dependent on the propellant and engine used and closely related to specific impulse, the total energy delivered to the rocket vehicle per unit of propellant mass consumed. Mass ratio can also be affected by the choice of a given propellant.
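The rocket equation mentioned above can be written as Δv = v_e · ln(m0/m1), where m0/m1 is the mass ratio. The sketch below evaluates it for a made-up stage; the masses and exhaust velocity are assumptions chosen only to show how mass ratio and exhaust velocity trade off.

```python
import math

def delta_v(exhaust_velocity_m_s: float, initial_mass_kg: float, final_mass_kg: float) -> float:
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / m1)."""
    return exhaust_velocity_m_s * math.log(initial_mass_kg / final_mass_kg)

# Illustrative stage: 100 t at ignition, 10 t after burnout (mass ratio of 10),
# with an assumed effective exhaust velocity of 3,500 m/s.
print(f"{delta_v(3500.0, 100_000.0, 10_000.0):.0f} m/s")  # ≈ 8,059 m/s
```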
Rocket stages that fly through the atmosphere usually use lower performing, high molecular mass, high-density propellants due to the smaller and lighter tankage required. Upper stages, which mostly or only operate in the vacuum of space, tend to use the high energy, high performance, low density liquid hydrogen fuel.
Solid chemical propellants
Solid propellants come in two main types. "Composites" are composed mostly of a mixture of granules of solid oxidizer, such as ammonium nitrate, ammonium dinitramide, ammonium perchlorate, or potassium nitrate in a polymer binding agent, with flakes or powders of energetic fuel compounds (examples: RDX, HMX, aluminium, beryllium). Plasticizers, stabilizers, and/or burn rate modifiers (iron oxide, copper oxide) can also be added.
Single-, double-, or triple-base propellants (named for the number of primary ingredients) are homogeneous mixtures of one to three primary ingredients. These primary ingredients must include a fuel and an oxidizer, and often also include binders and plasticizers. All components are macroscopically indistinguishable and are often blended as liquids and cured in a single batch. Ingredients can often have multiple roles. For example, RDX is both a fuel and an oxidizer, while nitrocellulose is a fuel, an oxidizer, and a structural polymer.
Further complicating categorization, there are many propellants that contain elements of double-base and composite propellants, which often contain some amount of energetic additives homogeneously mixed into the binder. In the case of gunpowder (a pressed composite without a polymeric binder) the fuel is charcoal, the oxidizer is potassium nitrate, and sulphur serves as a reaction catalyst while also being consumed to form a variety of reaction products such as potassium sulfide.
The newest nitramine solid propellants based on CL-20 (HNIW) can match the performance of NTO/UDMH storable liquid propellants, but cannot be throttled or restarted.
Solid propellant rockets are much easier to store and handle than liquid propellant rockets. High propellant density makes for compact size as well. These features plus simplicity and low cost make solid propellant rockets ideal for military and space applications.
Their simplicity also makes solid rockets a good choice whenever large amounts of thrust are needed and the cost is an issue. The Space Shuttle and many other orbital launch vehicles use solid-fueled rockets in their boost stages (solid rocket boosters) for this reason.
Solid fuel rockets have lower specific impulse, a measure of propellant efficiency, than liquid fuel rockets. As a result, the overall performance of solid upper stages is lower than that of liquid stages, even though the solid mass ratios are usually in the .91 to .93 range, as good as or better than most liquid propellant upper stages. The high mass ratios possible with these unsegmented solid upper stages are a result of high propellant density and very high strength-to-weight ratio filament-wound motor casings.
A drawback to solid rockets is that they cannot be throttled in real time, although a programmed thrust schedule can be created by adjusting the interior propellant geometry. Solid rockets can be vented to extinguish combustion or reverse thrust as a means of controlling range or accommodating warhead separation. Casting large amounts of propellant requires consistency and repeatability to avoid cracks and voids in the completed motor. The blending and casting take place under computer control in a vacuum, and the propellant blend is spread thin and scanned to assure no large gas bubbles are introduced into the motor.
Solid fuel rockets are intolerant to cracks and voids and require post-processing such as X-ray scans to identify faults. The combustion process is dependent on the surface area of the fuel. Voids and cracks represent local increases in burning surface area, increasing the local temperature, which increases the local rate of combustion. This positive feedback loop can easily lead to catastrophic failure of the case or nozzle.
During the 1950s and 60s, researchers in the United States developed ammonium perchlorate composite propellant (APCP). This mixture is typically 69-70% finely ground ammonium perchlorate (an oxidizer), combined with 16-20% fine aluminium powder (a fuel), held together in a base of 11-14% polybutadiene acrylonitrile (PBAN) or Hydroxyl-terminated polybutadiene (polybutadiene rubber fuel). The mixture is formed as a thickened liquid and then cast into the correct shape and cured into a firm but flexible load-bearing solid. Historically, the tally of APCP solid propellants is relatively small. The military, however, uses a wide variety of different types of solid propellants, some of which exceed the performance of APCP. A comparison of the highest specific impulses achieved with the various solid and liquid propellant combinations used in current launch vehicles is given in the article on solid-fuel rockets.
In the 1970s and 1980s, the U.S. switched entirely to solid-fueled ICBMs: the LGM-30 Minuteman and LGM-118A Peacekeeper (MX). In the 1980s and 1990s, the USSR/Russia also deployed solid-fueled ICBMs (RT-23, RT-2PM, and RT-2UTTH), but retains two liquid-fueled ICBMs (R-36 and UR-100N). All solid-fueled ICBMs on both sides had three initial solid stages, and those with multiple independently targeted warheads had a precision maneuverable bus used to fine-tune the trajectory of the re-entry vehicles.
Liquid chemical propellants
Liquid-fueled rockets have higher specific impulse than solid rockets and are capable of being throttled, shut down, and restarted. Only the combustion chamber of a liquid-fueled rocket needs to withstand high combustion pressures and temperatures. Cooling can be done regeneratively with the liquid propellant. On vehicles employing turbopumps, the propellant tanks are at a lower pressure than the combustion chamber, decreasing tank mass. For these reasons, most orbital launch vehicles use liquid propellants.
The primary specific impulse advantage of liquid propellants is due to the availability of high-performance oxidizers. Several practical liquid oxidizers (liquid oxygen, dinitrogen tetroxide, and hydrogen peroxide) are available which have better specific impulse than the ammonium perchlorate used in most solid rockets when paired with suitable fuels.
The main difficulties with liquid propellants are also with the oxidizers. Storable oxidizers, such as nitric acid and nitrogen tetroxide, tend to be extremely toxic and highly reactive, while cryogenic propellants by definition must be stored at low temperature and can also have reactivity/toxicity issues. Liquid oxygen (LOX) is the only flown cryogenic oxidizer. Others such as FLOX, a fluorine/LOX mix, have never been flown due to instability, toxicity, and explosivity. Several other unstable, energetic, and toxic oxidizers have been proposed: liquid ozone (O3), ClF3, and ClF5.
Liquid-fueled rockets require potentially troublesome valves, seals, and turbopumps, which increase the cost of the launch vehicle. Turbopumps are particularly troublesome due to high performance requirements.
Current cryogenic types
- Liquid oxygen (LOX) and highly refined kerosene (RP-1). Used for the first stages of the Atlas V, Falcon 9, Falcon Heavy, Soyuz, Zenit, and developmental rockets like Angara and Long March 6. This combination is widely regarded as the most practical for boosters that lift off at ground level and therefore must operate at full atmospheric pressure.
- LOX and liquid hydrogen. Used on the Centaur upper stage, the Delta IV rocket, the H-IIA rocket, most stages of the European Ariane 5, and the Space Launch System core and upper stages.
- LOX and liquid methane (from Liquefied natural gas) are planned for use on several rockets in development, including Vulcan, New Glenn, and SpaceX Starship.
Current storable types
- Dinitrogen tetroxide (N2O4) and hydrazine (N2H4), MMH, or UDMH. Used in military, orbital, and deep space rockets because both liquids are storable for long periods at reasonable temperatures and pressures. N2O4/UDMH is the main fuel for the Proton rocket, older Long March rockets (LM 1-4), PSLV, Fregat, and Briz-M upper stages. This combination is hypergolic, making for attractively simple ignition sequences. The major inconvenience is that these propellants are highly toxic and require careful handling.
- Monopropellants such as hydrogen peroxide, hydrazine, and nitrous oxide are primarily used for attitude control and spacecraft station-keeping where their long-term storability, simplicity of use, and ability to provide the tiny impulses needed outweighs their lower specific impulse as compared to bipropellants. Hydrogen peroxide is also used to drive the turbopumps on the first stage of the Soyuz launch vehicle.
The theoretical exhaust velocity of a given propellant chemistry scales with the energy released per unit of propellant mass (specific energy). In chemical rockets, unburned fuel or oxidizer represents the loss of chemical potential energy, which reduces the specific energy. Even so, most rockets run fuel-rich mixtures, which result in lower theoretical exhaust velocities.
However, fuel-rich mixtures also have lower molecular weight exhaust species. The nozzle of the rocket converts the thermal energy of the propellants into directed kinetic energy. This conversion happens in the time it takes for the propellants to flow from the combustion chamber through the engine throat and out the nozzle, usually on the order of one millisecond. Molecules store thermal energy in rotation, vibration, and translation, of which only the latter can easily be used to add energy to the rocket stage. Molecules with fewer atoms (like CO and H2) have fewer available vibrational and rotational modes than molecules with more atoms (like CO2 and H2O). Consequently, smaller molecules store less vibrational and rotational energy for a given amount of heat input, resulting in more translation energy being available to be converted to kinetic energy. The resulting improvement in nozzle efficiency is large enough that real rocket engines improve their actual exhaust velocity by running rich mixtures with somewhat lower theoretical exhaust velocities.
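One hedged way to see the molecular-weight effect described above: for an idealized, fully expanded nozzle, exhaust velocity scales roughly as the square root of chamber temperature divided by the mean molar mass of the exhaust. The sketch below compares two assumed exhaust compositions at the same chamber temperature; the temperature, molar masses, and ratio of specific heats are round illustrative numbers, not measurements.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def ideal_exhaust_velocity(chamber_temp_K: float, molar_mass_kg_mol: float, gamma: float = 1.2) -> float:
    """Idealized limit v_e ~ sqrt(2*gamma/(gamma-1) * R*T / M), assuming full expansion."""
    return math.sqrt(2 * gamma / (gamma - 1) * R * chamber_temp_K / molar_mass_kg_mol)

heavy = ideal_exhaust_velocity(3500.0, 0.030)  # ~30 g/mol mean exhaust (CO2/H2O heavy), assumed
light = ideal_exhaust_velocity(3500.0, 0.012)  # ~12 g/mol mean exhaust (H2/H2O rich), assumed
print(f"{heavy:.0f} m/s vs {light:.0f} m/s")   # the lighter exhaust gives the higher velocity
```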
The effect of exhaust molecular weight on nozzle efficiency is most important for nozzles operating near sea level. High expansion rockets operating in a vacuum see a much smaller effect, and so are run less rich.
LOX/hydrocarbon rockets are run slightly rich (O/F mass ratio of 3 rather than stoichiometric of 3.4 to 4) because the energy release per unit mass drops off quickly as the mixture ratio deviates from stoichiometric. LOX/LH2 rockets are run very rich (O/F mass ratio of 4 rather than stoichiometric 8) because hydrogen is so light that the energy release per unit mass of propellant drops very slowly with extra hydrogen. In fact, LOX/LH2 rockets are generally limited in how rich they run by the performance penalty of the mass of the extra hydrogen tankage instead of the underlying chemistry.
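To make those mixture-ratio figures concrete, the small sketch below converts an O/F mass ratio into oxidizer and fuel mass fractions, using the ratios quoted in the text for LOX/LH2 (run at about 4 versus a stoichiometric 8). It is arithmetic only, not engine data.

```python
def mass_fractions(o_f_ratio: float) -> tuple[float, float]:
    """Return (oxidizer, fuel) fractions of total propellant mass for a given O/F mass ratio."""
    fuel = 1.0 / (1.0 + o_f_ratio)
    return 1.0 - fuel, fuel

for label, o_f in [("LOX/LH2 as run (per text)", 4.0), ("LOX/LH2 stoichiometric", 8.0)]:
    ox, fuel = mass_fractions(o_f)
    print(f"{label}: oxidizer {ox:.0%}, fuel {fuel:.0%}")
# At O/F = 4 hydrogen is 20% of the propellant mass; at O/F = 8 it is only about 11%,
# which is why richer mixtures need noticeably larger hydrogen tankage.
```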
Another reason for running rich is that off-stoichiometric mixtures burn cooler than stoichiometric mixtures, which makes engine cooling easier. Because fuel-rich combustion products are less chemically reactive (corrosive) than oxidizer-rich combustion products, a vast majority of rocket engines are designed to run fuel-rich. At least one exception exists: the Russian RD-180 preburner, which burns LOX and RP-1 at a ratio of 2.72.
Additionally, mixture ratios can be dynamic during launch. This can be exploited with designs that adjust the oxidizer to fuel ratio (along with overall thrust) throughout a flight to maximize overall system performance. For instance, during lift-off thrust is more valuable than specific impulse, and careful adjustment of the O/F ratio may allow higher thrust levels. Once the rocket is away from the launchpad, the engine O/F ratio can be tuned for higher efficiency.
Although liquid hydrogen gives a high Isp, its low density is a disadvantage: hydrogen occupies about 7 times more volume per kilogram than dense fuels such as kerosene. The fuel tankage, plumbing, and pump must be correspondingly larger. This increases the vehicle's dry mass, reducing performance. Liquid hydrogen is also relatively expensive to produce and store, and causes difficulties with design, manufacture, and operation of the vehicle. However, liquid hydrogen is extremely well suited to upper stage use where Isp is at a premium and thrust to weight ratios are less relevant.
Dense propellant launch vehicles have a higher takeoff mass due to lower Isp, but can more easily develop high takeoff thrusts due to the reduced volume of engine components. This means that vehicles with dense-fueled booster stages reach orbit earlier, minimizing losses due to gravity drag and reducing the effective delta-v requirement.
The proposed tripropellant rocket uses mainly dense fuel while at low altitude and switches across to hydrogen at higher altitude. Studies in the 1960s proposed single stage to orbit vehicles using this technique. The Space Shuttle approximated this by using dense solid rocket boosters for the majority of the thrust during the first 120 seconds. The main engines burned a fuel-rich hydrogen and oxygen mixture, operating continuously throughout the launch but providing the majority of thrust at higher altitudes after SRB burnout.
Other chemical propellants
Hybrid propellants: a storable oxidizer used with a solid fuel, which retains most virtues of both liquids (high ISP) and solids (simplicity).
A hybrid-propellant rocket usually has a solid fuel and a liquid or gaseous oxidizer. The fluid oxidizer can make it possible to throttle and restart the motor just like a liquid-fueled rocket. Hybrid rockets can also be environmentally safer than solid rockets, since some high-performance solid-phase oxidizers contain chlorine (specifically composites with ammonium perchlorate), versus the more benign liquid oxygen or nitrous oxide often used in hybrids. This is only true for specific hybrid systems; there have been hybrids which have used chlorine or fluorine compounds as oxidizers, and hazardous materials such as beryllium compounds mixed into the solid fuel grain. Because just one constituent is a fluid, hybrids can be simpler than liquid rockets, depending on the motive force used to transport the fluid into the combustion chamber. Fewer fluids typically mean fewer and smaller piping systems, valves and pumps (if utilized).
Hybrid motors suffer two major drawbacks. The first, shared with solid rocket motors, is that the casing around the fuel grain must be built to withstand full combustion pressure and often extreme temperatures as well. However, modern composite structures handle this problem well, and when used with nitrous oxide and a solid rubber propellant (HTPB), a relatively small percentage of fuel is needed anyway, so the combustion chamber is not especially large.
The primary remaining difficulty with hybrids is with mixing the propellants during the combustion process. In solid propellants, the oxidizer and fuel are mixed in a factory under carefully controlled conditions. Liquid propellants are generally mixed by the injector at the top of the combustion chamber, which directs many small swift-moving streams of fuel and oxidizer into one another. Liquid-fueled rocket injector design has been studied at great length and still resists reliable performance prediction. In a hybrid motor, the mixing happens at the melting or evaporating surface of the fuel. The mixing is not a well-controlled process and, generally, quite a lot of propellant is left unburned, which limits the efficiency of the motor. The combustion rate of the fuel is largely determined by the oxidizer flux and exposed fuel surface area. This combustion rate is not usually sufficient for high power operations such as boost stages unless the surface area or oxidizer flux is high. Too high an oxidizer flux can lead to flooding and loss of flame holding, which locally extinguishes the combustion. Surface area can be increased, typically by longer grains or multiple ports, but this can increase combustion chamber size, reduce grain strength and/or reduce volumetric loading. Additionally, as the burn continues, the hole down the center of the grain (the ‘port’) widens and the mixture ratio tends to become more oxidizer-rich.
There has been much less development of hybrid motors than solid and liquid motors. For military use, ease of handling and maintenance have driven the use of solid rockets. For orbital work, liquid fuels are more efficient than hybrids and most development has concentrated there. There has recently been an increase in hybrid motor development for nonmilitary suborbital work:
- Several universities have recently experimented with hybrid rockets. Brigham Young University, the University of Utah and Utah State University launched a student-designed rocket called Unity IV in 1995 which burned the solid fuel hydroxy-terminated polybutadiene (HTPB) with an oxidizer of gaseous oxygen, and in 2003 launched a larger version which burned HTPB with nitrous oxide. Stanford University researches nitrous-oxide/paraffin wax hybrid motors. UCLA has launched hybrid rockets through an undergraduate student group since 2009 using HTPB.
- The Rochester Institute of Technology was building an HTPB hybrid rocket to launch small payloads into space and to several near-Earth objects. Its first launch was in the summer of 2007.
- Scaled Composites SpaceShipOne, the first private manned spacecraft, was powered by a hybrid rocket burning HTPB with nitrous oxide: RocketMotorOne. The hybrid rocket engine was manufactured by SpaceDev. SpaceDev partially based its motors on experimental data collected from the testing of AMROC's (American Rocket Company) motors at NASA's Stennis Space Center's E1 test stand.
Some rocket designs impart energy to their propellants with external energy sources. For example, water rockets use a compressed gas, typically air, to force the water reaction mass out of the rocket.
Ion thrusters ionize a neutral gas and create thrust by accelerating the ions (or the plasma) by electric and/or magnetic fields.
Thermal rockets use inert propellants of low molecular weight that are chemically compatible with the heating mechanism at high temperatures. Solar thermal rockets and nuclear thermal rockets typically propose to use liquid hydrogen for a specific impulse of around 600–900 seconds, or in some cases water that is exhausted as steam for a specific impulse of about 190 seconds. Nuclear thermal rockets use the heat of nuclear fission to add energy to the propellant. Some designs separate the nuclear fuel and working fluid, minimizing the potential for radioactive contamination, but nuclear fuel loss was a persistent problem during real-world testing programs. Solar thermal rockets use concentrated sunlight to heat a propellant, rather than using a nuclear reactor.
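Specific impulse in seconds and effective exhaust velocity are linked by standard gravity, v_e = Isp × g0, so the Isp figures quoted above translate directly into exhaust velocities, as in this small sketch (the Isp values are simply those mentioned in the paragraph).

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity_from_isp(isp_seconds: float) -> float:
    """Effective exhaust velocity v_e = Isp * g0."""
    return isp_seconds * G0

for isp in (190, 600, 900):  # steam exhaust, and the nuclear/solar thermal range from the text
    print(f"Isp {isp:>3} s -> v_e ≈ {exhaust_velocity_from_isp(isp):.0f} m/s")
```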
For low performance applications, such as attitude control jets, compressed gases such as nitrogen have been employed. Energy is stored in the pressure of the inert gas. However, due to the low density of all practical gases and high mass of the pressure vessel required to contain it, compressed gases see little current use.
- ALICE (propellant)
- Timeline of hydrogen technologies
- Comparison: Aviation fuel
- Nuclear propulsion
- Ion thruster
- Crawford burner
The contribution of whales to reducing greenhouse gases takes a surprise route, as is often the case in nature: Whale excrement is the preferred food of phytoplankton. These plant micro-organisms suspended in the seas and oceans are major consumers of the minerals in faecal matter, which is rich in iron and nitrogen.
Capturing CO2 by eating whale waste
Phytoplankton make a decisive contribution to the natural equilibrium of our world:
- They produce over half of the planet’s oxygen.
- When the micro-organisms that eat whale waste grow, they capture about 37 billion tonnes of CO2 every year, an estimated 40% of all the CO2 produced.
- Phytoplankton capture as much CO2 as 1.7 trillion trees, the equivalent of four times the number in the Amazonian forest.
The diet of whales plays as decisive a part as that of phytoplankton in the fight against global warming. These massive aquatic mammals consume enormous amounts of plankton and krill and, as a result, accumulate carbon in their bodies throughout their lives. The larger the whale, the more carbon it stores.
An adult blue whale eats the equivalent of 424 kg of carbon each year on average. When a whale dies, it sinks and takes the carbon it stored with it. The roughly 33 tonnes of CO2 that the average whale stores remains trapped in the depths for centuries, a phenomenon known as blue carbon sequestration.
One threat: Fishing
This makes the preservation of whales an even more vital cause. Although commercial whaling has been prohibited for nearly 40 years, subsistence hunting for food and hunting for scientific research are still allowed. Partly as a result, whale populations have not recovered.
It is estimated that ocean fisheries have released at least 0.73 billion metric tonnes of CO2 into the atmosphere since 1950. Globally, 43.5% of the blue carbon extracted by fisheries in the high seas comes from areas that would be economically unprofitable without subsidies. Limiting blue carbon extraction by fisheries, particularly in unprofitable areas, would cut CO2 emissions: less fuel would be burned, and a natural carbon pump would be reactivated as fish stocks rebuild and carcass deadfall increases.
Another threat: Shipping
Collisions with cargo ships also threaten whales. Nearly 100 whales are killed every year along the west coast of the US. The solution would be to have the ships take different routes. However, this is not easy to do because ship-owners are protective of the maritime routes their vessels use.
Protecting whales is an integral part of the blue economy. We believe it is essential at a time when the need to fight greenhouse gases and global warming is greater than ever. As incredible carbon sinks, whales are animals that must be protected at all costs for the well-being of the entire planet.
Investing in the blue economy
As a global sustainability theme, investing in the blue economy is fully aligned with BNP Paribas Asset Management’s sustainable investment priorities. These are focused on the energy transition, environmental protection and equality & inclusive growth.
We believe investing in the blue economy will help advance the fight against climate change and ensure that the oceans can continue to function as a sink for carbon emissions from human activity. Such investments are suited for investors with a long-term perspective, an interest in contributing to a greener future and making a positive impact.
In our view, finance can play a major role in pushing companies linked to the blue economy to improve their practices. Those investors who consider the preservation of marine resources as an absolute priority are set to see investment opportunities in companies that develop marine and ocean projects opening up as awareness of the blue economy’s appeal grows.
Source: Let more big fish sink: Fisheries prevent blue carbon sequestration—half in unprofitable areas; https://advances.sciencemag.org/content/6/44/eabb4848/tab-figures-data
- Blue economy – The ocean… land of innovation
- A deep dive into the ‘blue economy’ with a dedicated index
- The great blue economy wave
- How ocean states can benefit from a ‘blue recovery’ from COVID-19
- Water: a pervasive resource and a portfolio staple
Read more about sustainable investing
- Positioning for a green recovery from COVID-19
- Crisis and resilience – Navigating a sustainable recovery
Any views expressed here are those of the author as of the date of publication, are based on available information, and are subject to change without notice. Individual portfolio management teams may hold different views and may take different investment decisions for different clients. This document does not constitute investment advice.
The value of investments and the income they generate may go down as well as up and it is possible that investors will not recover their initial outlay. Past performance is no guarantee for future returns.
Investing in emerging markets, or specialised or restricted sectors is likely to be subject to a higher-than-average volatility due to a high degree of concentration, greater uncertainty because less information is available, there is less liquidity or due to greater sensitivity to changes in market conditions (social, political and economic conditions).
Some emerging markets offer less security than the majority of international developed markets. For this reason, services for portfolio transactions, liquidation and conservation on behalf of funds invested in emerging markets may carry greater risk.
An essential skill for children to learn is being able to find the percent of a whole number. Finding these quantities can be broken down to a few simple steps and allows children to tackle a variety of percentage-related problems once this concept has been mastered. Take a look at the following example:
6 is what percent of 50?
In this problem, 6 is some percent of 50, making 6 a part of 50, the whole number. This question can be reworded generally as “The part is some percent of the whole.” We can transform this statement into an equation:
the part = some percent x the whole
Since a percent is a ratio whose denominator is 100, the equation can be revised:
the part = (the percent / 100) × the whole

To find the percent, we divide both sides of the equation by the whole:

the percent / 100 = the part / the whole

Now that we have our equation ready, we can plug in variables to solve for the percent. Taking the original example, our equation looks like this (note that we have replaced “percent” with x as a variable):

x / 100 = 6 / 50

After plugging in the numbers from the problem, cross multiply:

50 × x = 6 × 100, which gives 50x = 600

Remembering that we are solving for the percent (or in this case, x), we divide both sides of the equation by 50:

x = 600 / 50

After dividing by 50, we are left with our answer:

x = 12
Thus, 6 is 12% of 50.
Solving for percentages of a whole is a relatively simple task that only requires a handful of steps to accomplish. This problem is great for your children to practice their multiplication and division skills as they also learn basic concepts about solving for variables.
Another great aspect of this type of problem is that you can have your child solve for different parts of the equation to test their understanding of the concept. For example, instead of asking the question, “6 is what percent of 50?” you can try, “what is 20% of 30?” or “7 is 15% of what number?” In each case, your child will solve for different variables but will use the same formula. They will have a greater challenge but will be able to refer to previous problems for help, preventing these alternate problems from being too difficult to grasp. Be creative in how you present your kids with math problems to keep them both challenged and excited to learn.
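For readers who like to check answers with a computer, here is a small, purely illustrative Python sketch that solves all three versions of the question using the same part = (percent / 100) × whole relationship; the function names are made up for this example.

```python
def percent_of(part: float, whole: float) -> float:
    """6 is what percent of 50?  Solves part = (x / 100) * whole for x."""
    return 100.0 * part / whole

def part_from_percent(percent: float, whole: float) -> float:
    """What is 20% of 30?"""
    return (percent / 100.0) * whole

def whole_from_part(part: float, percent: float) -> float:
    """7 is 15% of what number?"""
    return part / (percent / 100.0)

print(percent_of(6, 50))                 # 12.0  -> 6 is 12% of 50
print(part_from_percent(20, 30))         # 6.0   -> 20% of 30 is 6
print(round(whole_from_part(7, 15), 2))  # 46.67 -> 7 is 15% of about 46.7
```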
Federalism is the theory or advocacy of federal principles for dividing powers between member units and common institutions. Unlike in a unitary state, sovereignty in federal political orders is non-centralized, often constitutionally, and divided between at least two levels so that units at each level have final authority and can be self-governing in some issue area. Citizens thus have political obligations to, or have their rights secured by, two authorities. The division of power between the member units and the center may vary; typically the center has powers regarding defense and foreign policy, but member units may also have international roles. The decision-making bodies of member units may also participate in central decision-making bodies. Much recent philosophical attention is spurred by renewed political interest in federalism and backlashes against particular instances, coupled with empirical findings concerning the requisite and legitimate basis for stability and trust among citizens in federal political orders. Philosophical contributions have addressed the dilemmas and opportunities facing Canada, Australia, Europe, Russia, Iraq, Nepal and Nigeria, to mention just a few areas where federal arrangements are seen as interesting solutions to accommodate differences among populations divided by ethnic or cultural cleavages yet seeking a common, often democratic, political order.
- 1. Taxonomy
- 2. History of Federalism in Western Thought
- 3. Reasons for Federalism
- 4. Further Philosophical Issues
- Academic Tools
- Other Internet Resources
- Related Entries
Much valuable scholarship explicates the central terms ‘federalism’, ‘federation’ and ‘federal systems’ (cf. Wheare 1964, King 1982, Elazar 1987, Elazar 1987a, Riker 1993, Watts 1998).
A federal political order is here taken to be “the genus of political organization that is marked by the combination of shared rule and self-rule” (Watts 1998, 120). Federalism is the theory or advocacy of such an order, including principles for dividing final authority between member units and the common institutions.
A federation is one species of such a federal order; other species are unions, confederations, leagues and decentralised unions—and hybrids such as the present European Union (Elazar 1987, Watts 1998). A federation in this sense involves a territorial division of power between constituent units—sometimes called ‘provinces’, ‘cantons’, possibly ‘cities’, or confusingly ‘states’—and a common government. This division of power is typically entrenched in a constitution which neither a member unit nor the common government can alter unilaterally. The member unit and the common government both have direct effect on the citizenry—the common government operates “on the individual citizens composing the nation” (Federalist Paper 39)—and the authorities of both are directly elected (Watts 1998, 121). In comparison, decentralized authority in unitary states can typically be revoked by the central legislature at will. Many multilevel forms of governance may also be revised by units at one level without consent by bodies at other levels. Such entrenchments notwithstanding, some centralization often occurs owing to the constitutional interpretations by a federal level court in charge of settling conflicts regarding the scopes of final legislative and/or judicial authority.
In contrast, ‘confederation’ has come to mean a political order with a weaker center than a federation, often dependent on the constituent units (Watts 1998, 121). Typically, in a confederation a) member units may legally exit, b) the center only exercises authority delegated by member units, c) the center is subject to member unit veto on many issues, d) center decisions bind member units but not citizens directly, e) the center lacks an independent fiscal or electoral base, and/or f) the member units do not cede authority permanently to the center. Confederations are often based on agreements for specific tasks, and the common government may be completely exercised by delegates of the member unit governments. Thus many would count as confederations the North American states during 1776–1787, Switzerland 1291–1847, and the present European Union—though it has several elements typical of federations.
In symmetric (con)federations the member units have the same bundles of powers, while in asymmetric (con)federations such as Russia, Canada, the European Union, Spain, or India the bundles may be different among member units; some member units may for instance have special rights regarding language or culture. Some asymmetric arrangements involve one smaller state and a larger, where the smaller partakes in governing the larger while retaining sovereignty on some issues (Elazar 1987, Watts 1998).
If the decisions made centrally do not involve member units at all, we may speak of separate (split or compact) federalism. The USA is often given as example, since the two Senators from each state are not representing or selected by member unit (i.e. State) authorities but by electors voted directly by citizens—though this is by member unit decision (U.S. Constitution Art. II Section 1; cf. Dahl 2001). Federations can involve member units in central decision-making in at least two different ways in various forms of interlocking (or cooperative) federalism. Member unit representatives can participate within central bodies—in cabinets or legislatures—(collective agency compositional arrangement); in addition they often constitute one central body that interacts with other such bodies, for instance where member unit government representatives form an Upper House with power to veto or postpone decisions by majority or qualified majority vote (divided agency/relational arrangements).
Several authors identify two quite distinct processes that lead to a federal political order (Friedrich 1968, Buchanan 1995, Stepan 1999 and others). Independent states may aggregate by ceding or pooling sovereign powers in certain domains for the sake of goods otherwise unattainable, such as security or economic prosperity. Such coming together federal political orders are typically arranged to constrain the center and prevent majorities from overriding a member unit. Examples include the present USA, Canada, Switzerland, and Australia. Holding together federal political orders develop from unitary states, as governments devolve authority to alleviate threats of unrest or secession by territorially clustered minorities. Such federal political orders often grant some member units particular domains of sovereignty e.g. over language and cultural rights in an asymmetric federation, while maintaining broad scope of action for the central government and majorities. Examples include India, Belgium and Spain.
In addition to territorially organized federal political orders, other interesting alternatives to unitary states occur when non-territorial member units are constituted by groups sharing ethnic, religious or other characteristics. These systems are sometimes referred to as ‘non-territorial’ federations. Karl Renner and Otto Bauer explored such arrangements for geographically dispersed cultural minorities, allowing them some cultural and “personal” autonomy without territorial self rule (Bauer 1903; Renner 1907; Bottomore and Goode 1978; cf. Tamir 1993 and Nimni 2005). Consociations consist of somewhat insulated groups in member units who in addition are represented in central institutions often governing by unanimity rather than by majority (Lijphart 1977).
A widespread interest among political philosophers in topics concerning the centralised nation state has fuelled attention to historical contributions on unitary sovereignty. However, we can also identify a steady stream of contributions to the philosophy of federalism, including by authors better known for their arguments concerning centralised power (cf. Karmis and Norman 2005 for such readings).
Several of the early contributors to federalist thought explored the rationale and weaknesses of centralised states as they emerged and developed in the 17th and 18th century. Johannes Althusius (1557–1630) is often regarded as the father of modern federalist thought. He argued in Politica Methodice Digesta (Althusius 1603) for autonomy of his city Emden, both against its Lutheran provincial Lord and against the Catholic Emperor. Althusius was strongly influenced by French Huguenots and Calvinism. As a permanent minority in several states, Calvinists developed a doctrine of resistance as the right and duty of “natural leaders” to resist tyranny. Orthodox Calvinists insisted on sovereignty in the social circles subordinate only to God's laws. The French Protestant Huguenots developed a theory of legitimacy further, presented 1579 by an author with the telling pseudonym “Junius Brutus” in Vindiciae Contra Tyrannos. The people, regarded as a corporate body in territorial hierarchical communities, has a God-granted right to resist rulers without rightful claim. Rejecting theocracy, Althusius developed a non-sectarian, non-religious contractualist political theory of federations that prohibited state intervention even for purposes of promoting the right faith. Accommodation of dissent and diversity prevailed over any interest in subordinating political powers to religion or vice versa.
Since humans are fundamentally dependent on others for the reliable provision of requirements of a comfortable and holy life, we require communities and associations that are both instrumentally and intrinsically important for supporting [subsidia] our needs. Families, guilds, cities, provinces, states and other associations owe their legitimacy and claims to political power to their various roles in enabling a holy life, rather than to individuals' interest in autonomy. Each association claims autonomy within its own sphere against intervention by other associations. Borrowing a term originally used for the alliance between God and men, Althusius holds that associations enter into secular agreements—pactum foederis—to live together in mutual benevolence.
Several early contributors explored what we may now regard as various species of federal political orders, partly with an eye to resolving inter-state conflicts.
Ludolph Hugo (ca. 1630–1704) was the first to distinguish confederations based on alliances, decentralized unitary states such as the Roman Empire, and federations, characterized by ‘double governments’ with territorial division of powers, in De Statu Regionum Germanie (1661) (cf. Elazar 1998; Riley 1976).
In The Spirit of Laws (1748) Charles de Secondat, Baron de Montesquieu (1689–1755) argued for confederal arrangements as combining the best of small and large political units, without the disadvantages of either. On the one hand they could provide the advantages of small states such as republican participation and liberty understood as non-domination—that is, security against abuse of power. At the same time confederal orders secure the benefits of larger states such as military security, without the risks of small and large states. A ‘confederate republic’ with separation of powers allows sufficient homogeneity and identification within sufficiently small member units. The member units in turn pool powers sufficient to secure external security, reserving the right to secede (Book 9, 1). Member units serve as checks on each other, since other member units may intervene to quell insurrection and power abuse in one member unit. These themes reoccur in later contributions, up to and including discussions concerning the European Union (cf. Levy 2004, 2005, 2007).
David Hume (1711–1776) disagreed with Montesquieu that smaller size is better. Instead, “in a large democracy … there is compass and room enough to refine the democracy.” In “Idea of a Perfect Commonwealth” (Hume 1752) Hume recommended a federal arrangement for deliberation of laws involving both member unit and central legislatures. Member units enjoy several powers and partake in central decisions, but their laws and court judgments can always be overruled by the central bodies, hence it seems that Hume’s model is not federal as the term is used here. He held that such a numerous and geographically large system would do better than small cities in preventing decisions based on “intrigue, prejudice or passion” against the public interest.
Several 18th century peace plans for Europe recommended confederal arrangements. The 1713 Peace Plan of Abbé Charles de Saint-Pierre (1658–1743) would allow intervention in member units to quell rebellion and wars on non-members to force them to join an established confederation, and required unanimity for changes to the agreement.
Jean-Jacques Rousseau (1712–1778) presented and critiqued Saint-Pierre’s proposal, listing several conditions including that all major powers must be members, that the joint legislation must be binding, that the joint forces must be stronger than any single state, and that secession must be illegal. Again, unanimity was required for changes to the agreement.
Immanuel Kant (1724–1804) defended a confederation for peace in On Perpetual Peace (1796). His Second Definite Article of a Perpetual Peace holds that the right of nations shall be based on a pacific federation among free states rather than a peace treaty or an international state: “This federation does not aim to acquire any power like that of a state, but merely to preserve and secure the freedom of each state in itself, along with that of the other confederated states, although this does not mean that they need to submit to public laws and to a coercive power which enforces them, as do men in a state of nature.”
The discussions surrounding the U.S. Constitutional Convention of 1787 marks a clear development in federal thought. A central feature is that federations were seen as uniting not only member units as in confederations, but also the citizenry directly.
The Articles of Confederation of 1781 among the 13 American states fighting British rule had established a center too weak for law enforcement, defense and for securing interstate commerce. What has become known as the U.S. Constitutional Convention met May 25–September 17, 1787. It was explicitly restricted to revise the Articles, but ended up recommending more fundamental changes. The proposed constitution prompted widespread debate and arguments addressing the benefits and risks of federalism versus confederal arrangements, leading eventually to the Constitution that took effect in 1789.
The “Anti-federalists” were fearful of undue centralization. They worried that the powers of central authorities were not sufficiently constrained e.g., by a bill of rights (John DeWitt 1787, Richard Henry Lee) that was eventually ratified in 1791. They also feared that the center might gradually usurp the member units’ powers. Citing Montesquieu, another pseudonymous ‘Brutus’ doubted whether a republic of such geographical size with so many inhabitants with conflicting interests could avoid tyranny and would allow common deliberation and decision based on local knowledge (Brutus (Robert Yates?) 1787).
In The Federalist Papers, James Madison (1751–1836), Alexander Hamilton (1755–1804) and John Jay (1745–1829) argued vigorously for the suggested model of interlocking federal arrangements (Federalist 10, 45, 51, 62). Madison and Hamilton agreed with Hume that the risk of tyranny by passionate majorities was reduced in larger republics where member units of shared interest could and would check each other: “A rage for paper money, for an abolition of debts, for an equal division of property, or for any improper or wicked project, will be less likely to pervade the whole body of the Union than a particular member of it.” (Federalist 10). Splitting sovereignty between member unit and center would also protect individuals’ rights against abuse by authorities at either level, or so believed Hamilton, quoting Montesquieu at length to this effect (Federalist 9).
Noting the problems of allocating powers correctly, Madison supported placing some authority with member units since they would be best fit to address “local circumstances and lesser interests” otherwise neglected by the center (Federalist 37).
Madison and Hamilton urged centralized powers of defense and interstate commerce (Federalist 11, 23), and argued for the need to solve coordination and assurance problems of partial compliance, through two new means: Centralized enforcement and direct applicability of central decisions to individuals (Federalist 16, also noted by Tocqueville 1835–40). They were wary of granting member units veto power typical of confederal arrangements, since that would render the center weak and cause “tedious delays; continual negotiation and intrigue; contemptible compromises of the public good.” (Madison and Hamilton, Federalist 22; and cf. 20).
They were particularly concerned to address worries of undue centralization, arguing that such worries should be addressed not by constraining the extent of power in the relevant fields, such as defense, but instead by the composition of the central authority (Federalist 31). They also claimed that the people would maintain stronger “affection, esteem, and reverence” towards the member unit government owing to its public visibility in the day-to-day administration of criminal and civil justice (Federalist 17).
John Stuart Mill (1806–1873), in chapter 17 of Considerations on Representative Government (1861), recommended federations among “portions of mankind” not disposed to live under a common government, to prevent wars among themselves and protect against aggression. He would also allow the center sufficient powers so as to ensure all benefits of union, including powers to prevent frontier duties in order to facilitate commerce. He listed three necessary conditions for a federation: sufficient mutual sympathy “of race, language, religion, and, above all, of political institutions, as conducing most to a feeling of identity of political interest”; no member unit so powerful as to not require union for defense nor tempt unduly to secession; and rough equality of strength among member units to prevent internal domination by one or two. Mill also claimed among the benefits of federations that they reduce the number of weak states and hence reduce the temptation to aggression, ending wars and restrictions on commerce among member units; and that federations are less aggressive, only using their power defensively. Further benefits from federations, and from decentralized authority in general, might include learning from ‘experiments in living’.
Pierre-Joseph Proudhon (1809–1865), in Du Principe fédératif (1863) defended federalism as the best way to retain individual liberty within ‘natural’ communities such as families and guilds who enter pacts among themselves for necessary and specific purposes. The state is only one of several non-sovereign agents in charge of coordinating, without final authority.
While Proudhon was wary of centralisation, authors such as Harold Laski warned of ‘The Obsolescence of Federalism’ (1939). The important problems, such as those wrought by ‘giant capitalism,’ require more centralised responses than federal arrangements can muster.
Philosophical reflections on federalism were invigorated during and after the Second World War, for several reasons. Altiero Spinelli and Ernesto Rossi called for a European federal state in the Ventotene Manifesto, published 1944. They condemned totalitarian, centralised states and the never ending conflicts among them. Instead there should be enough shared control over military and economic power, yet “each State will retain the autonomy it needs for a plastic articulation and development of political life according to the particular characteristics of the various peoples.” Many explain and justify the European Union along precisely these lines, while others are more critical.
Hannah Arendt (1906–1975) traced both totalitarianism and industrialized mass murder to flaws in the sovereign nation-state model. Skeptical both of liberal internationalism and political realism, she instead urged a Republican federal model or ideal type wherein “the federated units mutually check and control their powers” (Arendt 1972).
The exit of colonial powers also left multi-ethnic states that required creative solutions to combine self rule and shared rule (Karmis and Norman 2005). In addition, globalisation has prompted not only integration and harmonisation, but also—partly in response—explorations of ways to still maintain some local self rule (Watts 1998).
Developments of the European Union and backlash against its particular forms of political and legal integration is one major cause of renewed attention to the philosophy of federalism. Recent philosophical discussions have addressed several issues, including centrally the reasons for federalism, and attention to the sources of stability and instability; the legitimate division of power between member unit and center; distributive justice, challenges to received democratic theory, and concerns about the politics of recognition.
Many arguments for federalism have traditionally been put in terms of promoting various forms of liberty in the form of non-domination, immunity or enhanced opportunity sets (Elazar 1987a). Many of the reasons offered in the literature for federal political orders appear to favor decentralization without requiring constitutional entrenchment of split authority. Two sets of arguments can be distinguished: arguments favoring federal orders compared with secession and completely independent sovereign states; and arguments supporting federal arrangements rather than a (further) centralized unitary state. They occur in different forms and from different starting points, in defense of ‘coming together’ federalism, and in favor of ‘holding together’ federalism.
There are several suggested reasons for a federal order rather than separate states or secession.
Federations may foster peace, in the senses of preventing wars and preventing fears of war, in several ways. States can join a (con)federation to become jointly powerful enough to dissuade external aggressors, and/or to prevent aggressive and preemptive wars among themselves. The European federalists Altiero Spinelli, Ernesto Rossi and Eugenio Colorni argued the latter in the 1941 Ventotene Manifesto: only a European federation could prevent war between totalitarian, aggressive states. Such arguments assume, of course, that the (con)federation will not become more aggressive than each state separately, a point Mill argued.
Federations can promote economic prosperity by removing internal barriers to trade, through economies of scale, by establishing and maintaining inter-member unit trade agreements, or by becoming a sufficiently large global player to affect international trade regimes (for the latter regarding the EU, cf. Keohane and Nye 2001, 260).
Federal arrangements may protect individuals against political authorities by constraining state sovereignty, placing some powers with the center. By entrusting the center with authority to intervene in member units, the federal arrangements can protect minorities’ human rights against member unit authorities (Federalist, Watts 1999). Such arguments assume, of course, that abuse by the center is less likely.
Federations can facilitate some objectives of sovereign states, such as credible commitments, certain kinds of coordination, and control over externalities, by transferring some powers to a common body. Since cooperation in some areas can ‘spill over’ and create demands for further coordination in other sectors, federations often exhibit creeping centralization.
Federal arrangements may enhance the political influence of formerly sovereign governments, both by facilitating coordination, and—particularly for small states—by giving these member units influence or even veto over policy making, rather than remaining mere policy takers.
Federal political orders can be preferred as the appropriate form of nested organizations, for instance in ‘organic’ conceptions of the political and social order. The federation may promote cooperation, justice or other values among and within member units as well as among and within their constituent units, for instance by monitoring, legislating, enforcing or funding agreements, human rights, immunity from interference, or development. Starting with the family, each larger unit is responsible for facilitating the flourishing of its member units and for securing common goods beyond their reach without a common authority. Such arguments have been offered by such otherwise divergent authors as Althusius, the Catholic traditions of subsidiarity as expressed by popes Leo XIII (1891) and Pius XI (1931), and Proudhon.
There are several reasons for preferring federal orders over a unitary state:
Federal arrangements may protect against central authorities by securing immunity and non-domination for minority groups or nations. Constitutional allocation of powers to a member unit protects individuals from the center, while interlocking arrangements provide influence on central decisions via member unit bodies (Madison, Hume, Goodin 1996). Member units may thus check central authorities and prevent undue action contrary to the will of minorities: “A great democracy must either sacrifice self-government to unity or preserve it by federalism. The coexistence of several nations under the same State is a test, as well as the best security of its freedom … The combination of different nations in one State is as necessary a condition of civilized life as the combination of men in society” (Acton 1907, 277).
More specifically, federal arrangements can accommodate minority nations who aspire to self determination and the preservation of their culture, language or religion. Such autonomy and immunity arrangements are clearly preferable to the political conflicts that might result from such groups' attempts at secession. Central authorities may respond with human rights abuses, civil wars or ethnic cleansing to prevent such secessionist movements.
Federal orders may increase the opportunities for citizen participation in public decision-making, through deliberation and office-holding in both member unit and central bodies, which fosters character formation through political participation among more citizens (Mill 1861, ch. 15).
Federal orders may facilitate learning by fostering alternative solutions to similar problems and sharing lessons from such a laboratory of ‘experiments in living' (Rose-Ackerman 1980).
Federations may facilitate efficient preference maximization more generally, as formalized in the literature on economic and fiscal federalism—though many such arguments support decentralization rather than federalism proper. Research on ‘fiscal federalism’ addresses the optimal allocation of authority, typically recommending central redistribution but local provision of public goods. Federal arrangements may allow more optimal matching of the authority to create public goods to specific affected subsets of the populations. If individuals' preferences vary systematically by territory according to external or internal parameters such as geography or shared tastes and values, federal—or decentralized—arrangements that allow local variation may be well suited for several reasons. Local decisions prevent overload of centralised decision-making, and local decision-makers may also have a better grasp of affected preferences and alternatives, making for better service than would be provided by a central government that tends to ignore local preference variations (Smith 1776, 680). Granting powers to population subsets that share preferences regarding public services may also increase efficiency by allowing these subsets to create such ‘internalities’ and ‘club goods’ at costs borne only by them (Musgrave 1959, 179–80, Olson 1969, Oates’ 1972 ‘Decentralization Theorem’).
Federal arrangements can also shelter territorially based groups with preferences that diverge from the majority population, such as ethnic or cultural minorities, so that they are not subject to majority decisions severely or systematically contrary to their preferences. Non-unitary arrangements may thus minimize coercion and be responsive to as many citizens as possible (Mill 1861 ch. 15, Elazar 1968; Lijphart 1999). Such considerations of economic efficiency and majority decisions may favor federal solutions, with “only indivisibilities, economies of scale, externalities, and strategic requirements … acceptable as efficiency arguments in favor of allocating powers to higher levels of government” (Padoa-Schioppa 1995, 155).
Federal arrangements may not only protect existing clusters of individuals with shared values or preferences, but may also promote mobility and hence territorial clustering of individuals with similar preferences. Member unit autonomy to experiment may foster competition for individuals who are free to move where their preferences are best met. Such mobility towards member units with like-minded individuals may add to the benefits of local autonomy over the provision of public services—absent economies of scale and externalities (Tiebout 1956, Buchanan 2001)—though the result may be that those with costly needs and who are less mobile are left worse off.
Much recent attention has focused on philosophical issues arising from empirical findings concerning federalism, and has been spurred by quite different dilemmas facing—inter alia—Canada, Australia, Nepal, several European states and the European Union.
Federal political orders require attention to several constitutional and other institutional issues, some of which raise peculiar and intriguing issues of normative political theory (Watts 1998; Norman 2006).
Composition: How to determine the boundaries of the member units, e.g., along geographical, ethnic or cultural lines; whether establishment of new member units from old should require constitutional changes, whether to allow secession and if so how, etc.
Distribution of Power: The allocation of legislative, executive, judicial and constitution-amending power between the member units and the central institutions. In asymmetric arrangements some of these may differ among member units.
Power Sharing: The form of influence by member units in central decision-making bodies within the interlocking political systems.
These tasks must be resolved taking due account of several important considerations noted below.
As political orders go, federal political arrangements pose peculiar problems concerning stability and trust. Federations tend to drift toward disintegration in the form of secession, or toward centralization in the direction of a unitary state.
Such instability should come as no surprise given the tensions typically giving rise to federal political orders in the first place, such as tensions between majority and minority national communities in multinational federations. Federal political orders are therefore often marked by a high level of ‘constitutional politics’. The details of their constitutions and other institutions may affect these conflicts and their outcomes in drastic ways. Political parties often disagree on constitutional issues regarding the appropriate areas of member unit autonomy, the forms of cooperation and how to prevent fragmentation. Since states that federalize in order to hold together are often precisely those already facing deep internal cleavages, such sampling bias makes it difficult to assess claims that federal responses perpetuate cleavages and fuel rather than quell secessionist movements. Some nevertheless argue that democratic, interlocking federations alleviate such tendencies (Simeon 1998, Simeon and Conway 2001, Linz 1997; cf. McKay 2001, Filippov, Ordeshook and Shvetsova 2004).
Many authors note that the challenges of stability must be addressed not only by institutional design, but also by ensuring that citizens have an ‘overarching loyalty’ to the federation as a whole in addition to loyalty toward their own member unit (Franck 1968, Linz 1997). The legitimate bases, content and division of such a public dual allegiance are central topics of political philosophies of federalism (Norman 1995a, Choudhry 2001). Some accept (limited) appeals to considerations such as shared history, practices, culture, or ethnicity for delineating member units and placing certain powers with them, even if such ‘communitarian’ features are regarded as more problematic bases for (unitary) political orders (Kymlicka 1995, Habermas 1996, 500). The appropriate consideration that voters and their member unit politicians should give to the interests of others in the federation in interlocking arrangements must be clarified if the notion of citizen of two commonwealths is to be coherent and durable.
Another and related central philosophical topic is the critical assessment of alleged grounds for federal arrangements in general, and the division of power between member units and central bodies in particular, indicated in the preceding sections. Recent contributions include Knop et al. 1995, Kymlicka 2001, Kymlicka and Norman 2000, Nicolaidis and Howse 2001, Norman 2006. Among the important issues, especially due to the risks of instability, are:
How the powers should be allocated, given that they should be used—but may be abused—by political entrepreneurs at several levels to affect their claims. The concerns about stability require careful attention to the impact of these powers on the ability to create and maintain ‘dual loyalties’ among the citizenry.
How to ensure that neither member units nor the central authorities overstep their jurisdiction. As Mill noted, “the power to decide between them in any case of dispute should not reside in either of the governments, or in any functionary subject to it, but in an umpire independent of both.” (1861) Such a court must be sufficiently independent, yet not utterly unaccountable. Many scholars seem to detect a centralising tendency among such courts (Watts 1998).
How to maintain sufficient democratic control over central bodies when these are composed of representatives of the executive branch of member units? The chains of accountability may be too long for adequate responsiveness. This is part of the core concerns about a ‘democratic deficit’ in the European Union (Watts 1998, Føllesdal and Hix 2006).
Who shall have the authority to revise the constitutionally embedded division of power? Some hold that a significant shift in national sovereignty occurs when such changes may occur without the unanimity characteristic of treaties.
The “Principle of Subsidiarity” has often been used to guide the decisions about allocation of power. This principle has recently received attention owing to its inclusion in European Union treaties. It holds that authority should rest with the member units unless allocating them to a central unit would ensure higher comparative efficiency or effectiveness in achieving certain goals. This principle can be specified in several ways, for instance concerning which units are included, which goals are to be achieved, and who has the authority to apply it. The principle has multiple pedigrees, and came to recent political prominence largely through its role in quelling fears of centralization in Europe—a contested role which the principle has not quite filled (Fleiner and Schmitt 1996, Burgess and Gagnon 1993, Føllesdal 1998).
Regarding distributive justice, federal political orders must manage tensions between ensuring member unit autonomy and securing the requisite redistribution within and among the member units. Indeed, the Federalists regarded federal arrangements as an important safeguard against “the equal division of property” (Federalist 10). The political scientists Linz and Stepan may be seen as finding support for the Federalists’ hypothesis: compared to unitary states in the OECD, the ‘coming together’ federations tend to have higher child poverty rates in solo mother households and a higher percentage of the population over sixty living in poverty. Linz and Stepan explain this inequality as stemming from the ‘demos constraining’ arrangements of these federations, which seek to protect individuals and member units from central authorities, combined with a weak party system. By comparison, the Constitution of Germany (not a ‘coming together’ federation) explicitly requires equalization of living conditions among the member units (Art. 72.2). Normative arguments may also support some distributive significance of federal arrangements, for instance owing to trade-offs between member unit autonomy and redistributive claims among member units (Follesdal 2001), or the relevance of a shared ‘identity’ (Grégoire and Jewkes 2015, de Schutter 2011). A central normative issue is to what extent a shared culture and bonds among citizens within a historically sovereign state reduce the claims on redistribution among the member units.
Federalism raises several challenges to democratic theory, especially as developed for unitary states. Federal arrangements are often more complex, thereby challenging standards of transparency, accountability and public deliberation (Habermas 2001). The restricted political agendas of each center of authority also require defense (Dahl 1983; Braybrooke 1983). One of several particular issues concerns the standing of member units (for further issues, cf. Norman 2006, 144–150).
The power that member units wield in federations often restricts or violates majority rule, in ways that merit careful scrutiny. Democratic theory has long been concerned with how to prevent domination of minorities, and many federal political orders do so by granting member units some influence over common decisions. Federal political orders typically affect individuals' political influence by skewing voting weight in favor of citizens of small member units, or by granting member unit representatives veto rights on central decisions. Minorities thus exercise control in apparent violation of principles of political equality and one-person-one-vote, all the more so when member units are of different size. These features raise fundamental normative questions concerning why member units should matter for the allocation of political power among individuals who live in different member units.
Many federal political orders accommodate minority groups in two ways discussed above: both through a division of power, and by granting them influence over common decisions. These measures of identity politics can be valuable ways to give public acknowledgment and recognition to groups and their members, sometimes on the very basis of previous domination. But identity politics also create challenges (Gutmann 1994), especially in federal arrangements that face greater risks of instability and must maintain citizens' dual political loyalties. Self-government arrangements may threaten the federal political order: “demands for self-government reflect a desire to weaken the bonds with the larger community and, indeed, question its very nature, authority and permanence” (Kymlicka and Norman 1994, 375). The emphasis on “recognition and institutionalization of difference could undermine the conditions that make a sense of common identification and thus mutuality possible” (Carens 2000, 193).
Federations are often thought to be sui generis, one-of-a-kind deviations from the ideal-type unitary sovereign state familiar from the Westphalian world order. Indeed, every federation may well be federal in its very own way, and not easy to summarize and assess as an ideal-type political order. Yet the phenomenon of non-unitary sovereignty is not new, and federal accommodation of differences may well be better than the alternatives. When and why this is so has long been the subject of philosophical, theoretical and normative analysis and reflection. Such public arguments may themselves contribute to develop the overarching loyalty required among citizens of stable, legitimate federations, who must understand themselves as members of two commonwealths.
Several of the historical writings—those marked ‘*’ below and others—are reprinted in part or full in Theories of Federalism: A Reader, Dimitrios Karmis and Wayne Norman (eds.), New York: Palgrave, 2005.
- Brutus, Junius (Philippe Duplessis-Mornay?), 1579, Vindiciae contra tyrannos, George Garnett (transl. and ed.), Cambridge: Cambridge University Press, 1994.
- *Althusius, Johannes, 1603, Politica Methodice Digesta, Frederick S. Carney (transl.), Daniel J. Elazar (introd.), Indianapolis: Liberty Press, 1995.
- Arendt, Hannah, 1972, “Thoughts on Politics and Revolution,” in Crises of the Republic, New York: Harcourt Brace, 199–233.
- Hugo, Ludolph, 1661, De Statu Regionum Germaniae. Helmstadt: Sumptibus Hammianis.
- Saint-Pierre, Abbé Charles, 1713, Projet pour rendre la paix perpétuelle en Europe (Project to make peace perpetual in Europe), Paris: Fayard, 1986.
- *Montesquieu, Baron de, 1748, The Spirit of Laws, Amherst, NY: Prometheus Books, 2002.
- *Rousseau, Jean-Jacques, 1761, A Lasting Peace Through the Federation of Europe, C.E. Vaughan (trans.), London: Constable, 1917.
- *–––, 1761, “Summary and Critique of Abbé Saint-Pierre's Project for Perpetual Peace,” in Grace G. Roosevelt (ed.), Reading Rousseau in the Nuclear Age, Philadelphia: Temple University Press, 1990.
- Hume, David, 1752, “Idea of a Perfect Commonwealth,” in T.H. Green and T.H. Grose (eds.), Essays moral, political and literary, London: Longmans, Green, 1882
- Smith, Adam, 1776, An Inquiry into the Nature and Causes of the Wealth of Nations, London: Dent, 1954.
- Storing, Herbert, and Murray Dry (eds.), 1981–, The Complete Anti-Federalist (7 Volumes), Chicago: University of Chicago Press.
- *Hamilton, Alexander, James Madison, and John Jay, 1787–88, The Federalist Papers, Jacob E. Cooke (ed.), Middletown, CT: Wesleyan University Press, 1961.
- Kant, Immanuel, 1784, “An Answer to the Question: ‘What Is Enlightenment?’” in Hans Reiss (ed.), Kant's Political Writings, Cambridge: Cambridge University Press, 1970, 54–60.
- *–––, 1796, “Perpetual Peace: A Philosophical Sketch,” in Hans Reiss (ed.), Kant's Political Writings, Cambridge: Cambridge University Press, 1970, 93–130.
- *de Tocqueville, Alexis, 1835–40, Democracy in America, P. Bradley (ed.), New York: Vintage, 1945 [Text available online].
- *Mill, John Stuart, 1861, Considerations on Representative Government, New York: Liberal Arts Press, 1958 [Text available online].
- *Proudhon, Pierre Joseph, 1863, Du Principe Federatif, J.-L. Puech and Th. Ruyssen (eds.), Paris: M. Riviere, 1959.
- Leo XIII, 1891, “Rerum Novarum,” in The Papal Encyclicals 1903–1939, Raleigh: Mcgrath, 1981.
- Renner, Karl, 1899, Staat und Nation, Vienna. Reprinted as “State and Nation” in Ephraim Nimni (ed.), National Cultural Autonomy and Its Contemporary Critics, London: Routledge, 2005, 64-82.
- Pius XI, 1931. “Quadragesimo Anno,” in The Papal Encyclicals 1903–1939, Raleigh: Mcgrath, 1981.
- *Spinelli, Altiero, and Ernesto Rossi, 1944, Il manifesto di Ventotene (The Ventotene Manifesto), Naples: Guida, 1982; reprinted in Karmis and Norman 2005. [Text available online]
- Aroney, Nicholas, 2007, “Subsidiarity, Federalism and the Best Constitution: Thomas Aquinas on City, Province and Empire,” Law and Philosophy, 26(2): 161–228.
- Dimitrios, Karmis, 2010, “Togetherness in Multinational Federal Democracies. Tocqueville, Proudhon and the Theoretical Gap in the Modern Federal Tradition,” in M. Burgess and A. G. Gagnon (eds.), Federal Democracies, Abingdon: Routledge, 46–63.
- Elazar, Daniel J., 1987, Federalism As Grand Design: Political Philosophers and the Federal Principle, Lanham, MD: University Press of America.
- Forsyth, Murray, 1981, Union of States: the Theory and Practice of Confederation, Leicester: Leicester University Press.
- Hueglin, Thomas O., 1999, Early Modern Concepts for a Late Modern World: Althusius on Community and Federalism, Waterloo, Ontario: Wilfrid Laurier University Press.
- Jewkes, Michael, 2016, “Diversity, Federalism and the Nineteenth-Century Liberals,” Critical Review of International Social and Political Philosophy, 19(2): 184–205.
- Klusmeyer, Douglas, 2010, “Hannah Arendt’s Case for Federalism,” Publius, 40(1): 31–58.
- Levinson, Sanford, 2015, An Argument Open to All: Reading “the Federalist” in the 21st Century, New Haven: Yale University Press.
- Riley, Patrick, 1973, “Rousseau As a Theorist of National and International Federalism,” Publius, 3(1): 5–18.
- –––, 1976, “Three Seventeenth-Century Theorists of Federalism: Althusius, Hugo and Leibniz,” Publius, 6(3): 7–42.
- –––, 1979, “Federalism in Kant's Political Philosophy,” Publius, 9(4): 43–64.
- Stoppenbrink, Katja, 2016, “Representative Government and Federalism in John Stuart Mill,” in D. Heidemann and K. Stoppenbrink (eds.), Join, or Die: Philosophical Foundations of Federalism, Berlin: de Gruyter: 209–232.
Publius: The Journal of Federalism regularly publishes philosophical articles.
- Bakvis, Herman, and William M. Chandler (eds.), 1987, Federalism and the Role of the State, Toronto: Toronto University Press.
- Bottomore, Tom and Patrick Goode (eds.), 1978, Austro-Marxism, Oxford: Oxford University Press.
- Burgess, Michael, and Alain G. Gagnon (eds.), 1993, Comparative Federalism and Federation: Competing Traditions and Future Directions, London: Harvester Wheatsheaf.
- –––, 2010, Federal Democracies, Abingdon: Routledge.
- De Schutter, Helder, 2011, “Federalism as Fairness,” The Journal of Political Philosophy, 19(2): 167–189.
- Fleiner, Thomas, and Nicolas Schmitt (eds.), 1996, Towards European Constitution: Europe and Federal Experiences, Fribourg: Institute of Federalism.
- Fleming, James E., and Jacob T. Levy (eds.), 2014, Federalism and Subsidiarity, Nomos (Volume 55), New York: New York University Press.
- Franck, Thomas M. (ed.), 1968, Why Federations Fail: An Inquiry into the Requisites for Successful Federalism, New York: New York University Press.
- Gagnon, Alain-G., and James Tully (eds.), 2001, Multinational Democracies, Cambridge: Cambridge University Press.
- Grégoire, Jean-Francois, and Michael Jewkes, (eds.), 2015, Recognition and Redistribution in Multinational Federations, Leuven: Leuven University Press.
- Gutmann, Amy (ed.), 1994, Multiculturalism: Examining the Politics of Recognition, Princeton: Princeton University Press.
- Härtel, Ines (ed.), 2012, Handbuch Föderalismus: – Föderalismus als Demokratische Rechtsordnung und Rechtskultur in Deutschland, Europa und der Welt, 4 volumes, Berlin: Springer.
- Heidemann, Dietmar, and Katja Stoppenbrink (eds.), 2016, Join, or Die: Philosophical Foundations of Federalism, Berlin: De Gruyter.
- Karmis, Dimitrios, and Wayne Norman (eds.), 2005, Theories of Federalism: A Reader, New York: Palgrave.
- Knop, Karen, Sylvia Ostry, Richard Simeon and Katherine Swinton (eds.), 1995, Rethinking Federalism: Citizens, Markets and Governments in a Changing World, Vancouver: University of British Columbia Press.
- Kymlicka, Will, and Wayne Norman (eds.), 2000, Citizenship in Diverse Societies, Oxford: Oxford University Press.
- Nicolaidis, Kalypso, and Robert Howse (eds.), 2001, The Federal Vision: Legitimacy and Levels of Governance in the US and the EU, Oxford: Oxford University Press.
- Nimni, Ephraim (ed.), 2005, National-Cultural Autonomy and its Contemporary Critics, Milton Park: Routledge.
- Trechsel, Alexander (ed.), 2006, Towards a Federal Europe, London: Routledge.
- Tushnet, Mark (ed.), 1990, Comparative Constitutional Federalism: Europe and America, New York: Greenwood Press.
Books and Articles
- Acton, Lord, 1907, “Nationality,” in J. N. Figgis (ed.), The History of Freedom and Other Essays, London: Macmillan.
- Bauer, Otto, 2000, The Question of Nationalities and Social Democracy (with introduction by Ephraim Nimni), Minneapolis: University of Minnesota Press. [original 1907, Die Nationalitätenfrage und die Sozialdemokratie, Vienna].
- Beer, Samuel H., 1993, To Make a Nation: the Rediscovery of American Federalism, Cambridge, MA: Harvard University Press.
- Braybrooke, David, 1983, “Can Democracy Be Combined With Federalism or With Liberalism?”, in J. R. Pennock and John W. Chapman (eds.), Nomos XXV: Liberal Democracy, New York, London: New York University Press.
- Buchanan, James, 1995, “Federalism as an ideal political order and an objective for constitutional reform,” Publius, 25(2): 19–27.
- –––, 1999/2001, Federalism, Liberty and the Law, Collected Works (Volume 18), Indianapolis: Liberty Fund.
- Carens, J. H., 2000, Culture, Citizenship, and Community. A Contextual Exploration of Justice as Evenhandedness, Oxford: Oxford University Press.
- Choudhry, Sujit, 2001, “Citizenship and Federations: Some Preliminary Reflections,” in Nicolaidis and Howse (eds.) 2001, 377–402.
- Dahl, Robert A., 1983, “Federalism and the Democratic Process,” in J. R. Pennock and J. W. Chapman (eds.), NOMOS XXV: Liberal Democracy, New York: New York University Press, 95–108.
- –––, 2001, How Democratic Is the American Constitution?, New Haven: Yale University Press.
- Elazar, Daniel J., 1968, “Federalism,” International Encyclopedia of the Social Sciences, New York: Macmillan, p. 356–361.
- –––, 1987, Exploring Federalism, Tuscaloosa: University of Alabama Press.
- –––, 1998, Covenant and Civil Society, New Brunswick, NJ: Transaction Publishers.
- Filippov, Mikhail, Peter C. Ordeshook, and Olga Shvetsova, 2004, Designing Federalism: A Theory of Self-sustainable Federal Institutions, Cambridge: Cambridge University Press.
- Follesdal, Andreas, 1998, “Subsidiarity,” Journal of Political Philosophy, 6(2): 231–59.
- –––, 2001, “Federal Inequality Among Equals: A Contractualist Defense,” Metaphilosophy, 32: 236–55.
- –––, and Simon Hix, 2006, “Why There Is a Democratic Deficit in the EU,” Journal of Common Market Studies, 44(3): 533–62.
- Friedrich, Carl, 1968, Trends of Federalism in Theory and Practice, New York: Praeger.
- Golemboski, David, 2015, “Federalism and the Catholic Principle of Subsidiarity,” Publius, 45(4): 526–51.
- Goodin, Robert, 1996, “Designing constitutions: the political constitution of a mixed commonwealth,” Political Studies, 44: 635–46.
- Habermas, Jürgen, 1992, Faktizität und Geltung, Frankfurt am Main: Suhrkamp; translated as Between Facts and Norms, William Rehg (trans.), Oxford: Polity, 1996.
- –––, 2001, Postnational Constellation, Cambridge, Mass: MIT Press
- Keohane, Robert O., and Joseph S. Nye, 2001, Power and Interdependence: World Politics in Transition (3rd Edition), New York: Longman.
- King, Preston, 1982, Federalism and Federation, Baltimore: Johns Hopkins, and London: Croom Helm.
- Klusmeyer, Douglas, 2010, “Hannah Arendt’s Case for Federalism,” Publius, 40(1): 31–58.
- Kymlicka, Will, 2001, “Minority Nationalism and Multination Federalism,” in Politics in the Vernacular, Oxford: Oxford University Press, pp. 91–119.
- –––, 2002, Contemporary Political Philosophy: An Introduction (2nd edition), Oxford: Clarendon Press.
- Kymlicka, W., and Wayne Norman, 1994, “Return of the Citizen: A Survey of Recent Work on Citizenship Theory,” Ethics, 104(2): 352–381.
- Laski, Harold, 1939, “The Obsolescence of Federalism,” The New Republic, 98 (May 3): 367–69. Reprinted in Karmis and Norman 2005.
- Levy, Jacob, 2004, “National Minorities Without Nationalism,” in Alain Dieckhoff (ed.), The Politics of Belonging: Nationalism, Liberalism, and Pluralism, Lanham, MD: Rowman & Littlefield.
- –––, 2006, “Beyond Publius: Montesquieu, Liberal Republicanism, and the Small-Republic Thesis,” History of Political Thought, 27(1): 50–90.
- –––, 2007a, “Federalism, Liberalism, and the Separation of Loyalties,” American Political Science Review, 101(3): 459–77.
- –––, 2007b, “Federalism and the Old and New Liberalisms,” Social Philosophy and Policy, 24(1): 306–26.
- –––, 2008, “Self-determination, non-domination, and federalism,” Hypatia, 23(3): 60–78.
- Lijphart, Arend, 1977, Democracy in Plural Societies: A Comparative Exploration, New Haven: Yale University Press.
- –––, 1999, Patterns of Democracy: Government Forms and Performance in Thirty-Six Countries, New Haven: Yale University Press.
- Linz, Juan J., 1999, “Democracy, multinationalism and federalism,” in A. Busch and W. Merkel (eds.), Demokratie in Ost und West, Frankfurt am Main: Suhrkamp, 382–401.
- McKay, David, 2001, Designing Europe—Comparative Lessons from the Federal Experience, Oxford: Oxford University Press.
- Musgrave, Robert A., 1959, The Theory of Public Finance: a Study in Political Economy, New York: McGraw-Hill.
- Norman, Wayne J., 1994, “Towards a Philosophy of Federalism,” in Judith Baker (ed.), Group Rights, Toronto: University of Toronto Press, 79–100.
- –––, 1995a, “The Ideology of Shared Values: A Myopic Vision of Unity in the Multi-Nation State,” in Joseph Carens (ed.), Is Quebec Nationalism Just? Perspectives From Anglophone Canada, Montreal: McGill-Queens University Press, 137–59.
- –––, 1995b, “The Morality of Federalism and the Evolution of the European Union,” Archiv für Rechts- und Sozialphilosophie, 59: 202–211.
- –––, 2006, Negotiating Nationalism: Nation-building, Federalism and Secession in the Multinational State, Oxford: Oxford University Press.
- Oates, Wallace, 1972, Fiscal Federalism, New York: Harcourt Brace Jovanovich.
- Olson, Mancur, 1969, “Strategic Theory and Its Applications: the Principle of ‘Fiscal Equivalence’: the Division of Responsibility Among Different Levels of Government,” American Economic Review, 59(2): 479–532.
- Padoa-Schioppa, Tommaso, 1995, “Economic Federalism and the European Union,” in Knop et al. 1995, 154–65.
- Renner, Karl, 1917, “The Development of the National Idea,” in Bottomore and Goode 1978, 118–25.
- Riker, William H., with Andreas Føllesdal, 2007, “Federalism,” in Robert E. Goodin, Philip Pettit and Thomas Pogge (eds.), A Companion to Contemporary Political Philosophy, Oxford: Blackwell.
- Rose-Ackerman, Susan, 1980, “Risk Taking and Reelection: Does Federalism Promote Innovation?,” The Journal of Legal Studies, 9(3): 593–616
- Sbragia, Alberta M., 1992, “Thinking about the European future: the uses of comparison,” in Alberta M. Sbragia (ed.), Euro-Politics: Institutions and policymaking in the ‘new’ European Community, Washington, DC: The Brookings Institution, 257–90.
- Shepsle, Kenneth A., 1986, “Institutional equilibrium and Equilibrium institutions”, in Herbert Weisberg (ed.), Political Science: The science of politics, New York: Agathon Press, 51–81.
- Simeon, Richard, 1998, “Considerations on the Design of Federations: The South African Constitution in Comparative Perspective,” SA Publiekreg/Public Law, 13(1): 42–72.
- Simeon, Richard, and Daniel-Patrick Conway, 2001, “Federalism and the Management of Conflict in Multinational Societies,” in Gagnon and Tully 2001, 338–65.
- Stepan, Alfred, 1999, “Federalism and Democracy: Beyond the U.S. Model,” Journal of Democracy, 10: 19–34; reprinted in Karmis and Norman 2005.
- Tamir, Yael, 1993, Liberal Nationalism, Princeton: Princeton University Press.
- Taylor, Charles, 1993, Reconciling the Solitudes: Essays on Canadian Federalism and Nationalism, Montreal: McGill-Queen's Press.
- Tiebout, Charles M., 1956, “A Pure Theory of Local Expenditures,” Journal of Political Economy, 64(5): 416–24.
- Tully, James, 1995, Strange Multiplicity: Constitutionalism in an Age of Diversity, Cambridge: Cambridge University Press.
- Tushnet, Mark, 1996, “Federalism and Liberalism,” Cardozo Journal of International and Comparative Law, 4: 329–44.
- Watts, Ronald L., 1998, “Federalism, Federal Political Systems, and Federations,” Annual Review of Political Science, 1: 117–37.
- –––, 1999, Comparing Federal Systems, Montreal: McGill-Queens University Press.
- Weinstock, Daniel, 2011, “Self-Determination for (some) cities?” in Axel Gosseries and Yannick Vanderborght (eds.), Arguing About Justice: Essays for Philippe Van Parijs, Louvain: Presses universitaires de Louvain, 377–86.
- Wheare, Kenneth C., 1964, Federal Government (4th edition), Oxford: Oxford University Press.
- Daniel J. Elazar's writings on federalism (maintained at the Jerusalem Center for Public Affairs)
Suggestions from Douglas Klusmeyer, Silje Langvatn, Antoinette Scherz, and Katja Stoppenbrink have improved this entry.
AP biology makes heavy use of the scientific method through experiments in an attempt to test a hypothesis and learn something about organisms. AP biology students must individually plan a biologically interesting phenomenon to be investigated, a hypothesis related to that phenomenon and an experiment to determine whether the hypothesis is valid.
Biologists are often interested in the growth of organisms and the factors that influence the rate at which organisms grow. An experiment investigating growth for your AP biology course should specify an organism whose growth is worthy of note, a hypothesis related to the factors of its growth and an experiment to show how the modification of these factors can influence rate of growth.
One specific idea for this type of experiment is to investigate mold. Mold is important because it affects the rate at which food spoils. Design a controlled experiment in which you can alter the environmental conditions at which the mold grows. Consider investigating the impact of light, temperature and humidity on the growth of various samples of mold.
Plants are common subjects of AP biology experiments because they are inexpensive and easy to control. Design an experiment that investigates how various factors influence the phenomena related to plants. Some ideas are investigating plant color, growth and oxygen output.
If you are interested in oxygen output, many common aquarium plants such as elodea produce easily observable levels of oxygen. You can observe oxygen output by placing the plants inside tubes that go above the water in the aquarium. The oxygen will accumulate at the top of the tubes. Create different aquarium conditions and see how these conditions contribute to the amount of oxygen output.
AP classes integrate some basic concepts of chemistry into their course materials. You can conduct an experiment that incorporates these chemical concepts by using fundamental chemistry tools. Inexpensive tools such as beakers, heating apparatuses and litmus paper are easily available to high school students. You can use these tools to investigate organisms.
One simple example is to create an experiment in which you grow different types of onions or other fruits and vegetables and then test them for acidity. You can apply the litmus paper to the foods to see how different factors, such as species or growing conditions, affect the pH levels of these foods.
Much of what you learn in AP biology is related to the vital micro-components of an organism. Consider designing an experiment in which you investigate parts of an organism or single-celled organisms under a microscope. One idea is to purchase euglena or other single-celled organisms and subject them to magnetic fields, all while observing their reactions under a microscope.
- "AP Biology for Dummies"; Peter Mikulecky, et al.; 2008
- "46 Science Fair Projects for the Evil Genius"; Bob Bonnet and Dan Keen; 2009
- Photo Credit Jupiterimages/Photos.com/Getty Images
Earth’s oceans come from outer space. This theory is nothing new: the source of the blue of our blue marble has been a subject of debate among astronomers for decades. Until now, asteroids were thought to have provided most of the water.
Now, new evidence, published last week in Nature, supports the theory that comets delivered a significant portion of Earth's oceans, which scientists believe formed about 8 million years after the planet itself.
“Life would not exist on Earth without liquid water, and so the questions of how and when the oceans got here is a fundamental one,” says University of Michigan astronomer Ted Bergin. “It's a big puzzle and these new findings are an important piece.”
Bergin is a co-investigator on HIFI, the Heterodyne Instrument for the Far Infrared on the Herschel Space Observatory. With measurements from HIFI, the researchers found that the ice on a comet called Hartley 2 has the same chemical composition as our oceans. Both have similar D/H ratios. The D/H ratio is the proportion of deuterium, or heavy hydrogen, in the water. A deuterium atom is a hydrogen atom with an extra neutron in its nucleus.
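Because the argument turns on comparing D/H ratios, a tiny numeric sketch may help. The figures below are rounded, illustrative values (ocean water close to 1.56 × 10^-4, Hartley 2 of the same order, earlier-measured comets roughly twice that); they are assumptions for illustration, not numbers quoted in this article.

```python
# Illustrative comparison of deuterium-to-hydrogen (D/H) ratios.
# All values are approximate and for illustration only.

OCEAN_D_H = 1.56e-4       # Earth's oceans (standard mean ocean water, approx.)
HARTLEY2_D_H = 1.61e-4    # comet Hartley 2 (approx.)
OORT_COMET_D_H = 3.0e-4   # typical of previously measured comets (approx.)

for name, ratio in [("Hartley 2", HARTLEY2_D_H), ("Typical earlier comet", OORT_COMET_D_H)]:
    # A ratio-to-ocean value near 1.0 means the comet ice looks like ocean water.
    print(f"{name}: D/H = {ratio:.2e}, ratio to ocean = {ratio / OCEAN_D_H:.2f}")
```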
This was the first time ocean-like water was detected in a comet. “We were all surprised,” admits Bergin.
Six other comets HIFI measured in recent years had a much different D/H ratio than our oceans, meaning similar comets could not have been responsible for more than 10 percent of Earth's water.
The astronomers hypothesize that Hartley 2 was born in a different part of the solar system than the other six. Hartley most likely formed in the Kuiper belt, which starts near Pluto at about 30 times farther from the sun than Earth is. The other six hail from the Oort Cloud more than 5,000 times farther out.
“The results show that the amount of material out there that could have contributed to Earth’s oceans is perhaps larger than we thought,” speculates Bergin.
Automotive electrical systems are designed to perform a variety of functions. These systems contain five electrical circuits: charging, starting, ignition, lighting, and accessory. Electrical power and control signals must be delivered to electrical devices reliably and safely. This goal is accomplished through careful circuit design, prudent component selection, and practical equipment location. By carefully studying this course, you will understand how these circuits work and the adjustments and repairs required to maintain the electrical systems in peak condition.
When you have completed this handbook, you will be able to:
The basic charging system consists of a battery, alternator, voltage regulator, ignition switch, and indicator light or indicator gauge or both. They must all work together to provide a source of electricity for the vehicle to operate. The charging system performs several functions:
The storage battery is the heart of the charging circuit (Figure 1). It is an electrochemical device for producing and storing electricity.
Figure 1 — Battery.
A vehicle battery has several important functions:
The type of battery used in automotive, construction, and weight-handling equipment is a lead-acid cell-type battery. This type of battery produces direct current (DC) electricity that flows in only one direction. When the battery is discharging, it changes chemical energy into electrical energy, thereby releasing stored energy. During charging (current flowing into the battery from the charging system), electrical energy is converted into chemical energy. The battery can then store energy until the vehicle requires it.
The lead-acid cell-type storage battery is built to withstand severe vibration, cold weather, engine heat, corrosive chemicals, high current discharge, and prolonged periods without use. To test and service batteries properly, you must understand battery construction. The construction of a basic lead-acid cell-type battery is as follows:
The battery element is made up of negative plates, positive plates, separators, and straps (Figure 2). The element fits into a cell compartment in the battery case. Most automotive batteries have six elements.
Figure 2 — Battery element.
Each cell compartment contains two kinds of chemically active lead plates, known as positive and negative plates. The battery plates are made of a stiff mesh framework coated with porous lead. These plates are insulated from each other by suitable separators and are submerged in a sulfuric acid solution (electrolyte).
Charged negative plates contain spongy (porous) lead (Pb), which is gray in color. Charged positive plates contain lead peroxide (PbO2), which has a chocolate brown color. These substances are known as the active materials of the plates. Calcium or antimony is normally added to the lead to increase battery performance and to decrease gassing. Since the lead on the plates is porous like a sponge, the battery acid easily penetrates into the material. This aids the chemical reaction and the production of electricity.
Lead battery straps or connectors run along the upper portion of the case to connect the plates. The battery terminals (post or side terminals) are constructed as part of one end of each strap.
To prevent the plates from touching each other and causing a short circuit, sheets of insulating material (micro-porous rubber, fibrous glass, or plastic impregnated material), called separators, are inserted between the plates. These separators are thin and porous so the electrolyte will flow easily between the plates. The side of the separator that is placed against the positive plate is grooved so the gas that forms during charging will rise to the surface more readily. These grooves also provide room for any material that flakes from the plates to drop to the sediment space below.
The battery case is made of hard rubber or a high-quality plastic. The case must withstand extreme vibration, temperature change, and the corrosive action of the electrolyte. The dividers in the case form individual containers for each element. A container with its element is one cell.
Stiff ridges or ribs are molded in the bottom of the case to form a support for the plates and a sediment recess for the flakes of active material that drop off the plates during the life of the battery. The sediment is thus kept clear of the plates so it will not cause a short circuit across them.
The battery cover is made of the same material as the container and is bonded to and seals the container. The cover provides openings for the two battery posts and a cap for each cell.
Battery caps either screw or snap into the openings in the battery cover. The battery caps (vent plugs) allow gas to escape and prevent the electrolyte from splashing outside the battery. They also serve as spark arresters. The battery is filled through the vent plug openings. Maintenance-free batteries have a large cover that is not removed during normal service.
Hydrogen gas can collect at the top of a battery. If this gas is exposed to a flame or spark, it can explode.
Battery terminals provide a means of connecting the battery plates to the electrical system of the vehicle. Either two round post or two side terminals can be used. Battery terminals are round metal posts extending through the top of the battery cover. They serve as connections for battery cable ends. The positive post will be larger than the negative post. It may be marked with red paint and a positive (+) symbol. The negative post is smaller, may be marked with black or green paint, and has a negative (-) symbol on or near it.
Side terminals are electrical connections located on the side of the battery. They have internal threads that accept a special bolt on the battery cable end. Side terminal polarity is identified by positive and negative symbols marked on the case.
The electrolyte solution in a fully charged battery is a solution of concentrated sulfuric acid in water. This solution is about 60 percent water and about 40 percent sulfuric acid.
The electrolyte in the lead-acid storage battery has a specific gravity of 1.28, which means that it is 1.28 times as heavy as water. The amount of sulfuric acid in the electrolyte changes with the amount of electrical charge; the specific gravity of the electrolyte also changes with the amount of electrical charge. A fully charged battery will have a specific gravity of 1.28 at 80°F. The figure will go higher with a temperature decrease, and lower with a temperature increase.
As a storage battery discharges, the sulfuric acid is depleted and the electrolyte is gradually converted into water. This action provides a guide in determining the state of discharge of the lead-acid cell. The electrolyte that is placed in a lead-acid battery has a specific gravity of 1.280.
The specific gravity of an electrolyte is actually the measure of its density. The electrolyte becomes less dense as its temperature rises, and a low temperature means a high specific gravity. The hydrometer that you use is marked to read specific gravity at 80°F only. Under normal conditions, the temperature of your electrolyte will not vary much from this mark. However, large changes in temperature require a correction in your reading.
For every 10-degree change in temperature ABOVE 80°F, you must add 0.004 to your specific gravity reading. For every 10-degree change in temperature below 80°F, you must subtract 0.004 from your specific gravity reading. Suppose you have just taken the gravity reading of a cell. The hydrometer reads 1.280. A thermometer in the cell indicates an electrolyte temperature of 60°F. That is a difference of 20 degrees below the 80°F reference. To get the true gravity reading, you must subtract 0.008 from 1.280. Thus the specific gravity of the cell is actually 1.272. A hydrometer conversion chart is usually found on the hydrometer. From it, you can obtain the specific gravity correction for temperature changes above or below 80°F.
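The temperature-correction rule above is simple arithmetic, so a short sketch may make it concrete. The function below just encodes the 0.004-per-10-degrees rule and the worked 60°F example from the text; Python is used here only for illustration.

```python
def corrected_specific_gravity(reading, electrolyte_temp_f):
    """Correct a hydrometer reading taken at electrolyte_temp_f (deg F)
    to the 80 deg F reference, using the 0.004-per-10-degrees rule."""
    return reading + 0.004 * (electrolyte_temp_f - 80) / 10

# Worked example from the text: 1.280 read at 60 deg F -> 1.272 true gravity.
print(round(corrected_specific_gravity(1.280, 60), 3))   # 1.272
```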
The capacity of a battery is measured in ampere hours. The ampere-hour capacity is equal to the product of the current in amperes and the time in hours during which the battery is supplying current. The ampere-hour capacity varies inversely with the discharge current. The size of a cell is determined generally by its ampere-hour capacity. The capacity of a cell depends upon many factors, the most important of which are as follows:
Battery ratings were developed by the Society of Automotive Engineers (SAE) and the Battery Council International (BCI). They are set according to national test standards for battery performance. They let the mechanic compare the cranking power of one battery to another. The two methods of rating lead-acid storage batteries are the cold-cranking rating and the reserve capacity rating.
The cold cranking rating determines how much current in amperes the battery can deliver for thirty seconds at 0°F while maintaining terminal voltage of 7.2 volts or 1.2 volts per cell. This rating indicates the ability of the battery to crank a specific engine (based on starter current draw) at a specified temperature.
For example, one manufacturer recommends a battery with 305 cold-cranking amps for a small four-cylinder engine but a 450 cold-cranking amp battery for a larger V-8 engine. A more powerful battery is needed to handle the heavier starter current draw of the larger engine.
The reserve capacity rating is the time needed to lower battery terminal voltage below 10.2 V (1.7 V per cell) at a discharge rate of 25 amps. This is with the battery fully charged and at 80°F. Reserve capacity will appear on the battery as a time interval in minutes.
For example, if a battery is rated at 90 minutes and the charging system fails, the operator has approximately 90 minutes (1 1/2 hours) of driving time under minimum electrical load before the battery goes completely dead.
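Since reserve capacity is stated in minutes at a fixed 25-amp draw, it can be converted to a rough ampere-hour figure; the sketch below does only that arithmetic. Keep in mind the earlier point that ampere-hour capacity varies inversely with discharge current, so this is only the capacity delivered at the 25-amp test rate, not a general rating.

```python
def reserve_capacity_to_amp_hours(rc_minutes, discharge_amps=25):
    """Ampere-hours delivered during the reserve capacity test
    (rc_minutes at discharge_amps). A rough sketch only; usable
    capacity varies with discharge rate and temperature."""
    return rc_minutes / 60 * discharge_amps

# Example from the text: a 90-minute reserve capacity rating.
print(reserve_capacity_to_amp_hours(90))   # 37.5 amp-hours at the 25 A test rate
```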
Under normal conditions, a hydrometer reading below 1.265 specific gravity at 80°F is a warning signal that the battery needs charging or is defective.
When testing shows that a battery requires charging, a battery charger is required to reenergize it. The battery charger will restore the charge on the plates by forcing current back into the battery. The battery charger uses AC (alternating current) power from a wall outlet, usually 120 volts, and steps it down to a voltage slightly above that of the battery, usually 14 to 15 volts. There are basically two types of chargers: the slow charger and the fast (quick) charger.
The slow charger is also known as the trickle charger. It feeds a small amount of current back into the battery over a long period of time. When using a trickle charger, it takes about 12 hours at 10 amps to fully charge a dead battery. However, the chemical action inside the battery is improved. During a slow charge, the active materials are put back onto the battery plates more firmly than they are during a fast charge. It is always better for the battery to use a trickle charge when time allows.
The fast charger, or quick charger and sometimes called the boost charger, forces a high amount of current flow back into the battery. A fast charger is commonly used in shops to start an engine or get the vehicle out of the shop quickly because there is no time to wait for a slow charge. Fast charging is beneficial if you just need to start the engine; if time allows, use the slow charge.
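As a rough planning aid, charging time can be estimated from the ampere-hours to be restored and the charger current. The sketch below assumes a simple constant-current model with an 85 percent charging-efficiency factor; both the efficiency figure and the 100 ampere-hour example are assumptions for illustration, not values from this handbook, though the result lands near the 12-hour figure mentioned above.

```python
def estimated_charge_time_hours(amp_hours_needed, charger_amps, efficiency=0.85):
    """Very rough charging-time estimate: hours = amp-hours to restore
    divided by (charger current * assumed charging efficiency). Real
    charge acceptance tapers off as the battery approaches full charge."""
    return amp_hours_needed / (charger_amps * efficiency)

# Restoring roughly 100 amp-hours with a 10 A trickle charger:
print(round(estimated_charge_time_hours(100, 10), 1))   # about 11.8 hours
```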
When using a fast charger, do not exceed a charge rate of 35 amps. Also, ensure the battery temperature does not exceed 125°F. Exceeding either limit could damage the battery.
If there is a possibility that the battery is frozen, do not charge it. Charging a frozen battery can rupture the battery case and cause an explosion. Always allow the battery time to thaw before charging it.
It is easy to connect the battery to the charger, turn the charging current on, and, after a normal charging period, turn the charging current off and remove the battery. Certain precautions, however, are necessary both before and during the charging period. These practices are as follows:
Do not permit the baking soda and water solution to enter the cells. To do so would neutralize the acid within the electrolyte.
See that the vent holes are clear and open. Do NOT remove battery caps during charging. This prevents acid from spraying onto the top of the battery and keeps dirt out of the cells.
Check the electrolyte level before charging begins and during charging. Add distilled water if the level of electrolyte is below the top of the plate.
Keep the charging room well ventilated. Do NOT smoke near batteries being charged. Batteries on charge release hydrogen gas. A small spark may cause an explosion.
Take frequent hydrometer readings of each cell and record them. You can expect the specific gravity to rise during the charge. If it does not rise, remove the battery and dispose of it as per local hazardous material disposal instruction.
Keep close watch for excessive gassing, especially at the very beginning of the charge, when using the constant voltage method. Reduce the charging current if excessive gassing occurs. Some gassing is normal and aids in remixing the electrolyte. Do not remove a battery until it has been completely charged.
New batteries may come to you full of electrolyte and fully charged. In this case, all that is necessary is to install the batteries properly in the piece of equipment. Most batteries shipped to NCF units are received charged and dry.
Charged and dry batteries will retain their state of full charge indefinitely so long as moisture is not allowed to enter the cells. Therefore, batteries should be stored in a dry place. Moisture and air entering the cells will allow the negative plates to oxidize. The oxidation causes the battery to lose its charge.
To activate a dry battery, remove the restrictors from the vents and remove the vent caps. Then fill all the cells to the proper level with electrolyte. The best results are obtained when the temperature of the battery and electrolyte is within the range of 60°F to 80°F.
Some gassing will occur while you are filling the battery, due to the release of carbon dioxide (a product of the drying process) and hydrogen sulfide (produced by the presence of free sulfur). Therefore, the filling operation should be done in a well-ventilated area. These gases and odors are normal and are no cause for alarm.
Approximately 5 minutes after adding electrolyte, check the battery for voltage and electrolyte strength. More than 6 volts or more than 12 volts, depending upon the rated voltage of the battery, indicates the battery is ready for service. From 5 to 6 volts or from 10 to 12 volts indicates oxidized negative plates, and the battery should be charged before use. Less than 5 or less than 10 volts, depending upon the rated voltage, indicates a bad battery, which should not be placed in service.
If, before the battery is placed in service, the specific gravity, when corrected to 80°F, is more than 0.030 points lower than it was at the time of initial filling, or if one or more cells gas violently after adding the electrolyte, the battery should be fully charged before use. If the electrolyte reading fails to rise during charging, discard the battery.
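The voltage thresholds for a freshly activated dry-charged battery form a simple decision rule, sketched below for convenience. The cutoffs come straight from the text; the function is only an illustration and is not a substitute for the full check, which also includes the specific gravity comparison described above.

```python
def dry_charged_battery_status(voltage, rated_voltage=12):
    """Classify a freshly activated dry-charged battery from its voltage,
    using the thresholds described in the text (rated_voltage of 6 or 12)."""
    if rated_voltage == 12:
        ready, low = 12.0, 10.0
    else:
        ready, low = 6.0, 5.0
    if voltage > ready:
        return "ready for service"
    if voltage >= low:
        return "oxidized negative plates - charge before use"
    return "bad battery - do not place in service"

print(dry_charged_battery_status(12.4))   # ready for service
print(dry_charged_battery_status(11.1))   # oxidized negative plates - charge before use
```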
Most shops receive ready-mixed electrolyte. Some units may still get concentrated sulfuric acid that must be mixed with distilled water to get the proper specific gravity for electrolyte.
Mixing electrolyte is a dangerous job. You have probably seen holes appear in a uniform for no apparent reason. Later you remembered replacing a storage battery and having carelessly brushed against the battery.
When mixing electrolyte, you are handling pure sulfuric acid, which can burn clothing quickly and severely burn your hands and face. Always wear rubber gloves, an apron, goggles, and a face shield for protection against splashes or accidental spilling.
When you are mixing electrolyte, NEVER pour water into the acid. Always pour acid into water. If water is added to concentrated sulfuric acid, the mixture may explode or splatter and cause severe burns. Pour the acid into the water slowly, stirring gently but thoroughly all the time. Large quantities of acid may require hours of safe dilution.
Let the mixed electrolyte cool down to room temperature before adding it to the battery cells. Hot electrolyte will eat up the cell plates rapidly. To be on the safe side, do not add the electrolyte if its temperature is above 90°F. After filling the battery cells, let the electrolyte cool again because more heat is generated by its contact with the battery plates. Next, take hydrometer readings. The specific gravity of the electrolyte will correspond quite closely to the values on the mixing chart if the parts of water and acid are mixed correctly.
If a battery is not properly maintained, its service life will be drastically reduced. Battery maintenance should be done during every vehicle servicing. Complete battery maintenance includes the following:
If the electrolyte level in the battery is low, fill the cells to the correct level with distilled water (purified water). Distilled water should be used because it does not contain the impurities found in tap water. Tap water contains many chemicals that reduce battery life. The chemicals contaminate the electrolyte and collect in the bottom of the battery case. If enough contaminants collect in the bottom of the case, the cell plates short out, ruining the battery.
If water must be added at frequent intervals, the charging system may be overcharging the battery. A faulty charging system can force excessive current into the battery. Battery gassing can then remove water from the battery.
Maintenance-free batteries do NOT need periodic electrolyte service under normal conditions. They are designed to operate for long periods without loss of electrolyte.
Do NOT use a scraper or knife to clean battery terminals. This action removes too much metal and can ruin the terminal connection.
To clean the terminals, remove the cables and inspect the terminal posts to see if they are deformed or broken. Clean the terminal posts and the inside surfaces of the cable clamps with a cleaning tool before replacing them on the terminal posts.
When reinstalling the cables, tighten the terminals just enough to secure the connection; over-tightening will strip the cable bolt threads. Coat the terminals with petroleum jelly or white grease. This will keep acid fumes off the connections and keep them from corroding again.
Non-maintenance-free batteries can have their state of charge checked with a hydrometer. The hydrometer tests the specific gravity of the electrolyte. It is fast and simple to use.
A fully charged battery should have a hydrometer reading of 1.265 or higher. If the reading is below 1.265, the battery needs to be recharged, or it may be defective.
A defective battery can be discovered by using a hydrometer to check each cell. If the specific gravity in any cell varies excessively from other cells (25 to 50 points), the battery is bad. Cells with low readings may be shorted. When all of the cells have equal specific gravity, even if they are low, the battery can usually be recharged. On maintenance-free batteries a charge indicator eye shows the battery charge. The charge indicator changes color with levels of battery charge. For example, the indicator may be green with the battery fully charged. It may turn black when discharged or yellow when the battery needs to be replaced. If there is no charge indicator eye or when in doubt of its reliability, you can use a voltmeter and ammeter or a load tester to determine battery condition quickly.
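As a worked example of the cell comparison rule above, the sketch below flags a battery whose cell readings spread by more than 25 points (0.025 specific gravity); the 25-point figure is taken from the conservative end of the 25-to-50-point range quoted, and the function itself is only an illustration.

def evaluate_cells(cell_readings, max_spread=0.025, full_charge=1.265):
    """Judge battery condition from per-cell hydrometer readings.

    A spread above max_spread between the highest and lowest cells
    suggests a shorted cell; equal but low readings call for recharging.
    """
    spread = max(cell_readings) - min(cell_readings)
    if spread > max_spread:
        return "cell readings vary excessively - battery is bad"
    if min(cell_readings) < full_charge:
        return "cells equal but low - recharge the battery"
    return "battery is fully charged"

print(evaluate_cells([1.270, 1.265, 1.268, 1.272, 1.266, 1.269]))  # fully charged
print(evaluate_cells([1.230, 1.180, 1.235, 1.232, 1.228, 1.231]))  # battery is bad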
As a mechanic you will be expected to test batteries for proper operation and condition. These tests are as follows:
Figure 3 — Battery leak test.
To perform a battery terminal test, connect the negative voltmeter lead to the battery cable end (Figure 4). Touch the positive lead to the battery terminal. With the ignition or injection system disabled so that the engine will not start, crank the engine while watching the voltmeter reading.
Figure 4 — Battery terminal leak test.
If the voltmeter reading is 0.5 volt or above, there is high resistance at the battery cable connection. This indicates that the battery connections need to be cleaned. A good, clean connection will show less than a 0.5-volt drop.
Figure 5 — Battery voltage test.
The battery voltage test is used on maintenance-free batteries because these batteries do not have caps that can be removed for testing with a hydrometer. To perform this test, connect the voltmeter or battery tester across the battery terminals. Turn on the vehicle headlights or heater blower to provide a light load. Now read the meter or tester. A well-charged battery should have over 12 volts. If the meter reads approximately 11.5 volts, the battery is not charged adequately, or it may be defective.
To perform a cell voltage test, use a low voltage reading voltmeter with special cadmium (acid-resistant metal) tips (Figure 6). Insert the tips into each cell, starting at one end of the battery and working your way to the other. Test each cell carefully. If the cells are low but equal, recharging usually will restore the battery. If cell voltage readings vary more than .2 volts, the battery is BAD.
Figure 6 — Cell voltage test.
A battery drain test checks for abnormal current draw with the ignition off. If a battery goes dead without being used, you need to check for a current drain. To perform a battery drain test, set up an ammeter, as shown in Figure 7. Pull the fuse if the vehicle has a dash clock. Close all doors and the trunk (if applicable). Then read the ammeter. If everything is off, there should be a zero reading. Any reading indicates a problem. To help pinpoint the problem, pull fuses one at a time until there is a zero reading on the ammeter. This action isolates the circuit that has the problem.
Figure 7 — Battery drain test.
Before load testing a battery, you must calculate how much current draw should be applied to the battery. If the ampere-hour rating of the battery is given, load the battery to three times its amp-hour rating. For example, if the battery is rated at 60 amp hours, test the battery at 180 amps (60 x 3 = 180). The majority of the batteries are now rated in SAE cold cranking amps, instead of amp-hours. To determine the load test for these batteries, divide the cold-crank rating by two. For example, a battery with 400 cold cranking amps rating should be loaded to 200 amps (400 ÷ 2 = 200). Connect the battery load tester, as shown in Figure 8. Turn the control knob until the ammeter reads the correct load for your battery.
Figure 8 — Battery load test.
After checking the battery charge and finding the amp load value, you are ready to test battery output. Make sure that the tester is connected properly. Turn the load control knob until the ammeter reads the correct load for your battery. Hold the load for 15 seconds. Next, read the voltmeter while the load is applied. Then turn the load control completely off so the battery will not be discharged. If the voltmeter reads 9.5 volts or more at room temperature, the battery is good. If the battery reads below 9.5 volts at room temperature, battery performance is poor. This condition indicates that the battery is not producing enough current to run the starting motor properly.
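The load test arithmetic described above (three times the amp-hour rating, or half the cold-cranking-amp rating, then 9.5 volts or more held under load at room temperature) can be summarized in a short sketch. The function below only restates those rules with hypothetical ratings; it is not part of any tester.

def battery_load_test(amp_hour_rating=None, cold_cranking_amps=None,
                      loaded_volts=None):
    """Size the test load, then judge the voltage held for 15 seconds."""
    if amp_hour_rating is not None:
        load_amps = amp_hour_rating * 3
    elif cold_cranking_amps is not None:
        load_amps = cold_cranking_amps / 2
    else:
        raise ValueError("a battery rating is required")
    verdict = None
    if loaded_volts is not None:
        verdict = "good" if loaded_volts >= 9.5 else "poor performance"
    return load_amps, verdict

print(battery_load_test(amp_hour_rating=60, loaded_volts=10.1))    # (180, 'good')
print(battery_load_test(cold_cranking_amps=400, loaded_volts=9.2)) # (200.0, 'poor performance')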
Familiarize yourself with proper operating procedures for the type of tester you have available. Improper operation of electrical test equipment may result in serious damage to the test equipment or the unit being tested.
The alternator has replaced the DC (Direct Current) generator because of its improved efficiency (Figure 9). It is smaller, lighter, and more dependable than the DC generator. The alternator also produces more output at idle, which makes it ideal for late model vehicles. The alternator has a spinning magnetic field. The output windings (stator) are stationary. As the magnetic field rotates, it induces current in the output stator windings.
Figure 9 — Alternator.
Knowledge of the construction of an alternator is required before you can understand the proper operation, testing procedures, and repair procedures applicable to an alternator.
The primary components of an alternator are as follows:
Figure 10 — Rotor assembly.
The fingers on one of the claw-shaped pole pieces produce south (S) poles and the other produces north (N) poles. As the rotor rotates inside the alternator, alternating N-S-N-S polarity and AC current are produced. An external source of electricity (DC) is required to excite the magnetic field of the alternator.
Slip rings are mounted on the rotor shaft to provide current to the rotor windings. Each end of the field coil connects to the slip rings.
The stator assembly produces the electrical output of the alternator (Figure 11). The stator, which is part of the alternator frame when assembled, consists of three groups of windings or coils which produce three separate AC currents. This is known as three-phase output. One end of the windings is connected to the stator assembly and the other is connected to a rectifier assembly. The windings are wrapped around a soft laminated iron core that concentrates and strengthens the magnetic field around the stator windings. There are two types of stators—the Y-type stator and the delta-type stator.
Figure 11 — Stator assembly.
The Y-type stator has the wire ends from the stator windings connected to a neutral junction (Figure 12, View A). The circuit looks like the letter Y. The Y-type stator provides good current output at low engine speeds.
Figure 12 — Stator assembly.
The delta-type stator (Figure 12, View B) has the stator wires connected end-to-end. With no neutral junction, two circuit paths are formed between the diodes. A delta-type stator is used in high output alternators.
The rectifier diodes are mounted in a heat sink or diode bridge. Three positive diodes are press fit in an insulated frame. Three negative diodes are mounted into an uninsulated or grounded frame.
When an alternator is producing current, the insulated diodes pass only outflowing current to the battery. The diodes provide a block, preventing reverse current flow from the alternator.
The operation of an alternator is somewhat different from that of the DC generator. An alternator has a rotating magnet (rotor) which causes the magnetic lines of force to rotate with it. These lines of force are cut by the stationary (stator) windings in the alternator frame as the rotor turns, with the rotating N and S poles constantly changing position. When S is up and N is down, current flows in one direction, but when N is up and S is down, current flows in the opposite direction. This is called alternating current, as it changes direction twice for each complete revolution. If the rotor speed were increased to 60 revolutions per second, it would produce 60-cycle AC.
Since the engine speed varies in a vehicle, the frequency also varies with the change of speed. Likewise, increasing the number of pairs of magnetic north and south poles will increase the frequency in proportion to the number of pole pairs. A four-pole rotor can generate twice the frequency per revolution of a two-pole rotor.
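The relationship in the last two paragraphs can be written as one formula: output frequency equals the number of pole pairs times the rotor speed in revolutions per second. The sketch below simply restates that arithmetic.

def alternator_frequency_hz(pole_pairs, rotor_rpm):
    """AC output frequency: one full cycle per pole pair per revolution."""
    return pole_pairs * rotor_rpm / 60.0

# A two-pole rotor (one pole pair) at 3,600 rpm (60 rev/s) gives 60-cycle AC;
# a four-pole rotor (two pole pairs) doubles the frequency at the same speed.
print(alternator_frequency_hz(1, 3600))  # 60.0
print(alternator_frequency_hz(2, 3600))  # 120.0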
A voltage regulator controls alternator output by changing the amount of current flow through the rotor windings. Any change in rotor winding current changes the strength of the magnetic field acting on the stator windings. In this way, the voltage regulator can maintain a preset charging voltage. The three basic types of voltage regulators are as follows:
The contact point voltage regulator uses a coil, set of points, and resistors that limit system voltage. The electronic or solid-state regulators have replaced this older type. For operation, refer to the "Regulation of Generator Output" section of this handbook.
The electronic voltage regulators use an electronic circuit to control rotor field strength and alternator output. It is a sealed unit and is not repairable. The electronic circuit must be sealed to prevent damage from moisture, excessive heat, and vibration. A rubberlike gel surrounds the circuit for protection.
An integral voltage regulator is mounted inside or on the rear of the alternator. This is the most common type used on modern vehicles. It is small, efficient, dependable, and composed of integrated circuits.
To reduce alternator output, the voltage regulator increases the resistance between the battery and the rotor windings. The magnetic field decreases, and less current is induced into the stator windings.
Alternator speed and load determine whether the regulator increases or decreases charging output. If the load is high or rotor speed is low (engine at idle), the regulator senses a drop in system voltage. The regulator then increases the rotor’s magnetic field current until a preset output voltage is obtained. If the load drops or rotor speed increases, the opposite occurs.
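The regulator behavior just described is a feedback loop: compare the sensed system voltage with a setpoint and raise or lower the rotor field current accordingly. The sketch below is a deliberately simplified, hypothetical illustration of that loop — the setpoint, step size, and current limits are invented for the example and do not describe any particular regulator.

def adjust_field_current(system_voltage, field_current,
                         setpoint=14.2, step=0.05,
                         min_current=0.0, max_current=4.0):
    """One pass of the regulator feedback loop (illustrative values only).

    Low system voltage -> increase rotor field current (stronger field,
    more induced stator output); high system voltage -> decrease it.
    """
    if system_voltage < setpoint:
        return min(max_current, field_current + step)
    if system_voltage > setpoint:
        return max(min_current, field_current - step)
    return field_current

# Heavy load at idle: system voltage sags, so field current is stepped up.
print(adjust_field_current(system_voltage=13.6, field_current=2.0))  # 2.05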
Alternator testing and service call for special precautions since the alternator output terminal is connected to the battery at all times. Use care to avoid reversing polarity when performing battery service of any kind. A surge of current in the opposite direction could burn the alternator diodes.
Do not purposely or accidentally "short" or "ground" the system when disconnecting wires or connecting test leads to terminals of the alternator or regulator. For example, grounding of the field terminal at either alternator or regulator will damage the regulator. Grounding of the alternator output terminal will damage the alternator and possibly other portions of the charging system.
Never operate an alternator on an open circuit. With no battery or electrical load in the circuit, alternators are capable of building high voltage (50 to over 110 volts) which may damage diodes and endanger anyone who touches the alternator output terminal.
Alternator maintenance is minimized by the use of pre-lubricated bearings and longer-lasting brushes. If a problem exists in the charging circuit, check for a complete field circuit by placing a large screwdriver on the alternator rear-bearing surface. If the field circuit is complete, there will be a strong magnetic pull on the blade of the screwdriver, which indicates that the field is energized. If there is no field current, the alternator will not charge, because the field is excited by battery voltage.
Should you suspect troubles within the charging system after checking the wiring connections and battery, connect a voltmeter across the battery terminals. If the voltage reading, with the engine speed increased, is within the manufacturer’s recommended specification, the charging system is functioning properly. Should the alternator tests fail, the alternator should be removed for repairs or replacement. Do NOT forget, you must ALWAYS disconnect the cables from the battery first.
To determine what component or components have caused the problem, you will be required to disassemble and test the alternator.
To test the rotor for grounds, shorts, and opens, perform the following:
Figure 13 — Testing for grounds.
Figure 14 — Testing for shorts.
The stator winding can be tested for opens and grounds after it has been disconnected from the alternator end frame and voltage regulator.
If the ohmmeter reading is low when connected between each pair of stator leads, the stator winding is electrically good (Figure 15).
Figure 15 — Testing stator for opens.
Figure 16 — Testing stator for grounds.
A high ohmmeter reading or failure of the test lamp to light when connected from any one of the leads to the stator frame indicates the windings are not grounded (Figure 16). It is not practical to test the stator for shorts due to the very low resistance of the winding.
To test for correct diode operation, disconnect the stator windings and perform the test with an ohmmeter as follows:
After completing the required test and making any necessary repairs or replacement of parts, reassemble the alternator and install it on the vehicle. After installation, start the engine and check that the charging system is functioning properly. Never attempt to polarize an alternator. Attempts to do so serve no purpose and may damage the diodes, wiring, and other charging circuit components.
Charging system tests should be performed when problems point to low alternator voltage and current. These tests will quickly determine the operating condition of the charging system. Common charging system tests are as follows:
Charging system tests are performed in two ways—by using a load tester or by using a volt-ohm-milliammeter (VOM) or multimeter. The load tester provides the more accurate method for testing a charging system because it measures both system current and voltage.
The charging system output test measures system voltage and current under maximum load. To check output with a load tester, connect tester leads as described by the manufacturer, as you may have either an inductive (clip-on) amp pickup type or a noninductive type tester. Testing procedures for an inductive type tester are as follows:
Current output specifications will depend on the size (rating) of the alternator. A vehicle with few electrical accessories may have an alternator rated at 35 amps, whereas a larger vehicle with more electrical requirements could have an alternator rated from 40 to 80 amps. Always check the manufacturer’s service manual for exact values.
A regulator voltage test checks the calibration of the voltage regulator and detects a low or high setting. Most voltage regulators are designed to operate between 13.5 to 14.5 volts. This range is stated for normal temperatures with the battery fully charged.
Set the load tester selector to the correct position using the manufacturer’s manual. With the load control off, run the engine at 2,000 rpm or specified test speed. Note the voltmeter reading and compare it to the manufacturer’s specifications.
If the voltmeter reading is steady and within the manufacturer’s specifications, the regulator setting is okay. However, if the voltage reading is steady but too high or too low, the regulator needs adjustment or replacement. If the reading is not steady, this indicates a bad wiring connection, an alternator problem, or a defective regulator, and further testing is required.
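The three possible outcomes above form a small decision tree. The sketch below encodes it, using the 13.5-to-14.5-volt window quoted earlier as the default specification; the steadiness band of 0.1 volt is an assumption made for the example.

def regulator_voltage_result(readings_volts, spec_low=13.5, spec_high=14.5,
                             steadiness_band=0.1):
    """Interpret several successive voltmeter readings taken at test speed."""
    if max(readings_volts) - min(readings_volts) > steadiness_band:
        return "unsteady reading - check wiring, alternator, and regulator"
    average = sum(readings_volts) / len(readings_volts)
    if spec_low <= average <= spec_high:
        return "regulator setting is okay"
    return "steady but out of range - adjust or replace the regulator"

print(regulator_voltage_result([14.1, 14.2, 14.1]))  # regulator setting is okay
print(regulator_voltage_result([15.2, 15.3, 15.2]))  # adjust or replace the regulator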
A regulator bypass test is an easy and quick way of determining if the alternator, regulator, or circuit is faulty. Procedures for the regulator bypass test are similar to the charging system output test, except that the regulator is taken out of the circuit. Direct battery voltage (unregulated voltage) is used to excite the rotor field. This should allow the alternator to produce maximum voltage output.
Depending upon the system, there are several ways to bypass the voltage regulator. The most common ways are as follows:
Follow the manufacturer’s directions to avoid damaging the circuit. You must NOT short or connect voltage to the wrong wires, or the diodes or voltage regulator may be ruined.
If, while the regulator bypass test is being performed, charging voltage and current increase to normal levels, the regulator is bad. If the charging voltage and current remain the same, then you have a bad alternator.
A circuit resistance test is used to locate faulty wiring, loose connections, partially burnt wire, corroded terminals, or other similar types of problems.
There are two common circuit resistance tests: insulated resistance test and ground circuit resistance test.
To perform an insulated resistance test, connect the load tester as described by the manufacturer. A typical connection setup is shown in Figure 17, View A. Notice how the voltmeter is connected across the alternator output terminal and positive battery terminal.
Figure 17 — Circuit resistance test.
With the vehicle running at a fast idle, rotate the load control knob to obtain a 20-amp current flow at 15 volts or less. All accessories and lights are to be turned off. Read the voltmeter. The voltmeter should NOT read over 0.7-volt drop (0.1 volt per electrical connection) for the circuit to be considered in good condition. However, if the voltage drop is over 0.7 volt, circuit resistance is high and a poor electrical connection exists.
To perform a ground circuit test, place the voltmeter leads across the negative battery terminal and alternator housing (Figure 17, View B).
The voltmeter should NOT read over 0.1 volt per electrical connection. If the reading is higher, this indicates such problems as loose or faulty connections, burnt plug sockets, or other similar malfunctions.
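Both resistance tests use the same 0.1-volt-per-connection budget; the insulated side is quoted at 0.7 volt because that circuit is taken to have seven connections. The sketch below applies the rule; the connection counts passed in are examples, and the count for a real circuit depends on its wiring.

def circuit_resistance_ok(measured_drop_volts, connection_count,
                          per_connection_limit=0.1):
    """Check a measured voltage drop against the 0.1-volt-per-connection budget."""
    return measured_drop_volts <= connection_count * per_connection_limit

print(circuit_resistance_ok(0.4, 7))   # True  - insulated circuit acceptable
print(circuit_resistance_ok(0.9, 7))   # False - high resistance, poor connection
print(circuit_resistance_ok(0.05, 1))  # True  - ground circuit acceptable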
Test Your Knowledge
1. What substance is contained in a positive plate of a fully charged battery?
2. What type of gas collects at the top of a battery?
3. What assembly in the alternator contains the heat sink, the diodes, the diode plate, and the electrical terminals?
4. What type of alternator stator is used in high output alternators?
The internal combustion engine is not capable of self-starting. Automotive engines (both spark-ignition and diesel) are cranked by a small but powerful electric motor. This motor is called a cranking motor, starting motor, or starter.
The battery sends current to the starter when the operator turns the ignition switch to start. This causes a pinion gear in the starter to mesh with the teeth of the ring gear, thereby rotating the engine crankshaft for starting.
The typical starting circuit consists of the battery, the starter motor and drive mechanism, the ignition switch, the starter relay or solenoid, a neutral safety switch (automatic transmissions), and the wiring to connect these components.
The starting motor converts electrical energy from the battery into mechanical or rotating energy to crank the engine (Figure 18). The main difference between an electric starting motor and an electric generator is that in a generator, rotation of the armature in a magnetic field produces voltage. In a motor, current is sent through the armature and the field; the attraction and repulsion between the magnetic poles of the field and armature coil alternately push and pull the armature around. This rotation (mechanical energy), when properly connected to the flywheel of an engine, causes the engine crankshaft to turn.
Figure 18 — Starter.
The construction of all starting motors is very similar. There are, however, slight design variations. The main parts of a starting motor are as follows:
The armature shaft supports the armature assembly as it spins inside the starter housing. The armature core is made of iron and holds the armature windings in place. The iron increases the magnetic field strength of the windings.
The commutator serves as a sliding electrical connection between the motor windings and the brushes and is mounted on one end of the armature shaft. The commutator has many segments that are insulated from each other. As the windings rotate away from the pole shoe (piece), the commutator segments change the electrical connection between the brushes and the windings. This action reverses the magnetic field around the windings. The constant changing electrical connection at the windings keeps the motor spinning.
The brushes ride on top of the commutator. They slide on the commutator to carry battery current to the spinning windings. The springs force the brushes to maintain contact with the commutator as it spins, so that no power interruption occurs. The armature shaft bushing supports the commutator end of the armature shaft.
The pinion gear is a small gear on the armature shaft that engages the ring gear on the flywheel. Most starter pinion gears are made as part of a pinion drive mechanism. The pinion drive mechanism slides over one end of the starter armature shaft. The pinion drive mechanism found on starting motors that you will encounter is one of three designs: the Bendix drive, the overrunning clutch, and the Dyer drive.
Figure 19 — Field winding configurations.
The two windings, parallel (the wiring of the two field coils in parallel) increases their strength because they receive full voltage. Note that two additional pole shoes are used. Though they have no windings, their presence will further strengthen the magnetic field.
The four windings, series-parallel (the wiring of four field coils in a series-parallel combination) creates a stronger magnetic field than the two field coil configuration.
The four windings, series (the wiring of four field coils in series) provides a large amount of low-speed torque, which is desirable for automotive starting motors. However, series wound motors can build up excessive speed if allowed to run free, to the point where they will destroy themselves.
The six windings, series-parallel (three pairs of series-wound field coils) provides the magnetic field for a heavy-duty starter motor. This configuration uses six brushes.
The three windings, two series, one shunt (the use of one field coil that is shunted to ground with a series-wound motor) controls motor speed. Because the shunt coil is not affected by speed, it will draw a steady heavy current, effectively limiting speed.
There are two types of starting motors that you will encounter on equipment: the direct drive starter and the double reduction starter. All starters require the use of gear reduction to provide the mechanical advantage required to turn the engine flywheel and crankshaft.
Direct drive starters make use of a pinion gear on the armature shaft of the starting motor. This gear meshes with teeth on the ring gear. There are 10 to 16 teeth on the ring gear for every one tooth on the pinion gear. Therefore, the starting motor revolves 10 to 16 times for every revolution of the ring gear. In operation, the starting motor armature revolves at a rate of 2,000 to 3,000 revolutions per minute, thus turning the engine crankshaft at speeds up to 200 rpm.
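The cranking-speed figures above follow directly from the gear ratio: crankshaft speed is armature speed divided by the ring-to-pinion ratio. The sketch below works that arithmetic; the 15:1 ratio is an assumed value inside the 10-to-16 range quoted.

def crankshaft_rpm(armature_rpm, ring_to_pinion_ratio):
    """Engine cranking speed produced by a direct drive starter."""
    return armature_rpm / ring_to_pinion_ratio

# A starter armature at 3,000 rpm through an assumed 15:1 reduction
# turns the crankshaft at 200 rpm, consistent with the figures above.
print(crankshaft_rpm(3000, 15))  # 200.0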
The double reduction starter makes use of gear reduction within the starter and the reduction between the drive pinion and the ring gear. The gear reduction drive head is used on heavy-duty equipment.
Figure 20 shows a typical gear reduction starter. The gear on the armature shaft does not mesh directly with the teeth on the ring gear, but with an intermediate gear which drives the driving pinion. This action provides additional breakaway, or starting torque, and greater cranking power. The armature of a starting motor with a gear reduction drive head may rotate as many as 40 revolutions for every revolution of the engine flywheel.
Figure 20 — Gear reduction starter.
A starter motor’s operation is dependent upon the type of drive it contains. Below are the three drive systems, along with an explanation of the operation of each.
The Bendix drive relies on the principle of inertia to cause the pinion gear to mesh with the ring gear (Figure 21). When the starting motor is not operating, the pinion gear is out of mesh and entirely away from the ring gear. When the ignition switch is engaged, the total battery voltage is applied to the starting motor, and the armature immediately starts to rotate at high speed.
Figure 21 — Bendix drive starter.
The pinion, being weighted on one side and having internal screw threads, does not rotate immediately with the shaft but, because of inertia, runs forward on the revolving threaded sleeve until it engages with the ring gear. If the teeth of the pinion and ring gear do not engage, the drive spring allows the pinion to revolve and forces the pinion to mesh with the ring gear. When the pinion gear is engaged fully with the ring gear, the pinion is then driven by the starter through the compressed drive spring and cranks the engine. The drive spring acts as a cushion while the engine is being cranked against compression. It also reduces the severity of the shock on the teeth when the gears engage and when the engine kicks back due to ignition. When the engine starts and runs on its own power, the ring gear drives the pinion at a higher speed than does the starter. This action causes the pinion to turn in the opposite direction on the threaded sleeve and automatically disengage from the ring gear. This prevents the engine from driving the starter.
The overrunning clutch provides positive meshing and demeshing of the starter motor pinion gear and the ring gear (Figure 22). The starting motor armature shaft drives the shell and sleeve assembly of the clutch. The rotor assembly is connected to the pinion gear, which meshes with the engine ring gear. Spring-loaded steel rollers are located in tapered notches between the shell and the rotor. The springs and plungers hold the rollers in position in the tapered notches. When the armature shaft turns, the rollers are jammed between the notched surfaces, forcing the inner and outer members of the assembly to rotate as a unit and crank the engine.
Figure 22 — Overrunning clutch starter.
After the engine is started, the ring gear rotates faster than the pinion gear, thus tending to work the rollers back against the plungers, and thereby causing an overrunning action. This action prevents excessive speed of the starting motor. When the starting motor is released, the collar and spring assembly pulls the pinion out of mesh with the ring gear.
The Dyer drive provides complete and positive meshing of the drive pinion and ring gear before the starting motor is energized (Figure 23). It combines principles of both the Bendix and overrunning clutch drives and is commonly used on heavy-duty engines.
Figure 23 — Dyer drive starter.
A starter solenoid is used to make the electrical connection between the battery and the starting motor. The starter solenoid is an electromagnetic switch; it is similar to other relays but is capable of handling higher current levels. A starter solenoid, depending on the design of the starting motor, has the following functions:
The starter solenoid may be located away from or on the starting motor. When mounted away from the starter, the solenoid only makes and breaks electrical connection. When mounted on the starter, it also slides the pinion gear into the flywheel.
In operation, the solenoid is actuated when the ignition switch is turned or when the starter button is depressed. The action causes current to flow through the solenoid (causing a magnetic attraction of the plunger) to ground. The movement of the plunger causes the shift lever to engage the pinion with the ring gear. After the pinion is engaged, further travel of the plunger causes the contacts inside the solenoid to close and directly connects the battery to the starter.
If cranking continues after the control circuit is broken, it is most likely to be caused by either shorted solenoid windings or by binding of the plunger in the solenoid. Low voltage from the battery is often the cause of the starter making a clicking sound. When this occurs, check all starting circuit connections for cleanliness and tightness.
Test Your Knowledge
5. What type of starter uses gear reduction within the starter and gear reduction between the drive pinion and the ring gear?
6. What term refers to the center housing of a starter that holds the field coils and pole shoes?
Vehicles equipped with automatic transmissions require the use of a neutral safety switch. The neutral safety switch prevents the engine from being started unless the shift selector of the transmission is in neutral or park. It disables the starting circuit when the transmission is in gear.
The neutral safety switch is wired into the circuit going to the starter solenoid. When the transmission is in forward or reverse gear, the switch is in the open position (disconnected). This action prevents current from activating the solenoid and starter when the ignition switch is turned to the start position. When the transmission is in neutral or park, the switch is closed (connected), allowing current to flow to the starter when the ignition is turned.
A misadjusted or bad neutral safety switch can keep the engine from cranking. If the vehicle does not start, you should check the action of the neutral safety switch by moving the shift lever into various positions while trying to start the vehicle. If the starter begins to work, the switch needs to be readjusted.
To readjust a neutral safety switch, loosen the fasteners that hold the switch. With the switch loosened, place the shift lever into park (P). Then, while holding the ignition switch in the start position, slide the neutral switch on its mount until the engine cranks. Without moving the switch, tighten the fasteners. The engine should now start with the shift lever in park or neutral. Check for proper operation after the adjustment.
If, after adjusting the switch, normal operation is not resumed, you may need to test the switch. All that is required to test the switch is a 12-volt test light. To test the switch, touch the test light to the switch output wire connection while moving the shift lever. The light should glow as the shift lever is slid into park or neutral. The light should not work in any other position. If the light is not working properly, check the mechanism that operates the switch. If the problem is in the switch, replace it.
Some late model vehicles have the brake light switch wired into the same control circuit as the neutral safety switch. In order to operate the starter, you must press and hold the brake pedal. This is in addition to ensuring that the vehicle is in neutral or park and, in the case of a manual transmission, that the clutch pedal is pressed down as well.
Vehicles equipped with manual transmissions require the use of a clutch safety switch to prevent engine cranking. The switch is closed only when the operator presses the clutch pedal down. This prevents the vehicle from moving while the engine is cranking.
The condition of the starting motor should be carefully checked at each PM service. This permits you to take appropriate action, where needed, so equipment failures caused by a faulty starter can be reduced, if not eliminated. A visual inspection for clean, tight electrical connections and secure mounting at the flywheel housing is the extent of the maintenance check. Then operate the starter and observe the speed of rotation and the steadiness of operation.
Do NOT crank the engine for more than 30 seconds or starter damage can result. If the starter is cranked too long, it will overheat. Allow the starter to cool for a few minutes if more cranking time is needed.
If the starter is not operating properly, remove the starter, disassemble it, and check the commutator and brushes. If the commutator is dirty, you may clean it with a piece of No. 00 sandpaper. However, if the commutator is rough, pitted, or out-of-round or if the insulation between the commutator bars is high, it must be reconditioned using an armature lathe.
Brushes should be at least half of their original size. If not, replace them. The brushes should have free movement in the brush holders and make good, clean contact with the commutator.
Once you have checked the starter and repaired it as needed, you should reassemble it, making sure that the starter brushes are seated. Align the housings and install the bolts securely. Install the starter in the opening in the flywheel housing and tighten the attaching bolts to the specified torque. Connect the cable and wire lead firmly to clean terminals.
There are many ways of testing a starting motor circuit to determine its operating condition. The most common tests are as follows:
The starter current draw test measures the amount of amperage used by the starting circuit. It quickly tells you about the condition of the starting motor and other circuit components. If the current draw is lower or higher than the manufacturer’s specifications, there is a problem in the circuit.
To perform a starter current draw test, you may use either a voltmeter or inductive ammeter or a battery load tester. These meters are connected to the battery to measure battery voltage and current flow out of the battery. For setup procedures, use the manufacturer’s manual for the type of meter you intend to use.
To keep a gasoline engine from starting during testing, disconnect the coil supply wire or ground the coil wire. With a diesel engine, disable the fuel injection system or unhook the fuel shutoff solenoid. Check the manufacturer’s service manual for details.
With the engine ready for testing, crank the engine and note the voltage and current readings. Check the manufacturer’s service manual. If they are not within specifications, there is something wrong with the starting circuit.
A voltage drop test will quickly locate a component with higher than normal resistance. This test provides an easy way of checking circuit condition. You do NOT have to disconnect any wires or components to check for voltage drops. The two types of voltage drop tests are the insulated circuit resistance test and the starter ground circuit test.
The insulated circuit resistance test checks all components between the positive terminal of the battery and the starting motor for excess resistance. Using a voltmeter, connect the leads to the positive terminal of the battery and the starting motor output terminal.
With the ignition or injection system disabled, crank the engine. Note the voltmeter reading. It should not be over 0.5 volts. If voltage drop is greater, something within the circuit has excessive resistance. There may be a burned or pitted solenoid contact, loose electrical connections, or other malfunctions. Each component must then be tested individually.
The starter ground circuit test checks the circuit between the starting motor and the negative terminal of the battery.
Using a voltmeter, connect the leads to the negative terminal of the battery and to the end frame of the starting motor. Crank the engine and note the voltmeter reading. If it is higher than 0.5 volts, check the voltage drop across the negative battery cable. The engine may not be properly grounded. Clean, tighten, or replace the battery cable if needed. A battery cable problem can produce symptoms similar to a dead battery, bad solenoid, or weak starting motor. If the cables do NOT allow enough current to flow, the starter will turn slowly or not at all.
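Both voltage drop tests apply the same 0.5-volt limit, so a single comparison covers them. The helper below is only a sketch of that pass/fail check.

def starter_circuit_drop_ok(measured_drop_volts, limit=0.5):
    """Pass/fail for the insulated circuit and ground circuit drop tests."""
    return measured_drop_volts <= limit

print(starter_circuit_drop_ok(0.3))  # True  - circuit resistance acceptable
print(starter_circuit_drop_ok(0.8))  # False - excessive resistance; test each
                                     #         component or cable individually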
Test Your Knowledge
7. What safety switch prevents a vehicle equipped with a manual transmission from starting in an unsafe situation?
8. What is the maximum amount of time, in seconds, a starter may be cranked before damage can occur?
The ignition circuit supplies high voltage surges (some as high as 100,000 volts in electronic ignition circuits) to the spark plugs in the engine cylinders. These surges produce electric sparks across the spark plug gaps. The heat from the spark ignites the compressed air-fuel mixture in the combustion chambers. When the engine is idling, the spark appears at the spark plug gap just as the piston nears top dead center (TDC) on the compression stroke. When the engine is operating at higher speeds, the spark is advanced. It is moved ahead and occurs earlier in the compression stroke. This design gives the compressed mixture more time to burn and deliver its energy to the pistons.
The functions of an ignition circuit are as follows:
The ignition circuit is actually made of two separate circuits which work together to cause the electric spark at the spark plugs: the primary and secondary.
The primary circuit of the ignition circuit includes all of the components and wiring operating on low voltage (battery or alternator voltage). Wiring in the primary circuit uses conventional wire, similar to the wire used in other electrical circuits on the vehicle.
The secondary circuit of the ignition circuit is the high voltage section. It consists of the wire and components between the coil output and the spark plug ground. Wiring in the secondary circuit must have a thicker insulation than that of the primary circuit to prevent leaking (arcing) of the high voltage.
Various ignition circuit components are designed to achieve the functions of the ignition circuit. Basic ignition circuit components are as follows:
The ignition switch enables the operator to turn the ignition on for starting and running the engine and to turn it off to stop the engine (Figure 24). Most automotive ignition switches incorporate four positions: off, accessory, ignition on, and start:
Figure 24 — Ignition switch.
The off position shuts off the electrical system. Systems such as the headlights are usually not wired through the ignition switch and will continue to operate.
The accessory position turns on power to the entire vehicle’s electrical system with the exception of the ignition circuit. The ignition-on position turns on the entire electrical system including the ignition circuit.
The start position energizes the starter solenoid circuit to crank the engine.
The start position is spring-loaded to return automatically to the ignition-on position when the key is released.
An ignition distributor can be a contact point (Figure 25, View A), or pickup coil type (Figure 25, View B). A contact point distributor is commonly found in older vehicles, whereas the pickup coil type distributor is used on many modern vehicles.
Figure 25 — Ignition distributors.
The ignition distributor has several functions:
The distributor cap is an insulating plastic component that covers the top of the distributor housing. Its center terminal transfers voltage from the coil wire to the rotor. The distributor cap also has outer terminals that send electric arcs to the spark plugs. Metal terminals are molded into the plastic cap to provide electrical connections.
The distributor rotor transfers voltage from the coil wire to the spark plug wires. The rotor is mounted on top of the distributor shaft. It is an electrical switch that feeds voltage to each spark plug wire in turn.
A metal terminal on the rotor touches the distributor cap center terminal. The outer end of the rotor almost touches the outer cap terminals. Voltage is high enough that it can jump the air space between the rotor and cap. Approximately 4,000 volts are required for the spark to jump this rotor-to-cap gap.
An electronic ignition, also called solid state ignition, uses an electronic control circuit and distributor pickup coil to operate the ignition coil.
An electronic ignition is more dependable than a system of contact points because there are no mechanical breakers to burn out or wear down. This avoids trouble with ignition timing.
An electronic ignition is capable of producing a significantly higher secondary voltage over a points system. This allows for a wider spark plug gap and higher voltage to burn lean air-fuel mixtures. Leaner mixtures are now used to reduce emissions and improve fuel economy.
A distributorless ignition uses multiple ignition coils, a coil control unit, engine sensors, and a computer to operate the spark plugs (Figure 26).
Figure 26 — Distributorless ignition.
The electronic coil module consists of more than one coil and a coil control unit that operates the coils. The module’s control unit performs about the same function as the Ignition Control Module (ICM) in an electronic ignition. It will analyze data from different engine sensors and the system computer.
The coils are wired so they fire two spark plugs at the same time. One plug will fire on the power stroke and the other will fire on the exhaust stroke (where it has no effect on engine operation). This system reduces the number of ignition coils required to operate the engine. For instance, a four-cylinder engine would have only two coils, a six-cylinder only three coils, and so on.
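Because each coil serves two cylinders, the number of coils is simply half the number of cylinders. The one-line calculation below restates that; the function name is illustrative only.

def distributorless_coil_count(cylinders):
    """Each coil fires two plugs at once, so coils = cylinders / 2."""
    if cylinders % 2:
        raise ValueError("paired firing requires an even number of cylinders")
    return cylinders // 2

print(distributorless_coil_count(4))  # 2
print(distributorless_coil_count(6))  # 3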
A camshaft position sensor is installed in place of the ignition distributor. It sends an electrical pulse to the coil control unit providing data on camshaft and valve position.
A coil over plug ignition system has coils mounted on top of each spark plug (Figure 27). This type of system operates very similarly to the distributorless ignition except for the lack of spark plug wires and the increased number of coils. Sensor inputs allow the electronic control module to alter ignition timing with changes in operating conditions.
Figure 27 — Coil over plug ignition.
The spark plug consists of a porcelain insulator in which there is an insulated electrode supported by a metal shell with a grounded electrode. Spark plugs have the simple purpose of supplying a fixed gap in the cylinder across which the high voltage surges from the coil must jump after passing through the distributor.
The spark plugs use ignition coil high voltage to ignite the fuel mixture. Somewhere between 4,000 and 10,000 volts are required to make current jump the gap at the plug electrodes. This is much lower than the output potential of the coil.
Spark plug gap is the distance between the center and side electrodes. Normal gap specifications range from .030 to .060 inch. Smaller spark plug gaps are used on older vehicles equipped with contact point ignition systems.
Spark plugs are either resistor or non-resistor types (Figure 28). A resistor spark plug has internal resistance (approximately 10,000 ohms) designed to reduce the static in radios. Most new vehicles require resistor type plugs. Non-resistor spark plugs have a solid metal rod forming the center electrode. This type of spark plug is NOT commonly used except for racing and off-road vehicles.
Figure 28 — Spark plugs.
The heat range of the spark plug determines how hot the plug will get. The length and diameter of the insulator tip and the ability of the spark plug to transfer heat into the cooling system determine spark plug heat range.
A hot spark plug has a long insulator tip that prevents heat transfer into the water jackets. It will also burn off any oil deposits. This provides a self-cleaning action. A cold spark plug has a shorter insulator tip and operates at a cooler temperature. The cooler tip helps prevent overheating and pre-ignition. A cold spark plug is used in engines operated at high speeds.
Vehicle manufacturers recommend a specific spark plug heat range for their engines. The heat range is coded and given as a number on the spark plug insulator. The larger the number on the plug, the hotter the spark plug tip will operate. For example, a 54 plug would be hotter than a 44 or 34 plug.
The only time you should change from spark plug heat-range specifications is when abnormal engine or operating conditions are encountered. For instance, if the plug runs too cool, sooty carbon will deposit on the insulator around the center electrode. This deposit could soon build up enough to short out the plug. Then high voltage surges would leak across the carbon instead of producing a spark across the spark plug gap. Using a hotter plug will burn this carbon deposit away or prevent it from forming.
Spark plug reach is the distance between the end of the spark plug threads and the seat or sealing surface of the plug. Plug reach determines how far the plug reaches through the cylinder head. If spark plug reach is too long, the spark plug will protrude too far into the combustion chamber, and the piston at TDC may strike the electrode. However, if the reach is too short, the plug electrode may not extend far enough into the cylinder head, and combustion efficiency will be reduced. A spark plug must reach into the combustion chamber far enough so that the spark gap will be properly positioned in the combustion chamber without interfering with the turbulence of the air-fuel mixture or reducing combustion action.
The spark plug wires carry the high voltage electric current from the distributor cap side terminals to the spark plugs. In vehicles with distributorless ignition, the spark plug wires carry coil voltage directly to the spark plugs. The two types of spark plug wires are solid wire and resistance wire.
Solid wire spark plug wires are used on older vehicles. The wire conductor is simply a strand of metal wire. Solid wires can cause radio interference and are no longer used.
Resistance spark plug wires consist of carbon-impregnated strands of rayon braid. They are used on modern vehicles because they contain internal resistance that prevents radio interference. Also known as radio interference wires, they have approximately 10,000 ohms per foot. This prevents high voltage-induced popping or cracking of the radio speakers.
On the outer ends of the spark plug wires, boots protect the metal connectors from corrosion, oil, and moisture that would permit high voltage to leak across the terminal to the shell of the spark plug.
The basic difference between the contact point and the electronic ignition system is in the primary circuit. The primary circuit in a contact point ignition system is open and closed by contact points. In the electronic system, the primary circuit is open and closed by the electronic control unit (ECU).
The secondary circuits are practically the same for the two systems. The difference is that the distributor, ignition coil, and wiring are altered to handle the high voltage produced by the electronic ignition system. One advantage of this higher voltage (up to 60,000 volts) is that spark plugs with wider gaps can be used. This results in a longer spark, which can ignite leaner air-fuel mixtures. As a result, engines can run on leaner mixtures for better fuel economy and lower emissions.
The components of an electronic ignition system regardless of the manufacturer all perform the same functions. Each manufacturer has its own preferred terminology and location of the components. The basic components of an electronic ignition system are as follows:
In operation, with the engine running, the trigger wheel rotates inside the distributor. As a tooth of the trigger wheel passes the pickup coil, the magnetic field strengthens around the pickup coil. This action changes the output voltage or current flow through the coil. As a result, an electrical surge is sent to the electronic control unit as the trigger wheel teeth pass the pickup coil.
The electronic control unit converts these electrical surges into on/off current cycles for the ignition coil. When the ECU is on, current passes through the primary windings of the ignition coil and develops a magnetic field. Then, when the trigger wheel and pickup coil turn off the ECU, the magnetic field inside the ignition coil collapses and fires a spark plug.
Ignition timing refers to how early or late the spark plugs fire in relation to the position of the engine pistons. Ignition timing must vary with engine speed, load, and temperature.
Timing advance happens when the spark plugs fire earlier on the compression strokes of the engine. The timing is set several degrees before top dead center (TDC). More timing advance is required at higher speeds to give combustion enough time to develop pressure on the power stroke.
Timing retard happens when the spark plugs fire later on the compression strokes. This is the opposite of timing advance. Spark retard is required at lower speeds and under high load conditions. Timing retard prevents the fuel from burning too much on the compression stroke, which would cause spark knock or ping.
The basic methods to control ignition system timing are as follows:
Centrifugal advance makes the ignition coil and spark plugs fire sooner as engine speed increases, using spring-loaded weights, centrifugal force, and lever action to rotate the distributor cam or trigger wheel. Spark timing is advanced by rotating the distributor cam or trigger wheel against distributor shaft rotation. This action helps correct ignition timing for maximum engine power. Basically the centrifugal advance consists of two advance weights, two springs, and an advance lever.
During periods of low engine speed, the springs hold the advance weights inward towards the distributor cam or trigger wheel. At this time there is not enough centrifugal force to push the weights outward. Timing stays at its normal initial setting.
As speed increases, centrifugal force on the weights moves them outward against spring tension. This movement causes the distributor cam or trigger wheel to move ahead. With this design, the higher the engine speed, the faster the distributor shaft turns, the farther out the advance weights move, and the farther ahead the cam or trigger wheel is advanced. At a preset engine speed, the lever strikes a stop and centrifugal advance reaches maximum.
The action of the centrifugal advance causes the contact points to open sooner, or the trigger wheel and pickup coil turn off the ECU sooner. This causes the ignition coil to fire with the engine pistons not as far up in the cylinders.
The vacuum advance provides additional spark advance when engine load is low at part throttle position. It is a method of matching ignition timing with engine load. The vacuum advance increases fuel economy because it helps maintain ideal spark advance at all times. A vacuum advance consists of a vacuum diaphragm, link, movable distributor plate, and a vacuum supply hose.
At idle, the vacuum port from the carburetor or throttle body to the distributor advance is covered, so no vacuum is applied to the vacuum diaphragm, and spark timing is not advanced. At part throttle, the throttle valve uncovers the vacuum port and the port is exposed to engine vacuum.
The vacuum pulls the diaphragm outward against spring force. The diaphragm is linked to a movable distributor plate, which is rotated against distributor shaft rotation and spark timing is advanced. The vacuum advance does not produce any advance at full throttle. When the throttle valve is wide open, vacuum is almost zero. Thus vacuum is not applied to the distributor diaphragm and the vacuum advance does not operate.
The computerized advance, also known as an electronic spark advance system, uses various engine sensors and a computer to control ignition timing. The engine sensors check various operating conditions and send electrical data to the computer. The computer can then change ignition timing for maximum engine efficiency.
Ignition system engine sensors include the following:
The computer receives different current or voltage levels (input signals) from these sensors. It is programmed to adjust ignition timing based on engine conditions. The computer may be mounted on the air cleaner, under the dash, on a fender panel, or under a seat.
The following is an example of the operation of a computerized advance. A vehicle is traveling down the road at 50 mph; the speed sensor detects moderate engine speed. The throttle position sensor detects part throttle, and the air inlet and coolant temperature sensors report normal operating temperatures. The intake vacuum sensor sends high vacuum signals to the computer.
The computer receives all the data and calculates that the engine requires maximum spark advance. The timing would occur several degrees before TDC on the compression stroke. This action assures that high fuel economy is attained on the road.
If the operator begins to pass another vehicle, the intake vacuum sensor detects a vacuum drop to near zero and sends a signal to the computer. The throttle position sensor detects a wide open throttle, and the other sensor outputs confirm the change. The computer receives and calculates the data, then, if required, retards ignition timing to prevent spark knock or ping.
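The two scenarios above amount to a mapping from sensor inputs to a timing decision. The sketch below is a heavily simplified, hypothetical illustration of that logic; production ignition computers use calibrated timing maps and many more inputs, not a two-branch rule.

def spark_timing_decision(throttle_position, intake_vacuum_in_hg):
    """Rough illustration of computerized advance logic (values assumed).

    throttle_position: 0.0 (closed) to 1.0 (wide open).
    intake_vacuum_in_hg: manifold vacuum in inches of mercury.
    """
    if throttle_position >= 0.9 and intake_vacuum_in_hg < 3:
        return "retard timing - heavy load, prevent spark knock or ping"
    if intake_vacuum_in_hg > 15 and throttle_position < 0.5:
        return "maximum spark advance - light load cruise, best economy"
    return "use base timing map"

# Cruising at 50 mph, part throttle, high vacuum:
print(spark_timing_decision(0.3, 18))
# Passing at wide-open throttle, vacuum near zero:
print(spark_timing_decision(1.0, 1))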
Ignition troubles can result from a myriad of problems, from faulty components to loose or damaged wiring. Unless the vehicle stops on the job, the operator will report trouble indications, and the equipment is turned in to the shop for repairs.
Unless the trouble is known, a systematic procedure should be followed to locate the cause. Remember, electric current will follow the path of least resistance. Trace ignition wiring while checking for grounds, shorts, and open circuits. Bare wires, loose connections, and corrosion are found through visual inspection.
After checking the system, you must evaluate the symptoms and narrow down the possible causes. Use your knowledge of system operation, a service manual troubleshooting chart, basic testing methods, and common sense to locate the trouble. Many shops have specialized equipment that provides the mechanic a quick and easy means of diagnosing ignition system malfunctions.
Bad spark plugs cause a wide range of problems such as misfiring, lack of power, poor fuel economy, and hard starting. After prolonged use, the spark plug tip can become coated with ash, oil, and other residue. The spark plug electrodes can also burn and widen the gap. This makes it more difficult for the ignition system to produce an electric arc between the electrodes.
To read spark plugs closely, inspect and analyze the condition of each spark plug tip and insulator. This will give you information on the condition of the engine, the fuel system, and the ignition system. The conditions commonly encountered with spark plugs are as follows:
Figure 29 — Spark plug conditions.
When a spark plug is removed for cleaning or inspection, it should be re-gapped to the engine manufacturer’s specifications. New spark plugs must also be re-gapped before installation, as they may have been dropped or mishandled and may not be within specifications.
Use a wire type feeler gauge to measure spark plug gap. Slide the feeler gauge between the electrodes. If needed, bend the side electrode until the feeler gauge fits snugly. The gauge should drag slightly as it is pulled in and out of the gap. Spark plug gaps vary from 0.030 inch on contact point ignitions to over 0.060 inch on electronic ignition systems.
When you are reinstalling spark plugs, tighten them to the manufacturer’s recommendation. Some manufacturers give spark plug torque, while others recommend bottoming the plugs on the seat and then turning an additional one-quarter to one-half turn. Refer to the manufacturer’s service manual for exact procedures.
A faulty spark plug wire can have a burned or broken conductor, or it can have deteriorated insulation. Most spark plug wires have a resistance conductor that can be easily separated. If the conductor is broken, voltage and current cannot reach the spark plug. If the insulation is faulty, sparks may leak through to ground or to another wire instead of reaching the spark plug. To test the wires for proper operation, measure the resistance of each wire with an ohmmeter and compare the reading with the manufacturer's specifications.
Installing a new spark plug wire is a simple task, especially when you replace one wire at a time. Wire replacement is more complicated if all of the wires have been removed. Then you must use the engine firing order and cylinder numbers to route each wire correctly. You can use the service manual to trace the wires from each distributor cap tower to the correct spark plug.
The distributor is critical to the proper operation of the ignition system. The distributor senses engine speed, alters ignition timing, and distributes high voltage to the spark plugs. If any part of the distributor is faulty, engine performance suffers.
When problems point to possible distributor cap or rotor troubles, remove and inspect them. The distributor cap should be carefully checked to see that sparks have not been arcing from point to point. Both interior and exterior must be clean. The firing points should not be eroded, and the interior of the towers must be clean.
The rotor tip, from which the high-tension spark jumps to each distributor cap terminal, should not be worn. It also should be checked for excessive burning, carbon trace, looseness, or other damage. Any wear or irregularity will result in excessive resistance to the high-tension spark. Make sure that the rotor fits snugly on the distributor shaft.
A common problem arises when a carbon trace forms on the inside of the distributor cap or outer edge of the rotor. The carbon trace will short coil voltage to ground or to a wrong terminal lug in the distributor cap. A carbon trace will cause the spark plugs to either fire poorly or not at all.
Using a droplight, check the inside of the distributor cap for cracks and carbon trace. Carbon trace is black, which makes it hard to see on a black colored distributor cap. If you find carbon trace or a crack, replace the distributor cap or rotor.
In a contact point distributor, there are two areas of concern: the contact points and the condenser.
Bad contact points cause a variety of engine performance problems, including high-speed missing, no-start conditions, and many other ignition troubles. Visually inspect the surfaces of the contact points to determine their condition. Points with burned and pitted contacts or with a worn rubbing block must be replaced. However, if the points look good, point resistance should be measured. Turn the engine over until the points are closed, and then connect an ohmmeter between the primary point lead and ground. If the resistance reading is too high, the points are burned and must be replaced.
A faulty condenser may leak (allow some DC current to flow to ground), be shorted (direct electrical connection to ground), or be opened (broken lead wire to the condenser foils). If the condenser is leaking or open, it will cause point arcing and burning. If the condenser is shorted, primary current will flow to ground and the engine will not start. To test a condenser using an ohmmeter, connect the meter to the condenser and to ground. The meter should register slightly and then return to infinity (maximum resistance). Any continuous reading other than infinity indicates that the condenser is leaking and must be replaced.
Installing contact points is a relatively simple procedure but must be done with precision and care in order to achieve good engine performance and economy. Make sure the points are clean and free of any foreign material.
Proper alignment of the contact points is extremely important (Figure 30). If the faces of the contact points do not touch each other fully, heat generated by the primary current cannot be dissipated and rapid burning takes place. The contacts are aligned by bending the stationary contact bracket only. Never bend the movable contact arm. Ensure the contact arm-rubbing block rests flush against the distributor cam. Place a small amount of an approved lubricant on the distributor cam to reduce friction between the cam and rubbing block. Once you have installed the points, you can adjust them using either a feeler gauge or dwell meter.
Figure 30 — Contact point alignment.
To use a feeler gauge to set the contact points, turn the engine over until the points are fully open. The rubbing block should be on top of a distributor cam lobe. With the points open, slide the specified thickness feeler gauge between them. Adjust the points so that there is a slight drag on the blade of the feeler gauge. Depending upon point design, use a screwdriver or Allen wrench to open and close the points. Tighten the hold-down screws and recheck the point gap. Typical point gap settings average around .015 inch for eight-cylinder engines and .025 inch for six- and four-cylinder engines. For the gap setting of the engine you are working on, consult the manufacturer's service manual.
Ensure the feeler gauge is clean before inserting it between the points. Oil and grease reduce the service life of the points.
To use a dwell meter for adjusting contact points, connect the red lead of the dwell meter to the distributor side of the ignition coil (wire going to the contact points). Connect the black lead to ground.
If the distributor cap has an adjustment window, the points should be set with the engine running. With the meter controls set properly, adjust the points through the window of the distributor cap using an Allen wrench or a special screwdriver. Turn the point adjustment screw until the dwell meter reads within manufacturer’s specification. However, if the distributor cap does not have an adjustment window, remove the distributor cap and ground the ignition coil wire. Then crank the engine; this action will simulate engine operation and allow point adjustment with the dwell meter.
Dwell specifications vary with the number of cylinders. An eight-cylinder engine requires 30 degrees of dwell; an engine with fewer cylinders requires more dwell. Always consult the manufacturer's service manual for exact dwell values.
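A quick back-of-the-envelope figure (illustrative arithmetic, not a specification) shows why: the distributor cam allots each cylinder $360^\circ$ divided by the number of cylinders, so

$$\frac{360^\circ}{8} = 45^\circ \text{ per lobe}, \qquad \frac{360^\circ}{6} = 60^\circ, \qquad \frac{360^\circ}{4} = 90^\circ.$$

On an eight-cylinder engine the specified 30 degrees of dwell keeps the points closed for roughly two-thirds of each lobe's travel; engines with fewer cylinders have more cam degrees per lobe and therefore carry larger dwell specifications.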
Dwell should remain constant as engine speed increases or decreases. However, if the distributor is worn, you can have a change in the dwell meter reading. This is known as dwell variation. If dwell varies more than 3 degrees, the distributor should either be replaced or rebuilt. Also, a change in the point gap or dwell will change ignition timing. For this reason, the points should always be adjusted before ignition timing.
Most electronic ignition distributors use a pickup coil to sense trigger wheel rotation and speed. The pickup coil sends small electrical impulses to the ECU. If the distributor fails to produce these electrical impulses properly, the ignition system can quit functioning.
A faulty pickup coil will produce a wide range of engine troubles, such as stalling, loss of power, or failure to start at all. If the small windings in the pickup coil break, they will cause problems only under certain conditions. It is important to know how to test a pickup coil for proper operation.
The pickup coil ohmmeter test compares actual pickup resistance with the manufacturer’s specifications. If the resistance is too high or low, the pickup coil is faulty. To perform this test, connect the ohmmeter across the output leads of the pickup coil. Wiggle the wire to the pickup coil and observe the meter reading. This will assist in locating any breaks in the wires to the pickup. Also, using a screwdriver, lightly tap the coil. This action will uncover any break in the coil windings.
Pickup coil resistance varies between 250 and 1,500 ohms, and you should refer to the service manual for exact specifications. Any change in the readings during the pickup coil resistance test indicates the coil should be replaced. Refer to the manufacturer’s service manual for instructions for the removal and replacement of the pickup coil.
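The pass/fail logic of the resistance test can be sketched in R as shown below. The 250 to 1,500 ohm limits come from the range quoted above, but the function name and the 50-ohm stability tolerance are made up for the illustration; always use the actual service manual figures.

check_pickup_coil <- function(readings_ohms, spec_min = 250, spec_max = 1500) {
  out_of_spec <- any(readings_ohms < spec_min | readings_ohms > spec_max)
  unstable <- (max(readings_ohms) - min(readings_ohms)) > 50   # reading changed while wiggling or tapping
  if (out_of_spec || unstable) "replace the pickup coil" else "pickup coil resistance OK"
}

check_pickup_coil(c(780, 782, 779, 781))    # steady and within range
check_pickup_coil(c(780, 1600, 775, 900))   # reading jumps while the wires are wiggled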
Once you have replaced the pickup coil, you need to set the pickup coil air gap. The air gap is the space between the pickup coil and the trigger wheel tooth. To obtain an accurate reading, use a nonmagnetic feeler gauge (plastic or brass).
With one tooth of the trigger wheel pointing at the pickup coil, slide the correct thickness non-magnetic feeler gauge between the trigger wheel and the pickup coil. Move the pickup coil in or out until the correct air gap is set. Tighten the pickup coil screws and double check the air gap setting.
The ignition system must be timed so the sparks jump across the spark plug gaps at exactly the right time. Adjusting the distributor on the engine so that the spark occurs at this correct time is called setting the ignition timing. The ignition timing is normally set at idle or a speed specified by the engine manufacturer. Before measuring engine timing, disconnect and plug the vacuum advance hose going to the distributor. This action prevents the vacuum advance from functioning and upsetting the readings. Make the adjustment by loosening the distributor hold-down screw and turning the distributor in its mounting.
Turning the distributor housing against the distributor shaft rotation advances the timing. Turning the distributor housing with shaft rotation retards the timing (Figure 31). When the ignition timing is too advanced, the engine may suffer from spark knock or ping.
Figure 31 — Determining direction of rotor rotation.
When ignition timing is too retarded, the engine will have poor fuel economy and power and will be very sluggish during acceleration. If extremely retarded, combustion flames blowing out of the open exhaust valve can overheat the engine and crack the exhaust manifolds.
A timing light is used to measure ignition timing. It normally has three leads—two small leads that connect to the battery, and one larger lead that connects to the number one spark plug wire. Depending on the type of timing light, the large lead may clip around the plug wire (inductive type), or it may need to be connected directly to the metal terminal of the plug wire (conventional type).
Draw a chalk line over the correct timing mark to make it easier to see. The timing marks may be on the front cover and harmonic balancer of the engine, or they may be on the engine flywheel.
With the engine running, aim the flashing timing light at the timing mark and reference pointer. The flashing timing light will make the mark appear to stand still. If the timing mark and the pointer do not line up, turn the distributor in its mounting until the timing mark and pointer are aligned. Tighten the distributor hold-down screw.
Keep your hands and the timing light leads away from the engine fan and belts. The spinning fan and belts can damage the light or cause serious personal injury.
After setting the initial ignition timing, you should check to see if the automatic advance mechanism is working. This can be done by keeping the flashing timing light aimed at the timing mark and gradually increasing engine speed. If the advance mechanism is operating, the timing mark should move away from the pointer. If the timing mark fails to move as speed increases, or if it hesitates and then suddenly jumps, the advance mechanism is faulty and should be repaired or replaced.
Replace the distributor vacuum line and see if timing still conforms to the manufacturer’s specifications. If the timing is NOT advanced when the vacuum line is connected and the throttle is opened slightly, the vacuum advance unit or tubing is defective.
Most computer-controlled ignition systems have no provision for timing adjustment. A few, however, have a tiny screw or lever on the computer for small ignition timing changes.
A computer-controlled ignition system has what is known as base timing. Base timing is the ignition timing without computer-controlled advance. Base timing is checked by disconnecting a wire connector in the computer wiring harness. This wire connector may be found on or near the engine or sometimes next to the distributor. When in the base timing mode, a conventional timing light can be used to measure ignition timing. If ignition timing is not correct, you can rotate the distributor, in some cases, or move the mounting for the engine speed or crank position sensor. If base timing cannot be adjusted, the electronic control unit or other components will have to be replaced. Always refer to the manufacturer’s service manual when timing a computer-controlled ignition system.
Test Your Knowledge
9. Of the two circuits within the ignition circuit, which one uses conventional wiring?
10. What are the two types of spark plugs?
The lighting circuit includes the battery, vehicle frame, all the lights, and various switches that control their use. The lighting circuit is known as a single-wire system since it uses the vehicle frame for the return.
The complete lighting circuit of a vehicle can be broken down into individual circuits, each having one or more lights and switches. In each separate circuit, the lights are connected in parallel, and the controlling switch is in series between the group of lights and the battery. The marker lights, for example, are connected in parallel and are controlled by a single switch.
In some installations, one switch controls the connections to the battery, while a selector switch determines which of two circuits is energized. The headlights, with their high and low beams, are an example of this type of circuit. In some instances, such as the courtesy lights, several switches may be connected in parallel so that any switch may be used to turn on the light.
When a wiring diagram is being studied, all light circuits can be traced from the battery through the ammeter to the switch (or switches) to the individual light.
The headlights are sealed beam lamps that illuminate the road during nighttime operation (Figure 32). Headlights consist of a lens, one or two elements, and an integral reflector. When current flows through an element, the element gets white hot and glows. The reflector and lens direct the light forward. Many modern passenger vehicles use halogen or HID headlights.
Figure 32 — Sealed beam headlight assembly
The headlight switch is an on/off switch and rheostat (variable resistor) in the dash panel or on the steering column. The headlight switch controls current flow to the lamps of the headlight system. The rheostat is for adjusting the brightness of the instrument panel lights.
Small gas-filled incandescent lamps with tungsten filaments are used on automotive and construction equipment (Figure 33). The filaments supply the light when sufficient current is flowing through them. They are designed to operate on a low voltage current of 12 or 24 volts, depending upon the voltage of the electrical system used.
Figure 33 — Different types of lamps.
Lamps are rated as to size by the candlepower (luminous intensity) they produce. They range from small 1/2-candlepower bulbs to large 50-candlepower bulbs. The greater the candlepower of the lamp, the more current it requires when lighted. Lamps are identified by a number on the base. When you replace a lamp in a vehicle, be sure the new lamp is of the proper rating. The lamps within the vehicle will be of the single- or double-contact types with nibs to fit bayonet sockets (Figure 34).
Figure 34 — Double-contact bulb and bayonet socket.
Most vehicles made today use a halogen headlamp bulb insert (Figure 35, View A). These are small heat-resistant quartz bulbs filled with halogen gas to protect the filament from damage. They are inserted into a headlight lens assembly, which protects the bulb and disperses the light given off by the halogen bulb.
Never touch the glass surface of a halogen or HID light. The oil in your skin and the high operating temperature can shorten the life of the bulb or cause the glass to shatter.
The white halogen bulb increases visibility and increases output by about 25% while drawing the same amount of current. A typical low beam bulb is 45 watts and a high beam bulb is 65 watts.
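As a rough arithmetic check (assuming a nominal 12-volt system), the current each filament draws follows from $I = P / V$:

$$I_{low} = \frac{45\ \text{W}}{12\ \text{V}} \approx 3.8\ \text{A}, \qquad I_{high} = \frac{65\ \text{W}}{12\ \text{V}} \approx 5.4\ \text{A}.$$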
Figure 35 — Halogen and HID headlights.
A high intensity discharge lamp does not use a filament (Figure 35, View B). Instead, a high voltage electric arc flows between two electrodes in the bulb. This arc excites xenon vapor contained in the bulb, producing a bright blue-white light.
An external ballast is used to convert battery voltage into high-voltage AC to create and maintain the arc. When it is first turned on, an igniter works with the ballast to provide several thousand volts to establish the arc. The ballast then provides as many as 450 volts to maintain the arc. As the bulb warms up, the voltage needed to maintain the lamp can be as low as 50 volts.
HID lights produce more light than a standard halogen bulb while consuming less power, and they last longer.
HID bulbs require a large amount of voltage for startup: beware of a shock hazard. Also, HID bulbs are under pressure when hot and may lead to an explosion hazard.
A light emitting diode is a semiconductor that will emit light when electrically energized. The LED converts electricity directly into light; this makes it much more efficient than a normal filament bulb.
The LED is a P-N junction made of specially doped semiconductor material. When energized, the junction emits photons, which we see as light.
Military vehicles used in tactical situations are equipped with a headlight switch that is integrated with the blackout lighting switch (Figure 36).
Figure 36 — Blackout light/ headlight switch.
The blackout select is operated by a 2-way rocker switch. This switch allows an operator to select between normal or blackout mode. To select normal mode, press the smaller bottom switch up and hold, while pressing the main switch down. To select blackout mode, instead of pressing the main switch down, press it up.
In blackout mode, the backup alarm will not operate.
The purposes of blackout lighting are:
Here are the three types of blackout lighting:
Figure 37 — Blackout driving light.
Figure 38 — Blackout marker light.
Figure 39 — Blackout composite light.
Blackout lighting control switches are designed to prevent the service lighting from being turned on accidentally.
Vehicles that operate on any public road must be equipped with turn signals. These signals indicate a left or right turn by providing a flashing light signal at the rear and front of the vehicle.
The turn-signal switch is located on the steering column (Figure 40). It is designed to shut off automatically after the turn is completed by the action of the canceling cam.
Figure 40 — Turn signal switch.
A wiring diagram for a typical turn-signal system is shown in Figure 41. A common design for a turn signal system is to use the same rear light for both the stop and turn signals. This somewhat complicates the design of the switch in that the stoplight circuit must pass through the turn-signal switch. When the turn signal switch is turned off, it must pass stoplight current to the rear lights. As a left or right turn signal is selected, the stoplight circuit is open and the turn signal circuit is closed to the respective rear light.
Figure 41 — Turn signal wiring diagram.
The turn signal flasher unit creates the flashing of the turn signal lights (Figure 42). It consists basically of a bimetallic (two dissimilar metals bonded together) strip wrapped in a wire coil. The bimetallic strip serves as one of the contact points.
Figure 42 — Flasher unit.
When the turn signals are actuated, current flows into the flasher—first through the heating coil to the bimetallic strip, then through the contact points, then out of the flasher, where the circuit is completed through the turn-signal light. The current heats the bimetallic strip, causing it to bend and open the contact points, which breaks the circuit and turns the lights off; as the strip cools, it straightens and closes the points again. This sequence of events repeats a few times a second, causing a steady flashing of the turn signals.
The backup light system provides visibility to the rear of the vehicle at night and a warning to the pedestrians, whenever the vehicle is shifted into reverse. The backup light system has a fuse, gearshift or transmission-mounted switch, two backup lights, and wiring to connect these components.
The backup light switch closes the light circuit when the transmission is shifted into reverse. The most common backup light switch configurations are as follows:
All vehicles that are used on public highways must be equipped with a stoplight system. The stoplight system consists of a fuse, brake light switch, two rear warning lights, and related wiring (Figure 43).
Figure 43 — Brake light switch.
The brake light switch on most automotive equipment is mounted on the brake pedal. When the brake pedal is pressed, it closes the switch and turns on the rear brake lights. On construction and tactical equipment, you may find a pressure light switch. This type of switch uses either air or hydraulic pressure, depending on the equipment. It is mounted on the master cylinder of the hydraulic brake system or is attached to the brake valve on an air brake system. As the brake pedal is depressed, either air or hydraulic pressure builds on a diaphragm inside the switch. The diaphragm closes the switch contacts, allowing electrical current to turn on the rear brake lights.
The emergency light system, also termed hazard warning system, is designed to signal oncoming traffic that a vehicle has stopped, stalled, or pulled over to the side of the road. The system consists of a switch, flasher unit, four turn-signal lights, and related wiring. The switch is normally a push-pull switch mounted on the steering column.
When the switch is closed, current flows through the emergency flasher. Like a turn signal flasher, the emergency flasher opens and closes the circuit to the lights. This causes all four turn signals to flash.
Fuses are safety devices placed in electrical circuits to protect wires and electrical units from a heavy flow of current. Each circuit, or at least each individual electrical system, is provided with a fuse that has an ampere rating for the maximum current required to operate the units. The fuse element is made from metal with a low melting point and forms the weakest point of the electrical circuit. In case of a short circuit or other trouble, the fuse burns out first and opens the circuit just as a switch would do. Examination of a burned-out fuse usually gives an indication of the problem. A discolored sight glass indicates the circuit has a short either in the wiring or in one of its components. If the glass is clear, the problem is an overloaded circuit. When replacing a fuse, be sure the new fuse has a rating equal to the one that burned out, and ensure that the cause of the failure has been found and repaired.
A circuit breaker performs the same function as a fuse. It disconnects the power source from the circuit when current becomes too high. The circuit breaker will remain open until the trouble is corrected. Once the trouble is corrected, a circuit breaker will automatically reset itself when current returns to normal levels. The fuses and circuit breakers can usually be found behind the instrument panel on a fuse block (Figure 44).
Figure 44 — Fuse block.
A mini fuse is a blade-type fuse with two prongs that fit into sockets in the fuse block (Figure 45, View A). Mini fuses are color coded according to ampere rating, between 1 and 40 amps. They are the smaller type of blade fuse, with dimensions of 10.9 x 3.6 x 16.3 mm.
A conventional fuse is a blade-type fuse and is a larger version of the mini fuse (Figure 45, View B). Conventional fuses are also color coded according to ampere rating, between 1 and 40 amps. They are the regular type of blade fuse, with dimensions of 19.1 x 5.1 x 18.5 mm.
A maxi fuse is also a blade-type fuse with two prongs that fit into sockets; however, maxi fuses are quite a bit larger and are usually found under the hood (Figure 45, View C). The maxi fuse is available in current ratings from 20 to 80 amps. They are color coded according to ampere rating, and their dimensions are 29.2 x 8.5 x 34.3 mm.
Figure 45— Fuses.
A circuit breaker performs the same function as a fuse. The difference is that the circuit breaker is still usable after it trips. It will sense a high current condition, disconnect the circuit temporarily, and then if the current draw returns to normal, it will reset itself.
A circuit breaker contains a bi-metal strip that remains cool and straight under normal load. Under high current load, the metal strip heats up, bends or warps, and opens the breaker.
Type 1 circuit breakers are cycling circuit breakers (Figure 46). This means that after the breaker cools down, the metal strip straightens out again and closes the circuit. These are sometimes seen in headlight, fog light, and windshield wiper circuits.
Figure 46 — Cycling circuit breaker.
Type 2 circuit breakers are non-cycling breakers. This means that after the breaker heats up, the current flows through an armature on the breaker. The armature heats up and bends away from the contact points. Now the electricity can flow only through a resistor, also mounted on the breaker. When a non-cycling breaker trips, current can pass only through the resistor, resulting in greatly reduced current and voltage to the circuit.
To reset this circuit breaker, you must open the circuit and allow it to cool off. The cooling effect will allow the metal to straighten and make contact with the points again. Non-cycling circuit breakers are used extensively in truck electrical circuits.
Test Your Knowledge
11. By what percentage is light output increased when using halogen headlights?
12. What component of the headlight switch allows for adjusting the brightness of the instrument panel lights?
13. On most automotive vehicles, the brake light switch is mounted at what location?
The instrument panel is placed so that the instruments and gauges can easily be read by the operator. They inform the operator of the vehicle speed, engine temperature, oil pressure, rate of charge or discharge of the battery, amount of fuel in the fuel tank, and distance traveled.
The battery condition gauge is one of the most important gauges on the vehicle. If the gauge is interpreted properly, it can be used to troubleshoot or prevent breakdowns. The following are the three basic configurations of battery condition gauges—ammeter, voltmeter, and indicator lamp.
The ammeter is used to indicate the amount of current flowing to and from the battery. It does NOT give an indication of total charging output because of other units in the electrical system. If the ammeter shows a 10-ampere discharge, it indicates that a 100 ampere-hour battery would be discharged in 10 hours, as long as the discharge rate remained the same. Current flowing from the battery to the starting motor is never sent through the ammeter, because the great quantities of amperes used (200 to 600 amperes) cannot be measured within the ammeter's limited capacity. In a typical ammeter, all the current flowing to and from the battery, except for starting, is sent through a coil to produce a magnetic field that deflects the ammeter needle in proportion to the amount of current (Figure 47). The coil is matched to the maximum current output of the charging unit, and this varies with different applications.
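The discharge-time figure in that example is simple division:

$$t = \frac{\text{battery capacity}}{\text{discharge current}} = \frac{100\ \text{Ah}}{10\ \text{A}} = 10\ \text{hours}.$$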
Figure 47 — Ammeter schematic.
Figure 48 — Voltmeter schematic.
The voltmeter provides a more accurate indication of the condition of the electrical system and is easier to interpret by the operator (Figure 48). During vehicle operation, the voltage indicated on the voltmeter is considered to be normal in a range of 13.2 to 14.5 volts for a 12-volt electrical system. As long as the system voltage remains in this range, the operator can assume that no problem exists. This contrasts with an ammeter, which gives the operator no indication of problems, such as an improperly calibrated voltage regulator, which could allow the battery to be drained by regulating system voltage to a level below normal.
The indicator lamp has gained popularity as an electrical system condition gauge over the years. Although it does not provide as detailed an analysis of the electrical system condition as a gauge, it is considered more useful to the average vehicle operator. This is because it is highly visible when a malfunction occurs, whereas a gauge usually is ignored because the average vehicle operator does not know how to interpret its readings. The indicator lamp can be used in two different ways to indicate an electrical malfunction:
Figure 49 — Low voltage warning lamp schematic.
Figure 50 — No-charge indicator schematic.
Most fuel gauges are operated electrically and are composed of two units—the gauge, mounted on the instrument panel; and the sending unit, mounted in the fuel tank. The ignition switch is included in the fuel gauge circuit, so the gauge operates only when the ignition switch is in the ON position. The basic fuel gauge circuit uses a variable resistor to operate either a bimetal or magnetic type indicator assembly (Figure 51).
Figure 51 — Fuel gauge schematic.
Located in the fuel tank, the sending unit consists of a float and arm that operate a variable resistor. When the fuel tank is empty, the float is down, so the variable resistance is high. This allows only a small amount of current to flow through the fuel gauge. The bimetal arm stays cool and the needle shows that the tank is low.
When the tank is filled, the float rises to the top of the tank. This slides the wiper to the low resistance position on the variable resistor. More current then flows through the fuel gauge circuit. The bimetal arm heats up and warps to move the needle to the full side of the gauge.
A pressure gauge is used widely in automotive and construction applications to keep track of such things as oil pressure, fuel line pressure, air brake system pressure, and the pressure in the hydraulic systems. Depending on the equipment, a mechanical gauge, an electrical gauge, or an indicator lamp may be used.
The mechanical gauge uses a thin tube to carry an actual pressure sample directly to the gauge (Figure 52). The gauge basically consists of a hollow, flexible, C-shaped tube called a Bourdon tube. As air or fluid pressure is applied to the Bourdon tube, it tends to straighten out. As it straightens, the attached pointer moves, giving a reading.
Figure 52 — Mechanical oil pressure gauge.
The electric gauge may be of the thermostatic or magnetic type, as previously discussed (Figure 53).
Figure 53 — Electric oil pressure gauge.
The sending unit that is used with each gauge type varies as follows:
The indicator lamp (warning light) is used in place of a gauge on many vehicles. The warning light, although not an accurate indicator, is valuable because of its high visibility in the event of a low-pressure condition. The warning light receives battery power through the ignition switch. The circuit to ground is completed through a sending unit. The sending unit consists of a pressure-sensitive diaphragm that operates a set of contact points calibrated to turn on the warning light whenever pressure drops below a set value.
The temperature gauge is a very important indicator in construction and automotive equipment. The most common uses are to indicate engine coolant, transmission fluid, differential oil, and hydraulic system temperatures. Depending on the type of equipment, the gauge may be mechanical, electric, or a warning light.
The electric gauge may be the thermostatic or magnetic type, as described previously. The sending unit that is used varies, depending upon application (Figure 54).
Figure 54 — Temperature sending unit.
The sending unit used with the thermostatic gauge consists of two bimetallic strips, each having a contact point. One bimetallic strip is heated electrically. The other strip bends to increase the tension of the contact points. The different positions of the bimetallic strip create the gauge readings.
The sending unit used with the magnetic gauge contains an electronic device called a thermistor whose resistance decreases proportionally with an increase in temperature.
The mechanical temperature gauge contains a Bourdon tube and operates by the same principles as the mechanical pressure gauge.
The indicator lamp (warning light) operates by the same principle as the indicator light previously discussed.
A transmission temperature gauge operates on the same principles as the engine temperature gauge. The sending unit, which connects to the gauge by wire, may be mounted in the transmission oil pan, in the cooling line between the radiator and the transmission, or in the valve body of the transmission. The transmission temperature gauge is important because if the automatic transmission fluid gets too hot, it can actually start to boil. When this occurs, catastrophic transmission failure is imminent.
Both the mechanical speedometer and the tachometer consist of a permanent magnet rotated by a flexible shaft. Surrounding the rotating magnet is a metal cup attached to the indicating needle. The revolving magnetic field exerts a pull on the cup that forces it to rotate. The rotation of the cup is countered by a calibrated hairspring. The influence of the hairspring and the rotating magnetic field on the cup produces accurate readings by the attached needle. The flexible shaft consists of a flexible outer casing made of either steel or plastic and an inner drive core made of wire-wound spring steel. Both ends of the core are molded square so they can fit into the driving member at one end and the driven member at the other end, and can transmit torque.
Gears on the transmission output shaft turn the flexible shaft that drives the speedometer. This shaft is referred to as the speedometer cable. A gear on the ignition distributor shaft turns the flexible shaft that drives the tachometer. This shaft is referred to as the tachometer cable.
The odometer of the mechanical speedometer is driven by a series of gears that originate at a spiral gear on the input shaft. The odometer consists of a series of drums with digits printed on the outer circumference that range from zero to nine. The drums are geared to each other so that each time the one farthest to the right makes one revolution, it will cause the one to its immediate left to advance one digit. The second to the right then will advance the drum to its immediate left one digit for every revolution it makes. This sequence continues to the left through the entire series of drums. The odometer usually contains six digits to record 99,999.9 miles or kilometers. However, models with trip odometers do not record tenths, therefore contain only five digits. When the odometer reaches its highest value, it will automatically reset to zero. Newer vehicles incorporate a small dye pad in the odometer to color the drum of its highest digit to indicate the total mileage is in excess of the capability of the odometer.
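The carry action between drums can be sketched as a short loop in R; this is a hypothetical illustration of the mechanism, not code from any instrument.

advance_odometer <- function(drums, tenths = 1) {
  for (t in seq_len(tenths)) {
    pos <- length(drums)                 # start with the rightmost drum (tenths)
    repeat {
      drums[pos] <- drums[pos] + 1
      if (drums[pos] <= 9) break         # no carry needed
      drums[pos] <- 0                    # drum rolls past 9 back to zero...
      pos <- pos - 1                     # ...and advances the drum to its left
      if (pos == 0) break                # highest drum rolled over: the whole odometer resets
    }
  }
  drums
}

advance_odometer(c(0, 9, 9, 9, 9, 9))    # one more tenth carries through every drum: 1 0 0 0 0 0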
The electric speedometer and tachometer use a mechanically driven permanent magnet generator to supply power to a small electric motor (Figure 55). The electric motor then is used to rotate the input shaft of the speedometer or tachometer. Because the voltage from the generator increases in proportion to speed, the motor speed, and therefore the gauge reading, tracks vehicle or engine speed.
Figure 55 — Electric speedometer and tachometer operation.
The signal generator for the speedometer is usually driven by the transmission output shaft through gears. The signal generator for the tachometer usually is driven by the distributor through a power takeoff on gasoline engines. When the tachometer is used with a diesel engine, a special power takeoff provision is made, usually on the camshaft drive.
Electronic speedometers and tachometers are self-contained units that use an electric signal from the engine or transmission. They differ from the electric unit in that they use a generated signal as the driving force. The gauge is transistorized and will supply information through either a magnetic analog (dial) or light-emitting diode (LED) digital gauge display. The gauge unit derives its input signal in the following ways:
An electronic tachometer obtains a pulse signal from the ignition distributor as it switches the coil on and off. The pulse speed at this point will change proportionally with engine speed. This is the most popular signal source for a tachometer that is used on a gasoline engine.
A tachometer that is used with a diesel engine uses the alternating current generated by the stator terminal of the alternator as a signal. The frequency of the AC current will change proportionally with engine speed.
An electronic speedometer derives its signal from a magnetic pickup coil that has its field interrupted by a rotating pole piece. The pickup coil is located strategically in the transmission case to interact with the reluctor teeth on the input shaft.
The horn currently used on automotive vehicles is the electric vibrating type. The electric vibrating horn system typically consists of a fuse, horn button switch, relay, horn assembly, and related wiring. When the operator presses the horn button, it closes the horn switch and activates the horn relay. This completes the circuit, and current is allowed through the relay circuit and to the horn.
Most horns have a diaphragm that vibrates by means of an electromagnet. When the horn is energized, the electromagnet pulls on the horn diaphragm. This movement opens a set of contact points inside the horn, which allows the diaphragm to flex back toward its normal position. This cycle is repeated rapidly. The vibrations of the diaphragm within the air column produce the note of the horn.
Tone and volume adjustments are made by loosening the adjusting locknut and turning the adjusting nut. This very sensitive adjustment controls the current consumed by the horn. Increasing the current increases the volume. However, too much current will make the horn sputter and may lock the diaphragm.
When an electric horn will not produce sound, check the fuse, the connections, and test for voltage at the horn terminal. If the horn sounds continuously, a faulty horn switch is the most probable cause. A faulty horn relay is another cause of horn problems. The contacts inside the relay may be burned or stuck together.
The windshield wiper system is one of the most important safety factors on any piece of equipment. A typical electric windshield wiper system consists of a switch, motor assembly, wiper linkage and arms, and wiper blades. The descriptions of the components are as follows:
The windshield wiper switch is a multi-position switch, which may contain a rheostat. Each switch position provides for different wiping speeds. The rheostat, if provided, operates the delay mode for a slow wiping action. This permits the operator to select a delayed wipe from every 3 to 20 seconds. A relay is frequently used to complete the circuit between the battery voltage and the wiper motor.
The wiper motor assembly operates on one, two, or three speeds (Figure 56). The motor has a worm gear on the armature shaft that drives one or two gears, and in turn operates the linkage to the wiper arms. The motor is a small shunt-wound DC motor. Resistors are placed in the control circuit from the switch to reduce the current and provide different operating speeds.
Figure 56 — Wiper motor assembly.
The wiper linkage and arms transfer motion from the wiper motor transmission to the wiper blades. The rubber wiper blades fit on the wiper arms.
The wiper blade is a flexible rubber squeegee-type device. It may be steel or plastic backed and is designed to maintain total contact with the windshield throughout the stroke. Wiper blades should be inspected periodically. If they are hardened, cut, or split, they should be replaced.
When electrical problems occur in the windshield wiper system, use the service manual and its wiring diagram of the circuit. First check the fuses, electrical connections, and all grounds. Then proceed with checking the components.
Test Your Knowledge
14. Which type of battery condition gauge provides the most accurate indication of the condition of the electrical system?
15. The signal generator for an electric tachometer used on a gasoline engine is driven by what component?
16. What type of oil pressure gauge has a Bourdon tube?
Electrical power and control signals must be delivered to electrical devices reliably and safely so that the electrical system functions are not impaired or converted to hazards. To fulfill power distribution, military vehicles use one- and two-wire circuits, wiring harnesses, and terminal connections.
Among your many duties will be the job of maintaining and repairing automotive electrical systems. All vehicles are not wired in exactly the same manner; however, once you understand the circuit of one vehicle, you should be able to trace an electrical circuit of any vehicle using wiring diagrams and color codes.
Tracing wiring circuits, particularly those connecting lights or warning and signal devices, is no simple task. Branch circuits making up the individual systems have one wire to conduct electricity from the battery to the unit requiring it, and ground connections at the battery and the unit to complete the circuit. These are called one-wire circuits or branches of a ground return system. In automotive electrical systems with branch circuits that lead to all parts of the equipment, the ground return system saves installation time and eliminates the need for additional wiring to complete the circuit. The all-metal construction of automotive equipment makes it possible to use this system.
The two-wire circuit requires two wires to complete the electrical circuit—one wire from the source of electrical energy to the unit it will operate, and another wire to complete the circuit from the unit back to the source of the electrical power.
Two-wire circuits provide positive connection for light and electrical brakes on some trailers. The coupling between the trailer and the equipment, although made of metal and a conductor of electricity, has to be jointed to move freely. The rather loose joint or coupling does not provide the positive and continuous connection required to use a ground return system between two vehicles. The two-wire circuit is commonly used on equipment subject to frequent or heavy vibrations. Tracked equipment, off-road vehicles (tactical), and many types of construction equipment are wired in this manner.
Shielded wire has a center conductor that is surrounded by an outer metal shield. Insulation is used to separate the shield and the conductor. This construction keeps magnetic pulses from being induced into the center conductor, which would cause unwanted voltage pulses.
This type of wire is mostly used for the automotive antenna. The lead must be protected from the magnetic fields from the engine’s ignition system to prevent static from being heard over the radio.
There is also twisted shield wire. This type of wire uses multiple insulated conductors wrapped around each other. This design still provides the protection from the magnetic fields and is used to connect the computer to various sensors, particularly those near the ignition system. Twisted shield wire helps keep high voltage pulses from interfering with the tiny voltage signals going between the computer and other sensors in the vehicle.
Unshielded wire is the most common type of wire found in automotive manufacturing. There is no shield on the wire other than the insulation wrapped around it to prevent accidental grounding, and no special shield to protect the wire from electromagnetic interference.
Wiring assemblies consist of wires and cables of definitely prescribed length, assembled together to form a subassembly that interconnect specific electrical components and/or equipment. The two basic types of wiring assemblies are as follows:
The cable assembly consists of a stranded conductor with insulation or a combination of insulated conductors enclosed in a covering or jacket from end to end. Terminating connections seal around the outer jacket so that the inner conductors are isolated completely from the environment. Cable assemblies may have two or more ends.
Figure 57— Wiring harness.
Wiring harness assemblies serve two purposes (Figure 57). They prevent chafing and loosening of terminals and connections caused by vibration and road shock while keeping the wires in a neat condition away from moving parts of the vehicle. Wiring harnesses contain two or more individual conductors laid parallel or twisted together and wrapped with binding material, such as tape, lacing cord, and wire ties. The binding materials do not isolate the conductors from the environment completely, and conductor terminations may or may not be sealed. Wiring harnesses also may have two or more ends.
Wires in the electrical system should be identified by a number, color, or code to facilitate tracing circuits during assembly, troubleshooting, or rewiring operations. This identification should appear on wiring schematics and diagrams and whenever practical on the individual wire. The assigned identification for a continuous electrical connection should be retained on a schematic diagram until the circuit characteristic is altered by a switching point or active component.
Wiring color codes are used by manufacturers to assist the mechanics in identifying the wires used in many circuits and making repairs in a minimum of time. No color code is common to all manufacturers. For this reason, the manufacturer’s service manual is a must for speedy troubleshooting and repairs.
Wiring found on military tactical equipment (M-series) has no color. All the wires used on these vehicles are black. Small metal tags stamped with numbers or codes are used to identify the wiring illustrated by diagrams in the technical manuals (Figure 58). These tags are securely fastened near the end of individual wires.
Figure 58 — Metal tag wire identification.
Wiring diagrams are drawings that show the relationship of the electrical components and wires in a circuit (Figure 59). They seldom show the routing of the wires within the electrical system of the vehicle.
Figure 59 — Wiring diagram.
Often you will find electrical symbols used in wiring diagrams to simulate individual components. Figure 60 shows some of the symbols you may encounter when tracing individual circuits in a wiring diagram.
Figure 60 — Wiring diagram symbols.
Wire terminals are divided into two major classes—the solder type and the solder-less type, which is also known as the pressure or crimp type. The solder type has a cup in which the wire is held by solder permanently. The solder-less type is connected to the wire by special tools that deform the barrel of the terminal and exert pressure on the wire to form a strong mechanical bond and electrical connection. Solder-less type terminals are gradually replacing solder type terminals in military equipment.
Wire in the electrical system should be supported by clamps or fastened by wire ties at various points about the vehicle. When installing new wiring, be sure to keep it away from any heat-producing component that would scorch or burn the insulation.
Wire passing through holes in the metal members of the frame or body should be protected by rubber grommets. If rubber grommets are not available, use a piece of rubber hose the size of the hole to protect the wiring from chafing or cutting on sharp edges.
Test Your Knowledge
17. What type of wire circuit is commonly used on equipment that is subject to heavy vibrations?
18. How many different types of wiring assemblies are there?
In this handbook we discussed the different automotive electrical systems, their functions, and associated troubleshooting methods. Because there are so many different components and designs, always check the manufacturer's specifications when working on an unfamiliar circuit. Almost everything you work on will have an electrical circuit of some sort, and you need to be familiar with how the components operate. To be a good construction mechanic, you will need to study these systems and stay up to date with current systems to keep them operating in peak condition.
1. In a lead-acid battery, current is produced by what type of reaction?
2. A 12-volt lead-acid automotive battery consists of how many elements connected in series?
3. Why are the cell elements of a storage battery elevated inside the case?
4. When the temperature is 80°F, a fully charged lead-acid battery will produce what specific gravity reading?
5. When taking a hydrometer reading of a battery whose temperature is 100°F, you must make what modification to the reading to determine the actual specific gravity of the electrolyte?
6. What are the two methods for rating lead-acid storage batteries?
7. When charging batteries, you should take which action?
8. What procedure is considered the only safe way to mix electrolyte for a lead-acid battery?
9. When cleaning the top of a lead-acid battery, which combination should you use?
10. What test allows you to determine the general condition of a maintenance free battery?
11. When load testing a battery with a cold-cranking rating of 350 amps, you should load the battery to what total number of amps?
12. The current generated by an alternator is converted to direct current by means of what component?
13. What component of an alternator is mounted on the rotor shaft and provides current to the rotor windings?
14. In what manner are stator windings connected in an alternator?
15. What type of stator will provide good current output at low engine speeds?
16. A total of how many diodes are grounded in an alternator?
17. Grounding the field terminal of the alternator will result in damage to the _______.
18. By what means can the proper operation of a charging system containing an alternator be checked?
19. To determine if an alternator rotor is internally shorted, you can test the rotor windings with what device?
20. When performing a regulator bypass test, which method should you use to bypass the voltage regulator?
21. What mechanism relies on the principle of inertial force to make the drive pinion mesh with the flywheel?
22. In a starting circuit containing a solenoid, when is battery current supplied to the starter motor?
23. Field windings vary according to application. What is the most popular configuration used to provide a large amount of low-speed torque?
24. Which starting circuit component is common to all vehicles and equipment having automatic transmissions?
25. When it is necessary to adjust a neutral safety switch, which test equipment is required?
26. The battery-ignition circuit consists of a total of how many circuits?
27. In an ignition circuit, high voltage is directed to the spark plugs in the correct firing order by what component?
28. When troubleshooting an ignition circuit, you should change the manufacturer's specified heat range of the spark plugs when what condition exists?
29. What component opens and closes the primary circuit of an electronic ignition system?
30. In a computerized timing advance mechanism, what sensor reports piston position to the computer?
31. A grayish tan deposit on the insulator of a spark plug indicates what condition?
32. How often should spark plugs be regapped?
33. You have performed a spark plug wire resistance test. The test should not show the resistance to be over 5,000 ohms per inch, or what total number of ohms?
34. On a distributor cap, which condition will short coil voltage to ground?
35. After installing contact points, you notice that the faces do not make full contact. What corrective action should you take?
36. To advance timing, should you turn the distributor housing in the same direction as the shaft rotation?
37. Most automotive and construction equipment lighting systems operate on what voltages?
38. You are operating a vehicle with a 12-volt electrical system. The voltmeter in the vehicle should indicate a reading that falls within what voltage range?
26.8.1 Estimating linear regression parameters
We generally estimate the parameters of a linear model from data using linear algebra, which is the form of algebra that is applied to vectors and matrices. If you aren’t familiar with linear algebra, don’t worry – you won’t actually need to use it here, as R will do all the work for us. However, a brief excursion in linear algebra can provide some insight into how the model parameters are estimated in practice.
First, let’s introduce the idea of vectors and matrices; you’ve already encountered them in the context of R, but we will review them here. A matrix is a set of numbers that are arranged in a square or rectangle, such that there are one or more dimensions across which the matrix varies. It is customary to place different observation units (such as people) in the rows, and different variables in the columns. Let’s take our study time data from above. We could arrange these numbers in a matrix, which would have eight rows (one for each student) and two columns (one for study time, and one for grade). If you are thinking “that sounds like a data frame in R” you are exactly right! In fact, a data frame is a specialized version of a matrix, and we can convert a data frame to a matrix using the as.matrix() function.
df <- tibble(
  studyTime = c(2, 3, 5, 6, 6, 8, 10, 12) / 3,
  priorClass = c(0, 1, 1, 0, 1, 0, 1, 0)
) %>%
  mutate(
    # betas (the two true coefficients) was set up earlier in the chapter
    grade = studyTime * betas[1] + priorClass * betas[2] + round(rnorm(8, mean = 70, sd = 5))
  )

df_matrix <- df %>%
  dplyr::select(studyTime, grade) %>%
  as.matrix()
We can write the general linear model in linear algebra as follows:

$$Y = X\beta + E$$
This looks very much like the earlier equation that we used, except that the letters are all capitalized, which is meant to express the fact that they are vectors.
We know that the grade data go into the Y matrix, but what goes into the X matrix? Remember from our initial discussion of linear regression that we need to add a constant in addition to our independent variable of interest, so our X matrix (which we call the design matrix) needs to include two columns: one representing the study time variable, and one column with the same value for each individual (which we generally fill with all ones). We can view the resulting design matrix graphically (see Figure 26.7).
The rules of matrix multiplication tell us that the dimensions of the matrices have to match with one another; in this case, the design matrix has dimensions of 8 (rows) X 2 (columns) and the Y variable has dimensions of 8 X 1. Therefore, the β matrix needs to have dimensions 2 X 1, since an 8 X 2 matrix multiplied by a 2 X 1 matrix results in an 8 X 1 matrix (as the matching middle dimensions drop out). The interpretation of the two values in the β matrix is that they are the values to be multiplied by study time and 1 respectively to obtain the estimated grade for each individual. We can also view the linear model as a set of individual equations for each individual:

$$\hat{grade}_i = studyTime_i \cdot \beta_1 + 1 \cdot \beta_2 \quad \text{for } i = 1, \dots, 8$$
Remember that our goal is to determine the best fitting values of β given the known values of X and Y. A naive way to do this would be to solve for β using simple algebra – here we drop the error term because it’s out of our control:

$$Y = X\hat{\beta} \;\;\Rightarrow\;\; \hat{\beta} = \frac{Y}{X}$$
The challenge here is that X and β are now matrices, not single numbers – but the rules of linear algebra tell us how to divide by a matrix, which is the same as multiplying by the inverse of the matrix (referred to as $X^{-1}$). We can do this in R:
# compute beta estimates using linear algebra
library(MASS)  # provides ginv(), the Moore-Penrose (pseudo-)inverse

# create Y variable: an 8 x 1 matrix of grades
Y <- as.matrix(df$grade)

# create X variable: an 8 x 2 design matrix
X <- matrix(0, nrow = 8, ncol = 2)
# assign studyTime values to the first column of X
X[, 1] <- as.matrix(df$studyTime)
# assign the constant of 1 to the second column of X
X[, 2] <- 1

# compute the pseudo-inverse of X using ginv()
# %*% is the R matrix multiplication operator
beta_hat <- ginv(X) %*% Y  # multiply the pseudo-inverse of X by Y
print(beta_hat)
##      [,1]
## [1,]  4.3
## [2,] 76.0
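As a quick sanity check (a minimal sketch, not part of the original analysis), the same estimates can be obtained with R's built-in lm() function, which adds the intercept automatically; the two coefficients should match beta_hat, just listed in the opposite order.

check_fit <- lm(grade ~ studyTime, data = df)   # ordinary least squares fit of the same model
coef(check_fit)                                  # (Intercept) ~ 76.0, studyTime ~ 4.3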
Anyone who is interested in serious use of statistical methods is highly encouraged to invest some time in learning linear algebra, as it provides the basis for nearly all of the tools that are used in standard statistics.
Multi-factor authentication (MFA) is a method of computer access control in which a user is granted access only after successfully presenting several separate pieces of evidence to an authentication mechanism – typically at least two of the following categories: knowledge (something they know), possession (something they have), and inherence (something they are).
Two-factor authentication (also known as 2FA) is a method of confirming a user's claimed identity by utilizing a combination of two different components. Two-factor authentication is a type of multi-factor authentication.
A good example from everyday life is the withdrawing of money from a cash machine; only the correct combination of a bank card (something that the user possesses) and a PIN (personal identification number, something that the user knows) allows the transaction to be carried out.
- 1 Authentication factors
- 2 Mobile phone two-factor authentication
- 3 Legislation
- 4 Security
- 5 Industry regulation
- 6 Implementation considerations
- 7 Examples
- 8 See also
- 9 References
- 10 External links
The use of multiple authentication factors to prove one's identity is based on the premise that an unauthorized actor is unlikely to be able to supply the factors required for access. If, in an authentication attempt, at least one of the components is missing or supplied incorrectly, the user's identity is not established with sufficient certainty and access to the asset (e.g., a building, or data) being protected by multi-factor authentication then remains blocked. The authentication factors of a multi-factor authentication scheme may include:
- some physical object in the possession of the user, such as a USB stick with a secret token, a bank card, a key, etc.
- some secret known to the user, such as a password, PIN, TAN, etc.
- some physical characteristic of the user (biometrics), such as a fingerprint, eye iris, voice, typing speed, pattern in key press intervals, etc.
Knowledge factors are the most commonly used form of authentication. In this form, the user is required to prove knowledge of a secret in order to authenticate.
A password is a secret word or string of characters that is used for user authentication. This is the most commonly used mechanism of authentication. Many multi-factor authentication techniques rely on password as one factor of authentication. Variations include both longer ones formed from multiple words (a passphrase) and the shorter, purely numeric, personal identification number (PIN) commonly used for ATM access. Traditionally, passwords are expected to be memorized.
Many secret questions such as "Where were you born?" are poor examples of a knowledge factor because they may be known to a wide group of people, or be able to be researched.
Possession factors ("something only the user has") have been used for authentication for centuries, in the form of a key to a lock. The basic principle is that the key embodies a secret which is shared between the lock and the key, and the same principle underlies possession factor authentication in computer systems. A security token is an example of a possession factor.
Disconnected tokens have no connections to the client computer. They typically use a built-in screen to display the generated authentication data, which is manually typed in by the user.
Connected tokens are devices that are physically connected to the computer to be used, and transmit data automatically. There are a number of different types, including card readers, wireless tags and USB tokens.
Mobile phone two-factor authentication
The major drawback of authentication that includes something the user possesses is that the physical token (the USB stick, the bank card, the key or similar) must be carried around by the user, practically at all times. Loss and theft are a risk. There are also costs involved in procuring and subsequently replacing tokens of this kind. In addition, there are inherent conflicts and unavoidable trade-offs between usability and security.
Mobile phone two-factor authentication, where devices such as mobile phones and smartphones serve as "something that the user possesses", was developed to provide an alternative method that would avoid such issues. To authenticate themselves, people can use their personal access code (i.e. something that only the individual user knows) plus a one-time-valid, dynamic passcode consisting of digits. The code can be sent to their mobile device by SMS or generated by a special app. The advantage of this method is that there is no need for an additional, dedicated token, as users tend to carry their mobile devices around at all times anyway.
Some professional two-factor authentication solutions also ensure that there is always a valid passcode available for users. If one has already used a sequence of digits (passcode), this is automatically deleted and the system sends a new code to the mobile device. And if the new code is not entered within a specified time limit, the system automatically replaces it. This ensures that no old, already used codes are left on mobile devices. For added security, it is possible to specify how many incorrect entries are permitted before the system blocks access.
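When the passcode is generated by an app on the device rather than sent over the network, it is commonly computed with the Time-based One-time Password (TOTP) algorithm mentioned later in this article: the server and the device share a secret key, and both derive a short code from that key and the current time. The following is a minimal illustrative sketch in R; it assumes the digest package's hmac() function, and a production implementation would follow RFC 6238 exactly (including Base32 key handling and clock-drift windows).

library(digest)  # assumed here for hmac(), which computes an HMAC over raw bytes

totp <- function(key, interval = 30, digits = 6, time = Sys.time()) {
  counter <- floor(as.numeric(time) / interval)            # number of 30-second steps since the epoch
  # encode the counter as an 8-byte big-endian message
  msg <- as.raw(vapply(7:0, function(i) (counter %/% 2^(8 * i)) %% 256, numeric(1)))
  mac <- hmac(key, msg, algo = "sha1", raw = TRUE)          # 20-byte HMAC-SHA1 of the counter
  offset <- bitwAnd(as.integer(mac[20]), 0x0F)              # dynamic truncation offset (RFC 4226)
  code <- bitwAnd(as.integer(mac[offset + 1]), 0x7F) * 2^24 +
          as.integer(mac[offset + 2]) * 2^16 +
          as.integer(mac[offset + 3]) * 2^8 +
          as.integer(mac[offset + 4])
  sprintf(paste0("%0", digits, "d"), as.integer(code %% 10^digits))
}

# example with a made-up shared secret; the server and the device must hold the same key
totp("example-shared-secret")

Because both sides need only the shared key and a clock, the code can be verified without any network round trip, which is one reason such authenticator apps keep working without coverage.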
Security of the mobile-delivered security tokens fully depends on the mobile operator's operational security and can be easily breached by wiretapping or SIM-cloning by national security agencies.
Advantages of mobile phone two-factor authentication
- No additional tokens are necessary because it uses mobile devices that are (usually) carried all the time.
- As they are constantly changed, dynamically generated passcodes are safer to use than fixed (static) log-in information.
- Depending on the solution, passcodes that have been used are automatically replaced in order to ensure that a valid code is always available; acute transmission/reception problems do not therefore prevent logins.
- The option to specify a maximum permitted number of incorrect entries reduces the risk of attacks by unauthorized persons.
- It is user friendly.
Disadvantages of mobile phone two-factor authentication
- The mobile phone must be carried by the user, charged, and kept in range of a cellular network whenever authentication might be necessary. If the phone is unable to display messages, such as if it becomes damaged or shuts down for an update or due to temperature extremes (e.g. winter exposure), access is often impossible without backup plans.
- The user must share their personal mobile number with the provider, reducing personal privacy and potentially allowing spam.
- Text messages to mobile phones using SMS are insecure and can be intercepted. The token can thus be stolen and used by third parties.
- Text messages may not be delivered instantly, adding additional delays to the authentication process.
- Account recovery typically bypasses mobile phone two-factor authentication.
- Modern smartphones are used both for email and for receiving SMS. Since the email account is usually kept permanently logged in, a lost or stolen phone can expose every account for which that email address is the key, because the phone can also receive the second factor. In effect, the smartphone collapses the two factors into one.
- Mobile phones can be stolen, potentially allowing the thief to gain access to the user's accounts.
- SIM cards can be cloned with relatively little effort, giving attackers another way to receive the user's codes.
Advances in mobile two-factor authentication
Advances in research on two-factor authentication for mobile devices consider different methods in which a second factor can be implemented without posing a hindrance to the user. With the continued use and improvements in the accuracy of mobile hardware such as GPS, microphones, and gyroscopes/accelerometers, the ability to use them as a second factor of authentication is becoming more trustworthy. For example, by recording the ambient noise of the user’s location from a mobile device and comparing it with the recording of the ambient noise from the computer in the same room on which the user is trying to authenticate, one can obtain an effective second factor of authentication. This also reduces the amount of time and effort needed to complete the process.
Details for authentication in the USA are defined with the Homeland Security Presidential Directive 12 (HSPD-12).
Existing authentication methodologies involve the three basic types of "factors" explained above. Authentication methods that depend on more than one factor are more difficult to compromise than single-factor methods.
IT regulatory standards for access to Federal Government systems require the use of multi-factor authentication to access sensitive IT resources, for example when logging on to network devices to perform administrative tasks and when accessing any computer using a privileged login.
In 2005, the United States' Federal Financial Institutions Examination Council (FFIEC) issued guidance for financial institutions recommending that they conduct risk-based assessments, evaluate customer awareness programs, and develop security measures to reliably authenticate customers remotely accessing online financial services, officially recommending the use of authentication methods that depend on more than one factor (specifically, what a user knows, has, and is) to determine the user's identity. In response to the publication, numerous authentication vendors began improperly promoting challenge questions, secret images, and other knowledge-based methods as "multi-factor" authentication. Due to the resulting confusion and widespread adoption of such methods, on August 15, 2006, the FFIEC published supplemental guidelines, which state that by definition a "true" multi-factor authentication system must use distinct instances of the three factors of authentication it had defined, and not just use multiple instances of a single factor.
According to proponents, multi-factor authentication could drastically reduce the incidence of online identity theft and other online fraud, because the victim's password would no longer be enough to give a thief permanent access to their information. However, many multi-factor authentication approaches remain vulnerable to phishing, man-in-the-browser, and man-in-the-middle attacks.
Multi-factor authentication may be ineffective against modern threats, like ATM skimming, phishing, and malware.
Payment Card Industry Data Security Standard (PCI-DSS)
The Payment Card Industry (PCI) Data Security Standard, requirement 8.3, requires the use of MFA for all remote network access that originates from outside the network to a Card Data Environment (CDE). Beginning with PCI-DSS version 3.2, the use of MFA is required for all administrative access to the CDE, even if the user is within a trusted network.
Many multi-factor authentication products require users to deploy client software to make multi-factor authentication systems work. Some vendors have created separate installation packages for network login, Web access credentials and VPN connection credentials. For such products, there may be four or five different software packages to push down to the client PC in order to make use of the token or smart card. This translates to four or five packages on which version control has to be performed, and four or five packages to check for conflicts with business applications. If access can be operated using web pages, it is possible to limit the overheads outlined above to a single application. With other multi-factor authentication solutions, such as "virtual" tokens and some hardware token products, no software must be installed by end users.
There are drawbacks to multi-factor authentication that are keeping many approaches from becoming widespread. Some consumers have difficulty keeping track of a hardware token or USB plug. Many consumers do not have the technical skills needed to install a client-side software certificate by themselves. Generally, multi-factor solutions require additional investment for implementation and costs for maintenance. Most hardware token-based systems are proprietary and some vendors charge an annual fee per user. Deployment of hardware tokens is logistically challenging. Hardware tokens may get damaged or lost and issuance of tokens in large industries such as banking or even within large enterprises needs to be managed. In addition to deployment costs, multi-factor authentication often carries significant additional support costs. A 2008 survey of over 120 U.S. credit unions by the Credit Union Journal reported on the support costs associated with two-factor authentication. In their report, software certificates and software toolbar approaches were reported to have the highest support costs.
Several popular web services employ multi-factor authentication, usually as an optional feature that is deactivated by default.
- Two-factor authentication
- Many Internet services (among them Google and Amazon AWS) use the open Time-based One-time Password (TOTP) algorithm to support multi-factor or two-factor authentication
- Comparison of authentication solutions
- Identity management
- Mutual authentication
- Reliance authentication
- Strong authentication
- "Two-factor authentication: What you need to know (FAQ) - CNET". CNET. Retrieved 2015-10-31.
- "How to extract data from an iCloud account with two-factor authentication activated". iphonebackupextractor.com. Retrieved 2016-06-08.
- "What is 2FA?". Retrieved 19 February 2015.
- "Securenvoy - what is 2 factor authentication?". Retrieved April 3, 2015.
- de Borde, Duncan. "Two-factor authentication" (PDF). Archived from the original (PDF) on January 12, 2012.
- van Tilborg, Henk C.A.; Jajodia, Sushil, eds. (2011). Encyclopedia of Cryptography and Security, Volume 1. Springer Science & Business Media. p. 1305. ISBN 9781441959058.
- Biometrics for Identification and Authentication - Advice on Product Selection
- "Mobile Two Factor Authentication" (PDF). securenvoy.com. Retrieved August 30, 2016.
- "How Russia Works on Intercepting Messaging Apps - bellingcat". bellingcat. 2016-04-30. Retrieved 2016-04-30.
- SSMS – A Secure SMS Messaging Protocol for the M-Payment Systems, Proceedings of the 13th IEEE Symposium on Computers and Communications (ISCC'08), pp. 700–705, July 2008 arXiv:1002.3171
- Rosenblatt, Seth; Cipriani, Jason (June 15, 2015). "Two-factor authentication: What you need to know (FAQ)". CNET. Retrieved 2016-03-17.
- "Sound-Proof: Usable Two-Factor Authentication Based on Ambient Sound | USENIX". www.usenix.org. Retrieved 2016-02-24.
- US Security Directive as issued on August 12, 2007 Archived September 16, 2012, at the Wayback Machine.
- "Frequently Asked Questions on FFIEC Guidance on Authentication in an Internet Banking Environment", August 15, 2006[dead link]
- "SANS Institute, Critical Control 10: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches".
- "SANS Institute, Critical Control 12: Controlled Use of Administrative Privileges".
- "Electronic Authentication Guide" (PDF). Special Publication 800-63-2. NIST. 2013. Retrieved 2014-11-06.
- "FFIEC Press Release". 2005-10-12. Retrieved 2011-05-13.
- FFIEC (2006-08-15). "Frequently Asked Questions on FFIEC Guidance on Authentication in an Internet Banking Environment" (PDF). Retrieved 2012-01-14.
- Brian Krebs (July 10, 2006). "Security Fix - Citibank Phish Spoofs 2-Factor Authentication". Washington Post. Retrieved 20 September 2016. line feed character in
|title=at position 13 (help)
- Bruce Schneier (March 2005). "The Failure of Two-Factor Authentication". Schneier on Security. Retrieved 20 September 2016.
- "The Failure of Two-Factor Authentication - Schneier on Security". schneier.com. Retrieved 23 October 2015.
- "Official PCI Security Standards Council Site - Verify PCI Compliance, Download Data Security and Credit Card Security Standards". www.pcisecuritystandards.org. Retrieved 2016-07-25.
- "For PCI MFA Is Now Required For Everyone | Centrify Blog". blog.centrify.com. Retrieved 2016-07-25.
- GORDON, WHITSON (3 September 2012). "Two-Factor Authentication: The Big List Of Everywhere You Should Enable It Right Now". LifeHacker. Australia. Retrieved 1 November 2012.
- Attackers breached the servers of RSA and stole information that could be used to compromise the security of two-factor authentication tokens used by 40 million employees (register.com, 18 Mar 2011)
- Banks to Use Two-factor Authentication by End of 2006, (slashdot.org, 20 Oct 2005)
- List of commonly used websites and whether or not they support Two-Factor Authentication
- Microsoft to abandon passwords, Microsoft preparing to dump passwords in favour of two-factor authentication in forthcoming versions of Windows (vnunet.com, 14 Mar 2005) |
Beta thalassemias (β thalassemias) are a group of inherited blood disorders. They are forms of thalassemia caused by reduced or absent synthesis of the beta chains of hemoglobin that result in variable outcomes ranging from severe anemia to clinically asymptomatic individuals. Global annual incidence is estimated at one in 100,000. Beta thalassemias are caused by mutations in the HBB gene on chromosome 11, inherited in an autosomal recessive fashion. The severity of the disease depends on the nature of the mutation.
- Other names: Microcytemia, beta type
- Inheritance (figure caption): the beta globin gene is located on chromosome 11; a child inherits two beta globin genes, one from each parent
- Types: Thalassemia minor, intermedia and major
- Causes: Mutations in the HBB gene
- Diagnostic method: DNA analysis
- Treatment: Depends on type (see types)
Blockage of the HBB gene over time leads to decreased beta-chain synthesis. The body's inability to construct new beta-chains leads to the underproduction of HbA. The reduced amount of HbA available to fill the red blood cells in turn leads to microcytic anemia, which ultimately develops because there is inadequate HBB protein for sufficient red blood cell function. Because of this, the patient may require blood transfusions to make up for the shortage of beta-chains. Repeated blood transfusions cause severe problems associated with iron overload.
- 1 Signs and symptoms
- 2 Cause
- 3 Risk factors
- 4 Diagnosis
- 5 Prevention
- 6 Treatment
- 7 Epidemiology
- 8 See also
- 9 References
- 10 Further reading
- 11 External links
Signs and symptoms
Three main forms have been described: thalassemia major, thalassemia intermedia, and thalassemia minor. All people with thalassemia are susceptible to health complications that involve the spleen (which is often enlarged and frequently removed) and gallstones. These complications are mostly found in thalassemia major and intermedia patients. Individuals with beta thalassemia major usually present within the first two years of life with severe anemia, poor growth, and skeletal abnormalities during infancy. Untreated thalassemia major eventually leads to death, usually by heart failure; therefore, birth screening is very important.
Excess iron causes serious complications within the liver, heart, and endocrine glands. Severe symptoms include liver cirrhosis, liver fibrosis, and in extreme cases, liver cancer. Heart failure, growth impairment, diabetes and osteoporosis are life-threatening conditions which can be caused by thalassemia major. The main cardiac abnormalities seen as a result of thalassemia and iron overload include left ventricular systolic and diastolic dysfunction, pulmonary hypertension, valvulopathy, arrhythmias, and pericarditis. Increased gastrointestinal iron absorption is seen in all grades of beta thalassemia, and increased red blood cell destruction by the spleen due to ineffective erythropoiesis further releases additional iron into the bloodstream.
Two major groups of mutations can be distinguished:
- Nondeletion forms: These defects, in general, involve a single base substitution or small insertions near or upstream of the β globin gene. Most often, mutations occur in the promoter regions preceding the beta-globin genes. Less often, abnormal splice variants are believed to contribute to the disease.
- Deletion forms: Deletions of different sizes involving the β globin gene produce different syndromes such as (βo) or hereditary persistence of fetal hemoglobin syndromes.
| Type | Description | Alleles |
|---|---|---|
| Thalassemia minor | Heterozygous form: only one of the β globin alleles bears a mutation. Individuals suffer from microcytic anemia. Detection usually involves a lower than normal mean corpuscular volume value (<80 fL). | β+/β |
| Thalassemia intermedia | Affected individuals can often manage a normal life but may need occasional transfusions, e.g., at times of illness or pregnancy, depending on the severity of their anemia. | β+/β+ |
| Thalassemia major (Mediterranean anemia; Cooley anemia) | Homozygous form: occurs when both alleles have thalassemia mutations. This is a severe microcytic, hypochromic anemia. Untreated, it causes anemia, splenomegaly and severe bone deformities, and progresses to death before age 20. Treatment consists of periodic blood transfusion, splenectomy for splenomegaly, and chelation of transfusion-related iron overload. | βo/βo |
Beta thalassemia is a hereditary disease affecting hemoglobin. As with about half of all hereditary diseases, an inherited mutation damages the assembly of the messenger-type RNA (mRNA) that is transcribed from a chromosome. DNA contains both the instructions (genes) for stringing amino acids together into proteins, as well as stretches of DNA that play important roles in regulating produced protein levels.
In thalassemia, an additional, contiguous length or a discontinuous fragment of non-coding instructions is included in the mRNA. This happens because the mutation obliterates the boundary between the intronic and exonic portions of the DNA template. Because all the coding sections may still be present, normal hemoglobin may be produced and the added genetic material, if it produces pathology, instead disrupts regulatory functions enough to produce anemia. Hemoglobin's normal alpha and beta subunits each have an iron-containing central portion (heme) that allows the protein chain of a subunit to fold around it. Normal adult hemoglobin contains 2 alpha and 2 beta subunits. Thalassemias typically affect only the mRNAs for production of the beta chains (hence the name). Since the mutation may be a change in only a single base (single-nucleotide polymorphism), ongoing efforts seek gene therapies to make that single correction.
Family history and ancestry are factors which increase the risk of beta thalassemia. Depending on family history, if a person's parents or grandparents had beta thalassemia major or intermedia, there is a 75% (3 out of 4) probability of the mutated gene being inherited by an offspring. Even if a child does not have beta thalassemia major or intermedia, they can still be a carrier, possibly resulting in future generations of their offspring having beta thalassemia.
Another risk factor is ancestry. Beta thalassemia occurs most often in people of Italian, Greek, Middle Eastern, Southern Asian, and African ancestry.
Abdominal pain due to hypersplenism, splenic infarction and right-upper quadrant pain caused by gallstones are major clinical manifestations. However, diagnosing thalassemia from symptoms alone is inadequate; physicians note these signs as associative because of the disease's complexity. The following associative signs can attest to the severity of the phenotype: pallor, poor growth, inadequate food intake, splenomegaly, jaundice, maxillary hyperplasia, dental malocclusion, cholelithiasis, systolic ejection murmur in the presence of severe anemia, and pathologic fractures. Based on symptoms, tests are ordered for a differential diagnosis. These tests include complete blood count; hemoglobin electrophoresis; serum transferrin, ferritin, and total iron-binding capacity; urine urobilin and urobilinogen; peripheral blood smear, which may show codocytes, or target cells; hematocrit; and serum bilirubin. The expected pattern on hemoglobin electrophoresis in people with beta thalassemia is an increased level of hemoglobin A2 and slightly increased hemoglobin F.
Skeletal changes associated with expansion of the bone marrow:
- chipmunk facies: bossing of the skull, prominent malar eminence, depression of the bridge of the nose, tendency to a mongoloid slant of the eye, and exposure of the upper teeth due to hypertrophy of the maxillae.
- hair-on-end (or "crew cut") on skull X-ray: new bone formation due to the inner table.
All beta thalassemias may exhibit abnormal red blood cells; a family history is followed by DNA analysis. This test is used to investigate deletions and mutations in the alpha- and beta-globin-producing genes. Family studies can be done to evaluate carrier status and the types of mutations present in other family members. DNA testing is not routine, but can help diagnose thalassemia and determine carrier status. In most cases the treating physician uses a clinical prediagnosis based on anemia symptoms: fatigue, breathlessness and poor exercise tolerance. Further genetic analysis may include HPLC should routine electrophoresis prove difficult.
Beta thalassemia is a hereditary disease, which allows for preventive measures such as carrier screening and prenatal diagnosis. The disease can be avoided in offspring if at least one parent has normal genes, which has given rise to screening programs that allow carriers to select partners with normal hemoglobin. One study aimed to detect the genes that could give rise to offspring with sickle cell disease. Patients diagnosed with beta thalassemia have an MCH ≤ 26 pg and an RDW < 19. Of 10,148 patients, 1,739 had a hemoglobin phenotype and RDW consistent with beta thalassemia; after this narrowing, HbA2 testing identified 77 patients with beta thalassemia. This screening procedure proved insensitive in populations of West African ancestry because the high prevalence of alpha thalassemia distorts these indicators. Countries have programs distributing information about the reproductive risks associated with carriers of haemoglobinopathies. Thalassemia carrier screening programs include educational campaigns in schools, the armed forces, and the mass media, as well as counseling for carriers and carrier couples. Screening has been followed by reduced incidence; by 1995 the prevalence in Italy had fallen from 1:250 to 1:4000, a decrease of roughly 95% in that region. The decrease in incidence has benefitted those affected with thalassemia, as the demand for blood has decreased, improving the supply of treatment.
Beta thalassemia major
Affected children require regular lifelong blood transfusion and can have complications, which may involve the spleen. Bone marrow transplants can be curative for some children. Patients receive frequent blood transfusions that lead to or potentiate iron overload. Iron chelation treatment is necessary to prevent damage to internal organs. Advances in iron chelation treatments allow patients with thalassemia major to live long lives with access to proper treatment. Popular chelators include deferoxamine and deferiprone.
The most common patient complaint about deferoxamine is that the infusions are painful and inconvenient. The oral chelator deferasirox was approved for use in 2005 in some countries; it offers hope of better compliance, though at a higher cost. Bone marrow transplantation is the only cure and is indicated for patients with severe thalassemia major. Transplantation can eliminate a patient's dependence on transfusions. Absent a matching donor, a savior sibling can be conceived by preimplantation genetic diagnosis (PGD) to be free of the disease as well as to match the recipient's human leukocyte antigen (HLA) type.
Scientists at Weill Cornell Medical College have developed a gene therapy strategy that could feasibly treat both beta-thalassemia and sickle cell disease. The technology is based on delivery of a lentiviral vector carrying both the human β-globin gene and an ankyrin insulator to improve gene transcription and translation, and boost levels of β-globin production.
Patients with thalassemia major are more inclined to have a splenectomy. The use of splenectomy has been declining in recent years due to the decreased prevalence of hypersplenism in adequately transfused patients. Patients with hypersplenism tend to have a lower amount of healthy blood cells than normal and show symptoms of anemia. Iron-rich patients need a splenectomy to reduce the probability of iron overload. The surgical approaches are the open and the laparoscopic method. The laparoscopic method requires a longer operating time but a shorter recovery period and leaves no large surgical scar. If it is unnecessary to remove the entire spleen, a partial splenectomy may be performed; this method preserves some of the immune function while reducing the probability of hypersplenism. Surgeons who choose laparoscopic splenectomy must administer an appropriate immunization at least two weeks before the surgery. On the operating table the patient is placed at a 30˚ to 40˚ position with his or her left arm elevated above the head to allow the incision to be made properly. The camera is inserted along with four other trocars: one placed in the left subcostal area, one inserted at the midpoint between the first and third, one 4 cm right of the midline, and the fourth positioned on the midline to retract the spleen.
Long-term transfusion therapy is used to maintain the patient’s hemoglobin level above 9-10 g/dL (normal levels are about 13.8 g/dL for males and 12.1 g/dL for females). Patients are transfused only if they meet strict criteria ensuring their safety: a confirmed laboratory diagnosis of thalassemia major and hemoglobin levels below 7 g/dL. To ensure quality blood transfusions, the packed red blood cells should be leucoreduced and have a minimum of 40 g of hemoglobin content. Leucoreduced blood lowers the risk of adverse reactions caused by contaminating white cells and helps prevent platelet alloimmunisation. Pre-storage filtration of whole blood offers high removal efficiency and low residual leukocytes; it is the preferred method of leucoreduction compared with pre-transfusion and bedside filtration. Patients with allergic transfusion reactions or unusual red cell antibodies must receive “washed red cells” or “cryopreserved red cells”. Washed red cells have had the plasma proteins that would have become a target of the patient’s antibodies removed, allowing the transfusion to be carried out safely. Cryopreserved red cells are used to maintain a supply of rare donor units for patients with unusual red cell antibodies or missing common red cell antigens. The transfusion programs available involve lifelong regular blood transfusion to maintain the pre-transfusion hemoglobin level above 9-10 g/dL. The monthly transfusions promote normal growth and physical activity, suppress bone marrow activity, and minimize iron accumulation. The start of the first clinical trial with CRISPR/Cas9 in Europe was announced for 2018.
Iron overload is an unavoidable consequence of the chronic transfusion therapy necessary for patients with beta thalassemia. Iron chelation is a medical therapy that avoids the complications of iron overload. The excess iron can be removed by deferasirox, an oral iron chelator, which has a dose-dependent effect on iron burden. Every unit of transfused blood contains 200–250 mg of iron, and the body has no natural mechanism to remove excess iron, so deferasirox is a vital part of the patient's health after blood transfusions. During normal iron homeostasis the circulating iron is bound to transferrin, but with an iron overload the capacity of transferrin to bind iron is exceeded and non-transferrin-bound iron is formed. This is a potentially toxic form of iron because of its high propensity to induce reactive oxygen species, and it is responsible for cellular damage. The prevention of iron overload protects patients from morbidity and mortality. The primary aim is to bind and remove iron from the body at a rate equal to or greater than the rate of transfusional iron input.
Beta thalassemia intermedia
Patients may require episodic blood transfusions. Transfusion-dependent patients develop iron overload and require chelation therapy to remove the excess iron. Transmission is autosomal recessive; however, dominant mutations and compound heterozygotes have been reported. Genetic counseling is recommended and prenatal diagnosis may be offered. Alleles without a mutation that reduces function are characterized as (β). Mutations are characterized as (βo) if they prevent any formation of β chains, and as (β+) if they allow some β chain formation to occur.
Beta thalassemia minor
Patients are often monitored without treatment. While many of those with thalassemia minor do not require transfusion therapy, they still risk iron overload, particularly in the liver. A serum ferritin test checks iron levels and can point to further treatment. Although not life-threatening on its own, thalassemia minor can affect quality of life due to the anemia. It often coexists with other conditions such as asthma, and can cause iron overload of the liver and, in those with non-alcoholic fatty liver disease, lead to more severe outcomes.
The beta form of thalassemia is particularly prevalent among the Mediterranean peoples and this geographical association is responsible for its naming: thalassa (θάλασσα) is the Greek word for sea and haima (αἷμα) is the Greek word for blood. In Europe, the highest concentrations of the disease are found in Greece and the Turkish coastal regions. The major Mediterranean islands (except the Balearics) such as Sicily, Sardinia, Corsica, Cyprus, Malta and Crete are heavily affected in particular. Other Mediterranean peoples, as well as those in the vicinity of the Mediterranean, also have high incidence rates, including people from West Asia and North Africa. The data indicate that 15% of the Greek and Turkish Cypriots are carriers of beta-thalassaemia genes, while 10% of the population carry alpha-thalassaemia genes.
The thalassemia trait may confer a degree of protection against malaria, which is or was prevalent in the regions where the trait is common, thus conferring a selective survival advantage on carriers (known as heterozygous advantage), thus perpetuating the mutation. In that respect, the various thalassemias resemble another genetic disorder affecting hemoglobin, sickle-cell disease.
The disorder affects all genders but is more prevalent in certain ethnicities and age groups. In the United States, thalassemia's prevalence is approximately 1 in 272,000, or about 1,000 people, and roughly 20 deaths per year are attributed to it, so it is listed as a "rare disease". In England there were 4,000 hospitalized cases in 2002 and 9,233 consultant episodes for thalassemia; men accounted for 53% of hospital consultant episodes and women for 47%. The mean patient age is 23; only 1% of consultant episodes involved patients older than 75, and 69% involved patients aged 15-59. The Children's Hospital Oakland formed an international network to combat thalassemia: "It is the world's most common genetic blood disorder and is rapidly increasing". 7% of the world's population are carriers and 400,000 babies are born with the trait annually. The severe form is usually fatal in infancy if blood transfusions are not initiated immediately.
- "Beta thalassemia". Genetics Home Reference. Retrieved 2015-05-26.
- Advani, Pooja. "Beta Thalassemia Treatment & Management". Medscape. Retrieved 4 April 2017.
- McKinney, Emily Slone; James, Susan R.; Murray, Sharon Smith; Nelson, Kristine; Ashwill, Jean (2014-04-17). Maternal-Child Nursing. Elsevier Health Sciences. ISBN 9780323293778.
- Galanello, Renzo; Origa, Raffaella (21 May 2010). "Beta-thalassemia". Orphanet J Rare Dis. 5: 11. doi:10.1186/1750-1172-5-11. PMC 2893117. PMID 20492708.
- Goldman, Lee; Schafer, Andrew I. (2015-04-21). Goldman-Cecil Medicine: Expert Consult - Online. Elsevier Health Sciences. ISBN 9780323322850.
- Carton, James (2012-02-16). Oxford Handbook of Clinical Pathology. OUP Oxford. ISBN 9780191629938.
- Perkin, Ronald M.; Newton, Dale A.; Swift, James D. (2008). Pediatric Hospital Medicine: Textbook of Inpatient Management. Lippincott Williams & Wilkins. ISBN 9780781770323.
- Galanello, Renzo; Origa, Raffaella (2010-05-21). "Beta-thalassemia". Orphanet Journal of Rare Diseases. 5 (1): 11. doi:10.1186/1750-1172-5-11. ISSN 1750-1172. PMC 2893117. PMID 20492708.
- Introduction to Pathology for the Physical Therapist Assistant. Jones & Bartlett Publishers. 2011. ISBN 9780763799083.
- Anderson, Gregory J.; McLaren, Gordon D. (2012-01-16). Iron Physiology and Pathophysiology in Humans. Springer Science & Business Media. ISBN 9781603274845.
- Barton, James C.; Edwards, Corwin Q.; Phatak, Pradyumna D.; Britton, Robert S.; Bacon, Bruce R. (2010-07-22). Handbook of Iron Overload Disorders. Cambridge University Press. ISBN 9781139489393.
- McCance, Kathryn L.; Huether, Sue E. (2013-12-13). Pathophysiology: The Biologic Basis for Disease in Adults and Children. Elsevier Health Sciences. ISBN 9780323088541.
- Leonard, Debra G. B. (2007-11-25). Molecular Pathology in Clinical Practice. Springer Science & Business Media. ISBN 9780387332277.
- Bowen, Juan M.; Mazzaferri, Ernest L. (2012-12-06). Contemporary Internal Medicine: Clinical Case Studies. Springer Science & Business Media. ISBN 9781461567134.
- Disorders, National Organization for Rare (2003). NORD Guide to Rare Disorders. Lippincott Williams & Wilkins. ISBN 9780781730631.
- Barton, James C.; Edwards, Corwin Q. (2000-01-13). Hemochromatosis: Genetics, Pathophysiology, Diagnosis and Treatment. Cambridge University Press. ISBN 9780521593809.
- Wilkins, Lippincott Williams & (2009). Professional Guide to Diseases. Lippincott Williams & Wilkins. ISBN 9780781778992.
- Ward, Amanda J; Cooper, Thomas A (2009). "The pathobiology of splicing". The Journal of Pathology. 220 (2): 152–63. doi:10.1002/path.2649. PMC 2855871. PMID 19918805.
- "the definition of dna". Dictionary.com. Retrieved 2015-05-26.
- Okpala, Iheanyi (2008-04-15). Practical Management of Haemoglobinopathies. John Wiley & Sons. ISBN 9781405140201.
- Vasudevan, D. M.; Sreekumari, S.; Vaidyanathan, Kannan (2011-11-01). Textbook of Biochemistry for Dental Students. JP Medical Ltd. ISBN 9789350254882.
- Taeusch, H. William; Ballard, Roberta A.; Gleason, Christine A.; Avery, Mary Ellen (2005). Avery's Diseases of the Newborn. Elsevier Health Sciences. ISBN 978-0721693477.
- Beta Thalassemia: New Insights for the Healthcare Professional: 2013 Edition: ScholarlyBrief. ScholarlyEditions. 2013-07-22. ISBN 9781481663472.
- "Risk Factors". Mayo Clinic. Retrieved 4 April 2017.
- "How Are Thalassemias Diagnosed? - NHLBI, NIH". www.nhlbi.nih.gov. Retrieved 2015-05-26.
- Target Cells, Imperial College of London Department of Medicine
- Orkin, Stuart H.; Nathan, David G.; Ginsburg, David; Look, A. Thomas; Fisher, David E.; Lux, Samuel (2009). Nathan and Oski's Hematology of Infancy and Childhood (7th ed.). Philadelphia: Saunders. ISBN 978-1-4160-3430-8.[page needed]
- "What Are the Signs and Symptoms of Thalassemias? - NHLBI, NIH". www.nhlbi.nih.gov. Retrieved 2015-05-26.
- Galanello, Renzo; Origa, Raffaella (2010). "Beta-thalassemia". Orphanet Journal of Rare Diseases. 5 (1): 11. doi:10.1186/1750-1172-5-11. PMC 2893117. PMID 20492708.
- Schrijver, Iris (2011-09-09). Diagnostic Molecular Pathology in Practice: A Case-Based Approach. Springer Science & Business Media. ISBN 9783642196775.
- Cousens, N. E.; Gaff, C. L.; Metcalfe, S. A.; Delatycki, M. B. (2010). "Carrier screening for Beta-thalassaemia: a review of international practice". European Journal of Human Genetics. 18 (10): 1077–83. doi:10.1038/ejhg.2010.90. PMC 2987452. PMID 20571509.
- "Screening for the beta-thalassaemia trait: hazards among populations of West African Ancestry". Retrieved 4 April 2017.
- Muncie, Herbert L.; Campbell, James S. (2009). "Alpha and Beta Thalassemia". American Family Physician. 80 (4): 339–44. PMID 19678601.
- Greer, John P.; Arber, Daniel A.; Glader, Bertil; List, Alan F.; Means, Robert T.; Paraskevas, Frixos; Rodgers, George M. (2013-08-29). Wintrobe's Clinical Hematology. Lippincott Williams & Wilkins. ISBN 9781469846224.
- Greer, John P.; Arber, Daniel A.; Glader, Bertil; List, Alan F.; Means, Robert T.; Paraskevas, Frixos; Rodgers, George M. (2013-08-29). Wintrobe's Clinical Hematology. Lippincott Williams & Wilkins. ISBN 9781469846224.
- Hydroxamic Acids: Advances in Research and Application: 2011 Edition: ScholarlyPaper. ScholarlyEditions. 2012-01-09. ISBN 9781464952081.
- "NCBI - WWW Error Blocked Diagnostic". pubchem.ncbi.nlm.nih.gov. Retrieved 2015-05-26.
- "Deferoxamine". livertox.nih.gov. Retrieved 2015-05-26.
- Sabloff, Mitchell; Chandy, Mammen; Wang, Zhiwei; Logan, Brent R.; Ghavamzadeh, Ardeshir; Li, Chi-Kong; Irfan, Syed Mohammad; Bredeson, Christopher N.; Cowan, Morton J. (2011). "HLA-matched sibling bone marrow transplantation for β-thalassemia major". Blood. 117 (5): 1745–1750. doi:10.1182/blood-2010-09-306829. ISSN 0006-4971. PMC 3056598. PMID 21119108.
- "Gene Therapy Shows Promise for Treating Beta-Thalassemia and Sickle Cell Disease". 2012-03-28. Retrieved 2015-10-15.
- Uranüs, Selman. "Splenectomy for hematological disorders". NCBI. Retrieved 4 April 2017.
- A, Cohen. "Blood Transfusion Therapy in β-Thalassaemia Major". NCBI. Retrieved 4 April 2017.
- CRISPR Therapeutics and Vertex Pharmaceuticals are taking action to start a first clinical trial with CRISPR/Cas9 in Europe in 2018. by Clara Rodríguez Fernández on 13/12/2017
- Cappellini, Maria Domenica (2007). "Exjade® (deferasirox, ICL670) in the treatment of chronic iron overload associated with blood transfusion". Therapeutics and Clinical Risk Management. 3 (2): 291–299. doi:10.2147/tcrm.2007.3.2.291. ISSN 1176-6336. PMC 1936310. PMID 18360637.
- Advani, Pooja. "Beta Thalassemia Medication". Medscape. Retrieved 4 April 2017.
- Schwartz, M. William (2012). The 5 Minute Pediatric Consult. Lippincott Williams & Wilkins. ISBN 9781451116564.
- Porwit, Anna; McCullough, Jeffrey; Erber, Wendy N. (2011-05-27). Blood and Bone Marrow Pathology. Elsevier Health Sciences. ISBN 978-0702045356.
- Hemoglobinopathies. Jaypee Brothers Publishers. 2006. ISBN 9788180616693.
- Torre, Dario M.; Lamb, Geoffrey C.; Ruiswyk, Jerome Van; Schapira, Ralph M. (2009). Kochar's Clinical Medicine for Students. Lippincott Williams & Wilkins. ISBN 9780781766999.
- Brissot, Pierre; Cappellini, Maria Domenica (2014). "LIVER DISEASE". Thalassaemia International Federation. Cite journal requires
- "WHO | Global epidemiology of haemoglobin disorders and derived service indicators". www.who.int. Retrieved 2015-05-26.
- Berg, Sheri; Bittner, Edward A. (2013-10-16). The MGH Review of Critical Care Medicine. Lippincott Williams & Wilkins. ISBN 9781451173680.
- Haematology Made Easy. AuthorHouse. 2013-02-06. ISBN 9781477246511.
- Abouelmagd, Ahmed; Ageely, Hussein M. (2013). Basic Genetics: A Primer Covering Molecular Composition of Genetic Material, Gene Expression and Genetic Engineering, and Mutations and Human Genetic. Universal-Publishers. ISBN 9781612331928.
- Weatherall, David J (2010). "Chapter 47. The Thalassemias: Disorders of Globin Synthesis". In Lichtman, MA; Kipps, TJ; Seligsohn, U; Kaushansky, K; Prchal, JT (eds.). The Thalassemias: Disorders of Globin Synthesis. Williams Hematology (8 ed.). The McGraw-Hill Companies.
- "Statistics about Thalassemia". Right Diagnosis. Retrieved 4 April 2017.
- "Thalassemia: Genetic Blood Disorder Expected To Double In Next Few Decades". ScienceDaily. Retrieved 4 April 2017.
- Cao, Antonio; Galanello, Renzo (2010). "Beta-Thalassemia". In Pagon, Roberta A; Bird, Thomas D; Dolan, Cynthia R; Stephens, Karen; Adam, Margaret P (eds.). GeneReviews. University of Washington, Seattle. PMID 20301599.
- Bahal, Raman; McNeer, Nicole Ali; Quijano, Elias; Liu, Yanfeng; Sulkowski, Parker; Turchick, Audrey; Lu, Yi-Chien; Bhunia, Dinesh C.; Manna, Arunava; Greiner, Dale L.; Brehm, Michael A.; Cheng, Christopher J.; López-Giráldez, Francesc; Ricciardi, Adele; Beloor, Jagadish; Krause, Diane S.; Kumar, Priti; Gallagher, Patrick G.; Braddock, Demetrios T.; Saltzman, W. Mark; Ly, Danith H.; Glazer, Peter M. (26 October 2016). "In vivo correction of anaemia in β-thalassemic mice by γPNA-mediated gene editing with nanoparticle delivery". Nature Communications. 7: 13304. Bibcode:2016NatCo...713304B. doi:10.1038/ncomms13304. ISSN 2041-1723. PMC 5095181. PMID 27782131. |
In linguistics, an argument is an expression that helps complete the meaning of a predicate, and in this regard, the complement is a closely related concept. Most predicates take one, two, or three arguments. A predicate and its arguments form a predicate-argument structure. The discussion of predicates and arguments is associated most with (content) verbs and noun phrases (NPs), although other syntactic categories can also be construed as predicates and as arguments. Arguments must be distinguished from adjuncts. While a predicate needs its arguments to complete its meaning, the adjuncts that appear with a predicate are optional; they are not necessary to complete the meaning of the predicate. Most theories of syntax and semantics acknowledge arguments and adjuncts, although the terminology varies, and the distinction is generally believed to exist in all languages. In syntax, the terms argument and complement overlap in meaning and use to a large extent. Dependency grammars sometimes call arguments actants, following Tesnière (1959).
The area of grammar that explores the nature of predicates, their arguments, and adjuncts is called valency theory. Predicates have a valence; they determine the number and type of arguments that can or must appear in their environment. The valence of predicates is also investigated in terms of subcategorization.
Arguments and adjuncts
The basic analysis of the syntax and semantics of clauses relies heavily on the distinction between arguments and adjuncts. The clause predicate, which is often a content verb, demands certain arguments. That is, the arguments are necessary in order to complete the meaning of the verb. The adjuncts that appear, in contrast, are not necessary in this sense. The subject phrase and object phrase are the two most frequently occurring arguments of verbal predicates. For instance:
- Jill likes Jack.
- Sam fried the meat.
- The old man helped the young man.
Each of these sentences contains two arguments (in bold), the first noun (phrase) being the subject argument, and the second the object argument. Jill, for example, is the subject argument of the predicate likes, and Jack is its object argument. Verbal predicates that demand just a subject argument (e.g. sleep, work, relax) are intransitive, verbal predicates that demand an object argument as well (e.g. like, fry, help) are transitive, and verbal predicates that demand two object arguments are ditransitive (e.g. give, loan, send).
When additional information is added to our three example sentences, one is dealing with adjuncts, e.g.
- Jill really likes Jack.
- Jill likes Jack most of the time.
- Jill likes Jack when the sun shines.
- Jill likes Jack because he's friendly.
The added phrases (in bold) are adjuncts; they provide additional information that is not necessary to complete the meaning of the predicate likes. One key difference between arguments and adjuncts is that the appearance of a given argument is often obligatory, whereas adjuncts appear optionally. While typical verb arguments are subject or object nouns or noun phrases as in the examples above, they can also be prepositional phrases (PPs) (or even other categories). The PPs in bold in the following sentences are arguments:
- Sam put the pen on the chair.
- Larry does not put up with that.
- Bill is getting on my case.
We know that these PPs are (or contain) arguments because when we attempt to omit them, the result is unacceptable:
- *Sam put the pen.
- *Larry does not put up.
- *Bill is getting.
Subject and object arguments are known as core arguments; core arguments can be suppressed, added, or exchanged in different ways, using voice operations like passivization, antipassivization, application, incorporation, etc. Prepositional arguments, which are also called oblique arguments, however, do not tend to undergo the same processes.
Syntactic vs. semantic arguments
An important distinction acknowledges both syntactic and semantic arguments. Content verbs determine the number and type of syntactic arguments that can or must appear in their environment; they impose specific syntactic functions (e.g. subject, object, oblique, specific preposition, possessor, etc.) onto their arguments. These syntactic functions will vary as the form of the predicate varies (e.g. active verb, passive participle, gerund, nominal, etc.). In languages that have morphological case, the arguments of a predicate must appear with the correct case markings (e.g. nominative, accusative, dative, genitive, etc.) imposed on them by their predicate. The semantic arguments of the predicate, in contrast, remain consistent, e.g.
- Jack is liked by Jill.
- Jill's liking Jack
- Jack's being liked by Jill
- the liking of Jack by Jill
- Jill's like for Jack
The predicate 'like' appears in various forms in these examples, which means that the syntactic functions of the arguments associated with Jack and Jill vary. The object of the active sentence, for instance, becomes the subject of the passive sentence. Despite this variation in syntactic functions, the arguments remain semantically consistent. In each case, Jill is the experiencer (= the one doing the liking) and Jack is the one being experienced (= the one being liked). In other words, the syntactic arguments are subject to syntactic variation in terms of syntactic functions, whereas the thematic roles of the arguments of the given predicate remain consistent as the form of that predicate changes.
The syntactic arguments of a given verb can also vary across languages. For example, the verb put in English requires three syntactic arguments: subject, object, locative (e.g. He put the book into the box). These syntactic arguments correspond to the three semantic arguments agent, theme, and goal. The Japanese verb oku 'put', in contrast, has the same three semantic arguments, but the syntactic arguments differ, since Japanese does not require all three syntactic arguments to be expressed: it is correct to say Kare ga hon o oita ("He put the book"). The equivalent sentence in English is ungrammatical without the required locative argument, as the examples involving put above demonstrate. For this reason, a slight paraphrase is required to render the nearest grammatical equivalent in English: He positioned the book or He deposited the book.
Distinguishing between arguments and adjuncts
Arguments vs. adjuncts
A large body of literature has been devoted to distinguishing arguments from adjuncts. Numerous syntactic tests have been devised for this purpose. One such test is the relative clause diagnostic. If the test constituent can appear after the combination which occurred/happened in a relative clause, it is an adjunct, not an argument, e.g.
- Bill left on Tuesday. → Bill left, which happened on Tuesday. - on Tuesday is an adjunct.
- Susan stopped due to the weather. → Susan stopped, which occurred due to the weather. - due to the weather is an adjunct.
- Fred tried to say something twice. → Fred tried to say something, which occurred twice. - twice is an adjunct.
The same diagnostic results in unacceptable relative clauses (and sentences) when the test constituent is an argument, e.g.
- Bill left home. → *Bill left, which happened home. - home is an argument.
- Susan stopped her objections. → *Susan stopped, which occurred her objections. - her objections is an argument.
- Fred tried to say something. → *Fred tried to say, which happened something. - something is an argument.
This test succeeds at identifying prepositional arguments as well:
- We are waiting for Susan. → *We are waiting, which is happening for Susan. - for Susan is an argument.
- Tom put the knife in the drawer. → *Tom put the knife, which occurred in the drawer. - in the drawer is an argument.
- We laughed at you. → *We laughed, which occurred at you. - at you is an argument.
The utility of the relative clause test is, however, limited. It incorrectly suggests, for instance, that modal adverbs (e.g. probably, certainly, maybe) and manner expressions (e.g. quickly, carefully, totally) are arguments. If a constituent passes the relative clause test, however, one can be sure that it is not an argument.
Obligatory vs. optional arguments
A further division blurs the line between arguments and adjuncts. Many arguments behave like adjuncts with respect to another diagnostic, the omission diagnostic. Adjuncts can always be omitted from the phrase, clause, or sentence in which they appear without rendering the resulting expression unacceptable. Some arguments (obligatory ones), in contrast, cannot be omitted. There are many other arguments, however, that are identified as arguments by the relative clause diagnostic but that can nevertheless be omitted, e.g.
- a. She cleaned the kitchen.
- b. She cleaned. - the kitchen is an optional argument.
- a. We are waiting for Larry.
- b. We are waiting. - for Larry is an optional argument.
- a. Susan was working on the model.
- b. Susan was working. - on the model is an optional argument.
The relative clause diagnostic would identify the constituents in bold as arguments. The omission diagnostic here, however, demonstrates that they are not obligatory arguments. They are, rather, optional. The insight, then, is that a three-way division is needed. On the one hand, one distinguishes between arguments and adjuncts, and on the other hand, one allows for a further division between obligatory and optional arguments.
Arguments and adjuncts in noun phrases
Most work on the distinction between arguments and adjuncts has been conducted at the clause level and has focused on arguments and adjuncts to verbal predicates. The distinction is crucial for the analysis of noun phrases as well, however. If it is altered somewhat, the relative clause diagnostic can also be used to distinguish arguments from adjuncts in noun phrases, e.g.
- Bill's bold reading of the poem after lunch
- *bold reading of the poem after lunch that was Bill's - Bill's is an argument.
- Bill's reading of the poem after lunch that was bold - bold is an adjunct
- *Bill's bold reading after lunch that was of the poem - of the poem is an argument
- Bill's bold reading of the poem that was after lunch - after lunch is an adjunct
The diagnostic identifies Bill's and of the poem as arguments, and bold and after lunch as adjuncts.
Representing arguments and adjuncts
The distinction between arguments and adjuncts is often indicated in the tree structures used to represent syntactic structure. In phrase structure grammars, an adjunct is "adjoined" to a projection of its head predicate in such a manner that distinguishes it from the arguments of that predicate. The distinction is quite visible in theories that employ the X-bar schema, e.g.
The complement argument appears as a sister of the head X, and the specifier argument appears as a daughter of XP. The optional adjuncts appear in one of a number of positions adjoined to a bar-projection of X or to XP.
Theories of syntax that acknowledge n-ary branching structures and hence construe syntactic structure as being flatter than the layered structures associated with the X-bar schema must employ some other means to distinguish between arguments and adjuncts. In this regard, some dependency grammars employ an arrow convention. Arguments receive a "normal" dependency edge, whereas adjuncts receive an arrow edge. In the following tree, an arrow points away from an adjunct toward the governor of that adjunct:
The arrow edges in the tree identify four constituents (= complete subtrees) as adjuncts: At one time, actually, in congress, and for fun. The normal dependency edges (= non-arrows) identify the other constituents as arguments of their heads. Thus Sam, a duck, and to his representative in congress are identified as arguments of the verbal predicate wanted to send.
The distinction between arguments and adjuncts is crucial to most theories of syntax and grammar. Arguments behave differently from adjuncts in numerous ways. Theories of binding, coordination, discontinuities, ellipsis, etc. must acknowledge and build on the distinction. When one examines these areas of syntax, what one finds is that arguments consistently behave differently from adjuncts and that without the distinction, our ability to investigate and understand these phenomena would be seriously hindered.
- Dependency grammar
- Meaning-text theory
- Phrase structure grammar
- Predicate (grammar)
- Subcategorization frame
- Theta criterion
- Theta role
- Most grammars define the argument in this manner, i.e. it is an expression that helps complete the meaning of a predicate (a verb). See for instance Tesnière (1969: 128).
- Concerning the completion of a predicate's meaning via its arguments, see for instance Kroeger (2004:9ff.).
- Geeraerts, Dirk; Cuyckens, Hubert (2007). The Oxford Handbook of Cognitive Linguistics. Oxford University Press US. ISBN 0-19-514378-7.
- For instance, see the essays on valency theory in Ágel et al. (2003/6).
- See Eroms (2000) and Osborne and Groß (2012) in this regard.
- Ágel, V., L. Eichinger, H.-W. Eroms, P. Hellwig, H. Heringer, and H. Lobin (eds.) 2003/6. Dependency and valency: An international handbook of contemporary research. Berlin: Walter de Gruyter.
- Eroms, H.-W. 2000. Syntax der deutschen Sprache. Berlin: de Gruyter.
- Kroeger, P. 2004. Analyzing syntax: A lexical-functional approach. Cambridge, UK: Cambridge University Press.
- Osborne, T. and T. Groß 2012. Constructions are catenae: Construction Grammar meets dependency grammar. Cognitive Linguistics 23, 1, 163-214.
- Tesnière, L. 1959. Éléments de syntaxe structurale. Paris: Klincksieck.
- Tesnière, L. 1969. Éléments de syntaxe structurale. 2nd edition. Paris: Klincksieck.
The bit is the basic unit of measurement of information. In addition to bits, there are many other units of measure; this article introduces the basic units of measurement used in computers.
1. Bit – BInary digiT (b)
A bit is the smallest unit of data stored in a computer. All data must be encoded in bits for the computer to process it. A binary digit has two states: 0 or 1.
2. Byte (B)
A byte consists of 8 bits. The byte (B) is usually used to express the capacity of data storage in a computer.
3. Conversion table between units of measurement
In addition to bits and bytes, we have larger units of measurement to meet the growing storage needs.
| Unit | Abbreviation | Capacity (in decimal) | Capacity (in binary) |
| --- | --- | --- | --- |
| Bit | b | 0 or 1 | 0 or 1 |
| Byte | B | 8 bits | 8 bits |
| KiloByte | KB | 10^3 B | 2^10 B = 1,024 B |
| MegaByte | MB | 10^3 KB | 2^10 KB |
| GigaByte | GB | 10^3 MB | 2^10 MB |
| TeraByte | TB | 10^3 GB | 2^10 GB |
| PetaByte | PB | 10^3 TB | 2^10 TB |
| ExaByte | EB | 10^3 PB | 2^10 PB |
| ZettaByte | ZB | 10^3 EB | 2^10 EB |
| YottaByte | YB | 10^3 ZB | 2^10 ZB |
Note that, depending on the definition, 1 KiloByte (KB) equals either 1,000 bytes or 1,024 bytes. Hard drive manufacturers usually use the decimal definition (1 KB = 1,000 bytes) when stating drive capacity. This explains why a drive sold as 500 GB reports less than 500 GB of usable storage when the operating system measures it in binary units.
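As a quick illustration, here is a small Python sketch (using an assumed drive sold as 500 GB) of why the capacity reported in binary units looks smaller:

# A drive sold as "500 GB" uses decimal units: 1 GB = 10**9 bytes
capacity_bytes = 500 * 10**9

# An operating system reporting binary units divides by 2**30 (1 GiB = 1,073,741,824 bytes)
capacity_gib = capacity_bytes / 2**30
print(round(capacity_gib, 1))   # about 465.7, often displayed as "465 GB"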
4. Hertz (Hz)
Hertz (Hz) is the unit used to measure the processing (clock) speed of the CPU in a computer. The larger this value, the higher the computer's processing speed. The processing speed of a CPU is usually measured in Megahertz (MHz) or Gigahertz (GHz).
5. Bit per second (bps)
bps is the unit for the data transfer rate (DTR) measured in bits per second, while Bps (Bytes per second) measures the transfer rate in bytes per second. The transfer rate can be between the CPU and RAM, between computers on a network, and so on.
Note that the symbol for a bit is a lowercase b, while the symbol for a Byte is an uppercase B.
| Name | Symbol | Data transfer rate |
| --- | --- | --- |
| Bit per second | bps | 1 bps |
| Byte per second | Bps | 8 bps |
| Kilobit per second | Kbps | 1,000 bps |
| Megabit per second | Mbps | 1,000 Kbps |
| Gigabit per second | Gbps | 1,000 Mbps |
| Kilobyte per second | KBps | 8 × 1,000 bps |
| Megabyte per second | MBps | 8 × 1,000 Kbps |
| Gigabyte per second | GBps | 8 × 1,000 Mbps |
Since 1 Byte = 8 bits, 1 Bps = 8 bps; the same relationship holds for KBps, MBps, and so on.
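As a quick Python sketch of the same conversion (assuming a hypothetical 100 Mbps link):

link_speed_mbps = 100                    # assumed link speed in megabits per second
link_speed_MBps = link_speed_mbps / 8    # 1 Byte = 8 bits
print(link_speed_MBps)                   # 12.5 MB per second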
6. Rotation speed of hard disk drives
With an HDD (hard disk drive), the speed of the drive is characterised by its RPM (revolutions per minute), the number of revolutions the platters make each minute. Today, hard disk drives typically have a rotational speed of 5,400 RPM or 7,200 RPM.
Below in italics are the outcomes for this unit. Please do not edit them. Use them as the headings for lessons. You can then add a couple of sentences to explain how to do the maths; you may add a picture or a link to a relevant site.
Here's a short [link] to help you understand some of the basic tools and uses of geometry.
labelling and naming triangles (eg ABC) and quadrilaterals (eg ABCD) in text and on diagrams
using the common conventions to mark equal intervals on diagrams
Imagine trying to live in a country where you don't know the language. It would be difficult to communicate even the most basic of things. That is why we need to use a universal language for mathematics: for example, labelling and naming shapes is important so we all understand exactly what is being communicated.
For example, this is the interval AB:
If I place another interval in, we have:
Now we have two intervals, which we can identify as AB and AC. So now if I mention AC, you know exactly which interval I am talking about.
Joining B and C, we get a triangle, which we will call ABC.
Again, it is vital to label points and use proper names to identify intervals, angles and shapes so that you can communicate effectively.
There are many other labels that we use in diagrams. One of the more common ones is the symbol for equal lengths. Put simply, a small line is placed across all intervals that have equal length.
This diagram shows the quadrilateral ABCD and indicates that the interval AB is equal to the interval DC. This can be written as AB = DC.
The diagram below shows that AB = DC and that BC = AD. Note that two lines are used on BC and AD to indicate that they are equal to each other, but not equal to the other intervals marked; so as the diagrams get more complex, you can use three lines or more to indicate equal lengths.
Here's a [link] to see how much you remember.
recognising and classifying types of triangles on the basis of their properties (acute-angled triangles, right-angled triangles, obtuse-angled triangles, scalene triangles, isosceles triangles and equilateral triangles)
Human blood is classified using two different methods:
• The ABO system uses the presence of antigens and has the classifications A, B, AB and O
• The Rh system classifies a blood type as either positive or negative.
Using both, we get the blood types (like O+, AB-, etc).
Triangles can also be classified in two different ways – one based on the lengths (equilateral, isosceles and scalene) and the other based on angles (acute, obtuse and right-angled). Hence any triangle can be classified using both classification systems.
This is an [activity] classifying triangles based on side lengths. This one is based on [angles].
constructing various types of triangles using geometrical instruments, given different information
eg the lengths of all sides, two sides and the included angle, and two angles and one side
Below are some short videos on how to do particular constructions. Doing these correctly and remembering what to do in an exam WILL TAKE PRACTICE! Do more than one.
Constructing a triangle given three sides.
Constructing a triangle given two lengths and one angle.
Constructing a triangle given one length and two angles.
distinguishing between convex and non-convex quadrilaterals (the diagonals of a convex quadrilateral lie inside the figure)
A polygon is any two–dimensional shape that is made up solely of straight lines (so a semi-circle is NOT a polygon). Convex polygons have vertices that point outwards while non-convex (or concave) polygons have vertices that point (or cave) inwards.
Another method to determine if a polygon is convex or non-convex is the diagonal test. If you draw diagonals between every pair of vertices, all of these lines will lie inside the shape for a convex polygon; if any lie outside the shape, it is a non-convex polygon.
All diagonals are inside
Diagonal CE is outside
investigating the properties of special quadrilaterals (trapeziums, kites, parallelograms, rectangles, squares and rhombuses) by using symmetry, paper folding, measurement and/or applying geometrical reasoning
classifying special quadrilaterals on the basis of their properties
investigating the line symmetries and the order of rotational symmetry of the special quadrilaterals
Here's a video on point or [rotational symmetry]. Here's one about [symmetry] in the world.
constructing various types of quadrilaterals
Mathematical notation is a system of symbolic representations of mathematical objects and ideas. Mathematical notations are used in mathematics, the physical sciences, engineering, and economics. Mathematical notations include relatively simple symbolic representations, such as the numbers 0, 1 and 2, function symbols sin and +; conceptual symbols, such as lim, dy/dx, equations and variables; and complex diagrammatic notations such as Penrose graphical notation and Coxeter–Dynkin diagrams.
A mathematical notation is a writing system used for recording concepts in mathematics.
- The notation uses symbols or symbolic expressions which are intended to have a precise semantic meaning.
- In the history of mathematics, these symbols have denoted numbers, shapes, patterns, and change. The notation can also include symbols for parts of the conventional discourse between mathematicians, when viewing mathematics as a language.
The media used for writing are recounted below, but common materials currently include paper and pencil, board and chalk (or dry-erase marker), and electronic media. Systematic adherence to mathematical concepts is a fundamental concept of mathematical notation. (See also some related concepts: Logical argument, Mathematical logic, and Model theory.)
A mathematical expression is a sequence of symbols which can be evaluated. For example, if the symbols represent numbers, the expressions are evaluated according to a conventional order of operations which provides for calculation, if possible, of any expressions within parentheses, followed by any exponents and roots, then multiplications and divisions and finally any additions or subtractions, all done from left to right. In a computer language, these rules are implemented by the compilers. For more on expression evaluation, see the computer science topics: eager evaluation, lazy evaluation, and evaluation operator.
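As a small illustration, here is a quick sketch in Python, which follows the same conventional order of operations for arithmetic expressions:

# Parentheses first, then exponents, then multiplication/division,
# then addition/subtraction, working from left to right
result = 2 + 3 * (4 - 1) ** 2   # (4 - 1) = 3, 3 ** 2 = 9, 3 * 9 = 27, 2 + 27 = 29
print(result)                   # 29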
Precise semantic meaning
Modern mathematics needs to be precise, because ambiguous notations do not allow formal proofs. Suppose that we have statements, denoted by some formal sequence of symbols, about some objects (for example, numbers, shapes, patterns). Until the statements can be shown to be valid, their meaning is not yet resolved. While reasoning, we might let the symbols refer to those denoted objects, perhaps in a model. The semantics of that object has a heuristic side and a deductive side. In either case, we might want to know the properties of that object, which we might then list in an intensional definition.
Those properties might then be expressed by some well-known and agreed-upon symbols from a table of mathematical symbols. This mathematical notation might include annotation such as
- "All x", "No x", "There is an x" (or its equivalent, "Some x"), "A set", "A function"
- "A mapping from the real numbers to the complex numbers"
In different contexts, the same symbol or notation can be used to represent different concepts. Therefore, to fully understand a piece of mathematical writing, it is important to first check the definitions that an author gives for the notations that are being used. This may be problematic if the author assumes the reader is already familiar with the notation in use.
It is believed that a mathematical notation to represent counting was first developed at least 50,000 years ago — early mathematical ideas such as finger counting have also been represented by collections of rocks, sticks, bone, clay, stone, wood carvings, and knotted ropes. The tally stick is a timeless way of counting. Perhaps the oldest known mathematical texts are those of ancient Sumer. The Census Quipu of the Andes and the Ishango Bone from Africa both used the tally mark method of accounting for numerical concepts.
The development of zero as a number is one of the most important developments in early mathematics. It was used as a placeholder by the Babylonians and Greek Egyptians, and then as an integer by the Mayans, Indians and Arabs. (See The history of zero for more information.)
Geometry becomes analytic
The mathematical viewpoints in geometry did not lend themselves well to counting. The natural numbers, their relationship to fractions, and the identification of continuous quantities actually took millennia to take form, and even longer to allow for the development of notation. It was not until the invention of analytic geometry by René Descartes that geometry became more subject to a numerical notation. Some symbolic shortcuts for mathematical concepts came to be used in the publication of geometric proofs. Moreover, the power and authority of geometry's theorem and proof structure greatly influenced non-geometric treatises, Isaac Newton's Principia Mathematica, for example.
Counting is mechanized
After the rise of Boolean algebra and the development of positional notation, it became possible to mechanize simple circuits for counting, first by mechanical means, such as gears and rods, using rotation and translation to represent changes of state, then by electrical means, using changes in voltage and current to represent the analogs of quantity. Today, computers use standard circuits to both store and change quantities, which represent not only numbers but pictures, sound, motion, and control.
The 18th and 19th centuries saw the creation and standardization of mathematical notation as used today. Euler was responsible for many of the notations in use today: the use of a, b, c for constants and x, y, z for unknowns, e for the base of the natural logarithm, sigma (Σ) for summation, i for the imaginary unit, and the functional notation f(x). He also popularized the use of π for Archimedes' constant (due to William Jones' proposal for the use of π in this way based on the earlier notation of William Oughtred). Many fields of mathematics bear the imprint of their creators for notation: the differential operator is due to Leibniz, the cardinal infinities to Georg Cantor (in addition to the lemniscate (∞) of John Wallis), the congruence symbol (≡) to Gauss, and so forth.
The rise of expression evaluators such as calculators and slide rules was only part of what was required to mathematicize civilization. Today, keyboard-based notations are used for the e-mail of mathematical expressions, the Internet shorthand notation. The wide use of programming languages, which teach their users the need for rigor in the statement of a mathematical expression (or else the compiler will not accept the formula), is also contributing toward a more mathematical viewpoint across all walks of life. Mathematically oriented markup languages such as TeX, LaTeX and, more recently, MathML are powerful enough that they qualify as mathematical notations in their own right.
For some people, computerized visualizations have been a boon to comprehending mathematics that mere symbolic notation could not provide. They can benefit from the wide availability of devices, which offer more graphical, visual, aural, and tactile feedback.
In the history of writing, ideographic symbols arose first, as more-or-less direct renderings of some concrete item. This has come full circle with the rise of computer visualization systems, which can be applied to abstract visualizations as well, such as for rendering some projections of a Calabi–Yau manifold.
Examples of abstract visualization which properly belong to the mathematical imagination can be found, for example in computer graphics. The need for such models abounds, for example, when the measures for the subject of study are actually random variables and not really ordinary mathematical functions.
Non-Latin-based mathematical notation
Modern Arabic mathematical notation is based mostly on the Arabic alphabet and is used widely in the Arab world, especially in pre-tertiary education. (Western notation uses Arabic numerals, but the Arabic notation also replaces Latin letters and related symbols with Arabic script.)
- Abuse of notation
- Bourbaki dangerous bend symbol
- History of mathematical notation
- ISO 31-11
- ISO/IEC 80000-2
- Knuth's up-arrow notation
- Mathematical Alphanumeric Symbols
- Notation in probability
- Scientific notation
- Table of mathematical symbols
- Typographical conventions in mathematical formulae
- Modern Arabic mathematical notation
- List of mathematical symbols
- An Introduction to the History of Mathematics (6th Edition) by Howard Eves (1990), p. 9.
- Georges Ifrah notes that humans learned to count on their hands. Ifrah shows, for example, a picture of Boethius (who lived 480–524 or 525) reckoning on his fingers in Ifrah 2000, p. 48.
- Gottfried Wilhelm Leibniz
- Ifrah, Georges (2000), The Universal History of Numbers: From prehistory to the invention of the computer., John Wiley and Sons, p. 48, ISBN 0-471-39340-1. Translated from the French by David Bellos, E.F. Harding, Sophie Wood and Ian Monk. Ifrah supports his thesis by quoting idiomatic phrases from languages across the entire world.
- Earliest Uses of Various Mathematical Symbols
- Mathematical ASCII Notation how to type math notation in any text editor.
- Mathematics as a Language at cut-the-knot
- Stephen Wolfram: Mathematical Notation: Past and Future. October 2000. Transcript of a keynote address presented at MathML and Math on the Web: MathML International Conference. |
In the late 1700s, French astronomer Charles Messier (1730-1817) compiled a catalog of fuzzy objects he came across while searching for comets. Messier compiled the list of objects to avoid so as not to confuse them with comets. In 1771, a list of 45 entries was published, with later updates as new lists of objects were generated. Messier's final list contained 103 entries; several more objects were added after studies of Messier's papers and correspondence, bringing the final list to 110 objects. Ironically, Messier's list of astronomical objects turned out to contain the deep-sky objects most visible in amateur astronomers' telescopes, and it is one of the most popular deep-sky lists used by amateur astronomers today.
An 18th century comet hunter probably best known for compiling a list of 110 celestial objects known as the "Messier Catalog" that is still in use today.
A French astronomer and comet hunter who prepared one of the earliest catalogs of nebulous objects and star clusters. The entire Messier catalog of 110 star clusters, nebulae and galaxies can be seen with a 3-inch telescope under dark skies.
The 18th-century French astronomer who compiled a list of 110 fuzzy, diffuse objects that appeared at fixed positions in the sky. Being a comet-hunter, Messier compiled this list of objects which he knew were not comets. His list is now well known to professional and amateur astronomers as containing the brightest and most striking nebulae, star clusters, and galaxies in the sky.
While hunting for comets in the skies above France, 18th century astronomer Charles Messier made a list of the positions of about 100 fuzzy, diffuse-looking objects which appeared at fixed positions in the sky. Although these objects looked like comets, Messier knew that since they did not move with respect to the background stars they could not be the undiscovered comets he was searching for. These objects are now well known to modern astronomers to be among the brightest and most striking gaseous nebulae, star clusters, and galaxies. Objects on Messier's list are still referred to by their "Messier number". For example, the Andromeda Galaxy, the 31st object on the list, is known as M31.
A signal is an electrical or electromagnetic quantity used to carry data from one system or network to another. More formally, a signal is a function that conveys information about a phenomenon.
In electronics and telecommunications, it refers to any time-varying voltage, current, or electromagnetic wave that carries information. A signal can also be defined as an observable change in a measurable quantity. There are two main types of signals: analog signals and digital signals.
In this Analog and Digital difference tutorial, you will learn:
- What is Signal?
- What is an Analog Signal?
- What is a Digital Signal?
- Characteristics of Analog Signal
- Characteristics of Digital Signals
- Difference Between Analog and Digital Signal
- Advantages of Analog Signals
- Advantages of Digital Signals
- Disadvantages of Analog Signals
- Disadvantages of Digital Signals
An analog signal is a continuous signal in which one time-varying quantity represents another time-based variable. These kinds of signals work with physical values and natural phenomena such as earthquakes, frequency, volcanoes, wind speed, weight, lightning, etc.
A digital signal is a signal that represents data as a sequence of separate values at any point in time; it can only take on one of a fixed number of values. Unlike an analog signal, which can take any real value within a continuous range, a digital signal is restricted to discrete values. Now, let's look at some key differences between digital and analog signals.
- An analog signal is a continuous signal, whereas digital signals are time-separated signals.
- An analog signal is denoted by sine waves, while a digital signal is denoted by square waves.
- An analog signal uses a continuous range of values to represent information; a digital signal, on the other hand, uses the discrete values 0 and 1 to represent information.
- Comparing digital and analog signals, the analog signal bandwidth is low while the bandwidth of the digital signal is high.
- Analog instruments give considerable observational errors, whereas digital instruments avoid observational errors such as parallax and approximation errors.
- Analog hardware offers little flexibility in implementation, but digital hardware offers flexibility in implementation.
- Comparing analog and digital signals, analog signals are suited for audio and video transmission, while digital signals are suited for computing and digital electronics.
Here are the essential characteristics of analog signals:
- These types of electronic signals are time-varying.
- They have minimum and maximum values, which may be positive or negative.
- They can be either periodic or non-periodic.
- Analog signals work on continuous data.
- The accuracy of the analog signal is not high when compared to the digital signal.
- They help you to measure natural or physical values.
- Analog signal output takes the form of a curve, line, or graph, so it may not be meaningful to everyone.
Here are the essential characteristics of digital signals:
- Digital signals are time-separated signals.
- This type of electronic signal can be processed and transmitted better than an analog signal.
- Digital signals are versatile, so they are widely used.
- The accuracy of the digital signal is better than that of the analog signal.
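To make the idea of discrete values concrete, here is a minimal Python sketch of sampling and quantizing a signal, assuming a 5 Hz sine wave as the analog source, a 50 Hz sampling rate, and 8 quantization levels (all of these values are illustrative, not taken from the text above):

import numpy as np

def analog(t):
    # A stand-in "analog" source: a continuous 5 Hz sine wave (assumed example)
    return np.sin(2 * np.pi * 5 * t)

fs = 50                                # assumed sampling rate in Hz
t = np.arange(0, 1, 1 / fs)            # discrete sample instants over one second
samples = analog(t)                    # sampling: the signal becomes discrete in time

levels = 8                             # assumed number of quantization levels
codes = np.round((samples + 1) / 2 * (levels - 1)).astype(int)  # discrete in amplitude too
print(codes[:10])                      # the digital signal: a sequence of integer codes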
Here are the important differences between analog and digital transmission:
| Analog Signal | Digital Signal |
| --- | --- |
| An analog signal is a continuous signal that represents physical measurements. | Digital signals are time-separated signals generated using digital modulation. |
| It is denoted by sine waves. | It is denoted by square waves. |
| It uses a continuous range of values to represent information. | It uses the discrete values 0 and 1 to represent information. |
| Temperature sensors, FM radio signals, photocells, light sensors, and resistive touch screens are examples of analog signals. | Computers, CDs, and DVDs are examples of digital signals. |
| The analog signal bandwidth is low. | The digital signal bandwidth is high. |
| Analog signals deteriorate due to noise during transmission and during write/read cycles. | Digital signals are relatively noise-immune, with little deterioration during transmission and write/read cycles. |
| Analog hardware offers little flexibility in implementation. | Digital hardware offers flexibility in implementation. |
| It is suited for audio and video transmission. | It is suited for computing and digital electronics. |
| Processing can be done in real time and consumes less bandwidth than a digital signal. | There is no guarantee that digital signal processing can be done in real time. |
| Analog instruments usually have a scale that is cramped at the lower end, giving considerable observational errors. | Digital instruments are free from this kind of observational error. |
| An analog signal has no fixed range. | A digital signal has a finite number of values, i.e., 0 and 1. |
Here are the pros/benefits of analog signals:
- Easier in processing
- Best suited for audio and video transmission.
- It has a low cost and is portable.
- It has a much higher density so that it can present more refined information.
- Not necessary to buy a new graphics board.
- Uses less bandwidth than digital sounds
- Provide more accurate representation of a sound
- It is the natural form of a sound.
Here are the pros/advantages of digital signals:
- Digital data can be easily compressed.
- Any information in the digital form can be encrypted.
- Equipment that uses digital signals is more common and less expensive.
- Digital signal makes running instruments free from observation errors like parallax and approximation errors.
- A lot of editing tools are available
- You can edit the sound without altering the original copy
- Easy to transmit the data over networks
Here are the cons/drawbacks of analog signals:
- Analog tends to have a lower quality signal than digital.
- The cables are sensitive to external influences.
- The cost of the Analog wire is high and not easily portable.
- Low availability of models with digital interfaces.
- Recording analog sound on tape is quite expensive if the tape is damaged
- It offers limitations in editing
- Tape is becoming hard to find
- It is quite difficult to synchronize analog sound
- Quality is easily lost
- Data can become corrupted
- Plenty of recording devices and formats which can become confusing to store a digital signal
- Digital sampling cuts up the analog sound wave, which means that you can't get a perfect reproduction of the original sound
- Offers poor multi-user interfaces
Here are the cons/drawbacks of digital signals:
- Sampling may cause loss of information.
- A/D and D/A conversion demands mixed-signal hardware.
- Processor speed is limited.
- It introduces quantization and round-off errors.
- It requires greater bandwidth.
- Systems and processing are more complex.
As travel to Mars becomes closer to reality, new research is exploring the risks associated with the deep space journey.
The first study on how exposure to deep space radiation could impact the human body during a mission to Mars shows that there could be an increased risk of leukemia.
The study, published in the journal Leukemia, was conducted by researchers at Wake Forest Institute for Regenerative Medicine with funding from NASA.
Senior researcher Christopher Porada told CBC News that the impact of deep space radiation isn't something the scientific community knows a lot about.
To date, research on how space travel impacts human physiology has focused on the effects of zero gravity, he said.
"Astronauts would be exposed for a long period of time to these new types of radiation that we don't know what the effects may be," said Porada, who's also an associate professor of regenerative medicine.
Porada's team looked at the combined effects of two kinds of radiation in deep space, solar energetic particles — which are also a factor in space travel closer to earth — and galactic cosmic ray radiation, found in deep space.
Cosmic radiation is made up of heavy metal nuclei that are present at relatively low levels but carry extremely high energies, he said. This increases the radiation risk because those high-energy particles release more radiation than the ones used in X-rays.
"The energy [in the particles] is so high there isn't type of shielding that could can be put on the ship that could protect astronauts from radiation."
Another challenge is that the protection in place for solar radiation, especially aluminum, can increase exposure to cosmic radiation, Porada said. Parts of the hull designed to protect it and the astronauts from the effects of the sun act as a kind of conductor for the cosmic radiation.
In order to test how a human would respond to deep space radiation, Porada's team calculated the total radiation expected from a 400-day round-trip to Mars and used a particle accelerator at NASA to create a beam of radiation and expose human stem cells to the simulated radiation, he said.
Once exposed to radiation, the stem cells were implanted into mice so researchers could track the effects.
Porada is quick to point out that instead of the stem cells being exposed to radiation over a months-long journey to Mars, they received the radiation in the span of a few minutes.
"It's certainly more damaging in the short run to give a huge blast of radiation in a small time period," Porada said. "Some studies suggest if you take the same dose protracted over a long period of time more of the cells may survive, but may have mutations."
Another possible difference in the outcome is when the human cells are exposed to radiation. In Porada's research, human stem cells received radiation and were then put into mice. Similar research at Columbia University found that human stem cells implanted into the mice prior to radiation exposure didn't develop leukemia.
"We are having an ongoing dialogue to try to figure out what may be causing the differences between the results," Porada said.
One of the next steps in research on deep space radiation will be how to mitigate the risks, he said.
Research is already underway to test how dietary supplements could help protect astronauts from the radiation, Porada said.
There is also research into creating protection for a spacecraft that mimics the Earth's magnetosphere so astronauts aren't exposed to the radiation while in transit to Mars.
"[It's about] knowing that there may be a risk, but finding ways to get around this risk so the mission can go ahead," Porada said. |
x86 Assembly/X86 Architecture
x86 Architecture
The x86 architecture has 8 General-Purpose Registers (GPR), 6 Segment Registers, 1 Flags Register and an Instruction Pointer. 64-bit x86 has additional registers.
General-Purpose Registers (GPR) - 16-bit naming conventions
The 8 GPRs are:
- Accumulator register (AX). Used in arithmetic operations
- Counter register (CX). Used in shift/rotate instructions and loops.
- Data register (DX). Used in arithmetic operations and I/O operations.
- Base register (BX). Used as a pointer to data (located in segment register DS, when in segmented mode).
- Stack Pointer register (SP). Pointer to the top of the stack.
- Stack Base Pointer register (BP). Used to point to the base of the stack.
- Source Index register (SI). Used as a pointer to a source in stream operations.
- Destination Index register (DI). Used as a pointer to a destination in stream operations.
The order in which they are listed here is for a reason: it is the same order that is used in a push-to-stack operation, which will be covered later.
All registers can be accessed in 16-bit and 32-bit modes. In 16-bit mode, the register is identified by its two-letter abbreviation from the list above. In 32-bit mode, this two-letter abbreviation is prefixed with an 'E' (extended). For example, 'EAX' is the accumulator register as a 32-bit value.
Similarly, in the 64-bit version, the 'E' is replaced with an 'R' (register), so the 64-bit version of 'EAX' is called 'RAX'.
It is also possible to address the first four registers (AX, CX, DX and BX) in their size of 16-bit as two 8-bit halves. The least significant byte (LSB), or low half, is identified by replacing the 'X' with an 'L'. The most significant byte (MSB), or high half, uses an 'H' instead. For example, CL is the LSB of the counter register, whereas CH is its MSB.
In total, this gives us five ways to access the accumulator, counter, data and base registers: 64-bit, 32-bit, 16-bit, 8-bit LSB, and 8-bit MSB. The other four are accessed in only four ways: 64-bit, 32-bit, 16-bit, and 8-bit. The following table summarises this:
| Register | Accumulator | Counter | Data | Base | Stack Pointer | Stack Base Pointer | Source | Destination |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 64-bit | RAX | RCX | RDX | RBX | RSP | RBP | RSI | RDI |
| 32-bit | EAX | ECX | EDX | EBX | ESP | EBP | ESI | EDI |
| 16-bit | AX | CX | DX | BX | SP | BP | SI | DI |
| 8-bit (LSB) | AL | CL | DL | BL | SPL | BPL | SIL | DIL |
| 8-bit (MSB) | AH | CH | DH | BH | | | | |
Segment Registers
The 6 Segment Registers are:
- Stack Segment (SS). Pointer to the stack.
- Code Segment (CS). Pointer to the code.
- Data Segment (DS). Pointer to the data.
- Extra Segment (ES). Pointer to extra data ('E' stands for 'Extra').
- F Segment (FS). Pointer to more extra data ('F' comes after 'E').
- G Segment (GS). Pointer to still more extra data ('G' comes after 'F').
Most applications on most modern operating systems (like FreeBSD, Linux or Microsoft Windows) use a memory model that points nearly all segment registers to the same place (and uses paging instead), effectively disabling their use. Typically the use of FS or GS is an exception to this rule, instead being used to point at thread-specific data.
EFLAGS Register
The EFLAGS is a 32-bit register used as a collection of bits representing Boolean values to store the results of operations and the state of the processor.
The names of these bits are listed below. The bits labelled 0 and 1 are reserved bits and shouldn't be modified.
The different uses of these flags are:
- Bit 0. CF: Carry Flag. Set if the last arithmetic operation carried (addition) or borrowed (subtraction) a bit beyond the size of the register. This is then checked when the operation is followed with an add-with-carry or subtract-with-borrow to deal with values too large for just one register to contain.
- Bit 2. PF: Parity Flag. Set if the number of set bits in the least significant byte is even.
- Bit 4. AF: Adjust Flag. Carry of Binary Coded Decimal (BCD) arithmetic operations.
- Bit 6. ZF: Zero Flag. Set if the result of an operation is zero (0).
- Bit 7. SF: Sign Flag. Set if the result of an operation is negative.
- Bit 8. TF: Trap Flag. Set to enable step-by-step (single-step) debugging.
- Bit 9. IF: Interruption Flag. Set if interrupts are enabled.
- Bit 10. DF: Direction Flag. Stream direction. If set, string operations will decrement their pointer rather than incrementing it, reading memory backwards.
- Bit 11. OF: Overflow Flag. Set if signed arithmetic operations result in a value too large for the register to contain.
- Bits 12-13. IOPL: I/O Privilege Level field (2 bits). I/O privilege level of the current process.
- Bit 14. NT: Nested Task flag. Controls chaining of interrupts. Set if the current process is linked to the next process.
- Bit 16. RF: Resume Flag. Controls the response to debug exceptions.
- Bit 17. VM: Virtual-8086 Mode. Set if in 8086 compatibility mode.
- Bit 18. AC: Alignment Check. Set if alignment checking of memory references is done.
- Bit 19. VIF: Virtual Interrupt Flag. Virtual image of IF.
- Bit 20. VIP: Virtual Interrupt Pending flag. Set if an interrupt is pending.
- Bit 21. ID: Identification Flag. The CPUID instruction is supported if this flag can be modified.
Instruction Pointer
The EIP register contains the address of the next instruction to be executed if no branching is done.
EIP can only be read through the stack after a call instruction.
Memory
The x86 architecture is little-endian, meaning that multi-byte values are written least significant byte first. (This refers only to the ordering of the bytes, not to the bits.)
So the 32-bit value B3B2B1B0 (in hexadecimal) would be stored in memory with byte B0 at the lowest address and byte B3 at the highest. For example, the 32-bit double word 0x1BA583D4 (the 0x denotes hexadecimal) would be written in memory as the byte sequence D4 83 A5 1B, which is what will be seen as 0xD4 0x83 0xA5 0x1B when doing a memory dump.
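As a quick cross-check, Python's struct module can pack the same value in little-endian byte order (the '<I' format string means a 32-bit unsigned integer, least significant byte first):

import struct

data = struct.pack('<I', 0x1BA583D4)
print(' '.join(f'{b:02x}' for b in data))   # d4 83 a5 1b, the least significant byte comes first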
Two's Complement Representation
Two's complement is the standard way of representing negative integers in binary. The sign is changed by inverting all of the bits and adding one.
0001 represents decimal 1
1111 represents decimal -1
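A small Python sketch of the same idea, assuming a 4-bit width:

def from_twos_complement(value, bits=4):
    # Interpret a bit pattern of the given width as a signed integer
    if value & (1 << (bits - 1)):   # sign bit set, so the value is negative
        return value - (1 << bits)
    return value

print(from_twos_complement(0b0001))   # 1
print(from_twos_complement(0b1111))   # -1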
Addressing modes
The addressing mode indicates the manner in which the operand is presented.
- Register Addressing
- (operand address R is in the address field)
mov ax, bx ; moves contents of register bx into ax
- Immediate
- (actual value is in the field)
mov ax, 1 ; moves value of 1 into register ax
mov ax, 010Ch ; moves value of 0x010C into register ax
- Direct memory addressing
- (operand address is in the address field)
.data
my_var dw 0abcdh ; my_var = 0xabcd
.code
mov ax, [my_var] ; copy my_var content into ax (ax=0xabcd)
- Direct offset addressing
- (uses arithmetics to modify address)
byte_table db 12, 15, 16, 22 ; table of bytes
mov al, [byte_table + 2]
mov al, byte_table[2] ; same as previous instruction
- Register Indirect
- (field points to a register that contains the operand address)
mov ax, [di]
- The registers used for indirect addressing are BX, BP, SI, DI
General-purpose registers (64-bit naming conventions)
64-bit x86 adds 8 more general-purpose registers, named R8, R9, R10 and so on up to R15.
- R8–R15 are the new 64-bit registers.
- R8D–R15D are the lowermost 32 bits of each register.
- R8W–R15W are the lowermost 16 bits of each register.
- R8B–R15B are the lowermost 8 bits of each register.
As well, 64-bit x86 includes SSE2, so each 64-bit x86 CPU has at least 8 registers (named XMM0–XMM7) that are 128 bits wide, but only accessible through SSE instructions. They cannot be used for quadruple-precision (128-bit) floating-point arithmetic, but they can each hold 2 double-precision or 4 single-precision floating-point values for a SIMD parallel instruction. They can also be operated on as 128-bit integers or vectors of shorter integers. If the processor supports AVX, as newer Intel and AMD desktop CPUs do, then each of these registers is actually the lower half of a 256-bit register (named YMM0–YMM7), the whole of which can be accessed with AVX instructions for further parallelization.
Stack
The stack is a Last In First Out (LIFO) data structure; data is pushed onto it and popped off of it in the reverse order.
mov ax, 006Ah
mov bx, 0F79Ah
mov cx, 1124h
push ax ; push the value in AX onto the top of the stack, which now holds the value 0x006A.
push bx ; do the same thing to the value in BX; the stack now has 0x006A and 0xF79A.
push cx ; now the stack has 0x006A, 0xF79A, and 0x1124.
call do_stuff ; do some stuff. The function is not forced to save the registers it uses, hence us saving them.
pop cx ; pop the element on top of the stack, 0x1124, into CX; the stack now has 0x006A and 0xF79A.
pop bx ; pop the element on top of the stack, 0xF79A, into BX; the stack now has just 0x006A.
pop ax ; pop the element on top of the stack, 0x006A, into AX; the stack is now empty.
The Stack is usually used to pass arguments to functions or procedures and also to keep track of control flow when the
call instruction is used. The other common use of the Stack is temporarily saving registers.
CPU Operation Modes
Real Mode
Real Mode is a holdover from the original Intel 8086. You generally won't need to know anything about it (unless you are programming for a DOS-based system or, more likely, writing a boot loader that is directly called by the BIOS).
The Intel 8086 accessed memory using 20-bit addresses. But, as the processor itself was 16-bit, Intel invented an addressing scheme that provided a way of mapping a 20-bit addressing space into 16-bit words. Today's x86 processors start in the so-called Real Mode, which is an operating mode that mimics the behavior of the 8086, with some very tiny differences, for backwards compatibility.
In Real Mode, a segment and an offset register are used together to yield a final memory address. The value in the segment register is multiplied by 16 (shifted 4 bits to the left) and the offset is added to the result. This provides a usable address space of 1 MB. However, a quirk in the addressing scheme allows access past the 1 MB limit if a segment address of 0xFFFF (the highest possible) is used; on the 8086 and 8088, all accesses to this area wrapped around to the low end of memory, but on the 80286 and later, up to 65520 bytes past the 1 MB mark can be addressed this way if the A20 address line is enabled. See: The A20 Gate Saga.
One benefit shared by Real Mode segmentation and by Protected Mode Multi-Segment Memory Model is that all addresses must be given relative to another address (this is, the segment base address). A program can have its own address space and completely ignore the segment registers, and thus no pointers have to be relocated to run the program. Programs can perform near calls and jumps within the same segment, and data is always relative to segment base addresses (which in the Real Mode addressing scheme are computed from the values loaded in the Segment Registers).
This is what the DOS *.COM format does; the contents of the file are loaded into memory and blindly run. However, due to the fact that Real Mode segments are always 64 KB long, COM files could not be larger than that (in fact, they had to fit into 65280 bytes, since DOS used the first 256 bytes of a segment for housekeeping data); for many years this wasn't a problem.
Protected Mode
Flat Memory Model
If programming in a modern 32-bit operating system (such as Linux, Windows), you are basically programming in flat 32-bit mode. Any register can be used in addressing, and it is generally more efficient to use a full 32-bit register instead of a 16-bit register part. Additionally, segment registers are generally unused in flat mode, and using them in flat mode is not considered best practice.
Multi-Segmented Memory Model
Using a 32-bit register to address memory, the program can access (almost) all of the memory in a modern computer. For earlier processors (with only 16-bit registers) the segmented memory model was used. The 'CS', 'DS', and 'ES' registers are used to point to the different chunks of memory. For a small program (small model) the CS=DS=ES. For larger memory models, these 'segments' can point to different locations.
Long Mode
The term "Long Mode" refers to the 64-bit mode. |
By John DeJesus
With probability problems in a math class, the probabilities you need are either given to you, or it is relatively easy to compute them in a straightforward manner.
But in reality, this is not the case. You need to compute the probability yourself based on the situation. That is where probability distributions can help.
Today we are going to explore the hypergeometric probability distribution by:
- Explaining what situations it is useful for.
- Giving the information that you need to apply this distribution.
- Coding some of the computations from scratch using Python.
- Applying our code to problems.
When do we use the hypergeometric distribution?
The hypergeometric distribution is a discrete probability distribution. It is used when you want to determine the probability of obtaining a certain number of successes without replacement from a specific sample size. This is similar to the binomial distribution, but this time you are not given the probability of a single success. Some example situations to apply this distribution are:
- The probability of getting 3 spades in a 5 card hand in poker.
- The probability of getting 4 to 5 non-land cards in an opening hand in Magic: The Gathering for a standard 60 card deck.
- The probability of drawing 60% boys for the freshman class from a mixed-gender group randomly selected in a charter school admissions lottery.
What parameters (information) do we need for hypergeometric computations?
To compute the probability mass function (aka a single instance) of a hypergeometric distribution, we need:
a) The total number of items we are drawing from (called N).
b) The total number of desired items in N (called A).
c) The number of draws from N we will make (called n).
d) The number of desired items in our draw of n items (called x).
There are different letters used for these variables depending on the tutorial. I am using the letters used from the video I posted below where I initially learned about the hypergeometric distribution.
Coding the Hypergeometric PMF, CDF, and plot functions from scratch
Recall that the Probability Mass Function (PMF) is what allows us to compute the probability of a single situation. In our case, that is the specific value for x above. The hypergeometric distribution PMF is P(X = x) = C(A, x) * C(N - A, n - x) / C(N, n), where C(a, b) is the number of ways to choose b items from a.
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import comb

def hypergeom_pmf(N, A, n, x):
    '''
    Probability Mass Function for Hypergeometric Distribution
    :param N: population size
    :param A: total number of desired items in N
    :param n: number of draws made from N
    :param x: number of desired items in our draw of n items
    :returns: PMF computed at x
    '''
    Achoosex = comb(A, x)
    NAchoosenx = comb(N-A, n-x)
    Nchoosen = comb(N, n)
    return Achoosex * NAchoosenx / Nchoosen
We import numpy for our computations later with our other functions. Matplotlib will be used to create our plot function later. The comb function from scipy is a built-in function to compute the 3 combinations in our PMF. We create a variable for each combination we need to compute and return the computation for the PMF.
The Cumulative Distribution Function (CDF) is a function that computes the total probabilities for a range of values for x. This will allow us to solve the second example with the Magic: The Gathering game. Another example is determining the probability of obtaining at most 2 spades in a five-card hand (aka 2 or fewer spades).
To answer the spades question with at most 2 spades, we need the CDF, which sums the PMF over x from 0 up to t: P(X <= t) = P(X = 0) + P(X = 1) + ... + P(X = t).
For the Magic: The Gathering game scenario, we can use the above function, but it needs to start at 4 and end at 5.
Luckily with Python, we can create a function flexible enough to handle both of these problems.
def hypergeom_cdf(N, A, n, t, min_value=None):
    '''
    Cumulative Distribution Function for Hypergeometric Distribution
    :param N: population size
    :param A: total number of desired items in N
    :param n: number of draws made from N
    :param t: number of desired items in our draw of n items, up to t
    :param min_value: optional lower bound on the number of desired items
    :returns: CDF computed up to t (from min_value to t if min_value is given)
    '''
    if min_value:
        return np.sum([hypergeom_pmf(N, A, n, x) for x in range(min_value, t+1)])
    return np.sum([hypergeom_pmf(N, A, n, x) for x in range(t+1)])
Finally, below is the code needed to plot the distribution for your situation. Some of the code and styling was based on this example from this scipy docs page.
def hypergeom_plot(N, A, n):
    '''
    Visualization of Hypergeometric Distribution for given parameters
    :param N: population size
    :param A: total number of desired items in N
    :param n: number of draws made from N
    :returns: Plot of Hypergeometric Distribution for given parameters
    '''
    x = np.arange(0, n+1)
    y = [hypergeom_pmf(N, A, n, x) for x in range(n+1)]
    plt.plot(x, y, 'bo')
    plt.vlines(x, 0, y, lw=2)
    plt.xlabel('# of desired items in our draw')
    plt.ylabel('Probabilities')
    plt.title('Hypergeometric Distribution Plot')
    plt.show()
Notice that since we are showing all possibilities from 0 to n, we do not need to create a parameter for x in this function. To see the hypergeometric distribution of the card scenario with a hand of 5, set N = 52, A = 13 and n = 5. You will then obtain the plot below:
Applying our code to problems
Now to make use of our functions. To answer the first question, we use the following parameters in hypergeom_pmf, since we want the probability of a single instance:
N = 52 because there are 52 cards in a deck of cards.
A = 13 since there are 13 spades total in a deck.
n = 5 since we are drawing a 5 card opening hand.
x = 3 since we want to draw 3 spades in our opening hand.
hypergeom_pmf(52, 13, 5, 3) gives us a probability of 0.08154261704681873, which is about 8.1%. Not a good chance of getting that hand.
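If you want a sanity check against a library implementation, scipy also ships a hypergeometric distribution; assuming I have its parameter order right (k, then population size M, number of desired items n, and number of draws N), it should agree with our function:

from scipy.stats import hypergeom

print(hypergeom.pmf(3, 52, 13, 5))   # ~0.0815, matching hypergeom_pmf(52, 13, 5, 3)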
For the second problem, I will provide some quick background information for you. Magic: The Gathering (Magic for short) is a collectible trading card game where players use creatures and spells to beat their opponents. Players start each game with 7 cards from a 60 card deck.
A deck consists of land cards that allow you to cast your spells, and the spells themselves (non-land cards). Depending on the deck, you usually want at 4–5 non-land cards in your opening hand and about 23 land cards in your deck.
Assuming the above, let's compute the probability of getting 4 or 5 non-land cards. We will use hypergeom_cdf with the following parameters:
N = 60 since a deck has 60 cards.
A = 37 since we are assuming there are 23 lands in our 60 card deck, leaving 37 non-land cards.
n = 7 since we will be starting with a 7 card hand.
t = 5 since that is the max number of non-lands we want.
min_value = 4 since that is the minimum number of non-land cards we want in our opening hand.
hypergeom_cdf(60, 37, 7, 5, 4) gives us a probability of 0.5884090217751665, which is about 59%. This means we have about a 59% chance of drawing 4 or 5 non-land cards in our opening hand of 7. Not bad. How would you change the number of lands and spells in your deck to increase this probability?
Suppose you run a charter school and you admit students through a lottery system. You have 734 applicants, of which 321 are boys and 413 girls. You are only admitting the first 150 students that are randomly selected. Considering your school has more girls than boys, you hope to get more boys this year. Suppose you want to know what your chances are of admitting 90 boys to your school (60% of the 150).
Take a sec to compute your answer then check past the picture below.
You should have used hypergeom_pmf since this is a single instance probability. Using hypergeom_pmf(734, 321, 150, 90) you should have obtained 3.1730164380350626e-06…which is less than one percent… Let's see a visual of the distribution to see where we have a better chance.
Seems we would have a better chance if we try to get about 65 boys, but the probabilities are still fairly low. Maybe you should change your admission process…
Harbin Mahjong Opening Hand Problems
Now to get you to practice with a game you may not be familiar with. Mahjong is a famous card game played all over China but has the same general win condition. It is often played for money, but that is not how I play. I am also not encouraging you to gamble…
The version of Mahjong we will be discussing is from a northeast area of China known as Harbin. It is home to the famous Harbin Ice Festival and my in-laws.
The game has 3 suits, numbered one to nine, with 4 copies of each card. This so far totals 108 cards. In addition, there are 4 copies of the Zhong card (the red block standing up in the image above), bringing our total to 112 cards. In Harbin mahjong, you win based on fulfilling particular conditions in addition to the winning hand. By getting at least 3 Zhong cards in your opening hand, you cover almost all of those conditions.
The question is what is the probability of obtaining at least 3 Zhong cards in your opening hand if you are not the dealer? This means your opening hand will have 13 cards. Try to compute this probability on your own using the correct function provided. Then post your answer in the comments below and I’ll let you know if you got it correct.
Thanks for reading! To see the code for this tutorial you can find it here at my Github.
If you would like a video tutorial of the information presented here, you can check out the video below. This is the one I used to learn about the hypergeometric distribution.
Until next time,
Radiation pressure is the pressure exerted upon any surface due to the exchange of momentum between the object and the electromagnetic field. This includes the momentum of light or electromagnetic radiation of any wavelength which is absorbed, reflected, or otherwise emitted (e.g. black-body radiation) by matter on any scale (from macroscopic objects to dust particles to gas molecules).
The forces generated by radiation pressure are generally too small to be noticed under everyday circumstances; however, they are important in some physical processes. This particularly includes objects in outer space where it is usually the main force acting on objects besides gravity, and where the net effect of a tiny force may have a large cumulative effect over long periods of time. For example, had the effects of the sun's radiation pressure on the spacecraft of the Viking program been ignored, the spacecraft would have missed Mars' orbit by about 15,000 km (9,300 mi). Radiation pressure from starlight is crucial in a number of astrophysical processes as well. The significance of radiation pressure increases rapidly at extremely high temperatures, and can sometimes dwarf the usual gas pressure, for instance in stellar interiors and thermonuclear weapons.
The radiation pressure of sunlight on earth is equivalent to that exerted by about a thousandth of a gram on an area of 1 square metre (measured in units of force: approx. 10 μN/m²).
Radiation pressure can equally well be accounted for by considering the momentum of a classical electromagnetic field or in terms of the momenta of photons, particles of light. The interaction of electromagnetic waves or photons with matter may involve an exchange of momentum. Due to the law of conservation of momentum, any change in the total momentum of the waves or photons must involve an equal and opposite change in the momentum of the matter it interacted with (Newton's third law of motion), as is illustrated in the accompanying figure for the case of light being perfectly reflected by a surface. This transfer of momentum is the general explanation for what we term radiation pressure.
The assertion that light, as electromagnetic radiation, has the property of momentum and thus exerts a pressure upon any surface it is exposed to was published by James Clerk Maxwell in 1862, and proven experimentally by Russian physicist Pyotr Lebedev in 1900 and by Ernest Fox Nichols and Gordon Ferrie Hull in 1901. The pressure is very small, but can be detected by allowing the radiation to fall upon a delicately poised vane of reflective metal in a Nichols radiometer (this should not be confused with the Crookes radiometer, whose characteristic motion is not caused by radiation pressure but by impacting gas molecules).
Radiation pressure can be viewed as a consequence of the conservation of momentum given the momentum attributed to electromagnetic radiation. That momentum can be equally well calculated on the basis of electromagnetic theory or from the combined momenta of a stream of photons, giving identical results as is shown below.
Radiation pressure from momentum of an electromagnetic wave
According to Maxwell's theory of electromagnetism, an electromagnetic wave carries momentum, which will be transferred to an opaque surface it strikes.
The energy flux (irradiance) of a plane wave is calculated using the Poynting vector, whose magnitude we denote by S. S divided by the speed of light is the density of the linear momentum per unit area (pressure) of the electromagnetic field. So, dimensionally, the Poynting vector is S = (power/area) = (rate of doing work/area) = (ΔF/Δt)·Δx/area, which is the speed of light, c = Δx/Δt, times pressure, ΔF/area. That pressure is experienced as radiation pressure on the surface: P_incident = ⟨S⟩/c, where ⟨S⟩ is the time-averaged magnitude of the Poynting vector and c is the speed of light.
If the surface is planar at an angle α to the incident wave, the intensity across the surface is geometrically reduced by the cosine of that angle, and the component of the radiation force normal to the surface is also reduced by the cosine of α, resulting in a pressure:
$$P_{\text{absorb}} = \frac{\langle S \rangle}{c}\cos^2\alpha$$
The momentum from the incident wave is in the same direction as that wave. But only the component of that momentum normal to the surface contributes to the pressure on the surface, as given above. The component of the force tangent to the surface is not called pressure.
Radiation pressure from reflection
The above treatment for an incident wave accounts for the radiation pressure experienced by a black (totally absorbing) body. If the wave is specularly reflected, then the recoil due to the reflected wave further contributes to the radiation pressure. In the case of a perfect reflector, this pressure is identical to the pressure caused by the incident wave:
$$P_{\text{reflect}} = \frac{\langle S \rangle}{c}\cos^2\alpha$$
thus doubling the net radiation pressure on the surface:
$$P = P_{\text{absorb}} + P_{\text{reflect}} = \frac{2\langle S \rangle}{c}\cos^2\alpha$$
For a partially reflective surface, the second term must be multiplied by the reflectivity (also known as reflection coefficient of intensity), so that the increase is less than double. For a diffusely reflective surface, the details of the reflection and geometry must be taken into account, again resulting in an increased net radiation pressure of less than double.
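The relations above can be folded into a single expression, P = (⟨S⟩/c)(1 + R) cos²α, for a flat surface of intensity reflectivity R at incidence angle α. The short sketch below is an illustrative calculation rather than part of the original text; the function name and the example irradiance of roughly 1361 W/m² are arbitrary choices.

```python
import math

def radiation_pressure(irradiance, alpha_rad=0.0, reflectivity=0.0, c=299_792_458.0):
    """Radiation pressure (Pa) on a flat surface.

    irradiance   -- magnitude of the Poynting vector <S> of the incident wave, W/m^2
    alpha_rad    -- angle between the wave's direction and the surface normal, radians
    reflectivity -- intensity reflection coefficient R (0 = black body, 1 = perfect mirror)
    """
    # Absorption contributes (S/c) cos^2(alpha); the specularly reflected
    # fraction R contributes the same amount again.
    return (irradiance / c) * (1.0 + reflectivity) * math.cos(alpha_rad) ** 2

# Sunlight (~1361 W/m^2) at normal incidence:
print(radiation_pressure(1361.0))                    # ~4.5e-6 Pa, absorbing surface
print(radiation_pressure(1361.0, reflectivity=1.0))  # ~9.1e-6 Pa, perfect reflector
```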
Radiation pressure by emission
Just as a wave reflected from a body contributes to the net radiation pressure experienced, a body that emits radiation of its own (rather than reflecting it) experiences a radiation pressure again given by the irradiance of that emission in the direction normal to the surface, Ie:
$$P_{\text{emit}} = \frac{I_e}{c}$$
The emission can be from black-body radiation or any other radiative mechanism. Since all materials emit black-body radiation (unless they are totally reflective or at absolute zero), this source for radiation pressure is ubiquitous but usually very tiny. However, because black-body radiation increases rapidly with temperature (according to the fourth power of temperature as given by the Stefan–Boltzmann law), radiation pressure due to the temperature of a very hot object (or due to incoming black-body radiation from similarly hot surroundings) can become very significant. This becomes important in stellar interiors which are at millions of degrees.
Radiation pressure in terms of photons
Electromagnetic radiation can be viewed in terms of particles rather than waves; these particles are known as photons. Photons have no rest mass; however, since photons are never at rest (they always move at the speed of light), they nonetheless carry a momentum given by:
$$p = \frac{E_p}{c} = \frac{h\nu}{c} = \frac{h}{\lambda}$$
where Ep is the photon energy, h is Planck's constant, ν the frequency, and λ the wavelength.
The radiation pressure again can be seen as the transfer of each photon's momentum to the opaque surface, plus the momentum due to a (possible) recoil photon for a (partially) reflecting surface. Since an incident wave of irradiance If over an area A has a power of IfA, this implies a flux of If/Ep photons per second per unit area striking the surface. Combining this with the above expression for the momentum of a single photon, results in the same relationships between irradiance and radiation pressure described above using classical electromagnetics. And again, reflected or otherwise emitted photons will contribute to the net radiation pressure identically.
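As a consistency check of the photon picture described above, the following sketch (an illustration, not part of the source) multiplies the photon flux If/Ep by the single-photon momentum Ep/c and confirms that the product reduces to If/c, independent of wavelength.

```python
h = 6.626_070_15e-34   # Planck constant, J s
c = 299_792_458.0      # speed of light, m/s

def pressure_from_photons(irradiance, wavelength):
    """Absorbing-surface radiation pressure (Pa) computed photon by photon."""
    photon_energy = h * c / wavelength          # E_p = h * nu
    photon_momentum = photon_energy / c         # p = E_p / c
    photon_flux = irradiance / photon_energy    # photons per second per m^2
    return photon_flux * photon_momentum        # momentum delivered per second per m^2

# The result equals I/c regardless of the wavelength chosen:
for lam in (500e-9, 1064e-9, 10e-6):
    print(lam, pressure_from_photons(1361.0, lam), 1361.0 / c)
```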
Compression in a uniform radiation field
In general, the pressure of electromagnetic waves can be obtained from the vanishing of the trace of the electromagnetic stress tensor. Since this trace equals 3P − u, we get
$$P = \frac{u}{3}$$
where u is the radiation energy density per unit volume.
This can also be shown in the specific case of the pressure exerted on surfaces of a body in thermal equilibrium with its surroundings, at a temperature T: The body will be surrounded by a uniform radiation field described by the Planck black-body radiation law, and will experience a compressive pressure due to that impinging radiation, its reflection, and its own black body emission. From that it can be shown that the resulting pressure is equal to one third of the total radiant energy per unit volume in the surrounding space.
By using the Stefan–Boltzmann law, this can be expressed as
$$P = \frac{4\sigma}{3c}T^4$$
where σ is the Stefan–Boltzmann constant.
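As a numerical illustration of this relation (my own sketch, not from the source), the pressure of isotropic black-body radiation, P = 4σT⁴/(3c), can be evaluated at a few temperatures; at room temperature it is negligible, while at the temperatures quoted later for stellar interiors it becomes enormous.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 299_792_458.0        # speed of light, m/s

def blackbody_radiation_pressure(T):
    """Pressure (Pa) of isotropic black-body radiation at temperature T (K): P = 4*sigma*T^4 / (3c)."""
    return 4.0 * SIGMA * T**4 / (3.0 * C)

# Room temperature, solar photosphere, solar core (approximate temperatures):
for T in (300.0, 6.0e3, 15.7e6):
    print(f"T = {T:>10.3g} K   P_rad = {blackbody_radiation_pressure(T):.3e} Pa")
```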
Solar radiation pressure
Solar radiation pressure is the pressure due to the Sun's radiation; it is strongest at closer distances to the Sun and is therefore especially relevant within the Solar System. While it acts on all objects, its net effect is generally greater on smaller bodies, since they have a larger ratio of surface area to mass. All spacecraft experience such a pressure, except when they are in the shadow of a larger orbiting body.
All stars have a spectral energy distribution that depends on their surface temperature. The distribution is approximately that of black-body radiation. This distribution must be taken into account when calculating the radiation pressure or identifying reflector materials for optimizing a solar sail for instance.
Pressures of absorption and reflection
Solar radiation pressure at the Earth's distance from the Sun may be calculated by dividing the solar constant GSC (above) by the speed of light c. For an absorbing sheet facing the Sun, this is simply:
$$P = \frac{G_{\text{SC}}}{c} \approx 4.5\ \mu\text{Pa}$$
This result is in the SI unit pascal (Pa), equivalent to N/m² (newtons per square metre). For a sheet at an angle α to the Sun, the effective area A of the sheet is reduced by a geometrical factor, resulting in a force in the direction of the sunlight of:
$$F = \frac{G_{\text{SC}}}{c} A \cos\alpha$$
To find the component of this force normal to the surface, another cosine factor must be applied, resulting in a pressure P on the surface of:
$$P = \frac{G_{\text{SC}}}{c}\cos^2\alpha$$
Note, however, that in order to account for the net effect of solar radiation on a spacecraft for instance, one would need to consider the total force (in the direction away from the sun) given by the preceding equation, rather than just the component normal to the surface that we identify as "pressure".
The solar constant is defined for the Sun's radiation at the distance to the Earth, also known as one astronomical unit (AU). Consequently, at a distance of R astronomical units (R thus being dimensionless), applying the inverse-square law, we would find:
$$P = \frac{G_{\text{SC}}}{c\,R^2}\cos^2\alpha$$
Finally, considering not an absorbing but a perfectly reflecting surface, the pressure is doubled due to the reflected wave, resulting in:
$$P = \frac{2\,G_{\text{SC}}}{c\,R^2}\cos^2\alpha$$
Note that unlike the case of an absorbing material, the resulting force on a reflecting body is given exactly by this pressure acting normal to the surface, with the tangential forces from the incident and reflecting waves canceling each other. In practice, materials are neither totally reflecting nor totally absorbing, so the resulting force will be a weighted average of the forces calculated using these formulae.
Calculated solar radiation pressure on a perfect reflector at normal incidence (α = 0), in μPa (μN/m²):
- 0.20 AU: 227
- 0.39 AU (Mercury): 59.7
- 0.72 AU (Venus): 17.5
- 1.00 AU (Earth): 9.08
- 1.52 AU (Mars): 3.93
- 3.00 AU (typical asteroid): 1.01
- 5.20 AU (Jupiter): 0.34
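The tabulated values can be reproduced from the inverse-square relation above. The short sketch below is an illustration (the solar constant at 1 AU is taken as roughly 1361 W/m², an assumed value) and regenerates the perfect-reflector pressures at normal incidence.

```python
G_SC = 1361.0          # solar constant at 1 AU, W/m^2 (approximate)
C = 299_792_458.0      # speed of light, m/s

def solar_pressure_reflector(distance_au):
    """Radiation pressure (Pa) on a perfect reflector at normal incidence, distance_au AU from the Sun."""
    return 2.0 * G_SC / (C * distance_au**2)

for r, body in [(0.20, ""), (0.39, "Mercury"), (0.72, "Venus"), (1.00, "Earth"),
                (1.52, "Mars"), (3.00, "typical asteroid"), (5.20, "Jupiter")]:
    print(f"{r:4.2f} AU {body:16s} {solar_pressure_reflector(r) * 1e6:7.2f} uPa")
```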
Radiation pressure perturbations
Solar radiation pressure is a source of orbital perturbations. It significantly affects the orbits and trajectories of small bodies including all spacecraft.
Solar radiation pressure affects bodies throughout much of the Solar System. Small bodies are more affected than large ones because of their lower mass relative to their surface area. Spacecraft are affected along with natural bodies (comets, asteroids, dust grains, gas molecules).
The radiation pressure results in forces and torques on the bodies that can change their translational and rotational motions. Translational changes affect the orbits of the bodies. Rotational rates may increase or decrease. Loosely aggregated bodies may break apart under high rotation rates. Dust grains can either leave the Solar System or spiral into the Sun.
A whole body is typically composed of numerous surfaces (facets) that have different orientations on the body. The facets may be flat or curved, have different areas, and have optical properties that differ from those of the other facets.
At any particular time, some facets will be exposed to the Sun and some will be in shadow. Each surface exposed to the Sun will be reflecting, absorbing, and emitting radiation. Facets in shadow will be emitting radiation. The summation of pressures across all of the facets will define the net force and torque on the body. These can be calculated using the equations in the preceding sections.
The Yarkovsky effect affects the translation of a small body. It results from a face leaving solar exposure being at a higher temperature than a face approaching solar exposure. The radiation emitted from the warmer face will be more intense than that of the opposite face, resulting in a net force on the body that will affect its motion.
The Poynting–Robertson effect applies to grain-size particles. From the perspective of a grain of dust circling the Sun, the Sun's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. (The angle of aberration is tiny since the radiation is moving at the speed of light while the dust grain is moving many orders of magnitude slower than that.) The result is a gradual spiral of dust grains into the Sun. Over long periods of time, this effect cleans out much of the dust in the Solar System.
While rather small in comparison to other forces, the radiation pressure force is inexorable. Over long periods of time, the net effect of the force is substantial. Such feeble pressures can produce marked effects upon minute particles like gas ions and electrons, and are essential in the theory of electron emission from the Sun, of cometary material, and so on.
Because the ratio of surface area to volume (and thus mass) increases with decreasing particle size, dusty (micrometre-size) particles are susceptible to radiation pressure even in the outer solar system. For example, the evolution of the outer rings of Saturn is significantly influenced by radiation pressure.
As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief."
Solar sailing, an experimental method of spacecraft propulsion, uses radiation pressure from the Sun as a motive force. The idea of interplanetary travel by light was mentioned by Jules Verne in From the Earth to the Moon.
A sail reflects about 90% of the incident radiation. The 10% that is absorbed is radiated away from both surfaces, with the proportion emitted from the unlit surface depending on the thermal conductivity of the sail. A sail has curvature, surface irregularities, and other minor factors that affect its performance.
Cosmic effects of radiation pressure
Radiation pressure has had a major effect on the development of the cosmos, from the birth of the universe to the ongoing formation of stars and the shaping of clouds of dust and gases on a wide range of scales.
The early universe
Galaxy formation and evolution
The process of galaxy formation and evolution began early in the history of the cosmos. Observations of the early universe strongly suggest that objects grew from bottom-up (i.e., smaller objects merging to form larger ones). As stars are thereby formed and become sources of electromagnetic radiation, radiation pressure from the stars becomes a factor in the dynamics of remaining circumstellar material.
Clouds of dust and gases
The gravitational compression of clouds of dust and gases is strongly influenced by radiation pressure, especially when the condensations lead to star births. The larger young stars forming within the compressed clouds emit intense levels of radiation that shift the clouds, causing dispersion in some nearby regions and condensation in others, which in turn influences star-birth rates in those regions.
Clusters of stars
Stars predominantly form in regions of large clouds of dust and gases, giving rise to star clusters. Radiation pressure from the member stars eventually disperses the clouds, which can have a profound effect on the evolution of the cluster.
Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal.
Star formation is the process by which dense regions within molecular clouds in interstellar space collapse to form stars. As a branch of astronomy, star formation includes the study of the interstellar medium and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function.
Stellar planetary systems
Planetary systems are generally believed to form as part of the same process that results in star formation. A protoplanetary disk forms by gravitational collapse of a molecular cloud, called a solar nebula, and then evolves into a planetary system by collisions and gravitational capture. Radiation pressure can clear a region in the immediate vicinity of the star. As the formation process continues, radiation pressure continues to play a role in affecting the distribution of matter. In particular, dust and grains can spiral into the star or escape the stellar system under the action of radiation pressure.
In stellar interiors the temperatures are very high. Stellar models predict a temperature of 15 MK in the center of the Sun, and at the cores of supergiant stars the temperature may exceed 1 GK. As the radiation pressure scales as the fourth power of the temperature, it becomes important at these high temperatures. In the Sun, radiation pressure is still quite small when compared to the gas pressure. In the heaviest non-degenerate stars, radiation pressure is the dominant pressure component.
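A rough comparison of the two pressure components is sketched below. This is my own illustration, using assumed solar-core values (temperature ≈ 15.7 MK, density ≈ 150 g/cm³, mean molecular mass ≈ 0.6 atomic mass units) that are not given in the text above; it shows how small radiation pressure is relative to gas pressure in the Sun, and, via the T⁴ scaling, how steeply it grows with temperature.

```python
A_RAD = 7.5657e-16      # radiation constant a = 4*sigma/c, J m^-3 K^-4
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_U = 1.660539e-27      # atomic mass unit, kg

def radiation_pressure(T):
    """Radiation pressure P_rad = a T^4 / 3, in Pa."""
    return A_RAD * T**4 / 3.0

def ideal_gas_pressure(T, density, mu):
    """Ideal-gas pressure P_gas = rho k T / (mu m_u), in Pa."""
    return density * K_B * T / (mu * M_U)

# Assumed solar-core conditions (approximate, for illustration only):
T, rho, mu = 15.7e6, 1.5e5, 0.6        # K, kg/m^3, dimensionless
print("P_rad =", radiation_pressure(T), "Pa")
print("P_gas =", ideal_gas_pressure(T, rho, mu), "Pa")
print("ratio =", radiation_pressure(T) / ideal_gas_pressure(T, rho, mu))
```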
Solar radiation pressure strongly affects comet tails. Solar heating causes gases to be released from the comet nucleus, which also carry away dust grains. Radiation pressure and solar wind then drive the dust and gases away from the Sun's direction. The gases form a generally straight tail, while slower moving dust particles create a broader, curving tail.
Laser applications of radiation pressure
Lasers can be used as a source of monochromatic light of wavelength λ. With a set of lenses, the beam can be focused to a diffraction-limited spot roughly one wavelength in diameter. The radiation pressure of a 30 mW laser at 1064 nm can therefore be estimated as the focused intensity divided by the speed of light:
$$P = \frac{I}{c} = \frac{W}{\pi r^2 c}$$
where W is the beam power and r the radius of the focal spot; for a spot about one wavelength across, this is on the order of 100 Pa.
This is used in optical tweezers.
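A back-of-the-envelope version of that calculation is sketched below. It is an illustration only, under the assumption (not stated in the text) that the beam is focused to a spot roughly one wavelength in diameter.

```python
import math

C = 299_792_458.0      # speed of light, m/s

def laser_radiation_pressure(power_w, spot_diameter_m):
    """Pressure (Pa) of an absorbed laser beam focused to a circular spot: P = I / c."""
    area = math.pi * (spot_diameter_m / 2.0) ** 2
    intensity = power_w / area
    return intensity / C

# 30 mW beam at 1064 nm, assumed focused to a spot one wavelength in diameter:
print(laser_radiation_pressure(30e-3, 1064e-9))   # on the order of 1e2 Pa
```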
Laser cooling is applied to cool materials to very close to absolute zero. Atoms traveling towards a laser light source see the light Doppler-shifted onto the absorption frequency of the target element. The radiation pressure on the atom slows its movement in that direction until the Doppler shift moves the light out of the element's absorption range, producing an overall cooling effect.
Large lasers operating in space have been suggested as a means of propelling sail craft in beam-powered propulsion.
The reflection of a laser pulse from the surface of an elastic solid gives rise to various types of elastic waves that propagate inside the solid. The weakest waves are generally those that are generated by the radiation pressure acting during the reflection of the light. Recently, such light-pressure-induced elastic waves were observed inside an ultrahigh-reflectivity dielectric mirror. These waves are the most basic fingerprint of a light-solid matter interaction on the macroscopic scale.
- Stellar Atmospheres, D. Mihalas (1978), Second edition, W H Freeman & Co
- Eddington, A. S., & Eddington, A. S. (1988). The internal constitution of the stars. Cambridge University Press.
- Chandrasekhar, S. (2013). Radiative transfer. Courier Corporation.
- Eugene Hecht, "Optics", 4th edition (p. 57)
- Johannes Kepler (1619). De Cometis Libelli Tres.
- Lebedev, P. (1901). "Untersuchungen über die Druckkräfte des Lichtes". Annalen der Physik.
- Nichols, E. F.; Hull, G. F. (1903). "The Pressure due to Radiation". The Astrophysical Journal. 17 (5): 315–351.
- Wright, Jerome L. (1992), Space Sailing, Gordon and Breach Science Publishers
- Shankar R., Principles of Quantum Mechanics, 2nd edition.
- Carroll, Bradley W. & Dale A. Ostlie, An Introduction to Modern Astrophysics, 2nd edition.
- Jackson, John David, (1999) Classical Electrodynamics.
- Kardar, Mehran. "Statistical Physics of Particles".
- Kopp, G.; Lean, J. L. (2011). "A new, lower value of total solar irradiance: Evidence and climate significance". Geophysical Research Letters. 38: n/a. doi:10.1029/2010GL045777.
- Georgevic, R. M. (1973) "The Solar Radiation Pressure Forces and Torques Model", The Journal of the Astronautical Sciences, Vol. 27, No. 1, Jan–Feb. First known publication describing how solar radiation pressure creates forces and torques that affect spacecraft.
- Einstein, A. (1909). On the development of our views concerning the nature and constitution of radiation. Translated in: The Collected Papers of Albert Einstein, vol. 2 (Princeton University Press, Princeton, 1989). Princeton, New Jersey: Princeton University Press. p. 391.
- Karel Velan, A. (1992), "The Birth of the First Generation of Stars", The Multi-Universe Cosmos, Springer US, pp. 267–278, doi:10.1007/978-1-4684-6030-8_22, ISBN 9781468460322
- Unruh, W. G.; Semenoff, G. W. (eds.) (1988). The Early Universe. NATO Scientific Affairs Division. Dordrecht: D. Reidel. ISBN 9027726191. OCLC 16684785.
- Longair, Malcolm S. (2008). Galaxy Formation. Springer. ISBN 9783540734772. OCLC 212409895.
- Dale A. Ostlie and Bradley W. Carroll, An Introduction to Modern Astrophysics (2nd edition), page 341, Pearson, San Francisco, 2007
- Požar, T.; Možina, J. (2013). "Measurement of Elastic Waves Induced by the Reflection of Light". Physical Review Letters. 111 (18): 185501. doi:10.1103/PhysRevLett.111.185501. PMID 24237537.
- Požar, T.; Laloš, J.; Babnik, A.; Petkovšek, R.; Bethune-Waddell, M.; Chau, K. J.; Lukasievicz, G. V. B.; Astrath, N. G. C. (2018). "Isolated detection of elastic waves driven by the momentum of light". Nature Communications. 9 (1): 3340. doi:10.1038/s41467-018-05706-3. PMC 6105914. PMID 30131489.
Nuclear fuel cycle
The nuclear fuel cycle, also called nuclear fuel chain, is the progression of nuclear fuel through a series of differing stages. It consists of steps in the front end, which are the preparation of the fuel, steps in the service period in which the fuel is used during reactor operation, and steps in the back end, which are necessary to safely manage, contain, and either reprocess or dispose of spent nuclear fuel. If spent fuel is not reprocessed, the fuel cycle is referred to as an open fuel cycle (or a once-through fuel cycle); if the spent fuel is reprocessed, it is referred to as a closed fuel cycle.
- 1 Basic concepts
- 2 Front end
- 3 Service period
- 3.1 Transport of radioactive materials
- 3.2 In-core fuel management
- 3.3 On-load reactors
- 3.4 Interim storage
- 3.5 Transportation
- 3.6 Reprocessing
- 3.7 Partitioning and transmutation
- 3.8 Waste disposal
- 4 Fuel cycles
- 4.1 Once-through nuclear fuel cycle
- 4.2 Plutonium cycle
- 4.3 Minor actinides recycling
- 4.4 Thorium cycle
- 4.5 Current industrial activity
- 5 References
- 6 External links
Nuclear power relies on fissionable material that can sustain a chain reaction with neutrons. Examples of such materials include uranium and plutonium. Most nuclear reactors use a moderator to lower the kinetic energy of the neutrons and increase the probability that fission will occur. This allows reactors to use material with far lower concentration of fissile isotopes than are needed for nuclear weapons. Graphite and heavy water are the most effective moderators, because they slow the neutrons through collisions without absorbing them. Reactors using heavy water or graphite as the moderator can operate using natural uranium.
A light water reactor (LWR) uses water in the form that occurs in nature, and requires fuel enriched to higher concentrations of fissile isotopes. Typically, LWRs use uranium enriched to 3–5% U-235, the only fissile isotope that is found in significant quantity in nature. One alternative to this low-enriched uranium (LEU) fuel is mixed oxide (MOX) fuel produced by blending plutonium with natural or depleted uranium, and these fuels provide an avenue to utilize surplus weapons-grade plutonium. Another type of MOX fuel involves mixing LEU with thorium, which generates the fissile isotope U-233. Both plutonium and U-233 are produced from the absorption of neutrons by irradiating fertile materials in a reactor, in particular the common uranium isotope U-238 and thorium, respectively, and can be separated from spent uranium and thorium fuels in reprocessing plants.
Some reactors do not use moderators to slow the neutrons. Like nuclear weapons, which also use unmoderated or "fast" neutrons, these fast-neutron reactors require much higher concentrations of fissile isotopes in order to sustain a chain reaction. They are also capable of breeding fissile isotopes from fertile materials; a breeder reactor is one that generates more fissile material in this way than it consumes.
During the nuclear reaction inside a reactor, the fissile isotopes in nuclear fuel are consumed, producing more and more fission products, most of which are considered radioactive waste. The buildup of fission products and consumption of fissile isotopes eventually stop the nuclear reaction, causing the fuel to become a spent nuclear fuel. When 3% enriched LEU fuel is used, the spent fuel typically consists of roughly 1% U-235, 95% U-238, 1% plutonium and 3% fission products. Spent fuel and other high-level radioactive waste is extremely hazardous, although nuclear reactors produce relatively small volumes of waste compared to other power plants because of the high energy density of nuclear fuel. Safe management of these byproducts of nuclear power, including their storage and disposal, is a difficult problem for any country using nuclear power.
- Uranium ore – the principal raw material of nuclear fuel
- Yellowcake – the form in which uranium is transported to a conversion plant
- UF6 – used in enrichment
- Nuclear fuel – a compact, inert, insoluble solid
A deposit of uranium, such as uraninite, discovered by geophysical techniques, is evaluated and sampled to determine the amounts of uranium materials that are extractable at specified costs from the deposit. Uranium reserves are the amounts of ore that are estimated to be recoverable at stated costs.
Naturally occurring uranium consists primarily of two isotopes U-238 and U-235, with 99.28% of the metal being U-238 while 0.71% is U-235, and the remaining 0.01% is mostly U-234. The number in such names refers to the isotope's atomic mass number, which is the number of protons plus the number of neutrons in the atomic nucleus.
The atomic nucleus of U-235 will nearly always fission when struck by a free neutron, and the isotope is therefore said to be a "fissile" isotope. The nucleus of a U-238 atom on the other hand, rather than undergoing fission when struck by a free neutron, will nearly always absorb the neutron and yield an atom of the isotope U-239. This isotope then undergoes natural radioactive decay to yield Pu-239, which, like U-235, is a fissile isotope. The atoms of U-238 are said to be fertile, because, through neutron irradiation in the core, some eventually yield atoms of fissile Pu-239.
Uranium ore can be extracted through conventional mining in open pit and underground methods similar to those used for mining other metals. In-situ leach mining methods also are used to mine uranium in the United States. In this technology, uranium is leached from the in-place ore through an array of regularly spaced wells and is then recovered from the leach solution at a surface plant. Uranium ores in the United States typically range from about 0.05 to 0.3% uranium oxide (U3O8). Some uranium deposits developed in other countries are of higher grade and are also larger than deposits mined in the United States. Uranium is also present in very low-grade amounts (50 to 200 parts per million) in some domestic phosphate-bearing deposits of marine origin. Because very large quantities of phosphate-bearing rock are mined for the production of wet-process phosphoric acid used in high analysis fertilizers and other phosphate chemicals, at some phosphate processing plants the uranium, although present in very low concentrations, can be economically recovered from the process stream.
Mined uranium ores normally are processed by grinding the ore materials to a uniform particle size and then treating the ore to extract the uranium by chemical leaching. The milling process commonly yields dry powder-form material consisting of natural uranium, "yellowcake", which is sold on the uranium market as U3O8. Note that the material isn't always yellow.
The milled uranium oxide, U3O8 (triuranium octoxide), is then usually processed into one of two substances, depending on the intended use.
For use in most reactors, U3O8 is usually converted to uranium hexafluoride (UF6), the input stock for most commercial uranium enrichment facilities. A solid at room temperature, uranium hexafluoride becomes gaseous at 57 °C (134 °F). At this stage of the cycle, the uranium hexafluoride conversion product still has the natural isotopic mix (99.28% U-238 plus 0.71% U-235).
In the current nuclear industry the volume of material converted directly to UO2 is typically quite small compared to that converted to UF6.
The natural concentration (0.71%) of the fissionable isotope U-235 is less than that required to sustain a nuclear chain reaction in light water reactor cores. Accordingly UF6 produced from natural uranium sources must be enriched to a higher concentration of the fissionable isotope before being used as nuclear fuel in such reactors. The level of enrichment for a particular nuclear fuel order is specified by the customer according to the application they will use it for: light-water reactor fuel normally is enriched to 3.5% U-235, but uranium enriched to lower concentrations is also required. Enrichment is accomplished using any of several methods of isotope separation. Gaseous diffusion and gas centrifuge are the commonly used uranium enrichment methods, but new enrichment technologies are currently being developed.
The bulk (96%) of the byproduct from enrichment is depleted uranium (DU), which can be used for armor, kinetic energy penetrators, radiation shielding and ballast. As of 2008 there are vast quantities of depleted uranium in storage. The United States Department of Energy alone has 470,000 tonnes. About 95% of depleted uranium is stored as uranium hexafluoride (UF6).
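The split between enriched product and depleted tails follows from a simple U-235 mass balance: total mass and U-235 mass are both conserved across the enrichment plant. The sketch below is illustrative only; the 0.25% tails assay is an assumed value (not given in the text), and the exact feed-to-tails split varies with the assays chosen.

```python
def enrichment_mass_balance(product_kg, x_product, x_feed=0.0071, x_tails=0.0025):
    """Natural-uranium feed and depleted-uranium tails (kg) for a given enriched product.

    From conservation of total mass (feed = product + tails) and of U-235 mass:
        feed = product * (x_product - x_tails) / (x_feed - x_tails)
    """
    feed = product_kg * (x_product - x_tails) / (x_feed - x_tails)
    tails = feed - product_kg
    return feed, tails

feed, tails = enrichment_mass_balance(1.0, 0.035)   # 1 kg of 3.5% enriched uranium
print(f"feed  = {feed:.1f} kg natural uranium")
print(f"tails = {tails:.1f} kg depleted uranium ({tails / feed:.0%} of the feed)")
```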
For use as nuclear fuel, enriched uranium hexafluoride is converted into uranium dioxide (UO2) powder that is then processed into pellet form. The pellets are then fired in a high temperature sintering furnace to create hard, ceramic pellets of enriched uranium. The cylindrical pellets then undergo a grinding process to achieve a uniform pellet size. The pellets are stacked, according to each nuclear reactor core's design specifications, into tubes of corrosion-resistant metal alloy. The tubes are sealed to contain the fuel pellets: these tubes are called fuel rods. The finished fuel rods are grouped in special fuel assemblies that are then used to build up the nuclear fuel core of a power reactor.
The alloy used for the tubes depends on the design of the reactor. Stainless steel was used in the past, but most reactors now use a zirconium alloy. For the most common types of reactors, boiling water reactors (BWR) and pressurized water reactors (PWR), the tubes are assembled into bundles with the tubes spaced precise distances apart. These bundles are then given a unique identification number, which enables them to be tracked from manufacture through use and into disposal.
Transport of radioactive materials
Transport is an integral part of the nuclear fuel cycle. There are nuclear power reactors in operation in several countries but uranium mining is viable in only a few areas. Also, in the course of over forty years of operation by the nuclear industry, a number of specialized facilities have been developed in various locations around the world to provide fuel cycle services and there is a need to transport nuclear materials to and from these facilities. Most transports of nuclear fuel material occur between different stages of the cycle, but occasionally a material may be transported between similar facilities. With some exceptions, nuclear fuel cycle materials are transported in solid form, the exception being uranium hexafluoride (UF6) which is considered a gas. Most of the material used in nuclear fuel is transported several times during the cycle. Transports are frequently international, and are often over large distances. Nuclear materials are generally transported by specialized transport companies.
Since nuclear materials are radioactive, it is important to ensure that radiation exposure of those involved in the transport of such materials and of the general public along transport routes is limited. Packaging for nuclear materials includes, where appropriate, shielding to reduce potential radiation exposures. In the case of some materials, such as fresh uranium fuel assemblies, the radiation levels are negligible and no shielding is required. Other materials, such as spent fuel and high-level waste, are highly radioactive and require special handling. To limit the risk in transporting highly radioactive materials, containers known as spent nuclear fuel shipping casks are used which are designed to maintain integrity under normal transportation conditions and during hypothetical accident conditions.
In-core fuel management
A nuclear reactor core is composed of a few hundred "assemblies", arranged in a regular array of cells, each cell being formed by a fuel or control rod surrounded, in most designs, by a moderator and coolant, which is water in most reactors.
Because of the fission process that consumes the fuels, the old fuel rods must be replaced periodically with fresh ones (this is called a (replacement) cycle). During a given replacement cycle only some of the assemblies (typically one-third) are replaced since fuel depletion occurs at different rates at different places within the reactor core. Furthermore, for efficiency reasons, it is not a good policy to put the new assemblies exactly at the location of the removed ones. Even bundles of the same age will have different burn-up levels due to their previous positions in the core. Thus the available bundles must be arranged in such a way that the yield is maximized, while safety limitations and operational constraints are satisfied. Consequently, reactor operators are faced with the so-called optimal fuel reloading problem, which consists of optimizing the rearrangement of all the assemblies, the old and fresh ones, while still maximizing the reactivity of the reactor core so as to maximise fuel burn-up and minimise fuel-cycle costs.
This is a discrete optimization problem, and computationally infeasible by current combinatorial methods, due to the huge number of permutations and the complexity of each computation. Many numerical methods have been proposed for solving it and many commercial software packages have been written to support fuel management. This is an ongoing issue in reactor operations as no definitive solution to this problem has been found. Operators use a combination of computational and empirical techniques to manage this problem.
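To give a feel for why brute-force search is computationally infeasible here, the following sketch (illustrative numbers only, not from the source) counts the permutations of distinguishable assemblies over core positions for a few core sizes.

```python
from math import factorial

def reload_permutations(n_positions):
    """Number of ways to arrange n distinguishable fuel assemblies over n core positions."""
    return factorial(n_positions)

for n in (20, 100, 200):
    perms = reload_permutations(n)
    # Report only the order of magnitude; the exact integers are astronomically large.
    print(f"{n:3d} assemblies -> ~10^{len(str(perms)) - 1} possible loading patterns")
```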
The study of used fuel
Used nuclear fuel is studied in post-irradiation examination, where used fuel is examined to learn more about the processes that occur in fuel during use, and how these might alter the outcome of an accident. For example, during normal use the fuel expands due to thermal expansion, which can cause cracking. Most nuclear fuel is uranium dioxide, which is a cubic solid with a structure similar to that of calcium fluoride. In used fuel, the solid-state structure of most of the solid remains the same as that of pure cubic uranium dioxide. SIMFUEL is the name given to simulated spent fuel, which is made by mixing finely ground metal oxides, grinding them as a slurry, and spray-drying the slurry before heating it in hydrogen/argon to 1700 °C. In SIMFUEL, 4.1% of the volume of the solid was in the form of metal nanoparticles made of molybdenum, ruthenium, rhodium and palladium. Most of these metal particles are of the ε phase (hexagonal) of the Mo-Ru-Rh-Pd alloy, while smaller amounts of the α (cubic) and σ (tetragonal) phases of these metals were also found in the SIMFUEL. Also present within the SIMFUEL was a cubic perovskite phase, a barium strontium zirconate (BaxSr1−xZrO3).
Uranium dioxide is very insoluble in water, but after oxidation it can be converted to uranium trioxide or another uranium(VI) compound which is much more soluble. Uranium dioxide (UO2) can be oxidised to an oxygen-rich hyperstoichiometric oxide (UO2+x), which can be further oxidised to U4O9, U3O7, U3O8 and UO3·2H2O.
Because used fuel contains alpha emitters (plutonium and the minor actinides), the effect of adding an alpha emitter (238Pu) to uranium dioxide on the leaching rate of the oxide has been investigated. For the crushed oxide, adding 238Pu tended to increase the rate of leaching, but the difference in the leaching rate between 0.1 and 10% 238Pu was very small.
The concentration of carbonate in the water which is in contact with the used fuel has a considerable effect on the rate of corrosion, because uranium(VI) forms soluble anionic carbonate complexes such as [UO2(CO3)2]2− and [UO2(CO3)3]4−. When carbonate ions are absent, and the water is not strongly acidic, the hexavalent uranium compounds which form on oxidation of uranium dioxide often form insoluble hydrated uranium trioxide phases.
Thin films of uranium dioxide can be deposited upon gold surfaces by ‘sputtering’ using uranium metal and an argon/oxygen gas mixture. These gold surfaces modified with uranium dioxide have been used for both cyclic voltammetry and AC impedance experiments, and these offer an insight into the likely leaching behaviour of uranium dioxide.
Fuel cladding interactions
The study of the nuclear fuel cycle includes the study of the behaviour of nuclear materials both under normal conditions and under accident conditions. For example, there has been much work on how uranium dioxide based fuel interacts with the zirconium alloy tubing used to cover it. During use, the fuel swells due to thermal expansion and then starts to react with the surface of the zirconium alloy, forming a new layer which contains both fuel and zirconium (from the cladding). Then, on the fuel side of this mixed layer, there is a layer of fuel which has a higher caesium-to-uranium ratio than most of the fuel. This is because xenon isotopes are formed as fission products that diffuse out of the lattice of the fuel into voids such as the narrow gap between the fuel and the cladding; after diffusing into these voids, the xenon decays to caesium isotopes. Because of the thermal gradient which exists in the fuel during use, the volatile fission products tend to be driven from the centre of the pellet to the rim area. For a 20 mm diameter pellet with a rim temperature of 200 °C, uranium dioxide (because of its poor thermal conductivity) will overheat at the centre of the pellet, while the other, more thermally conductive forms of uranium (metal and nitride) remain below their melting points.
Normal and abnormal conditions
The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas; one area is concerned with operation under the intended conditions while the other area is concerned with maloperation conditions where some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring.
The releases of radioactivity from normal operations are the small planned releases from uranium ore processing, enrichment, power reactors, reprocessing plants and waste stores. These can be in different chemical/physical form from releases which could occur under accident conditions. In addition the isotope signature of a hypothetical accident may be very different from that of a planned normal operational discharge of radioactivity to the environment.
The release of a radioisotope does not necessarily mean that it will enter a human and cause harm. For instance, the migration of radioactivity can be altered by the binding of the radioisotope to the surfaces of soil particles. For example, caesium (Cs) binds tightly to clay minerals such as illite and montmorillonite, hence it remains in the upper layers of soil where it can be accessed by plants with shallow roots (such as grass). Hence grass and mushrooms can carry a considerable amount of 137Cs, which can be transferred to humans through the food chain. But 137Cs is not able to migrate quickly through most soils, and thus is unlikely to contaminate well water. Colloids of soil minerals can migrate through soil, so simple binding of a metal to the surfaces of soil particles does not completely fix the metal.
According to Jiří Hála's textbook, the distribution coefficient Kd is the ratio of the soil's radioactivity (Bq g−1) to that of the soil water (Bq ml−1). If the radioisotope is tightly bound to the minerals in the soil, then less radioactivity can be absorbed by crops and grass growing on the soil.
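A small worked example of that definition follows (my own illustration, with made-up activity values): a large Kd means most of the activity stays bound to the soil rather than in the soil water available to roots.

```python
def distribution_coefficient(soil_activity_bq_per_g, water_activity_bq_per_ml):
    """Kd = (activity per gram of soil) / (activity per millilitre of soil water)."""
    return soil_activity_bq_per_g / water_activity_bq_per_ml

# Hypothetical measurements: caesium bound tightly to clay gives a large Kd,
# meaning little activity remains in the soil water that plants can take up.
print(distribution_coefficient(soil_activity_bq_per_g=1200.0, water_activity_bq_per_ml=1.5))
```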
In dairy farming, one of the best countermeasures against 137Cs is to mix up the soil by ploughing it deeply. This puts the 137Cs out of reach of the shallow roots of the grass, so the level of radioactivity in the grass is lowered. Also, after a nuclear war or serious accident, removing the top few centimetres of soil and burying it in a shallow trench will reduce the long-term gamma dose to humans from 137Cs, as the gamma photons will be attenuated by their passage through the soil.
Even after the radioactive element arrives at the roots of the plant, the metal may be rejected by the biochemistry of the plant. The details of the uptake of 90Sr and 137Cs into sunflowers grown under hydroponic conditions has been reported. The caesium was found in the leaf veins, in the stem and in the apical leaves. It was found that 12% of the caesium entered the plant, and 20% of the strontium. This paper also reports details of the effect of potassium, ammonium and calcium ions on the uptake of the radioisotopes.
In livestock farming, an important countermeasure against 137Cs is to feed animals a small amount of Prussian blue. This iron potassium cyanide compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to eat several grams of Prussian blue per day. The Prussian blue reduces the biological half-life (different from the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years; this is a constant which cannot be changed. The biological half-life, however, is not a constant: it varies with the nature and habits of the organism for which it is expressed. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of Prussian blue is that the caesium stripped from the animal in its droppings is in a form which is not available to plants, so it prevents the caesium from being recycled. The form of Prussian blue required for the treatment of humans or animals is a special grade; attempts to use the pigment grade used in paints have not been successful. A source of data on caesium in Chernobyl fallout is the Ukrainian Research Institute for Agricultural Radiology.
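The interplay between the fixed physical half-life and the variable biological half-life can be summarised by the standard effective-half-life relation 1/T_eff = 1/T_phys + 1/T_bio. The sketch below is illustrative only: the 110-day value roughly matches the "one to four months" range quoted above, and the shorter value stands in for an assumed reduction achieved by a countermeasure such as Prussian blue.

```python
def effective_half_life(physical_days, biological_days):
    """Effective half-life from 1/T_eff = 1/T_phys + 1/T_bio (all in days)."""
    return 1.0 / (1.0 / physical_days + 1.0 / biological_days)

T_PHYS_CS137 = 30.17 * 365.25            # physical half-life of Cs-137 in days (~30 years)

# Untreated adult (~110 d) vs. an assumed shortened biological half-life (30 d):
for t_bio in (110.0, 30.0):
    print(f"T_bio = {t_bio:5.0f} d  ->  T_eff = {effective_half_life(T_PHYS_CS137, t_bio):6.1f} d")
```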
Release of radioactivity from fuel during normal use and accidents
The IAEA assumes that under normal operation the coolant of a water-cooled reactor will contain some radioactivity, but that during a reactor accident the coolant radioactivity level may rise. The IAEA states that under a series of different conditions, different fractions of the core inventory can be released from the fuel. The four conditions the IAEA considers are: normal operation; a spike in coolant activity due to a sudden shutdown or loss of pressure (the core remains covered with water); a cladding failure resulting in the release of the activity in the fuel/cladding gap (which could be due to the fuel being uncovered by the loss of water for 15–30 minutes, with the cladding reaching a temperature of 650–1250 °C); and a melting of the core (the fuel would have to be uncovered for at least 30 minutes, with the cladding reaching a temperature in excess of 1650 °C).
Based upon the assumption that a Pressurized water reactor contains 300 tons of water, and that the activity of the fuel of a 1 GWe reactor is as the IAEA predicts, then the coolant activity after an accident such as the Three Mile Island accident (where a core is uncovered and then recovered with water) can be predicted.
Releases from reprocessing under normal conditions
It is normal to allow used fuel to stand after irradiation so that the short-lived and radiotoxic iodine isotopes can decay away. In one experiment in the USA, fresh fuel which had not been allowed to decay was reprocessed (the "Green Run") to investigate the effects of a large iodine release from the reprocessing of short-cooled fuel. In reprocessing plants it is normal to scrub the off-gases from the dissolver to prevent the emission of iodine. In addition to the emission of iodine, the noble gases and tritium are released from the fuel when it is dissolved. It has been proposed that by voloxidation (heating the fuel in a furnace under oxidizing conditions) the majority of the tritium can be recovered from the fuel.
A paper was written on the radioactivity in oysters found in the Irish Sea. These were found by gamma spectroscopy to contain 141Ce, 144Ce, 103Ru, 106Ru, 137Cs, 95Zr and 95Nb. Additionally, a zinc activation product (65Zn) was found, which is thought to be due to the corrosion of Magnox fuel cladding in spent fuel pools. It is likely that the modern releases of all these isotopes from the Windscale site are smaller.
Some reactor designs, such as RBMKs or CANDU reactors, can be refueled without being shut down. This is achieved through the use of many small pressure tubes to contain the fuel and coolant, as opposed to one large pressure vessel as in pressurized water reactor (PWR) or boiling water reactor (BWR) designs. Each tube can be individually isolated and refueled by an operator-controlled fueling machine, typically at a rate of up to 8 channels per day out of roughly 400 in CANDU reactors. On-load refueling allows for the optimal fuel reloading problem to be dealt with continuously, leading to more efficient use of fuel. This increase in efficiency is partially offset by the added complexity of having hundreds of pressure tubes and the fueling machines to service them.
After its operating cycle, the reactor is shut down for refueling. The fuel discharged at that time (spent fuel) is stored either at the reactor site (commonly in a spent fuel pool) or potentially in a common facility away from reactor sites. If on-site pool storage capacity is exceeded, it may be desirable to store the now cooled aged fuel in modular dry storage facilities known as Independent Spent Fuel Storage Installations (ISFSI) at the reactor site or at a facility away from the site. The spent fuel rods are usually stored in water or boric acid, which provides both cooling (the spent fuel continues to generate decay heat as a result of residual radioactive decay) and shielding to protect the environment from residual ionizing radiation, although after at least a year of cooling they may be moved to dry cask storage.
Spent fuel discharged from reactors contains appreciable quantities of fissile (U-235 and Pu-239), fertile (U-238), and other radioactive materials, including reaction poisons, which is why the fuel had to be removed. These fissile and fertile materials can be chemically separated and recovered from the spent fuel. The recovered uranium and plutonium can, if economic and institutional conditions permit, be recycled for use as nuclear fuel. This is currently not done for civilian spent nuclear fuel in the United States.
Mixed oxide, or MOX fuel, is a blend of reprocessed uranium, plutonium and depleted uranium which behaves similarly, although not identically, to the enriched uranium feed for which most nuclear reactors were designed. MOX fuel is an alternative to the low-enriched uranium (LEU) fuel used in the light water reactors that predominate in nuclear power generation.
Currently, plants in Europe are reprocessing spent fuel from utilities in Europe and Japan. Reprocessing of spent commercial-reactor nuclear fuel is currently not permitted in the United States due to the perceived danger of nuclear proliferation. However, the recently announced Global Nuclear Energy Partnership would see the U.S. form an international partnership to see spent nuclear fuel reprocessed in a way that renders the plutonium in it usable for nuclear fuel but not for nuclear weapons.
Partitioning and transmutation
As an alternative to the disposal of the PUREX raffinate in a glass or Synroc matrix, the most radiotoxic elements could be removed through advanced reprocessing. After separation, the minor actinides and some long-lived fission products could be converted to short-lived or stable isotopes by either neutron or photon irradiation. This is called transmutation. However, strong and long-term international cooperation, many decades of research and large investments remain necessary before partitioning and transmutation (P&T) can reach a mature industrial scale at which its safety and economic feasibility could be demonstrated.
Actinides and fission products by half-life
[Table omitted: actinides grouped by decay chain and fission products of 235U grouped by yield, arranged in bands of increasing half-life, from roughly a thousand years (e.g. 247Bk, 240Pu, 243Am, 245Cm) through tens to hundreds of thousands of years (231Pa, 234U, 242Pu, and the fission products 99Tc, 126Sn, 79Se), millions of years (237Np, 247Cm, and the fission products 135Cs, 107Pd, 129I), up to 0.7–14.1 billion years (235U, 238U, 232Th).]
A current concern in the nuclear power field is the safe disposal and isolation of either spent fuel from reactors or, if the reprocessing option is used, wastes from reprocessing plants. These materials must be isolated from the biosphere until the radioactivity contained in them has diminished to a safe level. In the U.S., under the Nuclear Waste Policy Act of 1982 as amended, the Department of Energy has responsibility for the development of the waste disposal system for spent nuclear fuel and high-level radioactive waste. Current plans call for the ultimate disposal of the wastes in solid form in a licensed deep, stable geologic structure called a deep geological repository. The Department of Energy chose Yucca Mountain as the location for the repository, but its opening has been repeatedly delayed. Since 1999, thousands of nuclear waste shipments have been sent to the Waste Isolation Pilot Plant in New Mexico for disposal.
Fast-neutron reactors can fission all actinides, while the thorium fuel cycle produces low levels of transuranics. Unlike LWRs, in principle these fuel cycles could recycle their plutonium and minor actinides and leave only fission products and activation products as waste. The highly radioactive medium-lived fission products Cs-137 and Sr-90 diminish by a factor of 10 each century; while the long-lived fission products have relatively low radioactivity, often compared favorably to that of the original uranium ore.
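The "factor of 10 each century" statement follows directly from the roughly 30-year half-lives of Cs-137 and Sr-90, as the short check below shows (an illustration, not from the source).

```python
def decay_factor(half_life_years, elapsed_years):
    """Factor by which activity has dropped after elapsed_years of radioactive decay."""
    return 2.0 ** (elapsed_years / half_life_years)

for nuclide, t_half in (("Cs-137", 30.2), ("Sr-90", 28.8)):
    print(f"{nuclide}: activity drops by a factor of {decay_factor(t_half, 100.0):.1f} per century")
```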
Although the most common terminology is fuel cycle, some argue that the term fuel chain is more accurate, because the spent fuel is never fully recycled. Spent fuel includes fission products, which generally must be treated as waste, as well as uranium, plutonium, and other transuranic elements. Where plutonium is recycled, it is normally reused once in light water reactors, although fast reactors could lead to more complete recycling of plutonium.
Once-through nuclear fuel cycle
Not a cycle per se: fuel is used once and then sent to storage without further processing, apart from additional packaging to provide better isolation from the biosphere. This method is favored by six countries: the United States, Canada, Sweden, Finland, Spain and South Africa. Some countries, notably Finland, Sweden and Canada, have designed repositories to permit future recovery of the material should the need arise, while others plan for permanent sequestration in a geological repository like the Yucca Mountain nuclear waste repository in the United States.
Several countries, including Japan, Switzerland, and previously Spain and Germany, are using or have used the reprocessing services offered by BNFL and COGEMA. Here, the fission products, minor actinides, activation products, and reprocessed uranium are separated from the reactor-grade plutonium, which can then be fabricated into MOX fuel. Because the proportion of the non-fissile even-mass isotopes of plutonium rises with each pass through the cycle, there are currently no plans to reuse plutonium from used MOX fuel for a third pass in a thermal reactor. However, if fast reactors become available, they may be able to burn these, or almost any other actinide isotopes.
The use of a medium-scale reprocessing facility on-site, with pyroprocessing rather than the present-day aqueous reprocessing, is claimed to considerably reduce the proliferation potential and the possibility of diversion of fissile material, because the processing facility is integral to the reactor site. In the pyroprocessing cycle, plutonium is never separated on its own; rather, all actinides are "electro-won" or "refined" from the spent fuel together, so the plutonium passes into the new fuel mixed with gamma- and alpha-emitting actinides, species that "self-protect" it in numerous possible theft scenarios.
Minor actinides recycling
It has been proposed that in addition to the use of plutonium, the minor actinides could be used in a critical power reactor. Tests are already being conducted in which americium is being used as a fuel.
A number of reactor designs, like the Integral Fast Reactor, have been designed for this rather different fuel cycle. In principle, it should be possible to derive energy from the fission of any actinide nucleus. With a careful reactor design, all the actinides in the fuel can be consumed, leaving only lighter elements with short half-lives. Whereas this has been done in prototype plants, no such reactor has ever been operated on a large scale.
It so happens that the neutron cross-section of many actinides decreases with increasing neutron energy, but the ratio of fission to simple activation (neutron capture) changes in favour of fission as the neutron energy increases. Thus with a sufficiently high neutron energy, it should be possible to destroy even curium without the generation of the transcurium metals. This could be very desirable as it would make it significantly easier to reprocess and handle the actinide fuel.
One promising alternative from this perspective is an accelerator-driven sub-critical reactor / subcritical reactor. Here a beam of either protons (United States and European designs) or electrons (Japanese design) is directed into a target. In the case of protons, very fast neutrons will spall off the target, while in the case of the electrons, very high energy photons will be generated. These high-energy neutrons and photons will then be able to cause the fission of the heavy actinides.
Such reactors compare very well to other neutron sources in terms of neutron energy:
- Thermal 0 to 100 eV
- Epithermal 100 eV to 100 keV
- Fast (from nuclear fission) 100 keV to 3 MeV
- DD fusion 2.5 MeV
- DT fusion 14 MeV
- Accelerator driven core 200 MeV (lead driven by 1.6 GeV protons)
- Muon-catalyzed fusion 7 GeV.
As an alternative, the curium-244, with a half-life of 18 years, could be left to decay into plutonium-240 before being used in fuel in a fast reactor.
Fuel or targets for this actinide transmutation
To date the nature of the fuel (targets) for actinide transformation has not been chosen.
If actinides are transmuted in a subcritical reactor, it is likely that the fuel will have to tolerate more thermal cycles than conventional fuel. An accelerator-driven subcritical reactor is unlikely to sustain constant operation for periods as long as a critical reactor can, and each time the accelerator stops the fuel will cool down.
On the other hand, if actinides are destroyed using a fast reactor, such as an Integral Fast Reactor, then the fuel will most likely not be exposed to many more thermal cycles than in a normal power station.
Depending on the matrix the process can generate more transuranics from the matrix. This could either be viewed as good (generate more fuel) or can be viewed as bad (generation of more radiotoxic transuranic elements). A series of different matrices exists which can control this production of heavy actinides.
Fissile nuclei like uranium-235, plutonium-239 and uranium-233 respond well to delayed neutrons and are thus important for keeping a critical reactor stable; this limits the amount of minor actinides that can be destroyed in a critical reactor. As a consequence, it is important that the chosen matrix allows the reactor to keep the ratio of fissile to non-fissile nuclei high, as this enables it to destroy the long-lived actinides safely. In contrast, the power output of a subcritical reactor is limited by the intensity of the driving particle accelerator, and thus it need not contain any uranium or plutonium at all. In such a system, it may be preferable to have an inert matrix that does not produce additional long-lived isotopes.
Actinides in an inert matrix
Actinides in a thorium matrix
Thorium, on neutron bombardment, forms uranium-233. U-233 is fissile and, upon absorbing a neutron, is more likely to fission than either U-235 or U-238, and thus it is far less likely to produce higher actinides through neutron capture.
Actinides in a uranium matrix
If the actinides are incorporated into a uranium-metal or uranium-oxide matrix, then the neutron capture of U-238 is likely to generate new plutonium-239. An advantage of mixing the actinides with uranium and plutonium is that the large fission cross sections of U-235 and Pu-239 for the less energetic delayed-neutrons could make the reaction stable enough to be carried out in a critical fast reactor, which is likely to be both cheaper and simpler than an accelerator driven system.
It is also possible to create a matrix made from a mix of the above-mentioned materials. This is most commonly done in fast reactors where one may wish to keep the breeding ratio of new fuel high enough to keep powering the reactor, but still low enough that the generated actinides can be safely destroyed without transporting them to another site. One way to do this is to use fuel where actinides and uranium is mixed with inert zirconium, producing fuel elements with the desired properties.
In the thorium fuel cycle thorium-232 absorbs a neutron in either a fast or thermal reactor. The thorium-233 beta decays to protactinium-233 and then to uranium-233, which in turn is used as fuel. Hence, like uranium-238, thorium-232 is a fertile material.
After starting the reactor with existing U-233 or some other fissile material such as U-235 or Pu-239, a breeding cycle similar to but more efficient than that with U-238 and plutonium can be created. The Th-232 absorbs a neutron to become Th-233 which quickly decays to protactinium-233. Protactinium-233 in turn decays with a half-life of 27 days to U-233. In some molten salt reactor designs, the Pa-233 is extracted and protected from neutrons (which could transform it to Pa-234 and then to U-234), until it has decayed to U-233. This is done in order to improve the breeding ratio which is low compared to fast reactors.
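A minimal sketch of the hold-up arithmetic, using the 27-day half-life quoted above (the holding times are illustrative):

```python
# How much Pa-233 has decayed to U-233 after being held out of the neutron flux.
PA233_HALF_LIFE_DAYS = 27.0

def fraction_remaining(days, half_life=PA233_HALF_LIFE_DAYS):
    """Fraction of Pa-233 not yet decayed to U-233 after `days` out of the flux."""
    return 0.5 ** (days / half_life)

for days in (27, 54, 135, 270):
    decayed = 1.0 - fraction_remaining(days)
    print(f"after {days:3d} days: {decayed:.1%} has decayed to U-233")
```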
The thorium fuel cycle has several advantages:
- Thorium is at least 4-5 times more abundant in nature than all uranium isotopes combined, and it is fairly evenly spread around the Earth, with many countries holding large supplies.
- Preparation of thorium fuel does not require difficult and expensive enrichment processes.
- The cycle creates mainly uranium-233 contaminated with uranium-232, which makes it harder to use in a normal, pre-assembled nuclear weapon that must remain stable over long periods (though this barrier is much weaker for weapons intended for immediate use or assembled just before use).
- Elimination of at least the transuranic portion of the nuclear waste problem is possible in MSR and other breeder reactor designs.
One of the earliest efforts to use a thorium fuel cycle took place at Oak Ridge National Laboratory in the 1960s. An experimental reactor was built based on molten salt reactor technology to study the feasibility of such an approach, using thorium fluoride salt kept hot enough to be liquid, thus eliminating the need for fabricating fuel elements. This effort culminated in the Molten-Salt Reactor Experiment that used 232Th as the fertile material and 233U as the fissile fuel. Due to a lack of funding, the MSR program was discontinued in 1976.
Current industrial activity
Currently the only isotopes used as nuclear fuel are uranium-235 (U-235), uranium-238 (U-238) and plutonium-239, although the proposed thorium fuel cycle has advantages. Some modern reactors, with minor modifications, can use thorium. Thorium is approximately three times more abundant in the Earth's crust than uranium (and 550 times more abundant than uranium-235). However, there has been little exploration for thorium resources, and thus the proved resource is small. Thorium is more plentiful than uranium in some countries, notably India.
Heavy water reactors and graphite-moderated reactors can use natural uranium, but the vast majority of the world's reactors require enriched uranium, in which the ratio of U-235 to U-238 is increased. In civilian reactors the uranium is enriched to 3-5% U-235, with essentially all of the remainder being U-238, while some naval reactors use uranium enriched to as much as 93% U-235.
- "How much depleted uranium hexafluoride is stored in the United States?". Depleted UF6 Management Information Network. Retrieved 2008-01-15.
- "Susquehanna Nuclear Energy Guide" (PDF). PPL Corporation. Archived from the original (PDF) on 2007-11-29. Retrieved 2008-01-15.
- "Nuclear Fuel Cycle | World Nuclear Transport Institute". Wnti.co.uk. Retrieved 2013-04-20.
- A good report on the microstructure of used fuel is Lucuta PG et al. (1991) J Nuclear Materials 178:48-60
- V.V. Rondinella VV et al. (2000) Radiochimica Acta 88:527-531
- For a review of the corrosion of uranium dioxide in a waste store which explains much of the chemistry, see Shoesmith DW (2000) J Nuclear Materials 282:1-31
- Miserque F et al. (2001) J Nuclear Materials 298:280-90
- Further reading on fuel cladding interactions: Tanaka K et al. (2006) J Nuclear Materials 357:58-68
- P. Soudek, Š. Valenová, Z. Vavříková and T. Vaněk, Journal of Environmental Radioactivity, 2006, 88, 236-250
- page 169 Generic Assessment Procedures for Determining Protective Actions During a Reactor Accident, IAEA-TECDOC-955, 1997
- page 173 Generic Assessment Procedures for Determining Protective Actions During a Reactor Accident, IAEA-TECDOC-955, 1997
- page 171 Generic Assessment Procedures for Determining Protective Actions During a Reactor Accident, IAEA-TECDOC-955, 1997
- A. Preston, J.W.R. Dutton and B.R. Harvey, Nature, 1968, 218, 689-690.
- Baetslé, L.H.; De Raedt, Ch. (1997). "Limitations of actinide recycle and fuel cycle consequences: a global analysis Part 1: Global fuel cycle analysis". Nuclear Engineering and Design. 168 (1–3): 191–201. doi:10.1016/S0029-5493(96)01374-X. ISSN 0029-5493.
- Plus radium (element 88). While actually a sub-actinide, it immediately precedes actinium (89) and follows a three-element gap of instability after polonium (84) where no nuclides have half-lives of at least four years (the longest-lived nuclide in the gap is radon-222 with a half life of less than four days). Radium's longest lived isotope, at 1,600 years, thus merits the element's inclusion here.
- Specifically from thermal neutron fission of U-235, e.g. in a typical nuclear reactor.
- Milsted, J.; Friedman, A. M.; Stevens, C. M. (1965). "The alpha half-life of berkelium-247; a new long-lived isomer of berkelium-248". Nuclear Physics. 71 (2): 299. Bibcode:1965NucPh..71..299M. doi:10.1016/0029-5582(65)90719-4.
"The isotopic analyses disclosed a species of mass 248 in constant abundance in three samples analysed over a period of about 10 months. This was ascribed to an isomer of Bk248 with a half-life greater than 9 y. No growth of Cf248 was detected, and a lower limit for the β− half-life can be set at about 104 y. No alpha activity attributable to the new isomer has been detected; the alpha half-life is probably greater than 300 y."
- This is the heaviest nuclide with a half-life of at least four years before the "Sea of Instability".
- Excluding those "classically stable" nuclides with half-lives significantly in excess of 232Th; e.g., while 113mCd has a half-life of only fourteen years, that of 113Cd is nearly eight quadrillion years.
- M. I. Ojovan, W.E. Lee. An Introduction to Nuclear Waste Immobilisation, Elsevier Science Publishers B.V., ISBN 0-08-044462-8, Amsterdam, 315pp. (2005).
- Harvey, L.D.D. (2010). Energy and the New Reality 2: Carbon-Free Energy Supply- section 8.4. Earthscan. ISBN 9781849710732.
- Dyck, Peter; Crijns, Martin J. "Management of Spent Fuel at Nuclear Power Plants". IAEA Bulletin. Archived from the original on 2007-12-10. Retrieved 2008-01-15.
- "Historical video about the Integral Fast Reactor (IFR) concept. Uploaded by - Nuclear Engineering at Argonne".
- Warin D.; Konings R.J.M; Haas D.; Maritin P.; Bonnerot J-M.; Vambenepe G.; Schram R.P.C.; Kuijper J.C.; Bakker K.; Conrad R. (October 2002). "The Preparation of the EFTTRA-T5 Americium Transmutation Experiment" (PDF). Seventh Information Exchange Meeting on Actinide and Fission Product Partitioning and Transmutation. Retrieved 2008-01-15.
- Gudowski, W. (August 2000). "Why Accelerator-Driven Transmutation of Wastes Enables Future Nuclear Power?" (PDF). XX International Linac Conference. Archived from the original (PDF) on 2007-11-29. Retrieved 2008-01-15.
- Heighway, E. A. (1994-08-01). "An overview of accelerator-driven transmutation technology" (PDF). Retrieved 2008-01-15.
- "Accelerator-driven Systems (ADS) and Fast Reactors (FR) in Advanced Nuclear Fuel Cycles" (PDF). Nuclear Energy Agency. Retrieved 2008-01-15.
- Brolly Á.; Vértes P. (March 2005). "Concept of a Small-scale Electron Accelerator Driven System for Nuclear Waste Transmutation Part 2. Investigation of burnup" (PDF). Retrieved 2008-01-15.
- See thorium fuel cycle
- See Thorium occurrence for discussion of abundance.
- Chidambaram R. (1997). "Towards an Energy Independent India". Nu-Power. Nuclear Power Corporation of India Limited. Archived from the original on 2007-12-17. Retrieved 2008-01-15. |
Communications Satellite Industry
The satellite communications era began with the publication of a paper written by Arthur C. Clarke in 1945. The paper described human-tended space stations designed to facilitate communications links for points on Earth. The key to this concept was the placement of space stations in geosynchronous Earth orbit (GEO), a location 35,786 kilometers (22,300 miles) above Earth. Objects in this orbit revolve about Earth along its equatorial plane at the same rate as the planet rotates. Thus, a satellite or space station in GEO will seem fixed in the sky and will be directly above an observer at the equator. A communications satellite in GEO can "see" about one-third of Earth's surface, so to make global communications possible, three satellites need to be placed in this unique orbit.
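For readers who want to check the numbers, the short sketch below derives the GEO altitude from Kepler's third law and estimates how much of Earth a GEO satellite can see. The constants are standard textbook values rather than figures from this article, and the purely geometric visibility figure comes out somewhat higher than the practical "one-third" quoted here, which accounts for usable elevation angles.

```python
# Derive the geostationary altitude and the geometric visibility fraction.
import math

MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0          # equatorial radius, m
SIDEREAL_DAY = 86_164.1        # seconds

# Semi-major axis of a circular orbit whose period equals one sidereal day.
a = (MU_EARTH * (SIDEREAL_DAY / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000
print(f"GEO altitude ~ {altitude_km:,.0f} km")              # ~35,786 km

# Spherical-cap fraction visible from distance a (horizon to horizon).
visible_fraction = 0.5 * (1 - R_EARTH / a)
print(f"geometric visible fraction ~ {visible_fraction:.0%}")  # ~42%; usable coverage is closer to one-third
```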
Clarke envisioned a space station, rather than a satellite, as a communications outpost because he felt that astronauts would be needed to change vacuum tubes for the receivers and transmitters. However, the concept became extraordinarily complex and expensive when life support, food, and living quarters were factored in. For this reason, and because telephone and television services were perceived as adequate, Clarke's idea was not given much attention. In 1948 the transistor emerged as a longer-lived, solid-state alternative to the vacuum tube, marking the dawn of microelectronics. Humans, it seemed, might not be required to tend space-based communications systems after all. Nonetheless, questions remained: Would there be a demand for communications satellites, and, if so, how would they be placed in orbit?
During the mid-twentieth century, people were generally satisfied with telephone and television service, both of which were transmitted by way of cable and radio towers. However, telephone service overseas was exceptionally bad, and live television could not be received or transmitted over great distances. Properly positioned satellites could provide unobstructed communications for nearly all points on Earth as long as there was a method to put them in orbit.
Shortly after World War II, the United States acquired the expertise of German rocket engineers through a secret mission called Operation Paperclip. The German rocket program, which produced the world's first true ballistic missile, the V-2,* was highly valuable to the United States. These engineers were sent to New Mexico to work for the army using hundreds of acquired V-2 missiles. Within a decade, the German engineers produced powerful missiles called Jupiter, Juno, and Redstone. At the same time, the U.S. Air Force was interested in fielding intercontinental ballistic missiles (ICBMs) and was separately developing the Atlas, Thor, and Titan rockets to meet this mission. The navy also had a rocket program and was working on a medium-range missile called Vanguard.
On October 4, 1957, the Soviet Union launched Sputnik I, a satellite whose purpose was to demonstrate Soviet technology. Americans were alarmed and demanded that the government establish a space program to regain prestige. President Dwight Eisenhower, they felt, had not done enough to keep the United States from lagging behind the Soviets technologically. In truth, Eisenhower had directed the navy to launch a satellite on Vanguard, but the rocket was encountering setbacks. The mission to launch the first American satellite fell to the army, whose Juno rocket was performing remarkably well. The satellite Explorer 1 finally went up on January 31, 1958. Launching satellites was possible, and communications satellite concepts were now being seriously considered.
The First Communications Satellites
On December 18, 1958, the military's Satellite Communications Repeater (SCORE) was launched into low Earth orbit (LEO) by a U.S. Air Force Atlas. SCORE was designed to receive a transmission, record it on tape, and then relay the transmission to another point on Earth within hours. President Eisenhower used the opportunity to demonstrate American technology by transmitting a recorded Christmas greeting to the world, the first time in history a satellite was used for communications.
Recognizing the potential of satellite communications, John Pierce, director of AT&T's Bell Telephone Laboratories, developed projects designed to test various communications satellite concepts. The National Aeronautics and Space Administration (NASA), only two years old, planned to send an inflatable sphere into space for scientific research. Pierce wanted to use the opportunity to reflect signals off the balloon's metallic surface. On August 12, 1960, the sphere, called Echo 1, was successfully launched, and Pierce was encouraged by the reflective signal tests. Because Echo 1 had no electronic hardware, the satellite was described as passive. For communications to be effective, Pierce felt that active satellites were required.
Meanwhile, the military was continuing with the tape-recorded communications concept, developing new satellites called Courier. The first one was destroyed when the rocket exploded. Courier 2 was successfully launched on October 4, 1960, but failed after seventeen days of operation. During this time, significant military resources were being allocated to Atlas, Titan, and intelligence satellites, which took priority.
Two years after the Echo 1 experiments, Bell Laboratories created Telstar, an active communications satellite designed to operate in medium Earth orbit (MEO), about 5,000 kilometers (3,107 miles) above Earth's surface. During this time, NASA selected a satellite design from RCA called Relay to test MEO communications but agreed to launch Telstar as soon as it was ready. Telstar 1 was launched on July 10, 1962, and Relay 1 was sent up on December 13 of the same year. Both were successful, and despite Relay 1's greater sophistication, people remembered Telstar's live television broadcasts from the United States to locations in Europe.
Advantages and Disadvantages.
Soon the advantages and disadvantages regarding LEO and MEO communications satellites were being studied. One problem with communications satellites in orbits lower than geosynchronous is the number of satellites required to sustain uninterrupted transmissions. Whereas a single GEO satellite can cover 34 percent of Earth's surface, individual LEO and MEO satellites cover only between 2 and 20 percent. This means that a fleet of satellites, called a "constellation," is required for a communications network.
The major advantage in using LEO and MEO communications satellites is a minimization of latency, or the time delay between a transmitted signal and a response, often called the "echo effect." Even though transmissions travel at the speed of light, a time delay of 0.24 seconds for a round-trip signal through a GEO satellite can make phone calls problematic. Despite this drawback, sending three communications satellites to GEO would save money, and people would not need to wait years for an LEO or MEO constellation to be complete.
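A quick sketch of the arithmetic behind that 0.24-second figure, assuming a straight up-and-down path at the speed of light (real slant paths are somewhat longer):

```python
# Latency through a single GEO relay hop and for a conversational round trip.
GEO_ALTITUDE_KM = 35_786
SPEED_OF_LIGHT_KM_S = 299_792.458

one_hop = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S            # ground -> satellite -> ground
print(f"one relay hop: {one_hop:.2f} s")                        # ~0.24 s, as quoted above
print(f"question-and-answer round trip: {2 * one_hop:.2f} s")   # ~0.48 s heard by a caller
```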
Shortly after the Soviet Union launched the first human into space,* President John Kennedy wanted a national plan for space exploration and settled on a series of programs that included the famous Apollo missions to the Moon. Less familiar but perhaps more significant for the long term, Congress, with the support of President Kennedy, authorized the establishment of an organization designed to integrate the nation's space-based communications network.
Formed in February 1963 by the Communications Satellite Act of 1962, the Communications Satellite Corporation, or Comsat, was given the task of creating a national communications satellite system in the earliest possible time. Half of Comsat would be publicly traded, and the other half would be purchased by satellite manufacturers. Comsat's first major hurdle was deciding what kind of satellite system it would pursue: LEO, MEO, or GEO. Because Telstar and Relay were successful, these MEO systems seemed the default choice. For uninterrupted communications service, however, about twenty satellites such as Telstar or Relay were needed, costing an estimated $200 million. The president of Comsat, Joseph Charyk, a veteran of satellite engineering programs, was not sure that this was the right way to proceed.
Meanwhile, Hughes Aircraft Company was developing the Syncom series of satellites, each designed to test communications technologies in GEO. The first two satellites were not entirely successful, but Syncom 3, launched on August 19, 1964, achieved a stationary GEO. Charyk was aware of the Syncom project early on and followed its progress closely. Comsat was beginning to realize that a GEO communications satellite network was the most practical in terms of cost. Nevertheless, Comsat asked a variety of companies to study the feasibility of LEO communications constellations in the event that a GEO system was unsuccessful. AT&T and RCA researched the merits of a random system, in which satellites drifted freely without any particular relationship to one another. STL and ITT studied the phased approach, where strings of satellites orbiting at LEO were spaced in such a way to allow for continuous, uninterrupted communications. Comsat finally decided on a GEO system, and on April 6, 1965, it launched Early Bird. This satellite also became a test bed for the latency problem, and methods to suppress the echo effect were successfully employed.
During this time, NASA continued to fund research in communications satellite technology, contributing to programs such as Applications Technology Satellites (ATS). Six ATS units were developed and launched, and each was designed to test various technologies related to bandwidth capacity and new components. Of particular importance was bandwidth capacity, the range of frequencies used in a satellite.
Satellite communications providers were particularly interested in boosting the capacity of transponders used for telephone conversations and television broadcasts. A telephone call, for example, uses about 5 kilohertz of bandwidth. A satellite with 50 kilohertz of bandwidth can handle ten calls simultaneously. Early satellites could only handle about thirty calls at one time and were easily overwhelmed. Research continued to improve the capacity problems, and digital technologies have significantly increased the number of simultaneous calls. Satellite engineers also designed antennas that did not interfere with systems orbiting nearby and recommended adequate separation between satellites to prevent signals from interfering.
After the establishment of Comsat, efforts were under way to approach the international community about setting up a global communications satellite network. Comsat dispatched several key people, along with U.S. State Department officials, to a dozen nations interested in the communications satellite market. In 1964, Intelsat was formed, and it started operations using part of the new Early Bird satellite launched in 1965. Originally comprising twelve members, Intelsat is an organization that owns and operates global communications networks providing voice, video, and data services. Intelsat collects investment capital from its members and makes a profit from the sale or lease of satellite services. In 2000, Intelsat had 143 member countries and signatories, with Comsat still representing the United States.
Other international communications satellite organizations have since formed, such as Eutelsat, a cooperative formed in 1977 providing regional communications services for Europe. France, England, and Germany established the European Space Research Organization (ESRO) and the European Launcher Development Organization (ELDO) shortly after the launch of an experimental communications satellite called Symphonie in 1967. ESRO was responsible for research, development, construction, and operation of payloads, and ELDO handled launch activities. Because of management and system integration concerns, ESRO and ELDO merged to form the European Space Agency in 1974. Three years later, the Conference of European Posts and Telecommunications (CEPT) approved the formation of Eutelsat, which by 2000 had nearly fifty members.
Comsat was also asked to assist in the development of a regional communications satellite organization for southwestern Asia, northern Africa, and areas of southern Asia. Comsat agreed and was contracted to develop and build what later became known as Arabsat. Inmarsat, founded in 1982, is another international organization providing global communications services to seagoing vessels and oil platforms.
The Soviet Union, recognizing the benefits of a global communications satellite network, was not interested in a GEO system because of the country's northern location. A GEO system comprised of three satellites would miss parts of the Soviet Union. The Soviets developed an ingenious solution by launching communications satellites into highly elliptical orbits. The orbit consisted of a very close and fast approach over the Southern Hemisphere while tracing a slow and lengthy arc over the Soviet mainland. In 1965 the Soviet Union launched its first communications satellite as part of an ongoing system called Molniya, a name also assigned to the unusual orbit it occupies.
The Soviet Union, despite being approached by representatives of Comsat and the State Department to join Intelsat, declined membership and initiated a regional network in 1971 called Intersputnik. Intersputnik was successful during the following decades with its Gorizont, Express, and Gals satellites but experienced funding difficulties after the collapse of the Soviet Union in 1991. In the 1990s, however, Intersputnik was revitalized with a membership of twenty-three nations and the recent introduction of a new series of satellites, the Express-A.
Back to LEO?
In the early 1990s, LEO communications satellite constellations were revisited. Microelectronics was allowing for smaller satellites with greater capacities, and the launch industry was stronger than it was thirty years earlier. Two companies that pursued this concept were Iridium and Teledesic.
Iridium's plan was to loft about 100 satellites into several LEOs to provide uninterrupted cell phone and pager services anywhere on Earth. Iridium became the first company to provide these services on November 1, 1998. Sixty-six Iridium satellites, all built by Motorola, were launched in the late 1990s. Unfortunately, Iridium filed for bankruptcy in 1999.*
Despite the anticipated effect of Iridium's 1999 bankruptcy on the market, Teledesic, a company planning to provide computer networking, wireless Internet access, interactive media, and voice and video services, will use LEO satellites developed and built by Motorola. Founded by Craig McCaw and Microsoft founder Bill Gates with $9 billion in 1990, Teledesic also experienced financial troubles but by 2000 was prepared to tap into part of the market originally pursued by Iridium. With Lockheed Martin contracted to provide launch services for all 288 satellites plus spares, Teledesic plans to be operational in 2005.
By 1998 satellite communications services included telephone, television, radio, and data processing, and totaled about $65.9 billion in revenues, or almost 7 percent of the total telecommunications industry. During that year, about 215 communications satellites were in GEO and 187 in LEO.
see also Clarke, Arthur C. (volume 1); Communications, Future Needs in (volume 4); Ground Infrastructure (volume 1); Satellite Industry (volume 1).
Alper, Joel, and Joseph N. Pelton, eds. The Intelsat Global Satellite System. New York: American Institute of Aeronautics and Astronautics, 1984.
Brown, Martin P., ed. Compendium of Communication and Broadcast Satellites. New York: Institute of Electrical and Electronics Engineers, 1981.
Caprara, Giovanni. The Complete Encyclopedia of Space Satellites. New York: Portland House, 1986.
Clarke, Arthur C. "Extraterrestrial Relays: Can Rocket Stations Give World-wideRadio Coverage?" Wireless World, October (1945):305-308.
Hickman, William. Talking Moons: The Story of Communications Satellites. New York: World Publishing Company, 1970.
Launius, Roger D. NASA: A History of the U.S. Civil Space Program. Malabar, FL: Krieger Publishing, 1994.
McLucas, John L. Space Commerce. Cambridge, MA: Harvard University Press, 1991.
Sellers, Jerry Jon. Understanding Space: An Introduction to Astronautics. New York: McGraw-Hill, 1994.
Walter, William J. Space Age. New York: Random House, 1992.
*In 1944 and 1945, the Nazis launched V-2 ballistic missiles toward England, but the assault came too late to turn the war in Germany's favor.
*On April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first human in space, making a one-orbit, ninety-minute flight around Earth.
*In 2000 Iridium Satellite LLC purchased the Iridium satellite system and began selling satellite phone service at much lower prices than its predecessor, Iridium.
"Communications Satellite Industry." Space Sciences. . Encyclopedia.com. (January 17, 2019). https://www.encyclopedia.com/science/news-wires-white-papers-and-books/communications-satellite-industry
"Communications Satellite Industry." Space Sciences. . Retrieved January 17, 2019 from Encyclopedia.com: https://www.encyclopedia.com/science/news-wires-white-papers-and-books/communications-satellite-industry
Modern Language Association
The Chicago Manual of Style
American Psychological Association
COMMUNICATION SATELLITES. Artificial communication satellites can relay television, radio, and telephone communication between any two places on the globe and from space to other objects in space or on earth. The military, commercial companies, and amateurs from over twenty nations have hundreds of communication satellites orbiting the earth. This has been accomplished in a mere forty-five years.
The origins of artificial communications satellites go back over a century, to Guglielmo Marconi's transmission of radio waves in 1896. The possibilities for satellites improved gradually with advances in short-wave communication and radar in the 1930s, and with the possibilities of rocket flight after Robert H. Goddard's rocket demonstrations in the 1920s. In 1945, British scientist and science fiction author Arthur C. Clarke published an article in which he predicted the launching of orbital rockets that would relay radio signals to earth. At last, on 4 October 1957, the Soviet Union launched Sputnik I, the first artificial satellite. Clarke's seemingly far-fetched prediction had come true in about ten years. It took over fifty years from the early possibilities to the first satellite, but the next forty-five years saw tremendous and rapid technical advancement and proliferation of worldwide satellite communication.
Early Communication Satellites
The United States entered the Space Age when it launched the Explorer 1 satellite in January 1958. At the end of 1958, an Atlas B rocket launched a SCORE communications satellite, which contained two radio receivers, two transmitters, and two tape recorders. It broadcast a taped Christmas greeting from President Dwight D. Eisenhower. Then, in August 1960, the National Aeronautics and Space Administration (NASA) launched Echo 1, a giant, ten-story Mylar balloon reflector that relayed voice signals. It was so bright it could be seen by the naked eye. Echo 1 launched the American satellite communication era.
At that time, there were two principal viewpoints toward satellite relay. One side favored the Echo passive satellite system, artificial "moons" that would reflect electromagnetic energy. The other view favored active satellites, which would carry their own equipment for reception and transmission. Courier 1B, launched in October 1960 shortly after Echo 1, was the first active transmitter and used solar cells rather than chemical batteries for power. Telstar 1, the first commercial satellite, was built by AT&T and launched by NASA in 1962. It provided direct television transmission between the United States and Japan and Europe and proved the superiority of active satellite communication, as well as the capability of commercial satellites (COMSATS) to provide multichannel, wideband transmission.
Satellites receive signals from a ground station, amplify them, and then transmit them at a different frequency to another station. Most ground stations have huge antennas to receive transmissions, although smaller antennas than those used in years past have been placed closer to the user, such as on top of a building. Because the signals use frequencies allocated solely to satellites, rather than passing through terrestrial microwave stations, communication is much faster. This allows for teleconferencing and for computer-to-computer communications.
In 1962, President John F. Kennedy signed legislation to create the Communications Satellite Corporation to represent the United States in a worldwide satellite system. In 1964, under United Nations auspices, the International Telecommunications Satellite Consortium (Intelsat) was formed. From then on, communication satellites used synchronous, high-altitude orbits, which improved communications. The Intelsat 1 (Early Bird) was launched in 1965 for transatlantic communication service. It could transmit 240 simultaneous telephone calls or one color television channel between North America and Europe.
By 1970, the Intelsat 4s provided 4,000 voice circuits each; by 1990, each satellite could carry over 24,000 circuits. As of 2002, there were 19 Intelsats in orbit, as well as many other competing satellite communications systems in the United States and Europe. Intelsats can communicate with each other and with other satellite systems as well. For instance, Intelsats and the Russian satellites provide the hotline between Washington, D.C., and Moscow.
Development in communication satellite systems has come from many sources. The first ham, or amateur, radio satellites were launched in 1961. By 1991, thirty-nine amateur communications satellites had been launched, many sent free as ballast on government rockets. As of 2002, there were six countries that owned their own communications satellites for domestic telephone service and some twenty-four countries that leased from the Intelsat systems for domestic service. Commercial satellites have been developed by some twenty countries and provide many communications services. Television programs can be transmitted internationally by beaming off satellites. Satellites also relay programs to cable television systems and to homes equipped with dish antennas, until recently only a possibility for sophisticated military use.
One new technique of the 1990s is called frequency reuse, which expands the capabilities of satellites in several ways. It allows satellites to communicate with a number of ground stations using the same frequency. The beam widths can be adjusted to cover different-sized areas—from as large as the United States to as small as a single small state. Additionally, two stations far enough apart can receive different messages transmitted on the same frequency. Also, satellite antennas have been designed to transmit several beams of different sizes in different directions.
The satellite communications systems of NASA, called Tracking and Data Relay Satellites (TDRS), which began in 1983, provide links between space shuttles and ground control. By 1990 one TDRS satellite could relay all the data in a twenty-four-volume encyclopedia in five seconds. The new TDRS converts solar energy to electricity and uses antennas to transmit up to 300 million bits of information per second per radio channel. The latest versions allow communication between spacecraft, between a shuttle and a space station, or with the Hubble Space Telescope.
There is also now a mobile telecommunications network that provides digital data links and telephone and fax communication with ships and with airplanes on international flights. Ships can also use two satellites at two different locations for navigation purposes. Laser beams, operating in the blue-green wavelength that penetrates water, have been used for communication between satellites and submarines.
In the early 2000s, new developments used networks of small satellites in low earth orbit (1,200 miles or less above the earth) to provide global telephone communications. The special telephones used allow access to regular telephone networks from anywhere on the globe, creating a true "global village."
"Communication Satellites." Dictionary of American History. . Encyclopedia.com. (January 17, 2019). https://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/communication-satellites
"Communication Satellites." Dictionary of American History. . Retrieved January 17, 2019 from Encyclopedia.com: https://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/communication-satellites
Modern Language Association
The Chicago Manual of Style
American Psychological Association
communications satellite artificial satellite that functions as part of a global radio-communications network. Echo 1, the first communications satellite, launched in 1960, was an instrumented inflatable sphere that passively reflected radio signals back to earth. Later satellites carried with them electronic devices for receiving, amplifying, and rebroadcasting signals to earth. Relay 1, launched in 1962 by the National Aeronautics and Space Administration (NASA), was the basis for Telstar 1, a commercially sponsored experimental satellite. Geosynchronous orbits (in which the satellite remains over a single spot on the earth's surface) were first used by NASA's Syncom series and Early Bird (later renamed Intelsat 1), the world's first commercial communications satellite.
In 1962, the U.S. Congress passed the Communications Satellite Act, which led to the creation of the Communications Satellite Corporation (Comsat). Agencies from 17 other countries joined Comsat in 1964 in forming the International Telecommunications Satellite Consortium (Intelsat) for the purpose of establishing a global commercial communications network. Renamed the International Telecommunications Satellite Organization (ITSO) in 1974, the organization transferred its satellite network to a private corporation, Intelsat, Ltd., in 2001. Intelsat now operates a network of more than 50 satellites to provide global telecommunications. It has orbited several series of Intelsat satellites. ITSO continues to oversee the public service obligations of Intelsat.
Inmarsat was established in 1979 to serve the maritime industry by developing satellite communications for ship management and distress and safety applications. Inmarsat was originally an intergovernmental organization called the International Maritime Satellite Organization but later changed its name to the International Mobile Satellite Organization to reflect its expansion into land, mobile, and aeronautical communications. In 1999 its telecommunications operations were transferred to a private company, Inmarsat, and the International Mobile Satellite Organization became responsible for overseeing Inmarsat's public service obligations. Inmarsat's users now include thousands of people who live or work in remote areas without reliable terrestrial networks. Inmarsat presently has more than ten satellites in geosynchronous orbits.
In addition to the Intelsat and Inmarsat satellites, many others are in orbit, some managed by private companies and others by government-owned operators. These are used by individual countries, organizations, and commercial ventures for internal communications or for business or military use. A new generation of satellites, called direct-broadcast satellites, transmits directly to small domestic antennas to provide such services as cablelike television programming.
See G. D. Gordon, Principles of Communications Satellites (1993); D. H. Martin, Communications Satellites, 1958–1995 (1996); B. G. Evans, ed., Satellite Communication Systems (3d ed. 1999).
"communications satellite." The Columbia Encyclopedia, 6th ed.. . Encyclopedia.com. (January 17, 2019). https://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/communications-satellite
"communications satellite." The Columbia Encyclopedia, 6th ed.. . Retrieved January 17, 2019 from Encyclopedia.com: https://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/communications-satellite
Modern Language Association
The Chicago Manual of Style
American Psychological Association
"Communications Satellite." World Encyclopedia. . Encyclopedia.com. (January 17, 2019). https://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/communications-satellite
"Communications Satellite." World Encyclopedia. . Retrieved January 17, 2019 from Encyclopedia.com: https://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/communications-satellite |
18 Classes and Methods
1. Object-oriented features
Python is an object-oriented programming language, which means that it provides features that support object-oriented programming. It is not easy to define object-oriented programming, but we have already seen some of its characteristics:
• Programs are made up of object definitions and function definitions, and most of the computation is expressed in terms of operations on objects.
• Each object definition corresponds to some object or concept in the real world, and the functions that operate on that object correspond to the ways real-world objects interact.
Methods are just like functions, with two differences:
• Methods are defined inside a class definition in order to make the relationship between the class and the method explicit.
• The syntax for invoking a method is different from the syntax for calling a function.
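The scripts these notes refer to are not reproduced here; the following is a minimal reconstruction in the spirit of the examples, with the attribute names (hour, minute, second) assumed.

```python
# Minimal reconstruction of the first script: a Time class and a plain function.
class Time():          # the empty parentheses here are redundant in Python 3
    """Represents the time of day."""

# A plain function, defined outside the class, that prints a Time object.
def print_time(t):
    print('%.2d:%.2d:%.2d' % (t.hour, t.minute, t.second))

start = Time()
start.hour = 9
start.minute = 45
start.second = 0

print_time(start)      # prints 09:45:00 -- '%.2d' pads 9 to '09'
```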
(Note that the parentheses after 'class Time' at the top of the script are redundant; they were often used in older versions of Python.)
Please note that '%.2d' makes '09' appear in the results. If you use '%.3d', it will show '009' instead.
You can put the printing function into the class Time as a method.
The two scripts produce the same results. In the first, the class Time and the function print_time work independently. In the second, the function is embedded in the class Time and acts as a method of that class. In that case, the function can be accessed as an attribute of the class and called as Time.print_time() in the script (the dot notation).
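A sketch of that second form, again with the attribute names assumed:

```python
# The same function, now defined inside the class so it becomes a method of Time.
class Time:
    """Represents the time of day."""

    def print_time(t):
        print('%.2d:%.2d:%.2d' % (t.hour, t.minute, t.second))

start = Time()
start.hour = 9
start.minute = 45
start.second = 0

# Calling the method through the class with dot notation, passing the object explicitly.
Time.print_time(start)   # prints 09:45:00
```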
The third way is even more concise.
By convention, the first parameter of a method is called self, so it would be more common to write print_time like this:
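A minimal sketch of that conventional form (attribute names assumed as before):

```python
# The same method written with the conventional parameter name 'self';
# the object before the dot is passed to 'self' automatically.
class Time:
    """Represents the time of day."""

    def print_time(self):
        print('%.2d:%.2d:%.2d' % (self.hour, self.minute, self.second))

start = Time()
start.hour = 9
start.minute = 45
start.second = 0

start.print_time()   # the more concise third way: prints 09:45:00
```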
There is a very intuitive explanation of the differences among these forms. The reason for this convention is an implicit metaphor:
The syntax for a function call, print_time(start), suggests that the function is the active agent. It says something like, "Hey print_time! Here's an object for you to print."
In object-oriented programming, the objects are the active agents. A method invocation like start.print_time() says "Hey start! Please print yourself."
Shifting responsibility from the functions onto the objects makes it possible to write more versatile functions (or methods), and makes it easier to maintain and reuse code.
One more example
Look at the following example and try to understand why we use 'self' there:
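The original example is not reproduced in these notes, so the following stand-in, with an assumed time_to_int method, illustrates the role 'self' plays:

```python
# Stand-in example: the method reads the calling object's own attributes via 'self'.
class Time:
    """Represents the time of day."""

    def time_to_int(self):
        # 'self' is whatever object appears before the dot in the call.
        minutes = self.hour * 60 + self.minute
        return minutes * 60 + self.second

start = Time()
start.hour = 9
start.minute = 45
start.second = 0

print(start.time_to_int())   # 35100 -- computed from start's own attributes
```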
Take the baby steps of this code and try to understand this:
This also works:
If I delete 'start' from the argument list in the last line of the script, it still works. The reason is that 'start' itself already contains the attributes: calling start.print_time() is like telling the object start, "Hey, run the print_time() function on yourself," so all of its attributes are handled by the function print_time().
The first parameter in the function works as a placeholder that represents the object. It is surprising that this works as well.
However, a more common way to do this is to use 'self' to pass the object's attributes to the function.
This just works.
1. Create a class called 'Score' which has the following attributes: Score.chemistry, Score.physics, Score.engineering, and Score.art. This class has a method called 'scoreAverage(course1, course2, course3, course4)', which can calculate the average of all courses and report a GPA according to the following table:
2. Create an instance called 'Tommy_score' from 'Score', provide four scores as the inputs, and print out the GPA in the form: 'Tommy has a GPA of ...'
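One possible solution sketch follows. Because the GPA table is not included in these notes, the thresholds used below are assumed placeholders and should be replaced with the values from the exercise.

```python
# Possible solution sketch; the 90/80/70/60 -> 4.0/3.0/2.0/1.0/0.0 mapping is an
# assumed placeholder for the missing GPA table.
class Score:
    def __init__(self, chemistry, physics, engineering, art):
        self.chemistry = chemistry
        self.physics = physics
        self.engineering = engineering
        self.art = art

    def scoreAverage(self, course1, course2, course3, course4):
        average = (course1 + course2 + course3 + course4) / 4
        if average >= 90:
            return 4.0
        elif average >= 80:
            return 3.0
        elif average >= 70:
            return 2.0
        elif average >= 60:
            return 1.0
        return 0.0

Tommy_score = Score(92, 85, 78, 88)
gpa = Tommy_score.scoreAverage(Tommy_score.chemistry, Tommy_score.physics,
                               Tommy_score.engineering, Tommy_score.art)
print('Tommy has a GPA of', gpa)   # with these inputs: Tommy has a GPA of 3.0
```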